
A machine learning model based on high-frequency ultrasound for differentiating benign and malignant skin tumors.

Qin Y, Zhang Z, Qu X, Liu W, Yan Y, Huang Y

Sep 17 2025
This study aims to explore the potential of machine learning as a non-invasive automated tool for skin tumor differentiation. Data were included from 156 lesions, collected retrospectively from September 2021 to February 2024. Univariate and multivariate analyses of traditional clinical features were performed to establish a logistic regression model. Ultrasound-based radiomics features were extracted from grayscale images after delineating regions of interest (ROIs). Independent samples t-tests, Mann-Whitney U tests, and Least Absolute Shrinkage and Selection Operator (LASSO) regression were employed to select ultrasound-based radiomics features. Subsequently, five machine learning methods were used to construct radiomics models based on the selected features. Model performance was evaluated using receiver operating characteristic (ROC) curves and the DeLong test. Age, poorly defined margins, and irregular shape were identified as independent risk factors for malignant skin tumors. The multilayer perceptron (MLP) model achieved the best performance, with area under the curve (AUC) values of 0.963 and 0.912. The results of DeLong's test revealed a statistically significant difference in performance between the MLP and clinical models (Z=2.611, p=0.009). Machine learning-based skin tumor models may serve as a potential non-invasive method to improve diagnostic efficiency.
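
The selection-plus-classifier pipeline described above can be illustrated with a minimal scikit-learn sketch: LASSO to retain non-zero-coefficient radiomics features, then an MLP evaluated by ROC AUC. File names, feature arrays, and hyperparameters below are assumptions for demonstration, not the study's materials.

```python
# Minimal sketch: LASSO feature selection followed by an MLP classifier with ROC AUC.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.load("radiomics_features.npy")   # hypothetical: one row per lesion
y = np.load("labels.npy")               # hypothetical: 1 = malignant, 0 = benign

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# LASSO keeps features with non-zero coefficients
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_train, y_train)
selected = np.flatnonzero(lasso[-1].coef_)

# MLP trained on the selected features only
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=2000, random_state=42))
mlp.fit(X_train[:, selected], y_train)
auc = roc_auc_score(y_val, mlp.predict_proba(X_val[:, selected])[:, 1])
print(f"Validation AUC: {auc:.3f}")
```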

Non-iterative and uncertainty-aware MRI-based liver fat estimation using an unsupervised deep learning method.

Meneses JP, Tejos C, Makalic E, Uribe S

Sep 17 2025
Liver proton density fat fraction (PDFF), the ratio between fat-only and overall proton densities, is an extensively validated biomarker associated with several diseases. In recent years, numerous deep learning (DL)-based methods for estimating PDFF have been proposed to optimize acquisition and post-processing times without sacrificing accuracy, compared to conventional methods. However, the lack of interpretability and the often poor generalizability of these DL-based models undermine the adoption of such techniques in clinical practice. In this work, we propose an Artificial Intelligence-based Decomposition of water and fat with Echo Asymmetry and Least-squares (AI-DEAL) method, designed to estimate both PDFF and the associated uncertainty maps. Once trained, AI-DEAL performs a one-shot MRI water-fat separation by first calculating the nonlinear confounder variables, R2* and the off-resonance field. It then employs a weighted least squares approach to compute water-only and fat-only signals, along with their corresponding covariance matrix, which are subsequently used to derive the PDFF and its associated uncertainty. We validated our method using in vivo liver CSE-MRI, a fat-water phantom, and a numerical phantom. AI-DEAL demonstrated PDFF biases of 0.25% and -0.12% at two liver ROIs, outperforming state-of-the-art deep learning-based techniques. Although trained using in vivo data, our method exhibited PDFF biases of -3.43% in the fat-water phantom and -0.22% in the numerical phantom with no added noise. The latter bias remained approximately constant when noise was introduced. Furthermore, the estimated uncertainties showed good agreement with the observed errors and the variations within each ROI, highlighting their potential value for assessing the reliability of the resulting PDFF maps.
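
The final step described above, turning water/fat estimates and their covariance into a PDFF value with an uncertainty, can be sketched with standard error propagation (delta method). This is an illustration of the idea only, not the authors' AI-DEAL implementation; the voxel values are made up.

```python
# Given W, F estimates and their 2x2 covariance, compute PDFF = F/(W+F) and its
# standard deviation by propagating the covariance through the ratio (delta method).
import numpy as np

def pdff_with_uncertainty(W: float, F: float, cov: np.ndarray):
    """Return (PDFF, standard deviation of PDFF) for one voxel."""
    total = W + F
    pdff = F / total
    # Gradient of F/(W+F) with respect to (W, F)
    grad = np.array([-F / total**2, W / total**2])
    var = grad @ cov @ grad
    return pdff, np.sqrt(var)

# Example voxel: mostly water, small fat fraction, weakly correlated estimates
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
print(pdff_with_uncertainty(W=9.0, F=1.0, cov=cov))
```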

Habitat-aware radiomics and adaptive 2.5D deep learning predict treatment response and long-term survival in ESCC patients undergoing neoadjuvant chemoimmunotherapy.

Gao X, Yang L, She T, Wang F, Ding H, Lu Y, Xu Y, Wang Y, Li P, Duan X, Leng X

Sep 17 2025
Current radiomic approaches inadequately resolve spatial intratumoral heterogeneity (ITH) in esophageal squamous cell carcinoma (ESCC), limiting neoadjuvant chemoimmunotherapy (NACI) response prediction. We propose an interpretable multimodal framework to: (1) quantitatively map intra-/peritumoral heterogeneity via voxel-wise habitat radiomics; (2) model cross-sectional tumor biology using 2.5D deep learning; and (3) establish mechanism-driven biomarkers via SHAP interpretability to identify resistance-linked subregions. This dual-center retrospective study analyzed 269 treatment-naïve ESCC patients with baseline PET/CT (training: n = 144; validation: n = 62; test: n = 63). Habitat radiomics delineated tumor subregions via K-means clustering (Calinski-Harabasz-optimized) on PET/CT, extracting 1,834 radiomic features per modality. A multi-stage pipeline (univariate filtering, mRMR, LASSO regression) selected 32 discriminative features. The 2.5D model aggregated ±4 peritumoral slices, fusing PET/CT via MixUp channels using a fine-tuned ResNet50 (ImageNet-pretrained), with multi-instance learning (MIL) translating slice-level features to patient-level predictions. Habitat features, MIL signatures, and clinical variables were integrated via a five-classifier ensemble (ExtraTrees/SVM/RandomForest) and a Crossformer architecture (SMOTE-balanced). Validation included AUC, sensitivity, specificity, calibration curves, decision curve analysis (DCA), survival metrics (C-index, Kaplan-Meier), and interpretability (SHAP, Grad-CAM). Habitat radiomics achieved superior validation AUC (0.865, 95% CI: 0.778-0.953), outperforming conventional radiomics (ΔAUC +3.6%, P < 0.01) and clinical models (ΔAUC +6.4%, P < 0.001). SHAP identified the invasive front (H2) as the dominant predictor (40% of top features), with wavelet_LHH_firstorder_Entropy showing the highest impact (SHAP = +0.42). The 2.5D MIL model demonstrated strong generalizability (validation AUC: 0.861). The combined model achieved state-of-the-art test performance (AUC = 0.824, sensitivity = 0.875) with superior calibration (Hosmer-Lemeshow P > 0.800), effective survival stratification (test C-index: 0.809), and a 23-41% net benefit improvement in DCA. Integrating habitat radiomics and 2.5D deep learning enables interpretable dual diagnostic-prognostic stratification in ESCC, advancing precision oncology by decoding spatial heterogeneity.
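
The habitat-delineation step (voxel-wise K-means with the cluster count chosen by the Calinski-Harabasz index) can be sketched as below. The per-voxel feature channels and synthetic data are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch: K-means over per-voxel PET/CT intensities inside the tumor
# mask, selecting the number of habitats by the Calinski-Harabasz index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def delineate_habitats(voxel_features: np.ndarray, k_range=range(2, 7)):
    """voxel_features: (n_voxels, n_channels) array, e.g. [PET SUV, CT HU] per voxel."""
    best_k, best_score, best_labels = None, -np.inf, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxel_features)
        score = calinski_harabasz_score(voxel_features, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Example with synthetic voxels (two channels standing in for PET and CT)
rng = np.random.default_rng(0)
voxels = np.vstack([rng.normal(loc, 0.3, size=(500, 2))
                    for loc in ([1, 1], [3, 2], [5, 5])])
k, labels = delineate_habitats(voxels)
print("Optimal number of habitats:", k)
```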

A Novel Ultrasound-based Nomogram Using Contrast-enhanced and Conventional Ultrasound Features to Improve Preoperative Diagnosis of Parathyroid Adenomas versus Cervical Lymph Nodes.

Xu Y, Zuo Z, Peng Q, Zhang R, Tang K, Niu C

Sep 17 2025
Precise preoperative localization of parathyroid gland lesions is essential for guiding surgery in primary hyperparathyroidism (PHPT). The aim of our study was to investigate the contrast-enhanced ultrasound (CEUS) characteristics of parathyroid gland adenoma (PGA) and to evaluate whether PGA can be differentiated from central cervical lymph nodes (CCLN). Fifty-four consecutive patients with PHPT were retrospectively enrolled; they underwent preoperative imaging with high-resolution ultrasound (US) and CEUS, followed by parathyroidectomy. One hundred seventy-four lymph nodes from patients with papillary thyroid carcinoma (PTC) who underwent unilateral, subtotal, or total thyroidectomy with central neck dissection were examined by high-resolution US and CEUS and enrolled. By incorporating US and CEUS characteristics, a predictive model presented as a nomogram was developed, and its performance and utility were evaluated by plotting receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Three US characteristics and two CEUS characteristics were independently associated with PGA and its differentiation from CCLN, and were used for machine learning model construction. The area under the ROC curve (AUC) of the US+CEUS model was 0.915, higher than that of the US model (0.874) and the CEUS model (0.791). It is recommended that CEUS techniques be used to enhance the diagnostic utility of US in cases of suspected parathyroid lesions. This is the first study to use a combination of US and CEUS to build a nomogram distinguishing PGA from CCLN, filling a gap in the existing literature.
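
A nomogram of this kind is typically derived from a logistic regression over the selected characteristics, with US-only, CEUS-only, and combined models compared by ROC AUC. The sketch below assumes pre-encoded numeric features with illustrative names; it is not the study's model or variable list.

```python
# Hedged sketch: logistic regression models over US, CEUS, and combined features,
# compared by ROC AUC. Column names and the CSV file are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("lesion_features.csv")          # assumption: features already encoded 0/1 or numeric
us_cols = ["echogenicity", "margin", "vascularity_pattern"]   # stand-ins for the 3 US characteristics
ceus_cols = ["early_enhancement", "washout"]                   # stand-ins for the 2 CEUS characteristics
y = df["is_pga"]                                  # 1 = parathyroid adenoma, 0 = CCLN

def fit_auc(cols):
    model = LogisticRegression(max_iter=1000).fit(df[cols], y)
    return roc_auc_score(y, model.predict_proba(df[cols])[:, 1])

print("US model AUC:      ", fit_auc(us_cols))
print("CEUS model AUC:    ", fit_auc(ceus_cols))
print("US+CEUS model AUC: ", fit_auc(us_cols + ceus_cols))
```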

Diagnostic Performance of Large Language Models in Multimodal Analysis of Radiolucent Jaw Lesions.

Kim K, Kim BC

Sep 16 2025
Large language models (LLMs), such as ChatGPT and Gemini, are increasingly being used in medical domains, including dental diagnostics. Despite advancements in image-based deep learning systems, LLM diagnostic capabilities in oral and maxillofacial surgery (OMFS) for processing multimodal imaging inputs remain underexplored. Radiolucent jaw lesions represent a particularly challenging diagnostic category due to their varied presentations and overlapping radiographic features. This study evaluated the diagnostic performance of ChatGPT 4o and Gemini 2.5 Pro using real-world OMFS radiolucent jaw lesion cases, presented in multiple-choice (MCQ) and short-answer (SAQ) formats across 3 imaging conditions: panoramic radiography only, panoramic + CT, and panoramic + CT + pathology. Data from 100 anonymized patients at Wonkwang University Daejeon Dental Hospital were analyzed, including demographics, panoramic radiographs, CBCT images, histopathology slides, and confirmed diagnoses. Sample size was determined based on institutional case availability and statistical power requirements for comparative analysis. ChatGPT and Gemini diagnosed each case under 6 conditions using 3 imaging modalities (P, P+C, P+C+B) in MCQ and SAQ formats. Model accuracy was scored against expert-confirmed diagnoses by 2 independent evaluators. McNemar's and Cochran's Q tests evaluated statistical differences across models and imaging modalities. For MCQ tasks, ChatGPT achieved 66%, 73%, and 82% accuracies across the P, P+C, and P+C+B conditions, respectively, while Gemini achieved 57%, 62%, and 63%, respectively. In SAQ tasks, ChatGPT achieved 34%, 45%, and 48%; Gemini achieved 15%, 24%, and 28%, respectively. Accuracy improved significantly with additional imaging data for ChatGPT, and ChatGPT consistently outperformed Gemini across all conditions (P < .001 for MCQ; P = .008 to < .001 for SAQ). The MCQ format, which incorporates a human-in-the-loop (HITL) structure, showed higher overall performance than SAQ. ChatGPT demonstrated superior diagnostic performance compared to Gemini in OMFS diagnostic tasks when provided with richer multimodal inputs. Diagnostic accuracy increased with additional imaging data, especially in MCQ formats, suggesting LLMs can effectively synthesize radiographic and pathological data. LLMs have potential as diagnostic support tools for OMFS, especially in settings with limited specialist access. Presenting clinical cases in structured formats using curated imaging data enhances LLM accuracy and underscores the value of HITL integration. Although current LLMs show promising results, further validation using larger datasets and hybrid AI systems is necessary for broader, contextualized clinical adoption.
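
The paired model-versus-model comparison described above rests on McNemar's test over case-level correctness. A minimal sketch follows; the discordant-pair counts are invented for demonstration and are not the study's data.

```python
# McNemar's test on paired correctness of two models over the same cases.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table over the same 100 cases:
# rows = model A correct / incorrect, columns = model B correct / incorrect
table = np.array([[55, 27],   # A correct & B correct, A correct & B incorrect
                  [ 2, 16]])  # A incorrect & B correct, A incorrect & B incorrect
result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"McNemar statistic: {result.statistic}, p-value: {result.pvalue:.4f}")
```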

Role of Artificial Intelligence in Lung Transplantation: Current State, Challenges, and Future Directions.

Duncheskie RP, Omari OA, Anjum F

Sep 16 2025
Lung transplantation remains a critical treatment for end-stage lung diseases, yet it continues to have one of the lowest survival rates among solid organ transplants. Despite its life-saving potential, the field faces several challenges, including organ shortages, suboptimal donor matching, and post-transplant complications. The rapidly advancing field of artificial intelligence (AI) offers significant promise in addressing these challenges. Traditional statistical models, such as linear and logistic regression, have been used to predict post-transplant outcomes but struggle to adapt to new trends and evolving data. In contrast, machine learning algorithms can evolve with new data, offering dynamic and updated predictions. AI holds the potential to enhance lung transplantation at multiple stages. In the pre-transplant phase, AI can optimize waitlist management, refine donor selection, improve donor-recipient matching, and enhance diagnostic imaging by harnessing vast datasets. Post-transplant, AI can help predict allograft rejection, improve immunosuppressive management, and better forecast long-term patient outcomes, including quality of life. However, the integration of AI in lung transplantation also presents challenges, including data privacy concerns, algorithmic bias, and the need for external clinical validation. This review explores the current state of AI in lung transplantation, summarizes key findings from recent studies, and discusses the potential benefits, challenges, and ethical considerations in this rapidly evolving field, highlighting future research directions.

Artificial intelligence aided ultrasound imaging of foetal congenital heart disease: A scoping review.

Norris L, Lockwood P

Sep 16 2025
Congenital heart diseases (CHD) are a significant cause of neonatal mortality and morbidity. Detecting these abnormalities during pregnancy increases survival rates, enhances prognosis, and improves pregnancy management and quality of life for the affected families. Foetal echocardiography can be considered an accurate method for detecting CHDs. However, the detection of CHDs can be limited by factors such as the sonographer's skill and expertise and patient-specific variables. Using artificial intelligence (AI) has the potential to address these challenges, increasing antenatal CHD detection during prenatal care. A scoping review was conducted using Google Scholar, PubMed, and ScienceDirect databases, employing keywords, Boolean operators, and inclusion and exclusion criteria to identify peer-reviewed studies. Thematic mapping and synthesis of the identified literature were conducted to review key concepts, research methods, and findings. A total of n = 233 articles were identified; after applying the exclusion criteria, the focus was narrowed to n = 7 articles that met the inclusion criteria. Themes in the literature identified the potential of AI to assist clinicians and trainees, alongside emerging ethical limitations in ultrasound imaging. AI-based tools in ultrasound imaging offer great potential in assisting sonographers and doctors with decision-making in CHD diagnosis. However, due to the paucity of data and small sample sizes, further research and technological advancements are needed to improve reliability and integrate AI into routine clinical practice. This scoping review identified the reported accuracy and limitations of AI-based tools within foetal cardiac ultrasound imaging. AI has the potential to aid in reducing missed diagnoses, enhance training, and improve pregnancy management. There is a need to understand and address the ethical and legal considerations involved with this new paradigm in imaging.

AI-powered insights in pediatric nephrology: current applications and future opportunities.

Nada A, Ahmed Y, Hu J, Weidemann D, Gorman GH, Lecea EG, Sandokji IA, Cha S, Shin S, Bani-Hani S, Mannemuddhu SS, Ruebner RL, Kakajiwala A, Raina R, George R, Elchaki R, Moritz ML

Sep 16 2025
Artificial intelligence (AI) is rapidly emerging as a transformative force in pediatric nephrology, enabling improvements in diagnostic accuracy, therapeutic precision, and operational workflows. By integrating diverse datasets, including patient histories, genomics, imaging, and longitudinal clinical records, AI-driven tools can detect subtle kidney anomalies, predict acute kidney injury, and forecast disease progression. Deep learning models, for instance, have demonstrated the potential to enhance ultrasound interpretations, refine kidney biopsy assessments, and streamline pathology evaluations. Coupled with robust decision support systems, these innovations also optimize medication dosing and dialysis regimens, ultimately improving patient outcomes. AI-powered chatbots hold promise for improving patient engagement and adherence, while AI-assisted documentation solutions offer relief from administrative burdens, mitigating physician burnout. However, ethical and practical challenges remain. Healthcare professionals must receive adequate training to harness AI's capabilities, ensuring that such technologies bolster rather than erode the vital doctor-patient relationship. Safeguarding data privacy, minimizing algorithmic bias, and establishing standardized regulatory frameworks are critical for safe deployment. Beyond clinical care, AI can accelerate pediatric nephrology research by identifying biomarkers, enabling more precise patient recruitment, and uncovering novel therapeutic targets. As these tools evolve, interdisciplinary collaborations and ongoing oversight will be key to integrating AI responsibly. Harnessing AI's vast potential could revolutionize pediatric nephrology, championing a future of individualized, proactive, and empathetic care for children with kidney diseases. Through strategic collaboration and transparent development, these advanced technologies promise to minimize disparities, foster innovation, and sustain compassionate patient-centered care, shaping a new horizon in pediatric nephrology research and practice.

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

Sep 16 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require surgical postnatal intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict the need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnUNet was trained to segment the fetal ventricles on 20 manually segmented, institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across a range of gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUC to predict VM and need for postnatal intervention were calculated. The nnUNet-predicted segmentations of the fetal ventricles in the reference dataset were high quality and accurate (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using the automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003). The optimal threshold for identifying need for postnatal intervention was 11 standard deviations from normal, with sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and the need for postnatal CSF intervention. Increased ventricular volume is a strong predictor of postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristic, AUC = area under the curve.
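
The normative-modelling step can be sketched as follows: fit the expected ventricular volume as a function of gestational age on the normal cohort, estimate the residual spread, and express new cases as z-scores (2 SD for VM, 11 SD for likely postnatal intervention, per the thresholds reported above). Data loading, column names, the polynomial fit, and the constant-spread assumption are illustrative choices, not the authors' implementation.

```python
# Minimal normative-modelling sketch: age-dependent mean via polynomial fit,
# constant residual SD, z-score thresholds for flagging cases.
import numpy as np
import pandas as pd

norm = pd.read_csv("normative_volumes.csv")      # hypothetical columns: ga_weeks, ventricle_ml
coef_mean = np.polyfit(norm["ga_weeks"], norm["ventricle_ml"], deg=2)
residuals = norm["ventricle_ml"] - np.polyval(coef_mean, norm["ga_weeks"])
sigma = residuals.std()                           # assume roughly constant spread across ages

def z_score(ga_weeks: float, ventricle_ml: float) -> float:
    expected = np.polyval(coef_mean, ga_weeks)
    return (ventricle_ml - expected) / sigma

z = z_score(ga_weeks=24, ventricle_ml=18.0)       # hypothetical case
print(f"z = {z:.1f}",
      "-> ventriculomegaly" if z > 2 else "",
      "-> likely needs postnatal intervention" if z > 11 else "")
```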

Head-to-Head Comparison of Two AI Computer-Aided Triage Solutions for Detecting Intracranial Hemorrhage on Non-Contrast Head CT.

Garcia GM, Young P, Dawood L, Elshikh M

Sep 16 2025
This study aims to provide a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort from multiple healthcare settings within a single academic institution, utilizing standardized scanning protocols. Sensitivity, specificity, false positive, and false negative rates were evaluated for both vendors. Outputs assessed included AI-generated case-level classification. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, which primarily involved subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. Concordantly, 18 cases were false negatives and 11 cases were false positives for both AI solutions. The findings of this study provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion encompasses the implications of the results, the importance of evaluating AI findings in context (especially in the absence of explainability tools), potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
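
The case-level metrics compared above (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix. A short worked sketch follows; the counts are illustrative only and are not the vendor results reported in the abstract.

```python
# Sensitivity, specificity, PPV, and NPV from case-level confusion counts.
def triage_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # share of true ICH cases flagged
        "specificity": tn / (tn + fp),   # share of negative scans left unflagged
        "ppv": tp / (tp + fp),           # probability a flag is a true ICH
        "npv": tn / (tn + fn),           # probability an unflagged scan is truly negative
    }

print(triage_metrics(tp=560, fp=90, fn=35, tn=3300))
```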