
Sex estimation with parameters of the facial canal by computed tomography using machine learning algorithms and artificial neural networks.

Secgin Y, Kaya S, Harmandaoğlu O, Öztürk O, Senol D, Önbaş Ö, Yılmaz N

PubMed | Jul 18 2025
The skull is highly durable and, as one of the most dimorphic bones, plays a significant role in sex determination. The facial canal (FC), a clinically significant canal within the temporal bone, houses the facial nerve (FN). This study aims to estimate sex using morphometric measurements of the FC through machine learning (ML) and artificial neural networks (ANNs). The study utilized computed tomography (CT) images of 200 individuals (100 females, 100 males) aged 19-65 years, retrospectively retrieved from the Picture Archiving and Communication Systems (PACS) at Düzce University Faculty of Medicine, Department of Radiology, covering 2021-2024. Bilateral measurements of nine temporal bone parameters were performed in axial, coronal, and sagittal planes. ML algorithms including Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), Decision Tree (DT), Extra Tree Classifier (ETC), Random Forest (RF), Logistic Regression (LR), Gaussian Naive Bayes (GaussianNB), and k-Nearest Neighbors (k-NN) were used, alongside a multilayer perceptron classifier (MLPC) as the ANN model. All algorithms achieved an accuracy of 0.97, except QDA (0.93). SHapley Additive exPlanations (SHAP) analysis revealed the five most impactful parameters: right SGAs, left SGAs, right TSWs, left TSWs, and the inner mouth width of the left FN. FN-centered morphometric measurements show high accuracy in sex determination and may aid in understanding FN positioning across sexes and populations. These findings may support rapid and reliable sex estimation in forensic investigations, especially in cases with fragmented craniofacial remains, and provide auxiliary diagnostic data for preoperative planning in otologic and skull base surgeries. They are thus relevant for surgeons, anthropologists, and forensic experts.
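The comparison described above can be sketched with scikit-learn. This is illustrative only (synthetic features stand in for the study's morphometric data; hyperparameters are defaults, not the authors'): the nine listed classifiers are fit on a synthetic binary "sex" label and their test accuracies reported.

```python
# Illustrative sketch, NOT the study's data or code: comparing the listed
# classifiers on a synthetic binary label from morphometric-style features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# 200 subjects, 18 features (nine bilateral parameters), as in the study design
X, y = make_classification(n_samples=200, n_features=18, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "QDA": QuadraticDiscriminantAnalysis(),
    "LDA": LinearDiscriminantAnalysis(),
    "DT": DecisionTreeClassifier(random_state=0),
    "ETC": ExtraTreesClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "GaussianNB": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "MLPC": MLPClassifier(max_iter=2000, random_state=0),
}
accuracies = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

SHAP analysis (as used in the study for feature ranking) would then be run on the fitted model of interest; it is omitted here for brevity.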

Deep learning-based ultrasound diagnostic model for follicular thyroid carcinoma.

Wang Y, Lu W, Xu L, Xu H, Kong D

PubMed | Jul 18 2025
It is challenging to preoperatively diagnose follicular thyroid carcinoma (FTC) on ultrasound images. This study aimed to develop an end-to-end deep learning model that classifies thyroid tumors into benign tumors, FTC, and other malignant tumors. This retrospective multi-center study included 10,771 consecutive adult patients who underwent conventional ultrasound and postoperative pathology between January 2018 and September 2021. We proposed a novel data augmentation method and a mixed loss function to address the imbalanced dataset, and applied them to a pre-trained convolutional neural network and transformer model that could effectively extract image features. The proposed model can directly identify FTC among other malignant subtypes and benign tumors on ultrasound images. The testing dataset included 1078 patients (mean age, 47.3 years ± 11.8 (SD); 811 female patients; FTCs, 39 of 1078 (3.6%); other malignancies, 385 of 1078 (35.7%)). The proposed classification model outperformed state-of-the-art models in differentiating FTC from other malignant subtypes and benign tumors, achieving excellent diagnostic performance: balanced accuracy 0.87, AUC 0.96 (95% CI: 0.96, 0.96), mean sensitivity 0.87, and mean specificity 0.92. It also outperformed the radiologists included in this study (balanced accuracy: junior 0.60, p < 0.001; mid-level 0.59, p < 0.001; senior 0.66, p < 0.001). The developed model addressed the class-imbalance problem and achieved higher performance in differentiating FTC from other malignant subtypes and benign tumors compared with existing methods. Question Deep learning has the potential to improve preoperative diagnostic accuracy for follicular thyroid carcinoma (FTC). Findings The proposed model achieved high accuracy, sensitivity, and specificity in diagnosing follicular thyroid carcinoma, outperforming other models.
Clinical relevance The proposed model is a promising computer-aided diagnostic tool for the clinical diagnosis of FTC and could help reduce missed and incorrect diagnoses of FTC.
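The abstract's exact "mixed loss" is not specified, so the sketch below shows one standard remedy for the class imbalance it describes (FTC is only 3.6% of cases): inverse-frequency class weights applied to cross-entropy over the three classes (benign / FTC / other malignant). The helper names and class counts are illustrative.

```python
# Hedged sketch: inverse-frequency class weighting for an imbalanced
# 3-class problem (benign=0, FTC=1, other malignant=2). The paper's actual
# mixed loss function may differ.
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights, normalized so a balanced set gets weight 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood of the true class."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    return float(np.mean(weights[labels] * -np.log(p)))

# Illustrative label distribution mirroring the test-set proportions
labels = np.array([0] * 700 + [1] * 39 + [2] * 339)
w = class_weights(labels, 3)
print(w)  # the rare FTC class receives the largest weight
```

Errors on the rare class are thereby penalized more heavily, which counteracts the model's tendency to ignore it.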

Diagnostic interchangeability of deep-learning based Synth-STIR images generated from T1 and T2 weighted spine images.

Li J, Xu M, Jiang B, Dong Q, Xia Y, Zhou T, Lin X, Ma Y, Jiang S, Zhang Z, Xiang L, Fan L, Liu S

PubMed | Jul 18 2025
To evaluate the image quality and diagnostic interchangeability of synthetic short-tau inversion recovery (Synth-STIR) images generated by deep learning, in comparison with standard STIR. This prospective study recruited participants between July 2023 and August 2023. Participants were scanned with T1WI and T2WI, from which Synth-STIR images were generated. Signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were calculated for quantitative evaluation. Four independent, blinded radiologists performed subjective quality and lesion characteristic assessments. Wilcoxon tests were used to assess differences in SNR, CNR, and subjective image quality. Various diagnostic findings pertinent to the spine were tested for interchangeability using the individual equivalence index (IEI). Inter-reader and intra-reader agreement and concordance were computed, and McNemar tests were performed for comprehensive evaluation. One hundred ninety-nine participants (106 male patients; mean age 46.8 ± 16.9 years) were included. Compared to standard-STIR, Synth-STIR reduces sequence scanning time by approximately 180 s and has significantly higher SNR and CNR (p < 0.001). For artifacts, noise, sharpness, and diagnostic confidence, all readers agreed that Synth-STIR was significantly better than standard-STIR (all p < 0.001). In addition, the IEI was less than 1.61%. Kappa and Kendall coefficients showed moderate to excellent agreement, in the range of 0.52-0.97. There was no significant difference in the frequencies of the major features reported with standard-STIR and Synth-STIR (p = 0.211-1). Synth-STIR shows significantly higher SNR and CNR and is diagnostically interchangeable with standard-STIR, with a substantial overall reduction in imaging time, thereby improving efficiency without sacrificing diagnostic value. Question Can generated STIR improve image quality while reducing spine MRI acquisition time, in order to increase clinical spine MRI throughput?
Findings With reduced acquisition time, Synth-STIR has significantly higher SNR and CNR than standard-STIR and is diagnostically interchangeable with standard-STIR for detecting spinal abnormalities. Clinical relevance Synth-STIR provides the same high-quality images for clinical diagnosis as standard-STIR while reducing scanning time for spine MRI protocols, potentially increasing clinical spine MRI throughput.
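The SNR and CNR comparisons above follow standard ROI-based definitions. A minimal sketch, assuming synthetic ROI pixel values (the study's exact ROI placement protocol is not given here):

```python
# Illustrative only: SNR and CNR as commonly computed from ROI statistics.
import numpy as np

def snr(signal_roi, noise_roi):
    """Mean signal intensity over the standard deviation of a noise ROI."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """Absolute mean-intensity difference of two tissue ROIs over noise SD."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

rng = np.random.default_rng(0)
lesion = rng.normal(200, 5, 500)      # hypothetical lesion ROI
cord = rng.normal(120, 5, 500)        # hypothetical adjacent-tissue ROI
background = rng.normal(0, 4, 500)    # hypothetical background/noise ROI
print(f"SNR={snr(lesion, background):.1f}, "
      f"CNR={cnr(lesion, cord, background):.1f}")
```

A sequence with higher SNR/CNR by these measures renders the same anatomy with less noise and better tissue contrast, which is what the Wilcoxon comparisons in the study quantify.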

Development of a clinical decision support system for breast cancer detection using ensemble deep learning.

Sandhu JK, Sharma C, Kaur A, Pandey SK, Sinha A, Shreyas J

PubMed | Jul 18 2025
Advancements in diagnostic technology are required to improve patient outcomes and facilitate early diagnosis, as breast cancer is a substantial global health concern. This research describes the development of an Ensemble Deep Learning-based Clinical Decision Support System (EDL-CDSS) that enables precise and expeditious diagnosis of breast cancer. The proposed EDL-CDSS combines numerous deep learning (DL) models into an ensemble that amplifies the advantages and mitigates the disadvantages of the individual techniques. Its capacity to extract intricate patterns and features from medical imaging data is improved by incorporating the Kernel Extreme Learning Machine (KELM), Deep Belief Network (DBN), and other DL architectures. Comprehensive testing was conducted across various datasets to assess the efficacy of this system in comparison to individual DL models and traditional diagnostic methods. The evaluation prioritizes precision, sensitivity, specificity, F1-score, and overall accuracy to mitigate false positives and negatives. The experiments demonstrate an accuracy of 96.14%, exceeding prior advanced methodologies.
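The ensemble idea described above can be sketched with scikit-learn's soft-voting ensemble. KELM and DBN are not available in scikit-learn, so stand-in base learners are used here; the point is the combination mechanism (averaging predicted probabilities), not the specific members.

```python
# Hedged sketch: soft-voting ensemble with stand-in base learners
# (the paper's actual members are KELM, DBN, and other DL architectures).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=2000, random_state=1)),  # neural stand-in
        ("rf", RandomForestClassifier(random_state=1)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average the members' predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
acc = accuracy_score(y_te, ensemble.predict(X_te))
print(f"ensemble accuracy: {acc:.2f}")
```

Soft voting tends to outperform any single member when the members' errors are only weakly correlated, which is the rationale the abstract appeals to.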

Commercialization of medical artificial intelligence technologies: challenges and opportunities.

Li B, Powell D, Lee R

PubMed | Jul 18 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis/treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with similar accuracy as ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.

Artificial Intelligence for Tumor [<sup>18</sup>F]FDG PET Imaging: Advancements and Future Trends - Part II.

Safarian A, Mirshahvalad SA, Farbod A, Jung T, Nasrollahi H, Schweighofer-Zwink G, Rendl G, Pirich C, Vali R, Beheshti M

PubMed | Jul 18 2025
The integration of artificial intelligence (AI) into [<sup>18</sup>F]FDG PET/CT imaging continues to expand, offering new opportunities for more precise, consistent, and personalized oncologic evaluations. Building on the foundation established in Part I, this second part explores AI-driven innovations across a broader range of malignancies, including hematological, genitourinary, melanoma, and central nervous system tumors, as well as applications of AI in pediatric oncology. Radiomics and machine learning algorithms are being explored for their ability to enhance diagnostic accuracy, reduce interobserver variability, and inform complex clinical decision-making, such as identifying patients with refractory lymphoma, assessing pseudoprogression in melanoma, or predicting brain metastases in extracranial malignancies. Additionally, AI-assisted lesion segmentation, quantitative feature extraction, and heterogeneity analysis are contributing to improved prediction of treatment response and long-term survival outcomes. Despite encouraging results, variability in imaging protocols, segmentation methods, and validation strategies across studies continues to challenge reproducibility and remains a barrier to clinical translation. This review evaluates recent AI advancements and their current clinical applications, and emphasizes the need for robust standardization and prospective validation to ensure the reproducibility and generalizability of AI tools in PET imaging and clinical practice.

CT derived fractional flow reserve: Part 1 - Comprehensive review of methodologies.

Shaikh K, Lozano PR, Evangelou S, Wu EH, Nurmohamed NS, Madan N, Verghese D, Shekar C, Waheed A, Siddiqui S, Kolossváry M, Almeida S, Coombes T, Suchá D, Trivedi SJ, Ihdayhid AR

PubMed | Jul 18 2025
Advancements in cardiac computed tomography angiography (CCTA) have enabled the extraction of physiological data from an anatomy-based imaging modality. This review outlines the key methodologies for deriving fractional flow reserve (FFR) from CCTA, with a focus on two primary methods: 1) computational fluid dynamics-based FFR (CT-FFR) and 2) plaque-derived ischemia assessment using artificial intelligence and quantitative plaque metrics. These techniques have expanded the role of CCTA beyond anatomical assessment, allowing for concurrent evaluation of coronary physiology without the need for invasive testing. This review provides an overview of the principles, workflows, and limitations of each technique and aims to describe the current state and future directions of non-invasive coronary physiology assessment.

Enhanced Image Quality and Comparable Diagnostic Performance of Prostate Fast Bi-MRI with Deep Learning Reconstruction.

Shen L, Yuan Y, Liu J, Cheng Y, Liao Q, Shi R, Xiong T, Xu H, Wang L, Yang Z

PubMed | Jul 18 2025
To evaluate the image quality and diagnostic performance of prostate biparametric MRI (bi-MRI) with deep learning reconstruction (DLR). This prospective study included 61 adult male urological patients undergoing prostate MRI with standard-of-care (SOC) and fast protocols. Sequences included T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) maps. DLR images were generated from the FAST datasets. Three groups (SOC, FAST, DLR) were compared using: (1) a five-point Likert scale, (2) signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), (3) lesion slope profiles, and (4) dorsal capsule edge rise distance (ERD). PI-RADS scores were assigned to dominant lesions. ADC values were measured in histopathologically confirmed cases. Diagnostic performance was analyzed via receiver operating characteristic (ROC) curves (accuracy/sensitivity/specificity). Statistical tests included the Friedman test, one-way ANOVA with post hoc analyses, and the DeLong test for ROC comparisons (P < 0.05). The FAST protocol reduced acquisition time by nearly half compared to the SOC protocol. Compared to T2WI<sub>FAST</sub>, DLR significantly improved SNR, CNR, slope profile, and ERD (P < 0.05). Similarly, DLR significantly enhanced SNR, CNR, and image sharpness compared to DWI<sub>FAST</sub> (P < 0.05). No significant differences were observed in PI-RADS scores or ADC values between groups (P > 0.05). The areas under the ROC curves, sensitivity, and specificity of ADC values for distinguishing benign from malignant lesions remained consistent (P > 0.05). DLR enhances image quality in fast prostate bi-MRI while preserving PI-RADS classification accuracy and ADC diagnostic performance.
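The ROC analysis of ADC values described above can be sketched as follows. The ADC distributions are synthetic and illustrative (not the study's data); the sketch shows how a scalar ADC measurement is summarized into AUC, sensitivity, and specificity, with the operating point chosen by the Youden index.

```python
# Illustrative sketch with synthetic ADC values (units x10^-3 mm^2/s):
# malignant prostate lesions typically show lower ADC than benign tissue.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
adc_benign = rng.normal(1.6, 0.25, 40)     # hypothetical benign ADC values
adc_malignant = rng.normal(0.9, 0.20, 40)  # hypothetical malignant ADC values
adc = np.concatenate([adc_benign, adc_malignant])
y = np.concatenate([np.zeros(40), np.ones(40)])  # 1 = malignant

auc = roc_auc_score(y, -adc)       # negate: lower ADC -> more suspicious
fpr, tpr, thr = roc_curve(y, -adc)
best = np.argmax(tpr - fpr)        # Youden index: max(sens + spec - 1)
print(f"AUC={auc:.2f}, sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```

The study's DeLong test would then compare the AUCs obtained from the SOC, FAST, and DLR groups for a statistically grounded equivalence claim.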

Lack of Methodological Rigor and Limited Coverage of Generative AI in Existing AI Reporting Guidelines: A Scoping Review.

Luo X, Wang B, Shi Q, Wang Z, Lai H, Liu H, Qin Y, Chen F, Song X, Ge L, Zhang L, Bian Z, Chen Y

PubMed | Jul 18 2025
This study aimed to systematically map the development methods, scope, and limitations of existing artificial intelligence (AI) reporting guidelines in medicine and to explore their applicability to generative AI (GAI) tools such as large language models (LLMs). We conducted a scoping review reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). Five information sources were searched from inception to December 31, 2024: MEDLINE (via PubMed), the EQUATOR Network, CNKI, FAIRsharing, and Google Scholar. Two reviewers independently screened records and extracted data using a predefined Excel template. Data included guideline characteristics (e.g., development methods, target audience, AI domain), adherence to EQUATOR Network recommendations, and consensus methodologies. Discrepancies were resolved by a third reviewer. A total of 68 AI reporting guidelines were included; 48.5% focused on general AI, while only 7.4% addressed GAI/LLMs. Methodological rigor was limited: 39.7% described their development processes, 42.6% involved multidisciplinary experts, and 33.8% followed EQUATOR recommendations. Significant overlap existed, particularly in medical imaging (20.6% of guidelines). GAI-specific guidelines (14.7%) lacked comprehensive coverage and methodological transparency. Existing AI reporting guidelines in medicine exhibit suboptimal methodological rigor, substantial redundancy, and insufficient coverage of GAI applications. Future and updated guidelines should prioritize standardized development processes, multidisciplinary collaboration, and an expanded focus on emerging AI technologies such as LLMs.

Imaging biomarkers of ageing: a review of artificial intelligence-based approaches for age estimation.

Haugg F, Lee G, He J, Johnson J, Zapaishchykova A, Bitterman DS, Kann BH, Aerts HJWL, Mak RH

PubMed | Jul 18 2025
Chronological age, although commonly used in clinical practice, fails to capture individual variations in rates of ageing and physiological decline. Recent advances in artificial intelligence (AI) have transformed the estimation of biological age using various imaging techniques. This Review consolidates AI developments in age prediction across brain, chest, abdominal, bone, and facial imaging using diverse methods, including MRI, CT, x-ray, and photographs. The difference between predicted and chronological age, often referred to as age deviation, is a promising biomarker for assessing health status and predicting disease risk. In this Review, we highlight consistent associations between age deviation and various health outcomes, including mortality risk, cognitive decline, and cardiovascular prognosis. We also discuss the technical challenges in developing unbiased models and the ethical considerations for clinical application. This Review highlights the potential of AI-based age estimation in personalised medicine, as it offers a non-invasive, interpretable biomarker that could transform health risk assessment and guide preventive interventions.
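The age-deviation biomarker described above is a simple difference; a minimal sketch, with hypothetical names and model outputs:

```python
# Minimal sketch of the "age deviation" biomarker: predicted biological age
# minus chronological age. Input values are hypothetical, not from the Review.
import numpy as np

def age_deviation(predicted_age, chronological_age):
    """Positive values suggest accelerated ageing; negative, slower ageing."""
    return np.asarray(predicted_age) - np.asarray(chronological_age)

chron = np.array([50.0, 60.0, 70.0])
pred = np.array([55.0, 58.0, 74.0])  # hypothetical AI model outputs
dev = age_deviation(pred, chron)
print(dev)  # [ 5. -2.  4.]
```

In practice, published models also regress out the well-known correlation between age deviation and chronological age before associating it with outcomes, to avoid bias toward the sample mean.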