Page 93 of 3993982 results

AI-Driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes.

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

PubMed · Jul 18, 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls), enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort, underwent cardiac 3D Dixon MRI. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Features with high correlation were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, an HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between T2D and control groups. Morphogeometric feature analysis identified volume and thickness gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation for structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
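The DSC reported above is simple to compute from a pair of binary masks; a minimal numpy sketch of the metric (illustrative only, not the authors' EAT-Seg evaluation code):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2x3 masks: 3 overlapping voxels, 3 vs 4 foreground voxels
pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 1], [1, 1, 0]])
d = dice_coefficient(pred, gt)  # 2*3 / (3+4) = 6/7 ≈ 0.857
```

HD95 and ASSD additionally require surface-distance computations, which are usually delegated to a library rather than written by hand.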

Open-access ultrasonic diaphragm dataset and an automatic diaphragm measurement using deep learning network.

Li Z, Mao L, Jia F, Zhang S, Han C, Fu S, Zheng Y, Chu Y, Chen Z, Wang D, Duan H, Zheng Y

PubMed · Jul 18, 2025
The assessment of diaphragm function is crucial for effective clinical management and the prevention of complications associated with diaphragmatic dysfunction. However, current measurement methodologies rely on manual techniques that are susceptible to human error. This raises the question: how does the performance of an automatic diaphragm measurement system, based on a segmentation neural network and focused on diaphragm thickness and excursion, compare with existing methodologies? The proposed system integrates segmentation and parameter measurement, leveraging a newly established ultrasound diaphragm dataset. This dataset comprises B-mode ultrasound images and videos for diaphragm thickness assessment, as well as M-mode images and videos for movement measurement. We introduce a novel deep learning-based segmentation network, the Multi-ratio Dilated U-Net (MDRU-Net), to enable accurate diaphragm measurements. The system additionally incorporates a comprehensive implementation plan for automated measurement. Automatic measurement results were compared against manual assessments conducted by clinicians, revealing an average error of 8.12% in diaphragm thickening fraction measurements and an average relative error of just 4.3% in diaphragm excursion measurements. The results indicate overall minor discrepancies and enhanced potential for clinical detection of diaphragmatic conditions. Additionally, we designed a user-friendly automatic measurement system for assessing diaphragm parameters and an accompanying method for measuring ultrasound-derived diaphragm parameters. In this paper, we constructed an ultrasound dataset of diaphragm thickness and excursion. Based on the U-Net architecture, we developed an automatic diaphragm segmentation algorithm and designed an automatic parameter measurement scheme. A comparative error analysis was conducted against manual measurements. Overall, the proposed diaphragm ultrasound segmentation algorithm demonstrated high segmentation performance and efficiency.
The automatic measurement scheme based on this algorithm exhibited high accuracy, eliminating subjective influence and enhancing the automation of diaphragm ultrasound parameter assessment, thereby providing new possibilities for diaphragm evaluation.
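The average-relative-error comparison against manual measurements reduces to a one-liner over paired values; a hedged sketch with hypothetical toy excursion values (not the study's data):

```python
def mean_relative_error_pct(auto, manual):
    """Average relative error (%) of automatic vs. manual measurements."""
    return 100.0 * sum(abs(a - m) / abs(m) for a, m in zip(auto, manual)) / len(manual)

# Hypothetical diaphragm-excursion measurements in cm
manual = [1.5, 2.0, 1.8, 2.2]
auto   = [1.56, 1.9, 1.8, 2.2]
err = mean_relative_error_pct(auto, manual)  # (4% + 5% + 0 + 0) / 4 = 2.25
```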

Machine learning and discriminant analysis model for predicting benign and malignant pulmonary nodules.

Li Z, Zhang W, Huang J, Lu L, Xie D, Zhang J, Liang J, Sui Y, Liu L, Zou J, Lin A, Yang L, Qiu F, Hu Z, Wu M, Deng Y, Zhang X, Lu J

PubMed · Jul 18, 2025
Pulmonary nodules (PNs) are often considered an early manifestation of lung cancer. Among them, PNs that remain stable for more than two years or whose pathological results suggest not being lung cancer are considered benign PNs (BPNs), while PNs that conform to the growth pattern of tumors or whose pathological results indicate lung cancer are considered malignant PNs (MPNs). Currently, more than 90% of PNs detected by screening tests are benign, with a false positive rate of up to 96.4%. While a range of predictive models have been developed for the identification of MPNs, challenges remain in distinguishing between BPNs and MPNs. We included a total of 5197 patients for the case-control study according to the preset exclusion criteria and sample size. Among them, 4735 with BPNs and 2509 with MPNs were randomly divided into training, validation, and test sets according to a 7:1.5:1.5 ratio. Three widely applicable machine learning algorithms (Random Forests, Gradient Boosting Machine, and XGBoost) were used to screen the metrics, the corresponding predictive models were then constructed using discriminant analysis, and the best-performing model was selected as the target model. The model was internally validated with 10-fold cross-validation and compared with the PKUPH and Block models. We collated information from chest CT examinations performed from 2018 to 2021 in the physical examination population and found that the detection rate of PNs was 21.57% and showed an overall upward trend. The GMU_D model constructed by discriminant analysis based on machine learning feature screening had excellent discriminative performance (AUC = 0.866, 95% CI: 0.858-0.874), and higher accuracy than the PKUPH model (AUC = 0.559, 95% CI: 0.552-0.567) and the Block model (AUC = 0.823, 95% CI: 0.814-0.833). Moreover, the cross-validation results also exhibited excellent performance (AUC = 0.866, 95% CI: 0.858-0.874).
The detection rate of PNs was 21.57% in the physical examination population undergoing chest CT. Based on this real-world study of PNs, an improved prediction tool was developed and validated that can accurately distinguish between BPNs and MPNs, with excellent predictive performance and discrimination.
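The discriminant-analysis step can be illustrated with the classic two-class Fisher linear discriminant; a minimal numpy sketch on synthetic features (an illustrative stand-in, not the GMU_D model or its data):

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Two-class Fisher linear discriminant: weight vector and midpoint threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, mu1 - mu0)        # direction maximizing class separation
    threshold = w @ (mu0 + mu1) / 2.0         # decision boundary at the midpoint
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)

# Synthetic "benign"-like vs "malignant"-like feature vectors
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(2.0, 1.0, size=(50, 3))
w, t = fisher_lda_fit(X0, X1)
acc = np.r_[fisher_lda_predict(X0, w, t) == 0,
            fisher_lda_predict(X1, w, t) == 1].mean()
```

In practice the screened features would come from the tree-based models first, with the discriminant fit on the retained subset.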

Sex estimation with parameters of the facial canal by computed tomography using machine learning algorithms and artificial neural networks.

Secgin Y, Kaya S, Harmandaoğlu O, Öztürk O, Senol D, Önbaş Ö, Yılmaz N

PubMed · Jul 18, 2025
The skull is highly durable and plays a significant role in sex determination as one of the most dimorphic bones. The facial canal (FC), a clinically significant canal within the temporal bone, houses the facial nerve. This study aims to estimate sex using morphometric measurements from the FC through machine learning (ML) and artificial neural networks (ANNs). The study utilized Computed Tomography (CT) images of 200 individuals (100 females, 100 males) aged 19-65 years. These images were retrospectively retrieved from the Picture Archiving and Communication Systems (PACS) at Düzce University Faculty of Medicine, Department of Radiology, covering 2021-2024. Bilateral measurements of nine temporal bone parameters were performed in axial, coronal, and sagittal planes. ML algorithms including Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), Decision Tree (DT), Extra Tree Classifier (ETC), Random Forest (RF), Logistic Regression (LR), Gaussian Naive Bayes (GaussianNB), and k-Nearest Neighbors (k-NN) were used, alongside a multilayer perceptron classifier (MLPC) from ANN algorithms. Except for QDA (Acc 0.93), all algorithms achieved an accuracy rate of 0.97. SHapley Additive exPlanations (SHAP) analysis revealed the five most impactful parameters, in order: right SGAs, left SGAs, right TSWs, left TSWs, and the inner mouth width of the left FN. FN-centered morphometric measurements show high accuracy in sex determination and may aid in understanding FN positioning across sexes and populations. These findings may support rapid and reliable sex estimation in forensic investigations, especially in cases with fragmented craniofacial remains, and provide auxiliary diagnostic data for preoperative planning in otologic and skull base surgeries. They are thus relevant for surgeons, anthropologists, and forensic experts.
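SHAP itself requires the shap library and a fitted model, but the underlying question, which measurements drive the classifier, can be sketched with permutation importance, a simpler related technique (toy model and data, not the study's):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=20, seed=0):
    """Mean drop in accuracy when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    base = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # break the feature-label link
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

# Toy classifier whose decision depends only on feature 0
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda A: (A[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y)    # feature 0 dominates
```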

Deep learning-based ultrasound diagnostic model for follicular thyroid carcinoma.

Wang Y, Lu W, Xu L, Xu H, Kong D

PubMed · Jul 18, 2025
It is challenging to preoperatively diagnose follicular thyroid carcinoma (FTC) on ultrasound images. This study aimed to develop an end-to-end diagnostic model that can classify thyroid tumors into benign tumors, FTC, and other malignant tumors based on deep learning. This retrospective multi-center study included 10,771 consecutive adult patients who underwent conventional ultrasound and postoperative pathology between January 2018 and September 2021. We proposed a novel data augmentation method and a mixed loss function to address an imbalanced dataset and applied them to a pre-trained convolutional neural network and transformer model that could effectively extract image features. The proposed model can directly identify FTC from other malignant subtypes and benign tumors based on ultrasound images. The testing dataset included 1078 patients (mean age, 47.3 years ± 11.8 (SD); 811 female patients; FTCs, 39 of 1078 (3.6%); other malignancies, 385 of 1078 (35.7%)). The proposed classification model outperformed state-of-the-art models in differentiating FTC from other malignant subtypes and benign tumors, achieving excellent diagnostic performance with a balanced accuracy of 0.87, AUC 0.96 (95% CI: 0.96, 0.96), mean sensitivity 0.87, and mean specificity 0.92. It was also superior to the radiologists included in this study for thyroid tumor diagnosis (balanced accuracy: Junior 0.60, p < 0.001; Mid-level 0.59, p < 0.001; Senior 0.66, p < 0.001). The developed classification model addressed the class-imbalance problem and achieved higher performance in differentiating FTC from other malignant subtypes and benign tumors compared with existing methods. Question Deep learning has the potential to improve preoperative diagnostic accuracy for follicular thyroid carcinoma (FTC). Findings The proposed model achieved high accuracy, sensitivity and specificity in diagnosing follicular thyroid carcinoma, outperforming other models.
Clinical relevance The proposed model is a promising computer-aided diagnostic tool for the clinical diagnosis of FTC, which could potentially help reduce missed diagnoses and misdiagnoses of FTC.
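Balanced accuracy is the natural headline metric here because FTC makes up only 3.6% of the test set: it is the mean of per-class recalls, so a majority-class guesser cannot score well. A pure-Python sketch of the metric:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall; unlike plain accuracy, a majority-class
    guesser scores only 1/n_classes on an imbalanced set."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# A classifier that always predicts "benign" (0) on a 90/10 split:
y_true = [0] * 9 + [1] * 1
y_pred = [0] * 10
# plain accuracy would be 0.9, but balanced accuracy is only 0.5
```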

Diagnostic interchangeability of deep learning-based Synth-STIR images generated from T1- and T2-weighted spine images.

Li J, Xu M, Jiang B, Dong Q, Xia Y, Zhou T, Lin X, Ma Y, Jiang S, Zhang Z, Xiang L, Fan L, Liu S

PubMed · Jul 18, 2025
To evaluate the image quality and diagnostic interchangeability of synthetic short-tau inversion recovery (Synth-STIR) images generated by deep learning, in comparison with standard STIR. This prospective study recruited participants between July 2023 and August 2023. Participants were scanned with T1WI and T2WI, from which Synth-STIR images were generated. Signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were calculated for quantitative evaluation. Four independent, blinded radiologists performed subjective quality and lesion characteristic assessment. Wilcoxon tests were used to assess the differences in SNR, CNR, and subjective image quality. Various diagnostic findings pertinent to the spine were tested for interchangeability using the individual equivalence index (IEI). Inter-reader and intra-reader agreement and concordance were computed, and McNemar tests were performed for comprehensive evaluation. One hundred ninety-nine participants (106 male patients, mean age 46.8 ± 16.9 years) were included. Compared to standard-STIR, Synth-STIR reduces sequence scanning time by approximately 180 s and has significantly higher SNR and CNR (p < 0.001). For artifacts, noise, sharpness, and diagnostic confidence, all readers agreed that Synth-STIR was significantly better than standard-STIR (all p < 0.001). In addition, the IEI was less than 1.61%. Kappa and Kendall statistics showed moderate to excellent agreement in the range of 0.52-0.97. There was no significant difference in the frequencies of the major features as reported with standard-STIR and Synth-STIR (p = 0.211-1). Synth-STIR shows significantly higher SNR and CNR, and is diagnostically interchangeable with standard-STIR with a substantial overall reduction in imaging time, thereby improving efficiency without sacrificing diagnostic value. Question Can generating STIR improve image quality while reducing spine MRI acquisition time in order to increase clinical spine MRI throughput?
Findings With reduced acquisition time, Synth-STIR has significantly higher SNR and CNR than standard-STIR and is diagnostically interchangeable with standard-STIR for detecting spinal abnormalities. Clinical relevance Our Synth-STIR provides the same high-quality images for clinical diagnosis as standard-STIR while reducing scanning time for spine MRI protocols, increasing clinical spine MRI throughput.
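The SNR/CNR comparisons above follow the usual ROI-based definitions; a minimal numpy sketch of one common convention (conventions vary between studies, and the ROI values below are hypothetical):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over background noise std."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

cord  = np.full(16, 100.0)          # hypothetical spinal-cord ROI intensities
csf   = np.full(16, 60.0)           # hypothetical CSF ROI intensities
noise = np.array([2.0, -2.0] * 8)   # background ROI, std = 2

s = snr(cord, noise)                # 100 / 2 = 50
c = cnr(cord, csf, noise)           # |100 - 60| / 2 = 20
```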

Development of a clinical decision support system for breast cancer detection using ensemble deep learning.

Sandhu JK, Sharma C, Kaur A, Pandey SK, Sinha A, Shreyas J

PubMed · Jul 18, 2025
Advancements in diagnostic technology are required to improve patient outcomes and facilitate early diagnosis, as breast cancer is a substantial global health concern. This research presents the development of an Ensemble Deep Learning-based Clinical Decision Support System (EDL-CDSS) that enables the precise and expeditious diagnosis of breast cancer. Numerous deep learning (DL) models are combined in the proposed EDL-CDSS to create an ensemble method that maximizes the advantages and reduces the disadvantages of individual techniques. The ensemble improves its capacity to extract intricate patterns and features from medical imaging data by incorporating the Kernel Extreme Learning Machine (KELM), Deep Belief Network (DBN), and other DL architectures. Comprehensive testing was conducted across various datasets to assess the efficacy of this system in comparison to individual DL models and traditional diagnostic methods. The evaluation prioritizes precision, sensitivity, specificity, F1-score, and overall accuracy to mitigate false positives and negatives. The experiments demonstrate a remarkable accuracy of 96.14%, surpassing prior advanced methodologies.
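The ensembling idea, combining KELM, DBN, and other learners so their errors partially cancel, can be sketched at its simplest as majority voting over per-model label predictions (illustrative only; the paper's EDL-CDSS combines its models with more than a plain vote):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions sample-by-sample by majority vote."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Three hypothetical base models' predictions on four samples (1 = malignant)
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
ensemble = majority_vote([model_a, model_b, model_c])  # → [1, 0, 1, 1]
```

Each base model errs on one sample, yet the vote is correct everywhere the errors do not coincide.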

Commercialization of medical artificial intelligence technologies: challenges and opportunities.

Li B, Powell D, Lee R

PubMed · Jul 18, 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis/treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with similar accuracy as ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.

Artificial Intelligence for Tumor [<sup>18</sup>F]FDG PET Imaging: Advancements and Future Trends - Part II.

Safarian A, Mirshahvalad SA, Farbod A, Jung T, Nasrollahi H, Schweighofer-Zwink G, Rendl G, Pirich C, Vali R, Beheshti M

PubMed · Jul 18, 2025
The integration of artificial intelligence (AI) into [<sup>18</sup>F]FDG PET/CT imaging continues to expand, offering new opportunities for more precise, consistent, and personalized oncologic evaluations. Building on the foundation established in Part I, this second part explores AI-driven innovations across a broader range of malignancies, including hematological, genitourinary, melanoma, and central nervous system tumors, as well as applications of AI in pediatric oncology. Radiomics and machine learning algorithms are being explored for their ability to enhance diagnostic accuracy, reduce interobserver variability, and inform complex clinical decision-making, such as identifying patients with refractory lymphoma, assessing pseudoprogression in melanoma, or predicting brain metastases in extracranial malignancies. Additionally, AI-assisted lesion segmentation, quantitative feature extraction, and heterogeneity analysis are contributing to improved prediction of treatment response and long-term survival outcomes. Despite encouraging results, variability in imaging protocols, segmentation methods, and validation strategies across studies continues to challenge reproducibility and remains a barrier to clinical translation. This review evaluates recent advancements in AI and its current clinical applications, and emphasizes the need for robust standardization and prospective validation to ensure the reproducibility and generalizability of AI tools in PET imaging and clinical practice.

CT derived fractional flow reserve: Part 1 - Comprehensive review of methodologies.

Shaikh K, Lozano PR, Evangelou S, Wu EH, Nurmohamed NS, Madan N, Verghese D, Shekar C, Waheed A, Siddiqui S, Kolossváry M, Almeida S, Coombes T, Suchá D, Trivedi SJ, Ihdayhid AR

PubMed · Jul 18, 2025
Advancements in cardiac computed tomography angiography (CCTA) have enabled the extraction of physiological data from an anatomy-based imaging modality. This review outlines the key methodologies for deriving fractional flow reserve (FFR) from CCTA, with a focus on two primary methods: 1) computational fluid dynamics-based FFR (CT-FFR) and 2) plaque-derived ischemia assessment using artificial intelligence and quantitative plaque metrics. These techniques have expanded the role of CCTA beyond anatomical assessment, allowing for concurrent evaluation of coronary physiology without the need for invasive testing. This review provides an overview of the principles, workflows, and limitations of each technique and aims to inform on the current state and future direction of non-invasive coronary physiology assessment.
