
Prediction of BRAF and TERT status in PTCs by machine learning-based ultrasound radiomics methods: A multicenter study.

Shi H, Ding K, Yang XT, Wu TF, Zheng JY, Wang LF, Zhou BY, Sun LP, Zhang YF, Zhao CK, Xu HX

PubMed · Jun 1 2025
Preoperative identification of genetic mutations is conducive to individualized treatment and management of papillary thyroid carcinoma (PTC) patients. Purpose: To investigate the predictive value of machine learning (ML)-based ultrasound (US) radiomics approaches for BRAF V600E and TERT promoter status (individually and in coexistence) in PTC. This multicenter study retrospectively collected data from 1076 PTC patients who underwent genetic testing for BRAF V600E and TERT promoter mutations between March 2016 and December 2021. Radiomics features were extracted from routine grayscale ultrasound images, and gene-status-related features were selected. These features were then fed into nine different ML models to predict the respective mutations, and the optimal models, combined with statistically significant clinical information, were also constructed. The models underwent training and testing, and comparisons were performed. The Decision Tree-based US radiomics approach had superior prediction performance for the BRAF V600E mutation compared to the other eight ML models, with an area under the curve (AUC) of 0.767 versus 0.547-0.675 (p < 0.05). The US radiomics methodology employing Logistic Regression exhibited the highest accuracy in predicting TERT promoter mutations (AUC, 0.802 vs. 0.525-0.701, p < 0.001) and coexisting BRAF V600E and TERT promoter mutations (0.805 vs. 0.678-0.743, p < 0.001) within the test set. The incorporation of clinical factors enhanced predictive performance to 0.810 for the BRAF V600E mutation, 0.897 for TERT promoter mutations, and 0.900 for dual mutations in PTCs. The machine learning-based US radiomics methods, integrated with clinical characteristics, demonstrated effectiveness in predicting BRAF V600E and TERT promoter mutations in PTCs.
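
As a rough illustration of the model-comparison step this abstract describes, the sketch below trains several scikit-learn classifiers on a feature matrix and ranks them by test-set AUC. The placeholder data, the particular four models, and all names here are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: benchmark candidate classifiers on radiomics-style features
# by test-set AUC. Placeholder data stands in for the extracted US radiomics
# features and gene-status labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1076, n_features=50, random_state=0)  # placeholder
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features, then classify
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```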

A Pilot Study on Deep Learning With Simplified Intravoxel Incoherent Motion Diffusion-Weighted MRI Parameters for Differentiating Hepatocellular Carcinoma From Other Common Liver Masses.

Ratiphunpong P, Inmutto N, Angkurawaranon S, Wantanajittikul K, Suwannasak A, Yarach U

PubMed · Jun 1 2025
To develop and evaluate a deep learning technique for the differentiation of hepatocellular carcinoma (HCC) using "simplified intravoxel incoherent motion (IVIM) parameters" derived from only 3 b-value images. Retrospective magnetic resonance imaging data from 98 patients were collected (68 men, 30 women; mean age 59 ± 14 years), including T2-weighted imaging with fat suppression, in-phase, out-of-phase, and diffusion-weighted imaging (b = 0, 100, 800 s/mm²). Ninety percent of the data were used for stratified 10-fold cross-validation. After data preprocessing, the diffusion-weighted images were used to compute simplified IVIM and apparent diffusion coefficient (ADC) maps. A 17-layer 3D convolutional neural network (3D-CNN) was implemented, and its input channels were modified for different input-image strategies. The 3D-CNN with IVIM maps (ADC, f, and D*) demonstrated superior performance compared with the other strategies, achieving an accuracy of 83.25 ± 6.24% and an area under the receiver-operating characteristic curve of 92.70 ± 8.24%, significantly surpassing the 50% baseline (P < 0.05) and outperforming the other strategies on all evaluation metrics. This success underscores the effectiveness of simplified IVIM parameters in combination with a 3D-CNN architecture for enhancing HCC differentiation accuracy. Simplified IVIM parameters derived from 3 b-values, when integrated with a 3D-CNN architecture, offer a robust framework for HCC differentiation.
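
The abstract computes "simplified IVIM" maps from just the b = 0, 100, and 800 s/mm² images. A common three-point formulation (assumed here; the authors' exact equations are not given in the abstract) estimates the tissue diffusion coefficient D from the two higher b-values and the perfusion fraction f by extrapolating the tissue signal back to b = 0:

```python
# Hedged sketch of three-point "simplified IVIM" maps. This is one common
# formulation, not necessarily the one used in the paper.
import numpy as np

def simplified_ivim(s0, s100, s800, b1=100.0, b2=800.0):
    eps = 1e-6
    s0, s100, s800 = (np.clip(np.asarray(s, float), eps, None) for s in (s0, s100, s800))
    adc = np.log(s0 / s800) / b2                 # mono-exponential ADC over b = 0..800
    d = np.log(s100 / s800) / (b2 - b1)          # tissue diffusion D from the high-b segment
    f = np.clip(1.0 - (s100 * np.exp(b1 * d)) / s0, 0.0, 1.0)  # perfusion fraction
    d_star = np.log(s0 / s100) / b1              # crude low-b decay rate as a D* proxy,
                                                 # NOT a fitted pseudo-diffusion coefficient
    return adc, d, f, d_star

rng = np.random.default_rng(0)                   # synthetic demo "images"
s0 = rng.uniform(500, 1000, (64, 64))
adc, d, f, d_star = simplified_ivim(s0, 0.85 * s0, 0.40 * s0)
```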

Atten-Nonlocal Unet: Attention and Non-local Unet for medical image segmentation.

Jia X, Wang W, Zhang M, Zhao B

PubMed · Jun 1 2025
Convolutional neural network (CNN)-based models have emerged as the predominant approach for medical image segmentation due to their effective inductive bias. However, their limitation lies in the lack of long-range information. In this study, we propose the Atten-Nonlocal Unet model, which integrates a CNN and a transformer to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize the BCSM attention module and the Cross Non-local module to enhance feature representation, thereby improving segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, respectively, with 95% Hausdorff distances (HD95) of 15.17, 1.16, and 4.78, correspondingly. Compared to existing methods for medical image segmentation, the proposed method demonstrates superior segmentation performance, ensuring high accuracy for large organs while improving segmentation of small organs.
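
The Cross Non-local module itself is not specified in the abstract; for orientation, a generic 2D non-local (self-attention) block in the style of non-local neural networks, which captures the same kind of long-range dependency the paper targets, might look like this sketch:

```python
# Generic 2D non-local (self-attention) block in PyTorch: a sketch of global
# context aggregation, not the paper's exact BCSM or Cross Non-local modules.
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, 1)  # query projection
        self.phi = nn.Conv2d(channels, inter, 1)    # key projection
        self.g = nn.Conv2d(channels, inter, 1)      # value projection
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)            # (B, HW, C')
        k = self.phi(x).flatten(2)                               # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)                 # (B, HW, C')
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # all-pairs affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)  # residual: global context added to local features

x = torch.randn(1, 64, 32, 32)
print(NonLocalBlock2D(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```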

Res-Net-Based Modeling and Morphologic Analysis of Deep Medullary Veins Using Multi-Echo GRE at 7 T MRI.

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

PubMed · Jun 1 2025
The pathological changes in deep medullary veins (DMVs) have been reported in various diseases. However, accurate modeling and quantification of DMVs remain challenging. We aim to propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum-path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 condition-matched controls were included in this study. Amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results obtained by our proposed method. Independent-samples t tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between centerlines obtained by human labeling and by our algorithm (p = 0.734). The length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). MMSE scores were positively correlated with the number of DMVs (r = 0.437, p = 0.037) and negatively correlated with curvature (r = -0.426, p = 0.042). In summary, we propose a novel framework for automatically quantifying the morphologic parameters of DMVs. These characteristics of DMVs are expected to aid the research and diagnosis of cerebral small vessel diseases involving DMV lesions.
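
The curvature statistic reported for DMVs can be computed from sampled centerline points with standard discrete differential geometry; the sketch below uses the generic formula κ = |r′ × r″| / |r′|³ and is an assumption, since the abstract does not define the authors' exact curvature measure.

```python
# Sketch: discrete curvature along a vessel centerline given as an (N, 3)
# array of ordered points, using kappa = |r' x r''| / |r'|^3.
import numpy as np

def centerline_curvature(points):
    d1 = np.gradient(points, axis=0)      # first derivative r' (finite differences)
    d2 = np.gradient(d1, axis=0)          # second derivative r''
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3 + 1e-12
    return num / den                      # per-point curvature

t = np.linspace(0, np.pi, 100)            # demo: gently pitched helical centerline
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
print(f"mean curvature: {centerline_curvature(helix).mean():.3f}")
```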

Towards fast and reliable estimations of 3D pressure, velocity and wall shear stress in aortic blood flow: CFD-based machine learning approach.

Lin D, Kenjereš S

PubMed · Jun 1 2025
In this work, we developed deep neural networks for the fast and comprehensive estimation of the most salient features of aortic blood flow. These features include velocity magnitude and direction, 3D pressure, and wall shear stress. Starting from 40 subject-specific aortic geometries obtained from 4D Flow MRI, we applied statistical shape modeling to generate 1,000 synthetic aorta geometries. Complete computational fluid dynamics (CFD) simulations of these geometries were performed to obtain ground-truth values. We then trained deep neural networks for each characteristic flow feature using 900 randomly selected aorta geometries. Testing on the remaining 100 geometries resulted in average errors of 3.11% for velocity and 4.48% for pressure. For wall shear stress predictions, we applied two approaches: (i) deriving it directly from the neural-network-predicted velocity, and (ii) predicting it with a separate neural network. Both approaches yielded similar accuracy, with average errors of 4.8% and 4.7% compared to complete 3D CFD results, respectively. We recommend the second approach for potential clinical use due to its significantly simplified workflow. In conclusion, this proof-of-concept analysis demonstrates the numerical robustness, rapid calculation speed (less than a second), and good accuracy of the CFD-based machine learning approach in predicting velocity, pressure, and wall shear stress distributions in subject-specific aortic flows.
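
For approach (i), wall shear stress follows from the predicted velocity field: for a Newtonian fluid it is the viscous tangential stress at the wall, τ_w = μ ∂u_t/∂n. A finite-difference sketch, assuming wall normals and near-wall velocity samples are supplied by the mesh (the viscosity value and function names are illustrative):

```python
# Sketch: wall shear stress from near-wall velocity for a Newtonian fluid,
# tau_w = mu * du_t/dn, with a first-order finite difference at the wall.
# Wall normals and near-wall sampling are assumed to come from the mesh.
import numpy as np

MU_BLOOD = 3.5e-3  # dynamic viscosity of blood in Pa*s (typical literature value)

def wall_shear_stress(u_near, normal, dn, mu=MU_BLOOD):
    """u_near: (N, 3) velocity at distance dn from the wall (wall velocity is
    zero under no-slip); normal: (N, 3) unit wall normals; dn in metres."""
    du_t = u_near - (u_near * normal).sum(axis=1, keepdims=True) * normal
    return mu * du_t / dn  # (N, 3) WSS vectors in Pa

normals = np.tile([0.0, 0.0, 1.0], (4, 1))
u_near = np.tile([0.1, 0.0, 0.0], (4, 1))        # 0.1 m/s tangential flow
wss = wall_shear_stress(u_near, normals, dn=1e-4)
print(np.linalg.norm(wss, axis=1))               # ~3.5 Pa per point
```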

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis.

Haque R, Khan MA, Rahman H, Khan S, Siddiqui MIH, Limon ZH, Swapno SMMR, Appaji A

PubMed · Jun 1 2025
Early detection of brain tumors in MRI images is vital for improving treatment results. However, deep learning models face challenges like limited dataset diversity, class imbalance, and insufficient interpretability. Most studies rely on small, single-source datasets and do not combine different feature extraction techniques for better classification. To address these challenges, we propose a robust and explainable stacking ensemble model for multiclass brain tumor classification that combines EfficientNetB0, MobileNetV2, GoogleNet, and a Multi-level CapsuleNet, using CatBoost as the meta-learner for improved feature aggregation and classification accuracy. This ensemble approach captures complex tumor characteristics while enhancing robustness and interpretability. We created two large MRI datasets (M1 and M2) by merging data from four sources: BraTS, Msoud, Br35H, and SARTAJ. To tackle class imbalance, we applied Borderline-SMOTE and data augmentation. We also utilized feature extraction methods, along with PCA and Gray Wolf Optimization (GWO). Our model was validated through confidence-interval analysis and statistical tests, demonstrating superior performance. Error analysis revealed misclassification trends, and we assessed computational efficiency in terms of inference speed and resource usage. The proposed ensemble achieved a 97.81% F1 score and 98.75% PR AUC on M1, and a 98.32% F1 score with 99.34% PR AUC on M2. Moreover, the model consistently surpassed state-of-the-art CNNs, Vision Transformers, and other ensemble methods in classifying brain tumors across each of the four individual datasets. Finally, we developed a web-based diagnostic tool that enables clinicians to interact with the proposed model and visualize decision-critical regions in MRI scans using Explainable Artificial Intelligence (XAI). This study connects high-performing AI models with real clinical applications, providing a reliable, scalable, and efficient diagnostic solution for brain tumor classification.
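
The stacking design described here, base learners feeding a CatBoost meta-learner via out-of-fold probabilities, can be expressed compactly in scikit-learn. In the sketch below, small classical models stand in for the CNN/CapsuleNet backbones, so it shows only the ensemble mechanics, not the paper's actual feature extractors (requires the catboost package).

```python
# Sketch of stacking with a CatBoost meta-learner. Small classical models
# stand in for the deep backbones; only the ensemble mechanics mirror the
# described design.
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_classes=4, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[                      # stand-ins for the four deep backbones
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=CatBoostClassifier(iterations=200, verbose=0, random_seed=0),
    stack_method="predict_proba",     # meta-learner sees out-of-fold probabilities
    cv=5,
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```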

AI for fracture diagnosis in clinical practice: Four approaches to systematic AI-implementation and their impact on AI-effectiveness.

Loeffen DV, Zijta FM, Boymans TA, Wildberger JE, Nijssen EC

PubMed · Jun 1 2025
Artificial Intelligence (AI) has been shown to enhance fracture detection accuracy, but the most effective AI implementation in clinical practice is less well understood. In the current study, four approaches to AI implementation are evaluated for their impact on AI effectiveness. This retrospective single-center study was based on all consecutive, around-the-clock radiographic examinations for suspected fractures, and the accompanying clinical-practice radiologist diagnoses, between January and March 2023. These image sets were independently analysed by a dedicated bone-fracture-detection AI. Findings were combined with radiologist clinical-practice diagnoses to simulate the four AI-implementation methods deemed most relevant to clinical workflows: AI standalone (radiologist findings not consulted); AI problem-solving (AI findings consulted when the radiologist is in doubt); AI triage (radiologist findings consulted when the AI is in doubt); and AI safety net (AI findings consulted when the radiologist diagnosis is negative). Reference-standard diagnoses were established by two senior musculoskeletal radiologists (by consensus in cases of disagreement). Radiologist and radiologist + AI diagnoses were compared for false negatives (FN), false positives (FP), and their clinical consequences. Experience-level subgroups (radiologists in training, non-musculoskeletal radiologists, and dedicated musculoskeletal radiologists) were analysed separately. 1508 image sets were included (1227 unique patients; 40 radiologist readers). Radiologist results were: 2.7% FN (40/1508), 28 with clinical consequences; 1.2% FP (18/1508), of which 2 received full fracture treatment (11.1%). All AI-implementation methods changed overall FN and FP with statistical significance (p < 0.001): AI standalone, 1.5% FN (23/1508; 11 consequences), 6.8% FP (103/1508); AI problem-solving, 3.2% FN (48/1508; 31 consequences), 0.6% FP (9/1508); AI triage, 2.1% FN (32/1508; 18 consequences), 1.7% FP (26/1508); AI safety net, 0.07% FN (1/1508; 1 consequence), 7.6% FP (115/1508). Subgroups showed similar trends, except that AI triage increased FN for all subgroups except radiologists in training. Implementation methods have a large impact on AI effectiveness. These results suggest AI should not be considered for problem-solving or triage at this time; AI standalone performs better than either and may be a source of assistance where radiologists are unavailable. The best results were obtained by implementing AI as a safety net, which eliminates missed fractures with serious clinical consequences; even though false positives increase, unnecessary treatments are limited.
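
The four implementation methods are combination rules over the radiologist's and the AI's reads; the sketch below encodes them as simple logic. The binary `doubt` flag is an assumed stand-in, since how uncertainty was operationalized is not detailed in the abstract.

```python
# Sketch: the four AI-implementation methods as combination rules over a
# radiologist read and an AI read. The `doubt` flag is an assumption.
from dataclasses import dataclass

@dataclass
class Read:
    fracture: bool  # positive finding
    doubt: bool     # reader/model uncertain

def final_diagnosis(method: str, rad: Read, ai: Read) -> bool:
    if method == "ai_standalone":       # radiologist findings not consulted
        return ai.fracture
    if method == "ai_problem_solving":  # AI consulted when radiologist in doubt
        return ai.fracture if rad.doubt else rad.fracture
    if method == "ai_triage":           # radiologist consulted when AI in doubt
        return rad.fracture if ai.doubt else ai.fracture
    if method == "ai_safety_net":       # AI consulted when radiologist negative
        return rad.fracture or ai.fracture
    raise ValueError(f"unknown method: {method}")

rad, ai = Read(fracture=False, doubt=False), Read(fracture=True, doubt=False)
for m in ("ai_standalone", "ai_problem_solving", "ai_triage", "ai_safety_net"):
    print(m, "->", final_diagnosis(m, rad, ai))
```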

Keeping AI on Track: Regular monitoring of algorithmic updates in mammography.

Taib AG, James JJ, Partridge GJW, Chen Y

PubMed · Jun 1 2025
To demonstrate a method of benchmarking the performance of two consecutive software releases of the same commercial artificial intelligence (AI) product against trained human readers, using the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance scheme. In this retrospective study, ten PERFORMS test sets, each consisting of 60 challenging cases, were evaluated by human readers between 2012 and 2023, and by Version 1 (V1) and Version 2 (V2) of the same AI model in 2022 and 2023, respectively. Both AI and human readers assessed each breast independently, taking the highest suspicion-of-malignancy score per breast for non-malignant cases and per lesion for breasts with malignancy. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for comparison, with the study powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. The study included 1,254 human readers, with a total of 328 malignant lesions, 823 normal breasts, and 55 benign breasts analysed. No significant difference was found between the AUCs for AI V1 (0.93) and V2 (0.94) (p = 0.13). In terms of sensitivity, no difference was observed between human readers and AI V1 (83.2% vs. 87.5%, respectively, p = 0.12); however, V2 outperformed humans (88.7%, p = 0.04). Specificity was higher for AI V1 (87.4%) and V2 (88.2%) compared to human readers (79.0%, p < 0.01 for both). The upgraded AI model showed no significant difference in diagnostic performance compared to its predecessor when evaluating mammograms from PERFORMS test sets.
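
Comparing two software versions scored on the same case set calls for paired statistics; one way to do this (an assumption — the abstract does not state which test was used) is a paired bootstrap over cases for the AUC difference:

```python
# Sketch: paired bootstrap for the AUC difference between two model versions
# scored on the same cases. Scores and labels below are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                    # placeholder ground truth
s_v1 = 0.6 * y + rng.normal(0, 0.35, 500)      # placeholder V1 scores
s_v2 = 0.7 * y + rng.normal(0, 0.35, 500)      # placeholder V2 scores

deltas = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))      # resample cases, keeping the pairing
    if len(np.unique(y[idx])) < 2:             # need both classes for AUC
        continue
    deltas.append(roc_auc_score(y[idx], s_v2[idx]) - roc_auc_score(y[idx], s_v1[idx]))

lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"AUC difference (V2 - V1), 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```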

Deep learning-driven multi-class classification of brain strokes using computed tomography: A step towards enhanced diagnostic precision.

Kulathilake CD, Udupihille J, Abeysundara SP, Senoo A

PubMed · Jun 1 2025
To develop and validate deep learning models leveraging CT imaging for the prediction and classification of brain stroke conditions, with the potential to enhance accuracy and support clinical decision-making. This retrospective, bi-center study included data from 250 patients, with a dataset of 8186 CT images collected from 2017 to 2022. Two AI models were developed on the Expanded ResNet101 deep learning framework and arranged as a two-step model. Model performance was evaluated using confusion matrices, supplemented by external validation with an independent dataset. External validation was conducted by an expert and two external members. Overall accuracy, confidence intervals, Cohen's kappa values, and McNemar's test P-values were calculated. A total of 8186 CT images were incorporated: 6386 images were used for training and 900 for testing and validation in Model 01, while 1619 CT images were used for training and 600 for testing and validation in Model 02. The average accuracy, precision, and F1 score were assessed for both models: Model 01 achieved 99.6%, 99.4%, and 99.6%, respectively, whereas Model 02 achieved 99.2%, 98.8%, and 99.1%. The external validation accuracies were 78.6% (95% CI: 0.73, 0.83; P < 0.001) and 60.2% (95% CI: 0.48, 0.70; P < 0.001) for Models 01 and 02, respectively, as evaluated by the expert. The deep learning models demonstrated high accuracy, precision, and F1 scores in predicting outcomes for brain stroke patients. With a larger cohort and diverse radiologic mimics, these models could support clinicians in prognosis and decision-making.
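
The abstract describes a "two-step model" without spelling out the split between Model 01 and Model 02; a common cascade, assumed here purely for illustration, is a coarse detection step followed by subtype refinement:

```python
# Sketch: two-step cascade inference with two ResNet101 heads. The class split
# between Model 01 and Model 02 is assumed for illustration only.
import torch
import torchvision.models as tvm

model_01 = tvm.resnet101(num_classes=2)  # assumed step 1: stroke vs. no stroke
model_02 = tvm.resnet101(num_classes=3)  # assumed step 2: stroke subtype
model_01.eval()
model_02.eval()

@torch.no_grad()
def classify(ct_slice: torch.Tensor) -> str:   # ct_slice: (1, 3, 224, 224)
    p_stroke = model_01(ct_slice).softmax(-1)[0, 1]
    if p_stroke < 0.5:                         # step 1 gate
        return "no stroke"
    subtype = model_02(ct_slice).softmax(-1).argmax(-1).item()
    return f"stroke subtype {subtype}"         # step 2 refinement

print(classify(torch.randn(1, 3, 224, 224)))   # random weights: demo output only
```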

Diagnostic Performance of ChatGPT-4o in Detecting Hip Fractures on Pelvic X-rays.

Erdem TE, Kirilmaz A, Kekec AF

PubMed · Jun 1 2025
Hip fractures are a major orthopedic problem, especially in the elderly population. They are usually diagnosed by clinical evaluation and imaging, particularly X-rays. In recent years, new approaches to fracture detection have emerged with the use of artificial intelligence (AI) and deep learning techniques in medical imaging. In this study, we aimed to evaluate the diagnostic performance of ChatGPT-4o, an artificial intelligence model, in diagnosing hip fractures. A total of 200 anteroposterior pelvic X-ray images were retrospectively analyzed. Half of the images belonged to patients with surgically confirmed hip fractures, including both displaced and non-displaced types, while the other half represented patients with soft-tissue trauma and no fractures. Each image was evaluated by ChatGPT-4o through a standardized prompt, and its predictions (fracture vs. no fracture) were compared against the gold-standard diagnoses. Diagnostic performance metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curve, Cohen's kappa, and F1 score were calculated. ChatGPT-4o demonstrated an overall accuracy of 82.5% in detecting hip fractures on pelvic radiographs, with a sensitivity of 78.0% and a specificity of 87.0%. PPV and NPV were 85.7% and 79.8%, respectively. The area under the ROC curve (AUC) was 0.825, indicating good discriminative performance. Among the 22 false-negative cases, 68.2% were non-displaced fractures, suggesting the model had greater difficulty identifying subtle radiographic findings. Cohen's kappa coefficient was 0.65, showing substantial agreement with the actual diagnoses. Chi-square analysis revealed a strong association (χ² = 82.59, P < 0.001), while McNemar's test (P = 0.176) showed no significant asymmetry in the error distribution. ChatGPT-4o shows promising accuracy in identifying hip fractures on pelvic X-rays, especially when fractures are displaced. However, its sensitivity drops significantly for non-displaced fractures, leading to many false negatives. This highlights the need for caution when interpreting negative AI results, particularly when clinical suspicion remains high. While not a replacement for expert assessment, ChatGPT-4o may assist in settings with limited specialist access.
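
All of the reported metrics follow from the 2×2 confusion matrix implied by the abstract (100 fracture and 100 no-fracture images, 22 false negatives, 13 false positives); the sketch below recomputes them and reproduces the stated κ of 0.65:

```python
# Recomputing the reported metrics from the implied 2x2 confusion matrix:
# 100 fracture / 100 no-fracture images, 22 false negatives, 13 false positives.
tp, fn, tn, fp = 78, 22, 87, 13
n = tp + fn + tn + fp

sensitivity = tp / (tp + fn)                   # 0.780
specificity = tn / (tn + fp)                   # 0.870
accuracy = (tp + tn) / n                       # 0.825
ppv = tp / (tp + fp)                           # 0.857
npv = tn / (tn + fn)                           # 0.798
# Cohen's kappa: observed agreement vs. chance agreement.
p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
kappa = (accuracy - p_e) / (1 - p_e)           # 0.65, as reported
print(f"sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f} "
      f"ppv={ppv:.3f} npv={npv:.3f} kappa={kappa:.2f}")
```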