
Investigating brain tumor classification using MRI: a scientometric analysis of selected articles from 2015 to 2024.

Mounika G, Kollem S, Samala S

PubMed · Jul 18, 2025
Magnetic resonance imaging (MRI) is a non-invasive method widely used to evaluate abnormal tissues, especially in the brain. While many studies have examined brain tumor classification using MRI, comprehensive scientometric analyses remain limited. This study aimed to investigate MRI-based brain tumor classification using scientometric approaches, covering the period from 2015 to 2024. A total of 348 peer-reviewed articles were extracted from the Scopus database. Tools such as CiteSpace and VOSviewer were employed to analyze key metrics, including citation frequency, author collaboration, and publication trends. The analysis revealed the top authors, the most-cited journals, and international collaborations. Co-occurrence networks identified the leading research topics, and bibliometric coupling revealed knowledge advancements in the domain. Deep learning methods are increasingly used in brain tumor classification research. This study outlines current trends, uncovers research gaps, and suggests future directions for researchers in the domain of MRI-based brain tumor classification.
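
As a loose illustration of the co-occurrence analysis that tools like VOSviewer perform, the sketch below builds a keyword co-occurrence network from per-article keyword sets; the sample records and the networkx-based approach are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a keyword co-occurrence network, the core structure
# behind tools such as VOSviewer. The sample records are hypothetical.
from itertools import combinations
from collections import Counter

import networkx as nx

records = [  # each entry: author keywords of one (hypothetical) article
    {"brain tumor", "MRI", "deep learning"},
    {"brain tumor", "MRI", "radiomics"},
    {"deep learning", "MRI", "segmentation"},
]

edge_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(keywords), 2):
        edge_counts[(a, b)] += 1  # keywords co-occurring in one article

G = nx.Graph()
for (a, b), w in edge_counts.items():
    G.add_edge(a, b, weight=w)

# Rank keywords by weighted degree, a simple proxy for topic centrality.
centrality = sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])
print(centrality)
```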

Using Convolutional Neural Networks for the Classification of Suboptimal Chest Radiographs.

Liu EH, Carrion D, Badawy MK

PubMed · Jul 18, 2025
Chest X-rays (CXR) rank among the most frequently performed X-ray examinations. They often require repeat imaging due to inadequate quality, leading to increased radiation exposure and delays in patient care and diagnosis. This research assesses the efficacy of the DenseNet121 and YOLOv8 neural networks in detecting suboptimal CXRs, which may minimise delays and enhance patient outcomes. The study included 3587 patients with a median age of 67 years (range 0-102). It utilised an initial dataset comprising 10,000 CXRs randomly divided into a training subset (4000 optimal and 4000 suboptimal) and a validation subset (400 optimal and 400 suboptimal). The test subset (25 optimal and 25 suboptimal) was curated from the remaining images to provide adequate variation. DenseNet121 and YOLOv8 were chosen for their capabilities in image classification: DenseNet121 is a robust, well-tested model in the medical field with high accuracy in object recognition, while YOLOv8 is a cutting-edge commercial model designed for general-purpose use across industries. Their performance was assessed via the area under the receiver operating characteristic curve (AUROC) and compared with radiologist classification using the chi-squared test. DenseNet121 attained an AUROC of 0.97, while YOLOv8 recorded 0.95, indicating a strong capability to differentiate between optimal and suboptimal CXRs. Agreement between radiologists and the models was variable, partly due to the lack of clinical indications, but the difference in performance was not statistically significant. Both AI models effectively classified chest X-ray quality, demonstrating the potential to provide radiographers with feedback to improve image quality. Notably, this was the first study to include both PA and lateral CXRs as well as paediatric cases, and the first to evaluate YOLOv8 for this application.
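
A rough sketch of how a DenseNet121-based quality classifier of this kind can be set up; the study's actual preprocessing, hyperparameters, and training loop are not given in the abstract, so everything below is an illustrative assumption.

```python
# Hedged sketch: fine-tuning DenseNet121 as a binary optimal/suboptimal
# CXR classifier and scoring it with AUROC.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 1)  # one logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, H, W) float tensor; labels: (N,) float in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def auroc(images, labels):
    """AUROC on a held-out batch, as in the study's evaluation."""
    model.eval()
    probs = torch.sigmoid(model(images).squeeze(1))
    return roc_auc_score(labels.numpy(), probs.numpy())
```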

Establishment of an interpretable MRI radiomics-based machine learning model capable of predicting axillary lymph node metastasis in invasive breast cancer.

Zhang D, Shen M, Zhang L, He X, Huang X

PubMed · Jul 18, 2025
This study sought to develop a radiomics model capable of predicting axillary lymph node metastasis (ALNM) in patients with invasive breast cancer (IBC) based on dual-sequence magnetic resonance imaging (MRI) data from diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) sequences. The interpretability of the resultant model was probed with the SHAP (Shapley Additive Explanations) method. Established inclusion/exclusion criteria were used to retrospectively compile MRI and matching clinical data from 183 patients with pathologically confirmed IBC evaluated at our hospital between June 2021 and December 2023. All of these patients had undergone plain and enhanced MRI scans prior to treatment. They were separated according to their pathological biopsy results into those with ALNM (n = 107) and those without ALNM (n = 76), and then randomized into training (n = 128) and testing (n = 55) cohorts at a 7:3 ratio. Optimal radiomics features were selected from the extracted data, and the random forest method was used to establish three predictive models (DWI, DCE, and combined DWI + DCE sequence models). Area under the curve (AUC) values for receiver operating characteristic (ROC) curves were used to assess model performance, the DeLong test to compare predictive efficacy, and the integrated discrimination improvement (IDI) method to assess model discrimination. Decision curves revealed the net clinical benefit of each model, and the SHAP method was used to interpret the best-performing model. Clinicopathological characteristics (age, menopausal status, molecular subtypes, and estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and Ki-67 status) were comparable between the ALNM and non-ALNM groups as well as between the training and testing cohorts (P > 0.05). AUC values for the DWI, DCE, and combined models were 0.793, 0.774, and 0.864 in the training cohort, with corresponding values of 0.728, 0.760, and 0.859 in the testing cohort. Per the DeLong test, the predictive efficacy of the combined model differed significantly from that of both the DWI and DCE models in the training cohort (P < 0.05), while no other significant differences in model performance were noted (P > 0.05). IDI results indicated that the combined model offered predictive power 13.5% (P < 0.05) and 10.2% (P < 0.05) higher than the DWI and DCE models, respectively. In a decision curve analysis, the combined model offered a net clinical benefit over the DCE model. The combined dual-sequence MRI-based radiomics model constructed herein, together with the supporting interpretability analyses, can aid in predicting the ALNM status of IBC patients, helping to guide clinical decision-making in these cases.
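
A minimal sketch of the random-forest-plus-SHAP pattern the abstract describes, with hypothetical stand-ins for the selected DWI/DCE radiomics features and labels; the authors' actual feature set and tuning are not published in the abstract.

```python
# Illustrative random forest + SHAP interpretation on placeholder data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(183, 12))    # 183 patients, 12 selected features
y = rng.integers(0, 2, size=183)  # 1 = ALNM, 0 = no ALNM

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles; mean |SHAP|
# per feature ranks its contribution to the ALNM prediction.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X)
# shap returns a list per class (older versions) or a 3-D array (newer);
# take the attributions for the positive (ALNM) class either way.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
importance = np.abs(sv_pos).mean(axis=0)
print(importance.argsort()[::-1])  # features, most influential first
```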

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

PubMed · Jul 18, 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools, and the segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference, and the virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and for each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models with accuracy comparable to that of manual segmentation performed by experienced clinicians, while significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
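
The volumetric discrepancy reported here is, in essence, an RMS surface deviation between a segmented model and the reference scan; below is a minimal sketch of that computation, assuming both surfaces are available as point clouds in millimetres (the point data are placeholders).

```python
# RMS of nearest-neighbour distances between a segmented tooth surface
# and the reference intraoral-scan surface, both as (N, 3) point clouds.
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_deviation(segmented_pts, reference_pts):
    """Both arguments: (N, 3) arrays of surface points in mm."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(segmented_pts)  # nearest reference point
    return float(np.sqrt(np.mean(dists ** 2)))

rng = np.random.default_rng(0)
seg = rng.normal(size=(5000, 3))
ref = seg + rng.normal(scale=0.05, size=seg.shape)  # simulated 0.05 mm noise
print(f"RMS deviation: {rms_surface_deviation(seg, ref):.3f} mm")
```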

AI-driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes.

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

PubMed · Jul 18, 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation, and fragmentation index, were extracted and correlated with the PLS-DA latent variables using Pearson correlation. Highly correlated features were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, an HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between the T2D and control groups. Morphogeometric feature analysis identified volume and thickness-gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation of structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
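
A compact sketch of what SDM-based shape-aware training can look like: the network predicts both a probability map and a signed distance map, and the loss combines Dice with SDM regression. The loss weighting and the dual prediction heads are assumptions, not the published EAT-Seg recipe.

```python
# Shape-aware segmentation loss: Dice on the mask plus L1 regression on
# the signed distance map (SDM). Weighting is an illustrative choice.
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    """prob, target: (N, 1, D, H, W) tensors in [0, 1]."""
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def shape_aware_loss(logits, sdm_pred, target_mask, target_sdm, w_sdm=0.5):
    """Combine region overlap (Dice) with boundary geometry (SDM L1)."""
    prob = torch.sigmoid(logits)
    l_dice = dice_loss(prob, target_mask)
    l_sdm = F.l1_loss(sdm_pred, target_sdm)  # regress signed distances
    return l_dice + w_sdm * l_sdm
```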

Open-access ultrasonic diaphragm dataset and an automatic diaphragm measurement using a deep learning network.

Li Z, Mao L, Jia F, Zhang S, Han C, Fu S, Zheng Y, Chu Y, Chen Z, Wang D, Duan H, Zheng Y

PubMed · Jul 18, 2025
The assessment of diaphragm function is crucial for effective clinical management and the prevention of complications associated with diaphragmatic dysfunction. However, current measurement methodologies rely on manual techniques that are susceptible to human error, raising the question: how does an automatic diaphragm measurement system based on a segmentation neural network, focusing on diaphragm thickness and excursion, compare with existing methodologies? The proposed system integrates segmentation and parameter measurement, leveraging a newly established ultrasound diaphragm dataset. This dataset comprises B-mode ultrasound images and videos for diaphragm thickness assessment, as well as M-mode images and videos for movement measurement. We introduce a novel deep learning-based segmentation network, the Multi-ratio Dilated U-Net (MDRU-Net), to enable accurate diaphragm measurements, and the system additionally incorporates a comprehensive implementation plan for automated measurement. Automatic measurement results were compared against manual assessments conducted by clinicians, revealing an average error of 8.12% in diaphragm thickening fraction measurements and an average relative error of only 4.3% in diaphragm excursion measurements. These results indicate overall minor discrepancies and strong potential for the clinical detection of diaphragmatic conditions. We also designed a user-friendly automatic measurement system for ultrasound-derived diaphragm parameters. In summary, we constructed a diaphragm ultrasound dataset covering thickness and excursion, developed an automatic diaphragm segmentation algorithm based on the U-Net architecture, and designed an automatic parameter measurement scheme, with a comparative error analysis conducted against manual measurements. Overall, the proposed segmentation algorithm demonstrated high segmentation performance and efficiency, and the automatic measurement scheme built on it exhibited high accuracy, eliminating subjective influence and enhancing the automation of diaphragm ultrasound parameter assessment, thereby providing new possibilities for diaphragm evaluation.
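
The two reported parameters have standard clinical definitions; below is a small sketch of how they might be computed once the diaphragm has been segmented (the input values are illustrative, not from the dataset).

```python
# Thickening fraction from B-mode thicknesses and excursion from an
# M-mode displacement trace, using the standard clinical formulas.
import numpy as np

def thickening_fraction(thickness_insp_mm, thickness_exp_mm):
    """(end-inspiratory - end-expiratory) / end-expiratory, as a %."""
    return 100.0 * (thickness_insp_mm - thickness_exp_mm) / thickness_exp_mm

def excursion_mm(mmode_trace_mm):
    """Peak-to-trough displacement of the diaphragm line over one breath."""
    trace = np.asarray(mmode_trace_mm)
    return float(trace.max() - trace.min())

print(thickening_fraction(3.1, 2.2))              # ~40.9 %
print(excursion_mm([0.0, 4.2, 11.5, 14.8, 6.1]))  # 14.8 mm
```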

Machine learning and discriminant analysis model for predicting benign and malignant pulmonary nodules.

Li Z, Zhang W, Huang J, Lu L, Xie D, Zhang J, Liang J, Sui Y, Liu L, Zou J, Lin A, Yang L, Qiu F, Hu Z, Wu M, Deng Y, Zhang X, Lu J

PubMed · Jul 18, 2025
Pulmonary nodules (PNs) are often considered an early manifestation of lung cancer. PNs that remain stable for more than two years, or whose pathological results indicate they are not lung cancer, are considered benign PNs (BPNs), while PNs that conform to the growth pattern of tumors, or whose pathological results indicate lung cancer, are considered malignant PNs (MPNs). Currently, more than 90% of PNs detected by screening tests are benign, with a false positive rate of up to 96.4%. While a range of predictive models have been developed for the identification of MPNs, challenges remain in distinguishing between BPNs and MPNs. We included a total of 5197 patients in this case-control study according to the preset exclusion criteria and sample size. Among these, 4735 BPNs and 2509 MPNs were randomly divided into training, validation, and test sets at a 7:1.5:1.5 ratio. Three widely applicable machine learning algorithms (Random Forest, Gradient Boosting Machine, and XGBoost) were used to screen the metrics, the corresponding predictive models were constructed using discriminant analysis, and the best-performing model was selected as the target model. The model was internally validated with 10-fold cross-validation and compared with the PKUPH and Block models. We collated information from chest CT examinations performed between 2018 and 2021 in the physical examination population and found that the detection rate of PNs was 21.57%, with an overall upward trend. The GMU_D model, constructed by discriminant analysis based on machine learning feature screening, had excellent discriminative performance (AUC = 0.866, 95% CI: 0.858-0.874) and higher accuracy than the PKUPH model (AUC = 0.559, 95% CI: 0.552-0.567) and the Block model (AUC = 0.823, 95% CI: 0.814-0.833). The cross-validation results likewise showed excellent performance (AUC = 0.866, 95% CI: 0.858-0.874). In summary, the detection rate of PNs was 21.57% in the physical examination population undergoing chest CT, and based on this real-world study of PNs, a prediction tool was developed and validated that can accurately distinguish between BPNs and MPNs with excellent predictive performance and discrimination.
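
A hedged sketch of the two-stage pattern described above: screen features with a tree ensemble, then fit a discriminant-analysis model on the retained subset. The data, feature count, and scoring are placeholders rather than the study's variables.

```python
# Stage 1: random-forest feature screening. Stage 2: discriminant
# analysis on the retained features, checked with 10-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))    # candidate clinical/CT metrics
y = rng.integers(0, 2, size=1000)  # 1 = malignant nodule, 0 = benign

# Stage 1: rank features by random-forest importance, keep the top 10.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]

# Stage 2: discriminant analysis on the screened features,
# internally validated with 10-fold cross-validation as in the study.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X[:, top], y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.3f}")
```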

Sex estimation with parameters of the facial canal by computed tomography using machine learning algorithms and artificial neural networks.

Secgin Y, Kaya S, Harmandaoğlu O, Öztürk O, Senol D, Önbaş Ö, Yılmaz N

PubMed · Jul 18, 2025
The skull is highly durable and, as one of the most dimorphic bones, plays a significant role in sex determination. The facial canal (FC), a clinically significant canal within the temporal bone, houses the facial nerve. This study aims to estimate sex using morphometric measurements of the FC through machine learning (ML) and artificial neural networks (ANNs). The study utilized computed tomography (CT) images of 200 individuals (100 females, 100 males) aged 19-65 years. These images were retrospectively retrieved from the Picture Archiving and Communication Systems (PACS) at Düzce University Faculty of Medicine, Department of Radiology, covering 2021-2024. Bilateral measurements of nine temporal bone parameters were performed in the axial, coronal, and sagittal planes. ML algorithms including Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), Decision Tree (DT), Extra Tree Classifier (ETC), Random Forest (RF), Logistic Regression (LR), Gaussian Naive Bayes (GaussianNB), and k-Nearest Neighbors (k-NN) were used, alongside a multilayer perceptron classifier (MLPC) as the ANN algorithm. Except for QDA (accuracy 0.93), all algorithms achieved an accuracy rate of 0.97. SHapley Additive exPlanations (SHAP) analysis revealed the five most impactful parameters: the right SGAs, left SGAs, right TSWs, left TSWs, and the inner mouth width of the left FC, respectively. FC-centered morphometric measurements show high accuracy in sex determination and may aid in understanding FC positioning across sexes and populations. These findings may support rapid and reliable sex estimation in forensic investigations, especially in cases with fragmented craniofacial remains, and provide auxiliary diagnostic data for preoperative planning in otologic and skull base surgeries. They are thus relevant for surgeons, anthropologists, and forensic experts.
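
A minimal sketch of this kind of multi-model comparison on a single feature matrix; the simulated features stand in for the nine bilateral temporal-bone measurements, and the train/test split is an illustrative choice.

```python
# Fit several classifiers on the same morphometric features and score
# each on a held-out split; placeholder data, not the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 18))    # 9 parameters x 2 sides
y = rng.integers(0, 2, size=200)  # 0 = female, 1 = male

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "RF": RandomForestClassifier(random_state=0),
    "MLPC": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: accuracy {acc:.2f}")
```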

Deep learning-based ultrasound diagnostic model for follicular thyroid carcinoma.

Wang Y, Lu W, Xu L, Xu H, Kong D

PubMed · Jul 18, 2025
It is challenging to preoperatively diagnose follicular thyroid carcinoma (FTC) on ultrasound images. This study aimed to develop an end-to-end deep learning diagnostic model that can classify thyroid tumors into benign tumors, FTC, and other malignant tumors. This retrospective multi-center study included 10,771 consecutive adult patients who underwent conventional ultrasound and postoperative pathology between January 2018 and September 2021. We proposed a novel data augmentation method and a mixed loss function to address the imbalanced dataset, and applied them to a pre-trained convolutional neural network and transformer model that could effectively extract image features. The proposed model can directly identify FTC from other malignant subtypes and benign tumors based on ultrasound images. The testing dataset included 1078 patients (mean age, 47.3 years ± 11.8 (SD); 811 female patients; FTCs, 39 of 1078 (3.6%); other malignancies, 385 of 1078 (35.7%)). The proposed classification model outperformed state-of-the-art models in differentiating FTC from other malignant subtypes and benign tumors, achieving excellent diagnostic performance with a balanced accuracy of 0.87, an AUC of 0.96 (95% CI: 0.96, 0.96), a mean sensitivity of 0.87, and a mean specificity of 0.92. It was also superior to the radiologists included in this study for thyroid tumor diagnosis (balanced accuracy: junior 0.60, p < 0.001; mid-level 0.59, p < 0.001; senior 0.66, p < 0.001). The developed classification model addressed the class-imbalance problem and achieved higher performance in differentiating FTC from other malignant subtypes and benign tumors compared with existing methods. Question: Deep learning has the potential to improve preoperative diagnostic accuracy for follicular thyroid carcinoma (FTC). Findings: The proposed model achieved high accuracy, sensitivity, and specificity in diagnosing follicular thyroid carcinoma, outperforming other models. Clinical relevance: The proposed model is a promising computer-aided diagnostic tool for the clinical diagnosis of FTC, which could potentially help reduce missed diagnoses and misdiagnoses of FTC.
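
The abstract does not specify the mixed loss, so as one plausible illustration, the sketch below blends class-weighted cross-entropy with a focal term that down-weights easy examples, a common remedy when one class (here FTC, about 3.6% of cases) is rare.

```python
# Illustrative "mixed loss" for class-imbalanced multi-class training:
# alpha * weighted cross-entropy + (1 - alpha) * focal loss.
import torch
import torch.nn.functional as F

def mixed_loss(logits, targets, class_weights, gamma=2.0, alpha=0.5):
    """logits: (N, C); targets: (N,) int64; class_weights: (C,) floats."""
    ce_raw = F.cross_entropy(logits, targets, reduction="none")
    ce_weighted = F.cross_entropy(logits, targets, weight=class_weights,
                                  reduction="none")
    pt = torch.exp(-ce_raw)             # prob assigned to the true class
    focal = (1 - pt) ** gamma * ce_raw  # down-weights easy examples
    return (alpha * ce_weighted + (1 - alpha) * focal).mean()

logits = torch.randn(8, 3)                       # benign / FTC / other
targets = torch.tensor([0, 0, 0, 0, 0, 1, 2, 2])
weights = torch.tensor([1.0, 10.0, 1.5])         # up-weight the rare class
print(mixed_loss(logits, targets, weights))
```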

Diagnostic interchangeability of deep-learning based Synth-STIR images generated from T1 and T2 weighted spine images.

Li J, Xu M, Jiang B, Dong Q, Xia Y, Zhou T, Lin X, Ma Y, Jiang S, Zhang Z, Xiang L, Fan L, Liu S

PubMed · Jul 18, 2025
To evaluate the image quality and diagnostic interchangeability of synthetic short-tau inversion recovery (Synth-STIR) images generated by deep learning in comparison with standard STIR. This prospective study recruited participants between July 2023 and August 2023. Participants were scanned with T1WI and T2WI, from which Synth-STIR images were generated. Signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were calculated for quantitative evaluation. Four independent, blinded radiologists performed subjective quality and lesion characteristic assessment. Wilcoxon tests were used to assess differences in SNR, CNR, and subjective image quality. Various diagnostic findings pertinent to the spine were tested for interchangeability using the individual equivalence index (IEI). Inter-reader and intra-reader agreement and concordance were computed, and McNemar tests were performed for comprehensive evaluation. One hundred ninety-nine participants (106 male patients; mean age 46.8 ± 16.9 years) were included. Compared to standard STIR, Synth-STIR reduces sequence scanning time by approximately 180 s and has significantly higher SNR and CNR (p < 0.001). For artifacts, noise, sharpness, and diagnostic confidence, all readers agreed that Synth-STIR was significantly better than standard STIR (all p < 0.001). In addition, the IEI was less than 1.61%. Kappa and Kendall values showed moderate to excellent agreement in the range of 0.52-0.97. There was no significant difference in the frequencies of the major features as reported with standard STIR and Synth-STIR (p = 0.211-1). Synth-STIR thus shows significantly higher SNR and CNR and is diagnostically interchangeable with standard STIR, with a substantial overall reduction in imaging time, thereby improving efficiency without sacrificing diagnostic value. Question: Can generated STIR images improve image quality while reducing spine MRI acquisition time, in order to increase clinical spine MRI throughput? Findings: With reduced acquisition time, Synth-STIR has significantly higher SNR and CNR than standard STIR and can be used interchangeably with standard STIR in detecting spinal abnormalities. Clinical relevance: Synth-STIR provides the same high-quality images for clinical diagnosis as standard STIR while reducing scanning time for spine MRI protocols, increasing clinical spine MRI throughput.
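
A small sketch of the SNR and CNR definitions typically used in such quantitative comparisons, computed from placeholder regions of interest; actual ROI placement is study-specific and not described in the abstract.

```python
# SNR and CNR from region-of-interest statistics; arrays are placeholders.
import numpy as np

def snr(signal_roi, noise_roi):
    """Mean signal divided by the standard deviation of background noise."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """Absolute mean difference between two tissues over the noise SD."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

rng = np.random.default_rng(0)
lesion = rng.normal(180, 12, size=(32, 32))
cord = rng.normal(120, 12, size=(32, 32))
background = rng.normal(0, 6, size=(32, 32))
print(f"SNR: {snr(lesion, background):.1f}, "
      f"CNR: {cnr(lesion, cord, background):.1f}")
```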