
A Robust Residual Three-dimensional Convolutional Neural Networks Model for Prediction of Amyloid-β Positivity by Using FDG-PET.

Ardakani I, Yamada T, Iwano S, Kumar Maurya S, Ishii K

PubMed · Jun 17, 2025
2-Deoxy-2-[18F]FDG PET, which is widely used in oncology, is more accessible and affordable than amyloid PET, a crucial tool for determining amyloid positivity in the diagnosis of Alzheimer disease (AD). This study aimed to leverage deep learning with residual 3D convolutional neural networks (3DCNN) to develop a robust model that predicts amyloid-β positivity from FDG-PET. A cohort of 187 patients was used for model development, consisting of patients ranging from cognitively normal to those with dementia and other cognitive impairments who underwent T1-weighted MRI, 18F-FDG PET, and 11C-Pittsburgh compound B (PiB) PET scans. A residual 3DCNN model was configured using a nonexhaustive grid search and trained on repeated random splits of the development data set. We evaluated the performance of the model, and particularly its robustness, on a multisite data set of 99 patients of different ethnicities with images at different site-harmonization levels. The model achieved mean AUC scores of 0.815 and 0.840 on images without and with site harmonization, respectively. In the cognitively normal (CN) group, it achieved higher AUC scores of 0.801 and 0.834, compared with 0.777 and 0.745 in the dementia group. The corresponding mean F1 scores were 0.770 and 0.810 on images without and with site harmonization; in the CN group, F1 scores were lower (0.580 and 0.658) than in the dementia group (0.907 and 0.931). We demonstrated that a residual 3DCNN can learn complex 3D spatial patterns in FDG-PET images and robustly predict amyloid-β positivity with significantly less reliance on site-harmonization preprocessing.
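A minimal sketch of the core building block described above, a residual 3D convolutional block, is shown below; it assumes a PyTorch implementation, and the channel counts, stride, and input volume size are illustrative choices rather than the authors' configuration.

```python
# Sketch of a residual 3D conv block for volumetric PET data (illustrative sizes).
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_channels)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_channels)
        # 1x1x1 projection so the skip connection matches shape when needed
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm3d(out_channels),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))

# Example: one single-channel volume; 64^3 voxels is an assumption, not the paper's input size.
block = ResidualBlock3D(1, 16, stride=2)
features = block(torch.randn(1, 1, 64, 64, 64))
print(features.shape)  # torch.Size([1, 16, 32, 32, 32])
```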

Enhancing cerebral infarct classification by automatically extracting relevant fMRI features.

Dobromyslin VI, Zhou W

PubMed · Jun 17, 2025
Accurate detection of cortical infarcts is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures that primarily assess blood vessel and structural white matter damage. There is a need for non-invasive approaches, such as functional MRI (fMRI), that better reflect neuronal viability. This study utilized automated machine learning (auto-ML) techniques to identify novel fMRI biomarkers specific to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, which included 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize the partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 new auto-generated fMRI biomarkers across 33 different classification models. Our analysis identified 6 new fMRI biomarkers that substantially improved infarct detection performance compared to previously established metrics. The best-performing combination of biomarkers and classifier achieved a cross-validation ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. The proposed auto-ML fMRI infarct-detection technique demonstrated robustness across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to significantly enhance non-invasive infarct detection.
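The screening step, scoring many candidate classifiers on a pool of fMRI features with cross-validated ROC AUC, can be sketched as below; the placeholder data, the model list, and the fold count are assumptions, not the study's actual auto-ML setup.

```python
# Sketch: compare candidate classifiers on a biomarker feature matrix with 5-fold CV.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 114))      # 7 known + 107 auto-generated biomarkers (placeholder data)
y = rng.integers(0, 2, size=50)     # infarct vs. cognitively normal labels (placeholder)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
    "svc": SVC(probability=True, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv).mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```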

Enhancing Ultrasound-Based Diagnosis of Unilateral Diaphragmatic Paralysis with a Visual Transformer-Based Model.

Kalkanis A, Bakalis D, Testelmans D, Buyse B, Simos YV, Tsamis KI, Manis G

PubMed · Jun 17, 2025
This paper presents a novel methodology that combines a pre-trained Visual Transformer-Based Deep Model (ViT) with a custom denoising image filter for the diagnosis of Unilateral Diaphragmatic Paralysis (UDP) using Ultrasound (US) images. The ViT is employed to extract complex features from US images of 17 volunteers, capturing intricate patterns and details that are critical for accurate diagnosis. The extracted features are then fed into an ensemble learning model to determine the presence of UDP. The proposed framework achieves an average accuracy of 93.8% on a stratified 5-fold cross-validation, surpassing relevant state-of-the-art (SOTA) image classifiers. This high level of performance underscores the robustness and effectiveness of the framework, highlighting its potential as a prominent diagnostic tool in medical imaging.
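The two-stage design, a frozen pre-trained vision transformer as feature extractor followed by a classical ensemble classifier, might look roughly like the sketch below; the specific ViT weights, preprocessing, and downstream classifier are assumptions, and the paper's custom denoising filter is omitted.

```python
# Sketch: frozen ViT feature extractor for ultrasound frames + classical ensemble on top.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.ensemble import RandomForestClassifier

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads = torch.nn.Identity()   # drop the classification head, keep the 768-d embedding
vit.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return vit(batch).numpy()     # (N, 768) feature matrix

# Downstream (illustrative): fit any ensemble on the extracted features and UDP labels.
# clf = RandomForestClassifier(n_estimators=300).fit(extract_features(train_images), train_labels)
```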

Step-by-Step Approach to Design Image Classifiers in AI: An Exemplary Application of the CNN Architecture for Breast Cancer Diagnosis

Lohani, A., Mishra, B. K., Wertheim, K. Y., Fagbola, T. M.

medRxiv preprint · Jun 17, 2025
In recent years, different Convolutional Neural Network (CNN) approaches have been applied to image classification in general and to specific problems such as breast cancer diagnosis, but there is no standardised approach to facilitate comparison and synergy. This paper proposes a step-by-step approach to standardise a common image classification application, using the classification of breast ultrasound images for breast cancer diagnosis as an illustrative example. Three distinct datasets, the Breast Ultrasound Image (BUSI), Breast Ultrasound Image (BUI), and Ultrasound Breast Images for Breast Cancer (UBIBC) datasets, were used to build and fine-tune custom and pre-trained CNN models systematically. Custom CNN models were built, and transfer learning (TL) was then applied to deploy a broad range of pre-trained models, optimised through data augmentation and hyperparameter tuning. Models were trained and tested in scenarios involving both limited and large datasets to gain insights into their robustness and generality. The results indicate that the custom CNN and VGG19 are the two most suitable architectures for this problem. The experimental results highlight the significance of employing an effective step-by-step approach in image classification tasks to enhance the robustness and generalisation capabilities of CNN-based classifiers.
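The transfer-learning step with VGG19 could be sketched as follows; the input size, augmentation settings, dense head, and three-class output are illustrative assumptions rather than the paper's tuned configuration.

```python
# Sketch: VGG19 transfer learning with on-the-fly augmentation for breast ultrasound images.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False            # freeze the pre-trained convolutional features

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),   # assumed classes, e.g. benign / malignant / normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```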

Ultrasound for breast cancer detection: A bibliometric analysis of global trends between 2004 and 2024.

Sun YY, Shi XT, Xu LL

PubMed · Jun 16, 2025
With the advancement of computer technology and imaging equipment, ultrasound has emerged as a crucial tool in breast cancer diagnosis. To gain deeper insight into the research landscape of ultrasound in breast cancer diagnosis, this study employed bibliometric methods for a comprehensive analysis spanning 2004 to 2024, covering 3523 articles from 2176 institutions in 82 countries/regions. Over this period, publications on ultrasound diagnosis of breast cancer showed a fluctuating growth trend. Notably, China, Seoul National University, and Kim EK emerged as leading contributors in ultrasound for breast cancer detection, and the most published and cited journals were Ultrasound Med Biol and Radiology. Research hotspots in this area included "breast lesion", "dense breast", and "breast-conserving surgery", while "machine learning", "ultrasonic imaging", "convolutional neural network", "case report", "pathological complete response", "deep learning", "artificial intelligence", and "classification" are anticipated to become future research frontiers. This bibliometric analysis and visualization of ultrasonic breast cancer diagnosis publications offers clinical professionals a reliable research focus and direction.

Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach

Shreyas Rajeev, B Sathish Babu

arXiv preprint · Jun 16, 2025
Kernel size selection in Convolutional Neural Networks (CNNs) is a critical but often overlooked design decision that affects receptive field, feature extraction, computational cost, and model accuracy. This paper proposes the Best Kernel Size Estimation Function (BKSEF), a mathematically grounded and empirically validated framework for optimal, layer-wise kernel size determination. BKSEF balances information gain, computational efficiency, and accuracy improvements by integrating principles from information theory, signal processing, and learning theory. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet-lite, ChestX-ray14, and GTSRB datasets demonstrate that BKSEF-guided architectures achieve up to 3.1 percent accuracy improvement and 42.8 percent reduction in FLOPs compared to traditional models using uniform 3x3 kernels. Two real-world case studies further validate the approach: one for medical image classification in a cloud-based setup, and another for traffic sign recognition on edge devices. The former achieved enhanced interpretability and accuracy, while the latter reduced latency and model size significantly, with minimal accuracy trade-off. These results show that kernel size can be an active, optimizable parameter rather than a fixed heuristic. BKSEF provides practical heuristics and theoretical support for researchers and developers seeking efficient and application-aware CNN designs. It is suitable for integration into neural architecture search pipelines and real-time systems, offering a new perspective on CNN optimization.
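BKSEF itself is not reproduced here, but the trade-off it optimizes is easy to illustrate: the multiply-accumulate cost of a convolutional layer grows quadratically with kernel size, so a larger receptive field must buy a corresponding accuracy gain. The small sketch below, with illustrative layer dimensions, shows this scaling.

```python
# Sketch: per-layer multiply-accumulate cost as a function of kernel size (illustrative sizes).
def conv2d_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a stride-1, 'same'-padded 2D convolution."""
    return h * w * c_in * c_out * k * k

for k in (3, 5, 7):
    macs = conv2d_macs(32, 32, 64, 64, k)
    print(f"kernel {k}x{k}: {macs / 1e6:.1f} M MACs "
          f"({(k * k) / 9:.1f}x the cost of 3x3)")
```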

Predicting overall survival of NSCLC patients with clinical, radiomics and deep learning features

Kanakarajan, H., Zhou, J., Baene, W. D., Sitskoorn, M.

medRxiv preprint · Jun 16, 2025
Background and purpose: Accurate estimation of Overall Survival (OS) in Non-Small Cell Lung Cancer (NSCLC) patients provides critical insights for treatment planning. While previous studies showed that radiomics and Deep Learning (DL) features increase prediction accuracy, this study examined whether a model combining radiomics and DL features with clinical and dosimetric features outperforms other models. Materials and methods: We collected pre-treatment lung CT scans and clinical data for 225 NSCLC patients from the Maastro Clinic: 180 for training and 45 for testing. Radiomics features were extracted using the Python radiomics feature extractor, and DL features were obtained using a 3D ResNet model. An ensemble model comprising XGB and NN classifiers was developed using: (1) clinical features only; (2) clinical and radiomics features; (3) clinical and DL features; and (4) clinical, radiomics, and DL features. Performance metrics were evaluated on the test and K-fold cross-validation data sets. Results: The prediction model using only clinical variables achieved an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.64 and a test accuracy of 77.55%. The best performance came from combining clinical, radiomics, and DL features (AUC: 0.84, accuracy: 85.71%). The improvement of this model was statistically significant compared to models trained with clinical features alone or with a combination of clinical and radiomics features. Conclusion: Integrating radiomics and DL features with clinical characteristics improved the prediction of OS after radiotherapy for NSCLC patients. The increased accuracy of the integrated model enables personalized, risk-based treatment planning, guiding clinicians toward more effective interventions, improved patient outcomes, and enhanced quality of life.
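A minimal sketch of the feature-fusion ensemble, concatenating clinical, radiomics, and deep-learning feature blocks and soft-voting an XGBoost and a neural-network classifier, is shown below; the feature dimensions, hyperparameters, and survival label are placeholder assumptions.

```python
# Sketch: soft-voting ensemble of XGBoost and an MLP over fused feature blocks.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 180
clinical  = rng.normal(size=(n, 10))    # e.g. age, stage, dosimetric variables (placeholder)
radiomics = rng.normal(size=(n, 100))   # radiomics features from the tumor mask (placeholder)
deep      = rng.normal(size=(n, 512))   # 3D ResNet embedding of the CT volume (placeholder)
X = np.hstack([clinical, radiomics, deep])
y = rng.integers(0, 2, size=n)          # binarized overall-survival label (placeholder)

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")),
        ("nn", make_pipeline(StandardScaler(),
                             MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000))),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))
```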

An Innovative Machine Learning-Based Algorithm for Diagnosing Pediatric Ovarian Torsion.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed · Jun 16, 2025
We aimed to develop a machine learning (ML) algorithm based on physical examination, sonographic findings, and laboratory markers. Data from 70 patients with confirmed ovarian torsion followed and treated in our clinic and from 73 control patients who presented to the emergency department between 2013 and 2023 with similar complaints but had no ovarian torsion detected on ultrasound were retrospectively analyzed. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. Presence of nausea/vomiting and symptom duration were statistically significant (p<0.05) for ovarian torsion. Presence of abdominal pain and a palpable mass on physical examination were not significant (p>0.05). White blood cell count (WBC), neutrophil/lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant predictors of torsion (p<0.001, p<0.05). Ovarian size ratio, medialization, follicular ring sign, and presence of free pelvic fluid on ultrasound were statistically significant in the torsion group (p<0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as either control or torsion. The models were evaluated using 5-fold cross-validation, achieving an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds with the decision tree model. This study represents the first development of an ML algorithm that integrates clinical, laboratory, and ultrasonographic findings for the diagnosis of pediatric ovarian torsion, with over 98% accuracy.
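The model comparison described above, three supervised classifiers evaluated with 5-fold cross-validation on tabular clinical, laboratory, and sonographic features, can be sketched as follows; the feature matrix and labels are placeholders, not the study's data.

```python
# Sketch: 5-fold CV comparison of decision tree, random forest, and LightGBM classifiers.
import numpy as np
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 12))       # e.g. WBC, NLR, SII, SIRI, CRP, ovarian size ratio, ... (placeholder)
y = rng.integers(0, 2, size=143)     # torsion vs. control labels (placeholder)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "lightgbm": LGBMClassifier(n_estimators=200, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=["f1", "accuracy"])
    print(f"{name}: F1={scores['test_f1'].mean():.2f}, "
          f"accuracy={scores['test_accuracy'].mean():.2f}")
```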

Three-dimensional multimodal imaging for predicting early recurrence of hepatocellular carcinoma after surgical resection.

Peng J, Wang J, Zhu H, Jiang P, Xia J, Cui H, Hong C, Zeng L, Li R, Li Y, Liang S, Deng Q, Deng H, Xu H, Dong H, Xiao L, Liu L

PubMed · Jun 16, 2025
High tumor recurrence after surgery remains a significant challenge in managing hepatocellular carcinoma (HCC). We aimed to construct a multimodal model to forecast the early recurrence of HCC after surgical resection and explore the associated biological mechanisms. Overall, 519 patients with HCC were included from three medical centers. Of these, 433 patients from Nanfang Hospital formed the training cohort, and 86 patients from the other two hospitals comprised the validation cohort. Radiomics and deep learning (DL) models were developed using contrast-enhanced computed tomography images. Radiomics feature visualization and gradient-weighted class activation mapping were applied to improve interpretability. A multimodal model (MM-RDLM) was constructed by integrating the radiomics and DL models. Associations between MM-RDLM and recurrence-free survival (RFS) and overall survival were analyzed. Gene set enrichment analysis (GSEA) and multiplex immunohistochemistry (mIHC) were used to investigate the biological mechanisms. Models based on hepatic arterial phase images exhibited the best predictive performance, with radiomics and DL models achieving areas under the curve (AUCs) of 0.770 (95% confidence interval [CI]: 0.725-0.815) and 0.846 (95% CI: 0.807-0.886), respectively, in the training cohort. MM-RDLM achieved an AUC of 0.955 (95% CI: 0.937-0.972) in the training cohort and 0.930 (95% CI: 0.876-0.984) in the validation cohort. MM-RDLM (high vs. low) was notably linked to RFS in the training (hazard ratio [HR] = 7.80 [5.74-10.61], P < 0.001) and validation (HR = 10.46 [4.96-22.68], P < 0.001) cohorts. GSEA revealed enrichment of the natural killer cell-mediated cytotoxicity pathway in the MM-RDLM low cohort. mIHC showed significantly higher percentages of CD3-, CD56-, and CD8-positive cells in the MM-RDLM low group. The MM-RDLM model demonstrated strong predictive performance for early postoperative recurrence of HCC. These findings contribute to identifying patients at high risk for early recurrence and provide insights into the potential underlying biological mechanisms.
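The survival-analysis step, stratifying patients by the dichotomized MM-RDLM output and estimating the hazard ratio for recurrence-free survival, might be implemented roughly as below using the lifelines package; the column names, follow-up times, and risk threshold are assumptions.

```python
# Sketch: Cox model and Kaplan-Meier estimates for RFS, stratified by model risk group.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rfs_months": rng.exponential(24, size=200),    # placeholder follow-up times
    "recurred": rng.integers(0, 2, size=200),       # event indicator (placeholder)
    "mm_rdlm_high": rng.integers(0, 2, size=200),   # model output dichotomized at an assumed cutoff
})

cox = CoxPHFitter().fit(df, duration_col="rfs_months", event_col="recurred")
cox.print_summary()   # hazard ratio for the high-risk group

km = KaplanMeierFitter()
for label, group in df.groupby("mm_rdlm_high"):
    km.fit(group["rfs_months"], group["recurred"],
           label=f"MM-RDLM {'high' if label else 'low'}")
    print(km.median_survival_time_)
```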

Classification of glioma grade and Ki-67 level prediction in MRI data: A SHAP-driven interpretation.

Bhuiyan EH, Khan MM, Hossain SA, Rahman R, Luo Q, Hossain MF, Wang K, Sumon MSI, Khalid S, Karaman M, Zhang J, Chowdhury MEH, Zhu W, Zhou XJ

PubMed · Jun 16, 2025
This study focuses on artificial intelligence-driven classification of glioma grade and prediction of Ki-67 level using T2w-FLAIR MRI, exploring the association of Ki-67 biomarkers with deep learning (DL) features through explainable artificial intelligence (XAI) and SHapley Additive exPlanations (SHAP). This IRB-approved study included 101 patients with glioma whose brain MR images were acquired with the T2w-FLAIR sequence. We extracted DL bottleneck features from the glioma MR images using ResNet50. Principal component analysis (PCA) was used for dimensionality reduction, and XAI was used to identify potential features. XGBoost classified the histologic grade of the glioma and the Ki-67 level. We integrated the potential DL features with patient demographics (age and sex) and Ki-67 biomarkers, using SHAP to determine the model's essential features and interactions. Glioma grade classification and Ki-67 level prediction achieved overall accuracies of 0.94 and 0.91, respectively. The model achieved precision scores of 0.92, 0.94, and 0.96 for glioma grades 2, 3, and 4, and 0.88, 0.94, and 0.97 for Ki-67 levels (low: 5%≤Ki-67<10%, moderate: 10%≤Ki-67≤20%, and high: Ki-67>20%). Corresponding F1-scores were 0.95, 0.88, and 0.96 for glioma grades and 0.92, 0.93, and 0.87 for Ki-67 levels. SHAP analysis further highlighted a strong association between bottleneck DL features and Ki-67 biomarkers, demonstrating their potential to differentiate glioma grades and Ki-67 levels while offering valuable insights into glioma aggressiveness. This study demonstrates precise classification of glioma grades and prediction of Ki-67 levels, underscoring the potential of AI-driven MRI analysis to enhance clinical decision-making in glioma management.
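A minimal sketch of the pipeline described above, ResNet50 bottleneck features reduced with PCA, combined with demographics and the Ki-67 biomarker, classified with XGBoost, and explained with SHAP, is shown below; the feature dimensions and hyperparameters are illustrative assumptions.

```python
# Sketch: bottleneck features -> PCA -> XGBoost -> SHAP attribution (illustrative sizes).
import numpy as np
import shap
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
bottleneck = rng.normal(size=(101, 2048))    # ResNet50 pooled features per patient (placeholder)
demographics = rng.normal(size=(101, 3))     # age, sex, Ki-67 index (placeholder)
grade = rng.integers(0, 3, size=101)         # glioma grade 2/3/4 encoded as 0/1/2 (placeholder)

pca = PCA(n_components=20, random_state=0)
X = np.hstack([pca.fit_transform(bottleneck), demographics])

clf = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="mlogloss")
clf.fit(X, grade)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)       # per-feature contributions (per class for multiclass)
# shap.summary_plot(shap_values, X)          # typical visualization of feature importance
```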