
Intralesional and perilesional radiomics strategy based on different machine learning models for the prediction of International Society of Urological Pathology grade group in prostate cancer.

Li Z, Yang L, Wang X, Xu H, Chen W, Kang S, Huang Y, Shu C, Cui F, Zhang Y

PubMed | Jul 4 2025
To develop and evaluate an intralesional and perilesional radiomics strategy based on different machine learning models to differentiate International Society of Urological Pathology (ISUP) grade group > 2 from ISUP grade group ≤ 2 prostate cancers (PCa). A total of 340 PCa patients confirmed by radical prostatectomy pathology were obtained from two hospitals and divided into training, internal validation, and external validation groups. Radiomic features were extracted from T2-weighted imaging, and four distinct radiomic feature models were constructed: intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion. Four machine learning classifiers (logistic regression (LR), random forest (RF), extra trees (ET), and multilayer perceptron (MLP)) were employed for model training and evaluation to select the optimal model. The performance of each model was assessed by calculating the area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. The AUCs for the RF classifier were higher than those of LR, ET, and MLP, so RF was selected as the final radiomic model. The nomogram model integrating the perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion models had AUCs of 0.929, 0.734, and 0.743 for the training, internal validation, and external validation cohorts, respectively, higher than any of the individual intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion models. The proposed nomogram built from the perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion radiomics has the potential to predict the ISUP grade group of PCa patients. Trial registration: not applicable.
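
The abstract does not include code; as a minimal sketch of the kind of intralesional/perilesional pipeline it describes, the Python snippet below assumes pyradiomics for feature extraction, SimpleITK for mask manipulation, and scikit-learn for the RF classifier. The 3-voxel perilesional ring, file handling, and RF settings are illustrative choices, not the authors' protocol.

```python
"""Sketch of an intra-/perilesional T2 radiomics pipeline (assumptions:
default pyradiomics settings, a 3-voxel perilesional ring, RF hyperparameters)."""
import SimpleITK as sitk
import numpy as np
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

extractor = featureextractor.RadiomicsFeatureExtractor()

def perilesional_mask(mask: sitk.Image, radius: int = 3) -> sitk.Image:
    """Dilate the lesion mask, then subtract it to keep a surrounding ring."""
    dilated = sitk.BinaryDilate(mask, [radius] * 3)
    return sitk.And(dilated, sitk.Not(mask))

def case_features(t2_path: str, mask_path: str) -> np.ndarray:
    """Concatenate intralesional and perilesional radiomic features."""
    image, mask = sitk.ReadImage(t2_path), sitk.ReadImage(mask_path)
    intra = extractor.execute(image, mask)
    peri = extractor.execute(image, perilesional_mask(mask))
    values = [v for d in (intra, peri) for k, v in d.items()
              if not k.startswith("diagnostics")]  # drop metadata entries
    return np.array(values, dtype=float)

# With X_train/X_test stacked from case_features() and y = (ISUP grade group > 2):
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
# print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```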

Deep learning-based classification of parotid gland tumors: integrating dynamic contrast-enhanced MRI for enhanced diagnostic accuracy.

Sinci KA, Koska IO, Cetinoglu YK, Erdogan N, Koc AM, Eliyatkin NO, Koska C, Candan B

PubMed | Jul 4 2025
To evaluate the performance of deep learning models in classifying parotid gland tumors using T2-weighted, diffusion-weighted, and contrast-enhanced T1-weighted MR images, along with dynamic contrast-enhanced (DCE) data derived from time-intensity curves. In this retrospective, single-center study of 164 participants, 124 patients with surgically confirmed parotid gland tumors and 40 individuals with normal parotid glands underwent multiparametric MRI, including DCE sequences. Data partitions were performed at the patient level (80% training, 10% validation, 10% testing). Two deep learning architectures (MobileNetV2 and EfficientNetB0), as well as a combined approach integrating predictions from both models, were fine-tuned using transfer learning to classify (i) normal versus tumor (Task 1), (ii) benign versus malignant tumors (Task 2), and (iii) benign subtypes (Warthin tumor vs. pleomorphic adenoma) (Task 3). For Tasks 2 and 3, DCE-derived metrics were integrated via a support vector machine. Classification performance was assessed using accuracy, precision, recall, and F1-score, with 95% confidence intervals derived via bootstrap resampling. In Task 1, EfficientNetB0 achieved the highest accuracy (85%). In Task 2, the combined approach reached an accuracy of 65%, while adding DCE data significantly improved performance, with MobileNetV2 achieving an accuracy of 96%. In Task 3, EfficientNetB0 demonstrated the highest accuracy without DCE data (75%), while including DCE data boosted the combined approach to an accuracy of 89%. Adding DCE-MRI data to deep learning models substantially enhances parotid gland tumor classification accuracy, highlighting the value of functional imaging biomarkers in improving noninvasive diagnostic workflows.
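
As a rough illustration of the transfer-learning setup described (not the authors' code), the Keras sketch below fine-tunes EfficientNetB0 with a frozen ImageNet backbone and a new classification head; the input size, dropout rate, and optimizer are assumptions.

```python
"""Minimal transfer-learning sketch for Task 2 (benign vs. malignant)."""
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(n_classes: int) -> tf.keras.Model:
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False  # train only the new head first
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(base.input, out)

model = build_classifier(n_classes=2)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# DCE fusion (per the abstract, via an SVM; the exact inputs are assumed):
#   from sklearn.svm import SVC
#   svm = SVC().fit(np.hstack([cnn_probs_train, dce_metrics_train]), y_train)
```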

AI-enabled obstetric point-of-care ultrasound as an emerging technology in low- and middle-income countries: provider and health system perspectives.

Della Ripa S, Santos N, Walker D

PubMed | Jul 4 2025
In many low- and middle-income countries (LMICs), widespread access to obstetric ultrasound is challenged by a lack of trained providers, workload, and inadequate resources required for sustainability. Artificial intelligence (AI) is a powerful tool for automating image acquisition and interpretation and may help overcome these barriers. This study explored stakeholders' opinions about how AI-enabled point-of-care ultrasound (POCUS) might change current antenatal care (ANC) services in LMICs and identified key considerations for its introduction. We purposively sampled midwives, doctors, researchers, and implementers for this mixed-methods study, with a focus on those who live or work in African LMICs. Individuals completed an anonymous web-based survey, then participated in an interview or focus group. Among the 41 participants, we captured demographics, experience with and perceptions of standard POCUS, and reactions to a description of an AI-enabled POCUS prototype. Qualitative data were analyzed by thematic content analysis, and quantitative Likert and rank-order data were aggregated as frequencies; the latter were presented alongside illustrative quotes to highlight overall versus nuanced perceptions. The following themes emerged: (1) priority AI capabilities; (2) potential impact on ANC quality, services, and clinical outcomes; (3) health system integration considerations; and (4) research priorities. First, AI-enabled POCUS elicited concerns around algorithmic accuracy and compromised clinical acumen due to over-reliance on AI, but also interest in automated gestational age estimation. Second, there was overall agreement that both standard and AI-enabled POCUS could improve ANC attendance (75% and 65% of respondents, respectively), provider-client trust (82%, 60%), and providers' confidence in clinical decision-making (85%, 70%); AI consistently elicited more uncertainty among respondents. Third, health system considerations emerged, including task sharing with midwives, ultrasound training delivery and curricular content, and policy-related issues such as data security and liability risks. For both standard and AI-enabled POCUS, clinical decision support and referral strengthening were deemed necessary to improve outcomes. Lastly, ranked priority research areas included algorithm accuracy across diverse populations and impact on ANC performance indicators; mortality indicators were less prioritized. Optimism that AI-enabled POCUS can increase access in settings with limited personnel and resources is coupled with caution about potential risks that warrant careful consideration and exploration.

Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed | Jul 4 2025
Cytologic diagnosis of the benign or malignant nature of thyroid nodules, based on smears obtained through ultrasound-guided fine-needle aspiration, is crucial for determining subsequent treatment plans. Artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnosis. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with a model relying solely on cytologic images (CI) and evaluate its performance and clinical application potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images were used as the dataset for training the models and were divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while that of the CI-DUF model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly higher accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model that relies only on cytologic images, particularly excelling in thyroid cytology classification.
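
A two-branch late-fusion network is one plausible reading of the CI-DUF architecture; the abstract does not specify the backbone or fusion scheme, so in the PyTorch sketch below the ResNet-18 image branch, layer sizes, and feature dimensions are illustrative assumptions.

```python
"""Illustrative two-branch fusion of cytologic images and ultrasound features."""
import torch
import torch.nn as nn
from torchvision import models

class CIDUFNet(nn.Module):
    def __init__(self, n_ultrasound_feats: int, n_classes: int = 4):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()          # 512-d cytology image embedding
        self.image_branch = backbone
        self.us_branch = nn.Sequential(      # ultrasound feature embedding
            nn.Linear(n_ultrasound_feats, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, n_classes)

    def forward(self, image: torch.Tensor, us_feats: torch.Tensor):
        fused = torch.cat([self.image_branch(image),
                           self.us_branch(us_feats)], dim=1)
        return self.head(fused)

# Four classes echo the abstract: papillary, follicular, medullary, benign.
model = CIDUFNet(n_ultrasound_feats=16)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
```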

Disease Classification of Pulmonary Xenon Ventilation MRI Using Artificial Intelligence.

Matheson AM, Bdaiwi AS, Willmering MM, Hysinger EB, McCormack FX, Walkup LL, Cleveland ZI, Woods JC

PubMed | Jul 4 2025
Hyperpolarized ¹²⁹Xe magnetic resonance imaging (MRI) measures the extent of lung ventilation by ventilation defect percent (VDP), but VDP alone cannot distinguish between diseases. Prior studies have reported anecdotal evidence of disease-specific defect patterns, such as wedge-shaped defects in asthma and polka-dot defects in lymphangioleiomyomatosis (LAM). Neural networks can classify images by evaluating their shapes and textures, but this has not been attempted with xenon MRI. We hypothesized that an artificial intelligence network trained on ventilation MRI could classify diseases based on spatial patterns in lung MR images alone. Xenon MRI data in six pulmonary conditions (control, asthma, bronchiolitis obliterans syndrome, bronchopulmonary dysplasia, cystic fibrosis (CF), and LAM) were used to train convolutional neural networks. Network performance was assessed with top-1 and top-2 accuracy, recall, precision, and one-versus-all area under the curve (AUC). Gradient class-activation mapping (Grad-CAM) was used to visualize which parts of the images were important for classification. Training/testing data were collected from 262 participants. The top-performing network (VGG-16) had a top-1 accuracy of 56%, top-2 accuracy of 78%, recall of 0.30, precision of 0.70, and AUC of 0.85. The network performed better on larger classes (top-1 accuracy: control 62% [n=57], CF 67% [n=85], LAM 69% [n=61]) and outperformed human observers (human top-1 accuracy 40% vs. network 61% on a single training fold). We developed an artificial intelligence tool that classifies disease from xenon ventilation images alone and outperforms human observers. This suggests that xenon images contain additional, disease-specific information that could be useful for clinically challenging cases or for disease phenotyping.
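
Grad-CAM, as named in the abstract, can be reproduced with a pair of hooks in PyTorch. The sketch below assumes a six-class VGG-16 head matching the six conditions and uses the network's last convolutional layer; the weights and input are placeholders, not the paper's trained model.

```python
"""Grad-CAM sketch over a six-class VGG-16 classifier."""
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 6)  # six ventilation classes
model.eval()

activations, gradients = {}, {}
layer = model.features[28]  # last convolutional layer of VGG-16
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(image: torch.Tensor) -> torch.Tensor:
    """Class-activation map for the predicted class, normalized to [0, 1]."""
    logits = model(image)
    logits[0, logits.argmax()].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1))
    return cam / cam.max()

cam = grad_cam(torch.randn(1, 3, 224, 224))  # placeholder ventilation image
```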

A Multimodal Ultrasound-Driven Approach for Automated Tumor Assessment with B-Mode and Multi-Frequency Harmonic Motion Images.

Hu S, Liu Y, Wang R, Li X, Konofagou EE

PubMed | Jul 4 2025
Harmonic Motion Imaging (HMI) is an ultrasound elasticity imaging method that measures the mechanical properties of tissue using amplitude-modulated acoustic radiation force (AM-ARF). Multi-frequency HMI (MF-HMI) excites tissue at various AM frequencies simultaneously, allowing for image optimization without prior knowledge of inclusion size and stiffness. However, challenges remain in size estimation as inconsistent boundary effects result in different perceived sizes across AM frequencies. Herein, we developed an automated assessment method for tumor and focused ultrasound surgery (FUS) induced lesions using a transformer-based multi-modality neural network, HMINet, and further automated neoadjuvant chemotherapy (NACT) response prediction. HMINet was trained on 380 pairs of MF-HMI and B-mode images of phantoms and in vivo orthotopic breast cancer mice (4T1). Test datasets included phantoms (n = 32), in vivo 4T1 mice (n = 24), breast cancer patients (n = 20), FUS-induced lesions in ex vivo animal tissue and in vivo clinical settings with real-time inference, with average segmentation accuracy (Dice) of 0.91, 0.83, 0.80, and 0.81, respectively. HMINet outperformed state-of-the-art models; we also demonstrated the enhanced robustness of the multi-modality strategy over B-mode-only, both quantitatively through Dice scores and in terms of interpretation using saliency analysis. The contribution of AM frequency based on the number of salient pixels showed that the most significant AM frequencies are 800 and 200 Hz across clinical cases. We developed an automated, multimodality ultrasound-based tumor and FUS lesion assessment method, which facilitates the clinical translation of stiffness-based breast cancer treatment response prediction and real-time image-guided FUS therapy.
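
The Dice scores quoted above follow the standard overlap definition; a minimal PyTorch version for binary masks is shown below (the 0.5 binarization threshold is an illustrative choice).

```python
"""Standard Dice coefficient used to score segmentations."""
import torch

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) on binarized masks."""
    pred, target = (pred > 0.5).float(), (target > 0.5).float()
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

# Usage: dice(predicted_mask, ground_truth_mask) -> value in [0, 1]
```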

Machine learning approach using radiomics features to distinguish odontogenic cysts and tumours.

Muraoka H, Kaneda T, Ito K, Otsuka K, Tokunaga S

PubMed | Jul 4 2025
Although most odontogenic lesions in the jaw are benign, treatment varies widely depending on the nature of the lesion. This study assessed the ability of a machine learning (ML) model using computed tomography (CT) and magnetic resonance imaging (MRI) radiomic features to classify odontogenic cysts and tumours. CT and MRI data from patients with odontogenic lesions, including dentigerous cysts, odontogenic keratocysts, and ameloblastomas, were analysed. Manual segmentation of the CT images and the apparent diffusion coefficient (ADC) maps from diffusion-weighted MRI was performed to extract radiomic features. The extracted radiomic features were split into training (70%) and test (30%) sets. The random forest model was tuned using 5-fold stratified cross-validation within the training set and assessed on a separate hold-out test set. The CT-based ML model achieved accuracies of 0.59 (cross-validation, training set) and 0.60 (test set), with precision, recall, and F1 score all 0.57. The ADC-based ML model achieved accuracies of 0.90 (cross-validation, training set) and 0.94 (test set), with precision, recall, and F1 score all 0.87. ML models, particularly those using MRI radiomic features, can effectively classify odontogenic lesions.
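
The described 70/30 split with 5-fold stratified cross-validation maps directly onto scikit-learn; in the sketch below the parameter grid, placeholder data, and random seeds are assumptions.

```python
"""Stratified 5-fold tuning of a random forest with a 70/30 hold-out split."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     train_test_split)

# Placeholder radiomic feature matrix X and 3-class lesion labels y.
X, y = np.random.rand(100, 50), np.random.randint(0, 3, 100)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]},
                      cv=cv, scoring="accuracy")
search.fit(X_tr, y_tr)
print("CV accuracy:", search.best_score_, "| hold-out:", search.score(X_te, y_te))
```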

Progression risk of adolescent idiopathic scoliosis based on SHAP-explained machine learning models: a multicenter retrospective study.

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed | Jul 4 2025
To develop an interpretable machine learning model, explained using SHAP and based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. SHAP was then used to interpret the best-performing model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
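
SHAP's tree explainer applies directly to an XGBoost classifier like the one described; the sketch below uses placeholder data, with only the three top feature names echoing the abstract.

```python
"""Hedged sketch of SHAP explanation for an XGBoost progression model."""
import numpy as np
import shap
import xgboost as xgb

feature_names = ["Cobb1(2-1)", "Cobb1-2", "Cobb2(2-1)"]  # per the abstract
X = np.random.rand(200, 3)                  # placeholder feature matrix
y = np.random.randint(0, 2, 200)            # placeholder progression labels

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)      # per-sample feature attributions
shap.summary_plot(shap_values, X, feature_names=feature_names)
```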

Revolutionizing medical imaging: A cutting-edge AI framework with vision transformers and perceiver IO for multi-disease diagnosis.

Khaliq A, Ahmad F, Rehman HU, Alanazi SA, Haleem H, Junaid K, Andrikopoulou E

PubMed | Jul 4 2025
The integration of artificial intelligence in medical image classification has significantly advanced disease detection. However, traditional deep learning models face persistent challenges, including poor generalizability, high false-positive rates, and difficulties in distinguishing overlapping anatomical features, limiting their clinical utility. To address these limitations, this study proposes a hybrid framework combining Vision Transformers (ViT) and Perceiver IO, designed to enhance multi-disease classification accuracy. Vision Transformers leverage self-attention mechanisms to capture global dependencies in medical images, while Perceiver IO optimizes feature extraction for computational efficiency and precision. The framework is evaluated across three critical clinical domains: neurological disorders, including Stroke (tested on the Brain Stroke Prediction CT Scan Image Dataset) and Alzheimer's (analyzed via the Best Alzheimer MRI Dataset); skin diseases, covering Tinea (trained on the Skin Diseases Dataset) and Melanoma (augmented with dermoscopic images from the HAM10000/HAM10k dataset); and lung diseases, focusing on Lung Cancer (using the Lung Cancer Image Dataset) and Pneumonia (evaluated with the Pneumonia Dataset containing bacterial, viral, and normal X-ray cases). For neurological disorders, the model achieved 0.99 accuracy, 0.99 precision, 1.00 recall, 0.99 F1-score, demonstrating robust detection of structural brain abnormalities. In skin disease classification, it attained 0.95 accuracy, 0.93 precision, 0.97 recall, 0.95 F1-score, highlighting its ability to differentiate fine-grained textural patterns in lesions. For lung diseases, the framework achieved 0.98 accuracy, 0.97 precision, 1.00 recall, 0.98 F1-score, confirming its efficacy in identifying respiratory conditions. To bridge research and clinical practice, an AI-powered chatbot was developed for real-time analysis, enabling users to upload MRI, X-ray, or skin images for automated diagnosis with confidence scores and interpretable insights. This work represents the first application of ViT and Perceiver IO for these disease categories, outperforming conventional architectures in accuracy, computational efficiency, and clinical interpretability. The framework holds significant potential for early disease detection in healthcare settings, reducing diagnostic errors, and improving treatment outcomes for clinicians, radiologists, and patients. By addressing critical limitations of traditional models, such as overlapping feature confusion and false positives, this research advances the deployment of reliable AI tools in neurology, dermatology, and pulmonology.
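
The abstract does not specify how the ViT and Perceiver IO predictions are combined; the sketch below shows a generic late fusion of two backbones via timm, with a ResNet-50 standing in for Perceiver IO and the weighting purely illustrative.

```python
"""Generic late-fusion sketch of two image classifiers (assumptions:
model names, a stand-in second backbone, and equal weighting)."""
import timm
import torch

vit = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=2)
other = timm.create_model("resnet50", pretrained=False, num_classes=2)
# `other` is a stand-in; the paper pairs the ViT with Perceiver IO.

def fused_probs(x: torch.Tensor, w: float = 0.5) -> torch.Tensor:
    """Weighted average of the two models' softmax outputs."""
    return w * vit(x).softmax(-1) + (1 - w) * other(x).softmax(-1)

probs = fused_probs(torch.randn(1, 3, 224, 224))
```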

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed | Jul 3 2025
Recently, Western countries have built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish a Japanese mammographic AI-CADx system for the first time. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan. We then randomly divided the images into training, validation, and test datasets in a ratio of 8:1:1 on a case-level basis. The gold standard of annotation for the AI-CADx system was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm. A cut-off concentration gradient of the heatmap image was set at 15%. The AI-CADx system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint of the AI-CADx system was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years. The mammograms included 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis. The AUC in the test dataset was 0.841 (DeLong 95% CI, 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, type of cancer, and mammography vendor (AUC range, 0.639-0.906). The developed Japanese mammographic AI-CADx system diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate its breast cancer diagnostic performance when used by Japanese physicians as a second reader. UMIN trial number UMIN000039009; registered 26 December 2019, https://www.umin.ac.jp/ctr/.
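
Sliding-window scoring with a heatmap cut-off, as described, might look like the NumPy sketch below; the window size, stride, the `patch_model` callable (a stand-in for the paper's SE-ResNet modules), and the reading of the 15% threshold are all assumptions.

```python
"""Sketch of sliding-window malignancy scoring with a heatmap cut-off."""
import numpy as np

def heatmap(image: np.ndarray, patch_model, win: int = 256, stride: int = 64):
    """Average per-patch scores into a pixel-level heatmap, then threshold."""
    heat = np.zeros(image.shape[:2])
    counts = np.zeros(image.shape[:2])
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            score = patch_model(image[y:y + win, x:x + win])  # scalar in [0, 1]
            heat[y:y + win, x:x + win] += score
            counts[y:y + win, x:x + win] += 1
    heat /= np.maximum(counts, 1)
    # One reading of the 15% cut-off: keep pixels above 15% of the peak value.
    return heat, heat >= 0.15 * heat.max()

# Usage with a dummy scorer: heatmap(np.random.rand(1024, 832), lambda p: p.mean())
```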