
Multi-modality radiomics diagnosis of breast cancer based on MRI, ultrasound and mammography.

Wu J, Li Y, Gong W, Li Q, Han X, Zhang T

PubMed · Jul 4, 2025
To develop a multi-modality machine learning-based radiomics model utilizing Magnetic Resonance Imaging (MRI), Ultrasound (US), and Mammography (MMG) for the differentiation of benign and malignant breast nodules. This study retrospectively collected data from 204 patients across three hospitals, including MRI, US, and MMG imaging data along with confirmed pathological diagnoses. Lesions were outlined as regions of interest on 2D US, 2D MMG, and 3D MRI images, then automatically expanded outward by 3 mm, 5 mm, and 8 mm to extract radiomic features within and around each tumor. ANOVA, the minimum redundancy maximum relevance (mRMR) algorithm, and the least absolute shrinkage and selection operator (LASSO) were used to select features, which were fed into logistic regression models for breast cancer diagnosis. Model performance was evaluated using receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curves. Among the radiomics models tested, the MRI_US_MMG multi-modality logistic regression model with 5 mm peritumoral features performed best: in the test cohort it achieved an AUC of 0.905 (95% confidence interval [CI]: 0.805-1). These results suggest that including peritumoral features, specifically at a 5 mm expansion, significantly enhanced the diagnostic performance of the multi-modality radiomics model in differentiating benign from malignant breast nodules. The multi-modality radiomics model based on MRI, ultrasound, and mammography can thus predict whether breast lesions are benign or malignant.
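
The three-stage feature selection described here (ANOVA filter, mRMR, then LASSO) is a common radiomics pattern. Below is a minimal scikit-learn sketch of the ANOVA and LASSO stages on placeholder data; the mRMR stage is omitted (it typically requires a third-party package), and all array shapes and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: X = (n_patients, n_radiomic_features), y: 0 = benign, 1 = malignant
rng = np.random.default_rng(0)
X = rng.normal(size=(204, 500))
y = rng.integers(0, 2, size=204)

# Stage 1: ANOVA F-test filter -- keep the 100 features with the smallest p-values
F, p = f_classif(X, y)
X_filtered = X[:, np.argsort(p)[:100]]

# Stage 3: L1-penalized (LASSO-style) logistic regression drives the
# coefficients of uninformative features to exactly zero
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X_filtered, y)
print("features retained by L1 penalty:", np.count_nonzero(model[-1].coef_))
```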

Medical slice transformer for improved diagnosis and explainability on 3D medical images with DINOv2.

Müller-Franzes G, Khader F, Siepmann R, Han T, Kather JN, Nebelung S, Truhn D

PubMed · Jul 4, 2025
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are essential clinical cross-sectional imaging techniques for diagnosing complex conditions. However, large annotated 3D datasets for deep learning are scarce. While self-supervised methods like DINOv2 are encouraging for 2D image analysis, they have not been applied to 3D medical images. Furthermore, deep learning models often lack explainability due to their "black-box" nature. This study aims to extend 2D self-supervised models, specifically DINOv2, to 3D medical imaging while evaluating their potential for explainable outcomes. We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis. MST combines a Transformer architecture with a 2D feature extractor, i.e., DINOv2. We evaluated its diagnostic performance against a 3D convolutional neural network (3D ResNet) across three clinical datasets: breast MRI (651 patients), chest CT (722 patients), and knee MRI (1199 patients). Both methods were tested for diagnosing breast cancer, predicting lung nodule malignancy, and detecting meniscus tears. Diagnostic performance was assessed by calculating the Area Under the Receiver Operating Characteristic Curve (AUC). Explainability was evaluated through a radiologist's qualitative comparison of saliency maps based on slice and lesion correctness. P-values were calculated using DeLong's test. MST achieved higher AUC values than ResNet across all three datasets: breast (0.94 ± 0.01 vs. 0.91 ± 0.02, P = 0.02), chest (0.95 ± 0.01 vs. 0.92 ± 0.02, P = 0.13), and knee (0.85 ± 0.04 vs. 0.69 ± 0.05, P = 0.001). Saliency maps were consistently more precise and anatomically correct for MST than for ResNet. Self-supervised 2D models like DINOv2 can thus be effectively adapted for 3D medical imaging using MST, offering enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
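
The core idea — a frozen 2D foundation-model encoder applied slice-wise, with a Transformer aggregating the resulting feature sequence — can be sketched in PyTorch. This is an illustrative reconstruction under stated assumptions, not the authors' code: the DINOv2 ViT-S/14 backbone is loaded via torch.hub, and the head and Transformer dimensions are guesses.

```python
import torch
import torch.nn as nn

class MedicalSliceTransformer(nn.Module):
    """Sketch: frozen 2D encoder per slice + Transformer over the slice sequence."""
    def __init__(self, embed_dim=384, depth=4, n_heads=6, n_classes=2):
        super().__init__()
        # DINOv2 ViT-S/14 as a frozen 2D feature extractor (384-dim CLS embedding)
        self.encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.encoder.parameters():
            p.requires_grad = False
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, volume):                      # volume: (B, S, 3, 224, 224)
        B, S = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1))  # (B*S, 384) slice embeddings
        feats = feats.view(B, S, -1)
        tokens = torch.cat([self.cls_token.expand(B, -1, -1), feats], dim=1)
        out = self.transformer(tokens)
        return self.head(out[:, 0])                 # classify from the CLS token

model = MedicalSliceTransformer()
logits = model(torch.randn(2, 32, 3, 224, 224))    # 2 volumes, 32 slices each
```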

Characteristics of brain network connectome and connectome-based efficacy predictive model in bipolar depression.

Xi C, Lu B, Guo X, Qin Z, Yan C, Hu S

PubMed · Jul 4, 2025
Aberrant functional connectivity (FC) between brain networks has been closely associated with bipolar disorder (BD). However, previous findings on specific brain network connectivity patterns have been inconsistent, and the clinical utility of FCs for predicting treatment outcomes in bipolar depression has been underexplored. To identify robust neuro-biomarkers of bipolar depression, a connectome-based analysis was conducted on resting-state functional MRI (rs-fMRI) data from 580 bipolar depression patients and 116 healthy controls (HCs). A subsample of 148 patients underwent a 4-week quetiapine treatment with post-treatment clinical assessment. Using machine learning, a predictive model based on the pre-treatment brain connectome was constructed to predict treatment response and identify the efficacy-specific networks. Distinct brain network connectivity patterns were observed in bipolar depression compared to HCs. Elevated intra-network connectivity was identified within the default mode network (DMN), sensorimotor network (SMN), and subcortical network (SC). For inter-network connectivity, FCs were increased between the DMN/SMN and the frontoparietal network (FPN) and ventral attention network (VAN), and decreased between the SC and cortical networks, especially the DMN and FPN. Global network topology analyses revealed decreased global efficiency and increased characteristic path length in BD compared to HCs. Further, the support vector regression model successfully predicted the efficacy of quetiapine treatment, as indicated by a high correspondence between predicted and actual HAMD reduction ratios (r(df = 147) = 0.4493, p = 2×10⁻⁴). The identified efficacy-specific networks primarily encompassed FCs between the SMN and SC, and between the FPN, DMN, and VAN. These networks further predicted treatment response with r = 0.3940 in a subsequent validation with an independent cohort (n = 43). These findings characterize the aberrant patterns of the brain network connectome in bipolar depression and demonstrate the predictive potential of the pre-treatment network connectome for quetiapine response. Promisingly, the identified connectivity networks may serve as functional targets for future precise treatments of bipolar depression.
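
The efficacy model regresses the HAMD reduction ratio on pre-treatment FC features with support vector regression, then correlates predicted and observed values. A minimal scikit-learn sketch under those assumptions — the feature matrix, outcomes, kernel, and fold scheme are placeholders, not the study's pipeline:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(148, 300))   # placeholder: vectorized FC edges per patient
y = rng.uniform(0, 1, size=148)   # placeholder: HAMD reduction ratio

# Out-of-fold predictions, so each patient is scored by a model
# that never saw their data
model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
y_pred = cross_val_predict(model, X, y,
                           cv=KFold(5, shuffle=True, random_state=0))

r, p = pearsonr(y, y_pred)        # correspondence of predicted vs. actual
print(f"r = {r:.4f}, p = {p:.2g}")
```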

Intelligent brain tumor detection using hybrid finetuned deep transfer features and ensemble machine learning algorithms.

Salakapuri R, Terlapu PV, Kalidindi KR, Balaka RN, Jayaram D, Ravikumar T

PubMed · Jul 4, 2025
Brain tumors (BTs) are severe neurological disorders, affecting more than 308,000 people each year worldwide with over 251,000 deaths annually (IARC, 2020). Detecting BTs is complex because they vary widely in appearance, and early diagnosis is essential for better survival rates. This study presents a new system for detecting BTs that combines deep learning (DL) and machine learning (ML) techniques. The system uses pre-trained models (Inception-V3, ResNet-50, and VGG-16) for feature extraction, PCA for dimensionality reduction, and ensemble methods such as stacking, k-NN, gradient boosting, AdaBoost, multi-layer perceptrons (MLP), and support vector machines for classification, predicting BTs from MRI scans. The MRI scans were resized to 224 × 224 pixels, and pixel intensities were normalized to a [0, 1] scale. A Gaussian filter was applied for stability, and the Keras ImageDataGenerator was used for image augmentation with transformations such as zooming and ±10% brightness adjustments. The dataset comprises 5,712 MRI scans classified into four groups: meningioma, no-tumor, glioma, and pituitary. Tenfold cross-validation was used to assess model reliability. Deep transfer learning (TL) and ensemble ML models worked well together and showed excellent results in detecting BTs. The stacking ensemble achieved the highest accuracy across all feature extraction methods, with ResNet-50 features reduced by PCA (500 components) producing an accuracy of 0.957 (95% CI: 0.948-0.966) and an AUC of 0.996 (95% CI: 0.989-0.998), significantly outperforming baselines (p < 0.01). Neural networks and gradient-boosting models also performed strongly. The stacking model is robust and reliable, making this method useful for medical applications. Future studies will focus on multi-modal imaging to further improve diagnostic accuracy and early detection of brain tumors.
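
The described pipeline — frozen pretrained-CNN features, PCA reduction, then a stacking ensemble — can be sketched with Keras and scikit-learn. All data are placeholders, and the tiny sample size here forces fewer PCA components than the paper's 500; treat every hyperparameter as an assumption.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Frozen ResNet-50 as feature extractor: 224x224 RGB -> 2048-dim vector
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(64, 224, 224, 3).astype("float32")  # placeholder MRIs
labels = np.random.randint(0, 4, size=64)  # meningioma/no-tumor/glioma/pituitary
features = backbone.predict(preprocess_input(images * 255.0), verbose=0)

# PCA reduces the 2048-dim deep features (the paper keeps 500 components;
# 50 here only because this demo has 64 samples)
feats_pca = PCA(n_components=50).fit_transform(features)

# Stacking ensemble: heterogeneous base learners, logistic meta-learner
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("gb", GradientBoostingClassifier()),
        ("svm", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(feats_pca, labels)
```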

ViT-GCN: A Novel Hybrid Model for Accurate Pneumonia Diagnosis from X-ray Images.

Xu N, Wu J, Cai F, Li X, Xie HB

PubMed · Jul 4, 2025
This study aims to enhance the accuracy of pneumonia diagnosis from X-ray images by developing a model that integrates Vision Transformer (ViT) and Graph Convolutional Networks (GCN) for improved feature extraction and diagnostic performance. The ViT-GCN model was designed to leverage the strengths of both ViT, which captures global image information by dividing the image into fixed-size patches and processing them in sequence, and GCN, which captures node features and relationships through message passing and aggregation in graph data. A composite loss function combining multivariate cross-entropy, focal loss, and GHM loss was introduced to address dataset imbalance and improve training efficiency on small datasets. The ViT-GCN model demonstrated superior performance, achieving an accuracy of 91.43% on the COVID-19 chest X-ray database, surpassing existing models in diagnostic accuracy for pneumonia. The study highlights the effectiveness of combining ViT and GCN architectures in medical image diagnosis, particularly in addressing challenges related to small datasets. This approach can lead to more accurate and efficient pneumonia diagnoses, especially in resource-constrained settings where small datasets are common.
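
A composite loss summing cross-entropy and focal terms can be sketched in PyTorch as below. The weighting factors are assumptions, and the GHM term is omitted for brevity (it requires gradient-density binning); this is a simplified illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def composite_loss(logits, targets, alpha=1.0, beta=1.0, gamma=2.0):
    """Weighted sum of cross-entropy and focal loss (GHM term omitted).

    Focal loss down-weights well-classified examples by (1 - p_t)^gamma,
    which helps when class distributions are imbalanced.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                    # probability of the true class
    focal = ((1.0 - p_t) ** gamma) * ce
    return alpha * ce.mean() + beta * focal.mean()

logits = torch.randn(8, 3, requires_grad=True)  # 8 images, 3 classes
targets = torch.randint(0, 3, (8,))
loss = composite_loss(logits, targets)
loss.backward()
```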

Dual-Branch Attention Fusion Network for Pneumonia Detection.

Li T, Li B, Zheng C

PubMed · Jul 4, 2025
Pneumonia, a serious respiratory disease caused by bacterial, viral, or fungal infections, is a major cause of morbidity and mortality in high-risk populations (e.g., the elderly, infants and young children, and immunodeficient patients) worldwide. Early diagnosis is decisive for improving patient prognosis. In this study, we propose a Dual-Branch Attention Fusion Network based on transfer learning, aiming to improve the accuracy of pneumonia classification in chest X-ray images. The model adopts a dual-branch feature extraction architecture: independent feature extraction paths are built on a pre-trained convolutional neural network (CNN) and a structured state-space model, respectively, and feature complementarity is achieved through a feature fusion strategy. In the fusion stage, a self-attention mechanism is introduced to dynamically weight the feature representations of the two paths, which effectively improves the characterization of key lesion regions. Experiments were carried out on the publicly available ChestX-ray dataset; through data augmentation, transfer learning optimization, and hyperparameter tuning, the model achieves an accuracy of 97.78% on an independent test set. These results demonstrate the model's strong performance in pneumonia diagnosis, providing a fast and accurate tool for clinical practice and a high-performance computational framework for intelligent early screening of pneumonia. Its multi-path, attention-fusion architecture can also serve as a methodological reference for other medical image analysis tasks.
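
The fusion stage — one token per branch, re-weighted by self-attention before classification — might look like the following PyTorch sketch. The backbone choice, the MLP stand-in for the state-space branch, and all dimensions are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualBranchFusion(nn.Module):
    """Sketch: CNN branch + second branch, fused by self-attention."""
    def __init__(self, dim=256, n_classes=2):
        super().__init__()
        cnn = resnet18(weights="DEFAULT")
        self.branch_a = nn.Sequential(*list(cnn.children())[:-1])  # (B,512,1,1)
        self.proj_a = nn.Linear(512, dim)
        # Stand-in for the second (state-space) branch: a small MLP on pixels
        self.branch_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                        # x: (B, 3, 224, 224)
        a = self.proj_a(self.branch_a(x).flatten(1))
        b = self.branch_b(x)
        tokens = torch.stack([a, b], dim=1)      # (B, 2, dim): one token per branch
        fused, _ = self.attn(tokens, tokens, tokens)  # dynamic branch weighting
        return self.head(fused.mean(dim=1))

model = DualBranchFusion()
out = model(torch.randn(2, 3, 224, 224))         # (2, n_classes)
```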

Machine learning approach using radiomics features to distinguish odontogenic cysts and tumours.

Muraoka H, Kaneda T, Ito K, Otsuka K, Tokunaga S

PubMed · Jul 4, 2025
Although most odontogenic lesions in the jaw are benign, treatment varies widely depending on the nature of the lesion. This study was performed to assess the ability of a machine learning (ML) model using computed tomography (CT) and magnetic resonance imaging (MRI) radiomic features to classify odontogenic cysts and tumours. CT and MRI data from patients with odontogenic lesions, including dentigerous cysts, odontogenic keratocysts, and ameloblastomas, were analysed. Manual segmentation of the CT images and of the apparent diffusion coefficient (ADC) maps from diffusion-weighted MRI was performed to extract radiomic features. The extracted radiomic features were split into training (70%) and test (30%) sets. The random forest model was optimized using 5-fold stratified cross-validation within the training set and assessed on a separate hold-out test set. The CT-based ML model achieved an accuracy of 0.59 in cross-validation on the training set and 0.60 on the test set, with precision, recall, and F1 score all 0.57. The ADC-based ML model achieved an accuracy of 0.90 in cross-validation on the training set and 0.94 on the test set, with precision, recall, and F1 score all 0.87. ML models, particularly those using MRI radiomic features, can effectively classify odontogenic lesions.
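
Tuning a random forest with 5-fold stratified cross-validation inside the training split and then scoring an untouched hold-out set follows a standard scikit-learn pattern; the sketch below uses placeholder radiomic features and an assumed parameter grid.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     train_test_split)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 100))   # placeholder ADC-map radiomic features
y = rng.integers(0, 3, size=120)  # dentigerous cyst / keratocyst / ameloblastoma

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)  # 70/30 split

# Tune the forest with 5-fold stratified CV inside the training set only
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
)
search.fit(X_tr, y_tr)

# Final assessment on the untouched hold-out test set
print(classification_report(y_te, search.predict(X_te)))
```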

Disease Classification of Pulmonary Xenon Ventilation MRI Using Artificial Intelligence.

Matheson AM, Bdaiwi AS, Willmering MM, Hysinger EB, McCormack FX, Walkup LL, Cleveland ZI, Woods JC

PubMed · Jul 4, 2025
Hyperpolarized ¹²⁹Xe magnetic resonance imaging (MRI) measures the extent of lung ventilation by ventilation defect percent (VDP), but VDP alone cannot distinguish between diseases. Prior studies have reported anecdotal evidence of disease-specific defect patterns, such as wedge-shaped defects in asthma and polka-dot defects in lymphangioleiomyomatosis (LAM). Neural networks can evaluate image shapes and textures to classify images, but this has not been attempted in xenon MRI. We hypothesized that an artificial intelligence network trained on ventilation MRI could classify diseases based on spatial patterns in lung MR images alone. Xenon MRI data from six pulmonary conditions (control, asthma, bronchiolitis obliterans syndrome, bronchopulmonary dysplasia, cystic fibrosis [CF], and LAM) were used to train convolutional neural networks. Network performance was assessed with top-1 and top-2 accuracy, recall, precision, and one-versus-all area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) was used to visualize which parts of the images were important for classification. Training/testing data were collected from 262 participants. The top-performing network (VGG-16) achieved top-1 accuracy = 56%, top-2 accuracy = 78%, recall = 0.30, precision = 0.70, and AUC = 0.85. The network performed better on larger classes (top-1 accuracy: control = 62% [n = 57], CF = 67% [n = 85], LAM = 69% [n = 61]) and outperformed human observers (human top-1 accuracy = 40%, network top-1 accuracy = 61% on a single training fold). We developed an artificial intelligence tool that can classify disease from xenon ventilation images alone and that outperformed human observers. This suggests that xenon images contain additional, disease-specific information that could be useful for clinically challenging cases or for disease phenotyping.
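
Grad-CAM weights the last convolutional feature maps by the gradient of the class score, highlighting the image regions (e.g., defect patterns) that drive a prediction. A compact, generic PyTorch sketch on a VGG-16 backbone with hooks on its final conv layer — the input is a placeholder, and this is the standard technique rather than the study's implementation:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(weights="DEFAULT").eval()
acts, grads = {}, {}

layer = model.features[28]  # last conv layer of VGG-16
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)      # placeholder ventilation image
score = model(x)[0].max()            # score of the predicted class
model.zero_grad()
score.backward()

# Channel weights = spatially averaged gradients; CAM = ReLU(weighted sum)
w = grads["v"].mean(dim=(2, 3), keepdim=True)   # (1, 512, 1, 1)
cam = F.relu((w * acts["v"]).sum(dim=1))         # (1, 14, 14)
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
```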

Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed · Jul 4, 2025
Cytologic diagnosis of the benign or malignant nature of thyroid nodules, based on smears obtained through ultrasound-guided fine-needle aspiration, is crucial for determining subsequent treatment plans. Artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnosis. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with a model relying solely on cytologic images (CI) and evaluate its performance and clinical potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images were used as the dataset for training the models and were divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while the AUROC of the CI-DUF model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly higher accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model relying only on cytologic images, particularly excelling in thyroid cytology classification.
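
One common way to realize such a multi-modal design is late fusion: a CNN branch for the cytology image and a small branch for a vector of ultrasound-derived features, concatenated before the classifier. The PyTorch sketch below is a generic illustration of that pattern, not the authors' CI-DUF architecture; the feature dimensions and class count are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class LateFusionClassifier(nn.Module):
    """Sketch: image CNN features concatenated with tabular ultrasound features."""
    def __init__(self, n_us_features=16, n_classes=4):
        super().__init__()
        cnn = resnet34(weights="DEFAULT")
        cnn.fc = nn.Identity()                 # expose the 512-dim features
        self.image_branch = cnn
        self.us_branch = nn.Sequential(nn.Linear(n_us_features, 64), nn.ReLU())
        self.classifier = nn.Linear(512 + 64, n_classes)

    def forward(self, image, us_feats):
        z = torch.cat([self.image_branch(image), self.us_branch(us_feats)], dim=1)
        return self.classifier(z)

model = LateFusionClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
```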

Improving risk assessment of local failure in brain metastases patients using vision transformers - A multicentric development and validation study.

Erdur AC, Scholz D, Nguyen QM, Buchner JA, Mayinger M, Christ SM, Brunner TB, Wittig A, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus JU, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Bilger-Z A, Grosu AL, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Rueckert D, Peeken JC

PubMed · Jul 4, 2025
This study investigates the use of Vision Transformers (ViTs) to predict Freedom from Local Failure (FFLF) in patients with brain metastases using pre-operative MRI scans. The goal is to develop a model that enhances risk stratification and informs personalized treatment strategies. Within the AURORA retrospective trial, patients (n = 352) who received surgical resection followed by post-operative stereotactic radiotherapy (SRT) were collected from seven hospitals. We trained our ViT for the direct image-to-risk task on T1-CE and FLAIR sequences, incorporating clinical features into the model. We employed segmentation-guided image modifications, model adaptations, and specialized patient sampling strategies during training. The model was evaluated with five-fold cross-validation and ensemble learning across all validation runs. An external, international test cohort (n = 99) within the dataset was used to assess the generalization capabilities of the model, and saliency maps were generated for explainability analysis. The model achieved a C-index of 0.7982 on the test cohort, surpassing all clinical, CNN-based, and hybrid baselines. Kaplan-Meier analysis showed significant FFLF risk stratification. Saliency maps focusing on the brain-metastasis core confirmed that model explanations aligned with expert observations. Our ViT-based model offers potential for personalized treatment strategies and follow-up regimens in patients with brain metastases. It provides an alternative to radiomics as a robust, automated tool for clinical workflows, capable of improving patient outcomes through effective risk assessment and stratification.
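
The C-index used for evaluation measures how often the model correctly ranks pairs of patients by risk while accounting for censoring. It can be computed with the lifelines library; the data below are placeholders matching the test-cohort size, not study results.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
months_to_event = rng.exponential(24, size=99)  # follow-up time (placeholder)
event_observed = rng.integers(0, 2, size=99)    # 1 = local failure occurred
risk_score = rng.normal(size=99)                # model output (placeholder)

# Higher predicted risk should pair with shorter time-to-failure, so the
# predicted "survival" ordering passed to the C-index is the negated risk
cindex = concordance_index(months_to_event, -risk_score, event_observed)
print(f"C-index = {cindex:.4f}")
```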