
Explainable machine learning for post-PRK surgery follow-up

Soubeiran, C., Vilbert, M., Memmi, B., Georgeon, C., Borderie, V., Chessel, A., Plamann, K.

medRxiv preprint · Jul 5, 2025
Photorefractive keratectomy (PRK) is a widely used laser-assisted refractive surgical technique. In some cases, it leads to temporary subepithelial inflammation or fibrosis linked to visual haze. To our knowledge, there are no physics-based, quantitative tools to monitor these symptoms. Here we present a comprehensive machine learning-based algorithm for the detection of fibrosis, based on spectral-domain optical coherence tomography images recorded in vivo on standard clinical devices. Because of the rarity of these phenomena, we trained the model on corneas presenting Fuchs dystrophy, which causes similar but permanent fibrosis symptoms, and applied it to images from patients who had undergone PRK surgery. Our study shows that the model output (the probability of Fuchs dystrophy classification) provides a quantified and explainable indicator of corneal healing for post-operative follow-up.
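
A minimal sketch of the proxy-training idea described above: fit a classifier on Fuchs-dystrophy versus healthy corneas, then read its predicted class-1 probability on post-PRK scans as a fibrosis indicator. The OCT feature extraction is assumed to happen elsewhere; `fuchs_X`, `healthy_X`, and `post_prk_X` are hypothetical arrays, and the random forest stands in for the paper's unspecified model.

```python
# Hypothetical OCT feature arrays; synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
fuchs_X = rng.normal(1.0, 0.5, size=(40, 16))    # OCT features, Fuchs corneas
healthy_X = rng.normal(0.0, 0.5, size=(40, 16))  # OCT features, healthy corneas

X = np.vstack([fuchs_X, healthy_X])
y = np.array([1] * len(fuchs_X) + [0] * len(healthy_X))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Apply to post-PRK images: the class-1 probability acts as a quantified,
# longitudinal indicator of fibrosis/haze during follow-up.
post_prk_X = rng.normal(0.4, 0.5, size=(5, 16))
fibrosis_score = clf.predict_proba(post_prk_X)[:, 1]
print(fibrosis_score)
```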

Quantifying features from X-ray images to assess early stage knee osteoarthritis.

Helaly T, Faisal TR, Moni ASB, Naznin M

PubMed · Jul 5, 2025
Knee osteoarthritis (KOA) is a progressive degenerative joint disease and a leading cause of disability worldwide. Manual diagnosis of KOA from X-ray images is subjective and prone to inter- and intra-observer variability, making early detection challenging. While deep learning (DL)-based models offer automation, they often require large labeled datasets, lack interpretability, and do not provide quantitative feature measurements. Our study presents an automated KOA severity assessment system that integrates a pretrained DL model with image processing techniques to extract and quantify key KOA imaging biomarkers. The pipeline includes contrast-limited adaptive histogram equalization (CLAHE) for contrast enhancement, DexiNed-based edge extraction, and thresholding for noise reduction. We design customized algorithms that automatically detect and quantify joint space narrowing (JSN) and osteophytes from the extracted edges. The proposed model quantitatively assesses JSN and counts intercondylar osteophytes, contributing to severity classification. The system achieves accuracies of 88% for JSN detection, 80% for osteophyte identification, and 73% for KOA classification. Its key strength lies in eliminating the need for an expensive training process and, consequently, the dependency on labeled data except for validation. Additionally, it provides quantitative data that can support classification in other OA grading frameworks.
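
A hedged sketch of the preprocessing steps named above (CLAHE, edge extraction, thresholding), using OpenCV. DexiNed is a learned edge detector; Canny stands in for it here purely for illustration, and the column-wise gap measurement is our toy proxy for JSN, not the paper's algorithm. `knee_xray.png` is a hypothetical input file.

```python
import cv2
import numpy as np

img = cv2.imread("knee_xray.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "knee_xray.png not found (hypothetical input)"

# 1) CLAHE for local contrast enhancement.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# 2) Edge extraction (the paper uses DexiNed; Canny is a stand-in).
edges = cv2.Canny(enhanced, 50, 150)

# 3) Thresholding to suppress weak, noisy responses.
_, edges = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)

# 4) Toy JSN proxy: per image column, the vertical gap between the
# outermost edge responses (our illustration, not the paper's method).
gaps = []
for col in edges.T:
    rows = np.flatnonzero(col)
    if len(rows) >= 2:
        gaps.append(rows.max() - rows.min())
if gaps:
    print("median inter-edge gap (px):", np.median(gaps))
```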

Improving prediction of fragility fractures in postmenopausal women using random forest.

Mateo J, Usategui-Martín R, Torres AM, Campillo-Sánchez F, de Temiño ÁR, Gil J, Martín-Millán M, Hernandez JL, Pérez-Castrillón JL

PubMed · Jul 5, 2025
Osteoporosis is a chronic disease characterized by a progressive decline in bone density and quality, leading to increased bone fragility and a higher susceptibility to fractures, even in response to minimal trauma. Osteoporotic fractures represent a major source of morbidity and mortality among postmenopausal women. This condition poses both clinical and societal challenges, as its consequences include a significant reduction in quality of life, prolonged dependency, and a substantial increase in healthcare costs. Therefore, the development of reliable tools for predicting fracture risk is essential for the effective management of affected patients. In this study, we developed a predictive model based on the Random Forest (RF) algorithm for risk stratification of fragility fractures, integrating clinical, demographic, and imaging variables derived from dual-energy X-ray absorptiometry (DXA) and 3D modeling. Two independent cohorts were analyzed: the HURH cohort and the Camargo cohort, enabling both internal and external validation of the model. The results showed that the RF model consistently outperformed other classification algorithms, including k-nearest neighbors (KNN), support vector machines (SVM), decision trees (DT), and Gaussian naive Bayes (GNB), demonstrating high accuracy, sensitivity, specificity, area under the ROC curve (AUC), and Matthews correlation coefficient (MCC). Additionally, variable importance analysis highlighted that previous fracture history, parathyroid hormone (PTH) levels, and lumbar spine T-score, along with other densitometric parameters, were key predictors of fracture risk. These findings suggest that the integration of advanced machine learning techniques with clinical and imaging data can optimize early identification of high-risk patients, enabling personalized preventive strategies and improving the clinical management of osteoporosis.
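
A sketch of the model comparison reported above: a random forest against k-NN, SVM, decision tree, and Gaussian naive Bayes on tabular clinical/DXA features, scored by cross-validated AUC. Synthetic data with class imbalance stands in for the HURH and Camargo cohorts.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Imbalanced synthetic data mimicking fracture vs. no-fracture rates.
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "GNB": GaussianNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```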

PGMI assessment in mammography: AI software versus human readers.

Santner T, Ruppert C, Gianolini S, Stalheim JG, Frei S, Hondl M, Fröhlich V, Hofvind S, Widmann G

PubMed · Jul 5, 2025
The aim of this study was to evaluate human inter-reader agreement on the parameters included in the PGMI (perfect-good-moderate-inadequate) classification of screening mammograms and to explore the role of artificial intelligence (AI) as an alternative reader. Five radiographers from three European countries independently performed a PGMI assessment of 520 anonymized mammography screening examinations randomly selected from representative subsets from 13 imaging centres within two European countries. A dedicated AI software was used as a sixth reader. Accuracy, Cohen's kappa, and confusion matrices were calculated to compare the predictions of the software against the individual assessments of the readers, as well as potential discrepancies between them. A questionnaire and a personality test were used to better understand the decision-making processes of the human readers. Significant inter-reader variability among human readers, with poor to moderate agreement (κ = -0.018 to κ = 0.41), was observed; some showed more homogeneous interpretations of single features and overall quality than others. In comparison, the software surpassed human inter-reader agreement in detecting glandular tissue cuts, mammilla deviation, pectoral muscle detection, and pectoral angle measurement, while the remaining features and overall image quality exhibited performance comparable to human assessment. Notably, human inter-reader disagreement in PGMI assessment of mammography is considerably high. AI software may already reliably categorize quality. Its potential for standardization and immediate feedback to achieve and monitor high levels of quality in screening programs needs further attention and should be included in future approaches. AI has promising potential for automated assessment of diagnostic image quality. Faster, more representative, and more objective feedback may support radiographers in their quality management processes. Direct transformation of common PGMI workflows into an AI algorithm could be challenging.
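
A minimal sketch of the agreement analysis: pairwise Cohen's kappa between readers, and between each reader and the software. The PGMI labels below are invented for illustration.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {  # hypothetical PGMI labels for 8 mammograms
    "reader_1": ["P", "G", "G", "M", "I", "G", "P", "M"],
    "reader_2": ["P", "G", "M", "M", "I", "P", "P", "G"],
    "ai":       ["P", "G", "G", "M", "M", "G", "P", "M"],
}
for a, b in combinations(ratings, 2):
    kappa = cohen_kappa_score(ratings[a], ratings[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```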

Unveiling genetic architecture of white matter microstructure through unsupervised deep representation learning of fractional anisotropy images.

Zhao X, Xie Z, He W, Fornage M, Zhi D

PubMed · Jul 5, 2025
Fractional anisotropy (FA) derived from diffusion MRI is a widely used marker of white matter (WM) integrity. However, conventional FA-based genetic studies focus on phenotypes representing tract- or atlas-defined averages, which may oversimplify spatial patterns of WM integrity and thus limit genetic discovery. Here, we propose a deep learning-based framework, termed unsupervised deep representation of white matter (UDR-WM), to extract brain-wide FA features, referred to as UDIP-FAs, that capture distributed microstructural variation without prior anatomical assumptions. UDIP-FAs exhibit enhanced sensitivity to aging and substantially higher SNP-based heritability compared to traditional FA phenotypes (P < 2.20e-16, Mann-Whitney U test; mean h² = 50.81%). Through multivariate GWAS, we identified 939 significant lead SNPs in 586 loci, mapped to 3480 genes, dubbed UDIP-FA-related genes (UFAGs). UFAGs are overexpressed in glial cells, particularly astrocytes and oligodendrocytes (Bonferroni-corrected P < 2e-6, Wald test), and show strong overlap with risk gene sets for schizophrenia and Parkinson disease (Bonferroni-corrected P < 7.06e-3, Fisher exact test). UDIP-FAs are genetically correlated with multiple brain disorders and cognitive traits, including fluid intelligence and reaction time, and are associated with polygenic risk for bone mineral density. Network analyses reveal that UFAGs form disease-enriched modules across protein-protein interaction and co-expression networks, implicating core pathways in myelination and axonal structure. Notably, several UFAGs, including ACHE and ALDH2, are targets of existing neuropsychiatric drugs. Together, our findings establish UDIP-FA as a biologically and clinically informative brain phenotype, enabling high-resolution dissection of white matter genetic architecture and its genetic links to complex brain traits.
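
A heavily hedged sketch of the unsupervised-representation idea: a 3D convolutional autoencoder compresses FA volumes into a low-dimensional embedding that plays the role of the UDIP-FA phenotype used downstream in GWAS. The architecture, sizes, and names here are our assumptions, not the UDR-WM specification.

```python
import torch
import torch.nn as nn

class FAEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: two strided 3D convolutions, then a linear bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8 * 8, latent_dim),
        )
        # Decoder mirrors the encoder to reconstruct the FA volume.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8 * 8),
            nn.Unflatten(1, (16, 8, 8, 8)),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = FAEncoder()
fa_volume = torch.randn(2, 1, 32, 32, 32)  # toy FA maps
recon, phenotype = model(fa_volume)        # `phenotype` plays the UDIP-FA role
loss = nn.functional.mse_loss(recon, fa_volume)
print(phenotype.shape, loss.item())
```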

ViT-GCN: A Novel Hybrid Model for Accurate Pneumonia Diagnosis from X-ray Images.

Xu N, Wu J, Cai F, Li X, Xie HB

PubMed · Jul 4, 2025
This study aims to enhance the accuracy of pneumonia diagnosis from X-ray images by developing a model that integrates Vision Transformer (ViT) and Graph Convolutional Networks (GCN) for improved feature extraction and diagnostic performance. The ViT-GCN model was designed to leverage the strengths of both ViT, which captures global image information by dividing the image into fixed-size patches and processing them in sequence, and GCN, which captures node features and relationships through message passing and aggregation in graph data. A composite loss function combining multivariate cross-entropy, focal loss, and GHM loss was introduced to address dataset imbalance and improve training efficiency on small datasets. The ViT-GCN model demonstrated superior performance, achieving an accuracy of 91.43% on the COVID-19 chest X-ray database, surpassing existing models in diagnostic accuracy for pneumonia. The study highlights the effectiveness of combining ViT and GCN architectures in medical image diagnosis, particularly in addressing challenges related to small datasets. This approach can lead to more accurate and efficient pneumonia diagnoses, especially in resource-constrained settings where small datasets are common.
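
A sketch of the composite-loss idea, combining standard cross-entropy with a focal term. The GHM loss is omitted for brevity, and the weights are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Down-weight easy examples: scale the log-likelihood by (1 - p_true)^gamma.
    log_p = F.log_softmax(logits, dim=1)
    log_p_true = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (-(1 - log_p_true.exp()) ** gamma * log_p_true).mean()

def composite_loss(logits, targets, w_ce=1.0, w_focal=0.5):
    # Weighted sum of cross-entropy and focal terms (weights are assumptions).
    return w_ce * F.cross_entropy(logits, targets) + w_focal * focal_loss(logits, targets)

logits = torch.randn(8, 2)            # pneumonia vs. normal, batch of 8
targets = torch.randint(0, 2, (8,))
print(composite_loss(logits, targets))
```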

Machine learning approach using radiomics features to distinguish odontogenic cysts and tumours.

Muraoka H, Kaneda T, Ito K, Otsuka K, Tokunaga S

PubMed · Jul 4, 2025
Although most odontogenic lesions in the jaw are benign, treatment varies widely depending on the nature of the lesion. This study was performed to assess the ability of a machine learning (ML) model using computed tomography (CT) and magnetic resonance imaging (MRI) radiomic features to classify odontogenic cysts and tumours. CT and MRI data from patients with odontogenic lesions, including dentigerous cysts, odontogenic keratocysts, and ameloblastomas, were analysed. Manual segmentation of the CT image and of the apparent diffusion coefficient (ADC) map from diffusion-weighted MRI was performed to extract radiomic features. The extracted radiomic features were split into training (70%) and test (30%) sets. The random forest model was tuned using 5-fold stratified cross-validation within the training set and assessed on a separate hold-out test set. The CT-based ML model showed an accuracy of 0.59 in cross-validation and 0.60 on the test set, with precision, recall, and F1 score all 0.57. The ADC-based ML model showed an accuracy of 0.90 in cross-validation and 0.94 on the test set; precision, recall, and F1 score were all 0.87. ML models, particularly those using MRI radiomic features, can effectively classify odontogenic lesions.
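
A sketch of the evaluation protocol described above: a stratified 70/30 split, 5-fold stratified cross-validation of a random forest within the training set, and a final hold-out assessment. Radiomic feature extraction is assumed done elsewhere; synthetic features stand in for the three lesion classes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.metrics import classification_report

# Synthetic 3-class "radiomic" features (dentigerous cyst / keratocyst /
# ameloblastoma stand-ins).
X, y = make_classification(n_samples=150, n_features=50, n_classes=3,
                           n_informative=10, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_tr, y_tr, cv=cv).mean())

rf.fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te)))  # precision/recall/F1
```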

Dual-Branch Attention Fusion Network for Pneumonia Detection.

Li T, Li B, Zheng C

PubMed · Jul 4, 2025
Pneumonia, a serious respiratory disease caused by bacterial, viral, or fungal infections, is an important cause of morbidity and mortality in high-risk populations (e.g., the elderly, infants and young children, and immunodeficient patients) worldwide. Early diagnosis is decisive for improving patient prognosis. In this study, we propose a Dual-Branch Attention Fusion Network based on transfer learning, aiming to improve the accuracy of pneumonia classification in lung X-ray images. The model adopts a dual-branch feature extraction architecture: independent feature extraction paths are constructed from a pretrained convolutional neural network (CNN) and a spatial state-space model, respectively, and feature complementarity is achieved through a feature fusion strategy. In the fusion stage, a self-attention mechanism is introduced to dynamically weight the feature representations of the different paths, which effectively improves the characterisation of key lesion regions. Experiments were carried out on the publicly available ChestX-ray dataset. Through data augmentation, transfer learning optimisation, and hyperparameter tuning, the model achieves an accuracy of 97.78% on an independent test set. These results demonstrate the model's strong performance in pneumonia diagnosis, providing a powerful tool for rapid and accurate clinical diagnosis and a high-performance computational framework for intelligent early pneumonia screening. Its multipath, attention-fusion architecture can also serve as a methodological reference for other medical image analysis tasks.
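
A minimal sketch of the dual-branch fusion idea: two feature extractors feed a self-attention layer that re-weights the branch features before classification. The branch backbones and dimensions below are our stand-ins (the paper pairs a pretrained CNN with a spatial state-space model).

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, feat_dim=256, n_classes=2):
        super().__init__()
        self.branch_a = nn.Sequential(  # stand-in for the pretrained CNN
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat_dim))
        self.branch_b = nn.Sequential(  # stand-in for the state-space branch
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        # Treat the two branch outputs as a 2-token sequence and let
        # self-attention weight them dynamically before classification.
        tokens = torch.stack([self.branch_a(x), self.branch_b(x)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1))

model = DualBranchFusion()
logits = model(torch.randn(4, 1, 224, 224))  # toy chest X-ray batch
print(logits.shape)  # (4, 2)
```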

Disease Classification of Pulmonary Xenon Ventilation MRI Using Artificial Intelligence.

Matheson AM, Bdaiwi AS, Willmering MM, Hysinger EB, McCormack FX, Walkup LL, Cleveland ZI, Woods JC

PubMed · Jul 4, 2025
Hyperpolarized ¹²⁹Xe magnetic resonance imaging (MRI) measures the extent of lung ventilation via the ventilation defect percent (VDP), but VDP alone cannot distinguish between diseases. Prior studies have reported anecdotal evidence of disease-specific defect patterns, such as wedge-shaped defects in asthma and polka-dot defects in lymphangioleiomyomatosis (LAM). Neural network artificial intelligence can evaluate image shapes and textures to classify images, but this has not been attempted in xenon MRI. We hypothesized that an artificial intelligence network trained on ventilation MRI could classify diseases based on spatial patterns in lung MR images alone. Xenon MRI data from six pulmonary conditions (control, asthma, bronchiolitis obliterans syndrome, bronchopulmonary dysplasia, cystic fibrosis, and LAM) were used to train convolutional neural networks. Network performance was assessed with top-1 and top-2 accuracy, recall, precision, and one-versus-all area under the curve (AUC). Gradient class activation mapping (Grad-CAM) was used to visualize which parts of the images were important for classification. Training/testing data were collected from 262 participants. The top-performing network (VGG-16) had top-1 accuracy = 56%, top-2 accuracy = 78%, recall = 0.30, precision = 0.70, and AUC = 0.85. The network performed better on larger classes (top-1 accuracy: control = 62% [n = 57], CF = 67% [n = 85], LAM = 69% [n = 61]) and outperformed human observers (human top-1 accuracy = 40%, network top-1 accuracy = 61% on a single training fold). We developed an artificial intelligence tool that could classify disease from xenon ventilation images alone and that outperformed human observers. This suggests that xenon images carry additional, disease-specific information that could be useful for clinically challenging cases or for disease phenotyping.
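
A hedged sketch of the Grad-CAM step on a VGG-16 classifier, mirroring the paper's interpretability analysis. torchvision's VGG-16 (untrained here) stands in for the network trained on xenon MRI, and the input is a toy tensor.

```python
import torch
from torchvision.models import vgg16

model = vgg16(weights=None)  # in practice: the network trained on xenon MRI
model.eval()

# Hook the last convolutional layer to capture activations and gradients.
activations, gradients = {}, {}
layer = model.features[28]  # last Conv2d in VGG-16's feature stack
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(x)[0].max()  # score of the top predicted class
score.backward()

# Grad-CAM: weight each activation map by its average gradient, sum, ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze(0)
print(cam.shape)  # coarse saliency map over the image
```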

Intelligent brain tumor detection using hybrid finetuned deep transfer features and ensemble machine learning algorithms.

Salakapuri R, Terlapu PV, Kalidindi KR, Balaka RN, Jayaram D, Ravikumar T

PubMed · Jul 4, 2025
Brain tumours (BTs) are severe neurological disorders, affecting more than 308,000 people each year worldwide with over 251,000 deaths annually (IARC, 2020). Detecting BTs is complex because of their varied nature, and early diagnosis is essential for better survival rates. This study presents a new system for detecting BTs that combines deep learning (DL) and machine learning (ML) techniques. The system uses advanced models such as Inception-V3, ResNet-50, and VGG-16 for feature extraction and principal component analysis (PCA) for dimensionality reduction, and it employs ensemble methods such as stacking, k-NN, gradient boosting, AdaBoost, multi-layer perceptron (MLP), and support vector machines for classification, predicting BTs from MRI scans. The MRI scans were resized to 224 × 224 pixels, pixel intensities were normalized to the [0, 1] range, and a Gaussian filter was applied for stability. Image augmentation with the Keras ImageDataGenerator applied transformations such as zooming and ±10% brightness adjustments. The dataset comprises 5,712 MRI scans classified into four groups: meningioma, no tumour, glioma, and pituitary. Tenfold cross-validation was used to assess model reliability. Deep transfer learning (TL) and ensemble ML models work well together and showed excellent results in detecting BTs. The stacking ensemble model achieved the highest accuracy across all feature extraction methods, with ResNet-50 features reduced by PCA (500 components) producing an accuracy of 0.957 (95% CI: 0.948-0.966) and an AUC of 0.996 (95% CI: 0.989-0.998), significantly outperforming baselines (p < 0.01). Neural networks and gradient-boosting models also showed strong performance, and the stacking model proved robust and reliable, making this method useful for medical applications. Future studies will focus on multi-modal imaging to further improve diagnostic accuracy and the early detection of brain tumours.
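
A sketch of the best-performing configuration reported above: deep features reduced by PCA feeding a stacking ensemble. Random arrays stand in for ResNet-50 embeddings, the PCA dimensionality is illustrative (the paper reduces to 500 components), and the base-learner settings are ours.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2048))   # stand-in ResNet-50 features
y = rng.integers(0, 4, size=400)   # 4 classes: meningioma/no-tumour/glioma/pituitary

stack = make_pipeline(
    PCA(n_components=100),         # illustrative; the paper uses 500
    StackingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier()),
            ("svm", SVC(probability=True, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
            ("ada", AdaBoostClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,
    ),
)
stack.fit(X, y)
print("train accuracy:", stack.score(X, y))
```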