Robust Radiomic Signatures of Intervertebral Disc Degeneration from MRI.

McSweeney T, Tiulpin A, Kowlagi N, Määttä J, Karppinen J, Saarakkala S

PubMed | Jun 20, 2025
A retrospective analysis. The aim of this study was to identify a robust radiomic signature from deep learning segmentations for intervertebral disc (IVD) degeneration classification. Low back pain (LBP) is the most common musculoskeletal symptom worldwide, and IVD degeneration is an important contributing factor. To improve the quantitative phenotyping of IVD degeneration from T2-weighted magnetic resonance imaging (MRI) and to better understand its relationship with LBP, multiple shape and intensity features have been investigated. IVD radiomics have been less studied but could reveal sub-visual imaging characteristics of IVD degeneration. We used data from Northern Finland Birth Cohort 1966 members who underwent lumbar spine T2-weighted MRI scans at age 45-47 (n=1397). We used a deep learning model to segment the lumbar spine IVDs, extracted 737 radiomic features, and calculated the IVD height index and peak signal intensity difference. Intraclass correlation coefficients across image and mask perturbations were calculated to identify robust features. Sparse partial least squares discriminant analysis was used to train a Pfirrmann grade classification model. The radiomics model had a balanced accuracy of 76.7% (73.1-80.3%) and a Cohen's kappa of 0.70 (0.67-0.74), compared to 66.0% (62.0-69.9%) and 0.55 (0.51-0.59) for an IVD height index and peak signal intensity model. 2D sphericity and interquartile range emerged as radiomics-based features that were robust and highly correlated with Pfirrmann grade (Spearman's correlation coefficients of -0.72 and -0.77, respectively). Based on our findings, these radiomic signatures could serve as alternatives to the conventional indices, representing a significant advance in the automated quantitative phenotyping of IVD degeneration from standard-of-care MRI.
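
As an illustration of the robustness screen described above, here is a minimal sketch assuming features have already been extracted from original and perturbed masks; the array shapes, synthetic data, and the ICC >= 0.75 cutoff are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np
from scipy.stats import spearmanr

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).
    ratings: (n_discs, n_perturbations) array of one feature's values."""
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - ratings.mean()) ** 2).sum() / (n - 1)       # between-disc mean square
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-disc mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1, 10))                    # 200 discs x 10 features (synthetic)
features = base + 0.1 * rng.normal(size=(200, 5, 10))   # 5 perturbed re-extractions per disc
pfirrmann = rng.integers(1, 6, size=200)                # synthetic Pfirrmann grades

for j in range(features.shape[2]):
    icc = icc_1_1(features[:, :, j])
    if icc >= 0.75:                                     # robustness cutoff (assumed)
        rho, _ = spearmanr(features[:, :, j].mean(axis=1), pfirrmann)
        print(f"feature {j}: ICC = {icc:.2f}, Spearman rho = {rho:.2f}")
```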

Artificial intelligence-assisted decision-making in third molar assessment using ChatGPT: is it really a valid tool?

Grinberg N, Ianculovici C, Whitefield S, Kleinman S, Feldman S, Peleg O

PubMed | Jun 20, 2025
Artificial intelligence (AI) is becoming increasingly popular in medicine. The current study aims to investigate whether an AI-based chatbot, such as ChatGPT, could be a valid tool for assisting in decision-making when assessing mandibular third molars before extractions. Panoramic radiographs were collected from a publicly available library. Mandibular third molars were assessed by position and depth. Two specialists evaluated each case regarding the need for CBCT referral, after which all cases were introduced to ChatGPT under a uniform script to decide the need for further CBCT imaging. The process was performed three times: first without any guidelines, second after introducing the guidelines presented by Rood et al. (1990), and third with additional test cases. ChatGPT's and the specialist's decisions were compared and analyzed using Cohen's kappa test and the Cochran-Mantel-Haenszel test to account for the effect of different tooth positions. All analyses were performed at a 95% confidence level. The study evaluated 184 molars. Without any guidelines, ChatGPT agreed with the specialist in 49% of cases, with no statistically significant agreement (kappa < 0.1), followed by 70% and 91% agreement, with moderate (kappa = 0.39) and near-perfect (kappa = 0.81) agreement, respectively, after the second and third rounds (p < 0.05). The high correlation between the specialist and the chatbot was preserved when analyzed by the different tooth locations and positions (p < 0.01). ChatGPT has shown the ability to analyze third molars prior to surgical interventions using accepted guidelines, with substantial agreement with specialists.
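
The agreement analysis above boils down to Cohen's kappa on paired referral decisions. A minimal sketch using scikit-learn, with hypothetical decision vectors standing in for the study's 184 cases:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical referral decisions (1 = refer for CBCT, 0 = no referral)
specialist = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
chatgpt    = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(specialist, chatgpt)
agreement = sum(s == c for s, c in zip(specialist, chatgpt)) / len(specialist)
print(f"raw agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```

Kappa corrects raw agreement for chance, which is why a 49% agreement rate can yield a kappa near zero.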

Effective workflow from multimodal MRI data to model-based prediction.

Jung K, Wischnewski KJ, Eickhoff SB, Popovych OV

PubMed | Jun 20, 2025
Predicting human behavior from neuroimaging data remains a complex challenge in neuroscience. To address this, we propose a systematic and multi-faceted framework that incorporates a model-based workflow using dynamical brain models. This approach utilizes multi-modal MRI data for brain modeling and applies the optimized modeling outcome to machine learning. We demonstrate the performance of such an approach through several examples, such as sex classification and prediction of cognition or personality traits. In particular, we show that incorporating the simulated data into machine learning can significantly improve prediction performance compared to using empirical features alone. These results suggest considering the output of the dynamical brain models as an additional neuroimaging data modality that complements empirical data by capturing brain features that are difficult to measure directly. The discussed model-based workflow can offer a promising avenue for investigating and understanding inter-individual variability in brain-behavior relationships and for enhancing prediction performance in neuroimaging research.
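
The core claim, that simulated features complement empirical ones, can be sketched as a feature-concatenation comparison. This is a generic stand-in with synthetic data, not the paper's actual pipeline (which fits dynamical brain models to multimodal MRI first):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300
empirical = rng.normal(size=(n, 50))    # stand-in for empirical connectivity features
simulated = rng.normal(size=(n, 50))    # stand-in for features from fitted brain models
labels = rng.integers(0, 2, size=n)     # e.g., sex labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc_emp = cross_val_score(clf, empirical, labels, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([empirical, simulated]), labels, cv=5).mean()
print(f"empirical only: {acc_emp:.2f}, empirical + simulated: {acc_both:.2f}")
```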

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

PubMed | Jun 20, 2025
Breast cancer is the most common cause of death among women worldwide. Early detection of breast cancer is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer. Computer techniques are used to help physicians diagnose cancer. In most previous studies, the classification parameter rates were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer images from three databases. The programming software used to extract features from the images was MATLAB R2022a. Novel approaches were obtained using new fractional transforms. These fractional transforms were deduced from the fractional Fourier transform and novel discrete transforms. The novel discrete transforms were derived from discrete sine and cosine transforms. The steps of the approaches were as follows. First, fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained. The mean, variance, kurtosis, and skewness were subsequently calculated. Finally, an RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used as the classification phase. The proposed approaches were compared to obtain the highest accuracy rate during the classification phase based on different fractional transforms. The highest accuracy rate was obtained when the fractional discrete sinc transform of approach 4 was applied. The area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were all 100%. Had traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), been used, the classification parameter rates would have been lower. The fourth approach therefore used an RNN-BiLSTM to exploit the features of breast images most effectively. This approach can be programmed on a computer to help physicians correctly classify breast images.
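
A minimal Python sketch of the final two steps, statistical-moment features per decomposed component fed into a BiLSTM classifier; the component count, hidden size, and two-class output are assumptions, and the original work used MATLAB:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import kurtosis, skew

def moment_features(components):
    """Mean, variance, kurtosis, and skewness for each decomposed component."""
    return np.stack([components.mean(axis=1),
                     components.var(axis=1),
                     kurtosis(components, axis=1),
                     skew(components, axis=1)], axis=1)

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_feats=4, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):               # x: (batch, n_components, n_feats)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])      # classify from the last step's output

components = np.random.randn(6, 1024)   # 6 synthetic EFD components of one image
feats = torch.tensor(moment_features(components), dtype=torch.float32).unsqueeze(0)
logits = BiLSTMClassifier()(feats)       # (1, 2): benign vs. malignant scores
```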

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

PubMed | Jun 20, 2025
Neurosurgical evaluation is required in the setting of spinal metastases at high risk of leading to a vertebral body fracture. Both irradiated and nonirradiated vertebrae are affected. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features of the vertebral bodies were extracted from tumor-infiltrated vertebrae that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model. A Spine Instability Neoplastic Score (SINS) model was created for a baseline performance comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) with tenfold cross-validation. Within 1 year from initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 to be used in the pure radiomic model and the combined clinical-radiomic model. The best performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.90), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than that of the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and exceeded that of the clinical-only model (AUROC 0.82, 95% CI 0.77-0.87), although not to a statistically significant degree. The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations from prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, the model could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
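
The RFE-plus-random-forest pipeline with tenfold cross-validated AUROC maps directly onto scikit-learn. A sketch with synthetic data matching the cohort's size and fracture rate (hyperparameters are assumptions, not the paper's settings):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in: 977 vertebrae, ~12.6% fracture rate, mixed features
X, y = make_classification(n_samples=977, n_features=100, n_informative=15,
                           weights=[0.874], random_state=0)

# RFE down to 19 features, as in the study (in practice, selection should be
# nested inside cross-validation to avoid optimistic bias)
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=19)
X_sel = rfe.fit_transform(X, y)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auroc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                        X_sel, y, cv=cv, scoring="roc_auc")
print(f"AUROC: {auroc.mean():.2f} +/- {auroc.std():.2f}")
```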

Image-Based Search in Radiology: Identification of Brain Tumor Subtypes within Databases Using MRI-Based Radiomic Features.

von Reppert M, Chadha S, Willms K, Avesta A, Maleki N, Zeevi T, Lost J, Tillmanns N, Jekel L, Merkaj S, Lin M, Hoffmann KT, Aneja S, Aboian MS

PubMed | Jun 20, 2025
Existing neuroradiology reference materials do not cover the full range of primary brain tumor presentations, and text-based medical image search engines are limited by the lack of consistent structure in radiology reports. To address this, an image-based search approach is introduced here, leveraging an institutional database to find reference MRIs visually similar to presented query cases. Two hundred ninety-five patients (mean age 51 ± 20 years [SD]) with primary brain tumors who underwent surgical and/or radiotherapeutic treatment between 2000 and 2021 were included in this retrospective study. Semiautomated convolutional neural network-based tumor segmentation was performed, and radiomic features were extracted. The data set was split into reference and query subsets, and dimensionality reduction was applied to cluster reference cases. Radiomic features extracted from each query case were projected onto the clustered reference cases, and nearest neighbors were retrieved. Retrieval performance was evaluated using mean average precision at k, and the best-performing dimensionality reduction technique was identified. Expert readers independently rated visual similarity using a 5-point Likert scale. t-Distributed stochastic neighbor embedding with 6 components was the highest-performing dimensionality reduction technique, with mean average precision at 5 ranging from 78% to 100% by tumor type. The top 5 retrieved reference cases showed high visual similarity Likert scores with the corresponding query cases (76% 'similar' or 'very similar'). We introduce an image-based search method for exploring historical MR images of primary brain tumors and retrieving reference cases that closely resemble the query. Assessment comparing tumor types and visual similarity Likert scoring by expert neuroradiologists validates the effectiveness of this method.
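
A minimal sketch of the retrieval step, t-SNE embedding, nearest-neighbor lookup, and a precision-at-5 score, using synthetic features; the reference/query split and label counts are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
radiomics = rng.normal(size=(295, 100))      # synthetic radiomic feature vectors
tumor_type = rng.integers(0, 4, size=295)    # synthetic tumor-type labels

# sklearn's t-SNE has no out-of-sample transform, so this sketch embeds
# reference and query cases together; libraries such as openTSNE support
# projecting new queries onto a fixed reference embedding, as the paper describes.
emb = TSNE(n_components=6, method="exact", random_state=0).fit_transform(radiomics)

ref, query = emb[:250], emb[250:]
nn = NearestNeighbors(n_neighbors=5).fit(ref)
_, idx = nn.kneighbors(query)

# precision@5: fraction of retrieved references sharing the query's tumor type
p_at_5 = (tumor_type[idx] == tumor_type[250:, None]).mean()
print(f"mean precision@5: {p_at_5:.2f}")
```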

BioTransX: A novel bi-former based hybrid model with bi-level routing attention for brain tumor classification with explainable insights.

Rajpoot R, Jain S, Semwal VB

PubMed | Jun 20, 2025
Brain tumors, known for their life-threatening implications, underscore the urgency of precise and interpretable early detection. Expertise remains essential for accurate identification through MRI scans due to the intricacies involved. However, the growing recognition of automated detection systems holds the potential to enhance accuracy and improve interpretability. By consistently providing easily comprehensible results, these automated solutions could boost the overall efficiency and effectiveness of brain tumor diagnosis, promising a transformative era in healthcare. This paper introduces a new hybrid model, BioTransX, which uses a bi-former encoder mechanism, a transformer with dynamic sparse bi-level routing attention, in conjunction with ensemble convolutional networks. Recognizing the importance of better contrast and data quality, we applied Contrast-Limited Adaptive Histogram Equalization (CLAHE) during the initial data-processing stage. Additionally, to address the crucial aspect of model interpretability, we integrated Grad-CAM and Gradient Attention Rollout, which elucidate decisions by highlighting influential regions within medical images. Our hybrid deep learning model was primarily evaluated on the Kaggle MRI dataset for multi-class brain tumor classification, achieving a mean accuracy and F1-score of 99.29%. To validate its generalizability and robustness, BioTransX was further tested on two additional benchmark datasets, BraTS and Figshare, where it consistently maintained high performance across key evaluation metrics. The transformer-based hybrid model demonstrated promising performance in explainable identification and offered notable advantages in computational efficiency and memory usage. These strengths differentiate BioTransX from existing models in the literature and make it well suited for real-world deployment in resource-constrained clinical infrastructures.
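
The CLAHE preprocessing step is standard and easy to reproduce with OpenCV. A sketch on a synthetic grayscale slice; the clip limit and tile grid size are common defaults, not necessarily the paper's settings:

```python
import cv2
import numpy as np

# Synthetic stand-in for a grayscale MRI slice (uint8, 256x256)
mri = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Contrast-Limited Adaptive Histogram Equalization on local tiles
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(mri)
```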

Three-dimensional U-Net with transfer learning improves automated whole brain delineation from MRI brain scans of rats, mice, and monkeys.

Porter VA, Hobson BA, D'Almeida AJ, Bales KL, Lein PJ, Chaudhari AJ

PubMed | Jun 20, 2025
Automated whole-brain delineation (WBD) techniques often struggle to generalize across pre-clinical studies due to variations in animal models, magnetic resonance imaging (MRI) scanners, and tissue contrasts. We developed a 3D U-Net neural network for WBD pre-trained on organophosphate intoxication (OPI) rat brain MRI scans. We used transfer learning (TL) to adapt this OPI-pretrained network to other animal models: a rat model of Alzheimer's disease (AD), a mouse model of tetramethylenedisulfotetramine (TETS) intoxication, and a titi monkey model of social bonding. We assessed the OPI-pretrained 3D U-Net across animal models under three conditions: (1) direct application to each dataset; (2) utilizing TL; and (3) training disease-specific U-Net models. For each condition, training dataset size (TDS) was optimized, and output WBDs were compared to manual segmentations for accuracy. The OPI-pretrained 3D U-Net (TDS = 100) achieved the best accuracy (reported as median [min-max]) on the test OPI dataset, with a Dice coefficient (DC) of 0.987 [0.977-0.992] and a Hausdorff distance (HD) of 0.86 [0.55-1.27] mm. TL improved generalization across all models (AD (TDS = 40): DC = 0.987 [0.977-0.992], HD = 0.72 [0.54-1.00] mm; TETS (TDS = 10): DC = 0.992 [0.984-0.993], HD = 0.40 [0.31-0.50] mm; monkey (TDS = 8): DC = 0.977 [0.968-0.979], HD = 3.03 [2.19-3.91] mm), showing performance comparable to disease-specific networks. The OPI-pretrained 3D U-Net with TL achieved accuracy comparable to disease-specific networks with reduced training data (TDS ≤ 40 scans) across all models. Future work will focus on developing a multi-region delineation pipeline for pre-clinical MRI brain data, utilizing the proposed WBD as an initial step.
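
A minimal sketch of the two accuracy metrics reported above, Dice coefficient and symmetric Hausdorff distance, computed on binary mask volumes; the voxel spacing and toy masks are assumptions (in practice HD is often computed on boundary voxels only, for speed):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def hausdorff(pred, gt, spacing=1.0):
    """Symmetric Hausdorff distance between mask voxel coordinates, in mm."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return spacing * max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
gt   = np.zeros((64, 64, 64), bool); gt[22:40, 20:40, 20:40] = True
print(f"DC = {dice(pred, gt):.3f}, HD = {hausdorff(pred, gt, spacing=0.25):.2f} mm")
```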

Generative deep-learning-model based contrast enhancement for digital subtraction angiography using a text-conditioned image-to-image model.

Takata T, Yamada K, Yamamoto M, Kondo H

PubMed | Jun 20, 2025
Digital subtraction angiography (DSA) is an essential imaging technique in interventional radiology, enabling detailed visualization of blood vessels by subtracting pre- and post-contrast images. However, reduced contrast, whether accidental or intentional, can impair the clarity of vascular structures. This issue becomes particularly critical in patients with chronic kidney disease (CKD), where minimizing iodinated contrast is necessary to reduce the risk of contrast-induced nephropathy (CIN). This study explored the potential of a generative deep-learning-based contrast enhancement technique for DSA. A text-conditioned image-to-image model was developed using Stable Diffusion, augmented with ControlNet to reduce hallucinations and with Low-Rank Adaptation for model fine-tuning. A total of 1207 DSA series were used for training and testing, with additional low-contrast images generated through data augmentation. The model was trained using tagged text labels and evaluated using metrics such as root mean square (RMS) contrast, Michelson contrast, signal-to-noise ratio (SNR), and entropy. Evaluation results indicated significant improvements: RMS contrast, Michelson contrast, and entropy increased from 7.91 to 17.7, 0.875 to 0.992, and 3.60 to 5.60, respectively, reflecting enhanced detail. However, SNR decreased from 21.3 to 8.50, indicating increased noise. This study demonstrated the feasibility of deep-learning-based contrast enhancement for DSA images and highlights the potential of generative deep-learning models to improve angiographic imaging. Further refinements, particularly in artifact suppression and clinical validation, are necessary for practical implementation in medical settings.
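
The four evaluation metrics have simple closed forms. A sketch of one plausible set of definitions; the paper does not spell out its exact SNR formulation, so the mean/std ratio below is an assumption:

```python
import numpy as np

def contrast_metrics(img):
    """Image-quality metrics of the kind used in the DSA evaluation."""
    rms = img.std()                                         # RMS contrast
    michelson = (img.max() - img.min()) / (img.max() + img.min() + 1e-12)
    snr = img.mean() / (img.std() + 1e-12)                  # one common SNR definition
    hist, _ = np.histogram(img, bins=256)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()                       # Shannon entropy
    return rms, michelson, snr, entropy

dsa = np.random.rand(512, 512).astype(np.float32)           # synthetic DSA frame
print("RMS %.2f, Michelson %.3f, SNR %.2f, entropy %.2f" % contrast_metrics(dsa))
```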

Combination of 2D and 3D nnU-Net for ground glass opacity segmentation in CT images of Post-COVID-19 patients.

Nguyen QH, Hoang DA, Pham HV

PubMed | Jun 20, 2025
The COVID-19 pandemic has had a significant impact on global health, highlighting the need for effective management of post-recovery symptoms. Within this context, Ground Glass Opacity (GGO) in lung computed tomography (CT) scans emerges as a critical indicator for early intervention. Recent segmentation challenges have aimed to refine and compare state-of-the-art methods for analyzing lung CT images of patients recovering from COVID-19. While many methods in these challenges use the nnU-Net architecture, its generic configuration does not fully address GGO characteristics such as irregular shapes and fuzzy boundaries of infected areas. This research investigated a specialized algorithm that advances the nnU-Net framework to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a novel two-stage segmentation approach combining nnU-Net 2D and 3D models, covering lung and opacity segmentation and incorporating an attention mechanism. The combined models improve automatic segmentation accuracy when different loss functions are used during training. Experimental results show that the proposed model's DSC score ranks fifth among the compared methods. The proposed method also achieves the second-highest sensitivity, indicating a higher true-positive segmentation rate than most competing methods. The proposed method achieved a Hausdorff95 of 54.566, a surface Dice of 0.7193, a sensitivity of 0.7528, and a specificity of 0.7749. Compared with state-of-the-art methods, the proposed model segments infected areas substantially better. The combined 2D-3D model was deployed in a real-world case study and demonstrated the capacity to comprehensively and correctly detect lung lesions. Additionally, the boundary loss function helped achieve more precise segmentation of low-resolution images, and initially segmenting the lung area reduced the volume of image data requiring processing and shortened training.
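
A minimal sketch of two ingredients mentioned above: averaging as one simple way to fuse 2D and 3D nnU-Net probability maps (the paper's exact fusion scheme may differ), and a boundary loss in the style of Kervadec et al., where predicted foreground probability is weighted by a signed distance map of the ground truth:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_loss(probs, gt):
    """Boundary loss: predicted foreground probability weighted by the signed
    distance map of the ground-truth mask (negative inside, positive outside)."""
    signed_dist = distance_transform_edt(gt == 0) - distance_transform_edt(gt == 1)
    return float((probs * signed_dist).mean())

# Simple ensemble of the two nnU-Net branches: average the probability maps
probs_2d = np.random.rand(64, 64, 64)   # stand-in for stacked 2D nnU-Net output
probs_3d = np.random.rand(64, 64, 64)   # stand-in for 3D nnU-Net output
fused = 0.5 * probs_2d + 0.5 * probs_3d

gt = np.zeros((64, 64, 64), dtype=np.uint8); gt[20:40, 20:40, 20:40] = 1
print(f"boundary loss: {boundary_loss(fused, gt):.3f}")
```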