
Impact of ablation on regional strain from 4D computed tomography in the left atrium.

Mehringer N, Severance L, Park A, Ho G, McVeigh E

PubMed | Jun 20, 2025
Ablation for atrial fibrillation targets arrhythmogenic substrate in the myocardium of the left atrium (LA) with therapeutic energy, resulting in scar tissue. Although global LA function typically improves after ablation, the injured tissue is stiffer and non-contractile. The local functional impact of ablation has not been thoroughly investigated. This study retrospectively analyzed the LA mechanics of 15 subjects who received a four-dimensional computed tomography (4DCT) scan pre- and post-ablation for atrial fibrillation. LA volumes were automatically segmented at every frame by a trained neural network and converted into surface meshes. Local endocardial strain was computed at a resolution of 2 mm from the deforming meshes. The LA endocardial surface was automatically divided into five walls and further into 24 sub-segments using the left atrial positioning system. Intraoperative notes gathered during the ablation procedure identified which regions received ablative treatment. At an average of 18 months after ablation, strain had decreased by 16.3% in the septal wall and by 18.3% in the posterior wall. In subjects imaged in sinus rhythm both before and after the procedure, ablation reduced regional strain by 15.3% (p = 0.012). Post-ablation strain maps showed spatial patterns of reduced strain that matched the ablation pattern. This study demonstrates the capability of 4DCT to capture high-resolution changes in left atrial strain in response to tissue damage and explores the quantification of regionally reduced LA function caused by scar tissue.
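
The strain computation described above can be illustrated with a minimal sketch: given two frames of a surface mesh with vertex correspondence, local deformation can be expressed as the relative area change of each triangular face. This is not the authors' pipeline; the mesh arrays and the area-strain definition are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): per-triangle area strain between two
# phases of a surface mesh with vertex correspondence.
import numpy as np

def triangle_areas(vertices, faces):
    """Area of each triangular face; vertices (N, 3), faces (M, 3) int indices."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def area_strain(vertices_ref, vertices_def, faces):
    """Relative area change of each face between a reference and deformed frame."""
    area_ref = triangle_areas(vertices_ref, faces)
    area_def = triangle_areas(vertices_def, faces)
    return (area_def - area_ref) / area_ref

# Toy mesh: a unit square split into two triangles, scaled by 10% linearly,
# which corresponds to a ~21% area increase per face.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(area_strain(verts, verts * 1.1, faces))  # ~[0.21, 0.21]
```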

Radiological data processing system: lifecycle management and annotation.

Bobrovskaya T, Vasilev Y, Vladzymyrskyy A, Omelyanskaya O, Kosov P, Krylova E, Ponomarenko A, Burtsev T, Savkina E, Kodenko M, Kasimov S, Medvedev K, Kovalchuk A, Zinchenko V, Rumyantsev D, Kazarinova V, Semenov S, Arzamasov K

PubMed | Jun 20, 2025
To develop a platform for automated processing of radiological datasets that operates independently of medical information systems. The platform maintains datasets throughout their lifecycle, from data retrieval to annotation and presentation. The platform employs a modular structure in which modules can operate independently or in combination, with each module sequentially processing the output of the preceding module. The platform incorporates a local database containing textual study protocols, a radiology information system (RIS), and storage for labeled studies and reports. Local permanent and temporary file storage facilitates radiological dataset processing. The platform's modules enable data search, extraction, anonymization, annotation, generation of annotated files, and standardized documentation of datasets. The platform provides a comprehensive workflow for radiological dataset management and is currently operational at the Center for Diagnostics and Telemedicine. Future development will focus on expanding platform functionality.
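
The chained-module design described above can be sketched minimally as a sequence of callables, each consuming the previous module's output. Module names and payloads are hypothetical, not the platform's actual interfaces.

```python
# Minimal sketch of the chained-module idea; modules and payloads are hypothetical.
from typing import Callable, Iterable

Module = Callable[[dict], dict]

def run_pipeline(study: dict, modules: Iterable[Module]) -> dict:
    """Pass a study record through modules sequentially, each consuming the
    previous module's output."""
    for module in modules:
        study = module(study)
    return study

def search(study):    return {**study, "found": True}
def anonymize(study): return {**study, "patient_id": None}
def annotate(study):  return {**study, "labels": ["example-label"]}

result = run_pipeline({"patient_id": "123", "modality": "CT"},
                      [search, anonymize, annotate])
print(result)
```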

Artificial intelligence-based tumor size measurement on mammography: agreement with pathology and comparison with human readers' assessments across multiple imaging modalities.

Kwon MR, Kim SH, Park GE, Mun HS, Kang BJ, Kim YT, Yoon I

PubMed | Jun 20, 2025
To evaluate the agreement between artificial intelligence (AI)-based tumor size measurements of breast cancer and the final pathology and compare these results with those of other imaging modalities. This retrospective study included 925 women (mean age, 55.3 years ± 11.6) with 936 breast cancers, who underwent digital mammography, breast ultrasound, and magnetic resonance imaging before breast cancer surgery. AI-based tumor size measurement was performed on post-processed mammographic images, outlining areas with AI abnormality scores of 10, 50, and 90%. Absolute agreement between AI-based tumor sizes, imaging modalities, and histopathology was assessed using intraclass correlation coefficient (ICC) analysis. Concordant and discordant cases between AI measurements and histopathologic examinations were compared. Tumor size at an abnormality score of 50% showed the highest agreement with histopathologic examination (ICC = 0.54, 95% confidence interval [CI]: 0.49-0.59), comparable to the agreement achieved by mammography (ICC = 0.54, 95% CI: 0.48-0.60, p = 0.40). For ductal carcinoma in situ and human epidermal growth factor receptor 2-positive cancers, AI showed higher agreement than mammography (ICC = 0.76, 95% CI: 0.67-0.84 and ICC = 0.73, 95% CI: 0.52-0.85). Overall, 52.0% (487/936) of cases were discordant; these cases were more commonly observed in younger patients with dense breasts, multifocal malignancies, lower abnormality scores, and differing imaging characteristics. AI-based tumor size measurements at an abnormality score of 50% showed moderate agreement with histopathology but were discordant in size in more than half of the cases. While comparable to mammography, these limitations emphasize the need for further refinement and research.
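
As a rough illustration of the agreement analysis, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measures) with NumPy for paired size measurements; the paper does not state which ICC form was used, and the example values are invented.

```python
# Illustrative ICC(2,1) for comparing tumor-size measurements from two methods;
# treat this as a sketch, not the study's exact analysis.
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, k_raters) array of size measurements."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between methods
    ss_err = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical AI-based size (50% threshold) vs. pathology size, in mm.
sizes = np.array([[22, 25], [14, 13], [31, 35], [8, 10], [19, 18]])
print(round(icc_2_1(sizes), 2))
```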

MVKD-Trans: A Multi-View Knowledge Distillation Vision Transformer Architecture for Breast Cancer Classification Based on Ultrasound Images.

Ling D, Jiao X

PubMed | Jun 20, 2025
Breast cancer is the leading cancer threatening women's health. In recent years, deep neural networks have outperformed traditional methods in terms of both accuracy and efficiency for breast cancer classification. However, most ultrasound-based breast cancer classification methods rely on single-perspective information, which may lead to higher misdiagnosis rates. In this study, we propose a multi-view knowledge distillation vision transformer architecture (MVKD-Trans) for the classification of benign and malignant breast tumors. We utilize multi-view ultrasound images of the same tumor to capture diverse features. Additionally, we employ a shuffle module for feature fusion, extracting channel and spatial dual-attention information to improve the model's representational capability. Given the limited computational capacity of ultrasound devices, we also utilize knowledge distillation (KD) techniques to compress the multi-view network into a single-view network. The results show that the accuracy, area under the ROC curve (AUC), sensitivity, specificity, precision, and F1 score of the model are 88.15%, 91.23%, 81.41%, 90.73%, 78.29%, and 79.69%, respectively. The superior performance of our approach, compared to several existing models, highlights its potential to significantly enhance the understanding and classification of breast cancer.
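
A common way to compress a multi-view teacher into a single-view student is a temperature-scaled distillation loss; the PyTorch sketch below shows the standard (Hinton-style) formulation, with illustrative hyperparameters rather than values from the paper.

```python
# Sketch of a standard temperature-scaled distillation loss, illustrating how a
# multi-view teacher could supervise a single-view student; T and alpha are
# illustrative, not values from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with random logits for a benign/malignant (2-class) problem.
student = torch.randn(8, 2)
teacher = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels).item())
```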

Robust Radiomic Signatures of Intervertebral Disc Degeneration from MRI.

McSweeney T, Tiulpin A, Kowlagi N, Määttä J, Karppinen J, Saarakkala S

PubMed | Jun 20, 2025
A retrospective analysis. The aim of this study was to identify a robust radiomic signature from deep learning segmentations for intervertebral disc (IVD) degeneration classification. Low back pain (LBP) is the most common musculoskeletal symptom worldwide, and IVD degeneration is an important contributing factor. To improve the quantitative phenotyping of IVD degeneration from T2-weighted magnetic resonance imaging (MRI) and better understand its relationship with LBP, multiple shape and intensity features have been investigated. IVD radiomics have been less studied but could reveal sub-visual imaging characteristics of IVD degeneration. We used data from Northern Finland Birth Cohort 1966 members who underwent lumbar spine T2-weighted MRI scans at age 45-47 (n = 1397). We used a deep learning model to segment the lumbar spine IVDs, extracted 737 radiomic features, and calculated IVD height index and peak signal intensity difference. Intraclass correlation coefficients across image and mask perturbations were calculated to identify robust features. Sparse partial least squares discriminant analysis was used to train a Pfirrmann grade classification model. The radiomics model had a balanced accuracy of 76.7% (73.1-80.3%) and a Cohen's kappa of 0.70 (0.67-0.74), compared with 66.0% (62.0-69.9%) and 0.55 (0.51-0.59) for an IVD height index and peak signal intensity model. 2D sphericity and interquartile range emerged as radiomic features that were robust and highly correlated with Pfirrmann grade (Spearman's correlation coefficients of -0.72 and -0.77, respectively). Based on our findings, these radiomic signatures could serve as alternatives to the conventional indices, representing a significant advance in the automated quantitative phenotyping of IVD degeneration from standard-of-care MRI.
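
As a rough stand-in for the sparse PLS-DA classifier described above, the sketch below runs plain PLS-DA via scikit-learn's PLSRegression on one-hot Pfirrmann-grade targets (sparse PLS-DA is not available in scikit-learn); the data are synthetic.

```python
# Sketch of PLS-DA for Pfirrmann grade classification; plain (non-sparse) PLS
# substitutes for the study's sparse PLS-DA, and the data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))        # robust radiomic features (synthetic)
y = rng.integers(1, 6, size=300)      # Pfirrmann grades 1-5 (synthetic)

X = StandardScaler().fit_transform(X)
Y = np.eye(5)[y - 1]                  # one-hot targets for PLS-DA

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1) + 1   # back to grade labels 1-5
print("accuracy:", (pred == y_te).mean())
```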

Artificial intelligence-assisted decision-making in third molar assessment using ChatGPT: is it really a valid tool?

Grinberg N, Ianculovici C, Whitefield S, Kleinman S, Feldman S, Peleg O

PubMed | Jun 20, 2025
Artificial intelligence (AI) is becoming increasingly popular in medicine. The current study aims to investigate whether an AI-based chatbot such as ChatGPT could be a valid tool for assisting in decision-making when assessing mandibular third molars before extraction. Panoramic radiographs were collected from a publicly available library. Mandibular third molars were assessed by position and depth. Two specialists evaluated each case regarding the need for CBCT referral, after which all cases were presented to ChatGPT under a uniform script to decide the need for further CBCT imaging. The process was performed first without any guidelines, second after introducing the guidelines presented by Rood et al. (1990), and third with additional test cases. ChatGPT's and the specialist's decisions were compared and analyzed using Cohen's kappa test and the Cochran-Mantel-Haenszel test to account for the effect of different tooth positions. All analyses were performed at a 95% confidence level. The study evaluated 184 molars. Without any guidelines, ChatGPT agreed with the specialist in 49% of cases, with no statistically significant agreement (kappa < 0.1), followed by 70% and 91% with moderate (kappa = 0.39) and near-perfect (kappa = 0.81) agreement, respectively, after the second and third rounds (p < 0.05). The high correlation between the specialist and the chatbot was preserved when analyzed by the different tooth locations and positions (p < 0.01). ChatGPT has shown the ability to analyze third molars prior to surgical intervention using accepted guidelines, with substantial agreement with specialists.
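
The core agreement metric is straightforward to reproduce: the sketch below computes raw agreement and Cohen's kappa between hypothetical specialist and chatbot CBCT-referral decisions using scikit-learn.

```python
# Sketch of the agreement analysis: Cohen's kappa between the specialist's and
# the chatbot's CBCT-referral decisions. The decision vectors are hypothetical
# placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

# 1 = refer for CBCT, 0 = no CBCT needed
specialist = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
chatgpt    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(specialist, chatgpt)
agreement = sum(s == c for s, c in zip(specialist, chatgpt)) / len(specialist)
print(f"raw agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```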

Effective workflow from multimodal MRI data to model-based prediction.

Jung K, Wischnewski KJ, Eickhoff SB, Popovych OV

PubMed | Jun 20, 2025
Predicting human behavior from neuroimaging data remains a complex challenge in neuroscience. To address this, we propose a systematic and multi-faceted framework that incorporates a model-based workflow using dynamical brain models. This approach uses multi-modal MRI data for brain modeling and applies the optimized modeling outcome to machine learning. We demonstrate the performance of this approach through several examples, such as sex classification and prediction of cognition or personality traits. In particular, we show that incorporating the simulated data into machine learning can significantly improve prediction performance compared with using empirical features alone. These results suggest considering the output of the dynamical brain models as an additional neuroimaging data modality that complements empirical data by capturing brain features that are difficult to measure directly. The discussed model-based workflow offers a promising avenue for investigating and understanding inter-individual variability in brain-behavior relationships and for enhancing prediction performance in neuroimaging research.
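
The central comparison, prediction from empirical features alone versus empirical plus model-simulated features, can be sketched as follows with synthetic data and a simple ridge regressor; the feature names and predictor choice are assumptions, not the authors' exact setup.

```python
# Sketch of the core comparison: cross-validated prediction from empirical
# features alone versus empirical + simulated (model-derived) features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 200
empirical = rng.normal(size=(n_subjects, 50))   # e.g., empirical connectivity features
simulated = rng.normal(size=(n_subjects, 50))   # e.g., model-simulated features
score = empirical[:, 0] + 0.5 * simulated[:, 0] + rng.normal(scale=0.5, size=n_subjects)

r_emp = cross_val_score(Ridge(), empirical, score, cv=5, scoring="r2").mean()
r_both = cross_val_score(Ridge(), np.hstack([empirical, simulated]), score,
                         cv=5, scoring="r2").mean()
print(f"empirical only R^2: {r_emp:.2f}, empirical + simulated R^2: {r_both:.2f}")
```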

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

PubMed | Jun 20, 2025
Breast cancer is the most common cause of death among women worldwide, and early detection is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, the classification parameter rates were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer in images from three databases. The programming software used to extract features from the images was MATLAB R2022a. Novel approaches were obtained using new fractional transforms, which were derived from the fractional Fourier transform and from novel discrete transforms based on the discrete sine and cosine transforms. The steps of the approaches are as follows. First, fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained. The mean, variance, kurtosis, and skewness were subsequently calculated. Finally, an RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used for the classification phase. The proposed approaches were compared to obtain the highest accuracy rate during the classification phase based on the different fractional transforms. The highest accuracy rate was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were 100%. With traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), the classification parameter rates were lower. The fourth approach, combining the fractional discrete sinc transform with an RNN-BiLSTM, therefore classified the breast images most effectively. This approach can be programmed on a computer to help physicians correctly classify breast images.
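
The statistical-feature step (mean, variance, kurtosis, skewness per decomposition component) can be sketched with SciPy as below; the fractional transform and EFD outputs are mocked as random components, since those steps are specific to the paper.

```python
# Sketch of the statistical feature step: mean, variance, kurtosis and skewness
# per decomposition component. The decomposition outputs are mocked here.
import numpy as np
from scipy.stats import kurtosis, skew

def moment_features(components):
    """components: (n_components, n_samples) array from the decomposition step."""
    feats = []
    for c in components:
        feats.extend([c.mean(), c.var(), kurtosis(c), skew(c)])
    return np.asarray(feats)

rng = np.random.default_rng(0)
mock_components = rng.normal(size=(4, 1024))   # placeholder for EFD components
print(moment_features(mock_components).shape)  # 4 components x 4 moments = (16,)
```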

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

PubMed | Jun 20, 2025
Neurosurgical evaluation is required in the setting of spinal metastases at high risk of leading to a vertebral body fracture. Both irradiated and nonirradiated vertebrae are affected. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features of the vertebral bodies were extracted from tumor-infiltrated vertebrae that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model. A Spine Instability Neoplastic Score (SINS) model was created for baseline performance comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) with tenfold cross-validation. Within 1 year from initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 to be used in the pure radiomic model and the combined clinical-radiomic model. The best performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.90), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than that of the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and outperformed the clinical-only model (AUROC 0.82, 95% CI 0.77-0.87), although not to a statistically significant degree. The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations of prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, the model could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
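
The selection-and-evaluation pipeline, RFE over combined clinical and radiomic features followed by a random forest scored with 10-fold cross-validated AUROC, can be sketched with scikit-learn as follows; the data are synthetic and the parameters illustrative.

```python
# Sketch of the modeling pipeline: recursive feature elimination over combined
# clinical + radiomic features, then a random forest scored by 10-fold
# cross-validated AUROC. Data are synthetic and parameters illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(977, 60))              # clinical + radiomic features (synthetic)
y = (rng.random(977) < 0.13).astype(int)    # roughly 123/977 vertebrae fracture

model = make_pipeline(
    RFE(RandomForestClassifier(n_estimators=100, random_state=0),
        n_features_to_select=25, step=5),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
auroc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"AUROC: {auroc.mean():.2f} +/- {auroc.std():.2f}")
```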

Image-Based Search in Radiology: Identification of Brain Tumor Subtypes within Databases Using MRI-Based Radiomic Features.

von Reppert M, Chadha S, Willms K, Avesta A, Maleki N, Zeevi T, Lost J, Tillmanns N, Jekel L, Merkaj S, Lin M, Hoffmann KT, Aneja S, Aboian MS

PubMed | Jun 20, 2025
Existing neuroradiology reference materials do not cover the full range of primary brain tumor presentations, and text-based medical image search engines are limited by the lack of consistent structure in radiology reports. To address this, an image-based search approach is introduced here, leveraging an institutional database to find reference MRIs visually similar to presented query cases. Two hundred ninety-five patients (mean age ± standard deviation, 51 ± 20 years) with primary brain tumors who underwent surgical and/or radiotherapeutic treatment between 2000 and 2021 were included in this retrospective study. Semiautomated convolutional neural network-based tumor segmentation was performed, and radiomic features were extracted. The data set was split into reference and query subsets, and dimensionality reduction was applied to cluster the reference cases. Radiomic features extracted from each query case were projected onto the clustered reference cases, and the nearest neighbors were retrieved. Retrieval performance was evaluated using mean average precision at k, and the best-performing dimensionality reduction technique was identified. Expert readers independently rated visual similarity using a 5-point Likert scale. t-Distributed stochastic neighbor embedding with 6 components was the highest-performing dimensionality reduction technique, with mean average precision at 5 ranging from 78% to 100% by tumor type. The top 5 retrieved reference cases showed high visual similarity Likert scores with the corresponding query cases (76% rated 'similar' or 'very similar'). We introduce an image-based search method for exploring historical MR images of primary brain tumors and retrieving reference cases that closely resemble queried ones. Assessment involving comparison of tumor types and visual similarity Likert scoring by expert neuroradiologists validates the effectiveness of this method.
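
The retrieval and evaluation step can be sketched as nearest-neighbor search in a reduced feature space plus mean average precision at k; PCA stands in for t-SNE here because scikit-learn's t-SNE cannot project unseen query cases, and all data are synthetic.

```python
# Sketch of the retrieval step: nearest-neighbor search in a reduced radiomic
# feature space and mean average precision at k. PCA substitutes for t-SNE;
# all data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(200, 100))    # reference radiomic features
ref_types = rng.integers(0, 4, size=200)   # tumor type labels
qry_feats = rng.normal(size=(50, 100))
qry_types = rng.integers(0, 4, size=50)

pca = PCA(n_components=6).fit(ref_feats)
nn = NearestNeighbors(n_neighbors=5).fit(pca.transform(ref_feats))
_, idx = nn.kneighbors(pca.transform(qry_feats))

def average_precision_at_k(retrieved_labels, query_label):
    """Average precision over the ranked list of retrieved labels."""
    hits, precisions = 0, []
    for rank, label in enumerate(retrieved_labels, start=1):
        if label == query_label:
            hits += 1
            precisions.append(hits / rank)
    return np.mean(precisions) if precisions else 0.0

map_at_5 = np.mean([average_precision_at_k(ref_types[row], q)
                    for row, q in zip(idx, qry_types)])
print(f"mAP@5: {map_at_5:.2f}")
```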