Page 252 of 385 (3843 results)

Enhancing Privacy: The Utility of Stand-Alone Synthetic CT and MRI for Tumor and Bone Segmentation

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arXiv preprint · Jun 13, 2025
AI requires extensive datasets, but medical data are subject to strict data protection. Anonymization is essential, yet challenging for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offer a potential solution, but studies often lack rigorous evaluation of realism and utility. We therefore investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, radiomics, and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using the Dice similarity coefficient (DSC). Radiomics indicates high fidelity of synthetic MRIs but falls short for CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC = 0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC of 0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show the utility of synthetic CTs, though with limited educational applications. Synthetic data can be used independently for segmentation, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
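The study's utility metric, the Dice similarity coefficient (DSC), can be computed directly from a pair of binary masks. A minimal sketch (the masks and shapes here are toy inputs, not the paper's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 2D "tumor" masks
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 voxels
print(round(dice_score(a, b), 4))  # 2*9/(16+16) = 0.5625
```

DSC of 1.0 means perfect overlap and 0.0 means none, which is why the CT tumor result of 0.064 above signals near-total failure while 0.834 on MRI is usable.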

BraTS orchestrator: Democratizing and disseminating state-of-the-art brain tumor image analysis

Florian Kofler, Marcel Rosier, Mehdi Astaraki, Ujjwal Baid, Hendrik Möller, Josef A. Buchner, Felix Steinbauer, Eva Oswald, Ezequiel de la Rosa, Ivan Ezhov, Constantin von See, Jan Kirschke, Anton Schmick, Sarthak Pati, Akis Linardos, Carla Pitarch, Sanyukta Adap, Jeffrey Rudie, Maria Correia de Verdier, Rachit Saluja, Evan Calabrese, Dominic LaBella, Mariam Aboian, Ahmed W. Moawad, Nazanin Maleki, Udunna Anazodo, Maruf Adewole, Marius George Linguraru, Anahita Fathi Kazerooni, Zhifan Jiang, Gian Marco Conte, Hongwei Li, Juan Eugenio Iglesias, Spyridon Bakas, Benedikt Wiestler, Marie Piraud, Bjoern Menze

arXiv preprint · Jun 13, 2025
The Brain Tumor Segmentation (BraTS) cluster of challenges has significantly advanced brain tumor image analysis by providing large, curated datasets and addressing clinically relevant tasks. However, despite its success and popularity, algorithms and models developed through BraTS have seen limited adoption in both scientific and clinical communities. To accelerate their dissemination, we introduce BraTS orchestrator, an open-source Python package that provides seamless access to state-of-the-art segmentation and synthesis algorithms for diverse brain tumors from the BraTS challenge ecosystem. Available on GitHub (https://github.com/BrainLesion/BraTS), the package features intuitive tutorials designed for users with minimal programming experience, enabling both researchers and clinicians to easily deploy winning BraTS algorithms for inference. By abstracting the complexities of modern deep learning, BraTS orchestrator democratizes access to the specialized knowledge developed within the BraTS community, making these advances readily available to broader neuro-radiology and neuro-oncology audiences.

Impact of Deep Learning-Based Image Conversion on Fully Automated Coronary Artery Calcium Scoring Using Thin-Slice, Sharp-Kernel, Non-Gated, Low-Dose Chest CT Scans: A Multi-Center Study.

Kim C, Hong S, Choi H, Yoo WS, Kim JY, Chang S, Park CH, Hong SJ, Yang DH, Yong HS, van Assen M, De Cecco CN, Suh YJ

PubMed · Jun 13, 2025
To evaluate the impact of deep learning-based image conversion on the accuracy of automated coronary artery calcium quantification using thin-slice, sharp-kernel, non-gated, low-dose chest computed tomography (LDCT) images collected from multiple institutions. A total of 225 pairs of LDCT and calcium scoring CT (CSCT) images, scanned at 120 kVp and acquired from the same patient within a 6-month interval, were retrospectively collected from four institutions. Image conversion was performed on the LDCT images using proprietary software to simulate conventional CSCT. This process included 1) deep learning-based kernel conversion of low-dose, high-frequency, sharp kernels to simulate standard-dose, low-frequency kernels, and 2) thickness conversion using the raysum method to convert 1-mm or 1.25-mm thickness images to 3-mm thickness. Automated Agatston scoring was conducted on the LDCT scans before (LDCT-Org<sub>auto</sub>) and after image conversion (LDCT-CONV<sub>auto</sub>). Manual scoring was performed on the CSCT images (CSCT<sub>manual</sub>) and used as the reference standard. The accuracy of the automated Agatston scores and risk severity categorization on LDCT scans was analyzed against the reference standard using Bland-Altman analysis, the concordance correlation coefficient (CCC), and the weighted kappa (κ) statistic. LDCT-CONV<sub>auto</sub> demonstrated a smaller Agatston score bias relative to CSCT<sub>manual</sub> than LDCT-Org<sub>auto</sub> did (-3.45 vs. 206.7). LDCT-CONV<sub>auto</sub> showed a higher CCC than LDCT-Org<sub>auto</sub> (0.881 [95% confidence interval {CI}, 0.750-0.960] vs. 0.269 [95% CI, 0.129-0.430]). In terms of risk category assignment, LDCT-Org<sub>auto</sub> exhibited poor agreement with CSCT<sub>manual</sub> (weighted κ = 0.115 [95% CI, 0.082-0.154]), whereas LDCT-CONV<sub>auto</sub> achieved good agreement (weighted κ = 0.792 [95% CI, 0.731-0.847]).
Deep learning-based conversion of LDCT images originally obtained with thin slices and a sharp kernel can enhance the accuracy of automated coronary artery calcium scoring on these images.
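The concordance correlation coefficient reported above is conventionally Lin's CCC, which penalizes both poor correlation and systematic bias between two measurements. A minimal sketch, with hypothetical automated and manual Agatston scores (not the study's data):

```python
import numpy as np

def lin_ccc(x, y) -> float:
    """Lin's concordance correlation coefficient between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # population variances
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

auto = np.array([0.0, 10.0, 105.0, 420.0, 1150.0])   # hypothetical automated scores
manual = np.array([0.0, 12.0, 98.0, 400.0, 1190.0])  # hypothetical manual reference
print(round(lin_ccc(auto, manual), 3))  # close to 1: near-perfect agreement
```

Unlike Pearson's r, the CCC drops toward 0 when one method is systematically offset, which is why it suits comparing automated scores against a manual reference.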

Beyond Benchmarks: Towards Robust Artificial Intelligence Bone Segmentation in Socio-Technical Systems

Xie, K., Gruber, L. J., Crampen, M., Li, Y., Ferreira, A., Tappeiner, E., Gillot, M., Schepers, J., Xu, J., Pankert, T., Beyer, M., Shahamiri, N., ten Brink, R., Dot, G., Weschke, C., van Nistelrooij, N., Verhelst, P.-J., Guo, Y., Xu, Z., Bienzeisler, J., Rashad, A., Flügge, T., Cotton, R., Vinayahalingam, S., Ilesan, R., Raith, S., Madsen, D., Seibold, C., Xi, T., Berge, S., Nebelung, S., Kodym, O., Sundqvist, O., Thieringer, F., Lamecker, H., Coppens, A., Potrusil, T., Kraeima, J., Witjes, M., Wu, G., Chen, X., Lambrechts, A., Cevidanes, L. H. S., Zachow, S., Hermans, A., Truhn, D., Alves,

medRxiv preprint · Jun 13, 2025
Despite the advances in automated medical image segmentation, AI models still underperform in various clinical settings, challenging real-world integration. In this multicenter evaluation, we analyzed 20 state-of-the-art mandibular segmentation models across 19,218 segmentations of 1,000 clinically resampled CT/CBCT scans. We show that segmentation accuracy varies by up to 25% depending on socio-technical factors such as voxel size, bone orientation, and patient conditions such as osteosynthesis or pathology. Higher sharpness, isotropic smaller voxels, and neutral orientation significantly improved results, while metallic osteosynthesis and anatomical complexity led to significant degradation. Our findings challenge the common view of AI models as "plug-and-play" tools and suggest evidence-based optimization recommendations for both clinicians and developers. This will in turn boost the integration of AI segmentation tools in routine healthcare.

Clinically reported covert cerebrovascular disease and risk of neurological disease: a whole-population cohort of 395,273 people using natural language processing

Iveson, M. H., Mukherjee, M., Davidson, E. M., Zhang, H., Sherlock, L., Ball, E. L., Mair, G., Hosking, A., Whalley, H., Poon, M. T. C., Wardlaw, J. M., Kent, D., Tobin, R., Grover, C., Alex, B., Whiteley, W. N.

medRxiv preprint · Jun 13, 2025
Importance: Understanding the relevance of covert cerebrovascular disease (CCD) for later health will allow clinicians to more effectively monitor and target interventions. Objective: To examine the association between clinically reported CCD, measured using natural language processing (NLP), and subsequent disease risk. Design, Setting and Participants: We conducted a retrospective e-cohort study using linked health record data. From all people with clinical brain imaging in Scotland from 2010 to 2018, we selected people with no prior hospitalisation for neurological disease. The data were analysed from March 2024 to June 2025. Exposure: Four phenotypes were identified with NLP of imaging reports: white matter hypoattenuation or hyperintensities (WMH), lacunes, cortical infarcts and cerebral atrophy. Main outcomes and measures: Adjusted hazard ratios (aHR) for stroke, dementia, and Parkinson's disease (conditions previously associated with CCD), epilepsy (a brain-based control condition) and colorectal cancer (a non-brain control condition), adjusted for age, sex, deprivation, region, scan modality, and pre-scan healthcare, were calculated for each phenotype. Results: Of 395,273 people with brain imaging and no history of neurological disease, 145,978 (37%) had ≥1 phenotype. For each phenotype, the aHR of any stroke was: WMH 1.4 (95% CI: 1.3-1.4), lacunes 1.6 (1.5-1.6), cortical infarct 1.7 (1.6-1.8), and cerebral atrophy 1.1 (1.0-1.1). The aHR of any dementia was: WMH 1.3 (1.3-1.3), lacunes 1.0 (0.9-1.0), cortical infarct 1.1 (1.0-1.1) and cerebral atrophy 1.7 (1.7-1.7). The aHR of Parkinson's disease was, in people with a report of: WMH 1.1 (1.0-1.2), lacunes 1.1 (0.9-1.2), cortical infarct 0.7 (0.6-0.9) and cerebral atrophy 1.4 (1.3-1.5). The aHRs between CCD phenotypes and epilepsy and colorectal cancer overlapped the null.
Conclusions and Relevance: NLP identified CCD and atrophy phenotypes from routine clinical image reports, and these had important associations with future stroke, dementia and Parkinson's disease. Prevention of neurological disease in people with CCD should be a priority for healthcare providers and policymakers. Key Points. Question: Are measures of covert cerebrovascular disease (CCD) associated with the risk of subsequent disease (stroke, dementia, Parkinson's disease, epilepsy, and colorectal cancer)? Findings: This study used a validated NLP algorithm to identify CCD (white matter hypoattenuation/hyperintensities, lacunes, cortical infarcts) and cerebral atrophy from both MRI and computed tomography (CT) imaging reports generated during routine healthcare in >395K people in Scotland. In adjusted models, we demonstrate higher risk of dementia (particularly Alzheimer's disease) in people with atrophy, and higher risk of stroke in people with cortical infarcts. Associations with an age-associated control outcome (colorectal cancer) were neutral, supporting a causal relationship. The study also highlights differential associations between cerebral atrophy and dementia, and between cortical infarcts and stroke risk. Meaning: CCD or atrophy on brain imaging reports in routine clinical practice is associated with a higher risk of stroke or dementia. Evidence is needed to support treatment strategies to reduce this risk. NLP can identify these important, otherwise uncoded, disease phenotypes, allowing research at scale into imaging-based biomarkers of dementia and stroke.

Radiomics and machine learning for predicting valve vegetation in infective endocarditis: a comparative analysis of mitral and aortic valves using TEE imaging.

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

PubMed · Jun 12, 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves, with and without vegetation, using radiomics and machine learning. A total of 286 TEE scans from suspected IE cases (August 2023-November 2024) were analysed, alongside 113 rejected IE cases serving as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification using machine learning models, including Random Forest, Decision Tree, SVM, k-NN, and XGBoost. To evaluate the models, AUC, ROC curves, and Decision Curve Analysis (DCA) were used. For native mitral valves, SVM achieved the highest performance, with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves also showed the best results with SVM (AUC: 0.85, sensitivity: 0.73, specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86, sensitivity: 0.87, specificity: 0.86), while Random Forest excelled for mechanical aortic valves (AUC: 0.81, sensitivity: 0.89, specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods such as PET/CT.
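The AUC reported for these models has a rank-based interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with hypothetical classifier scores and labels:

```python
import numpy as np

def auc_mann_whitney(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC as the probability that a positive case outscores a negative one
    (rank-based Mann-Whitney formulation; ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for vegetation (1) vs. control (0) frames
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,   0,   0])
print(round(auc_mann_whitney(scores, labels), 3))  # 11/12 ≈ 0.917
```

An AUC of 0.88, as for the native mitral SVM above, therefore means an 88% chance that a frame with vegetation receives a higher score than one without.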

High visceral-to-subcutaneous fat area ratio is an unfavorable prognostic indicator in patients with uterine sarcoma.

Kurokawa M, Gonoi W, Hanaoka S, Kurokawa R, Uehara S, Kato M, Suzuki M, Toyohara Y, Takaki Y, Kusakabe M, Kino N, Tsukazaki T, Unno T, Sone K, Abe O

PubMed · Jun 12, 2025
Uterine sarcoma is a rare disease whose association with body composition parameters is poorly understood. This study explored the impact of body composition parameters on overall survival in patients with uterine sarcoma. This multicenter study included 52 patients with uterine sarcomas treated at three Japanese hospitals between 2007 and 2023. A semi-automatic segmentation program based on deep learning analyzed transaxial CT images at the L3 vertebral level, calculating the following body composition parameters: area indices (areas divided by height squared) of skeletal muscle, visceral and subcutaneous adipose tissue (SMI, VATI, and SATI, respectively); skeletal muscle density; and the visceral-to-subcutaneous fat area ratio (VSR). Optimal cutoff values for each parameter were calculated using maximally selected rank statistics with several p-value approximations. The effects of body composition parameters and clinical data on overall survival (OS) and cancer-specific survival (CSS) were analyzed. Univariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) and high VSR were unfavorable prognostic factors for both OS and CSS. Multivariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) (hazard ratios (HRs), 4.67 for OS and 4.36 for CSS, p < 0.01) and high VSR (HRs, 9.36 for OS and 8.22 for CSS, p < 0.001) were poor prognostic factors for both OS and CSS. Added value was observed when the VSR was incorporated into the OS and CSS prediction models. Increased VSR and tumor stage are significant predictors of poor overall survival in patients with uterine sarcoma.
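Maximally selected rank statistics, used here to find cutoff values, scan candidate cutoffs and keep the one that maximizes the standardized rank-sum statistic between the two groups it induces. A simplified sketch (hypothetical VSR and survival values; the study additionally applies p-value approximations to correct for testing many cutoffs, which this omits):

```python
import numpy as np

def max_selected_rank_cutoff(values, outcome):
    """Pick the cutoff maximizing the absolute standardized rank-sum
    statistic between the induced low/high groups (no multiple-testing
    correction; assumes the outcome has no ties)."""
    values = np.asarray(values, float)
    ranks = np.argsort(np.argsort(outcome)) + 1.0  # ranks of the outcome
    n = len(values)
    best_z, best_cut = -np.inf, None
    for cut in np.unique(values)[:-1]:  # exclude top value: high group nonempty
        high = values > cut
        m = high.sum()
        s = ranks[high].sum()
        mean_s = m * (n + 1) / 2.0
        var_s = m * (n - m) * (n + 1) / 12.0
        z = abs(s - mean_s) / np.sqrt(var_s)
        if z > best_z:
            best_z, best_cut = z, cut
    return best_cut, best_z

vsr = [0.5, 0.6, 0.7, 1.5, 1.6, 1.7]   # hypothetical VSR values
survival = [60, 55, 50, 12, 10, 8]     # months of overall survival
cut, z = max_selected_rank_cutoff(vsr, survival)
print(cut, round(z, 2))  # cutoff 0.7 cleanly separates short from long survival
```

Because the cutoff is chosen to maximize separation, the subsequent p-value must be adjusted, which is what the "several p-value approximations" in the abstract refer to.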

Radiogenomic correlation of hypoxia-related biomarkers in clear cell renal cell carcinoma.

Shao Y, Cen HS, Dhananjay A, Pawan SJ, Lei X, Gill IS, D'souza A, Duddalwar VA

PubMed · Jun 12, 2025
This study aimed to evaluate radiomic models' ability to predict hypoxia-related biomarker expression in clear cell renal cell carcinoma (ccRCC). Clinical and molecular data from 190 patients were extracted from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma dataset, and corresponding CT imaging data were manually segmented from The Cancer Imaging Archive. A panel of 2,824 radiomic features was analyzed, and robust, high-interscanner-reproducibility features were selected. Gene expression data for 13 hypoxia-related biomarkers were stratified by tumor grade (1/2 vs. 3/4) and stage (I/II vs. III/IV) and analyzed using the Wilcoxon rank sum test. Machine learning modeling was conducted using the High-Performance Random Forest (RF) procedure in SAS Enterprise Miner 15.1, with significance set at P < 0.05. Descriptive univariate analysis revealed significantly lower expression of several biomarkers in high-grade and late-stage tumors, with KLF6 showing the most notable decrease. The RF model effectively predicted the expression of KLF6, ETS1, and BCL2, as well as PLOD2 and PPARGC1A underexpression. Stratified performance assessment showed improved predictive ability for RORA, BCL2, and KLF6 in high-grade tumors and for ETS1 across grades, with no significant performance difference across grade or stage. The RF model demonstrated modest but significant associations between texture metrics derived from clinical CT scans, such as GLDM and GLCM, and key hypoxia-related biomarkers including KLF6, BCL2, ETS1, and PLOD2. These findings suggest that radiomic analysis could support ccRCC risk stratification and personalized treatment planning by providing non-invasive insights into tumor biology.
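The Wilcoxon rank sum test used to compare biomarker expression across tumor grades reduces, under the normal approximation, to a z-statistic on the rank sum of one group. A minimal sketch with hypothetical KLF6 expression values (no tie correction):

```python
import numpy as np

def rank_sum_z(group_a, group_b):
    """Normal-approximation Wilcoxon rank-sum z-statistic (no tie correction).
    Positive z means group A tends to have higher values than group B."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    combined = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(combined)) + 1.0  # assumes no ties
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()                       # rank sum of group A
    mean_w = n1 * (n1 + n2 + 1) / 2.0
    var_w = n1 * n2 * (n1 + n2 + 1) / 12.0
    return (w - mean_w) / np.sqrt(var_w)

# Hypothetical KLF6 expression: low-grade vs. high-grade tumors
low_grade  = [8.1, 7.9, 8.4, 8.0, 8.3]
high_grade = [6.9, 7.2, 7.0, 6.8, 7.1]
z = rank_sum_z(low_grade, high_grade)
print(round(z, 3))  # positive: higher expression in low-grade tumors
```

A large positive z here mirrors the abstract's finding of significantly lower expression in high-grade tumors; at these small sample sizes an exact test would be preferred in practice.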

Tackling the Tumor Heterogeneity Issue: Transformer-Based Multiple Instance Enhancement Learning for Predicting EGFR Mutation via CT Images.

Fang Y, Wang M, Song Q, Cao C, Gao Z, Song B, Min X, Li A

PubMed · Jun 12, 2025
Accurate and non-invasive prediction of epidermal growth factor receptor (EGFR) mutation is crucial for the diagnosis and treatment of non-small cell lung cancer (NSCLC). While computed tomography (CT) imaging shows promise in identifying EGFR mutation, current prediction methods heavily rely on fully supervised learning, which overlooks the substantial heterogeneity of tumors and therefore leads to suboptimal results. To tackle the tumor heterogeneity issue, this study introduces a novel weakly supervised method named TransMIEL, which leverages multiple instance learning techniques for accurate EGFR mutation prediction. Specifically, we first propose an innovative instance enhancement learning (IEL) strategy that strengthens the discriminative power of instance features for complex tumor CT images by exploring self-derived soft pseudo-labels. Next, to improve tumor representation capability, we design a spatial-aware transformer (SAT) that fully captures inter-instance relationships of different pathological subregions to mirror the diagnostic processes of radiologists. Finally, an instance adaptive gating (IAG) module is developed to effectively emphasize the contribution of informative instance features in heterogeneous tumors, facilitating dynamic instance feature aggregation and improving model generalization. Experimental results demonstrate that TransMIEL significantly outperforms existing fully and weakly supervised methods on both public and in-house NSCLC datasets. Additionally, visualization results show that our approach can highlight intra-tumor and peri-tumor areas relevant to EGFR mutation status. Therefore, our method holds significant potential as an effective tool for EGFR prediction and offers a novel perspective for future research on tumor heterogeneity.
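The core idea behind adaptive instance gating (score each instance, normalize the scores into weights, and pool the bag accordingly) can be illustrated with a toy attention-weighted aggregation. This is an illustrative stand-in, not the paper's IAG module; the linear scorer `w_score` and all shapes are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_aggregate(instance_feats, w_score):
    """Toy instance-adaptive aggregation: score each instance (a tumor
    sub-region embedding), softmax the scores into attention weights,
    and pool the bag as a weighted sum."""
    scores = instance_feats @ w_score          # (n_instances,)
    weights = softmax(scores)                  # nonnegative, sum to 1
    bag_embedding = weights @ instance_feats   # weighted sum over instances
    return bag_embedding, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))    # 6 instances (CT patches), 4-dim features
w = rng.normal(size=4)             # hypothetical learned scoring vector
bag, attn = gated_aggregate(feats, w)
print(bag.shape, round(float(attn.sum()), 6))
```

In a weakly supervised setting, only the bag (tumor) label supervises training, so the attention weights are what lets informative sub-regions dominate the pooled representation.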

Task Augmentation-Based Meta-Learning Segmentation Method for Retinopathy.

Wang J, Mateen M, Xiang D, Zhu W, Shi F, Huang J, Sun K, Dai J, Xu J, Zhang S, Chen X

PubMed · Jun 12, 2025
Deep learning (DL) requires large amounts of labeled data, which are extremely time-consuming and labor-intensive to obtain for medical image segmentation tasks. Meta-learning focuses on developing learning strategies that enable quick adaptation to new tasks with limited labeled data. However, rich-class medical image segmentation datasets for constructing meta-learning multi-tasks are currently unavailable. In addition, data collected from various healthcare sites and devices may present significant distribution differences, potentially degrading model performance. In this paper, we propose a task augmentation-based meta-learning method for retinal image segmentation (TAMS) to reduce the labor-intensive annotation demand. A retinal Lesion Simulation Algorithm (LSA) is proposed to automatically generate multi-class retinal disease datasets with pixel-level segmentation labels, so that meta-learning tasks can be augmented without collecting data from various sources. In addition, a novel simulation function library is designed to control the generation process and ensure interpretability. Moreover, a generative simulation network (GSNet) with an improved adversarial training strategy is introduced to maintain high-quality representations of complex retinal diseases. TAMS is evaluated on three different OCT and CFP image datasets, and comprehensive experiments demonstrate that TAMS achieves superior segmentation performance compared with state-of-the-art models.
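Lesion simulation in the spirit of the LSA can be illustrated by stamping random blobs into a binary mask, yielding pixel-level labels for free. This is a crude toy sketch, not the paper's algorithm; every parameter here is an assumption:

```python
import numpy as np

def simulate_lesion_mask(shape=(64, 64), n_lesions=3, radius_range=(3, 8), seed=0):
    """Toy lesion simulator: stamp random ellipse-like blobs into a binary
    mask, producing a pixel-level segmentation label with no manual
    annotation. Radii are drawn from [low, high)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=np.uint8)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for _ in range(n_lesions):
        cy, cx = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        ry, rx = rng.integers(*radius_range, size=2)
        blob = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        mask[blob] = 1
    return mask

m = simulate_lesion_mask()
print(m.shape, m.dtype, int(m.sum()) > 0)
```

Generating many such masks with varying lesion counts and shapes is one way to mint the distinct segmentation tasks that meta-learning episodes require, which is the gap the paper's LSA addresses far more realistically.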
