
Florian Kofler, Marcel Rosier, Mehdi Astaraki, Ujjwal Baid, Hendrik Möller, Josef A. Buchner, Felix Steinbauer, Eva Oswald, Ezequiel de la Rosa, Ivan Ezhov, Constantin von See, Jan Kirschke, Anton Schmick, Sarthak Pati, Akis Linardos, Carla Pitarch, Sanyukta Adap, Jeffrey Rudie, Maria Correia de Verdier, Rachit Saluja, Evan Calabrese, Dominic LaBella, Mariam Aboian, Ahmed W. Moawad, Nazanin Maleki, Udunna Anazodo, Maruf Adewole, Marius George Linguraru, Anahita Fathi Kazerooni, Zhifan Jiang, Gian Marco Conte, Hongwei Li, Juan Eugenio Iglesias, Spyridon Bakas, Benedikt Wiestler, Marie Piraud, Bjoern Menze

arXiv preprint · Jun 13 2025
The Brain Tumor Segmentation (BraTS) cluster of challenges has significantly advanced brain tumor image analysis by providing large, curated datasets and addressing clinically relevant tasks. However, despite its success and popularity, algorithms and models developed through BraTS have seen limited adoption in both scientific and clinical communities. To accelerate their dissemination, we introduce BraTS orchestrator, an open-source Python package that provides seamless access to state-of-the-art segmentation and synthesis algorithms for diverse brain tumors from the BraTS challenge ecosystem. Available on GitHub (https://github.com/BrainLesion/BraTS), the package features intuitive tutorials designed for users with minimal programming experience, enabling both researchers and clinicians to easily deploy winning BraTS algorithms for inference. By abstracting the complexities of modern deep learning, BraTS orchestrator democratizes access to the specialized knowledge developed within the BraTS community, making these advances readily available to broader neuro-radiology and neuro-oncology audiences.
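For orientation, a minimal inference sketch in the spirit of the package's tutorials (the class and argument names below are assumptions to be checked against the GitHub documentation, not a verified API):

```python
# Hypothetical usage sketch for the BraTS orchestrator package
# (https://github.com/BrainLesion/BraTS). Class and argument names
# are assumptions drawn from the abstract, not a confirmed API.
from brats import AdultGliomaPreTreatmentSegmenter  # assumed class name

segmenter = AdultGliomaPreTreatmentSegmenter()  # defaults to a winning BraTS algorithm
segmenter.infer_single(
    t1c="patient/t1c.nii.gz",   # contrast-enhanced T1
    t1n="patient/t1n.nii.gz",   # native T1
    t2f="patient/t2f.nii.gz",   # T2-FLAIR
    t2w="patient/t2w.nii.gz",   # T2-weighted
    output_file="patient/tumor_segmentation.nii.gz",
)
```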

Kim C, Hong S, Choi H, Yoo WS, Kim JY, Chang S, Park CH, Hong SJ, Yang DH, Yong HS, van Assen M, De Cecco CN, Suh YJ

PubMed paper · Jun 13 2025
To evaluate the impact of deep learning-based image conversion on the accuracy of automated coronary artery calcium quantification using thin-slice, sharp-kernel, non-gated, low-dose chest computed tomography (LDCT) images collected from multiple institutions. A total of 225 pairs of LDCT and calcium scoring CT (CSCT) images scanned at 120 kVp and acquired from the same patient within a 6-month interval were retrospectively collected from four institutions. Image conversion was performed on the LDCT images using proprietary software to simulate conventional CSCT. This process included (1) deep learning-based kernel conversion of the low-dose, high-frequency, sharp kernels to simulate standard-dose, low-frequency kernels, and (2) thickness conversion using the ray-sum method to convert 1-mm or 1.25-mm slices to 3-mm thickness. Automated Agatston scoring was conducted on the LDCT scans before (LDCT-Org_auto) and after image conversion (LDCT-CONV_auto). Manual scoring was performed on the CSCT images (CSCT_manual) and used as the reference standard. The accuracy of the automated Agatston scores and of the resulting risk severity categorization on LDCT was analyzed against the reference standard using Bland-Altman analysis, the concordance correlation coefficient (CCC), and the weighted kappa (κ) statistic. LDCT-CONV_auto showed a smaller bias in Agatston score relative to CSCT_manual than LDCT-Org_auto did (-3.45 vs. 206.7) and a higher CCC (0.881 [95% confidence interval (CI): 0.750-0.960] vs. 0.269 [95% CI: 0.129-0.430]). In risk category assignment, LDCT-Org_auto exhibited poor agreement with CSCT_manual (weighted κ = 0.115 [95% CI: 0.082-0.154]), whereas LDCT-CONV_auto achieved good agreement (weighted κ = 0.792 [95% CI: 0.731-0.847]). Deep learning-based conversion of LDCT images originally acquired with thin slices and a sharp kernel can enhance the accuracy of automated coronary artery calcium scoring on such images.
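For context, the Agatston rule that the automated scoring implements is straightforward: threshold at 130 HU, weight each connected lesion by its peak attenuation, and sum area-weighted scores over 3-mm slices. A simplified sketch (not the proprietary software used in the study; minimum-lesion-size handling varies between implementations):

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice: np.ndarray, pixel_area_mm2: float) -> float:
    """Score one 3-mm axial slice; the total Agatston score sums over slices."""
    mask = hu_slice >= 130                   # calcium threshold in HU
    labels, n_lesions = ndimage.label(mask)  # connected candidate lesions
    score = 0.0
    for i in range(1, n_lesions + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < 1.0:                       # drop sub-mm^2 specks (common convention)
            continue
        peak = hu_slice[lesion].max()
        if peak >= 400:
            weight = 4
        elif peak >= 300:
            weight = 3
        elif peak >= 200:
            weight = 2
        else:
            weight = 1
        score += area * weight
    return score
```

The total score is then commonly mapped to standard risk categories (0, 1-10, 11-100, 101-400, >400).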

Pasumarthi, S., Campbell Arnold, T., Colombo, S., Rudie, J. D., Andre, J. B., Elor, R., Gulaka, P., Shankaranarayanan, A., Erb, G., Zaharchuk, G.

medRxiv preprint · Jun 13 2025
Background: Gadolinium-based contrast agents (GBCAs) are used in brain MRI exams to improve the visualization of pathology and the delineation of lesions. Higher doses of GBCAs can improve lesion sensitivity but involve substantial deviation from standard-of-care procedures and may have safety implications, particularly in light of recent findings on gadolinium retention and deposition. Purpose: To evaluate the clinical performance of an FDA-cleared deep-learning (DL) based contrast boosting algorithm in routine clinical brain MRI exams. Methods: A multi-center retrospective database of contrast-enhanced brain MRI images (obtained from April 2017 to December 2023) was used to evaluate a DL-based contrast boosting algorithm. Pre-contrast and standard post-contrast (SC) images were processed with the algorithm to obtain contrast-boosted (CB) images. Quantitative performance of CB images relative to SC images was assessed using contrast-to-noise ratio (CNR), lesion-to-brain ratio (LBR), and contrast enhancement percentage (CEP). Three board-certified radiologists reviewed CB and SC images side by side and rated them on a 4-point Likert scale for lesion contrast enhancement, border delineation, internal morphology, overall image quality, presence of artefacts, and changes in vessel conspicuity. The presence, cause, and severity of any false lesions were recorded. CB results were compared to SC using the Wilcoxon signed-rank test. Results: Brain MRI images from 110 patients (47 ± 22 years; 52 female, 47 male, 11 N/A) were evaluated. CB images showed superior quantitative performance to SC images in terms of CNR (+634%), LBR (+70%), and CEP (+150%). In the qualitative assessment, CB images showed better lesion visualization (3.73 vs 3.16) and better image quality (3.55 vs 3.07). Readers were able to rule out all false lesions on CB by using SC for comparison. Conclusions: Deep learning-based contrast boosting improves lesion visualization and image quality without increasing contrast dosage. Key results: (1) In a retrospective study of 110 patients, deep-learning-based contrast-boosted (CB) images showed better lesion visualization than standard post-contrast (SC) brain MRI images (3.73 vs 3.16; mean reader scores on a 4-point Likert scale). (2) CB images had better overall image quality than SC images (3.55 vs 3.07). (3) Contrast-to-noise ratio, lesion-to-brain ratio, and contrast enhancement percentage were significantly higher for CB than for SC images (+729%, +88%, and +165%; p < 0.001). Summary statement: Deep-learning-based contrast boosting achieves better lesion visualization and overall image quality and provides more contrast information, without increasing the contrast dosage in contrast-enhanced brain MR protocols.
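The three quantitative metrics are simple region-of-interest intensity statistics. A minimal sketch using common definitions (the paper's exact ROI and noise conventions are not given in the abstract, so these are assumptions):

```python
import numpy as np

def cnr(lesion: np.ndarray, background: np.ndarray) -> float:
    # Contrast-to-noise ratio: lesion/background mean difference over background noise.
    return (lesion.mean() - background.mean()) / background.std()

def lbr(lesion: np.ndarray, brain: np.ndarray) -> float:
    # Lesion-to-brain ratio of mean ROI intensities.
    return lesion.mean() / brain.mean()

def cep(pre: np.ndarray, post: np.ndarray) -> float:
    # Contrast enhancement percentage of an ROI between pre- and post-contrast images.
    return 100.0 * (post.mean() - pre.mean()) / pre.mean()
```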

Fabelo, H., Ramallo-Farina, Y., Morera, J., Pineiro, J. F., Lagares, A., Jimenez-Roldan, L., Burstrom, G., Garcia-Bello, M. A., Garcia-Perez, L., Falero, R., Gonzalez, M., Duque, S., Rodriguez-Jimenez, C., Hernandez, M., Delgado-Sanchez, J. J., Paredes, A. B., Hernandez, G., Ponce, P., Leon, R., Gonzalez-Martin, J. M., Rodriguez-Esparragon, F., Callico, G. M., Wagner, A. M., Clavo, B., STRATUM,

medRxiv preprint · Jun 13 2025
Introduction: Integrated digital diagnostics can support complex surgeries in many anatomic sites, and brain tumour surgery represents one of the most complex cases. Neurosurgeons face several challenges during brain tumour surgeries, such as differentiating critical tissue from brain tumour margins. To overcome these challenges, the STRATUM project will develop a 3D decision support tool for brain surgery guidance and diagnostics based on multimodal data processing, including hyperspectral imaging, integrated as a point-of-care computing tool in neurosurgical workflows. This paper reports the protocol for the development and technical validation of the STRATUM tool. Methods and analysis: This international, multicentre, prospective, open, observational cohort study, STRATUM-OS (study: 28 months, pre-recruitment: 2 months, recruitment: 20 months, follow-up: 6 months), with no control group, will collect data from 320 patients undergoing standard neurosurgical procedures to: (1) develop and technically validate the STRATUM tool, and (2) collect the outcome measures for comparing the standard procedure versus the standard procedure plus the use of the STRATUM tool during surgery in a subsequent historically controlled, non-randomized clinical trial. Ethics and dissemination: The protocol was approved by the participating Ethics Committees. Results will be disseminated at scientific conferences and in peer-reviewed journals. Trial registration number: [Pending Number]. Strengths and limitations of this study: (1) STRATUM-OS will be the first multicentre prospective observational study to develop and technically validate a real-time 3D decision support tool for brain surgery guidance and diagnostics based on artificial intelligence and multimodal data processing, including the emerging hyperspectral imaging modality. (2) The study encompasses prospective collection of multimodal pre-, intra-, and postoperative medical data, including innovative imaging modalities, from patients with intra-axial brain tumours. (3) This large observational study will act as a historical control in a subsequent clinical trial to evaluate a fully working prototype. (4) Although the estimated sample size is deemed adequate for the purpose of the study, the complexity of the clinical context and the type of surgery could potentially lead to under-recruitment and under-representation of less prevalent tumour types.

Xie, K., Gruber, L. J., Crampen, M., Li, Y., Ferreira, A., Tappeiner, E., Gillot, M., Schepers, J., Xu, J., Pankert, T., Beyer, M., Shahamiri, N., ten Brink, R., Dot, G., Weschke, C., van Nistelrooij, N., Verhelst, P.-J., Guo, Y., Xu, Z., Bienzeisler, J., Rashad, A., Flügge, T., Cotton, R., Vinayahalingam, S., Ilesan, R., Raith, S., Madsen, D., Seibold, C., Xi, T., Berge, S., Nebelung, S., Kodym, O., Sundqvist, O., Thieringer, F., Lamecker, H., Coppens, A., Potrusil, T., Kraeima, J., Witjes, M., Wu, G., Chen, X., Lambrechts, A., Cevidanes, L. H. S., Zachow, S., Hermans, A., Truhn, D., Alves,

medRxiv preprint · Jun 13 2025
Despite advances in automated medical image segmentation, AI models still underperform in various clinical settings, challenging real-world integration. In this multicenter evaluation, we analyzed 20 state-of-the-art mandibular segmentation models across 19,218 segmentations of 1,000 clinically resampled CT/CBCT scans. We show that segmentation accuracy varies by up to 25% depending on technical factors such as voxel size and bone orientation, and on patient conditions such as osteosynthesis or pathology. Higher image sharpness, smaller isotropic voxels, and neutral orientation significantly improved results, while metallic osteosynthesis and anatomical complexity led to significant degradation. Our findings challenge the common view of AI models as "plug-and-play" tools and yield evidence-based optimization recommendations for both clinicians and developers. This, in turn, should support the integration of AI segmentation tools into routine healthcare.
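Given that voxel size and orientation were among the dominant factors, a common mitigation is to resample scans to small isotropic voxels before inference. A generic preprocessing sketch with SimpleITK (illustrative, not the study's pipeline):

```python
import SimpleITK as sitk

def resample_isotropic(image: sitk.Image, spacing_mm: float = 0.5) -> sitk.Image:
    """Resample a CT/CBCT volume to isotropic voxels with linear interpolation."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    new_size = [int(round(sz * sp / spacing_mm))
                for sz, sp in zip(old_size, old_spacing)]
    return sitk.Resample(
        image, new_size, sitk.Transform(), sitk.sitkLinear,
        image.GetOrigin(), (spacing_mm,) * 3, image.GetDirection(),
        -1000,  # pad with air (HU)
        image.GetPixelID(),
    )
```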

Cepeda, S., Esteban-Sinovas, O., Arrese, I., Sarabia, R.

medRxiv preprint · Jun 13 2025
Background: Intracranial hemorrhage (ICH), whether spontaneous or traumatic, is a neurological emergency with high morbidity and mortality. Accurate assessment of severity is essential for neurosurgical decision-making. This study aimed to develop and evaluate a fully automated, deep learning-based tool for the standardized assessment of ICH severity, based on segmentation of the hemorrhage and intracranial structures and computation of an objective severity index. Methods: Non-contrast cranial CT scans from patients with spontaneous or traumatic ICH were retrospectively collected from public datasets and a tertiary care center. Deep learning models were trained to segment hemorrhages and intracranial structures. These segmentations were used to compute a severity index reflecting bleeding burden and mass effect through volumetric relationships. Segmentation performance was evaluated on a hold-out test cohort. In a prospective cohort, the severity index was assessed in relation to expert-rated CT severity, clinical outcomes, and the need for urgent neurosurgical intervention. Results: A total of 1,110 non-contrast cranial CT scans were analyzed, 900 from the retrospective cohort and 200 from the prospective evaluation cohort. The binary segmentation model achieved a median Dice score of 0.90 for total hemorrhage. The multilabel model yielded Dice scores ranging from 0.55 to 0.94 across hemorrhage subtypes. The severity index correlated significantly with expert-rated CT severity (p < 0.001), the modified Rankin Scale (p = 0.007), and the Glasgow Outcome Scale-Extended (p = 0.039), and independently predicted the need for urgent surgery (p < 0.001). A threshold of approximately 300 was identified as a decision point for surgical management (AUC = 0.83). Conclusion: We developed a fully automated and openly accessible pipeline for the analysis of non-contrast cranial CT in intracranial hemorrhage. It computes a novel index that objectively quantifies hemorrhage severity and is significantly associated with clinically relevant outcomes, including the need for urgent neurosurgical intervention.
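Segmentation performance is reported as Dice overlap; for reference, the standard computation on binary masks is:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```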

Iveson, M. H., Mukherjee, M., Davidson, E. M., Zhang, H., Sherlock, L., Ball, E. L., Mair, G., Hosking, A., Whalley, H., Poon, M. T. C., Wardlaw, J. M., Kent, D., Tobin, R., Grover, C., Alex, B., Whiteley, W. N.

medRxiv preprint · Jun 13 2025
Importance: Understanding the relevance of covert cerebrovascular disease (CCD) for later health will allow clinicians to more effectively monitor and target interventions. Objective: To examine the association between clinically reported CCD, measured using natural language processing (NLP), and subsequent disease risk. Design, setting, and participants: We conducted a retrospective e-cohort study using linked health record data. From all people with clinical brain imaging in Scotland from 2010 to 2018, we selected people with no prior hospitalisation for neurological disease. The data were analysed from March 2024 to June 2025. Exposure: Four phenotypes were identified with NLP of imaging reports: white matter hypoattenuation or hyperintensities (WMH), lacunes, cortical infarcts, and cerebral atrophy. Main outcomes and measures: Adjusted hazard ratios (aHR) for stroke, dementia, and Parkinson's disease (conditions previously associated with CCD), epilepsy (a brain-based control condition), and colorectal cancer (a non-brain control condition), adjusted for age, sex, deprivation, region, scan modality, and pre-scan healthcare, were calculated for each phenotype. Results: Of 395,273 people with brain imaging and no history of neurological disease, 145,978 (37%) had ≥1 phenotype. For each phenotype, the aHR of any stroke was: WMH 1.4 (95% CI: 1.3-1.4), lacunes 1.6 (1.5-1.6), cortical infarct 1.7 (1.6-1.8), and cerebral atrophy 1.1 (1.0-1.1). The aHR of any dementia was: WMH 1.3 (1.3-1.3), lacunes 1.0 (0.9-1.0), cortical infarct 1.1 (1.0-1.1), and cerebral atrophy 1.7 (1.7-1.7). The aHR of Parkinson's disease was, in people with a report of: WMH 1.1 (1.0-1.2), lacunes 1.1 (0.9-1.2), cortical infarct 0.7 (0.6-0.9), and cerebral atrophy 1.4 (1.3-1.5). The aHRs between CCD phenotypes and epilepsy and colorectal cancer overlapped the null. Conclusions and relevance: NLP identified CCD and atrophy phenotypes from routine clinical imaging reports, and these had important associations with future stroke, dementia, and Parkinson's disease. Prevention of neurological disease in people with CCD should be a priority for healthcare providers and policymakers. Key points: Question: Are measures of covert cerebrovascular disease (CCD) associated with the risk of subsequent disease (stroke, dementia, Parkinson's disease, epilepsy, and colorectal cancer)? Findings: This study used a validated NLP algorithm to identify CCD (white matter hypoattenuation/hyperintensities, lacunes, cortical infarcts) and cerebral atrophy from both MRI and computed tomography (CT) imaging reports generated during routine healthcare in >395K people in Scotland. In adjusted models, we demonstrate higher risk of dementia (particularly Alzheimer's disease) in people with atrophy, and higher risk of stroke in people with cortical infarcts. Associations with an age-associated control outcome (colorectal cancer) were neutral, supporting a causal relationship. The study also highlights differential associations between cerebral atrophy and dementia, and between cortical infarcts and stroke risk. Meaning: CCD or atrophy on brain imaging reports in routine clinical practice is associated with a higher risk of stroke or dementia. Evidence is needed to support treatment strategies to reduce this risk. NLP can identify these important, otherwise uncoded disease phenotypes, allowing research at scale into imaging-based biomarkers of dementia and stroke.
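Adjusted hazard ratios of this kind are conventionally estimated with Cox proportional hazards models. A generic sketch using the lifelines library, with a hypothetical dataframe layout mirroring the adjustment set above (column names are assumptions; categorical covariates are assumed already encoded as numeric):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical layout: one row per person, with follow-up time, an event
# indicator, the exposure phenotype (e.g. WMH), and the adjustment
# covariates listed in the abstract.
df = pd.read_csv("ccd_cohort.csv")  # assumed file name
cols = ["time_years", "stroke_event", "wmh", "age", "sex",
        "deprivation", "region", "scan_modality", "prescan_healthcare"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_years", event_col="stroke_event")
# exp(coef) for "wmh" is the adjusted hazard ratio with its 95% CI.
print(cph.summary.loc["wmh", ["exp(coef)", "exp(coef) lower 95%",
                              "exp(coef) upper 95%"]])
```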

Zhang H, Zheng E, Zheng W, Huang C, Xi Y, Cheng Y, Yu S, Chakraborty S, Bonaccio E, Takabe K, Fan XC, Xu W, Xia J

PubMed paper · Jun 12 2025
We developed an automated photoacoustic and ultrasound breast tomography system that images the patient in a standing pose. The system, named OneTouch-PAT, uses linear transducer arrays with optical-acoustic combiners for effective dual-modal imaging. During scanning, subjects only need to gently press their breasts against the imaging window, and co-registered three-dimensional ultrasonic and photoacoustic images of the breast are obtained within one minute. The system has a large field of view of 17 cm by 15 cm and achieves an imaging depth of 3 cm with sub-millimeter resolution. A three-dimensional deep-learning network was also developed to further improve image quality by sharpening the 3D resolution, enhancing vasculature, eliminating skin signals, and reducing noise. The performance of the system was tested on four healthy subjects and 61 patients with breast cancer. Our results indicate that ultrasound structural information can be combined with photoacoustic vascular information for better tissue characterization. Representative cases from different molecular subtypes showed distinct photoacoustic and ultrasound features that could potentially be used for imaging-based cancer classification. Statistical analysis across all patients indicates that regional photoacoustic intensity and vessel branching points are indicators of breast malignancy. These promising results suggest that our system could significantly enhance breast cancer diagnosis and classification.
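Vessel branching points, one of the reported malignancy indicators, can be counted from a binary vessel mask by skeletonizing it and flagging skeleton pixels with more than two neighbours. A 2-D sketch with scikit-image (an illustrative method; the authors' implementation is not described in the abstract):

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def branching_points(vessel_mask: np.ndarray) -> int:
    """Count branch points in a 2-D binary vessel mask."""
    skel = skeletonize(vessel_mask.astype(bool))
    # Neighbour count for each skeleton pixel; >2 neighbours marks a bifurcation.
    neighbours = ndimage.convolve(skel.astype(int), np.ones((3, 3)),
                                  mode="constant") - skel
    return int(np.sum(skel & (neighbours > 2)))
```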

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

PubMed paper · Jun 12 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves with and without vegetation using radiomics and machine learning. A total of 286 transesophageal echocardiography (TEE) scans from suspected IE cases (August 2023-November 2024) were analysed alongside 113 rejected IE cases, which served as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification with machine learning models including random forest, decision tree, SVM, k-NN, and XGBoost. Models were evaluated using AUC, ROC curves, and decision curve analysis (DCA). For native mitral valves, SVM achieved the highest performance, with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves also showed the best results with SVM (AUC: 0.85; sensitivity: 0.73; specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86; sensitivity: 0.87; specificity: 0.86), while random forest excelled for mechanical aortic valves (AUC: 0.81; sensitivity: 0.89; specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods such as PET/CT.
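A radiomics-plus-classifier pipeline of this general shape can be sketched with pyradiomics and scikit-learn (a generic pattern; the ETVB preprocessing and the study's exact feature set are not reproduced here):

```python
import numpy as np
from radiomics import featureextractor  # pyradiomics
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = featureextractor.RadiomicsFeatureExtractor()

def frame_features(image_path: str, mask_path: str) -> list[float]:
    # Extract radiomics features for one TEE frame, dropping diagnostic metadata.
    result = extractor.execute(image_path, mask_path)
    return [float(v) for k, v in result.items() if not k.startswith("diagnostics")]

def cross_validated_auc(frame_pairs: list[tuple[str, str]],
                        labels: list[int]) -> float:
    # frame_pairs: (image, mask) file paths; labels: 1 = vegetation, 0 = control.
    X = np.array([frame_features(img, msk) for img, msk in frame_pairs])
    y = np.array(labels)
    model = make_pipeline(StandardScaler(), SVC(probability=True))
    probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)
```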

Kurokawa M, Gonoi W, Hanaoka S, Kurokawa R, Uehara S, Kato M, Suzuki M, Toyohara Y, Takaki Y, Kusakabe M, Kino N, Tsukazaki T, Unno T, Sone K, Abe O

PubMed paper · Jun 12 2025
Uterine sarcoma is a rare disease, and its association with body composition parameters is poorly understood. This study explored the impact of body composition parameters on overall survival in uterine sarcoma. This multicenter study included 52 patients with uterine sarcomas treated at three Japanese hospitals between 2007 and 2023. A semi-automatic segmentation program based on deep learning analyzed transaxial CT images at the L3 vertebral level, calculating the following body composition parameters: area indices (area divided by height squared) of skeletal muscle and of visceral and subcutaneous adipose tissue (SMI, VATI, and SATI, respectively); skeletal muscle density; and the visceral-to-subcutaneous fat area ratio (VSR). Optimal cutoff values for each parameter were calculated using maximally selected rank statistics with several p-value approximations. The effects of body composition parameters and clinical data on overall survival (OS) and cancer-specific survival (CSS) were analyzed. Univariate Cox proportional hazards regression revealed that advanced stage (III-IV) and high VSR were unfavorable prognostic factors for both OS and CSS. Multivariate Cox proportional hazards regression confirmed that advanced stage (III-IV) (hazard ratios [HRs]: 4.67 for OS and 4.36 for CSS; p < 0.01) and high VSR (HRs: 9.36 for OS and 8.22 for CSS; p < 0.001) were poor prognostic factors for both OS and CSS. Added value was observed when VSR was incorporated into the OS and CSS prediction models. Increased VSR and advanced tumor stage are significant predictors of poor overall survival in patients with uterine sarcoma.
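The body composition parameters reduce to a few ratios over the segmented L3 areas. A sketch of the definitions given above (input areas and height are assumed to come from the segmentation step):

```python
def body_composition_indices(sm_area_cm2: float, vat_area_cm2: float,
                             sat_area_cm2: float, height_m: float) -> dict[str, float]:
    """Area indices (area / height^2, cm^2/m^2) and the visceral-to-subcutaneous ratio."""
    h2 = height_m ** 2
    return {
        "SMI": sm_area_cm2 / h2,             # skeletal muscle index
        "VATI": vat_area_cm2 / h2,           # visceral adipose tissue index
        "SATI": sat_area_cm2 / h2,           # subcutaneous adipose tissue index
        "VSR": vat_area_cm2 / sat_area_cm2,  # visceral-to-subcutaneous ratio
    }
```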
