
Cheng X, Li H, Li C, Li J, Liu Z, Fan X, Lu C, Song K, Shen Z, Wang Z, Yang Q, Zhang J, Yin J, Qian C, You Y, Wang X

PubMed · Aug 20, 2025
Preoperative assessment of World Health Organization (WHO) meningioma grading and Ki-67 expression is crucial for treatment strategies. We aimed to develop a fully automated attention-based deep learning network to predict WHO meningioma grading and Ki-67 expression. This retrospective study included 952 meningioma patients, divided into training (n = 542), internal validation (n = 96), and external test (n = 314) sets. For each task, clinical, radiomics, and deep learning models were compared. We used no-new-Unet (nnU-Net) models to construct the segmentation network, followed by four classification models using ResNet50 or Swin Transformer architectures with 2D or 2.5D input strategies. All deep learning models incorporated attention mechanisms. Both the segmentation and 2.5D classification models demonstrated robust performance on the external test set. The segmentation network achieved Dice coefficients of 0.98 (0.97-0.99) and 0.87 (0.83-0.91) for brain parenchyma and tumour segmentation, respectively. For predicting meningioma grade, the 2.5D ResNet50 achieved the highest area under the curve (AUC) of 0.90 (0.85-0.93), significantly outperforming the clinical (AUC = 0.77 [0.70-0.83], p < 0.001) and radiomics models (AUC = 0.80 [0.75-0.85], p < 0.001). For Ki-67 expression prediction, the 2.5D Swin Transformer achieved the highest AUC of 0.89 (0.85-0.93), outperforming both the clinical (AUC = 0.76 [0.71-0.81], p < 0.001) and radiomics models (AUC = 0.82 [0.77-0.86], p = 0.002). Our automated deep learning network demonstrated superior performance. This novel network could support more precise treatment planning for meningioma patients.

Question: Can artificial intelligence accurately assess meningioma WHO grade and Ki-67 expression from preoperative MRI to guide personalised treatment and follow-up strategies?
Findings: The attention-enhanced nnU-Net segmentation achieved high accuracy, while 2.5D deep learning models with attention mechanisms accurately predicted grades and Ki-67 expression.
Clinical relevance: Our fully automated 2.5D deep learning model, enhanced with attention mechanisms, accurately predicts WHO grades and Ki-67 expression levels in meningiomas, offering a robust, objective, and non-invasive solution to support clinical diagnosis and optimise treatment planning.
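The 2.5D input strategy mentioned above typically means stacking a few adjacent axial slices as channels around the slice of interest before feeding a 2D backbone. A minimal sketch of that idea — the function name, slice count, and edge-clamping behaviour are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def make_25d_stack(volume, center, n_slices=3):
    """Build a 2.5D input by stacking adjacent axial slices as channels.

    volume: (D, H, W) array; center: index of the slice of interest.
    Edge indices are clamped so the stack always has n_slices channels.
    """
    half = n_slices // 2
    idx = [min(max(center + o, 0), volume.shape[0] - 1)
           for o in range(-half, half + 1)]
    # Result shape: (n_slices, H, W), ready for a 2D CNN's channel axis
    return np.stack([volume[i] for i in idx], axis=0)

# Toy volume standing in for a preprocessed MRI series
vol = np.random.rand(16, 64, 64).astype(np.float32)
x = make_25d_stack(vol, center=8)
```

The channel axis then plays the role RGB channels play for natural images, letting a pretrained 2D backbone see limited through-plane context.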

Hu Y, Xiang Y, Zhou YJ, He Y, Lang D, Yang S, Du X, Den C, Xu Y, Wang G, Ding Z, Huang J, Zhao W, Wu X, Li D, Zhu Q, Li Z, Qiu C, Wu Z, He Y, Tian C, Qiu Y, Lin Z, Zhang X, Hu L, He Y, Yuan Z, Zhou X, Fan R, Chen R, Guo W, Xu J, Zhang J, Mok TCW, Li Z, Kalra MK, Lu L, Xiao W, Li X, Bian Y, Shao C, Wang G, Lu W, Huang Z, Xu M, Zhang H

PubMed · Aug 20, 2025
The accurate and timely diagnosis of acute aortic syndrome (AAS) in patients presenting with acute chest pain remains a clinical challenge. Aortic computed tomography (CT) angiography is the imaging protocol of choice in patients with suspected AAS. However, due to economic and workflow constraints in China, the majority of suspected patients initially undergo noncontrast CT as the initial imaging test, and CT angiography is reserved for those at higher risk. Although noncontrast CT can reveal specific signs indicative of AAS, its diagnostic efficacy when used alone has not been well characterized. Here we present an artificial intelligence-based warning system, iAorta, using noncontrast CT for AAS identification in China, which demonstrates remarkably high accuracy and provides clinicians with interpretable warnings. iAorta was evaluated through a comprehensive step-wise study. In the multicenter retrospective study (n = 20,750), iAorta achieved a mean area under the receiver operating characteristic curve of 0.958 (95% confidence interval 0.950-0.967). In the large-scale real-world study (n = 137,525), iAorta demonstrated consistently high performance across various noncontrast CT protocols, achieving a sensitivity of 0.913-0.942 and a specificity of 0.991-0.993. In the prospective comparative study (n = 13,846), iAorta significantly shortened the time to the correct diagnostic pathway for patients with initial false suspicion, from an average of 219.7 (115-325) min to 61.6 (43-89) min. Furthermore, in the prospective pilot deployment, iAorta correctly identified 21 out of 22 patients with AAS among 15,584 consecutive patients presenting with acute chest pain under a noncontrast CT protocol in the emergency department. For these 21 AAS-positive patients, the average time to diagnosis was 102.1 (75-133) min.
Finally, iAorta may help prevent delayed or missed diagnoses of AAS in settings where noncontrast CT remains the only feasible initial imaging modality, such as in resource-limited regions or in patients who cannot receive, or did not receive, intravenous contrast.
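The headline metrics above (AUC, sensitivity, specificity) follow from standard definitions applied to raw classifier scores. A generic sketch of those definitions — not code from the iAorta system:

```python
import numpy as np

def roc_auc(y_true, scores):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg).sum()
    ties = (pos[:, None] == neg).sum()       # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(y_true, scores, thr):
    """Sensitivity and specificity at a fixed decision threshold."""
    y = np.asarray(y_true)
    pred = np.asarray(scores, dtype=float) >= thr
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    return sens, spec
```

The rank-based form of AUC avoids building an explicit ROC curve and makes the tie-handling convention explicit.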

Reder SR, Hardt J, Brockmann MA, Brockmann C, Kim S, Kawulycz M, Schulz M, Kantelhardt SR, Petrowski K, Fischbeck S

PubMed · Aug 20, 2025
To explore the mental and physical health (MH, PH) of individuals living with brain aneurysms and to profile their differences in disease experience. In N = 111 patients, the Short Form 36 Health Survey (SF-36) was assessed via an online survey; supplementary data included angiography and magnetic resonance imaging (MRI) findings (including AI-based brain lesion volume (LV) analyses in ml). Correlation and regression analyses were conducted (including biological sex, age, overall brain LV, PH, and MH). Disease profiles were determined using principal component analysis. Compared to the German normative cohort, patients exhibited overall lower SF-36 scores. In regression analyses, DW was predicted by PH (β = 0.345) and MH (β = -0.646; R = 0.557; p < 0.001). Vasospasm severity correlated significantly with LV (r = 0.242, p = 0.043), MH (r = -0.321, p = 0.043), and PH (r = -0.372, p = 0.028). Higher LV was associated with poorer PH (r = -0.502, p = 0.001), but not with MH (p > 0.05). Four main disease profiles were identified: (1) individuals with increased LV post-rupture (high DW); (2) older individuals with stable aneurysms (low DW); (3) a sex disparity in QoL despite similar vasospasm severity; and (4) chronic pain and its impact on daily tasks. Two sub-profiles highlighted trauma-induced impairments, functional disabilities from LV, and persistent anxiety. Reduced thalamic and pallidal volumes were linked to low QoL following subarachnoid hemorrhage. MH has a greater impact on quality of life than physical disabilities, leading to prolonged DW. A singular physical impairment was rather atypical for a perceived worse outcome. Patient profiles revealed that clinical history, sex, psychological stress, and pain each contribute uniquely to QoL and work capacity. Prioritizing MH in assessing workability and rehabilitation is crucial for survivors' long-term outcomes.
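Principal component analysis, used above to derive the disease profiles, projects each patient onto a few orthogonal axes of maximal variance. A minimal SVD-based sketch on synthetic data — the variables and loadings below are invented stand-ins for SF-36 subscales and lesion volume, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-patient scores driven by two latent "profiles"
n = 200
latent = rng.normal(size=(n, 2))
W = np.array([[1.0, 0.8, 0.1, 0.0],     # profile 1 loads on vars 0-1
              [0.0, 0.1, 0.9, 1.0]])    # profile 2 loads on vars 2-3
X = latent @ W + 0.1 * rng.normal(size=(n, 4))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()          # variance ratio per component
scores = Xc @ Vt[:2].T                   # patient coordinates on 2 profiles
```

With two genuine latent factors and small noise, the first two components capture nearly all the variance, which is the pattern that justifies interpreting them as disease profiles.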

Barstuğan M

PubMed · Aug 20, 2025
Brain tumors have complex structures, and their shape, density, and size can vary widely. Consequently, their accurate classification, which involves identifying features that best describe the tumor data, is challenging. Using classical 2D texture features can yield only limited accuracy. Here, we show that this limitation can be overcome by using 3D feature extraction and ranking methods. Brain tumor images obtained through 3D magnetic resonance imaging were used to classify high-grade and low-grade glioma in the BraTS 2017 dataset. From the dataset, texture properties for each of the four phases (i.e., FLAIR, T1, T1c, and T2) were extracted using a 3D gray level co-occurrence matrix. Various combinations of brain tumor feature sets were created, and feature ranking methods (Bhattacharyya, entropy, receiver operating characteristic, the t-test, and the Wilcoxon test) were applied to them. Features were classified using gradient boosting, support vector machines (SVMs), and random forest methods. The performance of all combinations was evaluated from the sensitivity, specificity, accuracy, precision, and F-score obtained from twofold, fivefold, and tenfold cross-validation tests. In all experiments, the most effective scheme was that involving the quadruple combination (FLAIR + T1 + T1c + T2) and the entropy feature-ranking method with twofold cross-validation. Notably, the proposed machine-learning-based framework showed remarkable scores of 100% (sensitivity), 97.29% (specificity), 99.30% (accuracy), 99.07% (precision), and 99.53% (F-score) for glioma classification with an SVM. The proposed framework constitutes a novel brain tumor classification system that is competitive with state-of-the-art methods.
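A 3D gray-level co-occurrence matrix generalizes the classical 2D GLCM by counting co-occurring gray levels between voxel pairs along a 3D offset. A minimal sketch (non-negative offsets only; the entropy feature is the standard textbook definition, not the paper's exact pipeline):

```python
import numpy as np

def glcm_3d(q, levels, offset=(0, 0, 1)):
    """Normalized co-occurrence matrix of a quantized volume along one offset."""
    dz, dy, dx = offset
    D, H, W = q.shape
    a = q[:D - dz, :H - dy, :W - dx].ravel()   # "from" voxels
    b = q[dz:, dy:, dx:].ravel()               # "to" voxels at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1.0)                  # accumulate pair counts
    return m / m.sum()

def glcm_entropy(p):
    """Entropy of the co-occurrence distribution: one texture feature."""
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# A uniform volume has a single co-occurrence cell, hence zero entropy
flat = np.zeros((4, 4, 4), dtype=int)
p_flat = glcm_3d(flat, levels=2)
```

In practice one would average such matrices over several offset directions and extract multiple Haralick-style statistics per MRI sequence before feeding them to the ranking methods listed above.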

Ganjgahi H, Häring DA, Aarden P, Graham G, Sun Y, Gardiner S, Su W, Berge C, Bischof A, Fisher E, Gaetano L, Thoma SP, Kieseier BC, Nichols TE, Thompson AJ, Montalban X, Lublin FD, Kappos L, Arnold DL, Bermel RA, Wiendl H, Holmes CC

PubMed · Aug 20, 2025
Multiple sclerosis (MS) affects 2.9 million people. Traditional classification of MS into distinct subtypes poorly reflects its pathobiology and has limited value for prognosticating disease evolution and treatment response, thereby hampering drug discovery. Here we report a data-driven classification of MS disease evolution by analyzing a large clinical trial database (approximately 8,000 patients, 118,000 patient visits and more than 35,000 magnetic resonance imaging scans) using probabilistic machine learning. Four dimensions define MS disease states: physical disability, brain damage, relapse and subclinical disease activity. Early/mild/evolving (EME) MS and advanced MS represent two poles of a disease severity spectrum. Patients with EME MS show limited clinical impairment and minor brain damage. Transitions to advanced MS occur via brain damage accumulation through inflammatory states, with or without accompanying symptoms. Advanced MS is characterized by moderate to high disability levels, radiological disease burden and risk of disease progression independent of relapses, with little probability of returning to earlier MS states. We validated these results in an independent clinical trial database and a real-world cohort, totaling more than 4,000 patients with MS. Our findings support viewing MS as a disease continuum. We propose a streamlined disease classification to offer a unifying understanding of the disease, improve patient management and enhance drug discovery efficiency and precision.
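The abstract does not specify the probabilistic model used; as a deliberately simplified stand-in, a two-component Gaussian mixture fitted by expectation-maximization illustrates how two poles of a severity spectrum (here loosely analogous to EME and advanced MS) can be recovered from unlabeled data. Everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "disease severity" scores: a mild pole and an advanced pole
x = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(4.0, 0.5, 300)])

# Two-component Gaussian mixture fitted by EM
mu = np.array([x.min(), x.max()])        # crude but deterministic init
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each point
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
            / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: weighted parameter updates
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    w = n / len(x)

mu_sorted = np.sort(mu)   # recovered pole locations
```

Real disease-state models operate on several dimensions (disability, brain damage, relapse, subclinical activity) and typically add transition dynamics between states, which this sketch omits.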

Aydin A, Ozcan C, Simsek SA, Say F

PubMed · Aug 20, 2025
An enchondroma is a benign neoplasm of mature hyaline cartilage that proliferates from the medullary cavity toward the cortical bone. This results in the formation of a significant endogenous mass within the medullary cavity. Although enchondromas are predominantly asymptomatic, they may exhibit various clinical manifestations contingent on the size of the lesion, its localization, and the characteristics observed on radiological imaging. This study aimed to identify and present cases of bone tissue enchondromas to field specialists as preliminary data. In this study, authentic X-ray radiographs of patients were obtained following ethical approval and subjected to preprocessing. The images were annotated by orthopedic oncology specialists, and state-of-the-art object detection algorithms based on diverse architectural frameworks were trained on them. All processes, from preprocessing to identifying pathological regions with the object detection systems, underwent rigorous cross-validation and oversight by the research team. After various operations and procedural steps, including modifying deep learning architectures and optimizing hyperparameters, enchondroma formation in bone tissue was successfully identified, achieving an average precision of 0.97 and an accuracy of 0.98, corroborated by medical professionals. This comprehensive study, incorporating 1,055 authentic patient images from multiple healthcare centers, is a pioneering investigation that introduces innovative approaches for delivering preliminary insights to specialists concerning bone radiography.
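Average precision, the detection metric reported above, summarizes the precision-recall trade-off over confidence-ranked detections. A generic all-point computation — the detection-to-lesion matching step is assumed already done, and this is not the authors' evaluation code:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """All-point AP: mean of precision at each true-positive hit.

    scores: detection confidences; is_tp: 1 where the detection matched
    a ground-truth lesion; n_gt: total number of ground-truth lesions
    (misses contribute zero precision).
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    return float((precision * tp).sum() / n_gt)
```

For four detections of which three match lesions, AP is simply the average of the precision values at those three hits (with any undetected lesions dragging the average down via n_gt).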

Li Y, Chen Q, Li M, Si L, Guo Y, Xiong Y, Wang Q, Qin Y, Xu L, Smagt PV, Wang K, Tang J, Chen N

PubMed · Aug 20, 2025
Multi-modality magnetic resonance imaging (MRI) data facilitate early diagnosis, tumor segmentation, and disease staging in the management of nasopharyngeal carcinoma (NPC). However, the lack of publicly available, comprehensive datasets limits advances in diagnosis, treatment planning, and the development of machine learning algorithms for NPC. Addressing this critical need, we introduce the first comprehensive NPC MRI dataset, encompassing axial MR imaging of 277 primary NPC patients. This dataset includes T1-weighted, T2-weighted, and contrast-enhanced T1-weighted sequences, totaling 831 scans. In addition to the corresponding clinical data, segmentations manually annotated and labeled by experienced radiologists offer a high-quality data resource for untreated primary NPC.

Lin Z, Zhang H, Duan X, Bai Y, Wang J, Liang Q, Zhou J, Xie F, Shentu Z, Huang R, Chen Y, Yu H, Weng Z, Ni D, Liu L, Zhou L

PubMed · Aug 20, 2025
Timely and accurate diagnosis of severe neonatal cerebral lesions is critical for preventing long-term neurological damage and addressing life-threatening conditions. Cranial ultrasound is the primary screening tool, but the process is time-consuming and reliant on the operator's proficiency. In this study, a deep-learning-powered neonatal cerebral lesion screening system, capable of automatically extracting standard views from cranial ultrasound videos and identifying cases with severe cerebral lesions, was developed based on 8,757 neonatal cranial ultrasound images. The system demonstrates areas under the curve of 0.982 and 0.944, with sensitivities of 0.875 and 0.962, on internal and external video datasets, respectively. Furthermore, the system outperforms junior radiologists and performs on par with mid-level radiologists, while improving examination efficiency by 55.11%. In conclusion, the developed system can automatically extract standard views from cranial ultrasound videos, make correct diagnoses efficiently, and may be useful for deployment in multiple application scenarios.
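One common way to realize standard-view extraction from a video, consistent with the description above but not necessarily the authors' architecture, is to score every frame with a view classifier and keep the highest-confidence frame per view. A toy sketch with invented probabilities:

```python
import numpy as np

# Hypothetical per-frame view probabilities from a frame-level classifier:
# rows = frames of one cranial-ultrasound sweep, columns = standard views.
frame_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.20, 0.60, 0.20],
    [0.10, 0.30, 0.60],
    [0.90, 0.05, 0.05],
])

# For each standard view, select the frame where the classifier is most
# confident; these frames then go to the downstream lesion classifier.
best_frame_per_view = frame_probs.argmax(axis=0)
```

A production system would add temporal smoothing and a minimum-confidence threshold so that views absent from a sweep are flagged rather than force-selected.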

Xu W, Deng S, Mao G, Wang N, Huang Y, Zhang C, Sa G, Wu S, An Y

PubMed · Aug 20, 2025
To explore the value of a deep learning-based model in distinguishing between ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) manifesting as suspicious microcalcifications on mammography. A total of 294 breast cancer cases (106 DCIS and 188 IDC) from two centers were randomly allocated into training, internal validation, and external validation sets in this retrospective study. Clinical variables differentiating DCIS from IDC were identified through univariate and multivariate analyses and used to build a clinical model. Deep learning features were extracted using ResNet101 and selected by minimum redundancy maximum relevance (mRMR) and the least absolute shrinkage and selection operator (LASSO). A deep learning model was developed using the deep learning features, and a combined model was constructed by combining these features with clinical variables. The area under the receiver operating characteristic curve (AUC) was used to assess the performance of each model. Multivariate logistic regression identified lesion type and BI-RADS category as independent predictors for differentiating DCIS from IDC. The clinical model incorporating these factors achieved an AUC of 0.67, sensitivity of 0.53, specificity of 0.81, and accuracy of 0.63 in the external validation set. In comparison, the deep learning model showed an AUC of 0.97, sensitivity of 0.94, specificity of 0.92, and accuracy of 0.93. For the combined model, the AUC, sensitivity, specificity, and accuracy were 0.97, 0.96, 0.92, and 0.95, respectively. The diagnostic efficacy of the deep learning model and combined model was comparable (p > 0.05), and both models outperformed the clinical model (p < 0.05). Deep learning provides an effective non-invasive approach to differentiate DCIS from IDC presenting as suspicious microcalcifications on mammography.
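mRMR greedily adds features that correlate strongly with the label but weakly with features already chosen. A minimal correlation-based sketch — the paper pairs mRMR with LASSO on ResNet101 features, whereas the three toy features below are invented stand-ins:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy minimum-redundancy maximum-relevance selection (correlation variant)."""
    n_feat = X.shape[1]
    # Relevance: |correlation| of each feature with the label
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        scores = np.full(n_feat, -np.inf)
        for j in range(n_feat):
            if j in selected:
                continue
            # Redundancy: mean |correlation| with already-selected features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            scores[j] = rel[j] - red
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000).astype(float)
f0 = y + 0.25 * rng.normal(size=1000)    # strongly informative
f1 = f0.copy()                           # exact duplicate: redundant
f2 = y + 0.90 * rng.normal(size=1000)    # weaker but non-redundant
X = np.column_stack([f0, f1, f2])
chosen = mrmr_select(X, y, 2)
```

The duplicate feature has the same relevance as the first pick but maximal redundancy, so the weaker independent feature is chosen instead, which is exactly the behaviour that distinguishes mRMR from pure relevance ranking.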

Chi Y, Schubert KE, Badal A, Roncali E

PubMed · Aug 20, 2025
Monte Carlo (MC) simulation remains the gold standard for modeling complex physical interactions in transmission and emission tomography, with GPU parallel computing offering unmatched computational performance and enabling practical, large-scale MC applications. In recent years, rapid advancements in both GPU technologies and tomography techniques have been observed. Harnessing emerging GPU capabilities to accelerate MC simulation and strengthen its role in supporting the rapid growth of medical tomography has become an important topic. To provide useful insights, we conducted a comprehensive review of state-of-the-art GPU-accelerated MC simulations in tomography, highlighting current achievements and underdeveloped areas.

Approach: We reviewed key technical developments across major tomography modalities, including computed tomography (CT), cone-beam CT (CBCT), positron emission tomography, single-photon emission computed tomography, proton CT, emerging techniques, and hybrid modalities. We examined MC simulation methods and major CPU-based MC platforms that have historically supported medical imaging development, followed by a review of GPU acceleration strategies, hardware evolutions, and leading GPU-based MC simulation packages. Future development directions were also discussed.

Main Results: Significant advancements have been achieved in both tomography and MC simulation technologies over the past half-century. The introduction of GPUs has enabled speedups often exceeding 100-1000 times over CPU implementations, providing essential support to the development of new imaging systems. Emerging GPU features like ray-tracing cores, tensor cores, and GPU-execution-friendly transport methods offer further opportunities for performance enhancement.

Significance: GPU-based MC simulation is expected to remain essential in advancing medical emission and transmission tomography. With the emergence of new concepts such as training Machine Learning with synthetic data, Digital Twins for Healthcare, and Virtual Clinical Trials, improving hardware portability and modularizing GPU-based MC codes to adapt to these evolving simulation needs represent important future research directions. This review aims to provide useful insights for researchers, developers, and practitioners in relevant fields.
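The property that makes MC transport so GPU-friendly is that photon histories are independent, so one history maps to one thread. A toy NumPy stand-in for the innermost step, batched free-path sampling from the exponential attenuation law; real GPU codes add geometry, scattering physics, and scoring, and this is not code from any package reviewed above:

```python
import numpy as np

MU = 2.0         # toy linear attenuation coefficient (1/cm)
N = 100_000      # photon histories, one "thread" per array element

rng = np.random.default_rng(42)
# Invert the exponential survival law to sample free path lengths:
# s = -ln(U) / mu, evaluated for all histories at once (SIMT-style).
# Using 1 - U keeps the argument of log strictly positive.
paths = -np.log(1.0 - rng.random(N)) / MU
mean_free_path = paths.mean()   # should approach 1 / MU = 0.5 cm
```

On a GPU the identical expression runs per thread with no inter-thread communication, which is why speedups of orders of magnitude over serial CPU transport are routinely reported.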
