Page 191 of 6526512 results

Zhu FY, Chen WJ, Chen HY, Ren SY, Zhuo LY, Wang TD, Ren CC, Yin XP, Wang JN

PubMed · Sep 6, 2025
The present study aimed to develop a noninvasive predictive framework that integrates clinical data, conventional radiomics, habitat imaging, and deep learning for the preoperative stratification of MGMT gene promoter methylation in glioma. This retrospective study included 410 patients from the University of California, San Francisco, USA, and 102 patients from our hospital. Seven models were constructed using preoperative contrast-enhanced T1-weighted MRI with gadobenate dimeglumine as the contrast agent. Habitat radiomics features were extracted from tumor subregions defined by k-means clustering, while deep learning features were acquired using a 3D convolutional neural network. Model performance was evaluated using the area under the curve (AUC), F1-score, and decision curve analysis. The combined model integrating clinical data, conventional radiomics, habitat imaging features, and deep learning achieved the highest performance (training AUC = 0.979 [95% CI: 0.969-0.990], F1-score = 0.944; testing AUC = 0.777 [0.651-0.904], F1-score = 0.711). Among the single-modality models, habitat radiomics outperformed the others (training AUC = 0.960 [0.954-0.983]; testing AUC = 0.724 [0.573-0.875]). The proposed multimodal framework considerably enhances preoperative prediction of MGMT gene promoter methylation, with habitat radiomics highlighting the critical role of tumor heterogeneity. This approach provides a scalable tool for the personalized management of glioma.
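The habitat step above partitions intratumoral voxels into subregions before per-habitat radiomics extraction. A minimal numpy sketch of that idea, assuming voxelwise feature vectors as input (plain Lloyd's k-means, not the authors' exact pipeline):

```python
import numpy as np

def habitat_labels(voxel_features, k=3, iters=50, seed=0):
    """Assign each intratumoral voxel to a 'habitat' via k-means on its
    feature vector. voxel_features: (n_voxels, n_features), e.g. intensity
    and local texture values. Returns one integer cluster label per voxel;
    radiomics features are then extracted per habitat."""
    rng = np.random.default_rng(seed)
    centers = voxel_features[rng.choice(len(voxel_features), k, replace=False)]
    for _ in range(iters):
        # Assign each voxel to its nearest center (Euclidean distance).
        d = np.linalg.norm(voxel_features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned voxels.
        new = np.array([voxel_features[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

In practice the clustering is run on features from registered multi-parametric maps; the two-cluster toy case below is only to show the interface.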

Wang Z, Sun Q, Zhang B, Wang P, Zhang J, Zhang Q

PubMed · Sep 6, 2025
Few-shot learning has emerged as a key technological solution to challenges such as limited data and the difficulty of acquiring annotations in medical image classification. However, relying solely on a single image modality is insufficient to capture conceptual categories. Medical image classification therefore requires a comprehensive approach that captures conceptual category information to aid the interpretation of image content. This study proposes a novel medical image classification paradigm based on a multi-modal foundation model, called PM<sup>2</sup>. In addition to the image modality, PM<sup>2</sup> introduces supplementary text input (prompts) to further describe images or conceptual categories and facilitate cross-modal few-shot learning. We empirically studied five different prompting schemes under this new paradigm. Furthermore, linear probing in multi-modal models takes only the class token as input, ignoring the rich statistical information contained in high-level visual tokens. We therefore perform linear classification on the feature distributions of visual tokens as well as on the class token. To extract this statistical information effectively, we use global covariance pooling with efficient matrix power normalization to aggregate the visual tokens. We then combine two classification heads: one for the image class token and the prompt representations encoded by the text encoder, and the other for classifying the feature distributions of visual tokens. Experimental results on three datasets (breast cancer, brain tumor, and diabetic retinopathy) demonstrate that PM<sup>2</sup> effectively improves the performance of medical image classification and achieves state-of-the-art performance compared with existing multi-modal models. Integrating text prompts as supplementary samples effectively enhances the model's performance. Additionally, leveraging second-order features of visual tokens to enrich the category feature space, combined with the class token, significantly strengthens the model's representational capacity.
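The second-order pooling described above replaces a class-token-only readout with a covariance statistic over visual tokens. A simplified numpy sketch of global covariance pooling with matrix power normalization (eigendecomposition here; practical implementations typically use faster iterative normalization such as Newton-Schulz):

```python
import numpy as np

def covariance_pool(tokens, power=0.5, eps=1e-5):
    """Aggregate visual tokens by second-order (covariance) pooling with
    matrix power normalization. tokens: (n_tokens, dim). Returns the
    flattened upper triangle of Sigma^power as a feature vector."""
    X = tokens - tokens.mean(axis=0, keepdims=True)
    sigma = X.T @ X / len(tokens) + eps * np.eye(tokens.shape[1])  # regularized covariance
    w, V = np.linalg.eigh(sigma)                                   # symmetric eigendecomposition
    sigma_p = (V * np.clip(w, 0, None) ** power) @ V.T             # Sigma^power
    iu = np.triu_indices(tokens.shape[1])
    return sigma_p[iu]
```

The upper-triangle flattening exploits the symmetry of the covariance matrix, so the classifier head sees dim·(dim+1)/2 features instead of dim².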

Franco P, Montalba C, Caulier-Cisterna R, Milovic C, González A, Ramirez-Mahaluf JP, Undurraga J, Salas R, Crossley N, Tejos C, Uribe S

PubMed · Sep 6, 2025
Several studies have shown changes in neurochemicals within the deep-brain nuclei of patients with psychosis. These alterations indicate a dysfunction in dopamine within subcortical regions affected by fluctuations in iron concentration. Quantitative Susceptibility Mapping (QSM) is a method for measuring iron concentration, offering a potential means to identify dopamine dysfunction in these subcortical areas. This study employed a random forest algorithm to predict First-Episode Psychosis (FEP) and the response to antipsychotics from susceptibility features, interpreted with Shapley Additive Explanations (SHAP) values. 3D multi-echo Gradient Echo (GRE) and T1-weighted GRE images were obtained from 61 healthy volunteers (HV) and 76 FEP patients (32% Treatment-Resistant Schizophrenia (TRS) and 68% Treatment-Responsive Schizophrenia (RS)) using a 3T Philips Ingenia MRI scanner. QSM and R2* maps were reconstructed and averaged over twenty-two segmented regions of interest. We used Sequential Forward Selection for feature selection and a Random Forest model to predict FEP patients and their response to antipsychotics. We further applied the SHAP framework to identify informative features and their interpretations. Finally, multiple correlation patterns among magnetic susceptibility parameters were extracted using hierarchical clustering. Our approach classifies HV vs FEP patients with 76.48 ± 10.73% accuracy and TRS vs RS patients with 76.43 ± 12.57% accuracy (four features in each case), using 10-fold stratified cross-validation. The SHAP analyses indicated the top four nonlinear relationships among the selected features. Hierarchical clustering revealed two groups of correlated features for each study. Early prediction of treatment response enables tailored strategies for FEP patients with treatment resistance, ensuring timely and effective interventions.
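The selection loop named above, Sequential Forward Selection wrapped around a classifier, can be sketched as follows. `score_fn` is a stand-in for any cross-validated model score (e.g. random-forest accuracy) and is an assumption of this sketch, not the authors' exact setup:

```python
def forward_select(X, y, score_fn, n_features):
    """Greedy sequential forward selection: at each step, add the feature
    whose inclusion maximizes score_fn(X[:, subset], y). Returns the
    selected column indices in the order they were added.
    X: (n_samples, n_features) numpy array; y: labels."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(n_features):
        # Try each remaining feature appended to the current subset.
        scores = [(score_fn(X[:, selected + [j]], y), j) for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Because each candidate subset is re-scored with the full model, the cost grows as O(n_features² · cost(score_fn)); with only twenty-two regional QSM/R2* features, as above, this remains cheap.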

Urooj B, Ko Y, Na S, Kim IO, Lee EH, Cho S, Jeong H, Khang S, Lee J, Kim KW

PubMed · Sep 5, 2025
Opportunistic computed tomography (CT) screening for the evaluation of sarcopenia and myosteatosis has been gaining emphasis. A fully automated artificial intelligence (AI)-integrated system for body composition assessment on CT scans is a prerequisite for effective opportunistic screening. However, no study has evaluated the implementation of fully automated AI systems for opportunistic screening in real-world clinical practice during routine health check-ups. The aim of this study is to evaluate the performance and clinical utility of a fully automated AI-integrated system for body composition assessment on opportunistic CT during routine health check-ups. This prospective multicenter study included 537 patients who underwent routine health check-ups across 3 institutions. Our AI algorithm selects the L3 slice and segments muscle and fat areas in an end-to-end manner. The AI models were integrated into the Picture Archiving and Communication System (PACS) at each institution. Technical success rate, processing time, and segmentation accuracy (Dice similarity coefficient) were assessed. Body composition metrics were analyzed across age and sex groups. The fully automated AI-integrated system successfully retrieved anonymized CT images from the PACS, performed L3 selection and segmentation, and provided body composition metrics, including muscle quality maps and muscle age. The technical success rate was 100%, with no failed cases requiring manual adjustment. The mean processing time from CT acquisition to report generation was 4.12 seconds. Segmentation accuracy comparing AI results with human expert results was 97.4%. Significant age-related declines in skeletal muscle area and normal-attenuation muscle area were observed, alongside increases in low-attenuation muscle area and intramuscular adipose tissue.
Implementation of the fully automated AI-integrated system significantly enhanced opportunistic sarcopenia screening, achieving excellent technical success and high segmentation accuracy without manual intervention. This system has the potential to transform routine health check-ups by providing rapid and accurate assessments of body composition.
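The segmentation-accuracy metric reported above, the Dice similarity coefficient, is straightforward to compute from two binary masks; a minimal numpy version:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|). This is the overlap metric used to compare
    AI-generated and expert muscle/fat masks on the L3 slice."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```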

Du P, An D, Wang C, Wang JX

PubMed · Sep 5, 2025
Image-based modeling is essential for understanding cardiovascular hemodynamics and advancing the diagnosis and treatment of cardiovascular diseases. Constructing patient-specific vascular models remains labor-intensive, error-prone, and time-consuming, limiting their clinical applications. This study introduces a deep-learning framework that automates the creation of simulation-ready vascular models from medical images. The framework integrates a segmentation module for accurate voxel-based vessel delineation with a surface deformation module that performs anatomically consistent and unsupervised surface refinements guided by medical image data. The integrated pipeline addresses key limitations of existing methods, enhancing geometric accuracy and computational efficiency. Evaluated on public datasets, it achieves state-of-the-art segmentation performance while substantially reducing manual effort and processing time. The resulting vascular models exhibit anatomically accurate and visually realistic geometries, effectively capturing both primary vessels and intricate branching patterns. In conclusion, this work advances the scalability and reliability of image-based computational modeling, facilitating broader applications in clinical and research settings.

Jacobson LEO, Bader-El-Den M, Maurya L, Hopgood AA, Tamma V, Masum SK, Prendergast DJ, Osborn P

PubMed · Sep 5, 2025
Prostate cancer (PCa) remains one of the most prevalent cancers among men, with over 1.4 million new cases and 375,304 deaths reported globally in 2020. Current diagnostic approaches, such as prostate-specific antigen (PSA) testing and trans-rectal ultrasound (TRUS)-guided biopsies, are often limited by low specificity and accuracy. This study addresses these limitations by leveraging deep learning-based image segmentation on a dataset comprising 61,119 T2-weighted MR images from 1151 patients to enhance PCa detection and characterisation. Three distinct segmentation strategies were evaluated using various deep learning architectures: one-stage, sequential two-stage, and end-to-end two-stage methods. The MultiResUNet model, integrated into a multi-stage segmentation framework, demonstrated significant improvements in delineating prostate boundaries. The end-to-end approach, leveraging shared feature representations, consistently outperformed the other methods, underscoring its effectiveness in enhancing diagnostic accuracy. These findings highlight the potential of advanced deep learning architectures in streamlining prostate cancer detection and treatment planning. Future work will focus on further optimisation of the models and on assessing their generalisability to diverse medical imaging contexts.

da Rocha NC, Barbosa AMP, Schnr YO, Peres LDB, de Andrade LGM, de Magalhaes Rosa GJ, Pessoa EC, Corrente JE, de Arruda Silveira LV

PubMed · Sep 5, 2025
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings. There is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study aims to develop and evaluate an open-source, computer vision-based approach using deep learning techniques for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external and independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, which is comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
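The ELM head described above fixes a random hidden projection and solves only the output weights in closed form, in contrast to backpropagating through the whole classifier. A hedged numpy sketch (hidden size, ridge term, and tanh activation are illustrative choices, not the paper's values):

```python
import numpy as np

class ELMClassifier:
    """Extreme learning machine head: a fixed random hidden projection
    followed by a closed-form ridge-regularized least-squares readout.
    Intended to sit on top of CNN feature vectors, as in the CD-CNN+ELM
    design summarized above."""
    def __init__(self, n_hidden=100, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge, self.seed = n_hidden, ridge, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden features (never trained)
        Y = np.eye(int(y.max()) + 1)[y]            # one-hot targets
        # Only the readout weights are solved, via ridge least squares.
        self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

The appeal of this design is training cost: fitting reduces to one linear solve, which suits resource-limited settings such as those motivating the study.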

Wu X, Liu F, Xu G, Ma Y, Cheng C, He R, Yang A, Gan J, Liang J, Wu X, Zhao S

PubMed · Sep 5, 2025
This retrospective study aimed to develop and validate an artificial intelligence model constrained by the anatomical structure of the brain, to improve the accuracy of prenatal diagnosis of fetal cerebellar hypoplasia from ultrasound imaging. Fetal central nervous system dysplasia is one of the most prevalent congenital malformations, and cerebellar hypoplasia is a significant manifestation of this anomaly. Accurate clinical diagnosis is of great importance for prenatal screening of fetal health. Although ultrasound has been extensively utilized to assess fetal development, accurate assessment of cerebellar development remains challenging due to the inherent limitations of ultrasound imaging, including low resolution, artifacts, and acoustic shadowing of the skull. This study included 302 cases diagnosed with cerebellar hypoplasia and 549 normal pregnancies collected at the Maternal and Child Health Hospital of Hubei Province between September 2019 and September 2023. For each case, experienced ultrasound physicians selected appropriate brain ultrasound images and delineated the boundaries of the skull, cerebellum, and cerebellomedullary cistern. The cases were divided into one training set and two test sets based on examination dates. The study then proposed a dual-branch deep learning classification network, the anatomical structure-constrained network (ASC-Net), which takes ultrasound images and anatomical structure masks as separate inputs. The performance of ASC-Net was extensively evaluated and compared with several state-of-the-art deep learning networks, and the impact of the anatomical structures on its performance was carefully examined.
ASC-Net demonstrated superior performance in the diagnosis of cerebellar hypoplasia, achieving classification accuracies of 0.9778 and 0.9222, and areas under the receiver operating characteristic curve of 0.9986 and 0.9265, on the two test sets. These results significantly outperformed several state-of-the-art networks on the same dataset. Compared with other studies on computer-aided diagnosis of cerebellar hypoplasia, ASC-Net demonstrated comparable or better performance. A subgroup analysis revealed that ASC-Net was better at distinguishing cerebellar hypoplasia in cases with gestational age greater than 30 weeks. Furthermore, when constrained by the anatomical structures of both the cerebellum and the cistern, ASC-Net exhibited the best performance among the structural constraints tested. The development and validation of ASC-Net significantly enhance the accuracy of prenatal diagnosis of cerebellar hypoplasia from ultrasound images. This study highlights the importance of the anatomical structures of the fetal cerebellum and cistern for the performance of diagnostic artificial intelligence models in ultrasound. It may provide new insights for the clinical diagnosis of cerebellar hypoplasia, assist clinicians in providing more targeted advice and treatment during pregnancy, and contribute to improved perinatal healthcare. ASC-Net is open-sourced and publicly available at https://github.com/Wwwwww111112/ASC-Net.

Li M, Zhu R, Li M, Wang H, Teng Y

PubMed · Sep 5, 2025
Recognition of tumors is very important in clinical practice and radiomics; however, segmentation currently still has to be performed manually by experts. With the development of deep learning, automatic segmentation of tumors is gradually becoming possible. This paper combines the molecular information from PET and the pathology information from CT for tumor segmentation. A dual-branch encoder is designed based on SE-UNet (Squeeze-and-Excitation Normalization UNet) and a Transformer, a 3D Convolutional Block Attention Module (CBAM) is added to the skip-connections, and BCE loss is used in training to improve segmentation accuracy. The new model is named TASE-UNet. The proposed method was tested on the HECKTOR2022 dataset and obtains the best segmentation accuracy compared with state-of-the-art methods. Specifically, we obtained results of 76.10% and 3.27 for the two key evaluation metrics, DSC and HD95. Experiments demonstrate that the designed network is reasonable and effective. The full implementation is available at https://github.com/LiMingrui1/TASE-UNet.
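The BCE training loss named above is binary cross-entropy over per-voxel foreground probabilities; a minimal numpy version (the clipping epsilon is an implementation detail assumed here, not taken from the paper):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over voxels:
    -mean(t*log(p) + (1-t)*log(1-p)).
    pred holds predicted foreground probabilities in (0, 1); target holds
    the binary ground-truth mask. eps clipping avoids log(0)."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))
```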

Zhang LB, Lu XJ, Zhang HJ, Wei ZX, Kong YZ, Tu YH, Iannetti GD, Hu L

PubMed · Sep 5, 2025
Revealing the neural underpinnings of pain sensitivity is crucial for understanding how the brain encodes individual differences in pain and advancing personalized pain treatments. Here, six large and diverse functional magnetic resonance imaging (fMRI) datasets (total N = 1046) are leveraged to uncover the neural mechanisms of pain sensitivity. Replicable and generalizable correlations are found between nociceptive-evoked fMRI responses and pain sensitivity for laser heat, contact heat, and mechanical pains. These fMRI responses correlate more strongly with pain sensitivity than with tactile, auditory, and visual sensitivity. Moreover, a machine learning model is developed that accurately predicts not only pain sensitivity (r = 0.20∼0.56, ps < 0.05) but also analgesic effects of different treatments in healthy individuals (r = 0.17∼0.25, ps < 0.05). Notably, these findings are influenced considerably by sample sizes, requiring >200 for univariate whole brain correlation analysis and >150 for multivariate machine learning modeling. Altogether, this study demonstrates that fMRI activations encode pain sensitivity across various types of pain, thus facilitating interpretations of subjective pain reports and promoting more mechanistically informed investigations into pain physiology.
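The univariate whole-brain analysis referred to above amounts to a Pearson correlation between each voxel's nociceptive-evoked response and the per-subject pain-sensitivity score. A vectorized numpy sketch (the array shapes are assumptions for illustration):

```python
import numpy as np

def voxelwise_correlation(responses, sensitivity):
    """Pearson r between each voxel's evoked response and a behavioral
    score. responses: (n_subjects, n_voxels); sensitivity: (n_subjects,).
    Returns one correlation coefficient per voxel."""
    R = responses - responses.mean(axis=0)      # center each voxel across subjects
    s = sensitivity - sensitivity.mean()        # center the behavioral score
    num = R.T @ s
    den = np.sqrt((R ** 2).sum(axis=0) * (s ** 2).sum())
    return num / den
```

As the abstract notes, such correlation maps stabilize only with large samples (>200 subjects for the univariate analysis), so r values from small cohorts should be read cautiously.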