
Interpretable machine learning model for characterizing magnetic susceptibility-based biomarkers in first episode psychosis.

Franco P, Montalba C, Caulier-Cisterna R, Milovic C, González A, Ramirez-Mahaluf JP, Undurraga J, Salas R, Crossley N, Tejos C, Uribe S

PubMed · Sep 6 2025
Several studies have shown changes in neurochemicals within the deep-brain nuclei of patients with psychosis. These alterations indicate a dysfunction in dopamine within subcortical regions affected by fluctuations in iron concentrations. Quantitative Susceptibility Mapping (QSM) is a method employed to measure iron concentration, offering a potential means to identify dopamine dysfunction in these subcortical areas. This study employed a random forest algorithm on magnetic susceptibility features to identify First-Episode Psychosis (FEP) and predict the response to antipsychotics, interpreted with Shapley Additive Explanations (SHAP) values. 3D multi-echo Gradient Echo (GRE) and T1-weighted GRE images were obtained in 61 healthy volunteers (HV) and 76 FEP patients (32% treatment-resistant schizophrenia (TRS) and 68% treatment-responsive schizophrenia (RS)) using a 3T Philips Ingenia MRI scanner. QSM and R2* maps were reconstructed and averaged within twenty-two segmented regions of interest. We used Sequential Forward Selection for feature selection and a Random Forest model to classify FEP patients and their response to antipsychotics. We further applied the SHAP framework to identify informative features and their interpretations. Finally, multiple correlation patterns among magnetic susceptibility parameters were extracted using hierarchical clustering. Our approach accurately classified HV vs FEP patients with 76.48 ± 10.73% accuracy (using four features) and TRS vs RS patients with 76.43 ± 12.57% accuracy (using four features), under 10-fold stratified cross-validation. The SHAP analyses highlighted the top four nonlinear relationships among the selected features. Hierarchical clustering revealed two groups of correlated features for each study. Early prediction of treatment response enables tailored strategies for FEP patients with treatment resistance, ensuring timely and effective interventions.
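
The pipeline described above maps naturally onto standard scikit-learn and shap tooling. Below is a minimal Python sketch (illustrative only, not the authors' code) of sequential forward selection over regional QSM/R2* features, a random-forest classifier scored with 10-fold stratified cross-validation, and SHAP values for interpretation; all arrays, dimensions, and labels are placeholders.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(137, 44))        # placeholder: 22 ROIs x (QSM, R2*) mean values
    y = rng.integers(0, 2, size=137)      # placeholder labels: 0 = HV, 1 = FEP

    rf = RandomForestClassifier(n_estimators=500, random_state=0)

    # Sequential forward selection down to four features, as in the abstract.
    sfs = SequentialFeatureSelector(rf, n_features_to_select=4, direction="forward", cv=5)
    X_sel = sfs.fit_transform(X, y)

    # 10-fold stratified cross-validation of the random-forest classifier.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(rf, X_sel, y, cv=cv, scoring="accuracy")
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

    # SHAP values quantify each selected feature's contribution per prediction.
    rf.fit(X_sel, y)
    shap_values = shap.TreeExplainer(rf).shap_values(X_sel)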

Implementation of Fully Automated AI-Integrated System for Body Composition Assessment on Computed Tomography for Opportunistic Sarcopenia Screening: Multicenter Prospective Study.

Urooj B, Ko Y, Na S, Kim IO, Lee EH, Cho S, Jeong H, Khang S, Lee J, Kim KW

PubMed · Sep 5 2025
Opportunistic computed tomography (CT) screening for the evaluation of sarcopenia and myosteatosis has been gaining emphasis. A fully automated artificial intelligence (AI)-integrated system for body composition assessment on CT scans is a prerequisite for effective opportunistic screening. However, no study has evaluated the implementation of fully automated AI systems for opportunistic screening in real-world clinical practice for routine health check-ups. The aim of this study is to evaluate the performance and clinical utility of a fully automated AI-integrated system for body composition assessment on opportunistic CT during routine health check-ups. This prospective multicenter study included 537 patients who underwent routine health check-ups across 3 institutions. Our AI algorithm selects the L3 slice and segments muscle and fat areas in an end-to-end manner. The AI models were integrated into the Picture Archiving and Communication System (PACS) at each institution. Technical success rate, processing time, and segmentation accuracy (Dice similarity coefficient) were assessed. Body composition metrics were analyzed across age and sex groups. The fully automated AI-integrated system successfully retrieved anonymized CT images from the PACS, performed L3 selection and segmentation, and provided body composition metrics, including muscle quality maps and muscle age. The technical success rate was 100% without any failed cases requiring manual adjustment. The mean processing time from CT acquisition to report generation was 4.12 seconds. Segmentation accuracy comparing AI results with human expert results was 97.4%. Significant age-related declines in skeletal muscle area and normal-attenuation muscle area were observed, alongside increases in low-attenuation muscle area and intramuscular adipose tissue. Implementation of the fully automated AI-integrated system significantly enhanced opportunistic sarcopenia screening, achieving excellent technical success and high segmentation accuracy without manual intervention. This system has the potential to transform routine health check-ups by providing rapid and accurate assessments of body composition.
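
Two of the reported quantities, the Dice similarity coefficient against expert masks and the area-based body composition metrics, reduce to simple array operations once the L3 slice has been segmented. The sketch below is a hedged illustration with hypothetical masks and pixel spacing, not the deployed system.

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())

    # Hypothetical AI and expert muscle masks on a 512 x 512 L3 CT slice.
    ai_mask = np.zeros((512, 512), dtype=bool)
    ai_mask[200:300, 150:350] = True
    ref_mask = np.zeros((512, 512), dtype=bool)
    ref_mask[202:302, 152:352] = True

    pixel_area_cm2 = 0.08 * 0.08               # assumed 0.8 mm in-plane spacing
    print(f"Dice vs expert: {dice(ai_mask, ref_mask):.3f}")
    print(f"skeletal muscle area: {ai_mask.sum() * pixel_area_cm2:.1f} cm^2")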

AI-powered automated model construction for patient-specific CFD simulations of aortic flows.

Du P, An D, Wang C, Wang JX

PubMed · Sep 5 2025
Image-based modeling is essential for understanding cardiovascular hemodynamics and advancing the diagnosis and treatment of cardiovascular diseases. Constructing patient-specific vascular models remains labor-intensive, error-prone, and time-consuming, limiting their clinical applications. This study introduces a deep-learning framework that automates the creation of simulation-ready vascular models from medical images. The framework integrates a segmentation module for accurate voxel-based vessel delineation with a surface deformation module that performs anatomically consistent and unsupervised surface refinements guided by medical image data. The integrated pipeline addresses key limitations of existing methods, enhancing geometric accuracy and computational efficiency. Evaluated on public datasets, it achieves state-of-the-art segmentation performance while substantially reducing manual effort and processing time. The resulting vascular models exhibit anatomically accurate and visually realistic geometries, effectively capturing both primary vessels and intricate branching patterns. In conclusion, this work advances the scalability and reliability of image-based computational modeling, facilitating broader applications in clinical and research settings.
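
As a rough picture of how a voxel segmentation becomes a surface model, the sketch below extracts a triangular mesh with marching cubes and applies one pass of uniform Laplacian smoothing as a crude stand-in for the learned, anatomically consistent surface-deformation step; it is an assumption-laden illustration, not the paper's framework.

    import numpy as np
    from skimage import measure

    # Placeholder binary vessel segmentation (a crude rectangular "tube").
    seg = np.zeros((64, 64, 64), dtype=float)
    seg[20:44, 20:44, 10:54] = 1.0

    # Marching cubes turns the voxel mask into a triangular surface mesh.
    verts, faces, _, _ = measure.marching_cubes(seg, level=0.5)

    # One pass of uniform Laplacian smoothing: move each vertex toward the mean
    # of its face-adjacent neighbours (itself included).
    neighbors = [set() for _ in range(len(verts))]
    for tri in faces:
        for i in tri:
            neighbors[i].update(tri)
    smoothed = np.array([verts[list(nb)].mean(axis=0) for nb in neighbors])
    print(verts.shape, faces.shape, smoothed.shape)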

Enhancing Breast Density Assessment in Mammograms Through Artificial Intelligence.

da Rocha NC, Barbosa AMP, Schnr YO, Peres LDB, de Andrade LGM, de Magalhaes Rosa GJ, Pessoa EC, Corrente JE, de Arruda Silveira LV

PubMed · Sep 5 2025
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings. There is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study aims to develop and evaluate an open-source, computer vision-based approach using deep learning techniques for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external and independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, which is comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
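
The extreme learning machine layer mentioned above fits a classifier on fixed features without backpropagation: hidden weights are random and only the output weights are solved in closed form. The sketch below shows that idea on placeholder CNN features and BI-RADS labels; the hidden size and regularization are assumptions, not the published CD-CNN-ELM.

    import numpy as np

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 256))        # placeholder CNN feature vectors
    labels = rng.integers(0, 4, size=1000)      # BI-RADS A-D encoded as 0..3
    Y = np.eye(4)[labels]                       # one-hot targets

    n_hidden = 512
    W = rng.normal(size=(feats.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(feats @ W + b)                        # hidden-layer activations

    # Output weights via ridge-regularized least squares: beta = (H'H + cI)^-1 H'Y.
    beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_hidden), H.T @ Y)
    pred = np.argmax(H @ beta, axis=1)
    print("training accuracy:", (pred == labels).mean())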

Prenatal diagnosis of cerebellar hypoplasia in fetal ultrasound using deep learning under the constraint of the anatomical structures of the cerebellum and cistern.

Wu X, Liu F, Xu G, Ma Y, Cheng C, He R, Yang A, Gan J, Liang J, Wu X, Zhao S

PubMed · Sep 5 2025
The objective of this retrospective study is to develop and validate an artificial intelligence model constrained by the anatomical structures of the brain to improve the accuracy of prenatal diagnosis of fetal cerebellar hypoplasia using ultrasound imaging. Fetal central nervous system dysplasia is one of the most prevalent congenital malformations, and cerebellar hypoplasia represents a significant manifestation of this anomaly. Accurate clinical diagnosis is of great importance for prenatal screening of fetal health. Although ultrasound has been extensively utilized to assess fetal development, the accurate assessment of cerebellar development remains challenging due to the inherent limitations of ultrasound imaging, including low resolution, artifacts, and acoustic shadowing of the skull. This retrospective study included 302 cases diagnosed with cerebellar hypoplasia and 549 normal pregnancies collected from the Maternal and Child Health Hospital of Hubei Province between September 2019 and September 2023. For each case, experienced ultrasound physicians selected appropriate brain ultrasound images and delineated the boundaries of the skull, cerebellum, and cerebellomedullary cistern. These cases were divided into one training set and two test sets based on the examination dates. This study then proposed a dual-branch deep learning classification network, the anatomical structure-constrained network (ASC-Net), which takes ultrasound images and anatomical structure masks as separate inputs. The performance of ASC-Net was extensively evaluated and compared with several state-of-the-art deep learning networks, and the impact of the anatomical structures on its performance was carefully examined. ASC-Net demonstrated superior performance in the diagnosis of cerebellar hypoplasia, achieving classification accuracies of 0.9778 and 0.9222, as well as areas under the receiver operating characteristic curve of 0.9986 and 0.9265, on the two test sets. These results significantly outperformed several state-of-the-art networks on the same dataset. In comparison to other studies on computer-aided diagnosis of cerebellar hypoplasia, ASC-Net also demonstrated comparable or even better performance. A subgroup analysis revealed that ASC-Net was more capable of distinguishing cerebellar hypoplasia in cases with gestational age greater than 30 weeks. Furthermore, when constrained by the anatomical structures of both the cerebellum and cistern, ASC-Net exhibited the best performance compared with other kinds of structural constraint. The development and validation of ASC-Net have significantly enhanced the accuracy of prenatal diagnosis of cerebellar hypoplasia using ultrasound images. This study highlights the influence of the anatomical structures of the fetal cerebellum and cistern on the performance of diagnostic artificial intelligence models in ultrasound. This might provide new insights for the clinical diagnosis of cerebellar hypoplasia, assist clinicians in providing more targeted advice and treatment during pregnancy, and contribute to improved perinatal healthcare. ASC-Net is open-sourced and publicly available in a GitHub repository at https://github.com/Wwwwww111112/ASC-Net .
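
The dual-branch, mask-constrained design can be pictured as two image encoders whose features are fused before classification. The PyTorch sketch below is only a schematic in that spirit; layer sizes, channel counts, and the fusion scheme are assumptions, not the published ASC-Net architecture.

    import torch
    import torch.nn as nn

    def conv_branch(in_ch: int) -> nn.Sequential:
        """Small convolutional encoder producing a 32-dimensional feature vector."""
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class DualBranchNet(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.image_branch = conv_branch(1)   # grayscale ultrasound image
            self.mask_branch = conv_branch(3)    # skull / cerebellum / cistern masks
            self.head = nn.Linear(32 + 32, n_classes)

        def forward(self, image, mask):
            fused = torch.cat([self.image_branch(image), self.mask_branch(mask)], dim=1)
            return self.head(fused)

    net = DualBranchNet()
    logits = net(torch.randn(2, 1, 128, 128), torch.randn(2, 3, 128, 128))
    print(logits.shape)   # torch.Size([2, 2])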

A dual-branch encoder network based on squeeze-and-excitation UNet and transformer for 3D PET-CT image tumor segmentation.

Li M, Zhu R, Li M, Wang H, Teng Y

PubMed · Sep 5 2025
Recognition of tumors is very important in clinical practice and radiomics; however, the segmentation task currently still needs to be done manually by experts. With the development of deep learning, automatic segmentation of tumors is gradually becoming possible. This paper combines the molecular information from PET and the pathology information from CT for tumor segmentation. A dual-branch encoder is designed based on SE-UNet (Squeeze-and-Excitation Normalization UNet) and a Transformer; a 3D Convolutional Block Attention Module (CBAM) is added to the skip connections, and BCE loss is used in training to improve segmentation accuracy. The new model is named TASE-UNet. The proposed method was tested on the HECKTOR2022 dataset and obtained the best segmentation accuracy compared with state-of-the-art methods. Specifically, we obtained results of 76.10% and 3.27 for the two key evaluation metrics, DSC and HD95. Experiments demonstrate that the designed network is reasonable and effective. The full implementation is available at https://github.com/LiMingrui1/TASE-UNet .
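
Two of the named building blocks are easy to illustrate in isolation: a squeeze-and-excitation block that reweights encoder channels, and the BCE segmentation loss. The PyTorch sketch below shows generic versions of both under assumed shapes; it is not taken from the TASE-UNet repository.

    import torch
    import torch.nn as nn

    class SEBlock3D(nn.Module):
        """Channel-wise squeeze-and-excitation for 3D feature maps."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)                       # squeeze
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),  # excitation
            )

        def forward(self, x):
            b, c = x.shape[:2]
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w                                              # reweight channels

    feat = torch.randn(1, 32, 16, 16, 16)        # e.g. fused PET/CT encoder features
    out = SEBlock3D(32)(feat)

    # Binary cross-entropy loss on a sigmoid-activated voxel-wise prediction.
    pred = torch.sigmoid(torch.randn(1, 1, 16, 16, 16))
    target = torch.randint(0, 2, (1, 1, 16, 16, 16)).float()
    loss = nn.BCELoss()(pred, target)
    print(out.shape, loss.item())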

A Replicable and Generalizable Neuroimaging-Based Indicator of Pain Sensitivity Across Individuals.

Zhang LB, Lu XJ, Zhang HJ, Wei ZX, Kong YZ, Tu YH, Iannetti GD, Hu L

PubMed · Sep 5 2025
Revealing the neural underpinnings of pain sensitivity is crucial for understanding how the brain encodes individual differences in pain and advancing personalized pain treatments. Here, six large and diverse functional magnetic resonance imaging (fMRI) datasets (total N = 1046) are leveraged to uncover the neural mechanisms of pain sensitivity. Replicable and generalizable correlations are found between nociceptive-evoked fMRI responses and pain sensitivity for laser heat, contact heat, and mechanical pains. These fMRI responses correlate more strongly with pain sensitivity than with tactile, auditory, and visual sensitivity. Moreover, a machine learning model is developed that accurately predicts not only pain sensitivity (r = 0.20∼0.56, ps < 0.05) but also analgesic effects of different treatments in healthy individuals (r = 0.17∼0.25, ps < 0.05). Notably, these findings are influenced considerably by sample sizes, requiring >200 for univariate whole brain correlation analysis and >150 for multivariate machine learning modeling. Altogether, this study demonstrates that fMRI activations encode pain sensitivity across various types of pain, thus facilitating interpretations of subjective pain reports and promoting more mechanistically informed investigations into pain physiology.
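
The multivariate modelling step amounts to cross-validated prediction of a continuous pain-sensitivity score from fMRI features, scored by the correlation between predicted and observed values. Below is a hedged Python sketch of that procedure with synthetic data and a ridge regressor standing in for the authors' model.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 500))                    # placeholder fMRI features
    y = X[:, :5].sum(axis=1) + rng.normal(size=300)    # synthetic pain-sensitivity scores

    # Out-of-fold predictions, then Pearson r between predicted and observed scores.
    pred = cross_val_predict(Ridge(alpha=10.0), X, y,
                             cv=KFold(n_splits=10, shuffle=True, random_state=0))
    r, p = pearsonr(pred, y)
    print(f"prediction-observation correlation: r = {r:.2f}, p = {p:.1e}")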

Preoperative Assessment of Extraprostatic Extension in Prostate Cancer Using an Interpretable Tabular Prior-Data Fitted Network-Based Radiomics Model From MRI.

Liu BC, Ding XH, Xu HH, Bai X, Zhang XJ, Cui MQ, Guo AT, Mu XT, Xie LZ, Kang HH, Zhou SP, Zhao J, Wang BJ, Wang HY

PubMed · Sep 5 2025
MRI assessment of extraprostatic extension (EPE) of prostate cancer (PCa) is challenging due to limited accuracy and interobserver agreement. To develop an interpretable Tabular Prior-data Fitted Network (TabPFN)-based radiomics model to evaluate EPE using MRI and explore its integration with radiologists' assessments. Retrospective. Five hundred and thirteen consecutive patients who underwent radical prostatectomy. Four hundred and eleven patients from center 1 (mean age 67 ± 7 years) formed the training (287 patients) and internal test (124 patients) sets, and 102 patients from center 2 (mean age 66 ± 6 years) were assigned as an external test set. Three Tesla, fast spin echo T2-weighted imaging (T2WI) and diffusion-weighted imaging using single-shot echo planar imaging. Radiomics features were extracted from T2WI and apparent diffusion coefficient maps, and the TabRadiomics model was developed using TabPFN. Three machine learning models served as baseline comparisons: support vector machine, random forest, and categorical boosting. Two radiologists (with > 1500 and > 500 prostate MRI interpretations, respectively) independently evaluated EPE grade on MRI. Artificial intelligence (AI)-modified EPE grading algorithms incorporating the TabRadiomics model with radiologists' interpretations of curvilinear contact length and frank EPE were simulated. Area under the receiver operating characteristic curve (AUC), DeLong test, and McNemar test; p < 0.05 was considered significant. The TabRadiomics model performed comparably to the baseline machine learning models in both internal and external tests, with AUCs of 0.806 (95% CI, 0.727-0.884) and 0.842 (95% CI, 0.770-0.912), respectively. AI-modified algorithms showed significantly higher accuracies compared with the less experienced reader in internal testing, with up to 34.7% of interpretations requiring no radiologist input; however, no difference was observed for either reader in the external test set. The TabRadiomics model demonstrated high performance in EPE assessment and may improve clinical assessment in PCa. Evidence Level: 4. Technical Efficacy: Stage 2.
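
TabPFN is applied to tabular feature vectors much like any scikit-learn classifier. The sketch below (assumptions only, not the study code) fits a TabPFN classifier on placeholder radiomics features for binary EPE status and reports the AUC; it requires the open-source tabpfn package.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from tabpfn import TabPFNClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(411, 30))         # placeholder T2WI/ADC radiomics features
    y = rng.integers(0, 2, size=411)       # placeholder labels: 1 = pathological EPE

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = TabPFNClassifier()               # prior-data fitted transformer; no gradient training
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"test AUC: {auc:.3f}")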

Predicting Efficacy of Neoadjuvant Chemoradiotherapy for Locally Advanced Rectal Cancer Using Transrectal Contrast-Enhanced Ultrasound-Based Radiomics Model.

Liao Z, Yang Y, Luo Y, Yin H, Jing J, Zhuang H

PubMed · Sep 5 2025
Accurate preoperative prediction of tumor regression grade (TRG) after neoadjuvant chemoradiotherapy (NCRT) in patients with locally advanced rectal cancer (LARC) is crucial for providing individualized treatment plans. This study aims to develop transrectal contrast-enhanced ultrasound (TR-CEUS)-based radiomics models for predicting TRG. A total of 190 LARC patients undergoing NCRT and subsequent total mesorectal excision were categorized into good and poor response groups based on pathological TRG. TR-CEUS examinations were conducted before and after NCRT. Machine learning (ML) models for predicting TRG were developed from pre- and post-NCRT TR-CEUS image series, based on seven classifiers, including random forest (RF) and multi-layer perceptron (MLP), among others. The predictive performance of the models was evaluated using receiver operating characteristic curve analysis and the DeLong test. A total of 1525 TR-CEUS images were included for analysis, and 3360 ML models were constructed from image series before and after NCRT, respectively. The optimal model constructed from pre-NCRT imaging was RF, whereas the optimal model derived from post-NCRT imaging was MLP. The areas under the curve for the optimal RF and MLP models were 0.609 and 0.857, respectively, in the cross-validation cohort, with corresponding values of 0.659 and 0.841 in the independent test cohort. DeLong tests showed that the predictive efficacy of the post-NCRT model was statistically higher than that of the pre-NCRT model (p < 0.05). The radiomics model developed from post-NCRT TR-CEUS images demonstrated high predictive performance for TRG, thereby facilitating precise evaluation of therapeutic response to NCRT in LARC patients.
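
The model comparison reported above boils down to training different classifier families on the same radiomics features and comparing ROC AUCs. A minimal sketch with two of the seven classifier families (random forest and MLP) on placeholder features follows; it is not the study's 3360-model search.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(190, 100))        # placeholder TR-CEUS radiomics features
    y = rng.integers(0, 2, size=190)       # placeholder labels: 1 = good TRG response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    for name, model in [("RF", RandomForestClassifier(random_state=0)),
                        ("MLP", MLPClassifier(max_iter=1000, random_state=0))]:
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")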

Prediction of intracranial aneurysm rupture from computed tomography angiography using an automated artificial intelligence framework.

Choi JH, Sobisch J, Kim M, Park JC, Ahn JS, Kwun BD, Špiclin Ž, Bizjak Ž, Park W

PubMed · Sep 5 2025
Intracranial aneurysms (IAs) are common vascular pathologies with a risk of fatal rupture. Human assessment of rupture risk is error prone, and treatment decisions for unruptured IAs often rely on expert opinion and institutional policy. Therefore, we aimed to develop a computer-assisted aneurysm rupture prediction framework to help guide the decision-making process and create future decision criteria. This retrospective study included 335 patients with 500 IAs; of the 500 IAs studied, 250 were labeled as ruptured and 250 as unruptured. A skilled radiologist and a neurosurgeon visually examined the computed tomography angiography (CTA) images and labeled the IAs. For external validation, we included 24 IAs (10 ruptured and 15 unruptured) imaged with 3D rotational angiography (3D-RA) from the Aneurisk dataset. A pretrained nnU-Net model was used for automated vessel segmentation, whose output was fed to pretrained PointNet++ models for vessel labeling and aneurysm segmentation. From these, latent keypoint representations were extracted as vessel-shape and aneurysm-shape features, respectively. Additionally, conventional features, such as IA morphological measurements and location, and patient data, such as age and sex, were used to train and test eight machine learning models for rupture-status classification. The top-performing model, a random forest with feature selection, achieved an area under the receiver operating characteristic curve (AUC) of 0.851, an accuracy of 0.782, a sensitivity of 0.804, and a specificity of 0.760. This model used 14 aneurysm-shape features, seven conventional features, and one vessel-shape feature. On the external dataset, it achieved an AUC of 0.805. While aneurysm-shape features consistently contributed significantly across the classification models, vessel-shape features contributed only a small portion. Our proposed automated artificial intelligence framework could assist in clinical decision-making by assessing aneurysm rupture risk from screening tests such as CTA and 3D-RA.
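
Only the final classification stage is easy to sketch without the imaging pipeline: a random forest with model-based feature selection over the concatenated shape and conventional features, reported with AUC, accuracy, sensitivity, and specificity. The Python sketch below uses placeholder features and is not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))     # placeholder: shape features + morphology + age/sex
    y = rng.integers(0, 2, size=500)   # placeholder labels: 1 = ruptured IA

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    selector = SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0))
    selector.fit(X_tr, y_tr)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(selector.transform(X_tr), y_tr)

    prob = clf.predict_proba(selector.transform(X_te))[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"AUC {roc_auc_score(y_te, prob):.3f}  accuracy {accuracy_score(y_te, pred):.3f}  "
          f"sensitivity {tp / (tp + fn):.3f}  specificity {tn / (tn + fp):.3f}")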