A dual-branch encoder network based on squeeze-and-excitation UNet and transformer for 3D PET-CT image tumor segmentation.

Li M, Zhu R, Li M, Wang H, Teng Y

PubMed paper · Sep 5 2025
Tumor recognition is critically important in clinical practice and radiomics, yet segmentation is still performed manually by experts. With advances in deep learning, automatic tumor segmentation is gradually becoming feasible. This paper combines the molecular information from PET with the pathology information from CT for tumor segmentation. A dual-branch encoder is designed based on SE-UNet (Squeeze-and-Excitation Normalization UNet) and a Transformer, a 3D Convolutional Block Attention Module (CBAM) is added to the skip connections, and BCE loss is used during training to improve segmentation accuracy. The new model is named TASE-UNet. The proposed method was evaluated on the HECKTOR2022 dataset, where it achieves the best segmentation accuracy compared with state-of-the-art methods: 76.10% and 3.27 for the two key evaluation metrics, DSC and HD95, respectively. Experiments demonstrate that the designed network is reasonable and effective. The full implementation is available at https://github.com/LiMingrui1/TASE-UNet .
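To make the architecture concrete, here is a minimal PyTorch sketch (not the authors' released code) of two components the abstract names: a 3D squeeze-and-excitation block and a dual-branch encoder stage fusing a convolutional branch with a transformer branch. Channel sizes, the additive fusion, and the toy input are illustrative assumptions; the actual TASE-UNet is at the linked repository.

```python
# Hedged sketch of an SE block and dual-branch stage; not the paper's code.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation for 3D feature maps (e.g., PET-CT volumes)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)           # squeeze: global context
        self.fc = nn.Sequential(                      # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, *_ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # reweight channels

class DualBranchStage(nn.Module):
    """One encoder stage: SE conv branch + transformer branch, fused by sum."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            SEBlock3D(channels),
        )
        self.attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.conv(x)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, D*H*W, C)
        globl = self.attn(tokens).transpose(1, 2).view(b, c, d, h, w)
        return local + globl                           # assumed additive fusion

x = torch.randn(1, 32, 8, 16, 16)                      # toy feature map
print(DualBranchStage(32)(x).shape)                    # torch.Size([1, 32, 8, 16, 16])
```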

Preoperative Assessment of Extraprostatic Extension in Prostate Cancer Using an Interpretable Tabular Prior-Data Fitted Network-Based Radiomics Model From MRI.

Liu BC, Ding XH, Xu HH, Bai X, Zhang XJ, Cui MQ, Guo AT, Mu XT, Xie LZ, Kang HH, Zhou SP, Zhao J, Wang BJ, Wang HY

PubMed paper · Sep 5 2025
MRI assessment of extraprostatic extension (EPE) in prostate cancer (PCa) is challenging due to limited accuracy and interobserver agreement. This retrospective study developed an interpretable Tabular Prior-data Fitted Network (TabPFN)-based radiomics model to evaluate EPE on MRI and explored its integration with radiologists' assessments. Five hundred thirteen consecutive patients who underwent radical prostatectomy were included: 411 patients from center 1 (mean age 67 ± 7 years) formed the training (287 patients) and internal test (124 patients) sets, and 102 patients from center 2 (mean age 66 ± 6 years) served as an external test set. Imaging was performed at 3 T with fast spin echo T2-weighted imaging (T2WI) and single-shot echo planar diffusion-weighted imaging. Radiomics features were extracted from T2WI and apparent diffusion coefficient maps, and the TabRadiomics model was developed using TabPFN. Three machine learning models served as baseline comparisons: support vector machine, random forest, and categorical boosting. Two radiologists (with >1500 and >500 prostate MRI interpretations, respectively) independently graded EPE on MRI. Artificial intelligence (AI)-modified EPE grading algorithms incorporating the TabRadiomics model with radiologists' interpretations of curvilinear contact length and frank EPE were simulated. Statistical analysis used the area under the receiver operating characteristic curve (AUC), the DeLong test, and the McNemar test; p < 0.05 was considered significant. The TabRadiomics model performed comparably to the baseline machine learning models in both internal and external tests, with AUCs of 0.806 (95% CI, 0.727-0.884) and 0.842 (95% CI, 0.770-0.912), respectively. AI-modified algorithms showed significantly higher accuracy than the less experienced reader in internal testing, with up to 34.7% of interpretations requiring no radiologist input; however, no difference was observed for either reader in the external test set. The TabRadiomics model demonstrated high performance in EPE assessment and may improve clinical assessment in PCa. Level of Evidence: 4. Technical Efficacy: Stage 2.
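For readers unfamiliar with TabPFN, the modeling step reduces to a few lines with the public tabpfn package's sklearn-style interface. The sketch below is a hedged illustration, not the study's pipeline: the feature matrix, labels, and split are random placeholders standing in for the T2WI/ADC radiomics features and EPE labels.

```python
# Hedged sketch: TabPFN on tabular radiomics features, scored with ROC AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from tabpfn import TabPFNClassifier  # pip install tabpfn

rng = np.random.default_rng(0)
X = rng.normal(size=(411, 20))        # stand-in radiomics feature matrix
y = rng.integers(0, 2, size=411)      # stand-in EPE labels (0 = no, 1 = yes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = TabPFNClassifier()              # prior-data fitted network, no tuning needed
clf.fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, prob):.3f}")
```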

A Replicable and Generalizable Neuroimaging-Based Indicator of Pain Sensitivity Across Individuals.

Zhang LB, Lu XJ, Zhang HJ, Wei ZX, Kong YZ, Tu YH, Iannetti GD, Hu L

PubMed paper · Sep 5 2025
Revealing the neural underpinnings of pain sensitivity is crucial for understanding how the brain encodes individual differences in pain and advancing personalized pain treatments. Here, six large and diverse functional magnetic resonance imaging (fMRI) datasets (total N = 1046) are leveraged to uncover the neural mechanisms of pain sensitivity. Replicable and generalizable correlations are found between nociceptive-evoked fMRI responses and pain sensitivity for laser heat, contact heat, and mechanical pain. These fMRI responses correlate more strongly with pain sensitivity than with tactile, auditory, and visual sensitivity. Moreover, a machine learning model is developed that accurately predicts not only pain sensitivity (r = 0.20-0.56, ps < 0.05) but also the analgesic effects of different treatments in healthy individuals (r = 0.17-0.25, ps < 0.05). Notably, these findings are influenced considerably by sample size, requiring >200 participants for univariate whole-brain correlation analyses and >150 for multivariate machine learning modeling. Altogether, this study demonstrates that fMRI activations encode pain sensitivity across various types of pain, thus facilitating the interpretation of subjective pain reports and promoting more mechanistically informed investigations into pain physiology.
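The two analysis styles the abstract describes, univariate brain-behavior correlation and cross-validated multivariate prediction scored with Pearson r, can be sketched as follows. Everything here is simulated stand-in data; RidgeCV is an assumed choice of multivariate model, not necessarily the one the authors used.

```python
# Illustrative sketch, not the authors' pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 300                                # simulated participants
roi_amplitude = rng.normal(size=n)     # stand-in nociceptive-evoked fMRI response
pain = 0.4 * roi_amplitude + rng.normal(scale=1.0, size=n)  # stand-in ratings

r, p = pearsonr(roi_amplitude, pain)   # univariate brain-behavior correlation
print(f"univariate r = {r:.2f}, p = {p:.2g}")

X = rng.normal(size=(n, 500))          # stand-in voxelwise activation maps
X[:, 0] += pain                        # embed signal in one feature for the demo
pred = cross_val_predict(RidgeCV(), X, pain, cv=10)
print(f"cross-validated r = {pearsonr(pred, pain)[0]:.2f}")
```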

AI-powered automated model construction for patient-specific CFD simulations of aortic flows.

Du P, An D, Wang C, Wang JX

PubMed paper · Sep 5 2025
Image-based modeling is essential for understanding cardiovascular hemodynamics and advancing the diagnosis and treatment of cardiovascular diseases. Constructing patient-specific vascular models remains labor-intensive, error-prone, and time-consuming, limiting their clinical applications. This study introduces a deep-learning framework that automates the creation of simulation-ready vascular models from medical images. The framework integrates a segmentation module for accurate voxel-based vessel delineation with a surface deformation module that performs anatomically consistent and unsupervised surface refinements guided by medical image data. The integrated pipeline addresses key limitations of existing methods, enhancing geometric accuracy and computational efficiency. Evaluated on public datasets, it achieves state-of-the-art segmentation performance while substantially reducing manual effort and processing time. The resulting vascular models exhibit anatomically accurate and visually realistic geometries, effectively capturing both primary vessels and intricate branching patterns. In conclusion, this work advances the scalability and reliability of image-based computational modeling, facilitating broader applications in clinical and research settings.
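One step such a pipeline must perform is converting the voxel-based segmentation into a surface mesh that downstream CFD meshing tools can consume. The sketch below is a simplified stand-in for that step using marching cubes from scikit-image on a toy cylindrical "vessel"; the paper's learned, image-guided surface-deformation module is not reproduced here.

```python
# Simplified stand-in for the mask-to-surface step; not the authors' framework.
import numpy as np
from skimage import measure

# Toy binary segmentation: a cylinder standing in for an aortic segment.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mask = ((y - 32) ** 2 + (x - 32) ** 2) < 10 ** 2

# Extract a triangulated surface at the mask boundary (level 0.5).
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(1.0, 1.0, 1.0)
)
print(f"{len(verts)} vertices, {len(faces)} triangles")
# In the paper's pipeline this surface would then be refined by a learned
# deformation module before CFD meshing.
```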

Enhancing Breast Density Assessment in Mammograms Through Artificial Intelligence.

da Rocha NC, Barbosa AMP, Schnr YO, Peres LDB, de Andrade LGM, de Magalhaes Rosa GJ, Pessoa EC, Corrente JE, de Arruda Silveira LV

PubMed paper · Sep 5 2025
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings. There is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study aims to develop and evaluate an open-source, computer vision-based approach using deep learning techniques for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external and independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, which is comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
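The ELM layer mentioned in the methods is the classical extreme learning machine: a random, fixed hidden layer whose output weights are solved in closed form. A minimal NumPy sketch, with stand-in CNN features and BI-RADS A-D labels, looks like this (dimensions are illustrative assumptions):

```python
# Classical ELM head on CNN feature vectors; stand-in data throughout.
import numpy as np

rng = np.random.default_rng(42)
n, d, hidden, classes = 1000, 256, 512, 4

feats = rng.normal(size=(n, d))            # stand-in CNN feature vectors
labels = rng.integers(0, classes, size=n)  # stand-in BI-RADS density labels
Y = np.eye(classes)[labels]                # one-hot targets

W = rng.normal(size=(d, hidden))           # random input weights (never trained)
b = rng.normal(size=hidden)
H = np.tanh(feats @ W + b)                 # random nonlinear feature map

beta = np.linalg.pinv(H) @ Y               # closed-form output weights

pred = np.argmax(H @ beta, axis=1)
print(f"training accuracy: {(pred == labels).mean():.3f}")
```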

Prenatal diagnosis of cerebellar hypoplasia in fetal ultrasound using deep learning under the constraint of the anatomical structures of the cerebellum and cistern.

Wu X, Liu F, Xu G, Ma Y, Cheng C, He R, Yang A, Gan J, Liang J, Wu X, Zhao S

PubMed paper · Sep 5 2025
This retrospective study aimed to develop and validate an artificial intelligence model constrained by the anatomical structures of the brain to improve the accuracy of prenatal diagnosis of fetal cerebellar hypoplasia on ultrasound imaging. Fetal central nervous system dysplasia is one of the most prevalent congenital malformations, and cerebellar hypoplasia is a significant manifestation of it; accurate clinical diagnosis is of great importance for prenatal screening of fetal health. Although ultrasound is used extensively to assess fetal development, accurate assessment of cerebellar development remains challenging because of inherent limitations of ultrasound imaging, including low resolution, artifacts, and acoustic shadowing from the skull. The study included 302 cases diagnosed with cerebellar hypoplasia and 549 normal pregnancies collected at the Maternal and Child Health Hospital of Hubei Province between September 2019 and September 2023. For each case, experienced ultrasound physicians selected appropriate brain ultrasound images and delineated the boundaries of the skull, cerebellum, and cerebellomedullary cistern. Cases were divided into one training set and two test sets based on examination dates. The study proposed a dual-branch deep learning classification network, the anatomical structure-constrained network (ASC-Net), which takes ultrasound images and anatomical structure masks as separate inputs. The performance of ASC-Net was evaluated extensively against several state-of-the-art deep learning networks, and the impact of the anatomical structures on its performance was examined. ASC-Net demonstrated superior performance in diagnosing cerebellar hypoplasia, achieving classification accuracies of 0.9778 and 0.9222 and areas under the receiver operating characteristic curve of 0.9986 and 0.9265 on the two test sets, significantly outperforming several state-of-the-art networks on the same dataset. Compared with other studies on computer-aided diagnosis of cerebellar hypoplasia, ASC-Net showed comparable or better performance. A subgroup analysis revealed that ASC-Net distinguished cerebellar hypoplasia more reliably in cases with gestational age greater than 30 weeks. Furthermore, when constrained by the anatomical structures of both the cerebellum and the cistern, ASC-Net performed best among the structural-constraint variants. ASC-Net substantially improves the accuracy of prenatal ultrasound diagnosis of cerebellar hypoplasia and highlights the importance of the anatomical structures of the fetal cerebellum and cistern to diagnostic AI models in ultrasound. This may provide new insights for the clinical diagnosis of cerebellar hypoplasia, help clinicians provide more targeted advice and treatment during pregnancy, and contribute to improved perinatal care. ASC-Net is open source and publicly available at https://github.com/Wwwwww111112/ASC-Net .
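The dual-branch input design can be sketched as follows: one encoder for the ultrasound image, one for the anatomical-structure masks, with fused features driving the diagnosis. This is a hedged toy illustration of the idea, not ASC-Net itself (which is available at the linked repository); layer sizes and the concatenation fusion are assumptions.

```python
# Toy dual-branch classifier over image + anatomical masks; not ASC-Net.
import torch
import torch.nn as nn

def small_encoder(in_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualBranchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_branch = small_encoder(1)   # grayscale ultrasound
        self.mask_branch = small_encoder(3)    # skull / cerebellum / cistern masks
        self.head = nn.Linear(64, 2)           # normal vs. cerebellar hypoplasia

    def forward(self, image, masks):
        f = torch.cat([self.image_branch(image), self.mask_branch(masks)], dim=1)
        return self.head(f)

model = DualBranchClassifier()
logits = model(torch.randn(2, 1, 256, 256), torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2, 2])
```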

Implementation of Fully Automated AI-Integrated System for Body Composition Assessment on Computed Tomography for Opportunistic Sarcopenia Screening: Multicenter Prospective Study.

Urooj B, Ko Y, Na S, Kim IO, Lee EH, Cho S, Jeong H, Khang S, Lee J, Kim KW

PubMed paper · Sep 5 2025
Opportunistic computed tomography (CT) screening for the evaluation of sarcopenia and myosteatosis has been gaining emphasis, and a fully automated artificial intelligence (AI)-integrated system for body composition assessment on CT scans is a prerequisite for effective opportunistic screening. However, no study has evaluated the implementation of such a system in real-world clinical practice for routine health check-ups. The aim of this study was to evaluate the performance and clinical utility of a fully automated AI-integrated system for body composition assessment on opportunistic CT during routine health check-ups. This prospective multicenter study included 537 patients who underwent routine health check-ups across 3 institutions. The AI pipeline selects the L3 slice and segments muscle and fat areas in an end-to-end manner, and was integrated into the Picture Archiving and Communication System (PACS) at each institution. Technical success rate, processing time, and segmentation accuracy (Dice similarity coefficient) were assessed, and body composition metrics were analyzed across age and sex groups. The system successfully retrieved anonymized CT images from the PACS, performed L3 selection and segmentation, and provided body composition metrics, including muscle quality maps and muscle age. The technical success rate was 100%, with no failed cases requiring manual adjustment. The mean processing time from CT acquisition to report generation was 4.12 seconds. Segmentation accuracy between AI and human expert results was 97.4%. Significant age-related declines in skeletal muscle area and normal-attenuation muscle area were observed, alongside increases in low-attenuation muscle area and intramuscular adipose tissue. Implementation of the fully automated AI-integrated system enhanced opportunistic sarcopenia screening, achieving excellent technical success and high segmentation accuracy without manual intervention. This system has the potential to transform routine health check-ups by providing rapid and accurate assessment of body composition.
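The 97.4% segmentation accuracy is reported as a Dice similarity coefficient between AI and expert masks. For reference, the metric itself is a one-liner; the masks below are toy placeholders:

```python
# Dice similarity coefficient on binary masks; toy data for illustration.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

ai_mask = np.zeros((512, 512), dtype=bool)
ai_mask[100:300, 100:300] = True          # stand-in AI muscle segmentation
expert_mask = np.zeros((512, 512), dtype=bool)
expert_mask[105:305, 100:300] = True      # stand-in expert segmentation

print(f"DSC = {dice(ai_mask, expert_mask):.3f}")
```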

Prediction of bronchopulmonary dysplasia using machine learning from chest X-rays of premature infants in the neonatal intensive care unit.

Ozcelik G, Erol S, Korkut S, Kose Cetinkaya A, Ozcelik H

PubMed paper · Sep 5 2025
Bronchopulmonary dysplasia (BPD) is a significant morbidity in premature infants. This study aimed to develop an artificial intelligence model that predicts BPD from chest radiographs and to assess the accuracy of its predictions against clinical outcomes. In this retrospective model development and validation study, medical records of premature infants born at ≤ 28 weeks and < 1250 g between January 1, 2020, and December 31, 2021, in the neonatal intensive care unit were obtained, and a model was developed using the DenseNet121 deep learning architecture. The training and test sets consisted of chest radiographs obtained on postnatal day 1 and during the 2nd, 3rd, and 4th weeks. The model predicted the likelihood of developing no BPD, or mild, moderate, or severe BPD, and its accuracy was tested against the clinical outcomes of patients. The study included 122 premature infants with a birth weight of 990 g (range: 840-1120 g). Of these, 33 (27%) did not develop BPD, 24 (19.7%) had mild BPD, 28 (23%) had moderate BPD, and 37 (30%) had severe BPD. A total of 395 chest radiographs from these patients were used to develop the model. Area under the curve values for predicting severe, moderate, mild, and no BPD were 0.79, 0.75, 0.82, and 0.82 for day 1 radiographs; 0.88, 0.82, 0.74, and 0.94 for week 2 radiographs; 0.87, 0.83, 0.88, and 0.96 for week 3 radiographs; and 0.90, 0.82, 0.86, and 0.97 for week 4 radiographs. The model successfully identified BPD on chest radiographs and classified its severity; its accuracy could be improved further with larger control and external validation datasets.
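As a point of reference, adapting DenseNet121 to this task (single-channel radiographs in, four BPD grades out) takes only a few lines in torchvision. The preprocessing, weights, and evaluation shown are assumptions for illustration, not the authors' released configuration:

```python
# Hedged sketch: DenseNet121 adapted to 4-class BPD grading.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)
# Accept 1-channel radiographs instead of RGB.
model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)
# DenseNet121's classifier input is 1024 features; map to 4 BPD grades.
model.classifier = nn.Linear(1024, 4)

xray = torch.randn(8, 1, 224, 224)      # toy batch of radiographs
probs = torch.softmax(model(xray), dim=1)
print(probs.shape)                       # torch.Size([8, 4])
# Per-class AUCs as in the paper would be computed one-vs-rest, e.g. with
# sklearn.metrics.roc_auc_score(y_true, probs, multi_class="ovr").
```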

AI-based synthetic simulation CT generation from diagnostic CT for simulation-free workflow of spinal palliative radiotherapy

Han, Y., Hanania, A. N., Siddiqui, Z. A., Ugarte, V., Zhou, B., Mohamed, A. S. R., Pathak, P., Hamstra, D. A., Sun, B.

medRxiv preprint · Sep 5 2025
Purpose/Objective: Current radiotherapy (RT) planning workflows rely on pre-treatment simulation CT (sCT), which can significantly delay treatment initiation, particularly in resource-constrained settings. While diagnostic CT (dCT) offers a potential alternative for expedited planning, inherent geometric discrepancies from sCT in patient positioning and table curvature limit its direct use for accurate RT planning. This study presents a novel AI-based method designed to overcome these limitations by generating synthetic simulation CT (ssCT) directly from standard dCT for spinal palliative RT, aiming to eliminate the need for sCT and accelerate the treatment workflow. Materials/Methods: ssCTs were generated using two neural network models to adjust spine position and correct table curvature. The neural networks use a three-layer structure (ReLU activation), optimized by Adam with MSE loss and MAE metrics. The models were trained on paired dCT and sCT images from 30 patients undergoing palliative spine radiotherapy at a safety-net hospital, with 22 cases used for training and 8 for testing. To explore institutional dependence, the models were also tested on 7 patients from an academic medical center (AMC). To evaluate ssCT accuracy, both ssCT and dCT were aligned with sCT using the same frame-of-reference rigid registration on bone windows. Dosimetric differences were assessed by comparing dCT vs. sCT and ssCT vs. sCT, quantifying deviations in dose-volume histogram (DVH) metrics, including Dmean, Dmax, D95, D99, V100, V107, and root-mean-square (RMS) differences. Imaging and plan quality were assessed by four radiation oncologists using a Likert score, and the Wilcoxon signed-rank test was used to determine whether there was a significant difference between the two methods. Results: For the safety-net hospital cases, the generated ssCT demonstrated significantly improved geometric and dosimetric accuracy compared to dCT. ssCT reduced the mean difference in key dosimetric parameters (e.g., the Dmean difference decreased from 2.0% for dCT vs. sCT to 0.57% for ssCT vs. sCT, a significant improvement under the Wilcoxon signed-rank test) and achieved a significant reduction in the RMS difference of DVH curves (from 6.4% to 2.2%). Furthermore, physician evaluations showed that ssCT was consistently rated as significantly superior for treatment planning images (mean scores improving from "Acceptable" for dCT to "Good to Perfect" for ssCT), reflecting improved confidence in target and tissue positioning. In the academic medical center cohort, where technologists already apply meticulous pre-scan alignment, ssCT still yielded statistically significant, though smaller, improvements in several dosimetric endpoints and in observer ratings. Conclusion: Our AI-driven approach successfully generates ssCT from dCT with geometric and dosimetric accuracy comparable to sCT for spinal palliative RT planning. By specifically addressing critical discrepancies such as spine position and table curvature, this method offers a robust way to bypass dedicated sCT simulations. This advancement has the potential to significantly streamline the RT workflow, reduce treatment uncertainties, and accelerate time to treatment, offering a highly promising solution for improving access to timely and accurate radiotherapy, especially in resource-limited environments.
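The training recipe stated in the methods (a three-layer ReLU network, Adam, MSE loss, MAE metric) is easy to reproduce in outline. In the sketch below, the mapping from dCT spine landmark coordinates to their sCT positions is an assumed input/output pairing for illustration; the preprint does not specify the exact tensors:

```python
# Sketch of the stated recipe: 3-layer ReLU MLP, Adam, MSE loss, MAE metric.
import torch
import torch.nn as nn

model = nn.Sequential(                 # three-layer structure, ReLU activations
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

x = torch.randn(512, 3)                # assumed dCT landmark coordinates
y = x + torch.tensor([0.0, 0.5, 0.0])  # assumed sCT positions (toy table shift)

for epoch in range(200):
    opt.zero_grad()
    loss = mse(model(x), y)            # MSE loss drives optimization
    loss.backward()
    opt.step()

mae = (model(x) - y).abs().mean()      # MAE tracked as the metric
print(f"final MAE: {mae.item():.4f}")
```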

Predicting Efficacy of Neoadjuvant Chemoradiotherapy for Locally Advanced Rectal Cancer Using Transrectal Contrast-Enhanced Ultrasound-Based Radiomics Model.

Liao Z, Yang Y, Luo Y, Yin H, Jing J, Zhuang H

PubMed paper · Sep 5 2025
Accurate preoperative prediction of tumor regression grade (TRG) after neoadjuvant chemoradiotherapy (NCRT) in patients with locally advanced rectal cancer (LARC) is crucial for individualized treatment planning. This study aimed to develop transrectal contrast-enhanced ultrasound (TR-CEUS)-based radiomics models for predicting TRG. A total of 190 LARC patients undergoing NCRT and subsequent total mesorectal excision were categorized into good and poor response groups based on pathological TRG. TR-CEUS examinations were performed before and after NCRT. Machine learning (ML) models for predicting TRG were developed from the pre- and post-NCRT TR-CEUS image series using seven classifiers, including random forest (RF) and multi-layer perceptron (MLP). Predictive performance was evaluated using receiver operating characteristic curve analysis and the DeLong test. In total, 1525 TR-CEUS images were included, and 3360 ML models were constructed from the image series before and after NCRT. The optimal pre-NCRT model, built from imaging before NCRT, was the RF; the optimal post-NCRT model, built from imaging after NCRT, was the MLP. The areas under the curve for the optimal RF and MLP models were 0.609 and 0.857, respectively, in the cross-validation cohort, and 0.659 and 0.841 in the independent test cohort. DeLong tests showed that the predictive efficacy of the post-NCRT model was significantly higher than that of the pre-NCRT model (p < 0.05). The radiomics model developed from post-NCRT TR-CEUS images demonstrated high predictive performance for TRG, facilitating precise evaluation of the therapeutic response to NCRT in LARC patients.
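The model-comparison step can be illustrated with two of the seven named classifiers in scikit-learn, scored by test-set AUC on placeholder radiomics features. The DeLong comparison the authors report requires a dedicated implementation and is not shown:

```python
# Hedged sketch of the RF vs. MLP comparison; data and split are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(190, 50))          # stand-in TR-CEUS radiomics features
y = rng.integers(0, 2, size=190)        # stand-in TRG response labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

for name, clf in [
    ("RF", RandomForestClassifier(n_estimators=200, random_state=7)),
    ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=7)),
]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```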