
Association of peripheral immune markers with brain age and dementia risk estimated using deep learning methods.

Huang X, Yuan S, Ling Y, Tan S, Bai Z, Xu Y, Shen S, Lyu J, Wang H

pubmed · Jun 25, 2025
The peripheral immune system is essential for maintaining central nervous system homeostasis. This study investigates the effects of peripheral immune markers on accelerated brain aging and dementia using the brain-predicted age difference derived from neuroimaging. Leveraging data from the UK Biobank, Cox regression was used to explore the relationship between peripheral immune markers and dementia, and multivariate linear regression to assess associations between peripheral immune biomarkers and brain structure. Additionally, we established a brain age prediction model using the Simple Fully Convolutional Network (SFCN) deep learning architecture. Analysis of the resulting brain-predicted age difference (PAD) revealed relationships between accelerated brain aging, peripheral immune markers, and dementia. Over a median follow-up of 14.3 years, 4,277 dementia cases were observed among 322,761 participants. Both innate and adaptive immune markers correlated with dementia risk. The neutrophil-to-lymphocyte ratio (NLR) showed the strongest association with dementia risk (HR = 1.14; 95% CI: 1.11-1.18, P < 0.001). Multivariate linear regression revealed significant associations between peripheral immune markers and regional brain structural indices. Using the deep learning-based SFCN model, brain age was estimated for dementia subjects (MAE = 5.63, r2 = -0.46, R = 0.22). PAD showed significant correlations with dementia risk and certain peripheral immune markers, particularly in individuals with a positive brain age increment. This study employs brain age as a quantitative marker of accelerated brain aging to investigate its potential associations with peripheral immunity and dementia, highlighting the importance of early intervention targeting peripheral immune markers to delay brain aging and prevent dementia.
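For orientation, here is a minimal sketch (ours, not the authors' code) of the two core steps the abstract describes: deriving brain-PAD from a brain-age model's predictions and relating an immune marker to dementia risk with Cox regression. All column names are hypothetical.

```python
# Minimal sketch, not the study's pipeline: brain-PAD + Cox regression.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical cohort table

# Brain-predicted age difference: model-predicted age minus chronological age.
df["brain_pad"] = df["predicted_age"] - df["chronological_age"]

# Cox proportional hazards: dementia risk vs. the neutrophil-to-lymphocyte
# ratio (NLR), adjusted for age and sex.
cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "dementia_event", "nlr", "age", "sex"]],
    duration_col="follow_up_years",
    event_col="dementia_event",
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```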

AI-based CT assessment of sarcopenia in borderline resectable pancreatic cancer: A narrative review of clinical and technical perspectives.

Gehin W, Lambert A, Bibault JE

pubmed · Jun 25, 2025
Sarcopenia, defined as the progressive loss of skeletal muscle mass and function, has been associated with poor prognosis in patients with pancreatic cancer, particularly those with borderline resectable pancreatic cancer (BRPC). Although body composition can be extracted from routine CT imaging, sarcopenia assessment remains underused in clinical practice. Recent advances in artificial intelligence (AI) offer the potential to automate and standardize this process, but their clinical translation remains limited. This narrative review aims to critically evaluate (1) the clinical impact of CT-defined sarcopenia in BRPC, and (2) the performance and maturity of AI-based methods for automated muscle and fat segmentation on CT images. A dual-axis literature search was conducted to identify clinical studies assessing the prognostic role of sarcopenia in BRPC and technical studies developing AI-based segmentation models for body composition analysis. Structured data extraction was applied to 13 clinical and 71 technical studies. A PRISMA-inspired flow diagram was included to ensure methodological transparency. Sarcopenia was consistently associated with worse survival and treatment tolerance in BRPC, yet clinical definitions and cut-offs varied widely. AI models, mostly 2D U-Nets trained on L3-level CT slices, achieved high segmentation accuracy (mean Dice similarity coefficient (DSC) > 0.93), but external validation and standardization were often lacking. CT-based AI assessment of sarcopenia holds promise for improving patient stratification in BRPC. However, its clinical adoption will require standardization, integration into decision-support frameworks, and prospective validation across diverse populations.
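The headline segmentation metric here, the Dice similarity coefficient, is straightforward to compute; a minimal sketch on binary masks:

```python
# Dice similarity coefficient (DSC) between a predicted and a reference mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Two hypothetical L3-level muscle masks, offset by 10 rows
a = np.zeros((512, 512), dtype=bool); a[100:200, 100:300] = True
b = np.zeros((512, 512), dtype=bool); b[110:210, 100:300] = True
print(f"DSC = {dice(a, b):.3f}")
```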

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

pubmed · Jun 25, 2025
The study aims to use interpretable machine learning models to predict the risk of myasthenia gravis (MG) onset in thymoma patients and to investigate the intrinsic correlations and causal relationships between the two conditions. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was split into a training set (n = 134) and a test set (n = 38) to develop and validate risk prediction models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (Least Absolute Shrinkage and Selection Operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were then implemented for model construction and comparative analysis. We used SHAP (SHapley Additive exPlanations) to interpret the model and employed a doubly robust learner to perform a potential causal analysis between thymoma and MG. All six models demonstrated satisfactory predictive capability, with the support vector machine (SVM) model exhibiting superior performance on the test cohort. It achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming the other models, including logistic regression and a multilayer perceptron (MLP). The model's predictive performance substantiates the strong correlation between thymoma and MG. Additionally, our analysis revealed a significant causal relationship between them: high-risk tumors elevated the risk of MG by an average treatment effect (ATE) of 9.2%. This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG than those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, the correlation and causal analyses unveiled pathways connecting tumor characteristics to MG risk, with a notably higher incidence of MG observed in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and support a shift in medical practice from passive treatment to proactive intervention.
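As an illustration of the modeling pipeline described (LASSO feature selection feeding an SVM), here is a minimal scikit-learn sketch on placeholder data; the feature matrix, labels, and the choice of LassoCV for selection are our assumptions, not the authors' code:

```python
# Minimal sketch: LASSO-based feature selection + SVM classifier.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(172, 120))       # placeholder radiomic/clinical features
y = rng.integers(0, 2, size=172)      # placeholder MG labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=38, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    # keep the 12 features with the largest LASSO coefficients,
    # mirroring the "12 optimal imaging features" in the abstract
    SelectFromModel(LassoCV(cv=5), max_features=12, threshold=-np.inf),
    SVC(kernel="rbf", probability=True),
)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

SHAP values and the doubly robust causal estimate would be computed on top of a fitted model like this one.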

Weighted Mean Frequencies: a handcrafted Fourier feature for 4D Flow MRI segmentation

Simon Perrin, Sébastien Levilly, Huajun Sun, Harold Mouchère, Jean-Michel Serfaty

arxiv preprint · Jun 25, 2025
In recent decades, 4D Flow MRI has enabled the quantification of velocity fields within a volume of interest and along the cardiac cycle. However, the limited resolution and the presence of noise in these images are significant issues. As recent studies indicate, biomarkers such as wall shear stress are particularly impacted by the poor resolution of vessel segmentation. Phase Contrast Magnetic Resonance Angiography (PC-MRA) is the state-of-the-art method to facilitate segmentation. The objective of this work is to introduce a new handcrafted feature that provides a novel visualisation of 4D Flow MRI images and is useful for the segmentation task. This feature, termed Weighted Mean Frequencies (WMF), reveals the three-dimensional region in which a voxel has been passed by pulsatile flow; it is representative of the hull of all pulsatile velocity voxels. The value of the feature is illustrated by two experiments, in which 4D Flow MRI images were segmented using optimal thresholding and deep learning methods. The results demonstrate a substantial enhancement in terms of IoU and Dice, with respective increases of 0.12 and 0.13 over the PC-MRA feature in the deep learning task. This feature has the potential to yield valuable insights that could inform future segmentation processes in other vascular regions, such as the heart or the brain.
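The paper defines WMF precisely; as a rough intuition, a magnitude-weighted mean of temporal frequencies per voxel can be sketched as follows. The exact weighting below is our assumption, not the authors' formula:

```python
# Rough sketch of a weighted-mean-frequency map over a 4D flow volume.
import numpy as np

def weighted_mean_frequency(vel_mag: np.ndarray, dt: float) -> np.ndarray:
    """vel_mag: (T, X, Y, Z) velocity magnitude across the cardiac cycle."""
    spectrum = np.abs(np.fft.rfft(vel_mag, axis=0))  # per-voxel temporal spectrum
    freqs = np.fft.rfftfreq(vel_mag.shape[0], d=dt)
    weights = spectrum[1:]                           # drop the DC component
    f = freqs[1:, None, None, None]
    return (weights * f).sum(axis=0) / (weights.sum(axis=0) + 1e-12)

# Example: 30 cardiac phases of a 64^3 volume of random data
wmf = weighted_mean_frequency(np.random.rand(30, 64, 64, 64), dt=0.033)
```

Voxels traversed by pulsatile flow carry spectral energy at the cardiac frequency and its harmonics, which is what such a map highlights.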

AI-assisted radiographic analysis in detecting alveolar bone-loss severity and patterns

Chathura Wimalasiri, Piumal Rathnayake, Shamod Wijerathne, Sumudu Rasnayaka, Dhanushka Leuke Bandara, Roshan Ragel, Vajira Thambawita, Isuru Nawinne

arxiv preprint · Jun 25, 2025
Periodontitis, a chronic inflammatory disease causing alveolar bone loss, significantly affects oral health and quality of life. Accurate assessment of bone loss severity and pattern is critical for diagnosis and treatment planning. In this study, we propose a novel AI-based deep learning framework to automatically detect and quantify alveolar bone loss and its patterns using intraoral periapical (IOPA) radiographs. Our method combines YOLOv8 for tooth detection with Keypoint R-CNN models that identify anatomical landmarks, enabling precise calculation of bone loss severity. Additionally, YOLOv8x-seg models segment bone levels and tooth masks to determine bone loss patterns (horizontal vs. angular) via geometric analysis. Evaluated on a large, expertly annotated dataset of 1000 radiographs, our approach achieved high accuracy in quantifying bone loss severity (intra-class correlation coefficient up to 0.80) and classifying bone loss patterns (accuracy 87%). This automated system offers a rapid, objective, and reproducible tool for periodontal assessment, reducing reliance on subjective manual evaluation. By integrating AI into dental radiographic analysis, our framework has the potential to improve early diagnosis and personalized treatment planning for periodontitis, ultimately enhancing patient care and clinical outcomes.
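The severity computation such a landmark pipeline enables is a simple geometric ratio; a sketch, with landmark names (CEJ, bone crest, root apex) that are our assumption about the detected keypoints:

```python
# Radiographic bone loss as CEJ-to-crest distance over CEJ-to-apex root length.
import numpy as np

def bone_loss_percent(cej: np.ndarray, crest: np.ndarray, apex: np.ndarray) -> float:
    """Inputs are (x, y) pixel coordinates from a keypoint detector."""
    root_length = np.linalg.norm(apex - cej)
    loss = np.linalg.norm(crest - cej)
    return 100.0 * loss / root_length

# Hypothetical detections on one tooth
print(bone_loss_percent(np.array([120.0, 80.0]),
                        np.array([120.0, 110.0]),
                        np.array([122.0, 230.0])))  # ~20% bone loss
```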

General Methods Make Great Domain-specific Foundation Models: A Case-study on Fetal Ultrasound

Jakob Ambsdorf, Asbjørn Munk, Sebastian Llambias, Anders Nymark Christensen, Kamil Mikolaj, Randall Balestriero, Martin Tolsgaard, Aasa Feragen, Mads Nielsen

arxiv preprint · Jun 24, 2025
With access to large-scale, unlabeled medical datasets, researchers face two questions: Should they pretrain a custom foundation model on this medical data, or use transfer learning from an existing generalist model? And, if a custom model is pretrained, are novel methods required? In this paper we explore these questions through a case study in which we train a foundation model on a large regional fetal ultrasound dataset of 2M images. By selecting the well-established DINOv2 method for pretraining, we achieve state-of-the-art results on three fetal ultrasound datasets, covering data from different countries, classification, segmentation, and few-shot tasks. We compare against a series of models pretrained on natural images, ultrasound images, and supervised baselines. Our results yield two key insights: (i) Pretraining on custom data is worth it, even if smaller models are trained on less data, as scaling behavior from natural-image pretraining does not translate to ultrasound performance. (ii) Well-tuned methods from computer vision make it feasible to train custom foundation models for a given medical domain, requiring no hyperparameter tuning and little methodological adaptation. Given these findings, we argue that a bias towards methodological innovation should be avoided when developing domain-specific foundation models under common computational resource constraints.
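In the same transfer-learning spirit, a frozen public DINOv2 backbone with a linear probe can be sketched as below; note this pulls the natural-image weights, not the authors' ultrasound-pretrained model, and the probe task is hypothetical:

```python
# Minimal sketch: frozen DINOv2 features + linear probe.
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained encoder

probe = nn.Linear(384, 2)  # ViT-S/14 embedding dim -> 2 hypothetical classes

x = torch.randn(8, 3, 224, 224)  # batch of resized ultrasound frames
with torch.no_grad():
    feats = backbone(x)          # (8, 384) CLS embeddings
logits = probe(feats)
```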

Diagnostic Performance of Universal versus Stratified Computer-Aided Detection Thresholds for Chest X-Ray-Based Tuberculosis Screening

Sung, J., Kitonsa, P. J., Nalutaaya, A., Isooba, D., Birabwa, S., Ndyabayunga, K., Okura, R., Magezi, J., Nantale, D., Mugabi, I., Nakiiza, V., Dowdy, D. W., Katamba, A., Kendall, E. A.

medrxiv preprint · Jun 24, 2025
Background: Computer-aided detection (CAD) software analyzes chest X-rays for features suggestive of tuberculosis (TB) and provides a numeric abnormality score. However, estimates of CAD accuracy for TB screening are hindered by the lack of confirmatory data among people with lower CAD scores, including those without symptoms. Additionally, the appropriate CAD score thresholds for obtaining further testing may vary according to population and client characteristics.
Methods: We screened for TB in Ugandan individuals aged ≥15 years using portable chest X-rays with CAD (qXR v3). Participants were offered screening regardless of their symptoms. Those with X-ray scores above a threshold of 0.1 (range, 0-1) were asked to provide sputum for Xpert Ultra testing. We estimated the diagnostic accuracy of CAD for detecting Xpert-positive TB when using the same threshold for all individuals (under different assumptions about TB prevalence among people with X-ray scores <0.1), and compared this estimate to age- and/or sex-stratified approaches.
Findings: Of 52,835 participants screened for TB using CAD, 8,949 (16.9%) had X-ray scores ≥0.1. Of 7,219 participants with valid Xpert Ultra results, 382 (5.3%) were Xpert-positive, including 81 with trace results. Assuming 0.1% of participants with X-ray scores <0.1 would have been Xpert-positive if tested, qXR had an estimated AUC of 0.920 (95% confidence interval 0.898-0.941) for Xpert-positive TB. Stratifying CAD thresholds by age and sex improved accuracy; for example, at 96.1% specificity, estimated sensitivity was 75.0% for a universal threshold (of ≥0.65) versus 76.9% for thresholds stratified by age and sex (p=0.046).
Interpretation: The accuracy of CAD for TB screening among all screening participants, including those without symptoms or abnormal chest X-rays, is higher than previously estimated. Stratifying CAD thresholds based on client characteristics such as age and sex could further improve accuracy, enabling a more effective and personalized approach to TB screening.
Funding: National Institutes of Health.
Research in context
Evidence before this study: The World Health Organization (WHO) has endorsed computer-aided detection (CAD) as a screening tool for tuberculosis (TB), but the appropriate CAD score that triggers further diagnostic evaluation varies by population. The WHO recommends determining the appropriate CAD threshold for specific settings and populations, and considering unique thresholds for specific groups, including older age groups, among whom CAD may perform poorly. We performed a PubMed literature search for articles published until September 9, 2024, using the search terms "tuberculosis" AND ("computer-aided detection" OR "computer aided detection" OR "CAD" OR "computer-aided reading" OR "computer aided reading" OR "artificial intelligence"), which returned 704 articles. Among them, we identified studies that evaluated the performance of CAD for TB screening and additionally reviewed relevant references. Most prior studies reported areas under the curve (AUC) ranging from 0.76 to 0.88 but limited their evaluations to individuals with symptoms or abnormal chest X-rays. Some prior studies identified subgroups (including older individuals and people with prior TB) among whom CAD had lower-than-average AUCs, and discussed how the prevalence of such characteristics could affect the optimal value of a population-wide CAD threshold; however, none estimated the accuracy that could be gained by adjusting CAD thresholds between individuals based on personal characteristics.
Added value of this study: In this study, all consenting individuals in a high-prevalence setting were offered chest X-ray screening, regardless of symptoms, if they were ≥15 years old, not pregnant, and not on TB treatment. A very low CAD score cutoff (qXR v3 score of 0.1 on a 0-1 scale) was used to select individuals for confirmatory sputum molecular testing, enabling the detection of radiographically mild forms of TB and facilitating comparisons of diagnostic accuracy at different CAD thresholds. With this more expansive, symptom-neutral evaluation of CAD, we estimated an AUC of 0.920, and we found that the qXR v3 threshold needed to decrease to under 0.1 to meet the WHO target product profile goal of ≥90% sensitivity and ≥70% specificity. Compared to using the same threshold for all participants, adjusting CAD thresholds by age and sex strata resulted in a 1-2% increase in sensitivity without affecting specificity.
Implications of all the available evidence: To obtain high sensitivity with CAD screening in high-prevalence settings, low score thresholds may be needed. However, countries with a high burden of TB often do not have sufficient resources to test all individuals above a low threshold. In such settings, adjusting CAD thresholds based on individual characteristics associated with TB prevalence (e.g., male sex) and those associated with false-positive X-ray results (e.g., old age) can potentially improve the efficiency of TB screening programs.
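The stratified-threshold idea reduces to applying a per-subgroup cutoff before computing sensitivity and specificity; a sketch on synthetic data (the strata and cutoff values are illustrative, not the study's fitted thresholds):

```python
# Universal vs. age/sex-stratified CAD score thresholds.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "score": rng.random(1000),            # CAD abnormality score, 0-1
    "age":   rng.integers(15, 80, 1000),
    "sex":   rng.choice(["M", "F"], 1000),
    "tb":    rng.random(1000) < 0.05,     # Xpert-positive TB
})

def sens_spec(referred, truth):
    return ((referred & truth).sum() / truth.sum(),
            (~referred & ~truth).sum() / (~truth).sum())

print(sens_spec(df["score"] >= 0.65, df["tb"]))   # universal threshold

# Stratified: lower cutoff for men (higher TB prevalence), higher cutoff for
# older participants (more false-positive X-rays); illustrative values only.
cut = np.where(df["sex"].eq("M"), 0.55, 0.70)
cut = np.where(df["age"] >= 60, cut + 0.10, cut)
print(sens_spec(df["score"] >= cut, df["tb"]))    # stratified thresholds
```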

Multimodal Deep Learning Based on Ultrasound Images and Clinical Data for Better Ovarian Cancer Diagnosis.

Su C, Miao K, Zhang L, Yu X, Guo Z, Li D, Xu M, Zhang Q, Dong X

pubmed · Jun 24, 2025
This study aimed to develop and validate a multimodal deep learning model that leverages 2D grayscale ultrasound (US) images alongside readily available clinical data to improve diagnostic performance for ovarian cancer (OC). A retrospective analysis was conducted of 1899 patients who underwent preoperative US examinations and subsequent surgery for adnexal masses between 2019 and 2024. A multimodal deep learning model was constructed for OC diagnosis and for extracting US morphological features from the images. The model's performance was evaluated using receiver operating characteristic (ROC) curves, accuracy, and F1 score. The multimodal model outperformed the image-only model, achieving areas under the curve (AUCs) of 0.9393 (95% CI 0.9139-0.9648) and 0.9317 (95% CI 0.9062-0.9573) in the internal and external test sets, respectively. The model significantly improved radiologists' AUCs for OC diagnosis and enhanced inter-reader agreement. For US morphological feature extraction, the model demonstrated robust performance, attaining accuracies of 86.34% and 85.62% in the internal and external test sets, respectively. Multimodal deep learning has the potential to enhance the diagnostic accuracy and consistency of radiologists in identifying OC, and the model's effective feature extraction from ultrasound images underscores the capability of multimodal deep learning to automate the generation of structured ultrasound reports.
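The fusion architecture the abstract implies (a CNN branch for grayscale US images plus an MLP branch for tabular clinical data, concatenated before the classifier) can be sketched in PyTorch; every architectural detail below is our assumption:

```python
# Minimal sketch of an image + clinical-data fusion classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalOCNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 2):
        super().__init__()
        self.cnn = resnet18(weights=None)
        self.cnn.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)  # grayscale US input
        self.cnn.fc = nn.Identity()                             # expose 512-d features
        self.tab = nn.Sequential(nn.Linear(n_clinical, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, n_classes)

    def forward(self, image, clinical):
        return self.head(torch.cat([self.cnn(image), self.tab(clinical)], dim=1))

model = MultimodalOCNet(n_clinical=12)
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 12))  # (4, 2)
```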

AI-based large-scale screening of gastric cancer from noncontrast CT imaging.

Hu C, Xia Y, Zheng Z, Cao M, Zheng G, Chen S, Sun J, Chen W, Zheng Q, Pan S, Zhang Y, Chen J, Yu P, Xu J, Xu J, Qiu Z, Lin T, Yun B, Yao J, Guo W, Gao C, Kong X, Chen K, Wen Z, Zhu G, Qiao J, Pan Y, Li H, Gong X, Ye Z, Ao W, Zhang L, Yan X, Tong Y, Yang X, Zheng X, Fan S, Cao J, Yan C, Xie K, Zhang S, Wang Y, Zheng L, Wu Y, Ge Z, Tian X, Zhang X, Wang Y, Zhang R, Wei Y, Zhu W, Zhang J, Qiu H, Su M, Shi L, Xu Z, Zhang L, Cheng X

pubmed · Jun 24, 2025
Early detection through screening is critical for reducing gastric cancer (GC) mortality. However, in most high-prevalence regions, large-scale screening remains challenging due to limited resources, low compliance, and the suboptimal detection rate of upper endoscopic screening. There is therefore an urgent need for more efficient screening protocols. Noncontrast computed tomography (CT), routinely performed for clinical purposes, presents a promising avenue for large-scale designed or opportunistic screening. Here we developed the Gastric Cancer Risk Assessment Procedure with Artificial Intelligence (GRAPE), leveraging noncontrast CT and deep learning to identify GC. Our study comprised three phases. First, we developed GRAPE using a cohort from 2 centers in China (3,470 GC and 3,250 non-GC cases) and validated its performance on an internal validation set (1,298 cases, area under the curve = 0.970) and an independent external cohort from 16 centers (18,160 cases, area under the curve = 0.927). Subgroup analysis showed that the detection rate of GRAPE increased with advancing T stage but was independent of tumor location. Next, we compared GRAPE's interpretations with those of radiologists and assessed its potential to assist diagnostic interpretation. Reader studies demonstrated that GRAPE significantly outperformed radiologists, improving sensitivity by 21.8% and specificity by 14.0%, particularly in early-stage GC. Finally, we evaluated GRAPE in real-world opportunistic screening using 78,593 consecutive noncontrast CT scans from a comprehensive cancer center and 2 independent regional hospitals. GRAPE identified persons at high risk, with GC detection rates of 24.5% and 17.7% in the 2 regional hospitals and with 23.2% and 26.8% of detected cases at T1/T2 stage. Additionally, GRAPE detected GC cases that radiologists had initially missed, enabling earlier diagnosis during follow-up for other diseases. In conclusion, GRAPE demonstrates strong potential for large-scale GC screening, offering a feasible and effective approach for early detection. ClinicalTrials.gov registration: NCT06614179.
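The classification backbone for this kind of volumetric screening is typically a 3D CNN over the CT volume; a generic sketch (our illustration, not the published GRAPE architecture):

```python
# Generic 3D CNN classifier for noncontrast CT volumes.
import torch
import torch.nn as nn

class CT3DClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (B, 1, D, H, W) CT volume
        return self.head(self.features(x).flatten(1))

model = CT3DClassifier()
logits = model(torch.randn(2, 1, 64, 128, 128))  # two downsampled volumes
```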

SAM2-SGP: Enhancing SAM2 for Medical Image Segmentation via Support-Set Guided Prompting

Yang Xing, Jiong Wu, Yuheng Bu, Kuang Gong

arxiv preprint · Jun 24, 2025
Although new vision foundation models such as Segment Anything Model 2 (SAM2) have significantly enhanced zero-shot image segmentation capabilities, reliance on human-provided prompts poses significant challenges in adapting SAM2 to medical image segmentation tasks. Moreover, SAM2's performance in medical image segmentation was limited by the domain shift issue, since it was originally trained on natural images and videos. To address these challenges, we proposed SAM2 with support-set guided prompting (SAM2-SGP), a framework that eliminated the need for manual prompts. The proposed model leveraged the memory mechanism of SAM2 to generate pseudo-masks using image-mask pairs from a support set via a Pseudo-mask Generation (PMG) module. We further introduced a novel Pseudo-mask Attention (PMA) module, which used these pseudo-masks to automatically generate bounding boxes and enhance localized feature extraction by guiding attention to relevant areas. Furthermore, a low-rank adaptation (LoRA) strategy was adopted to mitigate the domain shift issue. The proposed framework was evaluated on both 2D and 3D datasets across multiple medical imaging modalities, including fundus photography, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. The results demonstrated a significant performance improvement over state-of-the-art models, such as nnUNet and SwinUNet, as well as foundation models, such as SAM2 and MedSAM2, underscoring the effectiveness of the proposed approach. Our code is publicly available at https://github.com/astlian9/SAM_Support.
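One concrete step in this pipeline, turning a support-set-derived pseudo-mask into a bounding-box prompt, is simple to sketch; the helper below is our illustration, and the authors' actual code lives at the repository linked above:

```python
# Derive a bounding-box prompt from a binary pseudo-mask.
import numpy as np

def mask_to_box(mask: np.ndarray, margin: int = 5) -> tuple[int, int, int, int]:
    """Return (x_min, y_min, x_max, y_max) around a binary mask, with padding."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(int(xs.min()) - margin, 0), max(int(ys.min()) - margin, 0),
            min(int(xs.max()) + margin, w - 1), min(int(ys.max()) + margin, h - 1))

pseudo = np.zeros((256, 256), dtype=bool)
pseudo[60:120, 80:170] = True
print(mask_to_box(pseudo))  # (75, 55, 174, 124): box prompt to feed into SAM2
```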