
AGE-US: automated gestational age estimation based on fetal ultrasound images

César Díaz-Parga, Marta Nuñez-Garcia, Maria J. Carreira, Gabriel Bernardino, Nicolás Vila-Blanco

arXiv preprint · Jun 19 2025
Being born small carries significant health risks, including increased neonatal mortality and a higher likelihood of future cardiac disease. Accurate estimation of gestational age is critical for monitoring fetal growth, but traditional methods, such as estimation based on the last menstrual period, are difficult to obtain in some situations. While ultrasound-based approaches offer greater reliability, they rely on manual measurements that introduce variability. This study presents an interpretable deep learning-based method for automated gestational age estimation, leveraging a novel segmentation architecture and distance maps to overcome dataset limitations and the scarcity of segmentation masks. Our approach achieves performance comparable to state-of-the-art models while reducing complexity, making it particularly suitable for resource-constrained settings and for scenarios with limited annotated data. Furthermore, our results demonstrate that distance maps are particularly well suited to estimating femur endpoints.
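
As a rough illustration of why distance maps help with endpoint localization, the minimal sketch below derives a Euclidean distance map from a binary femur mask and picks the two most distant foreground pixels as candidate endpoints; the mask, the brute-force endpoint heuristic, and the toy bar-shaped "femur" are all assumptions for illustration, not the paper's learned pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def femur_endpoints_from_mask(mask: np.ndarray):
    """Illustrative sketch: derive a distance map from a binary femur mask
    and take the two foreground pixels farthest apart as candidate
    endpoints. This stands in for the paper's learned distance-map
    regression; it does not reproduce it."""
    # Distance of every foreground pixel to the background (the "distance map").
    dist_map = distance_transform_edt(mask)

    # Candidate endpoints: the pair of foreground pixels with maximal
    # Euclidean separation (brute force is fine for a single thin structure).
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return dist_map, tuple(pts[i]), tuple(pts[j])

# Toy example: a horizontal bar standing in for a femur segmentation.
mask = np.zeros((64, 128), dtype=bool)
mask[30:34, 10:110] = True
dist_map, p1, p2 = femur_endpoints_from_mask(mask)
print(p1, p2, dist_map.max())
```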

Prompt-based Dynamic Token Pruning to Guide Transformer Attention in Efficient Segmentation

Pallabi Dutta, Anubhab Maity, Sushmita Mitra

arXiv preprint · Jun 19 2025
The high computational demand of Vision Transformers (ViTs), which process a huge number of tokens, often constrains their practical application in analyzing medical images. This research proposes an adaptive prompt-guided pruning method that selectively reduces the processing of irrelevant tokens in the segmentation pipeline. A prompt-based spatial prior helps rank the tokens according to their relevance. Tokens with low relevance scores are down-weighted, ensuring that only the relevant ones are propagated for processing across subsequent stages. This data-driven pruning strategy facilitates end-to-end training, maintains gradient flow, and improves segmentation accuracy by focusing computational resources on essential regions. The proposed framework is integrated with several state-of-the-art models to eliminate irrelevant tokens, enhancing computational efficiency while preserving segmentation accuracy. Experimental results show a reduction of approximately 35-55% of tokens, lowering computational costs relative to the baselines. By making medical image processing more cost-effective, the framework expands applicability to resource-constrained environments and facilitates real-time diagnosis.
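
The core pruning idea can be sketched in a few lines of PyTorch (a hypothetical `prune_tokens` helper, not the authors' module): tokens are scored against a prompt embedding, the top-k by relevance are kept, and the kept tokens are soft-weighted so the selection remains trainable end to end.

```python
import torch

def prune_tokens(tokens, prompt, keep_ratio=0.5):
    """Minimal sketch of prompt-guided token pruning: score each token by
    similarity to a prompt embedding, keep the top-k tokens, and down-weight
    them by their relevance so the operation stays differentiable.

    tokens: (B, N, D) patch tokens, prompt: (B, D) prompt/prior embedding.
    """
    scores = torch.einsum("bnd,bd->bn", tokens, prompt) / tokens.shape[-1] ** 0.5
    weights = scores.softmax(dim=-1)                      # relevance per token
    k = max(1, int(tokens.shape[1] * keep_ratio))
    topk = weights.topk(k, dim=-1)                        # indices of kept tokens
    idx = topk.indices.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    kept = torch.gather(tokens, 1, idx)                   # (B, k, D)
    kept = kept * topk.values.unsqueeze(-1)               # soft down-weighting
    return kept, topk.indices

# Toy usage: 196 tokens reduced to ~45% before the next transformer stage.
tok = torch.randn(2, 196, 64)
pr = torch.randn(2, 64)
kept, idx = prune_tokens(tok, pr, keep_ratio=0.45)
print(kept.shape)  # torch.Size([2, 88, 64])
```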

Cardiovascular risk in childhood and young adulthood is associated with the hemodynamic response function in midlife: The Bogalusa Heart Study.

Chuang KC, Naseri M, Ramakrishnapillai S, Madden K, Amant JS, McKlveen K, Gwizdala K, Dhullipudi R, Bazzano L, Carmichael O

PubMed · Jun 18 2025
In functional MRI, the hemodynamic response function (HRF) describes how neural events are translated into the blood oxygenation response detected through imaging. The HRF has the potential to quantify neurovascular mechanisms by which cardiovascular risks modify brain health, but relationships among HRF characteristics, brain health, and cardiovascular modifiers of brain health have not been well studied to date. One hundred and thirty-seven middle-aged participants (mean age 53.6±4.7 years; 62% female; 78% White American and 22% African American) from the Bogalusa Heart Study exploratory analysis completed clinical evaluations from childhood to midlife and an adaptive Stroop task during fMRI in midlife. The HRF of each participant within seventeen brain regions of interest (ROIs), previously identified as activated by this task, was calculated using a convolutional neural network approach. Faster and more efficient neurovascular functioning was characterized in terms of five HRF characteristics: faster time to peak (TTP), shorter full width at half maximum (FWHM), smaller peak magnitude (PM), smaller trough magnitude (TM), and smaller area under the HRF curve (AUHRF). Composite HRF summary characteristics over all ROIs were calculated for multivariable and simple linear regression analyses. In multivariable models, faster and more efficient HRF characteristics were found in non-smokers compared to smokers (AUHRF, p = 0.029). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM, TM, and AUHRF, p = 0.030, 0.042, and 0.032) and lower cerebral amyloid burden (FWHM, p = 0.027) in midlife, as well as a greater response rate on the Stroop task (FWHM, p = 0.042) in midlife. In simple linear regression models, faster and more efficient HRF characteristics were found in women compared to men (TM, p = 0.019), in White American participants compared to African American participants (AUHRF, p = 0.044), and in non-smokers compared to smokers (TTP and AUHRF, p = 0.019 and 0.010). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM and TM, p = 0.019 and 0.029) and lower BMI (FWHM and AUHRF, p = 0.025 and 0.017) in childhood and adolescence; with lower BMI (TTP, p = 0.049), cerebral amyloid burden (FWHM, p = 0.002), and white matter hyperintensity burden (FWHM, p = 0.046) in midlife; and with greater accuracy on the Stroop task (AUHRF, p = 0.037) in midlife. In a diverse middle-aged community sample, HRF-based indicators of faster and more efficient neurovascular functioning were associated with better brain health and cognitive function, as well as better lifespan cardiovascular health.
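
For readers unfamiliar with the five HRF summary characteristics, the sketch below extracts them from a sampled HRF curve with plain NumPy; the toy curve and the simple half-maximum and post-peak-trough heuristics are assumptions for illustration, not the study's CNN-based HRF estimation.

```python
import numpy as np

def hrf_characteristics(t, hrf):
    """Sketch of extracting the summary characteristics named in the
    abstract (TTP, FWHM, PM, TM, AUHRF) from a sampled HRF curve."""
    i_peak = int(np.argmax(hrf))
    ttp = t[i_peak]                                  # time to peak
    pm = hrf[i_peak]                                 # peak magnitude
    above = np.where(hrf >= pm / 2.0)[0]             # samples above half maximum
    fwhm = t[above[-1]] - t[above[0]]                # full width at half maximum
    tm = abs(hrf[i_peak:].min())                     # trough magnitude after the peak
    auhrf = np.abs(hrf).sum() * (t[1] - t[0])        # area under the HRF curve
    return dict(TTP=ttp, PM=pm, FWHM=fwhm, TM=tm, AUHRF=auhrf)

# Toy double-gamma-like curve sampled at TR = 1 s.
t = np.arange(0, 30, 1.0)
hrf = np.exp(-(t - 5) ** 2 / 8) - 0.2 * np.exp(-(t - 15) ** 2 / 18)
print(hrf_characteristics(t, hrf))
```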

Can CTA-based Machine Learning Identify Patients for Whom Successful Endovascular Stroke Therapy is Insufficient?

Jeevarajan JA, Dong Y, Ballekere A, Marioni SS, Niktabe A, Abdelkhaleq R, Sheth SA, Giancardo L

PubMed · Jun 18 2025
Despite advances in endovascular stroke therapy (EST) devices and techniques, many patients are left with substantial disability, even if the final infarct volumes (FIVs) remain small. Here, we evaluate the performance of a machine learning (ML) approach using pre-treatment CT angiography (CTA) to identify the cohort of patients that may benefit from additional interventions. We identified consecutive large vessel occlusion (LVO) acute ischemic stroke (AIS) subjects who underwent EST with successful reperfusion in a multicenter prospective registry cohort. We included only subjects with FIV <30 mL and recorded 90-day outcome (modified Rankin scale, mRS). A deep learning model was pre-trained and then fine-tuned to predict 90-day mRS 0-2 using pre-treatment CTA images (DSN-CTA model). The primary outcome was the predictive performance of the DSN-CTA model compared to a logistic regression model with clinical variables, measured by the area under the receiver operating characteristic curve (AUROC). The DSN-CTA model was pre-trained on 1,542 subjects and then fine-tuned and cross-validated with 48 subjects, all of whom underwent EST with TICI 2b-3 reperfusion. Of this cohort, 56.2% of subjects had 90-day mRS 3-6 despite successful EST and FIV <30 mL. The DSN-CTA model showed significantly better performance than a model with clinical variables alone when predicting good 90-day mRS (AUROC 0.81 vs 0.492, p = 0.006). The CTA-based machine learning model more reliably predicted unexpected poor functional outcome after successful EST and small FIV in patients with LVO AIS compared to standard clinical variables. ML models may identify a priori the patients in whom EST-based LVO reperfusion alone is insufficient to improve clinical outcomes. AIS = acute ischemic stroke; AUROC = area under the receiver operating characteristic curve; DSN-CTA = DeepSymNet-v3 model; EST = endovascular stroke therapy; FIV = final infarct volume; LVO = large vessel occlusion; ML = machine learning.
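
A minimal sketch of the primary comparison, imaging-model probabilities versus a clinical-variable logistic regression scored by AUROC under cross-validation, is shown below with synthetic stand-in data; the variable names, cohort labels, and toy numbers are assumptions, not the study's data or the DSN-CTA architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-ins: y = good 90-day outcome (mRS 0-2), X_clin = clinical
# variables, p_img = probabilities from a hypothetical imaging model.
n = 48                                              # fine-tuning cohort size from the abstract
y = rng.integers(0, 2, size=n)
X_clin = rng.normal(size=(n, 5)) + 0.3 * y[:, None]
p_img = np.clip(0.5 + 0.4 * (y - 0.5) + rng.normal(0, 0.25, n), 0, 1)

# Cross-validated probabilities for the clinical logistic regression baseline.
p_clin = cross_val_predict(LogisticRegression(max_iter=1000), X_clin, y,
                           cv=5, method="predict_proba")[:, 1]

print("clinical AUROC:", round(roc_auc_score(y, p_clin), 3))
print("imaging  AUROC:", round(roc_auc_score(y, p_img), 3))
```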

Hierarchical refinement with adaptive deformation cascaded for multi-scale medical image registration.

Hussain N, Yan Z, Cao W, Anwar M

PubMed · Jun 18 2025
Deformable image registration is a fundamental task in medical image analysis and is crucial in enabling early detection and accurate disease diagnosis. Although transformer-based architectures have demonstrated strong potential through attention mechanisms, challenges remain in ineffective feature extraction and spatial alignment, particularly within hierarchical attention frameworks. To address these limitations, we propose a novel registration framework that integrates hierarchical feature encoding in the encoder and an adaptive cascaded refinement strategy in the decoder. The model employs hierarchical cross-attention between fixed and moving images at multiple scales, enabling more precise alignment and improved registration accuracy. The decoder incorporates the Adaptive Cascaded Module (ACM), which facilitates progressive refinement of the deformation field across multiple resolution levels. This approach captures both coarse global transformations and finer local variations, resulting in smooth and anatomically consistent alignment. Moreover, rather than relying solely on the final decoder output, our framework leverages intermediate representations at each stage of the network, enhancing the robustness and precision of the registration process. By integrating deformations across all scales, our method achieves superior accuracy and adaptability. Comprehensive experiments on two widely used 3D brain MRI datasets, OASIS and LPBA40, demonstrate that the proposed framework consistently outperforms existing state-of-the-art approaches in accuracy, robustness, and generalizability.
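
A compact 2D sketch of the cascaded-refinement idea (an assumed reading of the abstract, not the authors' ACM module) is given below: each stage predicts a residual displacement field on the currently warped moving image and adds it to the accumulated flow; the multi-resolution pyramid is collapsed to a single scale for brevity.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (B, C, H, W) with a dense displacement field flow (B, 2, H, W),
    given in pixel units (2D here for brevity; the paper uses 3D brain MRI)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    pos = base + flow
    gx = 2 * pos[:, 0] / (W - 1) - 1                           # normalise to [-1, 1]
    gy = 2 * pos[:, 1] / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)                       # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def cascaded_refine(moving, fixed, stages):
    """Each stage predicts a residual flow from the warped moving and fixed
    images; residuals accumulate into the final deformation field."""
    flow = torch.zeros(moving.shape[0], 2, *moving.shape[2:])
    for net in stages:                                         # coarse -> fine
        residual = net(torch.cat([warp(moving, flow), fixed], dim=1))
        flow = flow + residual
    return flow, warp(moving, flow)

# Toy usage: two stages, each a tiny conv layer predicting a 2-channel flow.
stages = [torch.nn.Conv2d(2, 2, 3, padding=1) for _ in range(2)]
moving, fixed = torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32)
flow, warped = cascaded_refine(moving, fixed, stages)
print(flow.shape, warped.shape)
```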

Applying a multi-task and multi-instance framework to predict axillary lymph node metastases in breast cancer.

Li Y, Chen Z, Ding Z, Mei D, Liu Z, Wang J, Tang K, Yi W, Xu Y, Liang Y, Cheng Y

PubMed · Jun 18 2025
Deep learning (DL) models have shown promise in predicting axillary lymph node (ALN) status. However, most existing DL models are classification-only and do not consider the practical application scenario of multi-view joint prediction. Here, we propose a Multi-Task Learning (MTL) and Multi-Instance Learning (MIL) framework that simulates the real-world clinical diagnostic scenario for ALN status prediction in breast cancer. Ultrasound images of the primary tumor and ALN (if available) regions were collected, each annotated with a segmentation label. The model was trained on a training cohort and tested on both internal and external test cohorts. The proposed two-stage DL framework, using the Transformer-based Segformer as the network backbone, was the top-performing model. It achieved an AUC of 0.832, a sensitivity of 0.815, and a specificity of 0.854 in the internal test cohort. In the external cohort, this model attained an AUC of 0.918, a sensitivity of 0.851, and a specificity of 0.957. The Class Activation Mapping method demonstrated that the DL model correctly identified the characteristic areas of metastasis within the primary tumor and ALN regions. This framework may serve as an effective second reader to assist clinicians in ALN status assessment.
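
The overall idea, attention-based multi-instance pooling over several ultrasound views plus an auxiliary segmentation loss, can be sketched as below; the layer sizes, loss weighting, and the tiny head itself are illustrative assumptions, not the paper's Segformer-based network.

```python
import torch
import torch.nn as nn

class MTLMILHead(nn.Module):
    """Sketch only: per-image features from several ultrasound views form a
    "bag"; attention-MIL pooling aggregates them for case-level ALN
    classification, while a per-image segmentation head supplies the
    auxiliary multi-task signal."""

    def __init__(self, feat_dim=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.cls = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                           # feats: (n_views, feat_dim) for one case
        a = torch.softmax(self.attn(feats), dim=0)      # attention over views
        bag = (a * feats).sum(dim=0)                    # MIL pooling
        return self.cls(bag)

def multitask_loss(cls_logits, cls_target, seg_logits, seg_target, w_seg=0.5):
    """Joint objective: case-level classification + image-level segmentation."""
    loss_cls = nn.functional.cross_entropy(cls_logits.unsqueeze(0), cls_target)
    loss_seg = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target)
    return loss_cls + w_seg * loss_seg

# Toy usage: one case with 3 views and 64x64 binary segmentation maps.
head = MTLMILHead()
logits = head(torch.randn(3, 256))
loss = multitask_loss(logits, torch.tensor([1]),
                      torch.randn(3, 1, 64, 64),
                      torch.randint(0, 2, (3, 1, 64, 64)).float())
print(loss.item())
```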

Generalist medical foundation model improves prostate cancer segmentation from multimodal MRI images.

Zhang Y, Ma X, Li M, Huang K, Zhu J, Wang M, Wang X, Wu M, Heng PA

PubMed · Jun 18 2025
Prostate cancer (PCa) is one of the most common types of cancer, seriously affecting adult male health. Accurate and automated PCa segmentation is essential for radiologists to confirm the location of cancer, evaluate its severity, and design appropriate treatments. This paper presents PCaSAM, a fully automated PCa segmentation model that feeds multi-modal MRI images into a foundation model to significantly improve performance. We collected multi-center datasets to conduct a comprehensive evaluation. The results showed that PCaSAM outperforms the generalist medical foundation model and other representative segmentation models, with average DSCs of 0.721 and 0.706 on the internal and external datasets, respectively. Furthermore, with the assistance of segmentation, PI-RADS scoring of PCa lesions improved significantly, with a substantial increase in average AUC of 8.3-8.9% on two external datasets. In addition, PCaSAM achieved superior efficiency, making it highly suitable for real-world deployment scenarios.
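
Since performance is reported as DSC on multi-modal input, here is a minimal sketch of both pieces; the specific modalities (T2w, ADC, DWI) and the channel-stacking scheme are assumptions typical of prostate MRI, not necessarily PCaSAM's exact input format.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) for binary masks, the metric
    reported in the abstract."""
    inter = np.logical_and(pred, gt).sum()
    return float(2 * inter / (pred.sum() + gt.sum() + eps))

# Assumed multi-modal input: co-registered T2w, ADC, and DWI volumes stacked
# along a channel axis before being fed to the segmentation model.
t2w, adc, dwi = (np.random.rand(24, 128, 128) for _ in range(3))
x = np.stack([t2w, adc, dwi], axis=0)        # (modalities, D, H, W)

pred = np.random.rand(24, 128, 128) > 0.5    # placeholder model output
gt = np.random.rand(24, 128, 128) > 0.5      # placeholder ground truth
print(x.shape, round(dice(pred, gt), 3))
```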

Automated Multi-grade Brain Tumor Classification Using Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network in MRI Images.

Thanya T, Jeslin T

PubMed · Jun 18 2025
Brain tumor classification from Magnetic Resonance Imaging (MRI) images is an important and rapidly evolving area of medical imaging and artificial intelligence. With advances in deep learning and machine learning, researchers and clinicians are leveraging these tools to build models that can reliably detect and classify brain tumors from MRI data. However, the task faces a number of challenges, including the intricacy of tumor types and grades, intensity variations in MRI data, and differences in tumor severity. This paper proposes a Multi-Grade Hierarchical Classification Network Model (MGHCN) for the hierarchical classification of tumor grades in MRI images. The model's distinctive feature lies in its ability to categorize tumors into multiple grades, thereby capturing the hierarchical nature of tumor severity. To address variations in intensity levels across different MRI samples, an Improved Adaptive Intensity Normalization (IAIN) pre-processing step is employed. This step standardizes intensity values, effectively mitigating the impact of intensity variations and ensuring more consistent analyses. The model utilizes the Dual-Tree Complex Wavelet Transform with Enhanced Trigonometric Features (DTCWT-ETF) for efficient feature extraction. DTCWT-ETF captures both spatial and frequency characteristics, allowing the model to distinguish between different tumor types more effectively. In the classification stage, the framework introduces the Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network (AHOHH-BiLSTM). This multi-grade classification model is designed with a comprehensive architecture, including distinct layers that enhance the learning process and adaptively refine parameters. The purpose of this study is to improve the precision of distinguishing different grades of tumors in MRI images. To evaluate the proposed MGHCN framework, a set of evaluation metrics is incorporated, including precision, recall, and the F1-score. The framework employs the BraTS Challenge 2021, Br35H, and BraTS Challenge 2023 datasets, a combination that ensures comprehensive training and evaluation. By utilizing these datasets along with the full set of evaluation metrics, the MGHCN framework aims to provide a more thorough and sophisticated assessment of brain tumor classification in MRI images.
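
As a small illustration of the evaluation stage only (toy labels, hypothetical grade names), per-grade precision, recall, and F1 can be computed as follows:

```python
from sklearn.metrics import classification_report

# Toy multi-grade labels; the real study evaluates on BraTS 2021/2023 and Br35H.
y_true = ["grade I", "grade II", "grade III", "grade II", "grade I", "grade III"]
y_pred = ["grade I", "grade II", "grade II",  "grade II", "grade I", "grade III"]
print(classification_report(y_true, y_pred, digits=3))
```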

Quality appraisal of radiomics-based studies on chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS).

Gitto S, Cuocolo R, Klontzas ME, Albano D, Messina C, Sconfienza LM

PubMed · Jun 18 2025
To assess the methodological quality of radiomics-based studies on bone chondrosarcoma using METhodological RadiomICs Score (METRICS) and Radiomics Quality Score (RQS). A literature search was conducted on EMBASE and PubMed databases for research papers published up to July 2024 and focused on radiomics in bone chondrosarcoma, with no restrictions regarding the study aim. Three readers independently evaluated the study quality using METRICS and RQS. Baseline study characteristics were extracted. Inter-reader reliability was calculated using intraclass correlation coefficient (ICC). Out of 68 identified papers, 18 were finally included in the analysis. Radiomics research was aimed at lesion classification (n = 15), outcome prediction (n = 2) or both (n = 1). Study design was retrospective in all papers. Most studies employed MRI (n = 12), CT (n = 3) or both (n = 1). METRICS and RQS adherence rates ranged between 37.3-94.8% and 2.8-44.4%, respectively. Excellent inter-reader reliability was found for both METRICS (ICC = 0.961) and RQS (ICC = 0.975). Among the limitations of the evaluated studies, the absence of prospective studies and deep learning-based analyses was highlighted, along with the limited adherence to radiomics guidelines, use of external testing datasets and open science data. METRICS and RQS are reproducible quality assessment tools, with the former showing higher adherence rates in studies on chondrosarcoma. METRICS is better suited for assessing papers with retrospective design, which is often chosen in musculoskeletal oncology due to the low prevalence of bone sarcomas. Employing quality scoring systems should be promoted in radiomics-based studies to improve methodological quality and facilitate clinical translation. Employing reproducible quality scoring systems, especially METRICS (which shows higher adherence rates than RQS and is better suited for assessing retrospective investigations), is highly recommended to design radiomics-based studies on chondrosarcoma, improve methodological quality and facilitate clinical translation. The low scientific and reporting quality of radiomics studies on chondrosarcoma is the main reason preventing clinical translation. Quality appraisal using METRICS and RQS showed 37.3-94.8% and 2.8-44.4% adherence rates, respectively. Room for improvement was noted in study design, deep learning methods, external testing and open science. Employing reproducible quality scoring systems is recommended to design radiomics studies on bone chondrosarcoma and facilitate clinical translation.
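
Inter-reader reliability of the two scores was summarized with the ICC; a minimal sketch of that computation on invented reader totals is shown below, using the pingouin package as one convenient option (the data frame layout and numbers are assumptions for illustration).

```python
import pandas as pd
import pingouin as pg  # assumed available; any two-way random-effects ICC implementation works

# Toy METRICS totals from three readers over five papers, in long format.
df = pd.DataFrame({
    "paper":  ["p1", "p2", "p3", "p4", "p5"] * 3,
    "reader": ["r1"] * 5 + ["r2"] * 5 + ["r3"] * 5,
    "score":  [72, 55, 81, 64, 90, 70, 57, 79, 66, 88, 74, 52, 83, 62, 91],
})
icc = pg.intraclass_corr(data=df, targets="paper", raters="reader", ratings="score")
print(icc[["Type", "ICC"]])
```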

Artificial Intelligence in Breast US Diagnosis and Report Generation.

Wang J, Tian H, Yang X, Wu H, Zhu X, Chen R, Chang A, Chen Y, Dou H, Huang R, Cheng J, Zhou Y, Gao R, Yang K, Li G, Chen J, Ni D, Dong F, Xu J, Gu N

PubMed · Jun 18 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast ultrasound (BUS) reports. Materials and Methods This retrospective study included 104,364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82,896 cases, validated on 10,385 cases, and tested on an internal set (10,383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (> 10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (7 years of experience), as well as reports from three junior radiologists (2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for non-inferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% [285/300] versus 92.33% [277/300] (<i>P</i> < .001; non-inferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% [348/400] versus 90.75% [363/400] (<i>P</i> = .06), 86.50% [346/400] versus 92.00% [368/400] ( <i>P</i> = .007), and 84.75% [339/400] versus 90.25% [361/400] (<i>P</i> = .02) with and without AI assistance, respectively. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. ©RSNA, 2025.