
RECISTSurv: Hybrid Multi-task Transformer for Hepatocellular Carcinoma Response and Survival Evaluation.

Jiao R, Liu Q, Zhang Y, Pu B, Xue B, Cheng Y, Yang K, Liu X, Qu J, Jin C, Zhang Y, Wang Y, Zhang YD

PubMed · Jun 18, 2025
Transarterial Chemoembolization (TACE) is a widely applied alternative treatment for patients with hepatocellular carcinoma who are not eligible for liver resection or transplantation. However, the clinical outcomes after TACE are highly heterogeneous. There remains an urgent need for effective and efficient strategies to accurately assess tumor response and predict long-term outcomes using longitudinal and multi-center datasets. To address this challenge, we introduce RECISTSurv, a novel response-driven Transformer model that integrates multi-task learning with a response-driven co-attention mechanism to simultaneously perform liver and tumor segmentation, predict tumor response to TACE, and estimate overall survival from longitudinal Computed Tomography (CT) imaging. The proposed Response-driven Co-attention layer models the interactions between pre-TACE and post-TACE features guided by the treatment-response embedding. This design enables the model to capture complex relationships between imaging features, treatment response, and survival outcomes, thereby enhancing both prediction accuracy and interpretability. In a multi-center validation study, RECISTSurv-predicted prognosis demonstrated higher precision than state-of-the-art methods, with C-indexes ranging from 0.595 to 0.780. Furthermore, when integrated with multi-modal data, RECISTSurv emerged as an independent prognostic factor in all three validation cohorts, with hazard ratios (HR) ranging from 1.693 to 20.7 (P = 0.001-0.042). Our results highlight the potential of RECISTSurv as a powerful tool for personalized treatment planning and outcome prediction in hepatocellular carcinoma patients undergoing TACE. The experimental code is publicly available at https://github.com/rushier/RECISTSurv.
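
The response-driven co-attention idea can be illustrated with a short PyTorch sketch: post-TACE features, conditioned on a treatment-response embedding, attend over pre-TACE features. The layer layout, dimensions, and conditioning scheme here are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class ResponseCoAttention(nn.Module):
    """Cross-attention between pre- and post-TACE features, guided by a
    response embedding (hypothetical layout for illustration)."""
    def __init__(self, dim: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pre_feats, post_feats, response_emb):
        # Condition the post-treatment tokens on the response embedding,
        # then use them as queries over the pre-treatment tokens.
        query = post_feats + response_emb.unsqueeze(1)
        fused, _ = self.attn(query, pre_feats, pre_feats)
        return self.norm(post_feats + fused)  # residual connection

# Toy usage: batch of 2 patients, 64 tokens per CT scan, 256-dim features.
pre = torch.randn(2, 64, 256)
post = torch.randn(2, 64, 256)
resp = torch.randn(2, 256)  # embedding of the predicted treatment response
print(ResponseCoAttention()(pre, post, resp).shape)  # torch.Size([2, 64, 256])
```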

RESIGN: Alzheimer's Disease Detection Using Hybrid Deep Learning based Res-Inception Seg Network.

Amsavalli K, Suba Raja SK, Sudha S

PubMed · Jun 18, 2025
Alzheimer's disease (AD) is a leading cause of death, making early detection critical to improving survival rates. Conventional manual techniques struggle with early diagnosis due to the brain's complex structure, necessitating dependable deep learning (DL) methods. This research proposes RESIGN, a novel model combining Res-InceptionSeg components for detecting AD from MRI images. The input MRI images were pre-processed using a Non-Local Means (NLM) filter to reduce noise artifacts. A ResNet-LSTM model was used for feature extraction, targeting White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). The extracted features were concatenated and classified into Normal, MCI, and AD categories using an Inception V3-based classifier. Additionally, SegNet was employed for abnormal brain region segmentation. The RESIGN model achieved an accuracy of 99.46%, specificity of 98.68%, precision of 95.63%, recall of 97.10%, and an F1 score of 95.42%. It outperformed ResNet, AlexNet, DenseNet, and LSTM by 7.87%, 5.65%, 3.92%, and 1.53%, respectively, and further improved accuracy by 25.69%, 5.29%, 2.03%, and 1.71% over ResNet18, CLSTM, VGG19, and CNN, respectively. The integration of spatial-temporal feature extraction, hybrid classification, and deep segmentation makes RESIGN highly reliable in detecting AD. A 5-fold cross-validation confirmed its robustness, and its performance exceeded that of existing models on the ADNI dataset. However, there are potential limitations related to dataset bias and limited generalizability due to uniform imaging conditions. The proposed RESIGN model demonstrates significant improvement in early AD detection through robust feature extraction and classification, offering a reliable tool for clinical diagnosis.
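
The NLM pre-processing step is standard and easy to reproduce; a minimal sketch with scikit-image follows. The filter parameters are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def preprocess_mri_slice(img: np.ndarray) -> np.ndarray:
    """Reduce noise artifacts in a 2D MRI slice with Non-Local Means."""
    img = img.astype(np.float64) / img.max()   # normalize to [0, 1]
    sigma = estimate_sigma(img)                # estimate the noise level
    return denoise_nl_means(img, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

slice_ = np.random.rand(256, 256)  # stand-in for an MRI slice
denoised = preprocess_mri_slice(slice_)
```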

Dual-scan self-learning denoising for application in ultralow-field MRI.

Zhang Y, He W, Wu J, Xu Z

PubMed · Jun 18, 2025
This study develops a self-learning method to denoise MR images for use in ultralow-field (ULF) applications. We propose using a self-learning neural network to denoise 3D MRI obtained from two acquisitions (dual scan), which are utilized as training pairs. Building on the self-learning method Noise2Noise, we propose an effective data augmentation method and an integrated learning strategy for enhancing model performance. Experimental results demonstrate that (1) the proposed model produces exceptional denoising results and outperforms the traditional Noise2Noise method both subjectively and objectively; (2) magnitude images are effectively denoised compared with several state-of-the-art methods on synthetic and real ULF data; and (3) the proposed method yields better results on phase images and quantitative imaging applications than other denoisers due to the self-learning framework. Theoretical and experimental implementations show that the proposed self-learning model achieves improved performance on magnitude image denoising with synthetic and real-world data at ULF. Additionally, we test our method on calculated phase and quantification images, demonstrating its superior performance over several comparison methods.
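
The dual-scan idea maps directly onto the Noise2Noise training objective: each noisy acquisition serves as the target for the other, so no clean reference is needed. Below is a minimal PyTorch sketch; the tiny network and hyperparameters are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(  # stand-in for the 3D denoising network
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
mse = nn.MSELoss()

scan_a = torch.randn(1, 1, 32, 64, 64)  # first noisy acquisition
scan_b = torch.randn(1, 1, 32, 64, 64)  # second acquisition of the same volume

for step in range(10):
    opt.zero_grad()
    # Symmetric Noise2Noise loss: each noisy scan predicts the other.
    loss = mse(denoiser(scan_a), scan_b) + mse(denoiser(scan_b), scan_a)
    loss.backward()
    opt.step()
```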

RadioRAG: Online Retrieval-augmented Generation for Radiology Question Answering.

Tayebi Arasteh S, Lotfinia M, Bressem K, Siepmann R, Adams L, Ferber D, Kuhl C, Kather JN, Nebelung S, Truhn D

PubMed · Jun 18, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate diagnostic accuracy of various large language models (LLMs) when answering radiology-specific questions with and without access to additional online, up-to-date information via retrieval-augmented generation (RAG). Materials and Methods The authors developed Radiology RAG (RadioRAG), an end-to-end framework that retrieves data from authoritative radiologic online sources in real-time. RAG incorporates information retrieval from external sources to supplement the initial prompt, grounding the model's response in relevant information. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions with reference standard answers, LLMs (GPT-3.5-turbo, GPT-4, Mistral-7B, Mixtral-8 × 7B, and Llama3 [8B and 70B]) were prompted with and without RadioRAG in a zero-shot inference scenario (temperature ≤ 0.1, top- <i>P</i> = 1). RadioRAG retrieved context-specific information from www.radiopaedia.org. Accuracy of LLMs with and without RadioRAG in answering questions from each dataset was assessed. Statistical analyses were performed using bootstrapping while preserving pairing. Additional assessments included comparison of model with human performance and comparison of time required for conventional versus RadioRAG-powered question answering. Results RadioRAG improved accuracy for some LLMs, including GPT-3.5-turbo [74% (59/80) versus 66% (53/80), FDR = 0.03] and Mixtral-8 × 7B [76% (61/80) versus 65% (52/80), FDR = 0.02] on the RSNA-RadioQA dataset, with similar trends in the ExtendedQA dataset. Accuracy exceeded (FDR ≤ 0.007) that of a human expert (63%, (50/80)) for these LLMs, while not for Mistral-7B-instruct-v0.2, Llama3-8B, and Llama3-70B (FDR ≥ 0.21). RadioRAG reduced hallucinations for all LLMs (rates from 6-25%). RadioRAG increased estimated response time fourfold. Conclusion RadioRAG shows potential to improve LLM accuracy and factuality in radiology question answering by integrating real-time domain-specific data. ©RSNA, 2025.

EchoFM: Foundation Model for Generalizable Echocardiogram Analysis.

Kim S, Jin P, Song S, Chen C, Li Y, Ren H, Li X, Liu T, Li Q

PubMed · Jun 18, 2025
Echocardiography is the first-line noninvasive cardiac imaging modality, providing rich spatio-temporal information on cardiac anatomy and physiology. Recently, foundation models trained on extensive and diverse datasets have shown strong performance in various downstream tasks. However, translating foundation models into the medical imaging domain remains challenging due to domain differences between medical and natural images and the lack of diverse patient and disease datasets. In this paper, we introduce EchoFM, a general-purpose vision foundation model for echocardiography trained on a large-scale dataset of over 20 million echocardiographic images from 6,500 patients. To enable effective learning of rich spatio-temporal representations from periodic videos, we propose a novel self-supervised learning framework based on a masked autoencoder with a spatio-temporally consistent masking strategy and periodic-driven contrastive learning. The learned cardiac representations can be readily adapted and fine-tuned for a wide range of downstream tasks, serving as a strong and flexible backbone model. We validate EchoFM through experiments across key downstream tasks in the clinical echocardiography workflow, leveraging public and multi-center internal datasets. EchoFM consistently outperforms SOTA methods, demonstrating superior generalization capabilities and flexibility. The code and checkpoints are available at: https://github.com/SekeunKim/EchoFM.git.
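
A spatio-temporally consistent ("tube") masking strategy, in the spirit described above, samples one spatial mask and repeats it across all frames so the same patches stay hidden over time, preventing the autoencoder from trivially copying a patch from a neighboring frame. The shapes and masking ratio below are illustrative assumptions.

```python
import torch

def tube_mask(n_frames: int, n_patches: int, mask_ratio: float = 0.75):
    """Sample one spatial mask and repeat it across all frames."""
    n_masked = int(n_patches * mask_ratio)
    spatial = torch.zeros(n_patches, dtype=torch.bool)
    spatial[torch.randperm(n_patches)[:n_masked]] = True
    return spatial.unsqueeze(0).expand(n_frames, n_patches)

mask = tube_mask(n_frames=16, n_patches=196)  # e.g. 14x14 patches per frame
print(mask.shape, mask[0].equal(mask[15]))    # torch.Size([16, 196]) True
```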

Comparative analysis of transformer-based deep learning models for glioma and meningioma classification.

Nalentzi K, Gerogiannis K, Bougias H, Stogiannos N, Papavasileiou P

PubMed · Jun 18, 2025
This study compares the classification accuracy of novel transformer-based deep learning models (ViT and BEiT) on brain MRIs of gliomas and meningiomas through a feature-driven approach. Meta's Segment Anything Model was used for semi-automatic segmentation, yielding a fully neural network-based workflow for this classification task. ViT and BEiT models were fine-tuned on a publicly available brain MRI dataset. Glioma/meningioma cases (625/507) were used for training and 520 cases (260/260 gliomas/meningiomas) for testing. The deep radiomic features extracted from ViT and BEiT underwent normalization, dimensionality reduction based on the Pearson correlation coefficient (PCC), and feature selection using analysis of variance (ANOVA). A multi-layer perceptron (MLP) with 1 hidden layer of 100 units, rectified linear unit activation, and the Adam optimizer was utilized. Hyperparameter tuning was performed via 5-fold cross-validation. The ViT model achieved the highest AUC on the validation dataset using 7 features, yielding an AUC of 0.985 and an accuracy of 0.952. On the independent testing dataset, the model exhibited an AUC of 0.962 and an accuracy of 0.904. The BEiT model yielded an AUC of 0.939 and an accuracy of 0.871 on the testing dataset. This study demonstrates the effectiveness of transformer-based models, especially ViT, for glioma and meningioma classification, achieving high AUC scores and accuracy. However, the study is limited by the use of a single dataset, which may affect generalizability. Future work should focus on expanding datasets and further optimizing models to improve performance and applicability across different institutions. This study introduces a feature-driven methodology for glioma and meningioma classification, showcasing advancements in the accuracy and robustness of transformer-based models.
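
The downstream feature pipeline (normalization, PCC-based redundancy reduction, ANOVA selection, MLP) maps cleanly onto scikit-learn; a sketch follows. The correlation threshold and synthetic features are assumptions, while k = 7 matches the 7 features reported above.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame(np.random.rand(200, 768))  # stand-in for ViT deep features
y = np.random.randint(0, 2, 200)            # 0 = glioma, 1 = meningioma

# PCC-based redundancy reduction: drop one of each highly correlated pair.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in X.columns if (upper[c] > 0.95).any()])

clf = make_pipeline(
    StandardScaler(),                # feature normalization
    SelectKBest(f_classif, k=7),     # ANOVA selection (7 features, as reported)
    MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                  solver="adam", max_iter=500, random_state=0),
)
clf.fit(X, y)
```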

Artificial intelligence-based diagnosis of hallux valgus interphalangeus using anteroposterior foot radiographs.

Kwolek K, Gądek A, Kwolek K, Lechowska-Liszka A, Malczak M, Liszka H

PubMed · Jun 18, 2025
A recently developed method enables automated measurement of the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA) from weight-bearing foot radiographs. This approach employs bone segmentation to identify anatomical landmarks and provides standardized angle measurements based on established guidelines. While effective for HVA and IMA, preoperative radiograph analysis remains complex and requires additional measurements, such as the hallux interphalangeal angle (IPA), which has received limited research attention. This study extends the previous method, which measured HVA and IMA, by incorporating automatic measurement of the IPA and evaluating its accuracy and clinical relevance. A preexisting database of manually labeled foot radiographs was used to train a U-Net neural network for segmenting bones and identifying landmarks necessary for IPA measurement. Of the 265 radiographs in the dataset, 161 were selected for training and 20 for validation. The U-Net neural network achieved a high mean Sørensen-Dice index (> 0.97). The remaining 84 radiographs were used to assess the reliability of automated IPA measurements against those taken manually by two orthopedic surgeons (O_A and O_B) using computer-based tools. Each measurement was repeated to assess intraobserver (O_A1 and O_A2) and interobserver (O_A2 and O_B) reliability. Agreement between automated and manual methods was evaluated using the Intraclass Correlation Coefficient (ICC), and Bland-Altman analysis identified systematic differences. The standard error of measurement (SEM) and Pearson correlation coefficients quantified precision and linearity, and measurement times were recorded to evaluate efficiency. The artificial intelligence (AI)-based system demonstrated excellent reliability, with ICC3.1 values of 0.92 (AI vs O_A2) and 0.88 (AI vs O_B), both statistically significant (P < 0.001). For manual measurements, ICC values were 0.95 (O_A2 vs O_A1) and 0.95 (O_A2 vs O_B), supporting both intraobserver and interobserver reliability. Bland-Altman analysis revealed minimal biases of (1) 1.61° (AI vs O_A2) and (2) 2.54° (AI vs O_B), with clinically acceptable limits of agreement. The AI system also showed high precision, as evidenced by low SEM values: (1) 1.22° (O_A2 vs O_B); (2) 1.77° (AI vs O_A2); and (3) 2.09° (AI vs O_B). Furthermore, Pearson correlation coefficients confirmed strong linear relationships between automated and manual measurements, with r = 0.85 (AI vs O_A2) and r = 0.90 (AI vs O_B). The AI method significantly improved efficiency, completing all 84 measurements 8 times faster than manual methods and reducing the time required from an average of 36 minutes to just 4.5 minutes. The proposed AI-assisted IPA measurement method shows strong clinical potential, corresponding well with manual measurements. Integrating IPA with HVA and IMA assessments provides a comprehensive tool for automated forefoot deformity analysis, supporting hallux valgus severity classification and preoperative planning, while offering substantial time savings in high-volume clinical settings.
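
For readers wanting to reproduce the agreement statistics, the sketch below computes the Bland-Altman bias with limits of agreement and the Pearson correlation for paired AI vs. manual angle measurements; the data are synthetic stand-ins. ICC(3,1) is typically computed with a dedicated package (e.g. pingouin) and is omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.normal(12, 4, 84)           # manual IPA measurements (degrees)
ai = manual + rng.normal(1.6, 1.8, 84)   # AI measurements with a small bias

diff = ai - manual
bias = diff.mean()                            # Bland-Altman systematic bias
loa = (bias - 1.96 * diff.std(ddof=1),        # 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))
r, p = stats.pearsonr(ai, manual)             # linearity
print(f"bias={bias:.2f} deg, LoA=({loa[0]:.2f}, {loa[1]:.2f}), r={r:.2f}")
```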

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18, 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas; segmentations were then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis), and intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R^2 = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. The tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
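
The two reported metrics are straightforward to compute for binary 3D masks; the sketch below uses NumPy/SciPy, with HD95 taken over voxel-to-mask distances in both directions (a common simplification of the surface-based definition). The study's exact tooling is not specified, so treat this as an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient for binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of voxel-to-mask distances, both directions, in mm."""
    da = distance_transform_edt(~b, sampling=spacing)[a]  # a-voxels -> b
    db = distance_transform_edt(~a, sampling=spacing)[b]  # b-voxels -> a
    return float(np.percentile(np.hstack([da, db]), 95))

pred = np.zeros((32, 64, 64), bool); pred[10:20, 20:40, 20:40] = True
gt = np.zeros((32, 64, 64), bool);   gt[11:21, 22:42, 21:41] = True
print(f"DSC={dice(pred, gt):.3f}, HD95={hd95(pred, gt):.2f} mm")
```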

A Deep Learning Lung Cancer Segmentation Pipeline to Facilitate CT-based Radiomics

So, A. C. P., Cheng, D., Aslani, S., Azimbagirad, M., Yamada, D., Dunn, R., Josephides, E., McDowall, E., Henry, A.-R., Bille, A., Sivarasan, N., Karapanagiotou, E., Jacob, J., Pennycuick, A.

medRxiv preprint · Jun 18, 2025
Background: CT-based radio-biomarkers could provide non-invasive insights into tumour biology to risk-stratify patients. One limitation is the laborious manual segmentation of regions-of-interest (ROI). We present a deep learning auto-segmentation pipeline for radiomic analysis. Patients and Methods: 153 patients with resected stage 2A-3B non-small cell lung cancer (NSCLC) had tumours segmented using nnU-Net with review by two clinicians. The nnU-Net was pretrained with anatomical priors in non-cancerous lungs and fine-tuned on NSCLCs. Three ROIs were segmented: intra-tumoural, peri-tumoural, and whole lung. 1967 features were extracted using PyRadiomics. Feature reproducibility was tested using segmentation perturbations. Features were selected using minimum-redundancy-maximum-relevance with Random Forest-recursive feature elimination nested in 500 bootstraps. Results: Auto-segmentation time was ~36 seconds/series. Mean volumetric and surface Dice-Sorensen coefficient (DSC) scores were 0.84 (±0.28) and 0.79 (±0.34), respectively. DSC scores were significantly correlated with tumour shape (sphericity, diameter) and location (worse with chest wall adherence), but not batch effects (e.g. contrast, reconstruction kernel). 6.5% of cases had missed segmentations and 6.5% required major changes. Pre-training on anatomical priors resulted in better segmentations than training on tumour labels alone (p<0.001) or tumour with anatomical labels (p<0.001). Most radiomic features were not reproducible following perturbations and resampling. Adding radiomic features, however, did not significantly improve the clinical model in predicting 2-year disease-free survival: AUCs 0.67 (95% CI 0.59-0.75) vs 0.63 (95% CI 0.54-0.71), respectively (p=0.28). Conclusion: Our study demonstrates that integrating auto-segmentation into radio-biomarker discovery is feasible with high efficiency and accuracy. Whilst the radiomic analysis showed limited reproducibility, our auto-segmentation may allow more robust radio-biomarker analysis using deep learning features.
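
The feature-selection stage can be sketched compactly: Random Forest recursive feature elimination run inside bootstrap resamples, with selection frequency used as a stability score (the mRMR pre-filter is omitted for brevity). The feature matrix, bootstrap count, and hyperparameters below are synthetic assumptions standing in for the 1967 PyRadiomics features and 500 bootstraps.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.utils import resample

X = np.random.rand(153, 200)       # patients x radiomic features (stand-in)
y = np.random.randint(0, 2, 153)   # 2-year disease-free survival label

counts = np.zeros(X.shape[1])
for seed in range(50):             # the study used 500 bootstraps
    Xb, yb = resample(X, y, random_state=seed)
    rfe = RFE(RandomForestClassifier(n_estimators=50, random_state=seed),
              n_features_to_select=10, step=0.2)
    rfe.fit(Xb, yb)
    counts += rfe.support_         # count how often each feature survives

stable = np.argsort(counts)[::-1][:10]
print("most frequently selected feature indices:", stable)
```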

Cardiovascular risk in childhood and young adulthood is associated with the hemodynamic response function in midlife: The Bogalusa Heart Study.

Chuang KC, Naseri M, Ramakrishnapillai S, Madden K, Amant JS, McKlveen K, Gwizdala K, Dhullipudi R, Bazzano L, Carmichael O

PubMed · Jun 18, 2025
In functional MRI, a hemodynamic response function (HRF) describes how neural events are translated into a blood oxygenation response detected through imaging. The HRF has the potential to quantify neurovascular mechanisms by which cardiovascular risks modify brain health, but relationships among HRF characteristics, brain health, and cardiovascular modifiers of brain health have not been well studied to date. One hundred and thirty-seven middle-aged participants (mean age: 53.6 ± 4.7 years; 62% female; 78% White American and 22% African American) in this exploratory analysis from the Bogalusa Heart Study completed clinical evaluations from childhood to midlife and an adaptive Stroop task during fMRI in midlife. The HRFs of each participant within seventeen brain regions of interest (ROIs) previously identified as activated by this task were calculated using a convolutional neural network approach. Faster and more efficient neurovascular functioning was characterized in terms of five HRF characteristics: faster time to peak (TTP), shorter full width at half maximum (FWHM), smaller peak magnitude (PM), smaller trough magnitude (TM), and smaller area under the HRF curve (AUHRF). Composite HRF summary characteristics over all ROIs were calculated for multivariable and simple linear regression analyses. In multivariable models, faster and more efficient HRF characteristics were found in non-smokers compared to smokers (AUHRF, p = 0.029). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM, TM, and AUHRF, p = 0.030, 0.042, and 0.032) and cerebral amyloid burden (FWHM, p = 0.027) in midlife, as well as a greater response rate on the Stroop task (FWHM, p = 0.042) in midlife. In simple linear regression models, faster and more efficient HRF characteristics were found in women compared to men (TM, p = 0.019), in White American participants compared to African American participants (AUHRF, p = 0.044), and in non-smokers compared to smokers (TTP and AUHRF, p = 0.019 and 0.010). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM and TM, p = 0.019 and 0.029) and lower BMI (FWHM and AUHRF, p = 0.025 and 0.017) in childhood and adolescence, and with lower BMI (TTP, p = 0.049), cerebral amyloid burden (FWHM, p = 0.002), and white matter hyperintensity burden (FWHM, p = 0.046) in midlife, as well as greater accuracy on the Stroop task (AUHRF, p = 0.037) in midlife. In a diverse middle-aged community sample, HRF-based indicators of faster and more efficient neurovascular functioning were associated with better brain health and cognitive function, as well as better lifespan cardiovascular health.
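
The five HRF summary characteristics are straightforward to compute from a sampled HRF curve; the sketch below does so on a canonical double-gamma HRF. The curve, sampling grid, and half-maximum FWHM convention are illustrative assumptions, not the study's exact definitions.

```python
import numpy as np
from scipy import stats

t = np.linspace(0, 30, 301)  # seconds
hrf = stats.gamma.pdf(t, 6) - 0.35 * stats.gamma.pdf(t, 16)  # double gamma

pm = hrf.max()                              # peak magnitude (PM)
ttp = t[hrf.argmax()]                       # time to peak (TTP)
tm = abs(hrf.min())                         # trough magnitude (TM)
auhrf = np.abs(hrf).sum() * (t[1] - t[0])   # area under the HRF curve (AUHRF)
half = t[hrf >= pm / 2]                     # region above half maximum
fwhm = half.max() - half.min()              # full width at half maximum (FWHM)
print(f"TTP={ttp:.1f}s FWHM={fwhm:.1f}s PM={pm:.3f} TM={tm:.3f} AUHRF={auhrf:.3f}")
```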