
The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed · Aug 5, 2025
The REgistry of Flow and Perfusion Imaging for Artificial Intelligence with PET (REFINE PET) was established to collect multicenter PET and associated computed tomography (CT) images, together with clinical data and outcomes, into a comprehensive research resource. REFINE PET will enable validation and development of both standard and novel cardiac PET/CT processing methods. REFINE PET is a multicenter, international registry containing both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), defined as death, myocardial infarction, unstable angina, or late revascularization (>90 days after PET). The registry currently contains data for 35,588 patients from 14 sites, with additional patients and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is positioned to address a broad range of clinical questions, supported by correlative invasive angiography (within 6 months of myocardial perfusion imaging [MPI]) in 5972 patients and a total of 9252 MACE during a median follow-up of 4.2 years. REFINE PET leverages the integration of clinical data, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.

Multi-modal MRI cascaded incremental reconstruction with coarse-to-fine spatial registration.

Wang Y, Sun Y, Liu J, Jing L, Liu Q

PubMed · Aug 5, 2025
Magnetic resonance imaging (MRI) typically uses multiple contrasts to assess different tissue features, but prolonged scanning increases the risk of motion artifacts. Compressive sensing MRI (CS-MRI) employs computational reconstruction algorithms to accelerate imaging. Fully sampled auxiliary MR images can effectively assist the reconstruction of under-sampled target MR images. However, because of spatial offsets and differences in imaging parameters, achieving effective cross-modal fusion remains a key challenge. To address it, we propose an end-to-end network integrating spatial registration and cascaded incremental reconstruction for multi-modal CS-MRI. Specifically, the proposed network comprises two stages: a coarse-to-fine spatial registration sub-network and a cascaded incremental reconstruction sub-network. The registration sub-network iteratively predicts deformation flow fields between under-sampled target images and fully sampled auxiliary images, gradually aligning them to mitigate spatial offsets. The cascaded incremental reconstruction sub-network adopts a new separated criss-cross window Transformer as its basic component, deployed in a dual-path arrangement to fuse inter-modal and intra-modal features from the registered auxiliary images and under-sampled target images. Through cascade learning, the network recovers incremental details from the fused features and continuously refines the target images. We validate our model on the IXI brain dataset, and the experimental results demonstrate that our network outperforms existing methods.
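
For readers who want a concrete picture of the cascaded incremental idea, the sketch below shows the gist under simple assumptions: each stage fuses the registered auxiliary image with the current target estimate and predicts a residual refinement. FusionBlock, the channel sizes, and the stage count are hypothetical stand-ins, not the authors' separated criss-cross window Transformer.

```python
# Hypothetical sketch of cascaded incremental reconstruction: each stage
# fuses the registered auxiliary contrast with the current target estimate
# and predicts a residual refinement. FusionBlock is a toy stand-in for the
# dual-path separated criss-cross window Transformer.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, target, auxiliary):
        # Inter-modal fusion by channel concatenation (simplified).
        return self.body(torch.cat([target, auxiliary], dim=1))

class CascadedReconstructor(nn.Module):
    def __init__(self, n_stages: int = 4):
        super().__init__()
        self.stages = nn.ModuleList(FusionBlock() for _ in range(n_stages))

    def forward(self, undersampled, registered_aux):
        x = undersampled
        for stage in self.stages:
            x = x + stage(x, registered_aux)   # incremental detail recovery
        return x

target = torch.randn(1, 1, 64, 64)   # under-sampled target contrast (toy)
aux = torch.randn(1, 1, 64, 64)      # registered fully sampled auxiliary
print(CascadedReconstructor()(target, aux).shape)
```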

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

PubMed · Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, culminating in potential plaque rupture and subsequent stroke. Existing clinical risk scores, such as the Systematic COronary Risk Evaluation (SCORE) and the Framingham risk score, profile cardiovascular risk based on factors such as age, cholesterol, and smoking. However, these scores display limited sensitivity in early disease detection. In parallel, ultrasound-based risk markers such as carotid intima-media thickness, while informative, offer only limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data, without combining both for a comprehensive, multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early sub-clinical stage. We use multi-objective optimization targeting both performance and diversity, aiming to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for selecting optimal models from a vast pool, focusing on those best suited for the classification tasks. Additionally, through eXplainable Artificial Intelligence techniques, this work examines key model contributors and discerns unique subject subgroups.
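
The abstract gives no code, but the two optimization objectives it names (performance and diversity) can be made concrete. The sketch below scores a candidate ensemble by majority-vote accuracy and mean pairwise disagreement; the model names and toy predictions are hypothetical.

```python
# Toy scoring of a candidate ensemble on the two stated objectives:
# majority-vote accuracy (performance) and mean pairwise disagreement
# (diversity). Model names and predictions are hypothetical.
from itertools import combinations
import numpy as np

def ensemble_objectives(preds: dict, y_true: np.ndarray):
    stacked = np.vstack(list(preds.values()))
    majority = (stacked.mean(axis=0) >= 0.5).astype(int)  # majority vote
    accuracy = (majority == y_true).mean()
    pairs = combinations(range(len(stacked)), 2)
    diversity = np.mean([(stacked[i] != stacked[j]).mean() for i, j in pairs])
    return accuracy, diversity

y = np.array([0, 1, 1, 0, 1])
candidates = {"ultrasound_cnn": np.array([0, 1, 1, 0, 0]),
              "clinical_mlp":   np.array([0, 1, 0, 0, 1]),
              "fusion_model":   np.array([1, 1, 1, 0, 1])}
print(ensemble_objectives(candidates, y))   # trade off both when selecting
```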

Recurrent inference machine for medical image registration.

Zhang Y, Zhao Y, Xue H, Kellman P, Klein S, Tao Q

PubMed · Aug 5, 2025
Image registration is essential for medical image applications in which voxels must be aligned across multiple images for qualitative or quantitative analysis. With recent advances in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive, offering flexible modeling and fast inference. However, compared with traditional optimization-based registration methods, this speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks typically demand large training datasets, whereas optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, termed the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively. It tackles the accuracy and data-efficiency issues by learning the update rule of optimization, combining implicit regularization with explicit gradient input. We extensively evaluated RIIR on brain MRI, lung CT, and quantitative cardiac MRI datasets, in terms of both registration accuracy and training data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data, demonstrating high data efficiency. Ablation studies highlighted the added value of the hidden states introduced in the recurrent inference framework for meta-learning. RIIR offers a highly data-efficient framework for deep learning-based medical image registration.
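
The core of a recurrent inference machine is a learned update rule: a recurrent cell maps the current gradient of the registration objective, plus a hidden state, to a parameter update. The sketch below illustrates that formulation on a toy quadratic objective; the dimensions, the GRU cell choice, and the objective are assumptions for illustration, not the RIIR implementation.

```python
# Toy illustration of a learned iterative update rule: a GRU cell consumes
# the current gradient and a hidden state and emits a parameter update.
# The quadratic objective, dimensions, and cell choice are assumptions.
import torch
import torch.nn as nn

class RecurrentUpdater(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)    # hidden state = optimization memory
        self.head = nn.Linear(dim, dim)

    def step(self, grad, hidden):
        hidden = self.cell(grad, hidden)
        return self.head(hidden), hidden    # (update, new hidden state)

dim = 16
updater = RecurrentUpdater(dim)
params = torch.zeros(1, dim, requires_grad=True)
hidden = torch.zeros(1, dim)
target = torch.randn(1, dim)
for _ in range(5):                          # unrolled inference iterations
    loss = ((params - target) ** 2).sum()   # stand-in registration objective
    grad, = torch.autograd.grad(loss, params, create_graph=True)
    update, hidden = updater.step(grad, hidden)
    params = params + update
print(loss.item())                          # loss before the final update
```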

Delineating retinal breaks in ultra-widefield fundus images with a PraNet-based machine learning model

Takayama, T., Uto, T., Tsuge, T., Kondo, Y., Tampo, H., Chiba, M., Kaburaki, T., Yanagi, Y., Takahashi, H.

medRxiv preprint · Aug 5, 2025
Background: Retinal breaks are critical lesions that can lead to retinal detachment and vision loss if not detected and treated early. Automated, precise delineation of retinal breaks in ultra-widefield fundus (UWF) images remains a significant challenge in ophthalmology. Objective: This study aimed to develop and validate a deep learning model based on the PraNet architecture for accurate delineation of retinal breaks in UWF images, with a particular focus on segmentation performance in retinal break-positive cases. Methods: We developed a deep learning segmentation model based on the PraNet architecture, using a dataset of 8,083 cases and 34,867 UWF images in total. Of these, 960 images contained retinal breaks and 33,907 did not. The dataset was split into 34,713 images for training, 81 for validation, and 73 for testing. Model performance was evaluated using both image-wise segmentation metrics (accuracy, precision, recall, Intersection over Union (IoU), Dice score, centroid distance score) and lesion-wise detection metrics (sensitivity, positive predictive value). Results: The PraNet-based model achieved an accuracy of 0.996, a precision of 0.635, a recall of 0.756, an IoU of 0.539, a Dice score of 0.652, and a centroid distance score of 0.081 for pixel-level detection of retinal breaks. Lesion-wise sensitivity was 0.885 and positive predictive value (PPV) was 0.742. Conclusions: To our knowledge, this is the first study to present pixel-level localization of retinal breaks using deep learning on UWF images. Our findings demonstrate that the PraNet-based model provides precise and robust pixel-level segmentation of retinal breaks in UWF images, offering a clinically applicable tool with the potential to improve patient outcomes. Future work should focus on external validation across multiple institutions and on additional annotation strategies to further enhance model performance and generalizability.
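
As a pointer for readers re-implementing the evaluation, the pixel-level Dice and IoU metrics reported above can be computed as follows; the mask shapes and overlap are synthetic examples, not the study's data.

```python
# Dice and IoU for binary segmentation masks, matching the image-wise
# metrics named above; the overlapping squares are a synthetic example.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

pred = np.zeros((128, 128)); pred[40:60, 40:60] = 1   # predicted break mask
gt = np.zeros((128, 128)); gt[45:65, 45:65] = 1       # ground-truth mask
print(dice_and_iou(pred, gt))
```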

Prediction of breast cancer HER2 status changes based on ultrasound radiomics attention network.

Liu J, Xue X, Yan Y, Song Q, Cheng Y, Wang L, Wang X, Xu D

PubMed · Aug 5, 2025
Following neoadjuvant chemotherapy (NAC), the Human Epidermal Growth Factor Receptor 2 (HER2) status of a breast cancer may change. If such changes are not promptly identified, treatment plans cannot be adjusted in time, compromising optimal management of breast cancer. Accurate prediction of HER2 status changes therefore holds significant clinical value, underscoring the need for a model capable of precisely forecasting these alterations. In this paper, we examine the factors surrounding HER2 status changes and propose a deep learning architecture combined with radiomics techniques, named the Ultrasound Radiomics Attention Network (URAN), to predict HER2 status changes. First, radiomics is used to extract ultrasound image features, providing rich and comprehensive medical information. Second, an HER2 Key Feature Selection (HKFS) network is constructed to retain crucial features relevant to HER2 status changes. Third, we design a Max and Average Attention and Excitation (MAAE) network to adjust the model's focus across the key features. Finally, a fully connected neural network predicts HER2 status changes. The code to reproduce our experiments can be found at https://github.com/joanaapa/Foundation-Medical. Our research was carried out using genuine ultrasound images sourced from hospitals. On this dataset, URAN outperformed both state-of-the-art and traditional methods in predicting HER2 status changes, achieving an accuracy of 0.8679 and an AUC of 0.8328 (95% CI: 0.77-0.90). Comparative experiments on the public BUS_UCLM dataset further demonstrated URAN's superiority, attaining an accuracy of 0.9283 and an AUC of 0.9161 (95% CI: 0.91-0.92). We also undertook rigorously designed ablation studies, which validated the effectiveness of the radiomics techniques as well as the HKFS and MAAE modules integrated within URAN. Results for specific HER2 statuses indicate that URAN is most accurate at predicting changes in HER2 status characterized by low expression and IHC scores of 2+ or below. Furthermore, examining the radiomics attributes of the ultrasound images revealed that various wavelet transform features significantly influenced HER2 status changes. In summary, we have developed URAN, a method for predicting HER2 status changes that combines radiomics techniques and deep learning; it offers better predictive performance than competing algorithms and can mine key radiomics features related to HER2 status changes.
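
The abstract names the MAAE module without implementation detail. As a hedged sketch, the block below follows the familiar pattern such a name suggests (cf. SE/CBAM channel attention): max- and average-pooled feature summaries pass through a shared excitation MLP whose output re-weights the features. All sizes are illustrative, and this is not claimed to be the authors' design.

```python
# Hedged sketch of a max-plus-average attention-and-excitation block in the
# spirit of MAAE (details are not given in the abstract; this follows the
# common SE/CBAM channel-attention pattern). Sizes are illustrative.
import torch
import torch.nn as nn

class MaxAvgExcitation(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))     # average-pooled summary
        mx = self.mlp(x.amax(dim=(2, 3)))      # max-pooled summary
        w = torch.sigmoid(avg + mx)[..., None, None]
        return x * w                           # re-weight feature maps

feats = torch.randn(2, 32, 8, 8)               # toy feature maps
print(MaxAvgExcitation(32)(feats).shape)       # torch.Size([2, 32, 8, 8])
```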

STARFormer: A novel spatio-temporal aggregation reorganization transformer of fMRI for brain disorder diagnosis.

Dong W, Li Y, Zeng W, Chen L, Yan H, Siok WT, Wang N

PubMed · Aug 5, 2025
Many existing methods that use functional magnetic resonance imaging (fMRI) to classify brain disorders, such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), often overlook the integration of spatial and temporal dependencies of the blood oxygen level-dependent (BOLD) signals, which may lead to inaccurate or imprecise classification results. To solve this problem, we propose a spatio-temporal aggregation reorganization transformer (STARFormer) that effectively captures both spatial and temporal features of BOLD signals by incorporating three key modules. The region of interest (ROI) spatial structure analysis module uses eigenvector centrality (EC) to reorganize brain regions based on effective connectivity, highlighting critical spatial relationships relevant to the brain disorder. The temporal feature reorganization module systematically segments the time series into equal-dimensional window tokens and captures multiscale features through variable window and cross-window attention. The spatio-temporal feature fusion module employs a parallel transformer architecture with dedicated temporal and spatial branches to extract integrated features. The proposed STARFormer has been rigorously evaluated on two publicly available datasets for the classification of ASD and ADHD. The experimental results confirm that STARFormer achieves state-of-the-art performance across multiple evaluation metrics, providing a more accurate and reliable tool for the diagnosis of brain disorders and biomedical research. The official implementation codes are available at: https://github.com/NZWANG/STARFormer.
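
The ROI reorganization step described above, ranking brain regions by eigenvector centrality of a connectivity matrix, can be sketched directly; the toy symmetric matrix below stands in for the effective connectivity STARFormer estimates from BOLD signals.

```python
# Sketch of ROI reorganization by eigenvector centrality (EC): compute the
# leading eigenvector of a symmetric, non-negative connectivity matrix and
# sort regions by it. The random matrix stands in for estimated connectivity.
import numpy as np

def eigenvector_centrality(conn: np.ndarray) -> np.ndarray:
    vals, vecs = np.linalg.eigh(conn)       # eigendecomposition (symmetric input)
    ec = np.abs(vecs[:, np.argmax(vals)])   # leading eigenvector
    return ec / ec.sum()

rng = np.random.default_rng(0)
a = rng.random((6, 6))
conn = (a + a.T) / 2                        # toy symmetric connectivity, 6 ROIs
np.fill_diagonal(conn, 0)
ec = eigenvector_centrality(conn)
roi_order = np.argsort(-ec)                 # most central regions first
print(roi_order, ec.round(3))
```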

Machine Learning and MRI-Based Whole-Organ Magnetic Resonance Imaging Score (WORMS): A Novel Approach to Enhancing Genicular Artery Embolization Outcomes in Knee Osteoarthritis.

Dablan A, Özgül H, Arslan MF, Türksayar O, Cingöz M, Mutlu IN, Erdim C, Guzelbey T, Kılıckesmez O

PubMed · Aug 4, 2025
To evaluate the feasibility of machine learning (ML) models using the preprocedural MRI-based Whole-Organ Magnetic Resonance Imaging Score (WORMS) and clinical parameters to predict treatment response after genicular artery embolization (GAE) in patients with knee osteoarthritis (OA). This retrospective study included 66 patients (72 knees) who underwent GAE between December 2022 and June 2024. Preprocedural assessment included WORMS and Kellgren-Lawrence grading. Clinical response was defined as a ≥50% reduction in Visual Analog Scale (VAS) score. Feature selection was performed using recursive feature elimination and correlation analysis. Multiple ML algorithms (Random Forest, Support Vector Machine, Logistic Regression) were trained using stratified fivefold cross-validation. Conventional statistical analyses assessed group differences and correlations. Of 72 knees, 33 (45.8%) achieved a clinically significant response. Responders showed significantly lower WORMS values for cartilage, bone marrow, and total joint damage (p < 0.05). The Random Forest model demonstrated the best performance, with an accuracy of 81.8%, AUC-ROC of 86.2%, sensitivity of 90%, and specificity of 75%. Key predictive features included total WORMS, ligament score, and baseline VAS. Bone marrow score showed the strongest correlation with VAS reduction (r = -0.430, p < 0.001). ML models integrating WORMS and clinical data suggest that greater cartilage loss, bone marrow edema, joint damage, and higher baseline VAS scores may help identify patients less likely to respond to GAE for knee OA.
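
A minimal sketch of the stated modeling pipeline (recursive feature elimination feeding a Random Forest, evaluated with stratified fivefold cross-validation) is shown below using scikit-learn; the data are synthetic and the feature counts are assumptions.

```python
# Synthetic-data sketch of the described pipeline: recursive feature
# elimination wrapped around a Random Forest, scored with stratified
# fivefold cross-validation on AUC-ROC. Feature counts are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 10))        # 72 knees, 10 WORMS/clinical features (toy)
y = rng.integers(0, 2, size=72)      # responder vs. non-responder (synthetic)

pipe = Pipeline([
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=5)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean())
```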

Evaluating acute image ordering for real-world patient cases via language model alignment with radiological guidelines.

Yao MS, Chae A, Saraiya P, Kahn CE, Witschey WR, Gee JC, Sagreiya H, Bastani O

PubMed · Aug 4, 2025
Diagnostic imaging studies are increasingly important in the management of acutely presenting patients. However, ordering appropriate imaging studies in the emergency department is a challenging task with a high degree of variability among healthcare providers. To address this issue, recent work has investigated whether generative AI and large language models can be leveraged to recommend diagnostic imaging studies in accordance with evidence-based medical guidelines. However, it remains challenging to ensure that these tools can provide recommendations that correctly align with medical guidelines, especially given the limited diagnostic information available in acute care settings. In this study, we introduce a framework to intelligently leverage language models by recommending imaging studies for patient cases that align with the American College of Radiology's Appropriateness Criteria, a set of evidence-based guidelines. To power our experiments, we introduce RadCases, a dataset of over 1500 annotated case summaries reflecting common patient presentations, and apply our framework to enable state-of-the-art language models to reason about appropriate imaging choices. Using our framework, state-of-the-art language models achieve accuracy comparable to clinicians in ordering imaging studies. Furthermore, we demonstrate that our language model-based pipeline can be used as an intelligent assistant by clinicians to support image ordering workflows and improve the accuracy of acute image ordering according to the American College of Radiology's Appropriateness Criteria. Our work demonstrates and validates a strategy to leverage AI-based software to improve trustworthy clinical decision-making in alignment with expert evidence-based guidelines.
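
One way to picture the evaluation loop such a framework implies: score a language model's recommended imaging study against the guideline-derived label for each case. The harness below is a hypothetical sketch; query_model, the label format, and the toy case are assumptions, not the RadCases protocol.

```python
# Hypothetical evaluation harness: compare a model's recommended imaging
# study with the ACR-aligned label for each annotated case summary.
# query_model and the case/label format are illustrative assumptions.
from typing import Callable

def ordering_accuracy(cases: list[dict], query_model: Callable[[str], str]) -> float:
    correct = 0
    for case in cases:
        recommendation = query_model(case["summary"]).strip().lower()
        correct += recommendation == case["acr_label"].lower()
    return correct / len(cases)

toy_cases = [{"summary": "Acute flank pain; suspected ureteral stone.",
              "acr_label": "ct abdomen and pelvis without contrast"}]
print(ordering_accuracy(toy_cases,
                        lambda s: "CT abdomen and pelvis without contrast"))
```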

Monitoring ctDNA in Aggressive B-cell Lymphoma: A Prospective Correlative Study of ctDNA Kinetics and PET-CT Metrics.

Vimalathas G, Hansen MH, Cédile OML, Thomassen M, Møller MB, Dahlmann SK, Kjeldsen MLG, Hildebrandt MG, Nielsen AL, Naghavi-Behzad M, Edenbrandt L, Nyvold CG, Larsen TS

PubMed · Aug 4, 2025
Positron emission tomography-computed tomography (PET-CT) is recommended for response evaluation in aggressive large B-cell lymphoma (LBCL) but cannot detect minimal residual disease (MRD). Circulating tumor DNA (ctDNA) has emerged as a promising biomarker for real-time disease monitoring. This study evaluated longitudinal ctDNA monitoring as an MRD marker in LBCL. In this prospective, single-center study, 14 newly diagnosed LBCL patients receiving first-line immunochemotherapy underwent frequent longitudinal blood sampling. A 53-gene targeted sequencing panel quantified ctDNA and characterized its kinetics, correlating them with clinical parameters and PET-CT, including total metabolic tumor volume (TMTV) calculated using AI-based analysis via RECOMIA. Baseline ctDNA was detected in 11 of 14 patients (79%), with a median variant allele frequency of 6.88% (interquartile range: 1.19-10.20%). ctDNA levels correlated significantly with TMTV (ρ = 0.91, p < 0.0001) and lactate dehydrogenase. ctDNA kinetics, including after one treatment cycle, mirrored PET-CT metabolic changes and identified relapsing or refractory cases. This study demonstrates ctDNA-based MRD monitoring in LBCL using a fixed targeted assay with an analytical sensitivity of at least 10⁻³. ctDNA kinetics reflected the clinical course and PET-CT findings, underscoring the complementary potential of ctDNA to PET-CT.
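
To make the quantitative claims concrete, the sketch below computes a variant allele frequency from read counts and a Spearman correlation between baseline ctDNA level and TMTV; all numbers are illustrative, not study data.

```python
# Illustrative numbers only: VAF from targeted-sequencing read counts, and
# Spearman's rho between baseline ctDNA level and PET-derived TMTV.
import numpy as np
from scipy.stats import spearmanr

def variant_allele_frequency(alt_reads: int, total_reads: int) -> float:
    return alt_reads / total_reads

print(variant_allele_frequency(344, 5000))         # ~0.069, i.e. ~6.9% VAF

ctdna_vaf = np.array([6.9, 1.2, 10.2, 0.4, 3.3])   # baseline VAF, % (toy)
tmtv_ml = np.array([410, 95, 620, 30, 210])        # TMTV, mL (toy)
rho, p = spearmanr(ctdna_vaf, tmtv_ml)
print(rho, p)                                      # monotone association
```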