
Altered effective connectivity in patients with drug-naïve first-episode, recurrent, and medicated major depressive disorder: a multi-site fMRI study.

Dai P, Huang K, Hu T, Chen Q, Liao S, Grecucci A, Yi X, Chen BT

PubMed | Aug 5, 2025
Major depressive disorder (MDD) has traditionally been diagnosed through subjective and inconsistent clinical assessments. Resting-state functional magnetic resonance imaging (rs-fMRI) with connectivity analysis has been valuable for identifying neural correlates of MDD, yet most studies rely on single-site data and small sample sizes. This study utilized large-scale, multi-site rs-fMRI data from the REST-meta-MDD consortium to assess effective connectivity in patients with MDD and its subtypes: drug-naïve first-episode (FEDN), recurrent (RMDD), and medicated (MMDD). To mitigate site-related variability, the ComBat harmonization algorithm was applied, and multivariate linear regression was used to control for age and gender effects. A random forest classification model was developed to identify the most predictive features, and nested five-fold cross-validation was used to assess model performance. The model effectively distinguished the FEDN subtype from the healthy control (HC) group, achieving 90.13% accuracy and 96.41% AUC. However, classification performance for RMDD vs. FEDN and MMDD vs. FEDN was lower, suggesting that differences between subtypes are less pronounced than those between patients with MDD and the HC group. Patients with RMDD exhibited more extensive connectivity abnormalities in the frontal-limbic system and default mode network than patients with FEDN, implying heightened rumination. Additionally, medication appeared to partially modulate the aberrant connectivity, steering it toward normalization. This study showed altered brain connectivity in patients with MDD and its subtypes, which could be classified by machine learning models with robust performance. The abnormal connectivity may constitute neural correlates of the presenting symptoms of MDD. These findings provide novel insights into the neural pathogenesis of MDD.
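
As an illustrative sketch of the pipeline described above (covariate regression followed by a random forest with nested five-fold cross-validation), the scikit-learn code below shows one way it could look. The feature layout, hyperparameter grid, and variable names are hypothetical, and ComBat site harmonization is assumed to have been applied beforehand:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# X: (n_subjects, n_features) effective-connectivity features, assumed already
# ComBat-harmonized across sites; y: 1 = FEDN patient, 0 = healthy control;
# covars: (n_subjects, 2) array holding age and gender (hypothetical layout).

def residualize(X, covars):
    """Remove linear age/gender effects, mirroring the regression step."""
    return X - LinearRegression().fit(covars, X).predict(covars)

def nested_cv_auc(X, y, seed=0):
    """Nested five-fold CV: the inner loop tunes the forest, the outer scores it."""
    inner = StratifiedKFold(5, shuffle=True, random_state=seed)
    outer = StratifiedKFold(5, shuffle=True, random_state=seed)
    model = GridSearchCV(
        RandomForestClassifier(random_state=seed),
        param_grid={"n_estimators": [200, 500], "max_depth": [None, 10]},
        cv=inner, scoring="roc_auc")
    return cross_val_score(model, X, y, cv=outer, scoring="roc_auc")
```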

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed | Aug 5, 2025
The REgistry of Flow and Perfusion Imaging for Artificial Intelligence with PET (REFINE PET) was established to collect multicenter PET and associated computed tomography (CT) images, together with clinical data and outcomes, into a comprehensive research resource. REFINE PET will enable validation and development of both standard and novel cardiac PET/CT processing methods. REFINE PET is a multicenter, international registry containing both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), comprising death, myocardial infarction, unstable angina, and late revascularization (>90 days after PET). The registry currently contains data for 35,588 patients from 14 sites, with additional patients and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2,200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlative invasive angiography (within 6 months of MPI) in 5,972 patients and a total of 9,252 MACE during a median follow-up of 4.2 years. REFINE PET leverages the integration of clinical data, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.
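
Purely as an illustration of the MACE definition used above (death, myocardial infarction, unstable angina, or revascularization more than 90 days after PET), a pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical column names: death/mi/unstable_angina are 0/1 event flags;
# days_to_revasc is days from PET to revascularization (NaN if none occurred).
def add_mace_flag(df: pd.DataFrame) -> pd.DataFrame:
    late_revasc = df["days_to_revasc"] > 90   # late revascularization only
    df["mace"] = df[["death", "mi", "unstable_angina"]].any(axis=1) | late_revasc
    return df
```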

Multi-modal MRI cascaded incremental reconstruction with coarse-to-fine spatial registration.

Wang Y, Sun Y, Liu J, Jing L, Liu Q

PubMed | Aug 5, 2025
Magnetic resonance imaging (MRI) typically utilizes multiple contrasts to assess different tissue features, but prolonged scanning increases the risk of motion artifacts. Compressive sensing MRI (CS-MRI) employs computational reconstruction algorithms to accelerate imaging. Fully sampled auxiliary MR images can effectively assist the reconstruction of under-sampled target MR images; however, because of spatial offsets and differences in imaging parameters, achieving cross-modal fusion is a key challenge. To address it, we propose an end-to-end network integrating spatial registration and cascaded incremental reconstruction for multi-modal CS-MRI. Specifically, the proposed network comprises two stages: a coarse-to-fine spatial registration sub-network and a cascaded incremental reconstruction sub-network. The registration sub-network iteratively predicts deformation flow fields between under-sampled target images and fully sampled auxiliary images, gradually aligning them to mitigate spatial offsets. The reconstruction sub-network adopts a new separated criss-cross window Transformer as its basic component and deploys it in a dual-path design to fuse inter-modal and intra-modal features from the registered auxiliary images and under-sampled target images. Through cascade learning, the network recovers incremental details from the fused features and continuously refines the target images. We validated the model on the IXI brain dataset, and the experimental results demonstrate that our network outperforms existing methods.
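
One way to read the two-stage design is sketched below in PyTorch: an iterative registration loop that warps the auxiliary image toward the target, followed by cascaded residual refinement. The sub-network internals (reg_net, the refinement blocks standing in for the criss-cross window Transformer) are placeholders, not the paper's architecture:

```python
import torch
import torch.nn.functional as F
from torch import nn

def warp(img, flow):
    """Warp an image (B, C, H, W) with a dense flow field (B, 2, H, W),
    with flow expressed in normalized [-1, 1] grid units (x, y order)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2).to(img)
    return F.grid_sample(img, base + flow.permute(0, 2, 3, 1),
                         align_corners=True)

class RegisterThenReconstruct(nn.Module):
    def __init__(self, reg_net, recon_blocks):
        super().__init__()
        self.reg_net = reg_net                      # flow-field predictor
        self.blocks = nn.ModuleList(recon_blocks)   # cascaded refinement stages

    def forward(self, target_us, aux_full, n_reg_iters=3):
        aux = aux_full
        for _ in range(n_reg_iters):                # coarse-to-fine alignment
            flow = self.reg_net(torch.cat([target_us, aux], dim=1))
            aux = warp(aux, flow)
        x = target_us
        for block in self.blocks:                   # incremental reconstruction
            x = x + block(torch.cat([x, aux], dim=1))  # recover residual detail
        return x
```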

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

PubMed | Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, which can culminate in plaque rupture and subsequent stroke. Existing clinical risk scores, such as the Systematic Coronary Risk Evaluation (SCORE) and the Framingham risk score, profile cardiovascular risk from factors such as age, cholesterol, and smoking. However, these scores display limited sensitivity for early disease detection. In parallel, ultrasound-based risk markers such as carotid intima-media thickness, while informative, offer only limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data, without combining both for a comprehensive, multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early, sub-clinical stage. We use multi-objective optimization targeting both performance and diversity, aiming to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for selecting optimal models from a vast pool, focusing on those best suited to the classification tasks. Additionally, through eXplainable Artificial Intelligence techniques, this work examines key model contributors and discerns distinct subject subgroups.
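
The multi-objective selection step, scoring candidate ensembles on both performance and diversity and keeping the Pareto-optimal ones, might be sketched as follows. Majority voting and pairwise disagreement are assumptions for illustration; the paper's exact objectives and selection procedure may differ:

```python
import numpy as np
from itertools import combinations

def disagreement(a, b):
    """Pairwise diversity: fraction of samples on which two models differ."""
    return np.mean(a != b)

def pareto_ensembles(val_preds, y_val, size=3):
    """Score every candidate ensemble of `size` models on (accuracy, mean
    pairwise diversity) and keep the Pareto-optimal candidates.
    val_preds: list of 0/1 prediction arrays, one per base model."""
    scored = []
    for combo in combinations(range(len(val_preds)), size):
        votes = np.mean([val_preds[i] for i in combo], axis=0) >= 0.5
        acc = np.mean(votes == y_val)
        div = np.mean([disagreement(val_preds[i], val_preds[j])
                       for i, j in combinations(combo, 2)])
        scored.append((combo, acc, div))
    def dominated(s):
        return any(o[1] >= s[1] and o[2] >= s[2] and
                   (o[1] > s[1] or o[2] > s[2]) for o in scored)
    return [s for s in scored if not dominated(s)]
```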

Recurrent inference machine for medical image registration.

Zhang Y, Zhao Y, Xue H, Kellman P, Klein S, Tao Q

PubMed | Aug 5, 2025
Image registration is essential for medical image applications where voxels must be aligned across multiple images for qualitative or quantitative analysis. With recent advances in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive thanks to their flexible modeling and fast inference. However, compared with traditional optimization-based registration methods, the speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks ideally demand large training datasets, whereas optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, termed the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively. It tackles the accuracy and data-efficiency issues by learning the update rule of the optimization, combining implicit regularization with explicit gradient input. We extensively evaluated RIIR on brain MRI, lung CT, and quantitative cardiac MRI datasets, in terms of both registration accuracy and training-data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods even with only 5% of the training data, demonstrating high data efficiency. Ablation studies highlighted the important added value of the hidden states introduced in the recurrent inference framework for meta-learning. RIIR thus offers a highly data-efficient framework for deep learning-based medical image registration.
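
The core idea, learning the optimizer's update rule from the explicit similarity gradient plus a recurrent hidden state, can be sketched as below. The recurrent cell is a placeholder (e.g., a convolutional GRU), and details such as multi-resolution handling are omitted; this is an interpretation of the abstract, not the paper's implementation:

```python
import torch
from torch import nn

class RecurrentInferenceRegistration(nn.Module):
    """Sketch: each step feeds the current flow and the gradient of the
    similarity loss (explicit input) into a recurrent cell whose hidden
    state acts as learned, implicit regularization."""
    def __init__(self, cell: nn.Module, n_steps: int = 8):
        super().__init__()
        self.cell, self.n_steps = cell, n_steps

    def forward(self, fixed, moving, warp_fn, sim_loss):
        B, _, H, W = moving.shape
        flow = torch.zeros(B, 2, H, W, device=moving.device)
        hidden = None
        for _ in range(self.n_steps):
            flow = flow.detach().requires_grad_(True)   # truncated backprop
            loss = sim_loss(warp_fn(moving, flow), fixed)
            grad = torch.autograd.grad(loss, flow, create_graph=True)[0]
            delta, hidden = self.cell(torch.cat([flow, grad], dim=1), hidden)
            flow = flow + delta                         # learned update rule
        return flow
```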

Delineating retinal breaks in ultra-widefield fundus images with a PraNet-based machine learning model

Takayama, T., Uto, T., Tsuge, T., Kondo, Y., Tampo, H., Chiba, M., Kaburaki, T., Yanagi, Y., Takahashi, H.

medRxiv preprint | Aug 5, 2025
Background: Retinal breaks are critical lesions that can lead to retinal detachment and vision loss if not detected and treated early. Automated, precise delineation of retinal breaks in ultra-widefield fundus (UWF) images remains a significant challenge in ophthalmology.
Objective: This study aimed to develop and validate a deep learning model based on the PraNet architecture for the accurate delineation of retinal breaks in UWF images, with a particular focus on segmentation performance in retinal break-positive cases.
Methods: We developed a deep learning segmentation model based on the PraNet architecture, using a dataset of 8,083 cases and a total of 34,867 UWF images. Of these, 960 images contained retinal breaks and the remaining 33,907 did not. The dataset was split into 34,713 images for training, 81 for validation, and 73 for testing. Model performance was evaluated using both image-wise segmentation metrics (accuracy, precision, recall, intersection over union (IoU), Dice score, and centroid distance score) and lesion-wise detection metrics (sensitivity and positive predictive value).
Results: The PraNet-based model achieved an accuracy of 0.996, a precision of 0.635, a recall of 0.756, an IoU of 0.539, a Dice score of 0.652, and a centroid distance score of 0.081 for pixel-level detection of retinal breaks. Lesion-wise sensitivity was 0.885 and positive predictive value (PPV) was 0.742.
Conclusions: To our knowledge, this is the first study to present pixel-level localization of retinal breaks using deep learning on UWF images. Our findings demonstrate that the PraNet-based model provides precise and robust pixel-level segmentation of retinal breaks in UWF images, offering a clinically applicable tool with the potential to improve patient outcomes. Future work should focus on external validation across multiple institutions and on additional annotation strategies to further enhance model performance and generalizability.
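
Lesion-wise sensitivity and PPV of the kind reported above are commonly computed by matching connected components between prediction and ground truth; a scipy sketch follows, in which the any-overlap matching criterion is an assumption rather than the paper's stated rule:

```python
import numpy as np
from scipy import ndimage

def lesionwise_metrics(pred, gt, min_overlap=1):
    """Lesion-wise sensitivity and PPV from binary masks: a ground-truth
    break counts as detected if at least `min_overlap` predicted pixels
    fall inside it, and symmetrically for predicted components."""
    gt_lab, n_gt = ndimage.label(gt)     # connected components in ground truth
    pr_lab, n_pr = ndimage.label(pred)   # connected components in prediction
    tp_gt = sum(np.count_nonzero(pred[gt_lab == i]) >= min_overlap
                for i in range(1, n_gt + 1))
    tp_pr = sum(np.count_nonzero(gt[pr_lab == j]) >= min_overlap
                for j in range(1, n_pr + 1))
    sens = tp_gt / n_gt if n_gt else float("nan")
    ppv = tp_pr / n_pr if n_pr else float("nan")
    return sens, ppv
```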

Prediction of breast cancer HER2 status changes based on ultrasound radiomics attention network.

Liu J, Xue X, Yan Y, Song Q, Cheng Y, Wang L, Wang X, Xu D

PubMed | Aug 5, 2025
Following neoadjuvant chemotherapy (NAC), human epidermal growth factor receptor 2 (HER2) status may change. If such changes are not promptly identified, timely adjustment of treatment plans can be hindered, compromising the optimal management of breast cancer. Accurate prediction of HER2 status changes therefore holds significant clinical value. In this paper, we examine the factors surrounding HER2 status changes and propose a deep learning architecture combined with radiomics techniques, named the Ultrasound Radiomics Attention Network (URAN), to predict them. First, radiomics is used to extract ultrasound image features that provide rich, comprehensive medical information. Second, a HER2 Key Feature Selection (HKFS) network is constructed to retain crucial features relevant to HER2 status change. Third, we design a Max and Average Attention and Excitation (MAAE) network to adjust the model's focus across key features. Finally, a fully connected neural network predicts HER2 status changes. The code to reproduce our experiments can be found at https://github.com/joanaapa/Foundation-Medical. Our research used genuine ultrasound images sourced from hospitals. On this dataset, URAN outperformed both state-of-the-art and traditional methods in predicting HER2 status changes, achieving an accuracy of 0.8679 and an AUC of 0.8328 (95% CI: 0.77-0.90). Comparative experiments on the public BUS_UCLM dataset further demonstrated URAN's superiority, attaining an accuracy of 0.9283 and an AUC of 0.9161 (95% CI: 0.91-0.92). Rigorously crafted ablation studies validated the design and effectiveness of the radiomics techniques and of the HKFS and MAAE modules integrated within URAN. Results for specific HER2 statuses indicate that URAN is most accurate in predicting changes for tumors with low HER2 expression and IHC scores of 2+ or below. Examining the radiomics attributes of the ultrasound images, we found that various wavelet-transform features significantly influenced HER2 status changes. In summary, we have developed URAN, a method combining radiomics and deep learning to predict HER2 status changes; it delivers better predictive performance than competing algorithms and can mine key radiomics features related to these changes.
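
The abstract does not detail the MAAE block; one plausible reading, by analogy with channel-attention designs, combines max- and average-pooled feature summaries through a shared MLP to produce excitation weights. A PyTorch sketch under that assumption (not the paper's verified architecture):

```python
import torch
from torch import nn

class MaxAvgExcitation(nn.Module):
    """Hypothetical MAAE-style block: max- and average-pooled channel
    summaries pass through a shared MLP and gate the input features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                       # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # average-pooled summary
        mx = self.mlp(x.amax(dim=(2, 3)))       # max-pooled summary
        w = torch.sigmoid(avg + mx)             # per-channel attention weights
        return x * w[:, :, None, None]
```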

STARFormer: A novel spatio-temporal aggregation reorganization transformer of FMRI for brain disorder diagnosis.

Dong W, Li Y, Zeng W, Chen L, Yan H, Siok WT, Wang N

PubMed | Aug 5, 2025
Many existing methods that use functional magnetic resonance imaging (fMRI) to classify brain disorders, such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), often overlook the integration of spatial and temporal dependencies of the blood oxygen level-dependent (BOLD) signals, which may lead to inaccurate or imprecise classification results. To solve this problem, we propose a spatio-temporal aggregation reorganization transformer (STARFormer) that effectively captures both spatial and temporal features of BOLD signals by incorporating three key modules. The region of interest (ROI) spatial structure analysis module uses eigenvector centrality (EC) to reorganize brain regions based on effective connectivity, highlighting critical spatial relationships relevant to the brain disorder. The temporal feature reorganization module systematically segments the time series into equal-dimensional window tokens and captures multiscale features through variable window and cross-window attention. The spatio-temporal feature fusion module employs a parallel transformer architecture with dedicated temporal and spatial branches to extract integrated features. The proposed STARFormer has been rigorously evaluated on two publicly available datasets for the classification of ASD and ADHD. The experimental results confirm that STARFormer achieves state-of-the-art performance across multiple evaluation metrics, providing a more accurate and reliable tool for the diagnosis of brain disorders and biomedical research. The official implementation codes are available at: https://github.com/NZWANG/STARFormer.
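
The ROI reorganization step can be illustrated with plain NumPy: compute eigenvector centrality from the connectivity matrix and reorder regions by it. Taking the absolute connectivity matrix and sorting most-central-first are assumptions for illustration:

```python
import numpy as np

def ec_reorder(conn):
    """Reorder ROIs by eigenvector centrality (EC) of the (absolute)
    effective-connectivity matrix, grouping strongly connected regions."""
    A = np.abs(conn)
    vals, vecs = np.linalg.eig(A)                 # EC = leading eigenvector
    ec = np.abs(vecs[:, np.argmax(vals.real)].real)
    order = np.argsort(-ec)                       # most central ROIs first
    return conn[np.ix_(order, order)], order
```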

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arXiv preprint | Aug 4, 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting overall survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel artificial intelligence (AI) approach for OS prediction from magnetic resonance imaging (MRI), exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BraTS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods, and it demonstrated balanced performance across precision, recall, and F1 score, surpassing the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
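
A segmentation-free ViT classifier of this kind can be instantiated in a few lines with the timm library; binning OS into three classes (as in the BraTS survival task) and stacking the four MRI modalities as input channels are assumptions about the setup, not details taken from the paper:

```python
import timm
import torch

# Minimal sketch: pretrained ViT adapted to 4-channel MRI input and a
# 3-class OS target (short/medium/long survival, assumed binning).
model = timm.create_model(
    "vit_base_patch16_224", pretrained=True, in_chans=4, num_classes=3)

x = torch.randn(2, 4, 224, 224)    # batch of stacked T1/T1ce/T2/FLAIR slices
logits = model(x)                  # (2, 3) class scores; no segmentation step
```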

Scaling Artificial Intelligence for Prostate Cancer Detection on MRI towards Population-Based Screening and Primary Diagnosis in a Global, Multiethnic Population (Study Protocol)

Anindo Saha, Joeran S. Bosma, Jasper J. Twilt, Alexander B. C. D. Ng, Aqua Asif, Kirti Magudia, Peder Larson, Qinglin Xie, Xiaodong Zhang, Chi Pham Minh, Samuel N. Gitau, Ivo G. Schoots, Martijn F. Boomsma, Renato Cuocolo, Nikolaos Papanikolaou, Daniele Regge, Derya Yakar, Mattijs Elschot, Jeroen Veltman, Baris Turkbey, Nancy A. Obuchowski, Jurgen J. Fütterer, Anwar R. Padhani, Hashim U. Ahmed, Tobias Nordström, Martin Eklund, Veeru Kasivisvanathan, Maarten de Rooij, Henkjan Huisman

arXiv preprint | Aug 4, 2025
In this intercontinental, confirmatory study, we include a retrospective cohort of 22,481 MRI examinations (21,288 patients; 46 cities in 22 countries) to train and externally validate the PI-CAI-2B model, an efficient, next-generation iteration of the state-of-the-art AI system developed for detecting Gleason grade group ≥2 prostate cancer on MRI during the PI-CAI study. Of these examinations, 20,471 cases (19,278 patients; 26 cities in 14 countries) from two EU Horizon projects (ProCAncer-I, COMFORT) and 12 independent centers based in Europe, North America, Asia, and Africa are used for training and internal testing. Additionally, 2,010 cases (2,010 patients; 20 external cities in 12 countries) from population-based screening (STHLM3-MRI, IP1-PROSTAGRAM trials) and primary diagnostic settings (PRIME trial) based in Europe, North and South America, Asia, and Australia are used for external testing. The primary endpoint is the proportion of AI-based assessments in agreement with the standard-of-care diagnoses (i.e., clinical assessments made by expert uropathologists on histopathology, if available, or by at least two expert urogenital radiologists in consensus, with access to patient history and peer consultation) in the detection of Gleason grade group ≥2 prostate cancer within the external testing cohorts. Our statistical analysis plan is prespecified with a hypothesis of diagnostic interchangeability with the standard of care at the PI-RADS ≥3 (primary diagnosis) or ≥4 (screening) cut-off, considering an absolute margin of 0.05 and reader estimates derived from the PI-CAI observer study (62 radiologists reading 400 cases). Secondary measures comprise the area under the receiver operating characteristic curve (AUROC) of the AI system, stratified by imaging quality, patient age, and patient ethnicity to identify underlying biases (if any).
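
The study's actual statistical plan is prespecified in the protocol itself; purely as an illustration of a non-inferiority-style comparison of the AI's agreement proportion against a reader-derived estimate with a 0.05 absolute margin, one might compute:

```python
from statsmodels.stats.proportion import proportion_confint

def interchangeable(n_agree, n_total, reader_rate, margin=0.05, alpha=0.05):
    """Illustrative check: the one-sided lower confidence bound on the AI's
    agreement rate must not fall more than `margin` below the reader rate.
    (Hypothetical criterion, not the protocol's prespecified analysis.)"""
    lo, _ = proportion_confint(n_agree, n_total, alpha=2 * alpha,
                               method="wilson")   # one-sided lower bound
    return lo > reader_rate - margin, lo
```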
