
Outcome prediction and individualized treatment effect estimation in patients with large vessel occlusion stroke

Lisa Herzog, Pascal Bühler, Ezequiel de la Rosa, Beate Sick, Susanne Wegener

arxiv logopreprintJul 3 2025
Mechanical thrombectomy has become the standard of care in patients with stroke due to large vessel occlusion (LVO). However, only 50% of successfully treated patients show a favorable outcome. We developed and evaluated interpretable deep learning models to predict functional outcomes in terms of the modified Rankin Scale score alongside individualized treatment effects (ITEs) using data of 449 LVO stroke patients from a randomized clinical trial. Besides clinical variables, we considered non-contrast CT (NCCT) and CT angiography (CTA) scans, which were integrated using novel foundation models to make use of advanced imaging information. Clinical variables had good predictive power for binary functional outcome prediction (AUC of 0.719 [0.666, 0.774]), which could be slightly improved by adding CTA imaging (AUC of 0.737 [0.687, 0.795]). Adding NCCT scans or a combination of NCCT and CTA scans to clinical features yielded no improvement. The most important clinical predictor of functional outcome was pre-stroke disability. While estimated ITEs were well calibrated to the average treatment effect, discriminatory ability was limited, as indicated by a C-for-Benefit statistic of around 0.55 in all models. In summary, the models allowed us to jointly integrate CT imaging and clinical features while achieving state-of-the-art prediction performance and ITE estimates. Yet, further research is needed to particularly improve ITE estimation.
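The abstract above reports AUCs with confidence intervals; a minimal sketch of how such an interval can be obtained, using a rank-based AUC and a percentile bootstrap on toy data (pure Python, not the authors' code):

```python
import random

def auc(labels, scores):
    """Rank-based AUC: probability that a positive case outranks a negative one
    (ties counted as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC, resampling patients with replacement."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # the resample must contain both classes
            stats.append(auc(ys, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * len(stats))], stats[int((1 - alpha / 2) * len(stats)) - 1]
```

The same resampling idea extends to any performance statistic; the interval endpoints are simply the empirical 2.5th and 97.5th percentiles of the bootstrap distribution.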

BrainAGE latent representation clustering is associated with longitudinal disease progression in early-onset Alzheimer's disease.

Manouvriez D, Kuchcinski G, Roca V, Sillaire AR, Bertoux M, Delbeuck X, Pruvo JP, Lecerf S, Pasquier F, Lebouvier T, Lopes R

pubmed logopapersJul 3 2025
The early-onset Alzheimer's disease (EOAD) population is clinically, genetically, and pathologically heterogeneous. Identifying biomarkers related to disease progression is crucial for advancing clinical trials and improving therapeutic strategies. This study aims to differentiate EOAD patients with varying rates of progression using a Brain Age Gap Estimation (BrainAGE)-based clustering algorithm applied to structural magnetic resonance images (MRI). A retrospective analysis was conducted of a longitudinal cohort of 142 participants who met the criteria for early-onset probable Alzheimer's disease. Participants were assessed clinically, neuropsychologically, and with structural MRI at baseline and annually for 6 years. A BrainAGE deep learning model pre-trained on 3,227 3D T1-weighted MRIs of healthy subjects was used to extract encoded MRI representations at baseline. K-means clustering was then performed on these encoded representations to stratify the population. The resulting clusters were analyzed for disease severity, cognitive phenotype, and brain volumes at baseline and longitudinally. The optimal number of clusters was determined to be 2. Clusters differed significantly in BrainAGE scores (5.44 [±8] years vs 15.25 [±5] years, p < 0.001). The high-BrainAGE cluster was associated with older age (p = 0.001) and a higher proportion of female patients (p = 0.005), as well as greater disease severity based on Mini Mental State Examination (MMSE) scores (19.32 [±4.62] vs 14.14 [±6.93], p < 0.001) and gray matter volume (0.35 [±0.03] vs 0.32 [±0.02], p < 0.001). Longitudinal analyses revealed significant differences in disease progression (MMSE change of -2.35 [±0.15] vs -3.02 [±0.25] pts/year, p = 0.02; CDR change of 1.58 [±0.10] vs 1.99 [±0.16] pts/year, p = 0.03). K-means clustering of BrainAGE encoded representations stratified EOAD patients based on varying rates of disease progression.
These findings underscore the potential of using BrainAGE as a biomarker for better understanding and managing EOAD.
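The clustering step described above can be sketched with a plain Lloyd's-algorithm k-means on toy 2-D embeddings (the study's actual BrainAGE encoder and choice of k are not reproduced here):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm on row-vector embeddings; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids (keep the old one if a cluster empties out)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

In practice the number of clusters is chosen by an internal criterion (e.g. silhouette or inertia elbow) rather than fixed in advance.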

Deep neural hashing for content-based medical image retrieval: A survey.

Manna A, Sista R, Sheet D

pubmed logopapersJul 3 2025
The ever-growing digital repositories of medical data provide opportunities for advanced healthcare by forming the foundation of a digital healthcare ecosystem. Such an ecosystem facilitates digitized solutions for early diagnosis, evidence-based treatment, precision medicine, and more. Content-based medical image retrieval (CBMIR) plays a pivotal role in delivering advanced diagnostic healthcare within such an ecosystem. The concept of deep neural hashing (DNH) is introduced into CBMIR systems to enable faster and more relevant retrieval from such large repositories. The fusion of DNH with CBMIR is a promising and rapidly growing area whose potential, impact, and methods have not yet been summarized. This survey summarizes the area through an in-depth exploration of DNH methods for CBMIR, portraying an end-to-end pipeline for DNH within a CBMIR system. As part of this, the design of the DNH network, diverse learning strategies, different loss functions, and evaluation metrics for retrieval performance are discussed in detail. Learning strategies for DNH are further categorized by loss function into pointwise, pairwise, and triplet-wise approaches. Around this categorization, existing methods are discussed in depth, focusing on the key contributions of each. Finally, a future vision for the field is shared, emphasizing three aspects: current and immediate areas of research, translating current and near-future research into practical applications, and unexplored research topics. In summary, this survey depicts the current state of research and the future vision of CBMIR systems with DNH.
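The retrieval step of a DNH-based CBMIR system can be illustrated in a few lines: real-valued network embeddings (hypothetical values here) are binarized into compact hash codes, and the database is ranked by Hamming distance to the query:

```python
import numpy as np

def to_hash(embeddings):
    """Binarize real-valued network outputs into {0,1} codes (sign thresholding)."""
    return (embeddings > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Return database indices sorted by Hamming distance to the query."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable"), dists

# toy 8-bit codes for a three-image "database"; a real system uses 32-128 bit codes
db = to_hash(np.array([[ 1, -1,  1, 1, -1,  1, -1,  1],
                       [-1, -1, -1, 1,  1, -1,  1, -1],
                       [ 1, -1,  1, 1, -1,  1, -1, -1]], float))
q = to_hash(np.array([1, -1, 1, 1, -1, 1, -1, 1], float))
order, dists = hamming_rank(q, db)
```

Hamming distance on binary codes is what makes retrieval fast at scale: it reduces to XOR and popcount, which hardware executes far faster than float distance over dense embeddings.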

Transformer attention-based neural network for cognitive score estimation from sMRI data.

Li S, Zhang Y, Zou C, Zhang L, Li F, Liu Q

pubmed logopapersJul 3 2025
Accurately predicting cognitive scores from structural MRI (sMRI) holds significant clinical value for understanding the pathological stages of dementia and forecasting Alzheimer's disease (AD). Existing deep learning methods often depend on anatomical priors, overlooking individual-specific structural differences during AD progression. To address these limitations, this work proposes a deep neural network that incorporates Transformer attention to jointly predict multiple cognitive scores, including ADAS, CDRSB, and MMSE. The architecture first employs a 3D convolutional neural network backbone to encode sMRI, capturing preliminary local structural information. An improved Transformer attention block, integrated with 3D positional encoding and a 3D convolutional layer, then adaptively captures discriminative imaging features across the brain, focusing effectively on key cognition-related regions. Finally, an attention-aware regression network enables the joint prediction of multiple clinical scores. Experimental results demonstrate that our method outperforms existing traditional and deep learning methods on the ADNI dataset. Further qualitative analysis reveals that the dementia-related brain regions identified by the model hold important biological significance, effectively enhancing the performance of cognitive score prediction. Our code is publicly available at: https://github.com/lshsx/CTA_MRI.
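The core operation inside the attention block described above is standard scaled dot-product attention; a minimal NumPy sketch (not the paper's 3D-convolution-augmented variant):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Rows of Q are queries; rows of K/V are keys and values."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights
```

Each output row is a convex combination of value vectors, with the weights indicating which positions (here, brain locations) the model attends to; those weights are also what makes attention maps inspectable for qualitative analysis.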

Multi-task machine learning reveals the functional neuroanatomy fingerprint of mental processing

Wang, Z., Chen, Y., Pan, Y., Yan, J., Mao, W., Xiao, Z., Cao, G., Toussaint, P.-J., Guo, W., Zhao, B., Sun, H., Zhang, T., Evans, A. C., Jiang, X.

biorxiv logopreprintJul 3 2025
Mental processing delineates the functions of the human mind, encompassing a wide range of motor, sensory, emotional, and cognitive processes, each of which is underlain by neuroanatomical substrates. Identifying an accurate representation of the functional neuroanatomical substrates of mental processing could inform understanding of its neural mechanisms. The challenge is that it is unclear whether a specific mental process possesses a 'functional neuroanatomy fingerprint', i.e., a unique and reliable pattern of functional neuroanatomy that underlies the mental process. To address this question, we utilized a multi-task deep learning model to disentangle the functional neuroanatomy fingerprints of seven different and representative mental processes: Emotion, Gambling, Language, Motor, Relational, Social, and Working Memory. Results based on functional magnetic resonance imaging data from two independent cohorts of 1235 subjects from the US and China consistently show that each of the seven mental processes possesses a functional neuroanatomy fingerprint, represented by a unique set of functional activity weights over whole-brain regions characterizing the degree to which each region is involved in the mental process. The functional neuroanatomy fingerprint of a specific mental process exhibits high discrimination ability (93% classification accuracy and AUC of 0.99) against those of the other mental processes, and is robust across different datasets and brain atlases. This study provides a solid functional neuroanatomy foundation for investigating the neural mechanisms of mental processing.
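The fingerprint idea, a distinctive weight vector over brain regions per mental process, can be illustrated with a toy nearest-fingerprint classifier (all data below is synthetic; the paper's multi-task deep model is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_tasks = 100, 7
tasks = ["Emotion", "Gambling", "Language", "Motor", "Relational", "Social", "WM"]

# hypothetical fingerprints: one weight vector over brain regions per mental process
fingerprints = rng.normal(size=(n_tasks, n_regions))

def classify(activity, fingerprints):
    """Assign an activity map to the fingerprint it correlates with most strongly."""
    sims = [np.corrcoef(activity, f)[0, 1] for f in fingerprints]
    return int(np.argmax(sims))

# a noisy activity map generated from the "Motor" fingerprint
sample = fingerprints[3] + 0.3 * rng.normal(size=n_regions)
```

If each process has a unique, reliable fingerprint, a held-out activity pattern should correlate far more with its own process's fingerprint than with any other, which is the discrimination property the abstract quantifies.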

MedFormer: Hierarchical Medical Vision Transformer with Content-Aware Dual Sparse Selection Attention

Zunhui Xia, Hongxing Li, Libin Lan

arxiv logopreprintJul 3 2025
Medical image recognition serves as a key way to aid in clinical diagnosis, enabling more accurate and timely identification of diseases and abnormalities. Vision transformer-based approaches have proven effective in handling various medical recognition tasks. However, these methods encounter two primary challenges. First, they are often task-specific and architecture-tailored, limiting their general applicability. Second, they usually either adopt full attention to model long-range dependencies, resulting in high computational costs, or rely on handcrafted sparse attention, potentially leading to suboptimal performance. To tackle these issues, we present MedFormer, an efficient medical vision transformer with two key ideas. First, it employs a pyramid scaling structure as a versatile backbone for various medical image recognition tasks, including image classification and dense prediction tasks such as semantic segmentation and lesion detection. This structure facilitates hierarchical feature representation while reducing the computation load of feature maps, highly beneficial for boosting performance. Second, it introduces a novel Dual Sparse Selection Attention (DSSA) with content awareness to improve computational efficiency and robustness against noise while maintaining high performance. As the core building technique of MedFormer, DSSA is explicitly designed to attend to the most relevant content. In addition, a detailed theoretical analysis has been conducted, demonstrating that MedFormer has superior generality and efficiency in comparison to existing medical vision transformers. Extensive experiments on a variety of imaging modality datasets consistently show that MedFormer is highly effective in enhancing performance across all three above-mentioned medical image recognition tasks. The code is available at https://github.com/XiaZunhui/MedFormer.
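The sparse-selection idea behind DSSA can be illustrated with a single-stage top-k attention sketch in NumPy: each query keeps only its k highest-scoring keys and masks out the rest (the paper's dual, content-aware region-then-token selection is more elaborate):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax; exp(-inf) terms become exact zeros."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k):
    """Each query attends only to its k highest-scoring keys; other scores
    are set to -inf so they receive exactly zero attention weight."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth = np.partition(scores, -k, axis=1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = softmax(masked)
    return weights @ V, weights
```

Restricting each query to k keys cuts the attention cost from O(N^2) toward O(Nk) while discarding low-scoring (likely noisy) positions, which is the efficiency/robustness trade the abstract describes.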

Development of a prediction model by combining tumor diameter and clinical parameters of adrenal incidentaloma.

Iwamoto Y, Kimura T, Morimoto Y, Sugisaki T, Dan K, Iwamoto H, Sanada J, Fushimi Y, Shimoda M, Fujii T, Nakanishi S, Mune T, Kaku K, Kaneto H

pubmed logopapersJul 3 2025
When adrenal incidentalomas are detected, diagnostic work-up is complicated by the need for endocrine stimulation tests and imaging with various modalities to evaluate whether the tumor is a hormone-producing adrenal tumor. This study aimed to develop a machine-learning-based clinical model that combines computed tomography (CT) imaging and clinical parameters for adrenal tumor classification. This was a retrospective cohort study of 162 patients who underwent hormone testing for adrenal incidentalomas at our institution. Nominal logistic regression analysis was used to identify predictive factors for hormone-producing adrenal tumors, and three random forest classification models were developed using clinical and imaging parameters. The study included 55 patients with non-functioning adrenal tumors (NFAT), 44 with primary aldosteronism (PA), 22 with mild autonomous cortisol secretion (MACS), 18 with Cushing's syndrome (CS), and 23 with pheochromocytoma (Pheo). A random forest classification model combining adrenal tumor diameter on CT, early-morning hormone measurements, and several clinical parameters was constructed and showed high diagnostic accuracy for PA, Pheo, and CS (area under the curve: 0.88, 0.85, and 0.80, respectively). However, sufficient diagnostic accuracy was not achieved for MACS. This model provides a noninvasive and efficient tool for adrenal tumor classification, potentially reducing the need for additional hormone stimulation tests. However, further validation studies are required to confirm its clinical utility.
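A hedged sketch of this kind of modeling setup, a random forest over tumor diameter plus hormone values, on synthetic data (the feature names and label rule below are illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# hypothetical feature columns: tumor diameter (mm) and two early-morning hormone levels
diameter = rng.normal(25, 8, n)
hormone_a = rng.normal(10, 3, n)
hormone_b = rng.normal(5, 2, n)
# synthetic label: "functioning" tumors made larger with higher hormone_a (toy rule)
y = ((diameter > 25) & (hormone_a > 10)).astype(int)
X = np.column_stack([diameter, hormone_a, hormone_b])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

Random forests handle mixed clinical/imaging tabular features without scaling and expose per-feature importances, which is one reason they are a common choice for this kind of classification task.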

MvHo-IB: Multi-View Higher-Order Information Bottleneck for Brain Disorder Diagnosis

Kunyu Zhang, Qiang Li, Shujian Yu

arxiv logopreprintJul 3 2025
Recent evidence suggests that modeling higher-order interactions (HOIs) in functional magnetic resonance imaging (fMRI) data can enhance the diagnostic accuracy of machine learning systems. However, effectively extracting and utilizing HOIs remains a significant challenge. In this work, we propose MvHo-IB, a novel multi-view learning framework that integrates both pairwise interactions and HOIs for diagnostic decision-making, while automatically compressing task-irrelevant redundant information. MvHo-IB introduces several key innovations: (1) a principled method that combines O-information from information theory with a matrix-based Renyi alpha-order entropy estimator to quantify and extract HOIs, (2) a purpose-built Brain3DCNN encoder to effectively utilize these interactions, and (3) a new multi-view learning information bottleneck objective to enhance representation learning. Experiments on three benchmark fMRI datasets demonstrate that MvHo-IB achieves state-of-the-art performance, significantly outperforming previous methods, including recent hypergraph-based techniques. The implementation of MvHo-IB is available at https://github.com/zky04/MvHo-IB.
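O-information can be computed in closed form under a Gaussian assumption, used here as a simpler stand-in for the paper's matrix-based Renyi alpha-order entropy estimator. The quantity is Omega(X) = (n-2)H(X) + sum_i [H(X_i) - H(X_-i)], with Omega > 0 indicating redundancy-dominated and Omega < 0 synergy-dominated interactions:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance `cov`."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def o_information(cov):
    """O-information from a covariance matrix under a Gaussian assumption:
    Omega = (n - 2) H(X) + sum_i [H(X_i) - H(X_{-i})]."""
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[np.ix_([i], [i])])      # marginal H(X_i)
        omega -= gaussian_entropy(cov[np.ix_(rest, rest)])    # leave-one-out H(X_{-i})
    return omega
```

For independent variables Omega is exactly zero, and strongly correlated variables driven by a common source give Omega > 0, matching the redundancy/synergy interpretation above.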

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

pubmed logopapersJul 3 2025
Recently, Western countries have built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish the first Japanese mammographic AI-CADx system. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan. We then randomly divided the images into training, validation, and test datasets in a balanced ratio of 8:1:1 on a case-level basis. The gold standard of annotation for the AI-CADx system was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm. A cut-off concentration gradient of the heatmap image was set at 15%. The AI-CADx system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint of the AI-CADx system was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years. The mammograms included 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis. The AUC in the test dataset was 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, cancer type, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate its breast cancer diagnostic performance when used by Japanese physicians as a second reader. UMIN, trial number UMIN000039009.
Registered 26 December 2019, https://www.umin.ac.jp/ctr/.
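The sliding-window heatmap inference described above can be sketched as follows; here `score_fn` is a stand-in for the SE-ResNet patch classifier, and interpreting the 15% cut-off as a threshold on the heatmap maximum is an assumption of this sketch:

```python
import numpy as np

def sliding_window_heatmap(image, window, stride, score_fn):
    """Slide a window over the image, score each patch, and accumulate
    an averaged per-pixel heatmap of patch scores."""
    H, W = image.shape
    heat = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            s = score_fn(image[y:y + window, x:x + window])
            heat[y:y + window, x:x + window] += s
            counts[y:y + window, x:x + window] += 1
    return heat / np.maximum(counts, 1)

def detect(heatmap, cutoff=0.15):
    """Flag pixels whose heat exceeds `cutoff` of the heatmap maximum."""
    return heatmap >= cutoff * heatmap.max()
```

Overlapping windows (stride smaller than the window) smooth the map and localize lesions at finer than window resolution, at the cost of more classifier evaluations per image.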

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

pubmed logopapersJul 3 2025
Artificial intelligence (AI) can advance medical diagnostics, but limited interpretability restricts its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks like PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings, and establish a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSAs), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low-risk (scores 1-3) and high-risk (scores 4-5). We then utilized the created dictionary to interpret the best predictive models. Combining sequences with FSAs including the ANOVA F-test, correlation coefficient, and Fisher score, and utilizing logistic regression, identified key features: the 90th percentile from T2WI (reflecting hypo-intensity related to prostate cancer risk); variance from T2WI (lesion heterogeneity); shape metrics including least axis length and surface-area-to-volume ratio from ADC (lesion shape and compactness); and run entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p < 0.05). The developed dictionary for prostate MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers to advance trustworthy AI solutions that support reliable and interpretable clinical decisions.
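The ANOVA F-test feature selection mentioned above can be sketched for a binary low-/high-risk split: the F-statistic ranks each feature by between-group versus within-group variance (synthetic features below, not the study's radiomics data):

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F-statistic per feature column for a binary label y."""
    scores = []
    for j in range(X.shape[1]):
        g0, g1 = X[y == 0, j], X[y == 1, j]
        grand = X[:, j].mean()
        between = len(g0) * (g0.mean() - grand) ** 2 + len(g1) * (g1.mean() - grand) ** 2
        within = ((g0 - g0.mean()) ** 2).sum() + ((g1 - g1.mean()) ** 2).sum()
        df_b, df_w = 1, len(y) - 2  # 2 groups
        scores.append((between / df_b) / (within / df_w))
    return np.array(scores)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)
X = rng.normal(size=(100, 5))
X[:, 2] += 2.0 * y          # only feature 2 carries the low/high-risk signal
f = anova_f(X, y)
top = int(np.argmax(f))      # index of the most discriminative feature
```

The top-ranked features are then passed to an interpretable classifier such as logistic regression, which is the combination the abstract reports as most predictive.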
