Prediction and Causality of functional MRI and synthetic signal using a Zero-Shot Time-Series Foundation Model

Alessandro Crimi, Andrea Brovelli

arXiv preprint, Sep 15 2025
Time-series forecasting and causal discovery are central in neuroscience, as predicting brain activity and identifying causal relationships between neural populations and circuits can shed light on the mechanisms underlying cognition and disease. With the rise of foundation models, an open question is how they compare to traditional methods for brain signal forecasting and causality analysis, and whether they can be applied in a zero-shot setting. In this work, we evaluate a foundation model against classical methods for inferring directional interactions from spontaneous brain activity measured with functional magnetic resonance imaging (fMRI) in humans. Traditional approaches often rely on Wiener-Granger causality. We tested the forecasting ability of the foundation model in both zero-shot and fine-tuned settings, and assessed causality by comparing Granger-like estimates from the model with standard Granger causality. We validated the approach using synthetic time series generated from ground-truth causal models, including logistic map coupling and Ornstein-Uhlenbeck processes. The foundation model achieved competitive zero-shot forecasting of fMRI time series (mean absolute percentage error of 0.55 in controls and 0.27 in patients). Although standard Granger causality did not show clear quantitative differences between models, the foundation model provided a more precise detection of causal interactions. Overall, these findings suggest that foundation models offer versatility, strong zero-shot performance, and potential utility for forecasting and causal discovery in time-series data.
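
For context, a minimal sketch of the classical baseline and metric named above: Wiener-Granger causality via statsmodels on a toy coupled system, plus a MAPE helper. The coupling coefficients are illustrative, and the foundation-model side of the comparison is not reproduced.

```python
# Classical baseline from the abstract: standard Granger causality on a
# synthetic system with known ground-truth coupling, plus the MAPE metric
# used to score forecasts. Illustrative only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Toy ground-truth causal system: x drives y with a one-step lag.
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + 0.1 * rng.standard_normal()

# Test whether x Granger-causes y (column order: effect, then cause).
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2, verbose=False)
p_value = results[1][0]["ssr_ftest"][1]
print(f"p-value for 'x Granger-causes y' at lag 1: {p_value:.4g}")

def mape(actual, forecast):
    """Mean absolute percentage error, as reported for the forecasts."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs((actual - forecast) / actual))
```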

Artificial Intelligence-Derived Intramuscular Adipose Tissue Assessment Predicts Perineal Wound Complications Following Abdominoperineal Resection.

Besson A, Cao K, Kokelaar R, Hajdarevic E, Wirth L, Yeung J, Yeung JM

PubMed paper, Sep 15 2025
Perineal wound complications following abdominoperineal resection (APR) significantly impact patient morbidity. Despite various closure techniques, no method has proven superior. Body composition is a key factor influencing postoperative outcomes, and AI-assisted CT scan analysis is an accurate and efficient approach to assessing it. This study aimed to evaluate whether body composition characteristics can predict perineal wound complications following APR. A retrospective cohort study of APR patients from 2012 to 2024 was conducted, comparing primary closure and inferior gluteal artery myocutaneous (IGAM) flap closure outcomes. Preoperative CT scans were analyzed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue, and subcutaneous adipose tissue. Greater IMAT volume correlated with increased wound dehiscence in males undergoing IGAM closure (40% vs. 4.8%, p = 0.027). A lower SM-to-IMAT volume ratio was associated with higher wound infection rates (60% vs. 19%, p = 0.04). Closure technique did not significantly impact wound infection or dehiscence rates. This study is the first to use AI-derived 3D body composition analysis to assess perineal wound complications after APR. IMAT volume significantly influences wound healing in male patients undergoing IGAM reconstruction.
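
The abstract reports only proportions and p-values, so as a hedged sketch, here is how such a two-group comparison might be computed with hypothetical counts (the paper does not state which test it used; Fisher's exact test is a common choice for small surgical cohorts).

```python
# Hedged sketch of the kind of group comparison reported above (wound
# dehiscence by IMAT volume). Counts are hypothetical; the abstract gives
# only percentages (40% vs. 4.8%, p = 0.027).
from scipy.stats import fisher_exact

# rows: high-IMAT vs. low-IMAT; columns: dehiscence, no dehiscence
table = [[4, 6],    # hypothetical high-IMAT group: 40% dehiscence
         [1, 20]]   # hypothetical low-IMAT group: ~4.8% dehiscence
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```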

Enhanced value of chest computed tomography radiomics features in breast density classification.

Zhou W, Yang Q, Zhang H

PubMed paper, Sep 15 2025
This study investigates the correlation between chest computed tomography (CT) radiomics features and breast density classification, aiming to develop an automated radiomics model for breast density assessment using chest CT images, and evaluates its diagnostic performance to establish a CT-based alternative for breast density classification in clinical practice. A retrospective analysis was conducted on patients who underwent both mammography and chest CT scans. The breast density classification results based on mammography images were used to guide the development of CT-based breast density classification models. Radiomic features were extracted from breast regions of interest (ROIs) segmented on chest CT images. Following dimensionality reduction and selection of dominant radiomic features, four four-class classification models were established: Extreme Gradient Boosting (XGBoost), One-vs-Rest Logistic Regression, Gradient Boosting, and Random Forest. The performance of these models in classifying breast density from CT images was then evaluated. A total of 330 patients, aged 23-79 years, were included for analysis. The breast ROIs were automatically segmented using a U-net neural network model and subsequently refined and calibrated manually. A total of 1427 radiomic features were extracted; after dimensionality reduction and feature selection, 28 dominant features closely associated with breast density classification were retained to construct the four classification models. Among the tested models, XGBoost achieved the best performance, with a classification accuracy of 86.6%. Analysis of the receiver operating characteristic curves showed Area Under the Curve (AUC) values of 1.00, 0.93, 0.93, and 0.99 for the four breast density categories, along with a micro-averaged AUC of 0.97 and a macro-averaged AUC of 0.96. Chest CT scans, combined with radiomics models, can accurately classify breast density, providing valuable information for breast cancer risk stratification. The proposed classification model offers a promising tool for automated breast density assessment, which could enhance personalized breast cancer screening and clinical decision-making.
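
For illustration, a hedged sketch of the evaluation pipeline such a model implies: a four-class XGBoost classifier scored with per-class, micro-, and macro-averaged AUCs. The synthetic features, split, and hyperparameters below are stand-ins, not the study's.

```python
# Sketch of four-class classification with per-class, micro-, and
# macro-averaged AUCs. Synthetic features stand in for the 28 selected
# radiomic features; nothing here reproduces the study's data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=330, n_features=28, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(eval_metric="mlogloss")
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)              # (n_test, 4) class probabilities

# One-vs-rest AUC per density category, plus micro and macro averages.
y_bin = label_binarize(y_te, classes=[0, 1, 2, 3])
per_class = [roc_auc_score(y_bin[:, k], proba[:, k]) for k in range(4)]
micro = roc_auc_score(y_bin.ravel(), proba.ravel())
macro = float(np.mean(per_class))
print(per_class, micro, macro)
```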

Fractal-driven self-supervised learning enhances early-stage lung cancer GTV segmentation: a novel transfer learning framework.

Tozuka R, Kadoya N, Yasunaga A, Saito M, Komiyama T, Nemoto H, Ando H, Onishi H, Jingu K

PubMed paper, Sep 15 2025
To develop and evaluate a novel deep learning strategy for automated early-stage lung cancer gross tumor volume (GTV) segmentation, utilizing pre-training with mathematically generated non-natural fractal images. This retrospective study included 104 patients (36-91 years old; 81 males, 23 females) with peripheral early-stage non-small cell lung cancer who underwent radiotherapy at our institution from December 2017 to March 2025. First, we utilized encoders from a Convolutional Neural Network and a Vision Transformer (ViT), pre-trained with four learning strategies: from scratch, ImageNet-1K (1,000 classes of natural images), FractalDB-1K (1,000 classes of fractal images), and FractalDB-10K (10,000 classes of fractal images), with the latter three utilizing publicly available models. Second, the models were fine-tuned using CT images and physician-created contour data. Model accuracy was then evaluated using the volumetric Dice Similarity Coefficient (vDSC), surface Dice Similarity Coefficient (sDSC), and 95th percentile Hausdorff Distance (HD95) between the predicted and ground truth GTV contours, averaged across fourfold cross-validation. Additionally, segmentation accuracy was compared between simple and complex groups, categorized by the surface-to-volume ratio, to assess the impact of GTV shape complexity. Pre-training with FractalDB-10K yielded the best segmentation accuracy across all metrics. For the ViT model, the vDSC, sDSC, and HD95 results were 0.800 ± 0.079, 0.732 ± 0.152, and 2.04 ± 1.59 mm for FractalDB-10K; 0.779 ± 0.093, 0.688 ± 0.156, and 2.72 ± 3.12 mm for FractalDB-1K; and 0.764 ± 0.102, 0.660 ± 0.156, and 3.03 ± 3.47 mm for ImageNet-1K, respectively. Comparing the FractalDB-1K and ImageNet-1K conditions, there was no significant difference in the simple group, whereas the complex group showed a significantly higher vDSC with FractalDB-1K pre-training (0.743 ± 0.095 vs 0.714 ± 0.104, p = 0.006). Pre-training with fractal structures achieved comparable or superior accuracy to ImageNet pre-training for early-stage lung cancer GTV auto-segmentation.
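
For reference, a minimal sketch of the two mask-overlap metrics named above, volumetric Dice and HD95, on binary masks. Distances are in voxel units under an isotropic-voxel assumption; a real evaluation would scale by voxel spacing, and the sDSC tolerance logic is omitted.

```python
# Volumetric Dice and 95th-percentile Hausdorff distance between binary
# segmentation masks, computed on mask surfaces via distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(pred, gt):
    """Volumetric Dice similarity coefficient between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hd95(pred, gt):
    """95th-percentile symmetric surface distance, in voxel units."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    surf_p = pred & ~binary_erosion(pred)       # boundary voxels of prediction
    surf_g = gt & ~binary_erosion(gt)           # boundary voxels of ground truth
    d_pg = distance_transform_edt(~surf_g)[surf_p]  # pred surface -> gt surface
    d_gp = distance_transform_edt(~surf_p)[surf_g]  # gt surface -> pred surface
    return np.percentile(np.concatenate([d_pg, d_gp]), 95)
```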

DinoAtten3D: Slice-Level Attention Aggregation of DinoV2 for 3D Brain MRI Anomaly Classification

Fazle Rafsani, Jay Shah, Catherine D. Chong, Todd J. Schwedt, Teresa Wu

arXiv preprint, Sep 15 2025
Anomaly detection and classification in medical imaging are critical for early diagnosis but remain challenging due to limited annotated data, class imbalance, and the high cost of expert labeling. Emerging vision foundation models such as DINOv2, pretrained on extensive, unlabeled datasets, offer generalized representations that can potentially alleviate these limitations. In this study, we propose an attention-based global aggregation framework tailored specifically for 3D medical image anomaly classification. Leveraging the self-supervised DINOv2 model as a pretrained feature extractor, our method processes individual 2D axial slices of brain MRIs, assigning adaptive slice-level importance weights through a soft attention mechanism. To further address data scarcity, we employ a composite loss function combining supervised contrastive learning with class-variance regularization, enhancing inter-class separability and intra-class consistency. We validate our framework on the ADNI dataset and an institutional multi-class headache cohort, demonstrating strong anomaly classification performance despite limited data availability and significant class imbalance. Our results highlight the efficacy of utilizing pretrained 2D foundation models combined with attention-based slice aggregation for robust volumetric anomaly detection in medical imaging. Our implementation is publicly available at https://github.com/Rafsani/DinoAtten3D.git.
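
A minimal sketch of the slice-level attention aggregation idea: a frozen DINOv2 backbone embeds each 2D axial slice, and a learned soft attention head weights the slices for a volume-level prediction. The torch.hub entry point is real (it requires network access on first use), but the dimensions and head are illustrative, and the paper's composite contrastive loss is omitted.

```python
# Slice-level soft attention over frozen DINOv2 slice embeddings.
import torch
import torch.nn as nn

# Downloads pretrained weights on first call.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()                                # frozen feature extractor
for p in backbone.parameters():
    p.requires_grad = False

class SliceAttentionAggregator(nn.Module):
    def __init__(self, dim=384, n_classes=2):  # ViT-S/14 emits 384-dim features
        super().__init__()
        self.score = nn.Linear(dim, 1)         # soft attention over slices
        self.head = nn.Linear(dim, n_classes)

    def forward(self, slice_feats):            # (n_slices, dim)
        w = torch.softmax(self.score(slice_feats), dim=0)   # per-slice weights
        volume_feat = (w * slice_feats).sum(dim=0)          # weighted pooling
        return self.head(volume_feat)

# Usage: embed each axial slice (3-channel, side divisible by 14), then aggregate.
slices = torch.randn(64, 3, 224, 224)          # stand-in for 64 MRI slices
with torch.no_grad():
    feats = backbone(slices)                   # (64, 384) CLS embeddings
logits = SliceAttentionAggregator()(feats)
```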

AI and Healthcare Disparities: Lessons from a Cautionary Tale in Knee Radiology.

Hull G

PubMed paper, Sep 14 2025
Enthusiasm about the use of artificial intelligence (AI) in medicine has been tempered by concern that algorithmic systems can be unfairly biased against racially minoritized populations. This article uses work on racial disparities in knee osteoarthritis diagnoses to underline that achieving justice in the use of AI in medical imaging requires attention to the entire sociotechnical system within which it operates, rather than to isolated properties of algorithms. Using AI to make current diagnostic procedures more efficient risks entrenching existing disparities; a recent algorithm points to some of the problems in current procedures while highlighting systemic normative issues that need to be addressed in designing further AI systems. The article thus contributes to a literature that frames bias and fairness issues in AI as aspects of structural inequality and injustice, and it highlights ways that AI can help make progress on these problems.

Image analysis of cardiac hepatopathy secondary to heart failure: Machine learning vs gastroenterologists and radiologists.

Miida S, Kamimura H, Fujiki S, Kobayashi T, Endo S, Maruyama H, Yoshida T, Watanabe Y, Kimura N, Abe H, Sakamaki A, Yokoo T, Tsukada M, Numano F, Kashimura T, Inomata T, Fuzawa Y, Hirata T, Horii Y, Ishikawa H, Nonaka H, Kamimura K, Terai S

PubMed paper, Sep 14 2025
Congestive hepatopathy, also known as nutmeg liver, is liver damage secondary to chronic heart failure (HF). Its morphological characteristics on medical imaging remain poorly defined. To leverage machine learning to capture imaging features of congestive hepatopathy from incidentally acquired computed tomography (CT) scans, we retrospectively analyzed 179 chronic HF patients who underwent echocardiography and CT within one year. Right HF severity was classified into three grades. Liver CT images at the paraumbilical vein level were used to develop a ResNet-based machine learning model to predict tricuspid regurgitation (TR) severity. Model accuracy was compared with that of six gastroenterology and four radiology experts. Of the included patients, 120 were male (mean age: 73.1 ± 14.4 years). In predicting TR severity from a single CT image, the machine learning model's accuracy was significantly higher than the experts' average accuracy, and the model was exceptionally reliable for predicting severe TR. Deep learning models, particularly those using ResNet architectures, can help identify morphological changes associated with TR severity, aiding early detection of liver dysfunction in patients with HF and thereby improving outcomes.
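
A hedged sketch of this kind of model: the abstract says only "ResNet-based", so the depth (ResNet-18), input size, three-class head, and training details below are assumptions for illustration.

```python
# Fine-tuning a pretrained ResNet to predict three TR severity grades
# from a single CT slice. All hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)  # three TR severity grades

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One toy training step on stand-in data (grayscale CT repeated to 3 channels).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```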

Toward Next-generation Medical Vision Backbones: Modeling Finer-grained Long-range Visual Dependency

Mingyuan Meng

arXiv preprint, Sep 14 2025
Medical Image Computing (MIC) is a broad research topic covering both pixel-wise (e.g., segmentation, registration) and image-wise (e.g., classification, regression) vision tasks. Effective analysis demands models that capture both global long-range context and local subtle visual characteristics, necessitating fine-grained long-range visual dependency modeling. Compared to Convolutional Neural Networks (CNNs), which are limited by intrinsic locality, transformers excel at long-range modeling; however, due to the high computational load of self-attention, transformers typically cannot process high-resolution features (e.g., full-scale image features before downsampling or patch embedding) and thus struggle to model fine-grained dependency among subtle medical image details. Concurrently, Multi-layer Perceptron (MLP)-based visual models are recognized as computation- and memory-efficient alternatives for modeling long-range visual dependency but have yet to be widely investigated in the MIC community. This doctoral research advances deep learning-based MIC by investigating effective long-range visual dependency modeling. It first presents innovative uses of transformers for both pixel- and image-wise medical vision tasks. The focus then shifts to MLPs, pioneering MLP-based visual models that capture fine-grained long-range visual dependency in medical images. Extensive experiments confirm the critical role of long-range dependency modeling in MIC and reveal a key finding: MLPs can feasibly model finer-grained long-range dependency among higher-resolution medical features containing enriched anatomical and pathological details. This finding establishes MLPs as a superior paradigm over transformers and CNNs, consistently enhancing performance across various medical vision tasks and paving the way for next-generation medical vision backbones.
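
As a concrete illustration of the MLP paradigm the thesis advocates, here is a sketch of an MLP-Mixer-style token-mixing block, one common MLP design in which plain linear layers act across the token axis to give every token a global receptive field; the thesis's specific architectures may differ.

```python
# MLP-Mixer-style block: token mixing across all spatial positions,
# then channel mixing per position. Dimensions are illustrative.
import torch
import torch.nn as nn

class TokenMixingBlock(nn.Module):
    def __init__(self, n_tokens, dim, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Token mixing: linear layers over the token axis, so every token
        # can interact with every other token (global receptive field).
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        # Channel mixing: a standard per-token MLP over feature channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                       # x: (batch, n_tokens, dim)
        y = self.norm1(x).transpose(1, 2)       # (batch, dim, n_tokens)
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 196, 64)                    # 14x14 tokens, 64 channels
print(TokenMixingBlock(n_tokens=196, dim=64)(x).shape)
```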

Disentanglement of Biological and Technical Factors via Latent Space Rotation in Clinical Imaging Improves Disease Pattern Discovery

Jeanny Pan, Philipp Seeböck, Christoph Fürböck, Svitlana Pochepnia, Jennifer Straub, Lucian Beer, Helmut Prosch, Georg Langs

arXiv preprint, Sep 14 2025
Identifying new disease-related patterns in medical imaging data with the help of machine learning enlarges the vocabulary of recognizable findings, supporting diagnostic and prognostic assessment. However, image appearance varies not only due to biological differences but also due to imaging technology, such as vendor, scanning, and reconstruction parameters. The resulting domain shifts impede data representation learning strategies and the discovery of biologically meaningful cluster appearances. To address these challenges, we introduce an approach to actively learn the domain shift via post-hoc rotation of the data latent space, enabling disentanglement of biological and technical factors. Results on real-world heterogeneous clinical data show that the learned disentangled representation leads to stable clusters representing tissue types across different acquisition settings. Cluster consistency is improved by +19.01% (ARI), +16.85% (NMI), and +12.39% (Dice) compared to the entangled representation, outperforming four state-of-the-art harmonization methods. When the clusters are used to quantify tissue composition in idiopathic pulmonary fibrosis patients, the learned profiles enhance Cox survival prediction. This indicates that the proposed label-free framework facilitates biomarker discovery in multi-center routine imaging data. Code is available on GitHub: https://github.com/cirmuw/latent-space-rotation-disentanglement.
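
A conceptual sketch of the rotation idea, not the paper's full method: an orthogonal linear map is learned over pretrained embeddings so that a small block of coordinates absorbs a technical factor (here a hypothetical scanner label). A fuller treatment would also penalize technical information remaining in the other coordinates.

```python
# Learn an orthogonal rotation of a latent space so the first few
# coordinates predict the acquisition (technical) factor, leaving the
# rest as a biological representation. Simplified illustration only.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

latent_dim, tech_dims, n_scanners = 32, 4, 3

rotation = orthogonal(nn.Linear(latent_dim, latent_dim, bias=False))
scanner_head = nn.Linear(tech_dims, n_scanners)
opt = torch.optim.Adam(
    list(rotation.parameters()) + list(scanner_head.parameters()), lr=1e-3)

z = torch.randn(128, latent_dim)                # stand-in pretrained embeddings
scanner = torch.randint(0, n_scanners, (128,))  # stand-in acquisition labels

for _ in range(100):
    rotated = rotation(z)
    # Push scanner information into the first tech_dims coordinates.
    loss = nn.functional.cross_entropy(
        scanner_head(rotated[:, :tech_dims]), scanner)
    opt.zero_grad()
    loss.backward()
    opt.step()

biological = rotation(z)[:, tech_dims:]         # intended scanner-invariant part
```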

Multi-encoder self-adaptive hard attention network with maximum intensity projections for lung nodule segmentation.

Usman M, Rehman A, Ur Rehman A, Shahid A, Khan TM, Razzak I, Chung M, Shin YG

PubMed paper, Sep 14 2025
Accurate lung nodule segmentation is crucial for early-stage lung cancer diagnosis, as it can substantially enhance patient survival rates. Computed tomography (CT) images are widely employed for early diagnosis in lung nodule analysis. However, the heterogeneity of lung nodules, size diversity, and the complexity of the surrounding environment pose challenges for developing robust nodule segmentation methods. In this study, we propose an efficient end-to-end framework, the Multi-Encoder Self-Adaptive Hard Attention Network (MESAHA-Net), which consists of three encoding paths, an attention block, and a decoder block that assimilates CT slice patches with both forward and backward maximum intensity projection (MIP) images. This synergy affords a profound contextual understanding of lung nodules and also results in a deluge of features. To manage the profusion of features generated, we incorporate a self-adaptive hard attention mechanism guided by region of interest (ROI) masks centered on nodular regions, which MESAHA-Net autonomously produces. The network sequentially undertakes slice-by-slice segmentation, emphasizing nodule regions to produce precise three-dimensional (3D) segmentation. The proposed framework has been comprehensively evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, the largest publicly available dataset for lung nodule segmentation. The results demonstrate that our approach is highly robust across various lung nodule types, outperforming previous state-of-the-art techniques in terms of segmentation performance and computational complexity, making it suitable for real-time clinical implementation of artificial intelligence (AI)-driven diagnostic tools.
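
A minimal sketch of the forward and backward maximum intensity projection (MIP) inputs described above, computed from the slices on either side of the target slice; the slab thickness is an assumption, as the abstract does not specify it.

```python
# Forward/backward MIP companions for a target CT slice: the maximum
# intensity over a slab of neighboring slices on each side.
import numpy as np

def forward_backward_mip(volume, idx, slab=5):
    """volume: (n_slices, H, W) CT volume; idx: index of the target slice."""
    fwd = volume[max(0, idx - slab):idx]        # slices before the target
    bwd = volume[idx + 1:idx + 1 + slab]        # slices after the target
    mip_fwd = fwd.max(axis=0) if len(fwd) else volume[idx]
    mip_bwd = bwd.max(axis=0) if len(bwd) else volume[idx]
    return mip_fwd, mip_bwd

ct = np.random.rand(64, 128, 128)               # stand-in CT volume
mip_f, mip_b = forward_backward_mip(ct, idx=32)
```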