
Yousefirizi F, Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I

PubMed · Aug 29, 2025
Positron emission tomography/computed tomography (PET/CT) imaging plays a pivotal role in oncology, aiding tumor metabolism assessment, disease staging, and therapy response evaluation. Traditionally, semi-quantitative metrics such as SUVmax have been extensively used, though these methods face limitations in reproducibility and predictive capability. Recent advancements in artificial intelligence (AI), particularly deep learning, have revolutionized PET imaging, significantly enhancing image quantification accuracy and biomarker extraction capabilities, thereby enabling more precise clinical decision-making.
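For readers unfamiliar with the semi-quantitative metrics discussed above, the sketch below shows how a body-weight SUV map and SUVmax are conventionally computed from a decay-corrected PET activity volume; the array names and unit conversions are illustrative assumptions, not code from any cited work.

```python
import numpy as np

def suv_volume(activity_kbq_per_ml: np.ndarray,
               injected_dose_mbq: float,
               body_weight_kg: float) -> np.ndarray:
    """Body-weight SUV: tissue activity [kBq/mL] divided by
    injected dose [kBq] per gram of body weight."""
    injected_dose_kbq = injected_dose_mbq * 1000.0
    body_weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)

def suv_max(suv: np.ndarray, lesion_mask: np.ndarray) -> float:
    """SUVmax is the single hottest voxel in the lesion, which keeps it
    simple but makes it sensitive to noise and reconstruction settings."""
    return float(suv[lesion_mask.astype(bool)].max())
```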

Omer Faruk Durugol, Maximilian Rokuss, Yannick Kirchhoff, Klaus H. Maier-Hein

arXiv preprint · Aug 29, 2025
Automated segmentation of Pancreatic Ductal Adenocarcinoma (PDAC) from MRI is critical for clinical workflows but is hindered by poor tumor-tissue contrast and a scarcity of annotated data. This paper details our submission to the PANTHER challenge, addressing both diagnostic T1-weighted (Task 1) and therapeutic T2-weighted (Task 2) segmentation. Our approach is built upon the nnU-Net framework and leverages a deep, multi-stage cascaded pre-training strategy, starting from a general anatomical foundation model and sequentially fine-tuning on CT pancreatic lesion datasets and the target MRI modalities. Through extensive five-fold cross-validation, we systematically evaluated data augmentation schemes and training schedules. Our analysis revealed a critical trade-off, where aggressive data augmentation produced the highest volumetric accuracy, while default augmentations yielded superior boundary precision (achieving a state-of-the-art MASD of 5.46 mm and HD95 of 17.33 mm for Task 1). For our final submission, we exploited this finding by constructing custom, heterogeneous ensembles of specialist models, essentially creating a mix of experts. This metric-aware ensembling strategy proved highly effective, achieving a top cross-validation Tumor Dice score of 0.661 for Task 1 and 0.523 for Task 2. Our work presents a robust methodology for developing specialized, high-performance models in the context of limited data and complex medical imaging tasks (Team MIC-DKFZ).
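As a rough illustration of the metric-aware ensembling idea (not the authors' nnU-Net pipeline), the sketch below averages foreground probabilities from heterogeneous specialist models, with weights one could bias toward a volumetrically accurate or a boundary-precise specialist; the model interface and weighting scheme are assumptions.

```python
import torch

@torch.no_grad()
def ensemble_foreground(models, volume: torch.Tensor, weights=None) -> torch.Tensor:
    """Weighted average of per-voxel foreground probabilities from a
    heterogeneous set of segmentation networks, each mapping a
    (B, C, D, H, W) volume to single-channel foreground logits."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    prob = torch.zeros_like(volume[:, :1])        # one foreground channel
    for w, model in zip(weights, models):
        prob += w * torch.sigmoid(model(volume))
    return prob                                   # threshold at 0.5 for a binary tumor mask
```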

Maximilian Rokuss, Yannick Kirchhoff, Fabian Isensee, Klaus H. Maier-Hein

arXiv preprint · Aug 29, 2025
Whole-body PET/CT is a cornerstone of oncological imaging, yet accurate lesion segmentation remains challenging due to tracer heterogeneity, physiological uptake, and multi-center variability. While fully automated methods have advanced substantially, clinical practice benefits from approaches that keep humans in the loop to efficiently refine predicted masks. The autoPET/CT IV challenge addresses this need by introducing interactive segmentation tasks based on simulated user prompts. In this work, we present our submission to Task 1. Building on the winning autoPET III nnU-Net pipeline, we extend the framework with promptable capabilities by encoding user-provided foreground and background clicks as additional input channels. We systematically investigate representations for spatial prompts and demonstrate that Euclidean Distance Transform (EDT) encodings consistently outperform Gaussian kernels. Furthermore, we propose online simulation of user interactions and a custom point sampling strategy to improve robustness under realistic prompting conditions. Our ensemble of EDT-based models, trained with and without external data, achieves the strongest cross-validation performance, reducing both false positives and false negatives compared to baseline models. These results highlight the potential of promptable models to enable efficient, user-guided segmentation workflows in multi-tracer, multi-center PET/CT. Code is publicly available at https://github.com/MIC-DKFZ/autoPET-interactive
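A minimal sketch of the EDT prompt encoding described above, assuming clicks are given as voxel coordinates; it relies on scipy's distance transform and is not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def encode_clicks_edt(shape, clicks, spacing=(1.0, 1.0, 1.0), clip_mm=None):
    """Encode user clicks as a Euclidean Distance Transform channel.

    `clicks` is a non-empty list of (z, y, x) voxel coordinates; the result
    is stacked with the PET/CT volumes as an extra input channel.
    """
    seed = np.ones(shape, dtype=bool)
    for z, y, x in clicks:
        seed[z, y, x] = False                    # distance is measured to the nearest click
    edt = distance_transform_edt(seed, sampling=spacing)
    if clip_mm is not None:
        edt = np.clip(edt, 0, clip_mm)           # optional clipping keeps the channel bounded
    return edt.astype(np.float32)
```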

Daniël Boeke, Cedrik Blommestijn, Rebecca N. Wray, Kalina Chupetlovska, Shangqi Gao, Zeyu Gao, Regina G. H. Beets-Tan, Mireia Crispin-Ortuzar, James O. Jones, Wilson Silva, Ines P. Machado

arXiv preprint · Aug 29, 2025
Recurrence risk estimation in clear cell renal cell carcinoma (ccRCC) is essential for guiding postoperative surveillance and treatment. The Leibovich score remains widely used for stratifying distant recurrence risk but offers limited patient-level resolution and excludes imaging information. This study evaluates multimodal recurrence prediction by integrating preoperative computed tomography (CT) and postoperative histopathology whole-slide images (WSIs). A modular deep learning framework with pretrained encoders and Cox-based survival modeling was tested across unimodal, late fusion, and intermediate fusion setups. In a real-world ccRCC cohort, WSI-based models consistently outperformed CT-only models, underscoring the prognostic strength of pathology. Intermediate fusion further improved performance, with the best model (TITAN-CONCH with ResNet-18) approaching the adjusted Leibovich score. Random tie-breaking narrowed the gap between the clinical baseline and learned models, suggesting discretization may overstate individualized performance. Even with simple embedding concatenation, radiology added value primarily through fusion with pathology. These findings demonstrate the feasibility of foundation model-based multimodal integration for personalized ccRCC risk prediction. Future work should explore more expressive fusion strategies, larger multimodal datasets, and general-purpose CT encoders to better match pathology modeling capacity.
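The sketch below illustrates the kind of intermediate fusion by embedding concatenation with a Cox risk head that the study describes; the encoder dimensions, single linear risk layer, and Breslow-style loss are assumptions rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class ConcatFusionCox(nn.Module):
    """Concatenate a WSI embedding and a CT embedding, then map them to a
    scalar log-risk for Cox survival modeling."""
    def __init__(self, dim_wsi: int = 768, dim_ct: int = 512):
        super().__init__()
        self.risk = nn.Linear(dim_wsi + dim_ct, 1, bias=False)

    def forward(self, z_wsi, z_ct):
        return self.risk(torch.cat([z_wsi, z_ct], dim=-1)).squeeze(-1)

def neg_cox_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow form, no tie handling)."""
    order = torch.argsort(time, descending=True)   # cumulative logsumexp builds the risk sets
    risk, event = risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1)
```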

Nico Albert Disch, Yannick Kirchhoff, Robin Peretzke, Maximilian Rokuss, Saikat Roy, Constantin Ulrich, David Zimmerer, Klaus Maier-Hein

arXiv preprint · Aug 29, 2025
Understanding temporal dynamics in medical imaging is crucial for applications such as disease progression modeling, treatment planning, and anatomical development tracking. However, most deep learning methods either consider only single temporal contexts or focus on tasks like classification or regression, limiting their ability to make fine-grained spatial predictions. While some approaches have been explored, they are often limited to single timepoints or specific diseases, or have other technical restrictions. To address this fundamental gap, we introduce Temporal Flow Matching (TFM), a unified generative trajectory method that (i) aims to learn the underlying temporal distribution, (ii) by design can fall back to a nearest-image predictor, i.e. predicting the last context image (LCI), as a special case, and (iii) supports 3D volumes, multiple prior scans, and irregular sampling. Extensive benchmarks on three public longitudinal datasets show that TFM consistently surpasses spatio-temporal methods from natural imaging, establishing a new state-of-the-art and robust baseline for 4D medical image prediction.
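For orientation, the sketch below shows a generic conditional flow-matching training step on 3D volumes (regress the velocity of a straight-line path from a prior scan to the target scan); the network signature and conditioning argument are assumptions, and TFM's handling of multiple priors and irregular sampling is not reproduced here.

```python
import torch
import torch.nn.functional as F

def flow_matching_step(velocity_net, x_prior, x_target, context):
    """One training step of linear conditional flow matching on
    (B, C, D, H, W) volumes: sample a time t, interpolate between the
    prior and target scans, and regress the constant path velocity."""
    b = x_target.shape[0]
    t = torch.rand(b, device=x_target.device).view(b, 1, 1, 1, 1)
    x_t = (1 - t) * x_prior + t * x_target      # point on the straight-line path
    v_target = x_target - x_prior               # its (constant) velocity
    v_pred = velocity_net(x_t, t.view(b), context)
    return F.mse_loss(v_pred, v_target)
```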

Kaouther Mouheb, Marawan Elbatel, Janne Papma, Geert Jan Biessels, Jurgen Claassen, Huub Middelkoop, Barbara van Munster, Wiesje van der Flier, Inez Ramakers, Stefan Klein, Esther E. Bron

arXiv preprint · Aug 29, 2025
While foundation models (FMs) offer strong potential for AI-based dementia diagnosis, their integration into federated learning (FL) systems remains underexplored. In this benchmarking study, we systematically evaluate the impact of key design choices (classification head architecture, fine-tuning strategy, and aggregation method) on the performance and efficiency of federated FM tuning using brain MRI data. Using a large multi-cohort dataset, we find that the architecture of the classification head substantially influences performance, that freezing the FM encoder achieves results comparable to full fine-tuning, and that advanced aggregation methods outperform standard federated averaging. Our results offer practical insights for deploying FMs in decentralized clinical settings and highlight trade-offs that should guide future method development.
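As a point of reference for the aggregation baseline mentioned above, here is a minimal sketch of standard federated averaging (FedAvg) of client weights, weighted by local dataset size; it is the baseline the study reports is outperformed by more advanced schemes, not those schemes themselves.

```python
import copy

def federated_average(client_state_dicts, client_sizes):
    """FedAvg: element-wise average of client model weights (e.g. PyTorch
    `model.state_dict()` outputs), weighted by local sample counts."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg
```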

Wang D, Wang M, Chen R, Song J, Su Y, Wang Y, Liu F, Zhu X, Yang F

PubMed · Aug 29, 2025
This study aims to construct a noninvasive preoperative prediction model for lymph node metastasis in adenocarcinoma of the esophagogastric junction (AEG) using computed tomography (CT) texture characterization and machine learning. We analyzed clinical and imaging data from 57 patients with preoperative contrast-enhanced CT scans and pathologically confirmed AEG. Lesions were delineated, and texture features were extracted from arterial-phase and venous-phase CT images using 3D Slicer software. Features were normalized, reduced in dimensionality, and screened using correlation analysis and the least absolute shrinkage and selection operator (LASSO) algorithm. The lymph node metastasis prediction models employed machine learning algorithms (random forest, logistic regression, decision tree [DT], and support vector machine), with performance validated using receiver operating characteristic curves. In the arterial phase, the random forest model excelled in precision (0.86) and positive predictive value (0.86). The DT model exhibited the best negative predictive value (0.86), while the logistic regression model demonstrated the highest area under the curve (AUC; 0.78) and specificity (1.0). In the venous phase, the DT model excelled in precision (0.72), F1 score (0.76), and recall (0.80), whereas the support vector machine model had the highest AUC (0.75). Differences in AUCs between models in both phases were not statistically significant per DeLong's test, indicating comparable performance. Each model displayed strengths across various metrics, with the DT model showing consistent performance across arterial and venous phases, emphasizing accuracy and specificity. The CT texture-based machine learning models effectively predict lymph node metastasis noninvasively in AEG patients, demonstrating robust predictive efficacy.
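A minimal sketch of the radiomics pipeline pattern described above (normalize features, select a sparse subset with LASSO, then fit one of the classifiers, here logistic regression, scored by ROC AUC); the scikit-learn components are standard, but X and y are assumed arrays of per-lesion texture features and lymph-node status rather than the study's data.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lasso_logreg_auc(X: np.ndarray, y: np.ndarray) -> float:
    """Normalize features, keep the LASSO-selected subset, fit a
    logistic regression, and report mean cross-validated ROC AUC."""
    model = make_pipeline(
        StandardScaler(),
        SelectFromModel(LassoCV(cv=5, random_state=0)),
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```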

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29, 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth based on computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). Pearson correlation analysis and the least absolute shrinkage and selection operator (LASSO) method were used to determine the radiomics signature. A ResNet18 architecture was used to construct the deep-learning model. The 2 models were combined via a ResNet-based fusion network to construct an ensemble model. Receiver operating characteristic curves were plotted to obtain the area under the curve (AUC), and decision curve analysis (DCA) was performed to assess the clinical performance of the 3 models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning models (AUC = 0.802, 95% CI: 0.695-0.899) in the test set. The DeLong test showed a statistically significant difference between the combined model and the deep-learning model (P = .012), and DCA supported the clinical value of the combined model. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
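The sketch below shows one simple way to fuse a LASSO-selected radiomics signature with ResNet-18 image features for growth prediction; the layer sizes and concatenation head are assumptions, not the authors' ResNet-based fusion network.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class RadiomicsDeepFusion(nn.Module):
    """Fuse a radiomics feature vector with a ResNet-18 image embedding
    and predict a growth vs non-growth logit."""
    def __init__(self, n_radiomics: int):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose the 512-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_radiomics, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, radiomics):
        z = self.backbone(image)                    # image: (B, 3, H, W)
        return self.head(torch.cat([z, radiomics], dim=1)).squeeze(-1)
```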

Farhan Fuad Abir, Abigail Elliott Daly, Kyle Anderman, Tolga Ozmen, Laura J. Brattain

arXiv preprint · Aug 29, 2025
Phyllodes tumors (PTs) are rare fibroepithelial breast lesions that are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. This often leads to unnecessary surgical excisions. To address this, we propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy. We developed a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata from 81 subjects with confirmed PTs. Class-aware sampling and subject-stratified 5-fold cross-validation were applied to mitigate class imbalance and prevent data leakage. The results show that our proposed multimodal method outperforms unimodal baselines in classifying benign versus borderline/malignant PTs. Among six image encoders, ConvNeXt and ResNet18 achieved the best performance in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349, and F1-scores of 0.6720 and 0.7294, respectively. This study demonstrates the potential of multimodal AI to serve as a non-invasive diagnostic tool, reducing unnecessary biopsies and improving clinical decision-making in breast tumor management.
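A short sketch of the subject-stratified cross-validation safeguard mentioned above: folds are stratified by class but split on subject IDs, so images from one patient never land in both training and validation; the array names are assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

def subject_stratified_folds(labels, subject_ids, n_splits=5, seed=0):
    """Return (train_idx, val_idx) pairs that are stratified by label
    while keeping each subject's images within a single fold."""
    cv = StratifiedGroupKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    placeholder = np.zeros(len(labels))             # features are not needed for splitting
    return list(cv.split(placeholder, labels, groups=subject_ids))
```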

Moona Mazher, Steven A Niederer, Abdul Qayyum

arXiv preprint · Aug 29, 2025
The accurate segmentation of lesions in whole-body PET/CT imaging is essential for tumor characterization, treatment planning, and response assessment, yet current manual workflows are labor-intensive and prone to inter-observer variability. Automated deep learning methods have shown promise but often remain limited by modality specificity, isolated time points, or insufficient integration of expert knowledge. To address these challenges, we present a two-stage lesion segmentation framework developed for the fourth AutoPET Challenge. In the first stage, a Masked Autoencoder (MAE) is employed for self-supervised pretraining on unlabeled PET/CT and longitudinal CT scans, enabling the extraction of robust modality-specific representations without manual annotations. In the second stage, the pretrained encoder is fine-tuned with a bidirectional XLSTM architecture augmented with ResNet blocks and a convolutional decoder. By jointly leveraging anatomical (CT) and functional (PET) information as complementary input channels, the model achieves improved temporal and spatial feature integration. Evaluation on the AutoPET Task 1 dataset demonstrates that self-supervised pretraining significantly enhances segmentation accuracy, achieving a Dice score of 0.582 compared to 0.543 without pretraining. These findings highlight the potential of combining self-supervised learning with multimodal fusion for robust and generalizable PET/CT lesion segmentation. Code will be available at https://github.com/RespectKnowledge/AutoPet_2025_BxLSTM_UNET_Segmentation
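To make the first-stage pretraining concrete, the sketch below performs the random patch masking at the core of Masked Autoencoder pretraining on a 3D volume; the patch size and 75% mask ratio are common MAE defaults, not values reported by the authors.

```python
import torch

def random_patch_mask(volume: torch.Tensor, patch: int = 16, mask_ratio: float = 0.75):
    """Split a (B, C, D, H, W) volume into a grid of non-overlapping patches
    and randomly mask a fraction of them; the encoder sees only the kept
    patches and the decoder reconstructs the masked ones."""
    b, _, d, h, w = volume.shape
    n_patches = (d // patch) * (h // patch) * (w // patch)
    n_keep = int(n_patches * (1 - mask_ratio))
    scores = torch.rand(b, n_patches, device=volume.device)
    keep_idx = scores.argsort(dim=1)[:, :n_keep]        # indices of visible patches
    mask = torch.ones(b, n_patches, dtype=torch.bool, device=volume.device)
    mask.scatter_(1, keep_idx, False)                   # True = masked / to reconstruct
    return keep_idx, mask
```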