
LA-Seg: Disentangled sinogram pattern-guided transformer for lesion segmentation in limited-angle computed tomography.

Yoon JH, Lee YJ, Yoo SB

PubMed · Jul 21, 2025
Limited-angle computed tomography (LACT) offers patient-friendly benefits such as rapid scanning and reduced radiation exposure. However, the incompleteness of LACT data often causes notable artifacts, posing challenges for precise medical interpretation. Although numerous approaches have been introduced to reconstruct LACT images into complete computed tomography (CT) scans, they focus on improving image quality and operate separately from lesion segmentation models, often overlooking essential lesion-specific information. This is because reconstruction models are primarily optimized for overall image quality rather than local lesion-specific regions, in a non-end-to-end setup where each component is optimized independently and may not contribute to reaching the global minimum of the overall objective function. To address this problem, we propose LA-Seg, a transformer-based segmentation model that operates in the sinogram domain of LACT data. LA-Seg uses an auxiliary reconstruction task to estimate incomplete sinogram regions, enhancing segmentation robustness. Transformers adapted from video prediction models capture the spatial structure and sequential patterns in sinograms and reconstruct features in incomplete regions using a disentangled representation guided by distinctive patterns. We also propose a contrastive abnormal feature loss to better distinguish normal from abnormal regions. Experimental results demonstrate that LA-Seg consistently surpasses existing medical segmentation approaches under diverse LACT conditions. The source code is available at https://github.com/jhyoon964/LA-Seg.
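The contrastive abnormal feature loss is described only at a high level in the abstract. Below is a minimal sketch of one plausible formulation, pulling each class's region features toward their centroid while pushing the normal and abnormal centroids apart; the function name, tensor shapes, and margin value are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def contrastive_abnormal_loss(normal_feats, abnormal_feats, margin=1.0):
    """Illustrative contrastive loss between normal and abnormal region features.

    normal_feats: (N, D) embeddings from normal regions.
    abnormal_feats: (M, D) embeddings from abnormal (lesion) regions.
    A common centroid-based formulation, not the paper's exact loss.
    """
    normal_feats = F.normalize(normal_feats, dim=1)
    abnormal_feats = F.normalize(abnormal_feats, dim=1)

    # Intra-class compactness: pull features toward their class centroid.
    n_center = normal_feats.mean(dim=0, keepdim=True)
    a_center = abnormal_feats.mean(dim=0, keepdim=True)
    intra = ((normal_feats - n_center) ** 2).sum(dim=1).mean() \
          + ((abnormal_feats - a_center) ** 2).sum(dim=1).mean()

    # Inter-class separation: hinge on the distance between centroids.
    inter = F.relu(margin - (n_center - a_center).pow(2).sum().sqrt())
    return intra + inter
```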

The added value for MRI radiomics and deep-learning for glioblastoma prognostication compared to clinical and molecular information

D. Abler, O. Pusterla, A. Joye-Kühnis, N. Andratschke, M. Bach, A. Bink, S. M. Christ, P. Hagmann, B. Pouymayou, E. Pravatà, P. Radojewski, M. Reyes, L. Ruinelli, R. Schaer, B. Stieltjes, G. Treglia, W. Valenzuela, R. Wiest, S. Zoergiebel, M. Guckenberger, S. Tanadini-Lang, A. Depeursinge

arXiv preprint · Jul 21, 2025
Background: Radiomics shows promise in characterizing glioblastoma, but its added value over clinical and molecular predictors has yet to be proven. This study assessed the added value of conventional radiomics (CR) and deep learning (DL) MRI radiomics for glioblastoma prognosis (≤ 6 vs. > 6 months survival) on a large multi-center dataset. Methods: After patient selection, our curated dataset comprised 1152 glioblastoma (WHO 2016) patients from five Swiss centers and one public source. It included clinical (age, gender), molecular (MGMT, IDH), and baseline MRI data (T1, T1 contrast, FLAIR, T2) with tumor regions. CR and DL models were developed using standard methods and evaluated on internal and external cohorts. Sub-analyses assessed models with different feature sets (imaging-only, clinical/molecular-only, combined-features) and patient subsets (S-1: all patients, S-2: patients with molecular data, S-3: IDH wildtype). Results: The best performance was observed in the full cohort (S-1). In external validation, the combined-feature CR model achieved an AUC of 0.75, slightly but significantly outperforming the clinical-only (0.74) and imaging-only (0.68) models. DL models showed similar trends, though without statistical significance. In S-2 and S-3, combined models did not outperform clinical-only models. Exploratory analysis of CR models for overall survival prediction suggested greater relevance of imaging data: across all subsets, combined-feature models significantly outperformed clinical-only models, though with a modest advantage of 2-4 C-index points. Conclusions: While confirming the predictive value of anatomical MRI sequences for glioblastoma prognosis, this multi-center study found that standard CR and DL radiomics approaches offer minimal added value over demographic predictors such as age and gender.
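The "2-4 C-index points" comparison can be reproduced in outline with a standard concordance-index computation. The sketch below uses the lifelines package; all variable names are hypothetical, and this is not the study's code:

```python
import numpy as np
from lifelines.utils import concordance_index

def cindex_gain(times, events, risk_clinical, risk_combined):
    """Compare clinical-only vs. combined-feature survival models by C-index.

    times: observed survival times (e.g., months); events: 1 if death observed.
    concordance_index expects higher scores to predict longer survival, so
    risk scores are negated. A gain of 0.02-0.04 corresponds to the
    "2-4 C-index points" reported in the abstract.
    """
    c_clin = concordance_index(times, -np.asarray(risk_clinical), events)
    c_comb = concordance_index(times, -np.asarray(risk_combined), events)
    return c_comb - c_clin
```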

Regularized Low-Rank Adaptation for Few-Shot Organ Segmentation

Ghassen Baklouti, Julio Silva-Rodríguez, Jose Dolz, Houda Bahig, Ismail Ben Ayed

arXiv preprint · Jul 21, 2025
Parameter-efficient fine-tuning (PEFT) of pre-trained foundation models is increasingly attracting interest in medical imaging due to its effectiveness and computational efficiency. Among these methods, Low-Rank Adaptation (LoRA) is a notable approach based on the assumption that the adaptation inherently occurs in a low-dimensional subspace. While it has shown good performance, its implementation requires a fixed and unalterable rank, which might be challenging to select given the unique complexities and requirements of each medical imaging downstream task. Inspired by advancements in natural image processing, we introduce a novel approach for medical image segmentation that dynamically adjusts the intrinsic rank during adaptation. Viewing the low-rank representation of the trainable weight matrices as a singular value decomposition, we introduce an ℓ1 sparsity regularizer to the loss function and tackle it with a proximal optimizer. The regularizer can be viewed as a penalty on the decomposition rank; hence, its minimization finds task-adapted ranks automatically. Our method is evaluated in a realistic few-shot fine-tuning setting, where we compare it first to standard LoRA and then to several other PEFT methods across two distinct tasks: base organs and novel organs. Our extensive experiments demonstrate significant performance improvements driven by our method, highlighting its efficiency and robustness against suboptimal rank initialization. Our code is publicly available: https://github.com/ghassenbaklouti/ARENA
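The rank-adaptation mechanism (an ℓ1 penalty on the diagonal of the LoRA factorization, minimized with a proximal soft-thresholding step) is concrete enough to sketch. The class below is illustrative only; its names, initialization, and maximum rank are assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class AdaptiveRankLoRA(nn.Module):
    """LoRA update dW = A @ diag(s) @ B with an l1 penalty on s.

    Soft-thresholding s after each gradient step (the proximal operator of
    the l1 norm) drives unneeded entries to exactly zero, so the effective
    rank adapts to the task instead of being fixed in advance.
    """
    def __init__(self, d_in, d_out, max_rank=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d_out, max_rank) * 0.01)
        self.s = nn.Parameter(torch.ones(max_rank))   # "singular values"
        self.B = nn.Parameter(torch.zeros(max_rank, d_in))

    def delta_w(self):
        return self.A @ torch.diag(self.s) @ self.B

    @torch.no_grad()
    def prox_step(self, lam, lr):
        # Proximal operator of lr * lam * ||s||_1: soft-thresholding.
        t = lr * lam
        self.s.copy_(torch.sign(self.s) * torch.clamp(self.s.abs() - t, min=0.0))
```

Calling prox_step after each optimizer update prunes components; the number of nonzero entries in s is then the task-adapted rank.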

MedSR-Impact: Transformer-Based Super-Resolution for Lung CT Segmentation, Radiomics, Classification, and Prognosis

Marc Boubnovski Martell, Kristofer Linton-Reid, Mitchell Chen, Sumeet Hindocha, Benjamin Hunter, Marco A. Calzado, Richard Lee, Joram M. Posma, Eric O. Aboagye

arXiv preprint · Jul 21, 2025
High-resolution volumetric computed tomography (CT) is essential for accurate diagnosis and treatment planning in thoracic diseases; however, it is limited by radiation dose and hardware costs. We present the Transformer Volumetric Super-Resolution Network (TVSRN-V2), a transformer-based super-resolution (SR) framework designed for practical deployment in clinical lung CT analysis. Built from scalable components, including Through-Plane Attention Blocks (TAB) and Swin Transformer V2, our model effectively reconstructs fine anatomical details in low-dose CT volumes and integrates seamlessly with downstream analysis pipelines. We evaluate its effectiveness on three critical lung cancer tasks (lobe segmentation, radiomics, and prognosis) across multiple clinical cohorts. To enhance robustness across variable acquisition protocols, we introduce pseudo-low-resolution augmentation, simulating scanner diversity without requiring private data. TVSRN-V2 demonstrates a significant improvement in segmentation accuracy (+4% Dice), higher radiomic feature reproducibility, and enhanced predictive performance (+0.06 C-index and AUC). These results indicate that SR-driven recovery of structural detail significantly enhances clinical decision support, positioning TVSRN-V2 as a well-engineered, clinically viable system for dose-efficient imaging and quantitative analysis in real-world CT workflows.
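The pseudo-low-resolution augmentation is described only in outline. One plausible minimal version degrades the through-plane axis by a random factor and re-interpolates; the factor range, interpolation mode, and function name are assumptions:

```python
import torch
import torch.nn.functional as F

def pseudo_low_res_pair(volume, factors=(2, 3, 4)):
    """Simulate a thick-slice acquisition from a thin-slice CT volume.

    volume: (1, 1, D, H, W) float tensor. The volume is downsampled along
    the slice axis D by a randomly chosen factor and interpolated back,
    mimicking scanner slice-thickness diversity. Returns (input, target)
    for super-resolution training. Illustrative sketch only.
    """
    d = volume.shape[2]
    f = int(factors[torch.randint(len(factors), (1,)).item()])
    low = F.interpolate(volume, size=(d // f, volume.shape[3], volume.shape[4]),
                        mode="trilinear", align_corners=False)
    degraded = F.interpolate(low, size=volume.shape[2:], mode="trilinear",
                             align_corners=False)
    return degraded, volume
```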

A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT

Tanjin Taher Toma, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Wesley Jong, Darwish Alabyad, Vivek Batheja, Abhishek Jha, Mayank Patel, Darko Pucar, Jayadira del Rivero, Karel Pacak, Ronald M. Summers

arXiv preprint · Jul 21, 2025
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R^2 = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context in deep learning models to achieve precise PCC segmentation, supporting clinical assessment and longitudinal monitoring.
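In practice, annotation strategies like TKA reduce to merging binary organ masks into a single multi-class label map before nnU-Net training. A minimal sketch follows; the label ordering and precedence rule are assumptions, not the paper's exact convention:

```python
import numpy as np

def build_tka_labels(tumor, kidney, aorta):
    """Combine binary masks into one label map for the TKA scheme.

    Inputs are boolean arrays of identical shape. The tumor mask is written
    last so it takes precedence wherever masks overlap.
    Labels: 0 = background, 1 = kidney, 2 = aorta, 3 = tumor (assumed).
    """
    labels = np.zeros(tumor.shape, dtype=np.uint8)
    labels[kidney] = 1
    labels[aorta] = 2
    labels[tumor] = 3
    return labels
```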

Cascaded Multimodal Deep Learning in the Differential Diagnosis, Progression Prediction, and Staging of Alzheimer's and Frontotemporal Dementia

Guarnier, G., Reinelt, J., Molloy, E. N., Mihai, P. G., Einaliyan, P., Valk, S., Modestino, A., Ugolini, M., Mueller, K., Wu, Q., Babayan, A., Castellaro, M., Villringer, A., Scherf, N., Thierbach, K., Schroeter, M. L., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of ageing, and the Frontotemporal Lobar Degeneration Neuroimaging Initiative

medRxiv preprint · Jul 21, 2025
Dementia is a complex condition whose multifaceted nature poses significant challenges in the diagnosis, prognosis, and treatment of patients. Despite the availability of large open-source data fueling a wealth of promising research, effective translation of preclinical findings to clinical practice remains difficult. This barrier is largely due to the complexity of unstructured and disparate preclinical and clinical data, which traditional analytical methods struggle to handle. Novel analytical techniques involving Deep Learning (DL), however, are gaining significant traction in this regard. Here, we investigated the potential of a cascaded multimodal DL-based system (TelDem), assessing its ability to integrate and analyze a large, heterogeneous dataset (n = 7,159 patients) applied to three clinically relevant use cases. Using a Cascaded Multi-Modal Mixing Transformer (CMT), we assessed TelDem's validity and its explainability (using a Cross-Modal Fusion Norm, CMFN) in (i) differential diagnosis between healthy individuals, Alzheimer's disease (AD), and three subtypes of frontotemporal lobar degeneration; (ii) disease staging from healthy cognition to mild cognitive impairment (MCI) and AD; and (iii) predicting progression from MCI to AD. Our findings show that the CMT enhances diagnostic and prognostic accuracy when incorporating multimodal data compared to unimodal modeling, and that cerebrospinal fluid (CSF) biomarkers play a key role in accurate model decision making. These results reinforce the power of DL technology in tapping deeper into already existing data, thereby accelerating preclinical dementia research by utilizing clinically relevant information to disentangle complex dementia pathophysiology.

Latent Space Synergy: Text-Guided Data Augmentation for Direct Diffusion Biomedical Segmentation

Muhammad Aqeel, Maham Nazir, Zanxi Ruan, Francesco Setti

arXiv preprint · Jul 21, 2025
Medical image segmentation suffers from data scarcity, particularly in polyp detection, where annotation requires specialized expertise. We present SynDiff, a framework combining text-guided synthetic data generation with efficient diffusion-based segmentation. Our approach employs latent diffusion models to generate clinically realistic synthetic polyps through text-conditioned inpainting, augmenting limited training data with semantically diverse samples. Unlike traditional diffusion methods requiring iterative denoising, we introduce direct latent estimation, enabling single-step inference with a T× computational speedup (T being the number of denoising steps). On CVC-ClinicDB, SynDiff achieves 96.0% Dice and 92.9% IoU while maintaining real-time capability suitable for clinical deployment. The framework demonstrates that controlled synthetic augmentation improves segmentation robustness without distribution shift. SynDiff bridges the gap between data-hungry deep learning models and clinical constraints, offering an efficient solution for deployment in resource-limited medical settings.
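The source of the claimed speedup is that standard diffusion segmentation runs the network once per denoising step, while direct latent estimation runs it once in total. A schematic contrast follows, where model is a placeholder for any noise-conditioned network; this is not the SynDiff code:

```python
import torch

@torch.no_grad()
def iterative_denoise(model, x_t, timesteps):
    """Standard reverse diffusion: T sequential model evaluations."""
    for t in reversed(range(timesteps)):
        x_t = model(x_t, t)   # one reverse step per call
    return x_t

@torch.no_grad()
def direct_latent_estimate(model, x_t, timesteps):
    """Direct latent estimation: the network is trained to map the noisy
    latent straight to the clean segmentation latent, so inference costs a
    single call, roughly a T x speedup over the loop above."""
    return model(x_t, timesteps - 1)
```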

Mammo-SAE: Interpreting Breast Cancer Concept Learning with Sparse Autoencoders

Krishna Kanth Nakka

arXiv preprint · Jul 21, 2025
Interpretability is critical in high-stakes domains such as medical imaging, where understanding model decisions is essential for clinical adoption. In this work, we introduce Sparse Autoencoder (SAE)-based interpretability to breast imaging by analyzing Mammo-CLIP, a vision-language foundation model pretrained on large-scale mammogram image-report pairs. We train a patch-level Mammo-SAE on Mammo-CLIP to identify and probe latent features associated with clinically relevant breast concepts such as mass and suspicious calcification. Our findings reveal that the top-activated class-level latent neurons in the SAE latent space often align with ground-truth regions, and they also uncover several confounding factors influencing the model's decision-making process. Additionally, we analyze which latent neurons the model relies on during downstream fine-tuning to improve breast concept prediction. This study highlights the promise of interpretable SAE latent representations in providing deeper insight into the internal workings of foundation models at every layer for breast imaging.
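A patch-level SAE of this kind is a standard construction: an overcomplete linear encoder with a sparsity penalty, trained to reconstruct frozen backbone features. A minimal sketch, with dimensions and the penalty weight chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchSAE(nn.Module):
    """Sparse autoencoder over frozen patch embeddings.

    Each d-dim patch feature is encoded into an overcomplete k-dim latent
    (k >> d) whose activations are kept sparse, then linearly decoded back.
    Individual latent neurons can then be probed against breast concepts.
    """
    def __init__(self, d=512, k=4096):
        super().__init__()
        self.enc = nn.Linear(d, k)
        self.dec = nn.Linear(k, d)

    def forward(self, x):
        z = F.relu(self.enc(x))   # sparse latent code
        return self.dec(z), z

def sae_loss(x, x_hat, z, l1_weight=1e-3):
    # Reconstruction fidelity plus l1 sparsity on the latent activations.
    return F.mse_loss(x_hat, x) + l1_weight * z.abs().mean()
```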

Automated extraction of vertebral bone mineral density from imaging with various scan parameters: a cadaver study with correlation to quantitative computed tomography.

Ramschütz C, Kloth C, Vogele D, Baum T, Rühling S, Beer M, Jansen JU, Schlager B, Wilke HJ, Kirschke JS, Sollmann N

PubMed · Jul 21, 2025
To investigate lumbar vertebral volumetric bone mineral density (vBMD) from ex vivo opportunistic multi-detector computed tomography (MDCT) scans using different protocols, and to compare it to dedicated quantitative CT (QCT) values from the same specimens. Cadavers from two female donors (ages 62 and 68 years) were scanned (L1-L5) using six different MDCT protocols and one dedicated QCT scan. Opportunistic vBMD was extracted using an artificial intelligence-based algorithm. The vBMD measurements from the six MDCT protocols, which varied in peak tube voltage (80-140 kVp), tube load (72-200 mAs), slice thickness (0.75-1 mm), and/or slice increment (0.5-0.75 mm), were compared to those obtained from dedicated QCT. A strong positive correlation was observed between vBMD from opportunistic MDCT and reference QCT (ρ = 0.869, p < 0.01). Agreement between vBMD measurements from the MDCT protocols and the QCT reference standard according to the intraclass correlation coefficient (ICC) was 0.992 (95% confidence interval [CI]: 0.982-0.998). Bland-Altman analysis showed biases ranging from −12.66 to 8.00 mg/cm³ across the six MDCT protocols, with all data points falling within the respective limits of agreement (LOA) for both cadavers. Opportunistic vBMD measurements of lumbar vertebrae demonstrated reliable consistency ex vivo across various scan parameters when compared to dedicated QCT.
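The agreement statistics follow the standard Bland-Altman recipe: per-vertebra differences between the two methods, their mean (the bias), and bias ± 1.96 SD as the limits of agreement. A minimal sketch of that computation, with illustrative variable names rather than the study's code:

```python
import numpy as np

def bland_altman(mdct_vbmd, qct_vbmd):
    """Bland-Altman agreement between opportunistic MDCT and reference QCT.

    Inputs are paired per-vertebra vBMD measurements in mg/cm^3. Returns
    the bias (mean difference) and the 95% limits of agreement.
    """
    diff = np.asarray(mdct_vbmd, dtype=float) - np.asarray(qct_vbmd, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```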

Lightweight Network Enhancing High-Resolution Feature Representation for Efficient Low Dose CT Denoising.

Li J, Li Y, Qi F, Wang S, Zhang Z, Huang Z, Yu Z

PubMed · Jul 21, 2025
Low-dose computed tomography plays a crucial role in reducing radiation exposure in clinical imaging; however, the resultant noise significantly impacts image quality and diagnostic precision. Recent transformer-based models have demonstrated strong denoising capabilities but are often constrained by high computational complexity. To overcome these limitations, we propose AMFA-Net, an adaptive multi-order feature aggregation network that provides a lightweight architecture for enhancing high-resolution feature representation in low-dose CT imaging. AMFA-Net effectively integrates local and global contexts within high-resolution feature maps while learning discriminative representations through multi-order context aggregation. We introduce an agent-based self-attention cross-shaped window transformer block that efficiently captures global context in high-resolution feature maps, which is subsequently fused with backbone features to preserve critical structural information. Our approach employs multi-order gated aggregation to adaptively guide the network in capturing expressive interactions that may be overlooked in fused features, thereby producing robust representations for denoised image reconstruction. Experiments on two challenging public datasets with 25% and 10% of full-dose CT image quality demonstrate that our method surpasses state-of-the-art approaches in denoising performance at low computational cost, highlighting its potential for real-time medical applications.
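The multi-order gated aggregation is only sketched in the abstract; its simplest form is a learned sigmoid gate that mixes a local (convolutional) branch with a global (attention-derived) branch per position. The module below is a guess at that basic building block under those assumptions, not AMFA-Net's actual design:

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Minimal gated fusion of two feature branches.

    A 1x1 convolution over the concatenated branches predicts a per-pixel,
    per-channel gate that interpolates between local and global features.
    A multi-order variant would stack several such interactions.
    """
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, local_feat, global_feat):
        g = torch.sigmoid(self.gate(torch.cat([local_feat, global_feat], dim=1)))
        return g * local_feat + (1.0 - g) * global_feat
```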
