
Automated quantification of lung pathology on micro-CT in diverse disease models using deep learning.

Belmans F, Seldeslachts L, Vanhoffelen E, Tielemans B, Vos W, Maes F, Vande Velde G

PubMed · Aug 30 2025
Micro-CT significantly enhances the efficiency, predictive power and translatability of animal studies to human clinical trials for respiratory diseases. However, the analysis of large micro-CT datasets remains a bottleneck. We developed a generic deep learning (DL)-based lung segmentation model using longitudinal micro-CT images from studies of Down syndrome, viral and fungal infections, and exacerbation, with variable lung pathology and degree of disease burden. Two-dimensional (2D) models were trained with cross-validation on axial, coronal and sagittal slices. Predictions from these single-orientation models were combined into a 2.5D model using majority voting or probability averaging. The generalisability of these models to other studies (COVID-19, lung inflammation and fibrosis), scanner configurations and rodent species (rats, hamsters, degus) was tested, including on a publicly available database. On the internal validation data, the highest mean Dice Similarity Coefficient (DSC) was found for the 2.5D probability averaging model (0.953 ± 0.023), which further improved the output of the 2D models by removing erroneous voxels outside the lung region. The models demonstrated good generalisability, with average DSC values ranging from 0.89 to 0.94 across different lung pathologies and scanner configurations. Biomarkers extracted from manual and automated segmentations were in good agreement, demonstrating that our proposed solution effectively monitors longitudinal lung pathology development and response to treatment in real-world preclinical studies. Our DL-based pipeline for lung pathology quantification offers efficient analysis of large micro-CT datasets, is widely applicable across rodent disease models and acquisition protocols, and enables real-time insights into therapy efficacy. This research was supported by the Service Public de Wallonie (AEROVID grant to FB, WV) and the Flemish Research Foundation (FWO, doctoral mandate 1SF2224N to EV and 1186121N/1186123N to LS, infrastructure grant I006524N to GVV).
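The 2.5D fusion step can be sketched in a few lines of NumPy. The function and array names below, and the 0.5 probability threshold, are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def fuse_probability_averaging(prob_axial, prob_coronal, prob_sagittal, threshold=0.5):
    """Average per-orientation lung probability maps, then threshold.

    Each input is a float array of shape (Z, Y, X) with values in [0, 1],
    already resampled back to the common 3D volume grid.
    """
    mean_prob = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return mean_prob >= threshold  # boolean lung mask

def fuse_majority_voting(mask_axial, mask_coronal, mask_sagittal):
    """Keep voxels labelled as lung by at least two of the three 2D models."""
    votes = (mask_axial.astype(np.uint8)
             + mask_coronal.astype(np.uint8)
             + mask_sagittal.astype(np.uint8))
    return votes >= 2
```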

A Multimodal and Multi-centric Head and Neck Cancer Dataset for Tumor Segmentation and Outcome Prediction

Numan Saeed, Salma Hassan, Shahad Hardan, Ahmed Aly, Darya Taratynova, Umair Nawaz, Ufaq Khan, Muhammad Ridzuan, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Thomas Eugene, Raphaël Metz, Mélanie Dore, Gregory Delpon, Vijay Ram Kumar Papineni, Kareem Wahid, Cem Dede, Alaa Mohamed Shawky Ali, Carlos Sjogreen, Mohamed Naser, Clifton D. Fuller, Valentin Oreiller, Mario Jreige, John O. Prior, Catherine Cheze Le Rest, Olena Tankyevych, Pierre Decazes, Su Ruan, Stephanie Tanadini-Lang, Martin Vallières, Hesham Elhalawani, Ronan Abgral, Romain Floch, Kevin Kerleguer, Ulrike Schick, Maelle Mauguen, Arman Rahmim, Mohammad Yaqub

arXiv preprint · Aug 30 2025
We describe a publicly available multimodal dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies for head and neck cancer research. The dataset includes 1123 FDG-PET/CT studies from patients with histologically confirmed head and neck cancer, acquired from 10 international medical centers. All examinations consisted of co-registered PET/CT scans with varying acquisition protocols, reflecting real-world clinical diversity across institutions. Primary gross tumor volumes (GTVp) and involved lymph nodes (GTVn) were manually segmented by experienced radiation oncologists and radiologists following standardized guidelines and quality control measures. We provide anonymized NIfTI files of all studies, along with expert-annotated segmentation masks, radiotherapy dose distribution for a subset of patients, and comprehensive clinical metadata. This metadata includes TNM staging, HPV status, demographics (age and gender), long-term follow-up outcomes, survival times, censoring indicators, and treatment information. We demonstrate how this dataset can be used for three key clinical tasks: automated tumor segmentation, recurrence-free survival prediction, and HPV status classification, providing benchmark results using state-of-the-art deep learning models, including UNet, SegResNet, and multimodal prognostic frameworks.
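For readers who want to work with the release, a minimal sketch of loading one study and computing tumour volumes with nibabel is shown below; the file names and label values are hypothetical and may not match the dataset's actual conventions.

```python
import nibabel as nib
import numpy as np

# Hypothetical file names; the released dataset's naming convention may differ.
ct = nib.load("CHUM-001__CT.nii.gz").get_fdata()    # Hounsfield units
pet = nib.load("CHUM-001__PT.nii.gz").get_fdata()   # standardised uptake values
seg = nib.load("CHUM-001__gtv.nii.gz")              # assumed labels: 0 = background, 1 = GTVp, 2 = GTVn
mask = seg.get_fdata()

# Per-structure tumour volume in millilitres from the voxel spacing in the header.
voxel_ml = np.prod(seg.header.get_zooms()[:3]) / 1000.0
print("GTVp volume (ml):", (mask == 1).sum() * voxel_ml)
print("GTVn volume (ml):", (mask == 2).sum() * voxel_ml)
```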

External validation of deep learning-derived 18F-FDG PET/CT delta biomarkers for loco-regional control in head and neck cancer.

Kovacs DG, Aznar M, Van Herk M, Mohamed I, Price J, Ladefoged CN, Fischer BM, Andersen FL, McPartlin A, Osorio EMV, Abravan A

PubMed · Aug 30 2025
Delta biomarkers that reflect changes in tumour burden over time can support personalised follow-up in head and neck cancer. However, their clinical use can be limited by the need for manual image segmentation. This study externally evaluates a deep learning model for automatic determination of volume change from serial 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans to stratify patients by loco-regional outcome. An externally developed deep learning algorithm for tumour segmentation was applied to pre- and post-radiotherapy (RT, with or without concomitant chemotherapy) PET/CT scans of 50 consecutive head and neck cancer patients from The Christie NHS Foundation Trust, UK. The model, originally trained on pre-treatment scans from a different institution, was deployed to derive tumour volumes at both time points. The AI-derived change in PET-based gross tumour volume (ΔPET-GTV) was calculated for each patient. Kaplan-Meier analysis assessed loco-regional control based on ΔPET-GTV, dichotomised at the cohort median. In a separate secondary analysis confined to the pre-treatment scans, a radiation oncologist qualitatively evaluated the AI-generated PET-GTV contours. Patients with higher ΔPET-GTV (i.e. greater tumour shrinkage) had significantly improved loco-regional control (log-rank p = 0.02). At 2 years, loco-regional control was 94.1% (95% CI: 83.6-100%) in the higher ΔPET-GTV group vs. 53.6% (95% CI: 32.2-89.1%) in the lower group. Only one of nine failures occurred in the high ΔPET-GTV group. Clinician review found the AI volumes acceptable for planning in 78% of cases. In two cases, the algorithm identified oropharyngeal primaries on pre-treatment PET/CT before clinical identification. Deep learning-derived ΔPET-GTV may support clinically meaningful assessment of post-treatment disease status and risk stratification, offering a scalable alternative to manual segmentation in PET/CT follow-up.
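A minimal sketch of the ΔPET-GTV stratification and Kaplan-Meier comparison is shown below using the lifelines library. The CSV and column names are hypothetical, and Δ is assumed to be pre-treatment minus post-treatment volume so that larger values mean greater shrinkage.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: AI-derived PET-GTV volumes (ml) before and
# after (chemo)radiotherapy, follow-up time, and loco-regional failure flag.
df = pd.read_csv("deltapet_cohort.csv")
df["delta_gtv"] = df["gtv_pre_ml"] - df["gtv_post_ml"]   # positive = tumour shrinkage
high = df["delta_gtv"] >= df["delta_gtv"].median()       # dichotomise at the cohort median

km = KaplanMeierFitter()
for label, grp in [("high dPET-GTV", df[high]), ("low dPET-GTV", df[~high])]:
    km.fit(grp["followup_months"], event_observed=grp["lr_failure"], label=label)
    print(label, "2-year loco-regional control:", float(km.predict(24)))

result = logrank_test(
    df[high]["followup_months"], df[~high]["followup_months"],
    df[high]["lr_failure"], df[~high]["lr_failure"],
)
print("log-rank p =", result.p_value)
```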

Interpretable Auto Window setting for deep-learning-based CT analysis.

Zhang Y, Chen M, Zhang Z

PubMed · Aug 30 2025
From its early days of popularization to the present, window setting has been an indispensable part of the Computed Tomography (CT) analysis process. Although research has investigated the capability of CT multi-window fusion to enhance neural networks, domain-invariant, intuitively interpretable methodologies for automatic window setting remain scarce. In this work, we propose a plug-and-play module derived from the Tanh activation function. This module allows medical imaging neural network backbones to be deployed without manual CT window configuration, and its domain-invariant design makes it possible to observe the preference decisions of the adaptive mechanism from a clinically intuitive perspective. We confirm the effectiveness of the proposed method on multiple open-source datasets: it allows direct training without manual window setting and yields improvements of 54%-127% in Dice, 14%-32% in Recall and 94%-200% in Precision on hard segmentation targets. Experimental results obtained in the NVIDIA NGC environment demonstrate that the module facilitates efficient deployment of AI-powered medical imaging tasks. The proposed method enables automatic determination of CT window settings for specific downstream tasks in the development and deployment of mainstream medical imaging neural networks, with the potential to reduce associated deployment costs.
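The core idea of a Tanh-derived, learnable window can be illustrated with a short PyTorch sketch; this is a generic re-implementation of the concept with assumed initial window values, not the authors' module.

```python
import torch
import torch.nn as nn

class TanhAutoWindow(nn.Module):
    """Sketch of a learnable, Tanh-based CT windowing layer.

    The window centre and width are trainable parameters (initialised here to a
    generic soft-tissue window), so the network can adapt the intensity mapping
    to its downstream task instead of relying on a manually chosen window.
    Illustrative only; not the authors' implementation.
    """

    def __init__(self, center_hu: float = 40.0, width_hu: float = 400.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(center_hu))
        self.width = nn.Parameter(torch.tensor(width_hu))

    def forward(self, hu: torch.Tensor) -> torch.Tensor:
        # Smoothly squash raw HU values into (-1, 1) around the learned window.
        return torch.tanh(2.0 * (hu - self.center) / self.width)

# Usage: prepend to any backbone so raw HU volumes can be fed in directly.
# model = nn.Sequential(TanhAutoWindow(), backbone)
```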

A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging

Peirong Liu, Oula Puonti, Xiaoling Hu, Karthik Gopinath, Annabel Sorby-Adams, Daniel C. Alexander, W. Taylor Kimberly, Juan E. Iglesias

arXiv preprint · Aug 30 2025
Recent learning-based approaches have made astonishing advances in calibrated medical imaging such as computed tomography (CT), yet they struggle to generalize in uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. Here we introduce BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. With the proposed "mild-to-severe" intra-subject generation and "real-synth" mix-up training strategy, BrainFM is resilient to the appearance of acquired images (e.g., modality, contrast, deformation, resolution, artifacts), and can be directly applied to five fundamental brain imaging tasks: image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance estimation, bias field estimation, and registration. We evaluate the efficacy of BrainFM on eleven public datasets and demonstrate its robustness and effectiveness across all tasks and input modalities. Code is available at https://github.com/jhuldr/BrainFM.

Advancing Positron Emission Tomography Image Quantification: Artificial Intelligence-Driven Methods, Clinical Challenges, and Emerging Opportunities in Long-Axial Field-of-View Positron Emission Tomography/Computed Tomography Imaging.

Yousefirizi F, Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I

PubMed · Aug 29 2025
Positron emission tomography/computed tomography (PET/CT) imaging plays a pivotal role in oncology, aiding tumor metabolism assessment, disease staging, and therapy response evaluation. Traditionally, semi-quantitative metrics such as SUVmax have been extensively used, though these methods face limitations in reproducibility and predictive capability. Recent advancements in artificial intelligence (AI), particularly deep learning, have revolutionized PET imaging, significantly enhancing image quantification accuracy and biomarker extraction capabilities, thereby enabling more precise clinical decision-making.
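For reference, the body-weight-normalised SUV underlying SUVmax is tissue activity concentration divided by injected dose per unit body weight, assuming unit tissue density; a minimal sketch, with decay correction omitted for brevity, follows.

```python
def suv_bw(concentration_bq_per_ml: float, injected_dose_bq: float, body_weight_g: float) -> float:
    """Body-weight-normalised SUV, assuming tissue density of ~1 g/mL.

    SUVmax is this value evaluated at the hottest voxel of the lesion.
    Decay correction of the injected dose to scan time is omitted here.
    """
    return concentration_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: 8 kBq/mL uptake, 370 MBq injected, 70 kg patient -> SUV ~ 1.5
print(suv_bw(8_000.0, 370e6, 70_000.0))
```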

Enhanced glioma semantic segmentation using U-net and pre-trained backbone U-net architectures.

Khorasani A

PubMed · Aug 29 2025
Gliomas are known to have different sub-regions within the tumor, including the edema, necrotic, and active tumor regions. Segmenting these regions is very important for glioma treatment decisions and management. This paper demonstrates the application of U-Net and pre-trained backbone U-Net networks to glioma semantic segmentation, utilizing different magnetic resonance imaging (MRI) image weightings. The data used in this study for network training, validation, and testing come from the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge. We applied the U-Net and different pre-trained backbone U-Nets for the semantic segmentation of glioma regions. The ResNet, Inception, and VGG networks, pre-trained on the ImageNet dataset, were used as backbones in the U-Net architecture. Accuracy (ACC) and Intersection over Union (IoU) were employed to assess the performance of the networks. The most prominent finding to emerge from this study is that a ResNet-U-Net trained on T1 post-contrast enhanced (T1Gd) images has the highest ACC and IoU for semantic segmentation of the necrotic and active tumor regions in glioma. A ResNet-U-Net trained on T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) images is also a suitable combination for edema segmentation in glioma. Our study further validates that the proposed framework's architecture and modules are scientifically grounded and practical, enabling the extraction and aggregation of valuable semantic information to enhance glioma semantic segmentation capability. It demonstrates how useful ResNet-U-Net can be for physicians to extract glioma regions automatically.
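One common way to assemble a U-Net with an ImageNet-pretrained ResNet backbone is the segmentation_models_pytorch library; the paper does not state which implementation was used, so the sketch below is only an illustrative equivalent.

```python
import torch
import segmentation_models_pytorch as smp

# ResNet-backbone U-Net with ImageNet encoder weights. Four output classes are
# assumed here: background, edema, necrotic core, and active tumour.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=1,        # a single MRI weighting, e.g. T1Gd or T2-FLAIR
    classes=4,
)

# BraTS axial slices (240x240) padded to 256x256 so the 5-level encoder divides evenly.
logits = model(torch.randn(2, 1, 256, 256))
print(logits.shape)       # torch.Size([2, 4, 256, 256])
```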

Masked Autoencoder Pretraining and BiXLSTM ResNet Architecture for PET/CT Tumor Segmentation

Moona Mazher, Steven A Niederer, Abdul Qayyum

arXiv preprint · Aug 29 2025
The accurate segmentation of lesions in whole-body PET/CT imaging is essential for tumor characterization, treatment planning, and response assessment, yet current manual workflows are labor-intensive and prone to inter-observer variability. Automated deep learning methods have shown promise but often remain limited by modality specificity, isolated time points, or insufficient integration of expert knowledge. To address these challenges, we present a two-stage lesion segmentation framework developed for the fourth AutoPET Challenge. In the first stage, a Masked Autoencoder (MAE) is employed for self-supervised pretraining on unlabeled PET/CT and longitudinal CT scans, enabling the extraction of robust modality-specific representations without manual annotations. In the second stage, the pretrained encoder is fine-tuned with a bidirectional XLSTM architecture augmented with ResNet blocks and a convolutional decoder. By jointly leveraging anatomical (CT) and functional (PET) information as complementary input channels, the model achieves improved temporal and spatial feature integration. Evaluation on the AutoPET Task 1 dataset demonstrates that self-supervised pretraining significantly enhances segmentation accuracy, achieving a Dice score of 0.582 compared to 0.543 without pretraining. These findings highlight the potential of combining self-supervised learning with multimodal fusion for robust and generalizable PET/CT lesion segmentation. Code will be available at https://github.com/RespectKnowledge/AutoPet_2025_BxLSTM_UNET_Segmentation
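The reported comparison (Dice 0.582 vs. 0.543) rests on the Dice similarity coefficient; a minimal reference implementation for binary lesion masks is sketched below.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between a predicted and a reference binary lesion mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```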

Sex-Specific Prognostic Value of Automated Epicardial Adipose Tissue Quantification on Serial Lung Cancer Screening Chest CT.

Brendel JM, Mayrhofer T, Hadzic I, Norton E, Langenbach IL, Langenbach MC, Jung M, Raghu VK, Nikolaou K, Douglas PS, Lu MT, Aerts HJWL, Foldyna B

PubMed · Aug 29 2025
Epicardial adipose tissue (EAT) is a metabolically active fat depot associated with coronary atherosclerosis and cardiovascular (CV) risk. While EAT is a known prognostic marker in lung cancer screening, its sex-specific prognostic value remains unclear. This study investigated sex differences in the prognostic utility of serial EAT measurements on low-dose chest CTs. We analyzed baseline and two-year changes in EAT volume and density using a validated automated deep-learning algorithm in 24,008 heavy-smoking participants from the National Lung Screening Trial (NLST). Sex-stratified multivariable Cox models, adjusted for CV risk factors, BMI, and coronary artery calcium (CAC), assessed associations between EAT and all-cause and CV mortality (median follow-up 12.3 years [IQR: 11.9-12.8], 4,668 [19.4%] all-cause deaths, 1,083 [4.5%] CV deaths). Women (n = 9,841; 41%) were younger, with fewer CV risk factors, lower BMI, fewer pack-years, and lower CAC than men (all P < 0.001). Baseline EAT was associated with similar all-cause and CV mortality risk in both sexes (max. aHR women: 1.70; 95%-CI: 1.13-2.55; men: 1.83; 95%-CI: 1.40-2.40, P-interaction=0.986). However, two-year EAT changes predicted CV death only in women (aHR: 1.82; 95%-CI: 1.37-2.49, P < 0.001), and showed a stronger association with all-cause mortality in women (aHR: 1.52; 95%-CI: 1.31-1.77) than in men (aHR: 1.26; 95%-CI: 1.13-1.40, P-interaction=0.041). In this large lung cancer screening cohort, serial EAT changes independently predicted CV mortality in women and were more strongly associated with all-cause mortality in women than in men. These findings support routine EAT quantification on chest CT for improved, sex-specific cardiovascular risk stratification.
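One way to probe a sex difference of this kind is an interaction term in a pooled Cox model, sketched below with the lifelines library; the study itself fits sex-stratified multivariable models with a fuller covariate set, so the column names and simplified adjustment here are illustrative assumptions only.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical column names; the NLST analysis adjusts for more covariates
# (full CV risk factor panel, pack-years, etc.) than shown here.
df = pd.read_csv("nlst_eat.csv")
df["eat_change_x_male"] = df["eat_volume_change"] * df["male"]   # sex interaction term

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "cv_death", "eat_volume_change", "male",
        "eat_change_x_male", "age", "bmi", "cac_score"]],
    duration_col="followup_years",
    event_col="cv_death",
)
cph.print_summary()   # the interaction row plays the role of a P-interaction
```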

Age- and sex-related changes in proximal humeral volumetric BMD assessed via chest CT with a deep learning-based segmentation model.

Li S, Tang C, Zhang H, Ma C, Weng Y, Chen B, Xu S, Xu H, Giunchiglia F, Lu WW, Guo D, Qin Y

PubMed · Aug 29 2025
Accurate assessment of proximal humeral volumetric bone mineral density (vBMD) is essential for surgical planning in shoulder pathology. However, age-related changes in proximal humeral vBMD remain poorly characterized. This study developed a deep learning-based method to assess proximal humeral vBMD and identified sex-specific age-related changes. It also demonstrated that lumbar spine vBMD is not a valid substitute. This study aimed to develop a deep learning-based method for proximal humeral vBMD assessment and to investigate its age- and sex-related changes, as well as its correlation with lumbar spine vBMD. An nnU-Net-based deep learning pipeline was developed to automatically segment the proximal humerus on chest CT scans from 2,675 adults. Segmentation performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), 95th-percentile Hausdorff Distance (95HD), and Average Symmetric Surface Distance (ASSD). Phantom-calibrated vBMD (total, trabecular, and BMAT-corrected trabecular) was quantified for each subject. Age-related distributions were modeled with generalized additive models for location, scale, and shape (GAMLSS) to generate sex-specific P3-P97 percentile curves. Lumbar spine vBMD was measured in 1,460 individuals for correlation analysis. Segmentation was highly accurate (DSC 98.42 ± 0.20%; IoU 96.89 ± 0.42%; 95HD 1.12 ± 0.37 mm; ASSD 0.94 ± 0.31 mm). In males, total, trabecular, and BMAT-corrected trabecular vBMD declined approximately linearly from early adulthood. In females, a pronounced inflection occurred at ~40-45 years: values were stable or slightly rising beforehand, then all percentiles dropped steeply and synchronously, indicating accelerated menopause-related loss. In females, vBMD declined earlier in the lumbar spine than in the proximal humerus. Correlations between proximal humeral and lumbar spine vBMD were low to moderate overall and weakened after age 50. We present a novel, automated method for quantifying proximal humeral vBMD from chest CT, revealing distinct, sex-specific aging patterns. Males' humeral vBMD declines linearly, while females experience an earlier, accelerated loss. Moreover, peak humeral vBMD in females occurs later than that of the lumbar spine, and spinal measurements cannot reliably substitute for humeral vBMD in clinical assessment.
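The surface-based metrics (95HD and ASSD) can be computed from distance transforms of the mask surfaces; the paper does not specify its implementation, so the sketch below shows one common formulation using SciPy.

```python
import numpy as np
from scipy import ndimage

def surface_distances(a: np.ndarray, b: np.ndarray, spacing) -> np.ndarray:
    """Distances (mm) from the surface voxels of mask `a` to the surface of mask `b`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_assd(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and average symmetric surface distance."""
    d_pt = surface_distances(pred, truth, spacing)
    d_tp = surface_distances(truth, pred, spacing)
    hd95 = max(np.percentile(d_pt, 95), np.percentile(d_tp, 95))
    assd = (d_pt.sum() + d_tp.sum()) / (len(d_pt) + len(d_tp))
    return hd95, assd
```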