Lai M, Yao J, Zhou Y, Zhou L, Jiang T, Sui L, Tang J, Zhu X, Huang J, Wang Y, Liu J, Xu D

PubMed | Aug 30, 2025
This study explores the feasibility of employing generative adversarial networks (GANs) to generate synthetic contrast-enhanced ultrasound (CEUS) images from grayscale ultrasound images of patients with thyroid nodules, without the need for ultrasound contrast agent injection. Patients who underwent preoperative thyroid CEUS examinations between January 2020 and July 2022 were retrospectively included. A cycle-GAN framework integrating paired and unpaired learning modules was employed to develop the non-invasive image generation process. Synthetic CEUS images were generated for three phases: pre-arterial, plateau, and venous. The evaluation included quantitative similarity metrics, classification performance, and qualitative assessment by radiologists. CEUS videos of 360 thyroid nodules from 314 patients (45 years ± 12 [SD]; 272 women) in the internal dataset and 202 thyroid nodules from 183 patients (46 years ± 13 [SD]; 148 women) in the external dataset were included. In the external testing dataset, quantitative analysis revealed a significant degree of similarity between real and synthetic CEUS images (structural similarity index, 0.89 ± 0.04; peak signal-to-noise ratio, 28.17 ± 2.42). Radiologists deemed 126 of 132 (95%) synthetic CEUS images diagnostically useful. The accuracy of radiologists in distinguishing between real and synthetic images was 55.6% (95% CI: 0.49, 0.63), with an AUC of 61.0% (95% CI: 0.65, 0.68). No statistically significant difference (p > 0.05) was observed when radiologists assessed peak intensity and enhancement patterns on real versus synthetic CEUS. Both quantitative analysis and radiologist evaluations showed that synthetic CEUS images generated by generative adversarial networks were similar to real CEUS images. Question: Is it feasible to generate synthetic thyroid contrast-enhanced ultrasound images using generative adversarial networks without ultrasound contrast agent injection? Findings: Compared to real contrast-enhanced ultrasound images, synthetic contrast-enhanced ultrasound images exhibited high similarity and image quality. Clinical relevance: This non-invasive and intelligent transformation may reduce the requirement for ultrasound contrast agents in certain cases, particularly where contrast agent administration is contraindicated, such as in patients with allergies or poor tolerance, or where access to resources is limited.
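As a minimal sketch of the similarity metrics reported above (SSIM and PSNR), the following could score one real/synthetic frame pair; the file names, grayscale loading step, and use of scikit-image are assumptions for illustration, not the authors' pipeline.

```python
# Hypothetical sketch: quantifying real-vs-synthetic CEUS similarity with SSIM and PSNR.
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def similarity_metrics(real_path: str, synth_path: str) -> tuple[float, float]:
    """Return (SSIM, PSNR) for one real/synthetic grayscale CEUS frame pair."""
    real = imread(real_path, as_gray=True).astype(np.float64)
    synth = imread(synth_path, as_gray=True).astype(np.float64)
    data_range = real.max() - real.min()
    ssim = structural_similarity(real, synth, data_range=data_range)
    psnr = peak_signal_noise_ratio(real, synth, data_range=data_range)
    return ssim, psnr

# Example usage over a (hypothetical) list of paired frames:
# pairs = [("real_001.png", "synth_001.png"), ...]
# scores = [similarity_metrics(r, s) for r, s in pairs]
# print("mean SSIM:", np.mean([s for s, _ in scores]))
```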

Quattrone A, Franzmeier N, Huppertz HJ, Seneca N, Petzold GC, Spottke A, Levin J, Prudlo J, Düzel E, Höglinger GU

PubMed | Aug 30, 2025
Clinical progression rate is the typical primary endpoint measure in progressive supranuclear palsy (PSP) clinical trials. This longitudinal multicohort study investigated whether baseline clinical severity and regional brain atrophy could predict clinical progression in PSP-Richardson's syndrome (PSP-RS). PSP-RS patients (n = 309) from the placebo arms of clinical trials (NCT03068468, NCT01110720, NCT02985879, NCT01049399) and the DescribePSP cohort were included. We investigated associations of baseline clinical and volumetric magnetic resonance imaging (MRI) data with 1-year longitudinal PSP rating scale (PSPRS) change. Machine learning (ML) models were tested to predict individual clinical trajectories. PSP-RS patients showed a mean PSPRS score increase of 10.3 points per year. The frontal lobe volume showed the strongest association with subsequent clinical progression (β: -0.34, P < 0.001). However, ML models did not accurately predict individual progression rates (R² < 0.15). Baseline clinical severity and brain atrophy could not predict individual clinical progression, suggesting no need for MRI-based stratification of patients in future PSP trials. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of the International Parkinson and Movement Disorder Society.
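A minimal sketch of the kind of test reported here (cross-validated R² for predicting 1-year PSPRS change from baseline features) might look as follows; the feature matrix, model choice (a random forest), and placeholder data are assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: cross-validated prediction of 1-year PSPRS change from baseline
# clinical scores and regional MRI volumes. Feature content and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: (n_patients, n_features) baseline PSPRS items + regional volumes (e.g. frontal lobe)
# y: (n_patients,) observed PSPRS change over 1 year  (placeholders below)
rng = np.random.default_rng(0)
X = rng.normal(size=(309, 12))
y = rng.normal(loc=10.3, scale=5.0, size=309)

model = RandomForestRegressor(n_estimators=500, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2_scores.mean():.2f}")  # abstract reports R^2 < 0.15
```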

Kovacs DG, Aznar M, Van Herk M, Mohamed I, Price J, Ladefoged CN, Fischer BM, Andersen FL, McPartlin A, Osorio EMV, Abravan A

PubMed | Aug 30, 2025
Delta biomarkers that reflect changes in tumour burden over time can support personalised follow-up in head and neck cancer. However, their clinical use can be limited by the need for manual image segmentation. This study externally evaluates a deep learning model for automatic determination of volume change from serial 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans to stratify patients by loco-regional outcome. Patients/materials and methods: An externally developed deep learning algorithm for tumour segmentation was applied to pre- and post-radiotherapy (RT, with or without concomitant chemotherapy) PET/CT scans of 50 consecutive head and neck cancer patients from The Christie NHS Foundation Trust, UK. The model, originally trained on pre-treatment scans from a different institution, was deployed to derive tumour volumes at both time points. The AI-derived change in tumour volume (ΔPET-GTV, where GTV denotes gross tumour volume) was calculated for each patient. Kaplan-Meier analysis assessed loco-regional control based on ΔPET-GTV, dichotomised at the cohort median. In a separate secondary analysis confined to the pre-treatment scans, a radiation oncologist qualitatively evaluated the AI-generated PET-GTV contours. Patients with higher ΔPET-GTV (i.e. greater tumour shrinkage) had significantly improved loco-regional control (log-rank p = 0.02). At 2 years, control was 94.1% (95% CI: 83.6-100%) vs. 53.6% (95% CI: 32.2-89.1%). Only one of nine failures occurred in the high ΔPET-GTV group. Clinician review found AI volumes acceptable for planning in 78% of cases. In two cases, the algorithm identified oropharyngeal primaries on pre-treatment PET/CT before clinical identification. Deep learning-derived ΔPET-GTV may support clinically meaningful assessment of post-treatment disease status and risk stratification, offering a scalable alternative to manual segmentation in PET/CT follow-up.
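A minimal sketch of the survival analysis step (median dichotomisation of ΔPET-GTV followed by Kaplan-Meier curves and a log-rank test) is shown below; the DataFrame, its column names, and the use of the lifelines library are assumptions, not the authors' analysis code.

```python
# Hypothetical sketch: stratifying patients by AI-derived ΔPET-GTV and comparing
# loco-regional control between the two groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# df columns (assumed): "delta_pet_gtv", "time_months", "lr_failure" (1 = event observed)
df = pd.read_csv("delta_pet_gtv_cohort.csv")  # placeholder file name

median = df["delta_pet_gtv"].median()
high = df[df["delta_pet_gtv"] >= median]   # greater tumour shrinkage
low = df[df["delta_pet_gtv"] < median]

km = KaplanMeierFitter()
km.fit(high["time_months"], event_observed=high["lr_failure"], label="high delta PET-GTV")
km.plot_survival_function()

result = logrank_test(high["time_months"], low["time_months"],
                      event_observed_A=high["lr_failure"],
                      event_observed_B=low["lr_failure"])
print("log-rank p =", result.p_value)  # abstract reports p = 0.02
```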

Zheng W, Lu A, Tang X, Chen L

PubMed | Aug 30, 2025
This study aims to develop a noninvasive preoperative predictive model utilizing ultrasound radiomics combined with clinical characteristics to differentiate uterine sarcoma from leiomyoma. The study included 212 patients with uterine mesenchymal lesions (102 sarcomas and 110 leiomyomas). Clinical characteristics were systematically selected through both univariate and multivariate logistic regression analyses, and a clinical model was constructed from the selected characteristics. Radiomics features were extracted from transvaginal ultrasound images, and 6 machine learning algorithms were used to construct radiomics models. A clinical radiomics nomogram was then developed by integrating the clinical characteristics with the radiomics signature. The effectiveness of these models in predicting uterine sarcoma was evaluated, and the area under the curve (AUC) was used to compare their predictive efficacy. The AUC of the clinical model was 0.835 (95% confidence interval [CI]: 0.761-0.883) and 0.791 (95% CI: 0.652-0.869) in the training and testing sets, respectively. The logistic regression model performed best among the radiomics models, with AUC values of 0.878 (95% CI: 0.811-0.918) and 0.818 (95% CI: 0.681-0.895) in the training and testing sets, respectively. The clinical radiomics nomogram performed well in differentiation, with AUC values of 0.955 (95% CI: 0.911-0.973) and 0.882 (95% CI: 0.767-0.936) in the training and testing sets, respectively. The clinical radiomics nomogram can provide more comprehensive and personalized diagnostic information, which is highly important for selecting treatment strategies and ultimately improving patient outcomes in the management of uterine mesenchymal tumors.
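As a minimal sketch of the fusion idea behind such a nomogram (concatenating selected clinical variables with radiomics features, fitting a logistic regression, and reporting test-set AUC), the following could apply; the placeholder arrays, feature counts, and train/test split are assumptions, not the study's data or code.

```python
# Hypothetical sketch: clinical + radiomics fusion with logistic regression and AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: clinical variables stacked with radiomics features; y: 1 = sarcoma, 0 = leiomyoma
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(212, 4)), rng.normal(size=(212, 20))])  # placeholders
y = rng.integers(0, 2, size=212)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```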

Belmans F, Seldeslachts L, Vanhoffelen E, Tielemans B, Vos W, Maes F, Vande Velde G

PubMed | Aug 30, 2025
Micro-CT significantly enhances the efficiency, predictive power and translatability of animal studies to human clinical trials for respiratory diseases. However, the analysis of large micro-CT datasets remains a bottleneck. We developed a generic deep learning (DL)-based lung segmentation model using longitudinal micro-CT images from studies of Down syndrome, viral and fungal infections, and exacerbation, covering variable lung pathology and degrees of disease burden. 2D models were trained with cross-validation on axial, coronal and sagittal slices. Predictions from these single-orientation models were combined to create a 2.5D model using majority voting or probability averaging. The generalisability of these models to other studies (COVID-19, lung inflammation and fibrosis), scanner configurations and rodent species (rats, hamsters, degus) was tested, including on a publicly available database. On the internal validation data, the highest mean Dice Similarity Coefficient (DSC) was found for the 2.5D probability-averaging model (0.953 ± 0.023), which further improved the output of the 2D models by removing erroneous voxels outside the lung region. The models demonstrated good generalisability, with average DSC values ranging from 0.89 to 0.94 across different lung pathologies and scanner configurations. The biomarkers extracted from manual and automated segmentations were in good agreement, showing that our proposed solution effectively monitors longitudinal lung pathology development and response to treatment in real-world preclinical studies. Our DL-based pipeline for lung pathology quantification offers efficient analysis of large micro-CT datasets, is widely applicable across rodent disease models and acquisition protocols, and enables real-time insights into therapy efficacy. This research was supported by the Service Public de Wallonie (AEROVID grant to FB, WV) and The Flemish Research Foundation (FWO, doctoral mandates 1SF2224N to EV and 1186121N/1186123N to LS, infrastructure grant I006524N to GVV).
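A minimal sketch of the 2.5D fusion step described above (combining axial, coronal and sagittal probability volumes by probability averaging or majority voting) could look as follows; the array names and threshold are assumptions, with each `prob_*` volume being the per-voxel lung probability from one 2D model re-stacked onto the original 3D grid.

```python
# Hypothetical sketch: fusing orientation-specific 2D predictions into a 2.5D segmentation.
import numpy as np

def fuse_probability_averaging(prob_ax, prob_cor, prob_sag, threshold=0.5):
    """Average the three orientation-wise probability maps, then threshold."""
    mean_prob = (prob_ax + prob_cor + prob_sag) / 3.0
    return mean_prob > threshold

def fuse_majority_voting(prob_ax, prob_cor, prob_sag, threshold=0.5):
    """Binarise each orientation first, then keep voxels labelled lung by >= 2 of 3 models."""
    votes = ((prob_ax > threshold).astype(int) + (prob_cor > threshold).astype(int)
             + (prob_sag > threshold).astype(int))
    return votes >= 2

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```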

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed | Aug 30, 2025
The research aims to evaluate the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperatively predicting muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, who were divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, a habitat model, a ResNet-18 model, a ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared to the other single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. The incorporation of DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and demonstrated strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model that integrates habitat radiomics with ViT DL holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
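A minimal sketch of the habitat-generation step (K-means clustering of intratumoural voxels across the three virtual monochromatic images) is shown below; the choice of voxel features, the number of clusters, and the function interface are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: K-means habitat sub-regions from 40/70/100 keV VMIs.
import numpy as np
from sklearn.cluster import KMeans

def habitat_subregions(vmi_40, vmi_70, vmi_100, tumor_mask, n_habitats=3, seed=0):
    """Return a label volume with habitat IDs 1..n_habitats inside the tumour mask."""
    # One feature vector per intratumoural voxel: its value in each VMI
    voxels = np.stack([vmi_40[tumor_mask], vmi_70[tumor_mask], vmi_100[tumor_mask]], axis=1)
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed).fit_predict(voxels)
    habitat_map = np.zeros(tumor_mask.shape, dtype=np.int16)
    habitat_map[tumor_mask] = labels + 1  # 0 remains background
    return habitat_map
```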

Nabil HR, Ahmed I, Das A, Mridha MF, Kabir MM, Aung Z

PubMed | Aug 30, 2025
This study introduces MSFE-GallNet-X, a domain-adaptive deep learning model utilizing multi-scale feature extraction (MSFE) to improve the classification accuracy of gallbladder diseases from grayscale ultrasound images, while integrating explainable artificial intelligence (XAI) methods to enhance clinical interpretability. We developed a convolutional neural network-based architecture that automatically learns multi-scale features from a dataset comprising 10,692 high-resolution ultrasound images from 1,782 patients, covering nine gallbladder disease classes, including gallstones, cholecystitis, and carcinoma. The model incorporated Gradient-Weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME) to provide visual interpretability of diagnostic predictions. Model performance was evaluated using standard metrics, including accuracy and F1 score. MSFE-GallNet-X achieved a classification accuracy of 99.63% and an F1 score of 99.50%, outperforming state-of-the-art models including VGG-19 (98.89%) and DenseNet121 (91.81%) while maintaining greater parameter efficiency, with only 1.91 M parameters, in gallbladder disease classification. Visualization through Grad-CAM and LIME highlighted critical image regions influencing model predictions, supporting explainability for clinical use. MSFE-GallNet-X demonstrates strong performance on a controlled and balanced dataset, suggesting its potential as an AI-assisted tool for clinical decision-making in gallbladder disease management.
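As a minimal sketch of the Grad-CAM interpretability step named above, the following uses a stand-in torchvision CNN and PyTorch hooks; the model, target layer, and nine-class head are assumptions for illustration, not the MSFE-GallNet-X implementation.

```python
# Hypothetical sketch: Grad-CAM heatmaps for a CNN ultrasound classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=9).eval()   # stand-in for the gallbladder classifier
target_layer = model.layer4[-1]          # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int | None = None) -> torch.Tensor:
    """Return a heatmap (H, W) in [0, 1] for one (1, 3, H, W) input image."""
    logits = model(image)
    class_idx = int(logits.argmax()) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel-wise gradient averages
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()
```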

Peirong Liu, Oula Puonti, Xiaoling Hu, Karthik Gopinath, Annabel Sorby-Adams, Daniel C. Alexander, W. Taylor Kimberly, Juan E. Iglesias

arXiv preprint | Aug 30, 2025
Recent learning-based approaches have made astonishing advances in calibrated medical imaging like computed tomography (CT), yet they struggle to generalize in uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. Here we introduce BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. With the proposed "mild-to-severe" intra-subject generation and "real-synth" mix-up training strategy, BrainFM is resilient to the appearance of acquired images (e.g., modality, contrast, deformation, resolution, artifacts), and can be directly applied to five fundamental brain imaging tasks, including image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance, bias field estimation, and registration. We evaluate the efficacy of BrainFM on eleven public datasets, and demonstrate its robustness and effectiveness across all tasks and input modalities. Code is available at https://github.com/jhuldr/BrainFM.
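The abstract names a "real-synth mix-up" training strategy; one possible reading is classic mixup applied to paired real and synthetic volumes, sketched below purely as an assumption for illustration, not the BrainFM training code.

```python
# Hypothetical sketch: mixup between a real batch and its synthetic counterpart.
import torch

def real_synth_mixup(real: torch.Tensor, synth: torch.Tensor, alpha: float = 0.4):
    """Blend a real batch with a synthetic batch using a Beta-sampled weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * real + (1.0 - lam) * synth, lam

# Example: mixed_batch, lam = real_synth_mixup(real_batch, synth_batch)
```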

Numan Saeed, Salma Hassan, Shahad Hardan, Ahmed Aly, Darya Taratynova, Umair Nawaz, Ufaq Khan, Muhammad Ridzuan, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Thomas Eugene, Raphaël Metz, Mélanie Dore, Gregory Delpon, Vijay Ram Kumar Papineni, Kareem Wahid, Cem Dede, Alaa Mohamed Shawky Ali, Carlos Sjogreen, Mohamed Naser, Clifton D. Fuller, Valentin Oreiller, Mario Jreige, John O. Prior, Catherine Cheze Le Rest, Olena Tankyevych, Pierre Decazes, Su Ruan, Stephanie Tanadini-Lang, Martin Vallières, Hesham Elhalawani, Ronan Abgral, Romain Floch, Kevin Kerleguer, Ulrike Schick, Maelle Mauguen, Arman Rahmim, Mohammad Yaqub

arXiv preprint | Aug 30, 2025
We describe a publicly available multimodal dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies for head and neck cancer research. The dataset includes 1123 FDG-PET/CT studies from patients with histologically confirmed head and neck cancer, acquired from 10 international medical centers. All examinations consisted of co-registered PET/CT scans with varying acquisition protocols, reflecting real-world clinical diversity across institutions. Primary gross tumor volumes (GTVp) and involved lymph nodes (GTVn) were manually segmented by experienced radiation oncologists and radiologists following standardized guidelines and quality control measures. We provide anonymized NIfTI files of all studies, along with expert-annotated segmentation masks, radiotherapy dose distributions for a subset of patients, and comprehensive clinical metadata. This metadata includes TNM staging, HPV status, demographics (age and gender), long-term follow-up outcomes, survival times, censoring indicators, and treatment information. We demonstrate how this dataset can be used for three key clinical tasks: automated tumor segmentation, recurrence-free survival prediction, and HPV status classification, providing benchmark results using state-of-the-art deep learning models, including UNet, SegResNet, and multimodal prognostic frameworks.
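A minimal sketch of working with such a release (loading NIfTI volumes and scoring a predicted GTV segmentation against the expert mask with the Dice coefficient) is given below; the file names follow no official naming convention of this dataset and are assumptions for illustration.

```python
# Hypothetical sketch: loading NIfTI volumes and computing a Dice score for one case.
import nibabel as nib
import numpy as np

def load_volume(path: str) -> np.ndarray:
    """Read a NIfTI file into a NumPy array."""
    return np.asarray(nib.load(path).dataobj)

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between a predicted and a reference binary mask."""
    pred, ref = pred > 0, ref > 0
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

# pet = load_volume("case_0001_PT.nii.gz")        # hypothetical file names
# gtv_ref = load_volume("case_0001_GTVp.nii.gz")
# gtv_pred = load_volume("case_0001_pred.nii.gz")
# print("Dice:", dice(gtv_pred, gtv_ref))
```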

Movindu Dassanayake, Alejandro Lopez, Andrew Reader, Gary J. R. Cook, Clemens Mingels, Arman Rahmim, Robert Seifert, Ian Alberts, Fereshteh Yousefirizi

arXiv preprint | Aug 30, 2025
Long axial field-of-view (LAFOV) PET/CT has the potential to unlock new applications such as ultra-low-dose PET/CT imaging, multiplexed imaging, biomarker development, and faster AI-driven reconstruction, but further work is required before these can be deployed in routine clinical practice. LAFOV PET/CT has unrivalled sensitivity, but its spatial resolution is comparable to that of an equivalent scanner with a shorter axial field of view. AI approaches are increasingly explored as potential avenues to enhance image resolution.