
Quantitative CT biomarkers for renal cell carcinoma subtype differentiation: a comparison of DECT, PCT, and CT texture analysis.

Sah A, Goswami S, Gupta A, Garg S, Yadav N, Dhanakshirur R, Das CJ

PubMed · Jul 1 2025
To evaluate and compare the diagnostic performance of CT texture analysis (CTTA), perfusion CT (PCT), and dual-energy CT (DECT) in distinguishing between clear-cell renal cell carcinoma (ccRCC) and non-ccRCC. This retrospective study included 66 patients with RCC (52 ccRCC and 14 non-ccRCC) who underwent DECT and PCT imaging before surgery (2017-2022). The DECT parameters (iodine concentration, iodine ratio [IR]) and PCT parameters (blood flow, blood volume, mean transit time, time to peak) were measured using circular regions of interest (ROIs). CT texture analysis features were extracted from manually annotated corticomedullary-phase images. A machine learning (ML) model was developed to differentiate RCC subtypes, with performance evaluated using k-fold cross-validation. Multivariate logistic regression analysis was performed to assess the predictive value of each imaging modality. All 3 imaging modalities demonstrated high diagnostic accuracy, with F1 scores of 0.9107, 0.9358, and 0.9348 for PCT, DECT, and CTTA, respectively. None of the 3 models differed significantly (P > 0.05). While each modality could effectively differentiate between ccRCC and non-ccRCC, higher IR on DECT and increased entropy on CTTA were independent predictors of ccRCC, with F1 scores of 0.9345 and 0.9272, respectively (P < 0.001). Dual-energy CT achieved the highest individual performance, with IR being the best predictor (F1 = 0.902). Iodine ratio was significantly higher in ccRCC (65.12 ± 23.73) compared to non-ccRCC (35.17 ± 17.99, P < 0.001), yielding an area under the curve (AUC) of 0.91, sensitivity of 87.5%, and specificity of 89.3%. Entropy on CTTA was the strongest texture feature, with higher values in ccRCC (7.94 ± 0.336) than non-ccRCC (6.43 ± 0.297, P < 0.001), achieving an AUC of 0.94, sensitivity of 83.0%, and specificity of 92.3%. The combined ML model integrating DECT, PCT, and CTTA parameters yielded the highest diagnostic accuracy, with an F1 score of 0.954. PCT, DECT, and CTTA effectively differentiate RCC subtypes. However, IR (DECT) and entropy (CTTA) emerged as key independent markers, suggesting their clinical utility in RCC characterization. Accurate, non-invasive biomarkers are essential to differentiate RCC subtypes, aiding in prognosis and guiding targeted therapies, particularly in ccRCC, where treatment options differ significantly.
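The abstract gives only the outline of the ML pipeline (features from DECT/PCT/CTTA, a classifier, k-fold cross-validation). A minimal sketch of that outline, with hypothetical features and synthetic data standing in for the study's measurements:

```python
# Minimal sketch of an RCC-subtype classifier with k-fold cross-validation.
# Feature names and data are hypothetical; the paper's actual feature set
# and model are not specified beyond "a machine learning model".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 66  # cohort size from the abstract (52 ccRCC, 14 non-ccRCC)
# Hypothetical per-lesion features: iodine ratio (DECT), blood flow (PCT), entropy (CTTA).
X = rng.normal(size=(n, 3))
y = np.array([1] * 52 + [0] * 14)  # 1 = ccRCC, 0 = non-ccRCC

model = make_pipeline(StandardScaler(), LogisticRegression(class_weight="balanced"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
print(f"cross-validated F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```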

Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation.

Silva-Rodríguez J, Dolz J, Ben Ayed I

PubMed · Jul 1 2025
The recent popularity of foundation models and the pre-train-and-adapt paradigm, where a large-scale model is transferred to downstream tasks, is gaining attention for volumetric medical image segmentation. However, current strategies devoted to full fine-tuning may require significant resources and yield sub-optimal results when labeled data for the target task are scarce. This limits applicability in real clinical settings, where institutions are usually constrained in the data and computational resources needed to develop proprietary solutions. To address this challenge, we formalize Few-Shot Efficient Fine-Tuning (FSEFT), a novel and realistic scenario for adapting medical image segmentation foundation models. This setting considers the key role of both data- and parameter-efficiency during adaptation. Building on a foundation model pre-trained on open-access CT organ segmentation sources, we propose leveraging Parameter-Efficient Fine-Tuning and black-box Adapters to address such challenges. Furthermore, novel efficient adaptation methodologies are introduced in this work, including Spatial black-box Adapters, which are better suited to dense prediction tasks, and constrained transductive inference, which leverages task-specific prior knowledge. Our comprehensive transfer learning experiments confirm the suitability of foundation models in medical image segmentation and unveil the limitations of popular fine-tuning strategies in few-shot scenarios. The project code is available at: https://github.com/jusiro/fewshot-finetuning.
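The paper's specific spatial and black-box adapters are not detailed in the abstract; the sketch below shows only the general parameter-efficient idea it builds on: freeze a pre-trained backbone and train a small residual adapter plus a head. All module shapes are illustrative:

```python
# Minimal sketch of parameter-efficient adaptation: freeze a pre-trained
# backbone and train only a small bottleneck adapter and head. This
# illustrates the general PEFT idea, not the paper's exact adapters.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter inserted after a frozen feature layer."""
    def __init__(self, channels: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Conv3d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv3d(bottleneck, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual update

backbone = nn.Sequential(  # stand-in for a pre-trained segmentation encoder
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 16, 3, padding=1)
)
for p in backbone.parameters():
    p.requires_grad = False  # backbone stays frozen

adapter = Adapter(16)
head = nn.Conv3d(16, 2, kernel_size=1)  # per-voxel class logits
params = list(adapter.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)  # only the few adapter/head weights train

x = torch.randn(1, 1, 16, 32, 32)  # toy CT volume
logits = head(adapter(backbone(x)))
print(logits.shape)  # torch.Size([1, 2, 16, 32, 32])
```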

Rethinking boundary detection in deep learning-based medical image segmentation.

Lin Y, Zhang D, Fang X, Chen Y, Cheng KT, Chen H

PubMed · Jul 1 2025
Medical image segmentation is a pivotal task in medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, BraTS, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The code has been released at: CTO.
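The boundary-guided decoder relies on binary boundary masks produced by explicit edge detection operators. A minimal sketch of how such a mask can be derived from a label map with a Sobel operator; the operator choice and thresholding here are illustrative, not the paper's exact recipe:

```python
# Sketch: derive a binary boundary mask from a segmentation label map with
# a Sobel operator -- the kind of explicit edge signal a boundary-guided
# decoder can consume as guidance.
import numpy as np
from scipy import ndimage

mask = np.zeros((64, 64), dtype=float)
mask[16:48, 20:44] = 1.0  # toy foreground region

gx = ndimage.sobel(mask, axis=0)  # vertical gradient
gy = ndimage.sobel(mask, axis=1)  # horizontal gradient
boundary = (np.hypot(gx, gy) > 0).astype(np.uint8)  # 1 on region edges only
print(boundary.sum(), "boundary pixels")
```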

Radiomics for lung cancer diagnosis, management, and future prospects.

Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO

PubMed · Jul 1 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.

Impact of CT reconstruction algorithms on pericoronary and epicardial adipose tissue attenuation.

Xiao H, Wang X, Yang P, Wang L, Xi J, Xu J

PubMed · Jul 1 2025
This study aims to investigate the impact of adaptive statistical iterative reconstruction-Veo (ASIR-V) and deep learning image reconstruction (DLIR) algorithms on the quantification of pericoronary adipose tissue (PCAT) and epicardial adipose tissue (EAT). We further explore the feasibility of correcting these effects through fat threshold adjustment. A retrospective analysis was conducted on the imaging data of 134 patients who underwent coronary CT angiography (CCTA) between December 2023 and January 2024. These data were reconstructed into seven datasets using filtered back projection (FBP), ASIR-V at three different intensities (ASIR-V 30%, ASIR-V 50%, ASIR-V 70%), and DLIR at three different intensities (DLIR-L, DLIR-M, DLIR-H). Repeated-measures ANOVA was used to compare differences in fat, PCAT and EAT attenuation values among the reconstruction algorithms, and Bland-Altman plots were used to analyze the agreement between ASIR-V or DLIR and FBP algorithms in PCAT attenuation values. Compared to FBP, ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, DLIR-M, and DLIR-H significantly increased fat attenuation values (-103.91 ± 12.99 HU, -102.53 ± 12.68 HU, -101.14 ± 12.78 HU, -101.81 ± 12.41 HU, -100.87 ± 12.25 HU, -99.08 ± 12.00 HU vs. -105.95 ± 13.01 HU, all p < 0.001). When the fat threshold was set at -190 to -30 HU, the ASIR-V and DLIR algorithms significantly increased PCAT and EAT attenuation values compared to the FBP algorithm (all p < 0.05), with these values increasing as the reconstruction intensity level increased. After correction with a fat threshold of -200 to -35 HU for ASIR-V 30%, -200 to -40 HU for ASIR-V 50% and DLIR-L, and -200 to -45 HU for ASIR-V 70%, DLIR-M, and DLIR-H, the mean differences in PCAT attenuation values between the ASIR-V or DLIR and FBP algorithms decreased (-0.03 to 1.68 HU vs. 2.35 to 8.69 HU), and no significant difference was found in PCAT attenuation values between FBP and ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, and DLIR-M (all p > 0.05). Compared to the FBP algorithm, the ASIR-V and DLIR algorithms increase PCAT and EAT attenuation values. Adjusting the fat threshold can mitigate the impact of the ASIR-V and DLIR algorithms on PCAT attenuation values.
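The correction hinges on how PCAT/EAT attenuation is measured: as the mean HU of voxels falling inside a fat window, so shifting the window offsets the measured mean. A minimal sketch with synthetic ROI data; the thresholds are those reported above:

```python
# Sketch of the fat-threshold correction idea: attenuation is the mean HU
# of voxels inside a fat window, so shifting the window (e.g. -190..-30 HU
# to -200..-45 HU for DLIR-H, per the abstract) shifts the measured mean.
# ROI data are synthetic.
import numpy as np

def mean_fat_attenuation(hu: np.ndarray, lo: float, hi: float) -> float:
    """Mean attenuation of voxels inside the [lo, hi] HU fat window."""
    fat = hu[(hu >= lo) & (hu <= hi)]
    return float(fat.mean())

rng = np.random.default_rng(1)
roi_hu = rng.normal(loc=-100, scale=25, size=5000)  # toy pericoronary ROI

print(mean_fat_attenuation(roi_hu, -190, -30))  # conventional window
print(mean_fat_attenuation(roi_hu, -200, -45))  # shifted window (DLIR-H correction)
```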

Medical image translation with deep learning: Advances, datasets and perspectives.

Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W

PubMed · Jul 1 2025
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advancements in deep learning (DL)-based medical image translation. Initially, it elaborates on the diverse tasks and practical applications of medical image translation. Subsequently, it provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). Additionally, it delves into generative models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Autoregressive Models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance. Commonly used datasets in this field are also analyzed, highlighting their unique characteristics and applications. Looking ahead, the paper identifies future trends and challenges, and proposes research directions and solutions in medical image translation. It aims to serve as a valuable reference and inspiration for researchers, driving continued progress and innovation in this area.

Integrating prior knowledge with deep learning for optimized quality control in corneal images: A multicenter study.

Li FF, Li GX, Yu XX, Zhang ZH, Fu YN, Wu SQ, Wang Y, Xiao C, Ye YF, Hu M, Dai Q

PubMed · Jul 1 2025
Artificial intelligence (AI) models are effective for analyzing high-quality slit-lamp images but often face challenges in real-world clinical settings due to image variability. This study aims to develop and evaluate a hybrid AI-based image quality control system to classify slit-lamp images, improving diagnostic accuracy and efficiency, particularly in telemedicine applications. Cross-sectional study. The internal dataset comprised 2982 slit-lamp images from Zhejiang Eye Hospital. Two external datasets were included: 13,554 images from the Aier Guangming Eye Hospital (AGEH) and 9853 images from the First People's Hospital of Aksu District in Xinjiang (FPH of Aksu). We developed a Hybrid Prior-Net (HP-Net), a novel network that combines a ResNet-based classification branch with a prior knowledge branch leveraging the Hough circle transform and frequency-domain blur detection. The two branches' features are channel-wise concatenated at the fully connected layer, enhancing representational power and improving the network's ability to classify eligible, misaligned, blurred, and underexposed corneal images. Model performance was evaluated using metrics such as accuracy, precision, recall, specificity, and F1-score, and compared against the performance of other deep learning models. The HP-Net outperformed all other models, achieving an accuracy of 99.03%, precision of 98.21%, recall of 95.18%, specificity of 99.36%, and an F1-score of 96.54% in image classification. The results demonstrated that HP-Net was also highly effective in filtering slit-lamp images from the other two datasets, AGEH and FPH of Aksu, with accuracies of 97.23% and 96.97%, respectively. These results underscore the superior feature extraction and classification capabilities of HP-Net across all evaluated metrics. Our AI-based image quality control system offers a robust and efficient solution for classifying corneal images, with significant implications for telemedicine applications. By incorporating slightly blurred but diagnostically usable images into training datasets, the system enhances the reliability and adaptability of AI tools for medical imaging quality control, paving the way for more accurate and efficient diagnostic workflows.
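The prior-knowledge branch combines two classical cues: a Hough circle transform (for corneal alignment) and frequency-domain blur detection. A rough sketch of both cues with OpenCV and NumPy; all parameter values, and the simple high-frequency-energy blur heuristic, are assumptions rather than HP-Net's exact design:

```python
# Sketch of the two prior-knowledge cues in HP-Net's second branch:
# circle detection (alignment) and a frequency-domain blur score.
import cv2
import numpy as np

img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (128, 128), 60, 200, thickness=8)  # toy "cornea"

# Alignment cue: does a circle of plausible radius appear in the image?
circles = cv2.HoughCircles(
    cv2.GaussianBlur(img, (5, 5), 0), cv2.HOUGH_GRADIENT,
    dp=1, minDist=100, param1=100, param2=20, minRadius=30, maxRadius=100,
)
print("circles found:", 0 if circles is None else circles.shape[1])

# Blur cue: sharp images retain more energy at high spatial frequencies.
f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
mag = np.abs(f)
cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
mag[cy - 16:cy + 16, cx - 16:cx + 16] = 0  # suppress low frequencies
blur_score = mag.mean()  # lower = blurrier (simple heuristic, assumed)
print("high-frequency energy:", blur_score)
```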

World of Forms: Deformable geometric templates for one-shot surface meshing in coronary CT angiography.

van Herten RLM, Lagogiannis I, Wolterink JM, Bruns S, Meulendijks ER, Dey D, de Groot JR, Henriques JP, Planken RN, Saitta S, Išgum I

PubMed · Jul 1 2025
Deep learning-based medical image segmentation and surface mesh generation typically involve a sequential pipeline from image to segmentation to meshes, often requiring large training datasets while making limited use of prior geometric knowledge. This may lead to topological inconsistencies and suboptimal performance in low-data regimes. To address these challenges, we propose a data-efficient deep learning method for direct 3D anatomical object surface meshing using geometric priors. Our approach employs a multi-resolution graph neural network that operates on a prior geometric template, which is deformed to fit the object boundaries of interest. We show how different templates may be used for the different surface meshing targets, and introduce a novel masked autoencoder pretraining strategy for 3D spherical data. The proposed method outperforms nnUNet in a one-shot setting for segmentation of the pericardium, the left ventricle (LV) cavity, and the LV myocardium. Similarly, the method outperforms other lumen segmentation methods operating on multi-planar reformatted images. Results further indicate that mesh quality is on par with or improves upon marching cubes post-processing of voxel mask predictions, while remaining flexible in the choice of mesh triangulation prior, thus paving the way for more accurate and topologically consistent 3D medical object surface meshing.
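The core idea is that a fixed template mesh is deformed by predicted per-vertex offsets, so connectivity, and therefore topology, never changes. A toy sketch of that mechanism; the offset predictor here is a stand-in MLP, not the paper's multi-resolution graph neural network:

```python
# Sketch of template-based surface meshing: a fixed spherical template is
# deformed by predicted per-vertex offsets, so mesh connectivity (and hence
# topology) is preserved by construction. The offset network is a stand-in.
import torch
import torch.nn as nn

def uv_sphere(n: int = 16) -> torch.Tensor:
    """Vertices of a unit UV sphere, shape (n*n, 3)."""
    theta = torch.linspace(0.1, 3.04, n).repeat_interleave(n)
    phi = torch.linspace(0.0, 6.28, n).repeat(n)
    return torch.stack([
        torch.sin(theta) * torch.cos(phi),
        torch.sin(theta) * torch.sin(phi),
        torch.cos(theta),
    ], dim=-1)

template = uv_sphere()                       # fixed prior geometry
offset_net = nn.Sequential(                  # toy per-vertex offset predictor
    nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3)
)
deformed = template + 0.1 * torch.tanh(offset_net(template))  # bounded deformation
print(deformed.shape)  # torch.Size([256, 3]); vertex connectivity untouched
```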

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2(DR)) against conventional T2 turbo spin-echo brain MRI (T2(CN)). 100 consecutive patients with various neurological conditions underwent both T2(DR) and T2(CN) on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2(CN) and 1.04 min for T2(DR). Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNReff (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2(CN) and T2(DR) (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2(CN) emerged in OIQ (p = 0.003) and IA (p < 0.001). T2(CN) had higher SNR (157.9, SD 123.4) than T2(DR) (112.8, SD 82.7), p < 0.001, but T2(DR) demonstrated superior SNReff (14.1, SD 10.3) compared to T2(CN) (10.8, SD 8.5), p < 0.001. Our results suggest that while T2(DR) may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2(CN) MRI acquisitions.
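SNReff adjusts raw SNR for acquisition time; a common convention is to divide by the square root of the scan time, which reproduces the abstract's ordering (T2(DR) has lower raw SNR but, being roughly 3x faster, higher SNReff), though the paper's exact formula is not given. A sketch under that assumption, with synthetic ROI values:

```python
# Sketch of the two SNR measures compared in the abstract. SNR is measured
# from tissue and air ROIs; SNR_eff is normalised by sqrt(acquisition time)
# (a common convention, assumed here -- the paper's formula is not given).
import numpy as np

def snr(tissue_roi: np.ndarray, air_roi: np.ndarray) -> float:
    """Mean tissue signal over the standard deviation of air (noise)."""
    return float(tissue_roi.mean() / air_roi.std())

def snr_eff(snr_value: float, acq_minutes: float) -> float:
    """Time-efficiency-adjusted SNR (assumption: divide by sqrt of time)."""
    return snr_value / np.sqrt(acq_minutes)

rng = np.random.default_rng(2)
white_matter = rng.normal(500, 20, size=1000)  # toy tissue ROI
air = rng.normal(0, 4, size=1000)              # toy air ROI

s = snr(white_matter, air)
print(f"SNR: {s:.1f}")
print(f"SNR_eff at 1.04 min: {snr_eff(s, 1.04):.1f}")  # fast T2(DR)-like scan
print(f"SNR_eff at 3.33 min: {snr_eff(s, 3.33):.1f}")  # slow T2(CN)-like scan
```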

Optimizing clinical risk stratification of localized prostate cancer.

Gnanapragasam VJ

PubMed · Jul 1 2025
To review the current risk and prognostic stratification systems in localised prostate cancer, to explore some of the most promising adjuncts to clinical models, and to examine what the evidence has shown regarding their value. There are many new biomarker-based models seeking to improve, optimise or replace clinical models. There are promising data on the value of MRI, radiomics, genomic classifiers and, most recently, artificial intelligence tools in refining stratification. Despite the extensive literature, however, there remains uncertainty about where in pathways they can provide the most benefit and whether a biomarker is most useful for prognostic or predictive purposes. Comparison studies have also often overlooked the fact that clinical models have themselves evolved, and the baseline used in biomarker studies that have shown superiority has to be considered in context. For new biomarkers to be included in stratification models, well-designed prospective clinical trials are needed. Until then, caution is needed in interpreting their use for day-to-day decision making. It is critical that users balance any purported incremental value against the performance of the latest clinical classification and multivariate models, especially as the latter are cost-free and widely available.