Deep learning-based auto-contouring of organs/structures-at-risk for pediatric upper abdominal radiotherapy.

Ding M, Maspero M, Littooij AS, van Grotel M, Fajardo RD, van Noesel MM, van den Heuvel-Eibrink MM, Janssens GO

PubMed · Jul 1, 2025
This study aimed to develop a computed tomography (CT)-based multi-organ segmentation model for delineating organs-at-risk (OARs) in pediatric upper abdominal tumors and evaluate its robustness across multiple datasets. In-house postoperative CTs from pediatric patients with renal tumors and neuroblastoma (n = 189) and a public dataset (n = 189) with CTs covering thoracoabdominal regions were used. Seventeen OARs were delineated: nine by clinicians (Type 1) and eight using TotalSegmentator (Type 2). Auto-segmentation models were trained on the in-house data alone (Model-PMC-UMCU) and on the in-house data combined with the public data (Model-Combined). Performance was assessed with the Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (HD95), and mean surface distance (MSD). Two clinicians rated clinical acceptability on a 5-point Likert scale across 15 patient contours. Model robustness was evaluated against sex, age, intravenous contrast, and tumor type. Model-PMC-UMCU achieved mean DSC values above 0.95 for five of nine OARs, while the spleen and heart ranged between 0.90 and 0.95. The stomach-bowel and pancreas exhibited DSC values below 0.90. Model-Combined demonstrated improved robustness across both datasets. Clinical evaluation revealed good usability, with both clinicians rating six of nine Type 1 OARs above four and six of eight Type 2 OARs above three. Significant performance differences were found only across age groups in both datasets, specifically in the left lung and pancreas. The 0-2 age group showed the lowest performance. A multi-organ segmentation model was developed, showcasing enhanced robustness when trained on combined datasets. This model is suitable for various OARs and can be applied to multiple datasets in clinical settings.
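
As a concrete illustration of the reported metrics, the sketch below computes DSC and surface distances for a pair of binary masks. This is a minimal NumPy/SciPy illustration, not the study's evaluation code; the mask shapes are invented and the surface distances are one-directional for brevity.

```python
# Minimal sketch of two of the reported metrics on binary numpy masks.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def surface_distances(pred: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Distances from predicted surface voxels to the reference surface
    (one direction only; HD95/MSD as reported are typically symmetric)."""
    ref_surf = ref ^ ndimage.binary_erosion(ref)
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    dist_to_ref = ndimage.distance_transform_edt(~ref_surf)
    return dist_to_ref[pred_surf]

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
ref = np.zeros((64, 64, 64), bool); ref[22:42, 20:40, 20:40] = True
d = surface_distances(pred, ref)
print(f"DSC={dice(pred, ref):.3f}  MSD={d.mean():.2f}  HD95={np.percentile(d, 95):.2f}")
```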

A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression.

Jiang C, Xing X, Nan Y, Fang Y, Zhang S, Walsh S, Yang G, Shen D

PubMed · Jul 1, 2025
Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at certain intervals to assess disease progression, which presents a dilemma: progression is identified only after it has already occurred. To address this issue, a feasible solution is to generate the follow-up CT image from the patient's initial CT image to achieve early prediction of IPF. To this end, we propose a lung structure and function information-guided residual diffusion model. The key components of our model include (1) using a 2.5D generation strategy to reduce the computational cost of generating 3D images with the diffusion model; (2) designing structural attention to mitigate the negative impact of spatial misalignment between the two CT images on generation performance; (3) employing residual diffusion to accelerate model training and inference while focusing more on differences between the two CT images (i.e., the lesion areas); and (4) developing a CLIP-based text extraction module to extract lung function test information and further using such extracted information to guide the generation. Extensive experiments demonstrate that our method can effectively predict IPF progression and achieve superior generation performance compared to state-of-the-art methods.
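
To make the residual-diffusion idea concrete, here is a heavily simplified PyTorch sketch of one training step: noise is added to the residual between the follow-up slice and the central baseline slice, and a stand-in conditional network learns to predict that noise, so learning concentrates on the changed (lesion) regions. The 2.5D stack size, noise schedule, and tiny convolutional denoiser are all assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000  # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(  # stand-in for the conditional 2.5D network
    nn.Conv2d(1 + 3, 32, 3, padding=1), nn.SiLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def training_step(baseline_25d, followup_slice):
    """One denoising step on the residual, conditioned on the baseline stack."""
    residual = followup_slice - baseline_25d[:, 1:2]  # target: the change map
    t = torch.randint(0, T, (residual.shape[0],))
    a = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(residual)
    noisy = a.sqrt() * residual + (1 - a).sqrt() * noise
    pred = denoiser(torch.cat([noisy, baseline_25d], dim=1))
    return F.mse_loss(pred, noise)

# 2.5D baseline stack of 3 adjacent slices; follow-up is the central slice.
loss = training_step(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(loss.item())
```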

Quantitative CT biomarkers for renal cell carcinoma subtype differentiation: a comparison of DECT, PCT, and CT texture analysis.

Sah A, Goswami S, Gupta A, Garg S, Yadav N, Dhanakshirur R, Das CJ

PubMed · Jul 1, 2025
To evaluate and compare the diagnostic performance of CT texture analysis (CTTA), perfusion CT (PCT), and dual-energy CT (DECT) in distinguishing between clear-cell renal cell carcinoma (ccRCC) and non-ccRCC. This retrospective study included 66 patients with RCC (52 ccRCC and 14 non-ccRCC) who underwent DECT and PCT imaging before surgery (2017-2022). The DECT parameters (iodine concentration, iodine ratio [IR]) and PCT parameters (blood flow, blood volume, mean transit time, time to peak) were measured using circular regions of interest (ROIs). CT texture analysis features were extracted from manually annotated corticomedullary-phase images. A machine learning (ML) model was developed to differentiate RCC subtypes, with performance evaluated using k-fold cross-validation. Multivariate logistic regression analysis was performed to assess the predictive value of each imaging modality. All 3 imaging modalities demonstrated high diagnostic accuracy, with F1 scores of 0.9107, 0.9358, and 0.9348 for PCT, DECT, and CTTA, respectively. The 3 models did not differ significantly from one another (P > 0.05). While each modality could effectively differentiate between ccRCC and non-ccRCC, higher IR on DECT and increased entropy on CTTA were independent predictors of ccRCC, with F1 scores of 0.9345 and 0.9272, respectively (P < 0.001). Dual-energy CT achieved the highest individual performance, with IR being the best predictor (F1 = 0.902). Iodine ratio was significantly higher in ccRCC (65.12 ± 23.73) than in non-ccRCC (35.17 ± 17.99, P < 0.001), yielding an area under the curve (AUC) of 0.91, sensitivity of 87.5%, and specificity of 89.3%. Entropy on CTTA was the strongest texture feature, with higher values in ccRCC (7.94 ± 0.336) than in non-ccRCC (6.43 ± 0.297, P < 0.001), achieving an AUC of 0.94, sensitivity of 83.0%, and specificity of 92.3%. The combined ML model integrating DECT, PCT, and CTTA parameters yielded the highest diagnostic accuracy, with an F1 score of 0.954. PCT, DECT, and CTTA effectively differentiate RCC subtypes. However, IR (DECT) and entropy (CTTA) emerged as key independent markers, suggesting their clinical utility in RCC characterization. Accurate, non-invasive biomarkers are essential to differentiate RCC subtypes, aiding in prognosis and guiding targeted therapies, particularly in ccRCC, where treatment options differ significantly.
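
The evaluation protocol (k-fold cross-validated F1 for a classifier on imaging features) can be sketched as follows. The feature values below are synthetic stand-ins with the study's 52/14 class split, not actual DECT/PCT/texture measurements, and the classifier is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
y = np.array([1] * 52 + [0] * 14)  # 52 ccRCC vs 14 non-ccRCC, as reported
# Columns stand in for, e.g., iodine ratio, entropy, blood flow (simulated).
X = rng.normal(loc=y[:, None] * [30.0, 1.5, 10.0],
               scale=[18.0, 0.3, 8.0], size=(66, 3))

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"mean F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```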

Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation.

Silva-Rodríguez J, Dolz J, Ben Ayed I

PubMed · Jul 1, 2025
The recent popularity of foundation models and the pre-train-and-adapt paradigm, where a large-scale model is transferred to downstream tasks, is gaining attention for volumetric medical image segmentation. However, current strategies devoted to full fine-tuning may require significant resources and yield sub-optimal results when the labeled data of the target task is scarce. This limits applicability in real clinical settings, where institutions are usually constrained in the data and computational resources available to develop proprietary solutions. To address this challenge, we formalize Few-Shot Efficient Fine-Tuning (FSEFT), a novel and realistic scenario for adapting medical image segmentation foundation models. This setting considers the key role of both data- and parameter-efficiency during adaptation. Building on a foundation model pre-trained on open-access CT organ segmentation sources, we propose leveraging Parameter-Efficient Fine-Tuning and black-box Adapters to address such challenges. Furthermore, novel efficient adaptation methodologies are introduced in this work, including Spatial black-box Adapters, which are more appropriate for dense prediction tasks, and constrained transductive inference, which leverages task-specific prior knowledge. Our comprehensive transfer learning experiments confirm the suitability of foundation models in medical image segmentation and unveil the limitations of popular fine-tuning strategies in few-shot scenarios. The project code is available at: https://github.com/jusiro/fewshot-finetuning.
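
A minimal sketch of the parameter-efficient adapter idea: a small bottleneck module with a residual connection is attached to a frozen backbone, so only the adapter weights are trained. The dimensions, placement, and module design are illustrative assumptions, not the paper's Spatial black-box Adapter.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection (illustrative)."""
    def __init__(self, dim: int, reduction: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

backbone = nn.Linear(256, 256)  # stand-in for a pretrained encoder block
for p in backbone.parameters():
    p.requires_grad = False     # freeze the foundation model
adapter = Adapter(256)          # only these weights would be trained

out = adapter(backbone(torch.randn(4, 256)))
print("trainable adapter params:", sum(p.numel() for p in adapter.parameters()))
```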

Rethinking boundary detection in deep learning-based medical image segmentation.

Lin Y, Zhang D, Fang X, Chen Y, Cheng KT, Chen H

PubMed · Jul 1, 2025
Medical image segmentation is a pivotal task in medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in segmentation accuracy and strikes a better balance between accuracy and efficiency, without requiring additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets: ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, BraTS, and BTCV. Our experimental results demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The code has been released at: CTO.
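
A small sketch of how a binary boundary mask can be derived from a segmentation label with an explicit edge detection operator (Sobel here); the specific operator and mask generation used in CTO may differ.

```python
import numpy as np
from scipy import ndimage

def boundary_mask(label: np.ndarray) -> np.ndarray:
    """Binary boundary of a 2D segmentation mask via Sobel gradients."""
    gx = ndimage.sobel(label.astype(float), axis=0)
    gy = ndimage.sobel(label.astype(float), axis=1)
    return np.hypot(gx, gy) > 0

mask = np.zeros((64, 64), bool)
mask[16:48, 16:48] = True
print(boundary_mask(mask).sum(), "boundary pixels")
```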

Radiomics for lung cancer diagnosis, management, and future prospects.

Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO

PubMed · Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.
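
As a toy illustration of handcrafted radiomics, the snippet below computes a few first-order intensity features inside a region of interest; production pipelines (e.g., PyRadiomics) add shape and texture families, so this only conveys the basic idea on simulated data.

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """A few first-order radiomics features inside a binary ROI."""
    vals = image[mask]
    hist, _ = np.histogram(vals, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / vals.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

img = np.random.default_rng(1).normal(size=(64, 64))  # simulated image
roi = np.zeros((64, 64), bool)
roi[20:40, 20:40] = True
print(first_order_features(img, roi))
```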

Impact of CT reconstruction algorithms on pericoronary and epicardial adipose tissue attenuation.

Xiao H, Wang X, Yang P, Wang L, Xi J, Xu J

PubMed · Jul 1, 2025
This study aims to investigate the impact of adaptive statistical iterative reconstruction-Veo (ASIR-V) and deep learning image reconstruction (DLIR) algorithms on the quantification of pericoronary adipose tissue (PCAT) and epicardial adipose tissue (EAT). Furthermore, we explore the feasibility of correcting these effects through fat-threshold adjustment. A retrospective analysis was conducted on the imaging data of 134 patients who underwent coronary CT angiography (CCTA) between December 2023 and January 2024. These data were reconstructed into seven datasets using filtered back projection (FBP), ASIR-V at three intensities (ASIR-V 30%, ASIR-V 50%, ASIR-V 70%), and DLIR at three intensities (DLIR-L, DLIR-M, DLIR-H). Repeated-measures ANOVA was used to compare differences in fat, PCAT, and EAT attenuation values among the reconstruction algorithms, and Bland-Altman plots were used to analyze the agreement between the ASIR-V or DLIR and FBP algorithms in PCAT attenuation values. Compared to FBP, ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, DLIR-M, and DLIR-H significantly increased fat attenuation values (-103.91 ± 12.99 HU, -102.53 ± 12.68 HU, -101.14 ± 12.78 HU, -101.81 ± 12.41 HU, -100.87 ± 12.25 HU, -99.08 ± 12.00 HU vs. -105.95 ± 13.01 HU, all p < 0.001). When the fat threshold was set at -190 to -30 HU, the ASIR-V and DLIR algorithms significantly increased PCAT and EAT attenuation values compared to the FBP algorithm (all p < 0.05), with these values increasing as the reconstruction intensity level increased. After correction with a fat threshold of -200 to -35 HU for ASIR-V 30%, -200 to -40 HU for ASIR-V 50% and DLIR-L, and -200 to -45 HU for ASIR-V 70%, DLIR-M, and DLIR-H, the mean differences in PCAT attenuation values between the ASIR-V or DLIR and FBP algorithms decreased (-0.03 to 1.68 HU vs. 2.35 to 8.69 HU), and no significant difference was found in PCAT attenuation values between FBP and ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, and DLIR-M (all p > 0.05). Compared to the FBP algorithm, the ASIR-V and DLIR algorithms increase PCAT and EAT attenuation values. Adjusting the fat threshold can mitigate the impact of the ASIR-V and DLIR algorithms on PCAT attenuation values.
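
The fat-threshold correction can be illustrated as follows: mean PCAT attenuation is the mean HU of ROI voxels falling inside a fat window, and shifting that window compensates for the upward attenuation shift introduced by the iterative/DL reconstructions. The HU values below are simulated, not patient data.

```python
import numpy as np

def pcat_attenuation(hu_roi: np.ndarray, lo: float = -190, hi: float = -30) -> float:
    """Mean attenuation of ROI voxels within the fat HU window."""
    fat = hu_roi[(hu_roi >= lo) & (hu_roi <= hi)]
    return float(fat.mean())

rng = np.random.default_rng(0)
roi_fbp = rng.normal(-106, 13, 5000)   # FBP-like fat HU values (simulated)
roi_dlir = roi_fbp + 5                 # DLIR-like upward shift (simulated)
print(pcat_attenuation(roi_fbp))              # default -190 to -30 HU window
print(pcat_attenuation(roi_dlir, -200, -45))  # adjusted window, per the study
```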

Integrating prior knowledge with deep learning for optimized quality control in corneal images: A multicenter study.

Li FF, Li GX, Yu XX, Zhang ZH, Fu YN, Wu SQ, Wang Y, Xiao C, Ye YF, Hu M, Dai Q

PubMed · Jul 1, 2025
Artificial intelligence (AI) models are effective for analyzing high-quality slit-lamp images but often face challenges in real-world clinical settings due to image variability. This study aims to develop and evaluate a hybrid AI-based image quality control system to classify slit-lamp images, improving diagnostic accuracy and efficiency, particularly in telemedicine applications. Cross-sectional study. The internal dataset comprised 2982 slit-lamp images from Zhejiang Eye Hospital. Two external datasets were included: 13,554 images from the Aier Guangming Eye Hospital (AGEH) and 9853 images from the First People's Hospital of Aksu District in Xinjiang (FPH of Aksu). We developed Hybrid Prior-Net (HP-Net), a novel network that combines a ResNet-based classification branch with a prior-knowledge branch leveraging the Hough circle transform and frequency-domain blur detection. The two branches' features are channel-wise concatenated at the fully connected layer, enhancing representational power and improving the network's ability to classify eligible, misaligned, blurred, and underexposed corneal images. Model performance was evaluated using accuracy, precision, recall, specificity, and F1-score, and compared against other deep learning models. HP-Net outperformed all other models, achieving an accuracy of 99.03%, precision of 98.21%, recall of 95.18%, specificity of 99.36%, and an F1-score of 96.54% in image classification. HP-Net was also highly effective in filtering slit-lamp images from the two external datasets, AGEH and FPH of Aksu, with accuracies of 97.23% and 96.97%, respectively. These results underscore the superior feature extraction and classification capabilities of HP-Net across all evaluated metrics. Our AI-based image quality control system offers a robust and efficient solution for classifying corneal images, with significant implications for telemedicine applications. By incorporating slightly blurred but diagnostically usable images into training datasets, the system enhances the reliability and adaptability of AI tools for medical imaging quality control, paving the way for more accurate and efficient diagnostic workflows.
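
A hedged sketch of the two prior-knowledge cues described for HP-Net: a Hough circle transform (for detecting a circular corneal structure) and a frequency-domain blur score. The OpenCV parameters and FFT cutoff below are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def prior_cues(gray: np.ndarray):
    """Return (circle found?, high-frequency energy) for an 8-bit image."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=100, param2=30, minRadius=30, maxRadius=200)
    # Suppress low frequencies; the remaining spectral energy tracks sharpness.
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    h, w = gray.shape
    f[h // 2 - 16:h // 2 + 16, w // 2 - 16:w // 2 + 16] = 0
    blur_score = np.log1p(np.abs(f)).mean()
    return circles is not None, blur_score

img = cv2.GaussianBlur((np.random.rand(256, 256) * 255).astype(np.uint8), (5, 5), 0)
print(prior_cues(img))
```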

World of Forms: Deformable geometric templates for one-shot surface meshing in coronary CT angiography.

van Herten RLM, Lagogiannis I, Wolterink JM, Bruns S, Meulendijks ER, Dey D, de Groot JR, Henriques JP, Planken RN, Saitta S, Išgum I

PubMed · Jul 1, 2025
Deep learning-based medical image segmentation and surface mesh generation typically involve a sequential pipeline from image to segmentation to meshes, often requiring large training datasets while making limited use of prior geometric knowledge. This may lead to topological inconsistencies and suboptimal performance in low-data regimes. To address these challenges, we propose a data-efficient deep learning method for direct 3D anatomical object surface meshing using geometric priors. Our approach employs a multi-resolution graph neural network that operates on a prior geometric template, which is deformed to fit the object boundaries of interest. We show how different templates may be used for different surface meshing targets, and introduce a novel masked autoencoder pretraining strategy for 3D spherical data. The proposed method outperforms nnUNet in a one-shot setting for segmentation of the pericardium, left ventricle (LV) cavity, and LV myocardium. Similarly, the method outperforms other lumen segmentation methods operating on multi-planar reformatted images. Results further indicate that mesh quality is on par with or improves upon marching cubes post-processing of voxel mask predictions, while remaining flexible in the choice of mesh triangulation prior, thus paving the way for more accurate and topologically consistent 3D medical object surface meshing.
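
The template-deformation idea can be sketched as follows: a spherical template's vertices are displaced by bounded offsets predicted from their coordinates. The random point template and small MLP below are crude stand-ins for the paper's geometric template and multi-resolution graph neural network.

```python
import torch
import torch.nn as nn

def sphere_template(n: int = 642) -> torch.Tensor:
    """Stand-in unit-sphere template: n points normalized to radius 1."""
    pts = torch.randn(n, 3)
    return pts / pts.norm(dim=1, keepdim=True)

offset_net = nn.Sequential(  # stand-in for the multi-resolution graph network
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3),
)

template = sphere_template()
# Bounded per-vertex offsets keep the deformed surface near the template prior.
deformed = template + 0.1 * torch.tanh(offset_net(template))
print(deformed.shape)
```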

Development and validation of an interpretable machine learning model for diagnosing pathologic complete response in breast cancer.

Zhou Q, Peng F, Pang Z, He R, Zhang H, Jiang X, Song J, Li J

PubMed · Jul 1, 2025
Pathologic complete response (pCR) following neoadjuvant chemotherapy (NACT) is a critical prognostic marker for patients with breast cancer, potentially allowing surgery omission. However, noninvasive and accurate pCR diagnosis remains a significant challenge due to the limitations of current imaging techniques, particularly in cases where tumors completely disappear post-NACT. We developed a novel framework incorporating Dimensional Accumulation for Layered Images (DALI) and an Attention-Box annotation tool to address the unique challenge of analyzing imaging data where target lesions are absent. These methods transform three-dimensional magnetic resonance imaging into two-dimensional representations and ensure consistent target tracking across time points. Preprocessing techniques, including tissue-region normalization and subtraction imaging, were used to enhance model performance. Imaging features were extracted using radiomics and pretrained deep-learning models, and machine-learning algorithms were integrated into a stacked ensemble model. The approach was developed using the I-SPY 2 dataset and validated with an independent Tangshan People's Hospital cohort. The stacked ensemble model achieved superior diagnostic performance, with an area under the receiver operating characteristic curve of 0.831 (95% confidence interval, 0.769-0.887) on the test set, outperforming individual models. Tissue-region normalization and subtraction imaging significantly enhanced diagnostic accuracy. SHAP analysis identified variables that contributed to the model predictions, ensuring model interpretability. This innovative framework addresses challenges of noninvasive pCR diagnosis. Integrating advanced preprocessing techniques improves feature quality and model performance, supporting clinicians in identifying patients who can safely omit surgery. This innovation reduces unnecessary treatments and improves quality of life for patients with breast cancer.
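
A minimal scikit-learn sketch of a stacked ensemble in the spirit described above: base learners fit on imaging features, with a logistic-regression meta-model combining their predictions. The base learners, meta-model, and synthetic features are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                 # pCR vs non-pCR labels (simulated)
X = rng.normal(y[:, None], 1.0, (200, 16))  # stand-in imaging features

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(),   # meta-model over base predictions
    cv=5,
)
stack.fit(X, y)
print(stack.predict_proba(X)[:3, 1])
```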