Macrotrabecular-massive subtype in hepatocellular carcinoma based on contrast-enhanced CT: deep learning outperforms machine learning.

Jia L, Li Z, Huang G, Jiang H, Xu H, Zhao J, Li J, Lei J

PubMed | Aug 28, 2025
To develop a CT-based deep learning model for predicting the macrotrabecular-massive (MTM) subtype of hepatocellular carcinoma (HCC) and to compare its diagnostic performance with machine learning models. We retrospectively collected contrast-enhanced CT data from patients diagnosed with HCC via histopathological examination between January 2019 and August 2023. These patients were recruited from two medical centers. All analyses were performed using two-dimensional regions of interest. We developed a novel deep learning network based on ResNet-50, named ResNet-ViT Contrastive Learning (RVCL). The RVCL model was compared against baseline deep learning models and machine learning models. Additionally, we developed a multimodal prediction model by integrating deep learning models with clinical parameters. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 368 patients (mean age, 56 ± 10 years; 285 [77%] male) from two institutions were retrospectively enrolled. Our RVCL model demonstrated superior diagnostic performance in predicting MTM (AUC = 0.93) on the external test set compared to the five baseline deep learning models (AUCs ranging from 0.46 to 0.72, all p < 0.05) and the three machine learning models (AUCs ranging from 0.49 to 0.60, all p < 0.05). However, integrating the clinical biomarker alpha-fetoprotein (AFP) into the RVCL model did not significantly improve diagnostic performance (internal test set: AUC 0.99 vs 0.95 [p = 0.08]; external test set: AUC 0.98 vs 0.93 [p = 0.05]). The deep learning model based on contrast-enhanced CT can accurately predict the MTM subtype in HCC patients, offering a practical tool for clinical decision-making. The RVCL model introduces a transformative approach to non-invasive diagnosis of the MTM subtype of HCC by harmonizing convolutional neural networks and vision transformers within a unified architecture. The RVCL model can accurately predict the MTM subtype. Deep learning outperforms machine learning for predicting the MTM subtype. RVCL boosts accuracy and guides personalized therapy.
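
The abstract names RVCL's ingredients (a ResNet-50 base, a vision transformer component, and contrastive learning) but not its internals. The PyTorch sketch below shows one plausible way to wire these together: ResNet-50 feature maps are tokenized, passed through a transformer encoder, and pooled into both a classification head and a contrastive projection head. All layer sizes and the head designs are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class RVCLSketch(nn.Module):
    """Hypothetical ResNet-50 + transformer hybrid with a contrastive
    projection head; the true RVCL design may differ."""
    def __init__(self, embed_dim=256, num_heads=8, depth=2, num_classes=2):
        super().__init__()
        backbone = resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, 7, 7)
        self.to_tokens = nn.Conv2d(2048, embed_dim, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.contrastive_head = nn.Sequential(nn.Linear(embed_dim, embed_dim),
                                              nn.ReLU(),
                                              nn.Linear(embed_dim, 128))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feats = self.to_tokens(self.cnn(x))        # (B, D, H, W)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, HW, D) patch tokens
        tokens = self.transformer(tokens)
        pooled = tokens.mean(dim=1)                # global average pooling
        return self.classifier(pooled), self.contrastive_head(pooled)

model = RVCLSketch()
logits, z = model(torch.randn(2, 3, 224, 224))  # MTM logits + contrastive embedding
```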

PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

PubMed | Aug 28, 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent ¹⁸F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: Using Subset-I, the segmentation model achieved optimal performance at the 94th epoch with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, the SVM-based radiomic model achieved the best predictive performance, with an AUC of 0.935. Conclusion: We showed, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from ¹⁸F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
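
As a rough illustration of the Subset-II/Subset-III chain (screening radiomic features by correlation with PD-L1, then training an SVM radiomic model evaluated by AUC), here is a scikit-learn sketch on synthetic data. The feature matrix, the top-10 selection rule, and the cross-validation setup are placeholders, not the authors' protocol.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for the real data (hypothetical shapes and values):
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 50))           # radiomic features, Subset-III size
pd_l1 = rng.normal(size=85)             # PD-L1 expression scores
response = rng.integers(0, 2, size=85)  # immunotherapy response labels

# Screen features by correlation with PD-L1 (the Subset-II step); here we
# keep the 10 smallest p-values rather than thresholding at P < 0.05.
pvals = np.array([spearmanr(X[:, j], pd_l1).pvalue for j in range(X.shape[1])])
selected = np.argsort(pvals)[:10]

# SVM-based radiomic model evaluated by cross-validated AUC (Subset-III step).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
aucs = cross_val_score(clf, X[:, selected], response, cv=5, scoring="roc_auc")
print(f"mean AUC: {aucs.mean():.3f}")
```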

Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images.

Bianconi F, Khan MU, Du H, Jassim S

PubMed | Aug 28, 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist the physician in clinical decision-making and to reduce the subjectivity of interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing, respectively, 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions. CNN-based features outperformed the other individual descriptors, achieving accuracy between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy, to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes demonstrates clear advantages for discriminating benign versus malignant breast lesions on ultrasound images.
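
The sum, product, and majority-voting fusion rules compared here are straightforward to state in code. A minimal NumPy sketch, assuming each base classifier outputs per-lesion class probabilities:

```python
import numpy as np

def fuse_probabilities(prob_list, rule="product"):
    """Combine per-model probability arrays (each n_samples x n_classes)
    with the decision-fusion rules compared in the paper."""
    P = np.stack(prob_list)                      # (n_models, n, c)
    if rule == "sum":
        return P.sum(axis=0) / P.shape[0]        # average of probabilities
    if rule == "product":
        fused = P.prod(axis=0)                   # product rule
        return fused / fused.sum(axis=1, keepdims=True)
    if rule == "vote":
        votes = P.argmax(axis=2)                 # (n_models, n) hard labels
        n_classes = P.shape[2]
        return np.array([np.bincount(v, minlength=n_classes)
                         for v in votes.T]) / P.shape[0]
    raise ValueError(rule)

# Toy usage: three classifiers scoring two lesions (benign vs malignant).
p1 = np.array([[0.7, 0.3], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])
p3 = np.array([[0.8, 0.2], [0.55, 0.45]])
print(fuse_probabilities([p1, p2, p3], rule="product").round(3))
```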

Classification of computed tomography scans: a novel approach implementing an enforced random forest algorithm.

Biondi M, Bortoli E, Marini L, Avitabile R, Bartoli A, Busatti E, Tozzi A, Cimmino MC, Piccini L, Giusti EB, Guasti A

PubMed | Aug 28, 2025
Medical imaging faces critical challenges in radiation dose management and protocol standardisation. This study introduces a machine learning approach using a random forest algorithm to classify Computed Tomography (CT) scan protocols. By leveraging dose monitoring system data, we provide a data-driven solution for establishing Diagnostic Reference Levels (DRLs) while minimising computational resources. We developed a classification workflow using a Random Forest Classifier to categorise CT scans into anatomical regions: head, thorax, abdomen, spine, and complex multi-region scans (thorax + abdomen and total body). The methodology featured an iterative "human-in-the-loop" refinement process involving data preprocessing, machine learning algorithm training, expert validation, and protocol classification. After training the initial model, we applied the methodology to a new, independent dataset. By analysing 52,982 CT scan records from 11 imaging devices across five hospitals, we trained the classifier to distinguish multiple anatomical regions, categorising scans into head, thorax, abdomen, and spine. The final validation on the new database confirmed the model's robustness, achieving 97% accuracy. This research shifts protocol classification from manual, time-consuming processes to a data-driven workflow built around a random forest algorithm. Our study presents a transformative approach to CT scan protocol classification, demonstrating the potential of data-driven methodologies in medical imaging. We have created a framework for managing protocol classification and establishing DRLs by integrating computational intelligence with clinical expertise. Future research will explore applying this methodology to other radiological procedures.
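
A minimal scikit-learn sketch of the core idea: random forest classification of dose-monitoring records, with a human-in-the-loop step that routes low-confidence predictions to an expert. The feature columns and the 0.6 confidence threshold are hypothetical, as the abstract does not list the fields used.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dose-monitoring export; columns are illustrative only.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "scan_length_mm": rng.uniform(150, 900, n),
    "ctdi_vol_mgy":   rng.uniform(2, 60, n),
    "kvp":            rng.choice([80, 100, 120, 140], n),
    "n_series":       rng.integers(1, 6, n),
    "region":         rng.choice(["head", "thorax", "abdomen", "spine",
                                  "thorax+abdomen", "total body"], n),
})
X, y = df.drop(columns="region"), df["region"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")

# "Human-in-the-loop" step: flag low-confidence predictions for expert
# review; corrected labels would seed the next training iteration.
needs_review = clf.predict_proba(X_te).max(axis=1) < 0.6
print(f"{needs_review.sum()} scans flagged for expert validation")
```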

AI-driven body composition monitoring and its prognostic role in mCRPC undergoing lutetium-177 PSMA radioligand therapy: insights from a retrospective single-center analysis.

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

PubMed | Aug 28, 2025
Body composition (BC) analysis is performed to quantify the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by an artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [¹⁷⁷Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [⁶⁸Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months. Median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, relative change in visceral adipose tissue (VAT) (HR: 0.26; p = 0.006), previous chemotherapy of any type (HR: 2.4; p = 0.003), the presence of liver metastases (HR: 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR: 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independent of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detects relative changes in VAT, which was an independent predictor of shorter OS in our population of patients undergoing RLT.
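
The multivariable analysis described here is a standard Cox proportional hazards model. A sketch with the lifelines library on synthetic data, using covariate names that mirror the reported predictors (the values are random, not the study's data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient table; column names mirror the reported predictors.
rng = np.random.default_rng(1)
n = 92
df = pd.DataFrame({
    "os_months":      rng.exponential(16.3, n),  # follow-up time
    "death":          rng.integers(0, 2, n),     # event indicator
    "vat_change_pct": rng.normal(-10, 15, n),    # relative VAT change (%)
    "prior_chemo":    rng.integers(0, 2, n),
    "liver_mets":     rng.integers(0, 2, n),
    "de_ritis_ratio": rng.normal(1.2, 0.4, n),
})

# Multivariable Cox regression: hazard ratios and p-values per covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()
```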

Deep Learning Framework for Early Detection of Pancreatic Cancer Using Multi-Modal Medical Imaging Analysis

Dennis Slobodzian, Karissa Tilbury, Amir Kordijazi

arXiv preprint | Aug 28, 2025
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal forms of cancer, with a five-year survival rate below 10%, primarily due to late detection. This research develops and validates a deep learning framework for early PDAC detection through analysis of dual-modality imaging: autofluorescence and second harmonic generation (SHG). We analyzed 40 unique patient samples to create a specialized neural network capable of distinguishing between normal, fibrotic, and cancerous tissue. Our methodology evaluated six distinct deep learning architectures, comparing traditional Convolutional Neural Networks (CNNs) with modern Vision Transformers (ViTs). Through systematic experimentation, we identified and overcame significant challenges in medical image analysis, including limited dataset size and class imbalance. The final optimized framework, based on a modified ResNet architecture with frozen pre-trained layers and class-weighted training, achieved over 90% accuracy in cancer detection. This represents a significant improvement over current manual analysis methods and demonstrates potential for clinical deployment. This work establishes a robust pipeline for automated PDAC detection that can augment pathologists' capabilities while providing a foundation for future expansion to other cancer types. The developed methodology also offers valuable insights for applying deep learning to limited-size medical imaging datasets, a common challenge in clinical applications.
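
The winning recipe, a pre-trained ResNet with frozen layers, a new classification head, and class-weighted loss, is a standard transfer-learning pattern. A PyTorch sketch with placeholder class counts and a three-channel input standing in for the dual-modality autofluorescence/SHG images:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Three-class tissue classifier: normal, fibrotic, cancerous.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze pre-trained layers
model.fc = nn.Linear(model.fc.in_features, 3)     # new trainable head

# Inverse-frequency class weights counteract class imbalance; these counts
# are placeholders, not the paper's actual class distribution.
counts = torch.tensor([120.0, 80.0, 40.0])
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x, y = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 1])
optimizer.zero_grad()
loss = criterion(model(x), y)                     # class-weighted training step
loss.backward()
optimizer.step()
```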

Adapting Foundation Model for Dental Caries Detection with Dual-View Co-Training

Tao Luo, Han Wu, Tong Yang, Dinggang Shen, Zhiming Cui

arXiv preprint | Aug 28, 2025
Accurate dental caries detection from panoramic X-rays plays a pivotal role in preventing lesion progression. However, current detection methods often yield suboptimal accuracy due to subtle contrast variations and diverse lesion morphology of dental caries. In this work, inspired by the clinical workflow where dentists systematically combine whole-image screening with detailed tooth-level inspection, we present DVCTNet, a novel Dual-View Co-Training network for accurate dental caries detection. Our DVCTNet begins by employing automated tooth detection to establish two complementary views: a global view from panoramic X-ray images and a local view from cropped tooth images. We then pretrain two vision foundation models separately on the two views. The global-view foundation model serves as the detection backbone, generating region proposals and global features, while the local-view model extracts detailed features from corresponding cropped tooth patches matched by the region proposals. To effectively integrate information from both views, we introduce a Gated Cross-View Attention (GCV-Atten) module that dynamically fuses dual-view features, enhancing the detection pipeline by integrating the fused features back into the detection model for final caries detection. To rigorously evaluate our DVCTNet, we test it on a public dataset and further validate its performance on a newly curated, high-precision dental caries detection dataset, annotated using both intra-oral images and panoramic X-rays for double verification. Experimental results demonstrate DVCTNet's superior performance against existing state-of-the-art (SOTA) methods on both datasets, indicating the clinical applicability of our method. Our code and labeled dataset are available at https://github.com/ShanghaiTech-IMPACT/DVCTNet.
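
The abstract gives GCV-Atten's role (dynamically gating cross-attention between global and local features) but not its exact design. One plausible PyTorch sketch, in which global region-proposal features query the matched tooth-patch features and a learned gate controls how much of the attended signal is fused back:

```python
import torch
import torch.nn as nn

class GatedCrossViewAttention(nn.Module):
    """Hypothetical sketch of gated cross-view attention; the published
    GCV-Atten module may differ in detail."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_feats, local_feats):
        # global_feats: (B, Nq, D) region-proposal features (global view)
        # local_feats:  (B, Nk, D) features from matched tooth crops (local view)
        attended, _ = self.attn(global_feats, local_feats, local_feats)
        g = self.gate(torch.cat([global_feats, attended], dim=-1))
        return self.norm(global_feats + g * attended)  # gated residual fusion

fused = GatedCrossViewAttention()(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
```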

Dino U-Net: Exploiting High-Fidelity Dense Features from Foundation Models for Medical Image Segmentation

Yifan Gao, Haoyue Li, Feng Yuan, Xiaosong Wang, Xin Gao

arXiv preprint | Aug 28, 2025
Foundation models pre-trained on large-scale natural image datasets offer a powerful paradigm for medical image segmentation. However, effectively transferring their learned representations for precise clinical applications remains a challenge. In this work, we propose Dino U-Net, a novel encoder-decoder architecture designed to exploit the high-fidelity dense features of the DINOv3 vision foundation model. Our architecture introduces an encoder built upon a frozen DINOv3 backbone, which employs a specialized adapter to fuse the model's rich semantic features with low-level spatial details. To preserve the quality of these representations during dimensionality reduction, we design a new fidelity-aware projection module (FAPM) that effectively refines and projects the features for the decoder. We conducted extensive experiments on seven diverse public medical image segmentation datasets. Our results show that Dino U-Net achieves state-of-the-art performance, consistently outperforming previous methods across various imaging modalities. Our framework proves to be highly scalable, with segmentation accuracy consistently improving as the backbone model size increases up to the 7-billion-parameter variant. The findings demonstrate that leveraging the superior, dense-pretrained features from a general-purpose foundation model provides a highly effective and parameter-efficient approach to advance the accuracy of medical image segmentation. The code is available at https://github.com/yifangao112/DinoUNet.
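
FAPM's stated goal is to preserve feature quality during dimensionality reduction. A sketch of one way to realize that: a 1x1 projection followed by a residual refinement branch, fed by a frozen dense-feature backbone. The patch-embedding layer below is a placeholder standing in for DINOv3, whose API is not assumed here, and the module design is illustrative rather than the paper's FAPM.

```python
import torch
import torch.nn as nn

class FidelityAwareProjectionSketch(nn.Module):
    """Reduce channel dimensionality while a residual refinement branch
    restores detail lost in the projection (illustrative design)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.project = nn.Conv2d(in_dim, out_dim, kernel_size=1)
        self.refine = nn.Sequential(
            nn.Conv2d(out_dim, out_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(out_dim, out_dim, kernel_size=3, padding=1),
        )

    def forward(self, x):
        x = self.project(x)
        return x + self.refine(x)   # refined features for the U-Net decoder

# Frozen backbone stand-in: any ViT producing dense (B, C, H/16, W/16)
# feature maps can play the role of DINOv3 here.
backbone = nn.Conv2d(3, 768, kernel_size=16, stride=16)  # placeholder patch embed
for p in backbone.parameters():
    p.requires_grad = False

fapm = FidelityAwareProjectionSketch(768, 256)
feats = fapm(backbone(torch.randn(1, 3, 224, 224)))      # (1, 256, 14, 14)
```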

CardioMorphNet: Cardiac Motion Prediction Using a Shape-Guided Bayesian Recurrent Deep Network

Reza Akbari Movahed, Abuzar Rezaee, Arezoo Zakeri, Colin Berry, Edmond S. L. Ho, Ali Gooya

arXiv preprint | Aug 28, 2025
Accurate cardiac motion estimation from cine cardiac magnetic resonance (CMR) images is vital for assessing cardiac function and detecting its abnormalities. Existing methods often struggle to capture heart motion accurately because they rely on intensity-based image registration similarity losses that may overlook cardiac anatomical regions. To address this, we propose CardioMorphNet, a recurrent Bayesian deep learning framework for 3D cardiac shape-guided deformable registration using short-axis (SAX) CMR images. It employs a recurrent variational autoencoder to model spatio-temporal dependencies over the cardiac cycle and two posterior models for bi-ventricular segmentation and motion estimation. The loss function derived from the Bayesian formulation guides the framework to focus on anatomical regions by recursively registering segmentation maps without using an intensity-based image registration similarity loss, while leveraging sequential SAX volumes and spatio-temporal features. The Bayesian modelling also enables computation of uncertainty maps for the estimated motion fields. Validated on the UK Biobank dataset by comparing warped mask shapes with ground truth masks, CardioMorphNet demonstrates superior performance in cardiac motion estimation, outperforming state-of-the-art methods. Uncertainty assessment shows that it also yields lower uncertainty values for estimated motion fields in the cardiac region compared with other probabilistic cardiac registration methods, indicating higher confidence in its predictions.
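
The key idea, registering segmentation maps rather than intensities, can be illustrated with a differentiable spatial-transformer warp and a Dice-style overlap loss. This generic sketch omits the paper's recurrent VAE and Bayesian machinery; shapes and the zero-initialized flow are illustrative.

```python
import torch
import torch.nn.functional as F

def warp(seg, flow):
    """Warp a mask volume seg (B, C, D, H, W) with a dense displacement
    field flow (B, 3, D, H, W) via a differentiable spatial transformer."""
    B, _, D, H, W = seg.shape
    base = torch.stack(torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij"),
        dim=0).float().to(seg.device)                  # identity voxel grid
    coords = base.unsqueeze(0) + flow                  # displaced coordinates
    # Normalize each axis to [-1, 1] and reorder to (x, y, z) for grid_sample.
    norm = [2 * coords[:, i] / (s - 1) - 1 for i, s in enumerate([D, H, W])]
    grid = torch.stack([norm[2], norm[1], norm[0]], dim=-1)  # (B, D, H, W, 3)
    return F.grid_sample(seg, grid, align_corners=True)

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

seg_t0 = torch.rand(1, 1, 16, 32, 32)   # bi-ventricular mask at frame t
seg_t1 = torch.rand(1, 1, 16, 32, 32)   # mask at frame t+1
flow = torch.zeros(1, 3, 16, 32, 32, requires_grad=True)
loss = dice_loss(warp(seg_t0, flow), seg_t1)  # shape-guided, intensity-free
loss.backward()                               # gradients flow to the motion field
```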

Domain Adaptation Techniques for Natural and Medical Image Classification

Ahmad Chaddad, Yihang Wu, Reem Kateb, Christian Desrosiers

arXiv preprint | Aug 28, 2025
Domain adaptation (DA) techniques can alleviate distribution differences between training and test sets by leveraging information from source domains. In image classification, most advances in DA have been made using natural images rather than medical data, which are harder to work with. Moreover, even for natural images, the use of mainstream datasets can lead to performance bias. With the aim of better understanding the benefits of DA for both natural and medical images, this study performs 557 simulation studies using seven widely used DA techniques for image classification in five natural and eight medical datasets that cover various scenarios, such as out-of-distribution data, dynamic data streams, and limited training samples. Our experiments yield detailed results and insightful observations highlighting the performance and medical applicability of these techniques. Notably, our results show the outstanding performance of the Deep Subdomain Adaptation Network (DSAN) algorithm. This algorithm achieved feasible classification accuracy (91.2%) on the COVID-19 dataset using ResNet50 and showed an important accuracy improvement in the dynamic data stream DA scenario (+6.7%) compared to the baseline. Our results also demonstrate that DSAN exhibits a remarkable level of explainability when evaluated on COVID-19 and skin cancer datasets. These results contribute to the understanding of DA techniques and offer valuable insight into the effective adaptation of models to medical data.
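
DSAN's subdomain alignment minimizes a class-conditional (local) maximum mean discrepancy, LMMD. As a building block, here is the simpler global multi-kernel MMD between source and target feature batches; the bandwidths and feature shapes are illustrative.

```python
import torch

def rbf_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    """Multi-kernel RBF maximum mean discrepancy between source features x
    and target features y (each n x d). DSAN minimizes a class-conditional
    subdomain variant of this quantity (LMMD); this global MMD is the
    simpler building block."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

src = torch.randn(64, 256)          # e.g., ResNet50 penultimate features
tgt = torch.randn(64, 256) + 0.5    # shifted target domain
print(rbf_mmd(src, tgt))            # larger when the domains differ more
```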