Page 32 of 141 (1402 results)

Semi-supervised Deep Transfer for Regression without Domain Alignment

Mainak Biswas, Ambedkar Dukkipati, Devarajan Sridharan

arXiv preprint · Sep 5, 2025
Deep learning models deployed in real-world applications (e.g., medicine) face challenges because source models do not generalize well to domain-shifted target data. Many successful domain adaptation (DA) approaches require full access to source data. Yet, such requirements are unrealistic in scenarios where source data cannot be shared, either because of privacy concerns or because they are too large and incur prohibitive storage or computational costs. Moreover, resource constraints may limit the availability of labeled target data. We illustrate this challenge in a neuroscience setting where source data are unavailable, labeled target data are meager, and predictions involve continuous-valued outputs. We build upon Contradistinguisher (CUDA), an efficient framework that learns a shared model across labeled source and unlabeled target samples without intermediate representation alignment. Yet, CUDA was designed for unsupervised DA, with full access to source data, and for classification tasks. We develop CRAFT -- a Contradistinguisher-based Regularization Approach for Flexible Training -- for source-free (SF), semi-supervised transfer of pretrained models in regression tasks. We showcase the efficacy of CRAFT in two neuroscience settings: gaze prediction with electroencephalography (EEG) data and "brain age" prediction with structural MRI data. For both datasets, CRAFT yielded up to 9% improvement in root-mean-squared error (RMSE) over fine-tuned models when labeled training examples were scarce. Moreover, CRAFT leveraged unlabeled target data and outperformed four competing state-of-the-art source-free domain adaptation models by more than 3%. Lastly, we demonstrate the efficacy of CRAFT on two other real-world regression benchmarks. We propose CRAFT as an efficient approach for source-free, semi-supervised deep transfer for regression, a problem ubiquitous in biology and medicine.
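As a rough sketch of the setting, the snippet below shows the RMSE metric the abstract reports and a generic semi-supervised regression objective: supervised MSE on the few labeled targets plus a consistency term on unlabeled targets. This is an illustrative stand-in, not CRAFT's actual contradistinguisher-based regularizer.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error, the evaluation metric in the abstract."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def semi_supervised_loss(pred_labeled, y_labeled,
                         pred_unlabeled_a, pred_unlabeled_b, lam=0.1):
    """Supervised MSE on scarce labeled targets plus a consistency
    regularizer that asks two augmented views of each unlabeled target
    to agree. A generic sketch, not CRAFT's objective."""
    sup = np.mean((pred_labeled - y_labeled) ** 2)
    cons = np.mean((pred_unlabeled_a - pred_unlabeled_b) ** 2)
    return float(sup + lam * cons)
```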

INR meets Multi-Contrast MRI Reconstruction

Natascha Niessen, Carolin M. Pirkl, Ana Beatriz Solana, Hannah Eichhorn, Veronika Spieker, Wenqi Huang, Tim Sprenger, Marion I. Menzel, Julia A. Schnabel

arXiv preprint · Sep 5, 2025
Multi-contrast MRI sequences allow for the acquisition of images with varying tissue contrast within a single scan. The resulting multi-contrast images can be used to extract quantitative information on tissue microstructure. To make such multi-contrast sequences feasible for clinical routine, the usually very long scan times need to be shortened, e.g., through undersampling in k-space. However, this poses challenges for the reconstruction. In general, advanced reconstruction techniques such as compressed sensing or deep learning-based approaches can enable the acquisition of high-quality images despite the acceleration. In this work, we leverage the redundant anatomical information of multi-contrast sequences to achieve even higher acceleration rates. We use undersampling patterns that capture the contrast information located at the k-space center, while performing complementary undersampling across contrasts for high frequencies. To reconstruct this highly sparse k-space data, we propose an implicit neural representation (INR) network that is ideally suited to using the complementary information acquired across contrasts, as it jointly reconstructs all contrast images. We demonstrate the benefits of our proposed INR method by applying it to multi-contrast MRI using the MPnRAGE sequence, where it outperforms the state-of-the-art parallel imaging compressed sensing (PICS) reconstruction method, even at higher acceleration factors.
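The sampling idea can be sketched as binary masks over phase-encode lines: every contrast keeps the fully sampled k-space center, while the high-frequency lines are split round-robin across contrasts so their union covers all of k-space. The line counts and round-robin assignment here are illustrative simplifications of the patterns the paper describes.

```python
import numpy as np

def complementary_masks(n_lines=64, n_contrasts=4, center_frac=0.125):
    """Build one boolean sampling mask per contrast: a shared, fully
    sampled center block plus complementary high-frequency lines
    distributed round-robin across contrasts."""
    masks = np.zeros((n_contrasts, n_lines), dtype=bool)
    width = int(n_lines * center_frac)
    c0 = n_lines // 2 - width // 2
    c1 = c0 + width
    masks[:, c0:c1] = True                       # shared k-space center
    periph = [i for i in range(n_lines) if not (c0 <= i < c1)]
    for k, line in enumerate(periph):            # complementary high freqs
        masks[k % n_contrasts, line] = True
    return masks
```

Each contrast is individually undersampled (22 of 64 lines with these defaults), yet jointly the contrasts see every line, which is what a joint INR reconstruction can exploit.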

Implementation of Fully Automated AI-Integrated System for Body Composition Assessment on Computed Tomography for Opportunistic Sarcopenia Screening: Multicenter Prospective Study.

Urooj B, Ko Y, Na S, Kim IO, Lee EH, Cho S, Jeong H, Khang S, Lee J, Kim KW

PubMed · Sep 5, 2025
Opportunistic computed tomography (CT) screening for the evaluation of sarcopenia and myosteatosis has been gaining emphasis. A fully automated artificial intelligence (AI)-integrated system for body composition assessment on CT scans is a prerequisite for effective opportunistic screening. However, no study has evaluated the implementation of fully automated AI systems for opportunistic screening in real-world clinical practice for routine health check-ups. The aim of this study is to evaluate the performance and clinical utility of a fully automated AI-integrated system for body composition assessment on opportunistic CT during routine health check-ups. This prospective multicenter study included 537 patients who underwent routine health check-ups across 3 institutions. Our AI pipeline selects the L3 slice and segments muscle and fat areas in an end-to-end manner. The AI models were integrated into the Picture Archiving and Communication System (PACS) at each institution. Technical success rate, processing time, and segmentation accuracy (Dice similarity coefficient) were assessed. Body composition metrics were analyzed across age and sex groups. The fully automated AI-integrated system successfully retrieved anonymized CT images from the PACS, performed L3 selection and segmentation, and provided body composition metrics, including muscle quality maps and muscle age. The technical success rate was 100%, with no failed cases requiring manual adjustment. The mean processing time from CT acquisition to report generation was 4.12 seconds. Segmentation accuracy comparing AI results with human expert results was 97.4%. Significant age-related declines in skeletal muscle area and normal-attenuation muscle area were observed, alongside increases in low-attenuation muscle area and intramuscular adipose tissue. Implementation of the fully automated AI-integrated system significantly enhanced opportunistic sarcopenia screening, achieving excellent technical success and high segmentation accuracy without manual intervention. This system has the potential to transform routine health check-ups by providing rapid and accurate assessments of body composition.
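The Dice similarity coefficient used to compare AI and expert contours is simple to compute from two binary masks; a minimal version:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation
    masks: twice the overlap divided by the total foreground size.
    1.0 means perfect agreement, 0.0 means no overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))
```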

Interpretable Deep Transfer Learning for Breast Ultrasound Cancer Detection: A Multi-Dataset Study

Mohammad Abbadi, Yassine Himeur, Shadi Atalla, Wathiq Mansoor

arXiv preprint · Sep 5, 2025
Breast cancer remains a leading cause of cancer-related mortality among women worldwide. Ultrasound imaging, widely used due to its safety and cost-effectiveness, plays a key role in early detection, especially in patients with dense breast tissue. This paper presents a comprehensive study on the application of machine learning and deep learning techniques for breast cancer classification using ultrasound images. Using datasets such as BUSI, BUS-BRA, and BrEaST-Lesions USG, we evaluate classical machine learning models (SVM, KNN) and deep convolutional neural networks (ResNet-18, EfficientNet-B0, GoogLeNet). Experimental results show that ResNet-18 achieves the highest accuracy (99.7%) and perfect sensitivity for malignant lesions. Classical ML models, though outperformed by CNNs, achieve competitive performance when enhanced with deep feature extraction. Grad-CAM visualizations further improve model transparency by highlighting diagnostically relevant image regions. These findings support the integration of AI-based diagnostic tools into clinical workflows and demonstrate the feasibility of deploying high-performing, interpretable systems for ultrasound-based breast cancer detection.
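The two headline figures, accuracy and per-class sensitivity, are computed from predicted labels as follows (a generic sketch of the evaluation, not the paper's code):

```python
import numpy as np

def sensitivity(y_true, y_pred, positive=1):
    """Fraction of truly positive (malignant) cases the classifier
    flags; the abstract's 'perfect sensitivity' corresponds to 1.0."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == positive
    return float((y_pred[pos] == positive).mean())

def accuracy(y_true, y_pred):
    """Overall fraction of correct predictions."""
    return float((np.asarray(y_true) == np.asarray(y_pred)).mean())
```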

AI-driven and Traditional Radiomic Model for Predicting Muscle Invasion in Bladder Cancer via Multi-parametric Imaging: A Systematic Review and Meta-analysis.

Wang Z, Shi H, Wang Q, Huang Y, Feng M, Yu L, Dong B, Li J, Deng X, Fu S, Zhang G, Wang H

PubMed · Sep 5, 2025
This study systematically evaluates the diagnostic performance of artificial intelligence (AI)-driven and conventional radiomics models in detecting muscle-invasive bladder cancer (MIBC) through meta-analytical approaches. Furthermore, it investigates their potential synergistic value with the Vesical Imaging-Reporting and Data System (VI-RADS) and assesses clinical translation prospects. This study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We conducted a comprehensive systematic search of PubMed, Web of Science, Embase, and Cochrane Library databases up to May 13, 2025, and manually screened the references of included studies. The quality and risk of bias of the selected studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) and Radiomics Quality Score (RQS) tools. We pooled the area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and their 95% confidence intervals (95% CI). Additionally, meta-regression and subgroup analyses were performed to identify potential sources of heterogeneity. This meta-analysis incorporated 43 studies comprising 9624 patients. The majority of included studies demonstrated low risk of bias, with a mean RQS of 18.89. Pooled analysis yielded an AUC of 0.92 (95% CI: 0.89-0.94). The aggregate sensitivity and specificity were both 0.86 (95% CI: 0.84-0.87), with heterogeneity indices of I² = 43.58 and I² = 72.76, respectively. The PLR was 5.97 (95% CI: 5.28-6.75, I² = 64.04), while the NLR was 0.17 (95% CI: 0.15-0.19, I² = 37.68). The DOR reached 35.57 (95% CI: 29.76-42.51, I² = 99.92). Notably, all included studies exhibited significant heterogeneity (P < 0.1). Meta-regression and subgroup analyses identified several significant sources of heterogeneity, including: study center type (single-center vs. multi-center), sample size (<100 vs. ≥100 patients), dataset classification (training, validation, testing, or ungrouped), imaging modality (computed tomography [CT] vs. magnetic resonance imaging [MRI]), modeling algorithm (deep learning vs. machine learning vs. other), validation methodology (cross-validation vs. cohort validation), segmentation method (manual vs. [semi]automated), regional differences (China vs. other countries), and risk of bias (high vs. low vs. unclear). AI-driven and traditional radiomic models have exhibited robust diagnostic performance for MIBC. Nevertheless, substantial heterogeneity across studies necessitates validation through multinational, multicenter prospective cohort studies to establish external validity.
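The pooled metrics are linked by simple identities: PLR = sensitivity / (1 − specificity), NLR = (1 − sensitivity) / specificity, and DOR = PLR / NLR. The pooled values in the abstract differ slightly from these identities because each metric is pooled separately across studies rather than derived from a single sensitivity/specificity pair.

```python
def diagnostic_ratios(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    derived from one sensitivity/specificity pair."""
    plr = sens / (1.0 - spec)          # how much a positive test raises odds
    nlr = (1.0 - sens) / spec          # how much a negative test lowers odds
    return plr, nlr, plr / nlr         # DOR = PLR / NLR
```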

A dual-branch encoder network based on squeeze-and-excitation UNet and transformer for 3D PET-CT image tumor segmentation.

Li M, Zhu R, Li M, Wang H, Teng Y

PubMed · Sep 5, 2025
Recognition of tumors is very important in clinical practice and radiomics; however, the segmentation task currently still needs to be done manually by experts. With the development of deep learning, automatic segmentation of tumors is gradually becoming possible. This paper combines the molecular information from PET and the pathology information from CT for tumor segmentation. A dual-branch encoder is designed based on SE-UNet (Squeeze-and-Excitation Normalization UNet) and Transformer, a 3D Convolutional Block Attention Module (CBAM) is added to the skip connections, and BCE loss is used in training to improve segmentation accuracy. The new model is named TASE-UNet. The proposed method was tested on the HECKTOR2022 dataset, where it obtains the best segmentation accuracy compared with state-of-the-art methods. Specifically, we obtained results of 76.10% and 3.27 for the two key evaluation metrics, DSC and HD95. Experiments demonstrate that the designed network is reasonable and effective. The full implementation is available at https://github.com/LiMingrui1/TASE-UNet.
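The BCE training loss named in the abstract is voxel-wise binary cross-entropy between predicted tumor probabilities and the ground-truth mask; a minimal NumPy version (the clipping constant is a standard numerical-stability choice, not taken from the paper):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and a
    binary target mask. Probabilities are clipped away from 0 and 1 to
    keep the logarithms finite."""
    p = np.clip(np.asarray(pred, float), eps, 1.0 - eps)
    t = np.asarray(target, float)
    return float(-np.mean(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)))
```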

A generalist foundation model and database for open-world medical image segmentation.

Zhang S, Zhang Q, Zhang S, Liu X, Yue J, Lu M, Xu H, Yao J, Wei X, Cao J, Zhang X, Gao M, Shen J, Hao Y, Wang Y, Zhang X, Wu S, Zhang P, Cui S, Wang G

PubMed · Sep 5, 2025
Vision foundation models have demonstrated vast potential in achieving generalist medical segmentation capability, providing a versatile, task-agnostic solution through a single model. However, current generalist models involve simple pre-training on various medical data containing irrelevant information, often resulting in the negative transfer phenomenon and degenerated performance. Furthermore, the practical applicability of foundation models across diverse open-world scenarios, especially in out-of-distribution (OOD) settings, has not been extensively evaluated. Here we construct a publicly accessible database, MedSegDB, based on a tree-structured hierarchy and annotated from 129 public medical segmentation repositories and 5 in-house datasets. We further propose a Generalist Medical Segmentation model (MedSegX), a vision foundation model trained with a model-agnostic Contextual Mixture of Adapter Experts (ConMoAE) for open-world segmentation. We conduct a comprehensive evaluation of MedSegX across a range of medical segmentation tasks. Experimental results indicate that MedSegX achieves state-of-the-art performance across various modalities and organ systems in in-distribution (ID) settings. In OOD and real-world clinical settings, MedSegX consistently maintains its performance in both zero-shot and data-efficient generalization, outperforming other foundation models.
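The general shape of a contextual mixture of adapter experts can be sketched as a gate that maps a context embedding to softmax weights over lightweight adapters, with the output being the weighted sum of adapter outputs. All names and shapes below are illustrative; this is not MedSegX's actual ConMoAE module.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_of_adapters(feat, context, adapters, gate_w):
    """Hypothetical contextual mixture of adapter experts: the gate
    turns the context embedding into weights over the adapters, and
    the output blends the adapters' transformed features."""
    weights = softmax(gate_w @ context)            # (n_experts,)
    outs = np.stack([A @ feat for A in adapters])  # (n_experts, d)
    return weights @ outs                          # (d,)
```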

AI-powered automated model construction for patient-specific CFD simulations of aortic flows.

Du P, An D, Wang C, Wang JX

PubMed · Sep 5, 2025
Image-based modeling is essential for understanding cardiovascular hemodynamics and advancing the diagnosis and treatment of cardiovascular diseases. Constructing patient-specific vascular models remains labor-intensive, error-prone, and time-consuming, limiting their clinical applications. This study introduces a deep-learning framework that automates the creation of simulation-ready vascular models from medical images. The framework integrates a segmentation module for accurate voxel-based vessel delineation with a surface deformation module that performs anatomically consistent and unsupervised surface refinements guided by medical image data. The integrated pipeline addresses key limitations of existing methods, enhancing geometric accuracy and computational efficiency. Evaluated on public datasets, it achieves state-of-the-art segmentation performance while substantially reducing manual effort and processing time. The resulting vascular models exhibit anatomically accurate and visually realistic geometries, effectively capturing both primary vessels and intricate branching patterns. In conclusion, this work advances the scalability and reliability of image-based computational modeling, facilitating broader applications in clinical and research settings.

Detecting, Characterizing, and Mitigating Implicit and Explicit Racial Biases in Health Care Datasets With Subgroup Learnability: Algorithm Development and Validation Study.

Gulamali F, Sawant AS, Liharska L, Horowitz C, Chan L, Hofer I, Singh K, Richardson L, Mensah E, Charney A, Reich D, Hu J, Nadkarni G

PubMed · Sep 4, 2025
The growing adoption of diagnostic and prognostic algorithms in health care has led to concerns about the perpetuation of algorithmic bias against disadvantaged groups of individuals. Deep learning methods to detect and mitigate bias have revolved around modifying models, optimization strategies, and threshold calibration, with varying levels of success and tradeoffs. However, there have been limited substantive efforts to address bias at the level of the data used to generate algorithms in health care datasets. The aim of this study is to create a simple metric (AEquity) that uses a learning curve approximation to distinguish and mitigate bias via guided dataset collection or relabeling. We demonstrate this metric in 2 well-known examples, chest X-rays and health care cost utilization, and detect novel biases in the National Health and Nutrition Examination Survey. We demonstrated that using AEquity to guide data-centric collection for each diagnostic finding in the chest radiograph dataset decreased bias by between 29% and 96.5% when measured by differences in area under the curve. Next, we wanted to examine (1) whether AEquity worked on intersectional populations and (2) whether AEquity is invariant to different types of fairness metrics, not just area under the curve. Subsequently, we examined the effect of AEquity on mitigating bias when measured by false negative rate, precision, and false discovery rate for Black patients on Medicaid. When we examined Black patients on Medicaid, at the intersection of race and socioeconomic status, we found that AEquity-based interventions reduced bias across a number of different fairness metrics, including overall false negative rate by 33.3% (absolute bias reduction=1.88×10⁻¹, 95% CI 1.4×10⁻¹ to 2.5×10⁻¹; bias reduction of 33.3%, 95% CI 26.6%-40%; precision bias by 7.50×10⁻², 95% CI 7.48×10⁻² to 7.51×10⁻²; bias reduction of 94.6%, 95% CI 94.5%-94.7%; false discovery rate by 94.5%; absolute bias reduction=3.50×10⁻², 95% CI 3.49×10⁻² to 3.50×10⁻²). Similarly, AEquity-guided data collection demonstrated bias reduction of up to 80% on mortality prediction with the National Health and Nutrition Examination Survey (absolute bias reduction=0.08, 95% CI 0.07-0.09). Then, we wanted to compare AEquity to state-of-the-art data-guided debiasing measures such as balanced empirical risk minimization and calibration. Consequently, we benchmarked against balanced empirical risk minimization and calibration and showed that AEquity-guided data collection outperforms both standard approaches. Moreover, we demonstrated that AEquity works on fully connected networks; convolutional neural networks such as ResNet-50; transformer architectures such as ViT-B/16, a vision transformer with 86 million parameters; and nonparametric methods such as Light Gradient-Boosting Machine. In short, we demonstrated that AEquity is a robust tool by applying it to different datasets, algorithms, and intersectional analyses, and by measuring its effectiveness with respect to a range of traditional fairness metrics.
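The headline bias measure, the difference in area under the curve between subgroups, can be sketched directly: a rank-based AUC per subgroup and the absolute gap between them. This illustrates the evaluation being reported, not the AEquity learning-curve metric itself.

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: probability a positive case outscores a
    negative one, with ties counted as half."""
    y, s = np.asarray(y_true), np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

def subgroup_auc_gap(y_true, scores, group):
    """Bias as the absolute AUC difference between two subgroups --
    the kind of gap the study reports shrinking by 29-96.5%."""
    y, s, g = map(np.asarray, (y_true, scores, group))
    return abs(auc(y[g == 0], s[g == 0]) - auc(y[g == 1], s[g == 1]))
```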

A Cascaded Segmentation-Classification Deep Learning Framework for Preoperative Prediction of Occult Peritoneal Metastasis and Early Recurrence in Advanced Gastric Cancer.

Zou T, Chen P, Wang T, Lei T, Chen X, Yang F, Lin X, Li S, Yi X, Zheng L, Lin Y, Zheng B, Song J, Wang L

PubMed · Sep 4, 2025
To develop a cascaded deep learning (DL) framework integrating tumor segmentation with metastatic risk stratification for preoperative prediction of occult peritoneal metastasis (OPM) in advanced gastric cancer (GC), and to validate its generalizability for early peritoneal recurrence (PR) prediction. This multicenter study enrolled 765 patients with advanced GC from three institutions. We developed a two-stage framework as follows: (1) V-Net-based tumor segmentation on CT; (2) DL-based metastatic risk classification using segmented tumor regions. Clinicopathological predictors were integrated with deep learning probabilities to construct a combined model. Validation cohorts comprised internal validation (Test1 for OPM, n=168; Test2 for early PR, n=212) and external validation (Test3 for early PR, n=57, from two independent centers). Multivariable analysis identified Borrmann type (OR=1.314, 95% CI: 1.239-1.394), CA125 ≥35 U/mL (OR=1.301, 95% CI: 1.127-1.499), and CT-N+ stage (OR=1.259, 95% CI: 1.124-1.415) as independent OPM predictors. The combined model demonstrated robust performance for both OPM and early PR prediction: it achieved AUCs of 0.938 (Train) and 0.916 (Test1) for OPM, with improvements over clinical (∆AUC +0.039 to +0.107) and DL-only models (∆AUC +0.044 to +0.104), while attaining AUCs of 0.820-0.825 for early PR (Test2 and Test3) with balanced sensitivity (79.7-88.9%) and specificity (72.4-73.3%). Decision curve analysis confirmed net clinical benefit across clinical thresholds. This CT-based cascaded framework enables reliable preoperative risk stratification for OPM and early PR in advanced GC, potentially refining indications for personalized therapeutic pathways.
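One common way to fuse a deep-learning probability with clinical predictors, as the combined model here does, is on the log-odds scale. In the sketch below the clinical coefficients are the natural logs of the odds ratios reported in the abstract (ln 1.314, ln 1.301, ln 1.259), while the DL weight and intercept are purely illustrative; the paper's fitted model is not specified at this level of detail.

```python
import numpy as np

def combined_risk(dl_prob, borrmann_iv, ca125_high, ct_n_pos,
                  log_ors=(0.273, 0.263, 0.230), w_dl=2.0, intercept=-1.5):
    """Hedged sketch of a combined model: the DL probability is mapped
    to log-odds, clinical binary predictors add their log-odds-ratio
    contributions, and a sigmoid returns the fused risk. w_dl and
    intercept are hypothetical, not fitted values."""
    logit_dl = np.log(dl_prob / (1.0 - dl_prob))
    z = (intercept + w_dl * logit_dl
         + log_ors[0] * borrmann_iv
         + log_ors[1] * ca125_high
         + log_ors[2] * ct_n_pos)
    return float(1.0 / (1.0 + np.exp(-z)))
```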