
Zhang J, Wu X, Liu S, Fan Y, Chen Y, Lyu G, Liu P, Liu Z, He S

PubMed · Jul 8, 2025
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependence on data augmentation, has become a bottleneck in medical self-supervised learning research. Consequently, this paper reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive self-supervised data augmentation method from the perspective of batch fusion. Moreover, we propose a conv embedding block for learning the incremental representation between these batches. On 5 fused-data tasks proposed by previous researchers, our method achieved a linear classification protocol accuracy of 94.25% with only 150 self-supervised feature-training epochs in a Vision Transformer (ViT), the best result among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features during self-supervised learning. The code and weights can be found here.
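
A rough sketch of what batch-fusion augmentation can look like is given below in PyTorch: each image is blended with a randomly paired member of the same batch (mixup-style) to produce fused views for self-supervised training. The paper's actual fusion rule, its adaptive weighting, and the conv embedding block are not described in this abstract, so the function and parameters here are illustrative assumptions only.

```python
import torch

def batch_fusion(images: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    """Fuse each image with a randomly permuted batch partner (hypothetical rule).

    images: (B, C, H, W) batch of ultrasound frames.
    alpha:  Beta-distribution parameter controlling fusion strength.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # fusion weight in (0, 1)
    perm = torch.randperm(images.size(0))                  # random batch pairing
    return lam * images + (1.0 - lam) * images[perm]

# Example: two fused views of the same batch for a contrastive-style objective.
batch = torch.randn(8, 1, 224, 224)
view1, view2 = batch_fusion(batch), batch_fusion(batch)
```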

Italiano A, Gautier O, Dupont J, Assi T, Dawi L, Lawrance L, Bone A, Jardali G, Choucair A, Ammari S, Bayle A, Rouleau E, Cournede PH, Borget I, Besse B, Barlesi F, Massard C, Lassau N

PubMed · Jul 8, 2025
With the advances in artificial intelligence (AI) and precision medicine, radiomics has emerged as a promising tool in the field of oncology. Radiogenomics integrates radiomics with genomic data, potentially offering a non-invasive method for identifying biomarkers relevant to cancer therapy. Liquid biopsy (LB) has further revolutionized cancer diagnostics by detecting circulating tumor DNA (ctDNA), enabling real-time molecular profiling. This study explores the integration of radiomics and LB to predict genomic alterations in solid tumors, including lung, colon, pancreatic, and prostate cancers. A retrospective study was conducted on 418 patients from the STING trial (NCT04932525), all of whom underwent both LB and CT imaging. Predictive models were developed using an XGBoost logistic classifier, with statistical analysis performed to compare tumor volumes, lesion counts, and affected organs across molecular subtypes. Performance was evaluated using area under the curve (AUC) values and cross-validation techniques. Radiomic models demonstrated moderate-to-good performance in predicting genomic alterations. KRAS mutations were best identified in pancreatic cancer (AUC = 0.97), while moderate discrimination was noted in lung (AUC = 0.66) and colon cancer (AUC = 0.64). EGFR mutations in lung cancer were detected with an AUC of 0.74, while BRAF mutations showed good discriminatory ability in both lung (AUC = 0.79) and colon cancer (AUC = 0.76). In the radiomics predictive model, AR mutations in prostate cancer showed limited discrimination (AUC = 0.63). This study highlights the feasibility of integrating radiomics and LB for non-invasive genomic profiling in solid tumors, demonstrating significant potential in patient stratification and personalized oncology care. While promising, further prospective validation is required to enhance the generalizability of these models.
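
In outline, the modeling setup described above can be sketched as an XGBoost logistic classifier scored by cross-validated AUC. The features, labels, and hyperparameters below are synthetic placeholders, not the STING trial pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(418, 50))     # 418 patients x 50 radiomic features (synthetic)
y = rng.integers(0, 2, size=418)   # 1 = genomic alteration detected by liquid biopsy

model = XGBClassifier(
    objective="binary:logistic",   # the "XGBoost logistic classifier" of the abstract
    n_estimators=200,
    max_depth=3,
    learning_rate=0.05,
)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```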

Zhou J, Xu Y, Liu Z, Pfaender F, Liu W

PubMed · Jul 8, 2025
Unsupervised domain adaptation (UDA) methods have achieved significant progress in medical image segmentation. Nevertheless, the significant differences between the source and target domains remain a daunting barrier, creating an urgent need for more robust cross-domain solutions. Current UDA techniques generally employ a fixed, unvarying feature alignment procedure to reduce inter-domain differences. This rigidity disregards the shifting nature of feature distributions over the course of training, leading to suboptimal boundary delineation and detail retention on the target domain. A novel confidence-guided unsupervised domain adaptation network (CUDA-Net) is introduced to overcome persistent domain gaps, adapt to shifting feature distributions during training, and enhance boundary delineation in the target domain. The proposed network adaptively aligns features by tracking cross-domain distribution shifts throughout training, starting with adversarial alignment in the early (coarse) stages and transitioning to pseudo-label-driven alignment in the later (fine-grained) stages, leading to more accurate segmentation in the target domain. A confidence-weighted mechanism then refines the pseudo labels by prioritizing high-confidence regions while allowing low-confidence areas to be explored gradually, enhancing both label reliability and overall model stability. Experiments on three representative medical image datasets, namely MMWHS17, BraTS2021, and VS-Seg, confirm the superiority of CUDA-Net. Notably, CUDA-Net outperforms eight leading methods in overall segmentation accuracy (Dice) and boundary extraction precision (ASD), offering an efficient and reliable solution for cross-domain medical image segmentation.
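
The confidence-weighted pseudo-label mechanism can be sketched in a few lines of PyTorch. The thresholding and weighting below are common choices standing in for CUDA-Net's actual schedule, which the abstract does not specify.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ce(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           tau: float = 0.8) -> torch.Tensor:
    """Cross-entropy on pseudo labels, weighted by the teacher's confidence.

    Both inputs: (B, K, H, W) segmentation logits over K classes.
    """
    probs = teacher_logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)                   # per-pixel confidence and label
    ce = F.cross_entropy(student_logits, pseudo, reduction="none")  # (B, H, W)
    # High-confidence pixels keep full weight; low-confidence ones are damped,
    # not discarded, so they can be explored gradually as training progresses.
    weight = torch.where(conf >= tau, conf, conf ** 2)
    return (weight * ce).mean()

loss = confidence_weighted_ce(torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64))
```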

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed · Jul 8, 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization and robustness, collecting either diverse CT data for re-training or a few test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be extended to various unseen dose levels at test time via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
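
The shifted-Poisson measurement model that the projection-domain diffusion builds on is standard: pre-log counts are Poisson photon noise plus Gaussian electronic noise, and adding the electronic variance to the counts approximately restores the Poisson mean-variance relation. The NumPy sketch below illustrates that identity with made-up values; it is not NEED's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
I0 = 1e5            # incident photon count (dose level, illustrative)
sigma_e = 10.0      # electronic noise standard deviation (illustrative)

# One detector ray with line integral 2.0, many noise realizations.
mean_count = I0 * np.exp(-2.0)
counts = rng.poisson(mean_count, 100_000) + rng.normal(0.0, sigma_e, 100_000)

# Shifted-Poisson surrogate: y + sigma_e^2 is approximately Poisson distributed,
# so its mean and variance should nearly coincide.
shifted = counts + sigma_e**2
print(f"mean={shifted.mean():.0f}, var={shifted.var():.0f}")

# Standard post-log transform applied before image-domain reconstruction.
post_log = -np.log(np.clip(counts, 1.0, None) / I0)
```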

Ma, Z., Yang, X., Atalay, Z., Yang, A., Collins, S., Bai, H., Bernstein, M., Baird, G., Jiao, Z.

medRxiv preprint · Jul 8, 2025
Generative AI models have demonstrated strong potential in radiology report generation, but their clinical adoption depends on physician trust. In this study, we conducted a radiology-focused Turing test to evaluate how well attendings and residents distinguish AI-generated reports from those written by radiologists, and how their confidence and decision time reflect trust. To this end, we developed an integrated web-based platform comprising two core modules: Report Generation and Report Evaluation. Using this platform, eight participants evaluated 48 anonymized X-ray cases, each paired with two reports drawn from three comparison groups: radiologist vs. AI model 1, radiologist vs. AI model 2, and AI model 1 vs. AI model 2. Participants selected the report they believed was AI-generated, rated their confidence, and indicated which report they preferred. Attendings outperformed residents in identifying AI-generated reports (49.9% vs. 41.1%) and exhibited longer decision times, suggesting more deliberate judgment. Both groups took more time when both reports were AI-generated. Our findings highlight the role of clinical experience in AI acceptance and the need for design strategies that foster trust in clinical applications. The project page of the evaluation platform is available at: https://zachatalay89.github.io/Labsite.
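
Scoring such a Turing test reduces to tallying, per reader group, how often the AI-generated report is correctly identified and how long the decision takes. The records below are synthetic stand-ins, not study data.

```python
from statistics import mean

trials = [  # (reader_group, picked_ai_correctly, decision_seconds) - synthetic
    ("attending", True, 34.2), ("attending", False, 41.0),
    ("resident", True, 22.5), ("resident", False, 19.8),
]

for group in ("attending", "resident"):
    rows = [t for t in trials if t[0] == group]
    accuracy = mean(1.0 if correct else 0.0 for _, correct, _ in rows)
    seconds = mean(s for *_, s in rows)
    print(f"{group}: accuracy={accuracy:.1%}, mean decision time={seconds:.1f}s")
```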

Lee JH, Kim PH, Son NH, Han K, Kang Y, Jeong S, Kim EK, Yoon H, Gatidis S, Vasanawala S, Yoon HM, Shin HJ

PubMed · Jul 8, 2025
Artificial intelligence (AI) is increasingly used in radiology, but its development in pediatric imaging remains limited, particularly for emergent conditions. Ileocolic intussusception is an important cause of acute abdominal pain in infants and toddlers and requires timely diagnosis to prevent complications such as bowel ischemia or perforation. While ultrasonography is the diagnostic standard due to its high sensitivity and specificity, its accessibility may be limited, especially outside tertiary centers. Abdominal radiographs (AXRs), despite their limited sensitivity, are often the first-line imaging modality in clinical practice. In this context, AI could support early screening and triage by analyzing AXRs and identifying patients who require further ultrasonography evaluation. This study aimed to upgrade and externally validate an AI model for screening ileocolic intussusception using pediatric AXRs with multicenter data and to assess the diagnostic performance of the model in comparison with radiologists of varying experience levels with and without AI assistance. This retrospective study included pediatric patients (≤5 years) who underwent both AXRs and ultrasonography for suspected intussusception. Based on the preliminary study from hospital A, the AI model was retrained using data from hospital B and validated with external datasets from hospitals C and D. Diagnostic performance of the upgraded AI model was evaluated using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). A reader study was conducted with 3 radiologists, including 2 trainees and 1 pediatric radiologist, to evaluate diagnostic performance with and without AI assistance. Based on the previously developed AI model trained on 746 patients from hospital A, an additional 431 patients from hospital B (including 143 intussusception cases) were used for further training to develop an upgraded AI model. External validation was conducted using data from hospital C (n=68; 19 intussusception cases) and hospital D (n=90; 30 intussusception cases). The upgraded AI model achieved a sensitivity of 81.7% (95% CI 68.6%-90%) and a specificity of 81.7% (95% CI 73.3%-87.8%), with an AUC of 86.2% (95% CI 79.2%-92.1%) in the external validation set. Without AI assistance, radiologists showed lower performance (overall AUC 64%; sensitivity 49.7%; specificity 77.1%). With AI assistance, radiologists' specificity improved to 93% (difference +15.9%; P<.001), and AUC increased to 79.2% (difference +15.2%; P=.05). The least experienced reader showed the largest improvement in specificity (+37.6%; P<.001) and AUC (+14.7%; P=.08). The upgraded AI model improved diagnostic performance for screening ileocolic intussusception on pediatric AXRs. It effectively enhanced the specificity and overall accuracy of radiologists, particularly those with less experience in pediatric radiology. A user-friendly software platform was introduced to support broader clinical validation and underscores the potential of AI as a screening and triage tool in pediatric emergency settings.
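
The external-validation metrics reported above can be computed along the following lines; the labels and scores are synthetic placeholders for the hospital C/D cohorts, and the bootstrap CI is one common construction, not necessarily the paper's.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=158)   # 158 external-validation patients (synthetic)
y_score = np.clip(0.5 * y_true + rng.normal(0.4, 0.25, size=158), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)   # operating point for sensitivity/specificity

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity={tp / (tp + fn):.1%}, specificity={tn / (tn + fp):.1%}")

# Percentile-bootstrap 95% CI for the AUC.
boots = [roc_auc_score(y_true[i], y_score[i])
         for i in (rng.integers(0, 158, size=158) for _ in range(1000))]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"AUC={roc_auc_score(y_true, y_score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```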

Higashibori H, Fukumoto W, Kusuda S, Yokomachi K, Mitani H, Nakamura Y, Awai K

PubMed · Jul 7, 2025
Artificial intelligence (AI) algorithms for lung nodule detection assist radiologists. Because their performance on ultra-high-resolution CT (U-HRCT) images has not been evaluated, we investigated the usefulness of 0.25-mm slices at U-HRCT using a commercially available deep-learning-based lung nodule detection (DL-LND) system. We enrolled 63 patients who underwent U-HRCT for lung cancer or suspected lung cancer. Two board-certified radiologists identified nodules more than 4 mm in diameter on 1-mm HRCT slices and established the reference standard by consensus. They recorded all lesions detected on 5-, 1-, and 0.25-mm slices by the DL-LND system; unidentified nodules were included in the reference standard. To examine the performance of the DL-LND system, the sensitivity, positive predictive value (PPV), and number of false-positive (FP) nodules were recorded. The mean number of lesions detected on 5-, 1-, and 0.25-mm slices was 5.1, 7.8, and 7.2 per CT scan. On 5-mm slices the sensitivity and PPV were 79.8% and 46.4%; on 1-mm slices, 91.5% and 34.8%; and on 0.25-mm slices, 86.7% and 36.1%. Sensitivity was significantly higher on 1-mm than on 5-mm slices (p < 0.01), while PPV was significantly lower on 1-mm than on 5-mm slices (p < 0.01). A slice thickness of 0.25 mm failed to improve performance further. The mean number of FP nodules on 5-, 1-, and 0.25-mm slices was 2.8, 5.2, and 4.7 per CT scan. We found that 1 mm was the best slice thickness for U-HRCT images using the commercially available DL-LND system.
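
Lesion-level detection metrics of this kind are tallied from per-scan matches between system detections and reference-standard nodules, as in the small sketch below (counts are invented for illustration).

```python
scans = [  # (true_positives, false_positives, reference_nodules) per CT scan - synthetic
    (3, 2, 4), (5, 1, 5), (2, 4, 3),
]

tp = sum(s[0] for s in scans)
fp = sum(s[1] for s in scans)
ref = sum(s[2] for s in scans)

print(f"sensitivity: {tp / ref:.1%}")        # detected reference nodules / all reference nodules
print(f"PPV:         {tp / (tp + fp):.1%}")  # true detections / all detections
print(f"FP per scan: {fp / len(scans):.1f}")
```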

Tabo K, Kido T, Matsuda M, Tokui S, Mizogami G, Takimoto Y, Matsumoto M, Miyoshi M, Kido T

PubMed · Jul 7, 2025
Coronary magnetic resonance angiography (CMRA) scans are generally time-consuming. CMRA with compressed sensing (CS) and artificial intelligence (AI) (CSAI CMRA) is expected to shorten the imaging time while maintaining image quality. This study aimed to evaluate the usefulness of CS and AI for non-contrast CMRA. Twenty volunteers underwent both CS and conventional CMRA. Conventional CMRA employed parallel imaging (PI) with an acceleration factor of 2. CS CMRA employed a combination of PI and CS with an acceleration factor of 3. Deep learning reconstruction was performed offline on the CS CMRA data after scanning, which was defined as CSAI CMRA. We compared the imaging time, image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel sharpness of each CMRA scan. The CS CMRA scan time was significantly shorter than that of conventional CMRA (460 s [343, 753 s] vs. 727 s [567, 939 s], p < 0.001). The image quality scores of the left anterior descending artery (LAD) and left circumflex artery (LCX) were significantly higher for conventional CMRA (LAD: 3.3 ± 0.7, LCX: 3.3 ± 0.7) and CSAI CMRA (LAD: 3.7 ± 0.6, LCX: 3.5 ± 0.7) than for CS CMRA (LAD: 2.9 ± 0.6, LCX: 2.9 ± 0.6) (p < 0.05). The right coronary artery scores did not differ among the three groups (p = 0.087). The SNR and CNR were significantly higher for CSAI CMRA (SNR: 12.3 [9.7, 13.7], CNR: 12.3 [10.5, 14.5]) and CS CMRA (SNR: 10.5 [8.2, 12.6], CNR: 9.5 [7.9, 12.6]) than for conventional CMRA (SNR: 9.0 [7.8, 11.1], CNR: 7.7 [6.0, 10.1]) (p < 0.01). Vessel sharpness was significantly higher for CSAI CMRA (LAD: 0.87 [0.78, 0.91]) (p < 0.05), with no significant difference between CS CMRA (LAD: 0.77 [0.71, 0.83]) and conventional CMRA (LAD: 0.77 [0.71, 0.86]). CSAI CMRA can shorten the imaging time while maintaining good image quality.
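
SNR and CNR of this kind are conventionally derived from ROI means and a background-noise standard deviation. The sketch below shows that arithmetic on synthetic ROIs; the study's exact ROI placement and noise estimation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
blood_roi = rng.normal(500.0, 20.0, size=(10, 10))  # coronary lumen signal ROI (synthetic)
myo_roi = rng.normal(200.0, 20.0, size=(10, 10))    # adjacent myocardium ROI (synthetic)
noise_roi = rng.normal(0.0, 15.0, size=(20, 20))    # background/air ROI (synthetic)

noise_sd = noise_roi.std()
snr = blood_roi.mean() / noise_sd                     # signal-to-noise ratio
cnr = (blood_roi.mean() - myo_roi.mean()) / noise_sd  # contrast-to-noise ratio
print(f"SNR={snr:.1f}, CNR={cnr:.1f}")
```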

East SA, Wang Y, Yanamala N, Maganti K, Sengupta PP

PubMed · Jul 7, 2025
The integration of artificial intelligence (AI) with point-of-care ultrasound (POCUS) is transforming cardiovascular diagnostics by enhancing image acquisition, interpretation, and workflow efficiency. These advancements hold promise for expanding access to cardiovascular imaging in resource-limited settings and enabling early disease detection through screening applications. This review explores the opportunities and challenges of AI-enabled POCUS as it reshapes the landscape of cardiovascular imaging. AI-enabled systems can reduce operator dependency, improve image quality, and support clinicians, both novice and experienced, in capturing diagnostically valuable images, ultimately promoting consistency across diverse clinical environments. However, widespread adoption faces significant challenges, including concerns around algorithm generalizability, bias, explainability, clinician trust, and data privacy. Addressing these issues through standardized development, ethical oversight, and clinician-AI collaboration will be critical to safe and effective implementation. Looking ahead, emerging innovations such as autonomous scanning, real-time predictive analytics, tele-ultrasound, and patient-performed imaging underscore the transformative potential of AI-enabled POCUS in reshaping cardiovascular care and advancing equitable healthcare delivery worldwide.

Ding W, Li L, Qiu J, Lin B, Yang M, Huang L, Wu L, Wang S, Zhuang X

PubMed · Jul 7, 2025
Myocardial infarction (MI) is a leading cause of death worldwide. Late gadolinium enhancement (LGE) and T2-weighted cardiac magnetic resonance (CMR) imaging can respectively identify scarring and edema areas, both of which are essential for MI risk stratification and prognosis assessment. Although combining complementary information from multi-sequence CMR is useful, acquiring these sequences can be time-consuming and prohibitive, e.g., due to the administration of contrast agents. Cine CMR is a rapid and contrast-free imaging technique that can visualize both motion and structural abnormalities of the myocardium induced by acute MI. Therefore, we present a new end-to-end deep neural network, referred to as CineMyoPS, to segment myocardial pathologies, i.e., scars and edema, solely from cine CMR images. Specifically, CineMyoPS extracts both motion and anatomy features associated with MI. Given the interdependence between these features, we design a consistency loss (resembling the co-training strategy) to facilitate their joint learning. Furthermore, we propose a time-series aggregation strategy to integrate MI-related features across the cardiac cycle, thereby enhancing segmentation accuracy for myocardial pathologies. Experimental results on a multi-center dataset demonstrate that CineMyoPS achieves promising performance in myocardial pathology segmentation, motion estimation, and anatomy segmentation.
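
The co-training-style consistency loss between the motion and anatomy branches can be sketched as a symmetric KL divergence between their predictions; the actual heads and loss weighting in CineMyoPS are assumptions here.

```python
import torch
import torch.nn.functional as F

def consistency_loss(motion_logits: torch.Tensor,
                     anatomy_logits: torch.Tensor) -> torch.Tensor:
    """Symmetric KL between two branches' segmentation predictions.

    Both inputs: (B, K, H, W) logits over K pathology classes.
    """
    log_p = motion_logits.log_softmax(dim=1)
    log_q = anatomy_logits.log_softmax(dim=1)
    kl_pq = F.kl_div(log_p, log_q.exp(), reduction="batchmean")
    kl_qp = F.kl_div(log_q, log_p.exp(), reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

loss = consistency_loss(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```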