Inter-AI Agreement in Measuring Cine MRI-Derived Cardiac Function and Motion Patterns: A Pilot Study.

Lin K, Sarnari R, Gordon DZ, Markl M, Carr JC

PubMed · Jul 8, 2025
Manually analyzing a series of MRI images to obtain information about the heart's motion is a time-consuming and labor-intensive task. Recently, many AI-driven tools have been used to automatically analyze cardiac MRI. However, it is still unknown whether the results generated by these tools are consistent. The aim of the present study was to investigate the agreement of AI-powered automated tools for measuring cine MRI-derived cardiac function and motion indices. Cine MRI datasets of 23 healthy volunteers (10 males, 32.7 ± 11.3 years) were processed using heart deformation analysis (HDA, Trufistrain) and Circle CVI 42. The left and right ventricular (LV/RV) end-diastolic volume (LVEDV and RVEDV), end-systolic volume (LVESV and RVESV), stroke volume (LVSV and RVSV), cardiac output (LVCO and RVCO), ejection fraction (LVEF and RVEF), LV mass (LVM), and LV global strain, strain rate, displacement, and velocity were calculated without manual intervention. Agreement and discrepancies between indices acquired with the two tools were evaluated using t-tests, the Pearson correlation coefficient (r), the intraclass correlation coefficient (ICC), and the coefficient of variation (CoV). Systematic biases were observed in measurements of cardiac function and motion indices. Among global cardiac function indices, LVEF (56.9% ± 6.4 vs. 57.8% ± 5.7, p = 0.433, r = 0.609, ICC = 0.757, CoV = 6.7%) and LVM (82.7 g ± 21.6 vs. 82.6 g ± 18.7, p = 0.988, r = 0.923, ICC = 0.956, CoV = 11.7%) acquired with HDA and Circle appeared to be interchangeable. Among cardiac motion indices, circumferential strain rate demonstrated good agreement between the two tools (97 ± 14.6 vs. 97.8 ± 13.6, p = 0.598, r = 0.89, ICC = 0.943, CoV = 5.1%). Cine MRI-derived cardiac function and motion indices obtained using different AI-powered image processing tools are related but may also differ. Such variations should be considered when evaluating results sourced from different studies.
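
For readers who want to reproduce this kind of agreement analysis, the sketch below computes a paired t-test, Pearson r, ICC, and CoV for two tools' measurements of the same index. It is a minimal example with made-up values, assuming a two-way, absolute-agreement, single-measure ICC and one common CoV definition; the abstract does not specify which variants the authors used.

```python
# Minimal agreement-statistics sketch for two AI tools measuring the same index.
# The arrays below are hypothetical values, not data from the study.
import numpy as np
from scipy import stats

tool_a = np.array([55.1, 58.3, 62.0, 49.7, 57.5, 60.2])  # e.g., LVEF (%) from tool A
tool_b = np.array([56.0, 57.1, 63.4, 51.2, 56.8, 61.0])  # same subjects, tool B

t, p = stats.ttest_rel(tool_a, tool_b)   # paired t-test for systematic bias
r, _ = stats.pearsonr(tool_a, tool_b)    # linear association

# Two-way, absolute-agreement, single-measure ICC, i.e. ICC(2,1)/ICC(A,1),
# computed from the mean squares of a subjects x raters ANOVA table.
data = np.column_stack([tool_a, tool_b])
n, k = data.shape
grand = data.mean()
ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
ss_err = ((data - data.mean(axis=1, keepdims=True)
                - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# CoV (%): SD of the paired differences relative to the overall mean is one
# common definition; others exist.
cov = 100 * np.std(tool_a - tool_b, ddof=1) / grand

print(f"p={p:.3f}, r={r:.3f}, ICC={icc:.3f}, CoV={cov:.1f}%")
```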

Robust Bi-CBMSegNet framework for advancing breast mass segmentation in mammography with a dual module encoder-decoder approach.

Wang Y, Ali M, Mahmood T, Rehman A, Saba T

PubMed · Jul 8, 2025
Breast cancer is a prevalent disease affecting millions of women worldwide, and early screening can significantly reduce mortality rates. Mammograms are widely used for screening, but manual readings can lead to misdiagnosis. Computer-assisted diagnosis can help physicians make faster, more accurate judgments, which benefits patients. However, segmenting and classifying breast masses in mammograms is challenging because their shapes resemble those of the surrounding glands, and current target detection algorithms have limited applicability and low accuracy, making automated segmentation of breast masses a significant research challenge. This study introduces the Bi-Contextual Breast Mass Segmentation Framework (Bi-CBMSegNet), a novel paradigm that enhances the precision and efficiency of breast mass segmentation within full-field mammograms. Bi-CBMSegNet employs an advanced encoder-decoder architecture comprising two distinct modules: the Global Feature Enhancement Module (GFEM) and the Local Feature Enhancement Module (LFEM). GFEM aggregates and assimilates features from all positions within the mammogram, capturing extensive contextual dependencies that enrich the representation of homogeneous regions. LFEM accentuates semantic information pertinent to each specific position, refining the delineation of heterogeneous regions. The efficacy of Bi-CBMSegNet has been rigorously evaluated on two publicly available mammography databases, demonstrating superior computational efficiency and performance metrics. The findings suggest that Bi-CBMSegNet can deliver a significant advance in medical imaging, particularly in breast cancer screening, augmenting the accuracy and efficacy of diagnostic and treatment planning.
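
The behavior described for GFEM, every position aggregating context from all other positions, matches the familiar non-local/self-attention pattern. Below is a minimal PyTorch sketch of such a global-context block; the layer sizes, projection ratio, and residual weighting are illustrative assumptions, not the published Bi-CBMSegNet design.

```python
# A global-context block in the spirit of GFEM: each spatial position attends
# to all others, so homogeneous regions share long-range context.
import torch
import torch.nn as nn

class GlobalFeatureEnhancement(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw) affinities
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual fusion

feats = torch.randn(1, 64, 32, 32)                     # encoder feature map
print(GlobalFeatureEnhancement(64)(feats).shape)       # torch.Size([1, 64, 32, 32])
```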

Machine learning models using non-invasive tests & B-mode ultrasound to predict liver-related outcomes in metabolic dysfunction-associated steatotic liver disease.

Kosick HM, McIntosh C, Bera C, Fakhriyehasl M, Shengir M, Adeyi O, Amiri L, Sebastiani G, Jhaveri K, Patel K

PubMed · Jul 8, 2025
Advanced metabolic dysfunction-associated steatotic liver disease (MASLD) fibrosis (F3-4) predicts liver-related outcomes. Serum and elastography-based non-invasive tests (NIT) cannot yet reliably predict MASLD outcomes, and the role of B-mode ultrasound (US) for outcome prediction is not yet known. We aimed to evaluate machine learning (ML) algorithms based on simple NIT and US for prediction of adverse liver-related outcomes in MASLD. We conducted a retrospective cohort study of adult MASLD patients biopsied between 2010 and 2021 at one of two Canadian tertiary care centers. Random forest was used to create predictive models for the following outcomes: hepatic decompensation; composite liver-related outcomes (decompensation, hepatocellular carcinoma (HCC), liver transplant, and liver-related mortality); HCC; liver-related mortality; F3-4; and fibrotic metabolic dysfunction-associated steatohepatitis (MASH). Diagnostic performance was assessed using area under the curve (AUC). 457 MASLD patients were included (44.9% F3-4, 31.6% diabetes prevalence, 53.8% male, mean age 49.2 years, mean BMI 32.8 kg/m²). 6.3% had an adverse liver-related outcome over a mean 43 months of follow-up. AUCs for the ML predictive models were: hepatic decompensation 0.90 (0.79-0.98), liver-related outcomes 0.87 (0.76-0.96), HCC 0.72 (0.29-0.96), liver-related mortality 0.79 (0.31-0.98), F3-4 0.83 (0.76-0.87), and fibrotic MASH 0.74 (0.65-0.85). Biochemical and clinical variables had the greatest feature importance overall, compared to US parameters. FIB-4 and the AST:ALT ratio were the highest-ranked biochemical variables, while age was the highest-ranked clinical variable. ML models based on clinical, biochemical, and US-based variables accurately predict adverse MASLD outcomes in this multi-centre cohort. Overall, biochemical variables had the greatest feature importance; US-based features were not substantial predictors of outcomes in this study.
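
A minimal sketch of the modeling recipe described here, a random forest scored by cross-validated AUC with feature importances, is shown below. The feature names and synthetic data are stand-ins for illustration, not the study's cohort.

```python
# Random forest outcome prediction with AUC and feature-importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 457                          # cohort size from the abstract
X = np.column_stack([
    rng.normal(1.5, 0.8, n),     # FIB-4 (synthetic)
    rng.normal(0.9, 0.3, n),     # AST:ALT ratio (synthetic)
    rng.normal(49, 12, n),       # age (synthetic)
    rng.integers(0, 2, n),       # diabetes 0/1 (synthetic)
])
y = rng.integers(0, 2, n)        # outcome label (synthetic)

model = RandomForestClassifier(n_estimators=500, random_state=0)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")

# Feature importances, as used to compare biochemical vs US variables.
model.fit(X, y)
print(dict(zip(["FIB-4", "AST:ALT", "age", "diabetes"],
               model.feature_importances_.round(3))))
```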

Integrating radiomic texture analysis and deep learning for automated myocardial infarction detection in cine-MRI.

Xu W, Shi X

PubMed · Jul 8, 2025
Robust differentiation between infarcted and normal myocardial tissue is essential for improving diagnostic accuracy and personalizing treatment in myocardial infarction (MI). This study proposes a hybrid framework combining radiomic texture analysis with deep learning-based segmentation to enhance MI detection on non-contrast cine cardiac magnetic resonance (CMR) imaging. The approach incorporates radiomic features derived from the Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run Length Matrix (GLRLM) methods into a modified U-Net segmentation network. A three-stage feature selection pipeline was employed, followed by classification using multiple machine learning models. Early and intermediate fusion strategies were integrated into the hybrid architecture. The model was validated on cine-CMR data from the SCD and Kaggle datasets. Joint Entropy, Max Probability, and RLNU emerged as the most discriminative features, with Joint Entropy achieving the highest AUC (0.948). The hybrid model outperformed standalone U-Net in segmentation (Dice = 0.887, IoU = 0.803, HD95 = 4.48 mm) and classification (accuracy = 96.30%, AUC = 0.97, precision = 0.96, recall = 0.94, F1-score = 0.96). Dimensionality reduction via PCA and t-SNE confirmed distinct class separability. Correlation coefficients (r = 0.95-0.98) and Bland-Altman plots demonstrated high agreement between predicted and reference infarct sizes. Integrating radiomic features into a deep learning segmentation pipeline improves MI detection and interpretability in cine-CMR. This scalable and explainable hybrid framework holds potential for broader applications in multimodal cardiac imaging and automated myocardial tissue characterization.
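
As an illustration of the texture features named above, the sketch below computes GLCM joint entropy with scikit-image on a synthetic 8-bit patch; the distance and angle settings are assumptions, since the abstract does not state them.

```python
# GLCM joint entropy, the top-ranked radiomic feature in this study.
import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a myocardial ROI

# Co-occurrences at distance 1, angle 0; normalize to a joint probability table.
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)[:, :, 0, 0]

p = glcm[glcm > 0]                       # drop empty cells to avoid log(0)
joint_entropy = -np.sum(p * np.log2(p))  # Shannon entropy of the co-occurrence table
print(f"GLCM joint entropy: {joint_entropy:.3f} bits")
```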

Uncertainty and normalized glandular dose evaluations in digital mammography and digital breast tomosynthesis with a machine learning methodology.

Sarno A, Massera RT, Paternò G, Cardarelli P, Marshall N, Bosmans H, Bliznakova K

PubMed · Jul 8, 2025
To predict the normalized glandular dose (DgN) coefficients and the related uncertainty in mammography and digital breast tomosynthesis (DBT) using a machine learning algorithm and patient-like digital breast models. 126 patient-like digital breast phantoms were used for DgN Monte Carlo ground truth calculations. An Automatic Relevance Determination Regression algorithm was used to predict DgN from anatomical breast features: compressed breast thickness, glandular fraction by volume, glandular volume, and the center of mass and standard deviation of the glandular tissue distribution in the cranio-caudal direction. A data imputation algorithm was explored to allow omitting the latter two features. 5-fold cross-validation showed that the predictive model estimates DgN with a 1% average difference from the ground truth; this difference was less than 3% in 50% of the cases. The average uncertainty of the estimated DgN values was 9%. Excluding the information related to the glandular distribution increased this uncertainty to 17% without inducing a significant discrepancy in the estimated DgN values, with half of the predicted cases differing from the ground truth by less than 9%. The data imputation algorithm reduced the estimated uncertainty without restoring the original performance. Predictive performance improved with increasing tube voltage. The proposed methodology predicts DgN in mammography and DBT for patient-derived breasts with an uncertainty below 9%; prediction tests reported a 1% average difference from the ground truth, with 50% of the cohort cases differing by less than 5%.
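
The regression step maps naturally onto scikit-learn's ARDRegression, which also exposes a per-prediction standard deviation of the kind underpinning the uncertainty figures above. The sketch below uses synthetic phantom features and a toy DgN relationship purely for illustration.

```python
# ARD regression from breast-model features to DgN, with 5-fold CV.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 126                           # number of phantoms in the study
X = np.column_stack([
    rng.uniform(30, 80, n),       # compressed breast thickness (mm)
    rng.uniform(0.05, 0.5, n),    # glandular fraction by volume
    rng.uniform(50, 800, n),      # glandular volume (cm^3)
    rng.uniform(0.3, 0.7, n),     # glandular center of mass (normalized CC position)
    rng.uniform(0.05, 0.3, n),    # SD of glandular distribution (normalized)
])
dgn = 0.6 - 0.004 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.01, n)  # toy ground truth

model = ARDRegression()
pred = cross_val_predict(model, X, dgn, cv=5)

rel_diff = 100 * (pred - dgn) / dgn
print(f"mean relative difference: {rel_diff.mean():.1f}%")
print(f"|diff| < 3% in {np.mean(np.abs(rel_diff) < 3) * 100:.0f}% of cases")

# ARD also returns a per-prediction SD, the basis for uncertainty estimates.
model.fit(X, dgn)
_, std = model.predict(X, return_std=True)
print(f"mean predictive SD: {std.mean():.4f}")
```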

Adaptive batch-fusion self-supervised learning for ultrasound image pretraining.

Zhang J, Wu X, Liu S, Fan Y, Chen Y, Lyu G, Liu P, Liu Z, He S

PubMed · Jul 8, 2025
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependency on data augmentation, has created a bottleneck in medical self-supervised learning research. Consequently, this paper reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive self-supervised data augmentation method based on batch fusion, and we propose a convolutional embedding block for learning the incremental representation between these batches. We tested five fused data tasks proposed by previous researchers; our method achieved a linear classification protocol accuracy of 94.25% with only 150 epochs of self-supervised feature training in a Vision Transformer (ViT), the best among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features in the self-supervised learning process. The code and weights are available online.
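
A plausible reading of batch-fusion augmentation is that new training views are synthesized by blending images within a batch rather than by hand-crafted per-image transforms. The PyTorch sketch below illustrates that idea; the pairing and weighting rules are assumptions, not the paper's exact method.

```python
# Batch-fusion augmentation sketch: blend each image with a random partner
# from the same batch to create views for the self-supervised objective.
import torch

def batch_fuse(images: torch.Tensor, alpha: float = 0.7) -> torch.Tensor:
    """Blend each image with a randomly chosen partner from the same batch."""
    perm = torch.randperm(images.size(0))           # random pairing within the batch
    lam = torch.empty(images.size(0), 1, 1, 1).uniform_(alpha, 1.0)
    return lam * images + (1 - lam) * images[perm]  # convex combination per sample

batch = torch.randn(16, 1, 224, 224)                # a batch of ultrasound frames
views = batch_fuse(batch)                           # fused views for SSL training
print(views.shape)                                  # torch.Size([16, 1, 224, 224])
```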

The correlation of liquid biopsy genomic data to radiomics in colon, pancreatic, lung and prostatic cancer patients.

Italiano A, Gautier O, Dupont J, Assi T, Dawi L, Lawrance L, Bone A, Jardali G, Choucair A, Ammari S, Bayle A, Rouleau E, Cournede PH, Borget I, Besse B, Barlesi F, Massard C, Lassau N

PubMed · Jul 8, 2025
With the advances in artificial intelligence (AI) and precision medicine, radiomics has emerged as a promising tool in the field of oncology. Radiogenomics integrates radiomics with genomic data, potentially offering a non-invasive method for identifying biomarkers relevant to cancer therapy. Liquid biopsy (LB) has further revolutionized cancer diagnostics by detecting circulating tumor DNA (ctDNA), enabling real-time molecular profiling. This study explores the integration of radiomics and LB to predict genomic alterations in solid tumors, including lung, colon, pancreatic, and prostate cancers. A retrospective study was conducted on 418 patients from the STING trial (NCT04932525), all of whom underwent both LB and CT imaging. Predictive models were developed using an XGBoost logistic classifier, with statistical analysis performed to compare tumor volumes, lesion counts, and affected organs across molecular subtypes. Performance was evaluated using area under the curve (AUC) values and cross-validation techniques. Radiomic models demonstrated moderate-to-good performance in predicting genomic alterations. KRAS mutations were best identified in pancreatic cancer (AUC=0.97), while moderate discrimination was noted in lung (AUC=0.66) and colon cancer (AUC=0.64). EGFR mutations in lung cancer were detected with an AUC of 0.74, while BRAF mutations showed good discriminatory ability in both lung (AUC=0.79) and colon cancer (AUC=0.76). In the radiomics predictive model, AR mutations in prostate cancer showed limited discrimination (AUC=0.63). This study highlights the feasibility of integrating radiomics and LB for non-invasive genomic profiling in solid tumors, demonstrating significant potential in patient stratification and personalized oncology care. While promising, further prospective validation is required to enhance the generalizability of these models.
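
The classification step described here, an XGBoost logistic classifier on radiomic features scored by cross-validated AUC, can be sketched as below; the synthetic features and labels are illustrative stand-ins.

```python
# XGBoost logistic classifier predicting a genomic alteration (e.g., KRAS
# mutated vs wild-type) from radiomic features, scored by cross-validated AUC.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(418, 30))   # radiomic features (volume, lesion counts, texture, ...)
y = rng.integers(0, 2, 418)      # mutation status (synthetic labels)

clf = XGBClassifier(objective="binary:logistic", n_estimators=200,
                    max_depth=3, learning_rate=0.1, eval_metric="logloss")
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f}")
```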

A confidence-guided unsupervised domain adaptation network with pseudo-labeling and deformable CNN-transformer for medical image segmentation.

Zhou J, Xu Y, Liu Z, Pfaender F, Liu W

PubMed · Jul 8, 2025
Unsupervised domain adaptation (UDA) methods have achieved significant progress in medical image segmentation. Nevertheless, the significant differences between the source and target domains remain a daunting barrier, creating an urgent need for more robust cross-domain solutions. Current UDA techniques generally employ a fixed, unvarying feature alignment procedure to reduce inter-domain differences throughout the training process. This rigidity disregards the shifting nature of feature distributions as training progresses, leading to suboptimal performance in boundary delineation and detail retention on the target domain. A novel confidence-guided unsupervised domain adaptation network (CUDA-Net) is introduced to overcome persistent domain gaps, adapt to shifting feature distributions during training, and enhance boundary delineation in the target domain. The proposed network adaptively aligns features by tracking cross-domain distribution shifts throughout training, starting with adversarial alignment at early stages (coarse) and transitioning to pseudo-label-driven alignment at later stages (fine-grained), leading to more accurate segmentation in the target domain. A confidence-weighted mechanism then refines these pseudo labels by prioritizing high-confidence regions while allowing low-confidence areas to be explored gradually, enhancing both label reliability and overall model stability. Experiments on three representative medical image datasets, namely MMWHS17, BraTS2021, and VS-Seg, confirm the superiority of CUDA-Net. Notably, CUDA-Net outperforms eight leading methods in overall segmentation accuracy (Dice) and boundary extraction precision (ASD), offering an efficient and reliable solution for cross-domain medical image segmentation.
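
The confidence-weighted pseudo-labeling mechanism can be sketched as a self-training loss in which each target-domain pixel is weighted by the model's confidence in its own prediction. The PyTorch snippet below is a minimal illustration; the exact weighting rule in CUDA-Net is an assumption here.

```python
# Confidence-weighted pseudo-label loss: high-confidence pixels count fully,
# low-confidence pixels are down-weighted but still explored.
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits: torch.Tensor, threshold: float = 0.8) -> torch.Tensor:
    """Self-training loss on unlabeled target-domain logits of shape (b, classes, h, w)."""
    with torch.no_grad():                            # pseudo labels act as fixed targets
        probs = torch.softmax(logits, dim=1)
        conf, pseudo = probs.max(dim=1)              # per-pixel confidence and label
        weight = torch.where(conf >= threshold,      # trust confident pixels fully,
                             torch.ones_like(conf),  # scale down uncertain ones
                             conf / threshold)
    loss = F.cross_entropy(logits, pseudo, reduction="none")  # (b, h, w)
    return (weight * loss).mean()

target_logits = torch.randn(2, 4, 64, 64, requires_grad=True)  # segmentation logits
print(confidence_weighted_loss(target_logits))
```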

Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed · Jul 8, 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization and robustness, either by collecting diverse CT data for re-training or a few test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
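
The shifted Poisson model referenced here is a standard approximation for pre-log CT measurements: Poisson photon counts plus Gaussian electronic noise are matched in mean and variance by a single Poisson variable shifted by the electronic noise variance. The sketch below simulates that noise model; the dose and noise parameters are illustrative assumptions.

```python
# Shifted-Poisson simulation of pre-log LDCT projection noise.
import numpy as np

rng = np.random.default_rng(0)
i0 = 1e5                                      # incident photon count (dose level)
sigma_e = 10.0                                # electronic noise SD (counts)
line_integral = rng.uniform(0.5, 4.0, 1000)   # ideal projection values p

mean_counts = i0 * np.exp(-line_integral)     # Beer-Lambert attenuated counts
# Shifted Poisson: Poisson(mean + sigma_e^2) - sigma_e^2 matches the first two
# moments of Poisson(mean) + Normal(0, sigma_e^2).
noisy = rng.poisson(mean_counts + sigma_e**2) - sigma_e**2
noisy_proj = -np.log(np.clip(noisy, 1, None) / i0)  # back to the log domain

rmse = np.sqrt(np.mean((noisy_proj - line_integral) ** 2))
print(f"projection RMSE at this dose: {rmse:.4f}")
```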

External Validation of an Upgraded AI Model for Screening Ileocolic Intussusception Using Pediatric Abdominal Radiographs: Multicenter Retrospective Study.

Lee JH, Kim PH, Son NH, Han K, Kang Y, Jeong S, Kim EK, Yoon H, Gatidis S, Vasanawala S, Yoon HM, Shin HJ

PubMed · Jul 8, 2025
Artificial intelligence (AI) is increasingly used in radiology, but its development in pediatric imaging remains limited, particularly for emergent conditions. Ileocolic intussusception is an important cause of acute abdominal pain in infants and toddlers and requires timely diagnosis to prevent complications such as bowel ischemia or perforation. While ultrasonography is the diagnostic standard due to its high sensitivity and specificity, its accessibility may be limited, especially outside tertiary centers. Abdominal radiographs (AXRs), despite their limited sensitivity, are often the first-line imaging modality in clinical practice. In this context, AI could support early screening and triage by analyzing AXRs and identifying patients who require further ultrasonography evaluation. This study aimed to upgrade and externally validate an AI model for screening ileocolic intussusception using pediatric AXRs with multicenter data and to assess the diagnostic performance of the model in comparison with radiologists of varying experience levels with and without AI assistance. This retrospective study included pediatric patients (≤5 years) who underwent both AXRs and ultrasonography for suspected intussusception. Based on the preliminary study from hospital A, the AI model was retrained using data from hospital B and validated with external datasets from hospitals C and D. Diagnostic performance of the upgraded AI model was evaluated using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). A reader study was conducted with 3 radiologists, including 2 trainees and 1 pediatric radiologist, to evaluate diagnostic performance with and without AI assistance. Based on the previously developed AI model trained on 746 patients from hospital A, an additional 431 patients from hospital B (including 143 intussusception cases) were used for further training to develop an upgraded AI model. External validation was conducted using data from hospital C (n=68; 19 intussusception cases) and hospital D (n=90; 30 intussusception cases). The upgraded AI model achieved a sensitivity of 81.7% (95% CI 68.6%-90%) and a specificity of 81.7% (95% CI 73.3%-87.8%), with an AUC of 86.2% (95% CI 79.2%-92.1%) in the external validation set. Without AI assistance, radiologists showed lower performance (overall AUC 64%; sensitivity 49.7%; specificity 77.1%). With AI assistance, radiologists' specificity improved to 93% (difference +15.9%; P<.001), and AUC increased to 79.2% (difference +15.2%; P=.05). The least experienced reader showed the largest improvement in specificity (+37.6%; P<.001) and AUC (+14.7%; P=.08). The upgraded AI model improved diagnostic performance for screening ileocolic intussusception on pediatric AXRs. It effectively enhanced the specificity and overall accuracy of radiologists, particularly those with less experience in pediatric radiology. A user-friendly software platform was introduced to support broader clinical validation, underscoring the potential of AI as a screening and triage tool in pediatric emergency settings.
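
For context, the external-validation metrics quoted here (sensitivity, specificity, AUC) can be computed as below with scikit-learn; the scores and labels are synthetic stand-ins for the hospital C and D cohorts, and the 0.5 operating threshold is an assumption.

```python
# Sensitivity, specificity, and AUC for a screening model on an external set.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 158)            # e.g., hospitals C+D combined (n=158)
scores = np.clip(labels * 0.4 + rng.normal(0.4, 0.2, 158), 0, 1)  # model outputs

auc = roc_auc_score(labels, scores)
preds = (scores >= 0.5).astype(int)         # operating threshold (assumed)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
print(f"AUC={auc:.3f}, sensitivity={tp/(tp+fn):.3f}, specificity={tn/(tn+fp):.3f}")
```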