Page 95 of 99982 results

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

pubmed · May 12 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions with a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgeries or treatment delays, affecting quality of life. The need to improve patient outcomes and reduce the impact of these conditions underscores the significance of developing a non-invasive, accurate diagnostic model. We developed a machine learning model capable of accurately identifying different types of PCNs in a non-invasive manner, using a dataset comprising 449 MRI and 568 CT scans from adult patients, spanning 2009 to 2022. The results indicate that our multimodal machine learning algorithm, which integrates both clinical and imaging data, significantly outperforms single-source algorithms. Specifically, it demonstrated state-of-the-art performance in classifying PCN types, achieving an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved 100% prediction accuracy regardless of whether MRI or CT imaging was used. Our non-invasive multimodal machine learning model thus offers strong support for the early screening of MCNs and represents a significant advancement in PCN diagnosis, with the potential to improve clinical practice and patient outcomes. We also achieved the best results on an additional pancreatic cancer dataset, further demonstrating the generalizability of our model.
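The reported metrics (accuracy 91.2%, precision 91.7%, sensitivity 88.9%, specificity 96.5%) are all ratios over confusion-matrix counts. A minimal sketch with invented counts, not the study's data:

```python
# Hypothetical illustration: the metrics in the abstract above reduce to
# simple ratios over confusion-matrix counts. The counts here are made up.

def binary_metrics(tp, fp, tn, fn):
    """Return accuracy, precision, sensitivity (recall), specificity."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, precision, sensitivity, specificity

acc, prec, sens, spec = binary_metrics(tp=40, fp=5, tn=50, fn=5)
print(acc, prec, sens, spec)
```

In the multi-class PCN setting these would typically be computed one-vs-rest per class and then averaged, which is presumably what the reported "average" figures reflect.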

Biological markers and psychosocial factors predict chronic pain conditions.

Fillingim M, Tanguay-Sabourin C, Parisien M, Zare A, Guglietti GV, Norman J, Petre B, Bortsov A, Ware M, Perez J, Roy M, Diatchenko L, Vachon-Presseau E

pubmed · May 12 2025
Chronic pain is a multifactorial condition presenting significant diagnostic and prognostic challenges. Biomarkers for the classification and the prediction of chronic pain are therefore critically needed. Here, in this multidataset study of over 523,000 participants, we applied machine learning to multidimensional biological data from the UK Biobank to identify biomarkers for 35 medical conditions associated with pain (for example, rheumatoid arthritis and gout) or self-reported chronic pain (for example, back pain and knee pain). Biomarkers derived from blood immunoassays, brain and bone imaging, and genetics were effective in predicting medical conditions associated with chronic pain (area under the curve (AUC) 0.62-0.87) but not self-reported pain (AUC 0.50-0.62). Notably, all biomarkers worked in synergy with psychosocial factors, accurately predicting both medical conditions (AUC 0.69-0.91) and self-reported pain (AUC 0.71-0.92). These findings underscore the necessity of adopting a holistic approach in the development of biomarkers to enhance their clinical utility.
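The AUCs quoted above (e.g., 0.62-0.87) have a direct probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative one. A small sketch of this Mann-Whitney formulation, using invented scores:

```python
# Sketch of AUC as the Mann-Whitney win rate between positive and negative
# cases. Scores below are invented, not taken from the study.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))   # 8 of 9 pairs ranked correctly
```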

Generation of synthetic CT from MRI for MRI-based attenuation correction of brain PET images using radiomics and machine learning.

Hoseinipourasl A, Hossein-Zadeh GA, Sheikhzadeh P, Arabalibeik H, Alavijeh SK, Zaidi H, Ay MR

pubmed · May 12 2025
Accurate quantitative PET imaging in neurological studies requires proper attenuation correction. MRI-guided attenuation correction in PET/MRI remains challenging owing to the lack of a direct relationship between MRI intensities and linear attenuation coefficients. This study aims to generate accurate patient-specific synthetic CT (SCT) volumes, attenuation maps, and attenuation correction factor (ACF) sinograms with continuous values by combining machine learning algorithms, image processing techniques, and voxel-based radiomics feature extraction. Brain MR images of ten healthy volunteers were acquired using IR-pointwise encoding time reduction with radial acquisition (IR-PETRA) and VIBE-Dixon techniques. SCT images, attenuation maps, and ACF sinograms were generated using LightGBM, a fast and accurate machine learning algorithm, from radiomics-based and image processing-based feature maps of the MR images. Additionally, ultra-low-dose CT images of the same volunteers were acquired and served as the standard of reference for evaluation. The SCT images, attenuation maps, and ACF sinograms were assessed using qualitative and quantitative evaluation metrics and compared against their corresponding reference images, attenuation maps, and ACF sinograms. The voxel-wise and volume-wise comparison between synthetic and reference CT images yielded an average mean absolute error of 60.75 ± 8.8 HU, an average structural similarity index of 0.88 ± 0.02, and an average peak signal-to-noise ratio of 32.83 ± 2.74 dB. Additionally, we compared MRI-based attenuation maps and ACF sinograms with their CT-based counterparts, revealing average normalized mean absolute errors of 1.48% and 1.33%, respectively. Quantitative assessments indicated high correlation and similarity between LightGBM-synthesized and reference CT images. Moreover, cross-validation results demonstrated the feasibility of producing accurate SCT images, MRI-based attenuation maps, and ACF sinograms. These findings may facilitate the implementation of MRI-based attenuation correction on PET/MRI systems and dedicated brain PET scanners, with lower computational time using CPU-based processors.
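The voxel-wise metrics reported above (MAE in HU, PSNR in dB) can be sketched on toy data; here 1-D lists stand in for CT volumes, and the 4095 HU data range passed to the PSNR function is an assumption of this example, not a value from the paper:

```python
import math

# Sketch of the synthetic-CT evaluation metrics from the abstract above:
# mean absolute error (MAE, in HU) and peak signal-to-noise ratio (PSNR, dB).
# Toy 1-D lists stand in for 3-D CT volumes; values are invented.

def mae(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def psnr(pred, ref, data_range):
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    return 10 * math.log10(data_range ** 2 / mse)

ref  = [0, 100, 200, 300]    # reference CT, HU
pred = [10, 90, 210, 290]    # synthetic CT, HU
print(mae(pred, ref))        # 10.0
print(psnr(pred, ref, 4095))
```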

Cardiac imaging for the detection of ischemia: current status and future perspectives.

Rodriguez C, Pappas L, Le Hong Q, Baquero L, Nagel E

pubmed · May 12 2025
Coronary artery disease is the leading cause of mortality worldwide, mandating early detection, appropriate treatment, and follow-up. Noninvasive cardiac imaging techniques allow detection of obstructive coronary heart disease by direct visualization of the arteries or of myocardial blood flow reduction. These techniques have made remarkable progress since their introduction, achieving high diagnostic precision. This review evaluates these noninvasive cardiac imaging techniques, providing a thorough overview of diagnostic decision-making for the detection of ischemia. We discuss the latest advances in the field, including computed tomography angiography, single-photon emission computed tomography, positron emission tomography, and cardiac magnetic resonance; their main advantages and disadvantages; their most appropriate uses; and their prospects. For the review, we analyzed the literature from 2009 to 2024 on noninvasive cardiac imaging in the diagnosis of coronary artery disease, including the 78 publications considered most relevant: landmark trials, review articles, and guidelines. Progress in cardiac imaging is anticipated to overcome various limitations, such as high costs, radiation exposure, artifacts, and interobserver variability in interpretation. It is expected to lead to more automated scanning processes and, with the assistance of artificial intelligence-driven post-processing software, higher accuracy and reproducibility.

Machine learning approaches for classifying major depressive disorder using biological and neuropsychological markers: A meta-analysis.

Zhang L, Jian L, Long Y, Ren Z, Calhoun VD, Passos IC, Tian X, Xiang Y

pubmed · May 10 2025
Traditional diagnostic methods for major depressive disorder (MDD), which rely on subjective assessments, may compromise diagnostic accuracy. In contrast, machine learning models have the potential to classify and diagnose MDD more effectively, reducing the risk of misdiagnosis associated with conventional methods. The aim of this meta-analysis is to evaluate the overall classification accuracy of machine learning models in MDD and to examine the effects of machine learning algorithms, biomarkers, diagnostic comparison groups, validation procedures, and participant age on classification performance. As of September 2024, a total of 176 studies were included in the meta-analysis, encompassing 60,926 participants. A random-effects model was applied to the extracted data, yielding an overall classification accuracy of 0.825 (95% CI [0.810; 0.839]). Convolutional neural networks significantly outperformed support vector machines (SVMs) when using electroencephalography and magnetoencephalography data. Additionally, SVMs demonstrated significantly better performance with functional magnetic resonance imaging data compared to graph neural networks and Gaussian process classification. Sample size was negatively correlated with classification accuracy. Furthermore, evidence of publication bias was detected. Therefore, while this study indicates that machine learning models show high accuracy in distinguishing MDD from healthy controls and other psychiatric disorders, further research is required before these findings can be generalized to large-scale clinical practice.
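A pooled estimate under a random-effects model, like the 0.825 accuracy above, is commonly obtained with the DerSimonian-Laird estimator. The sketch below applies it to invented study accuracies and variances; it is not the authors' exact analysis:

```python
import math

# Hedged sketch of random-effects pooling in the spirit of the meta-analysis
# above: the DerSimonian-Laird estimator on invented effects and variances.

def dersimonian_laird(effects, variances):
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, tau2, se

pooled, tau2, se = dersimonian_laird(
    effects=[0.80, 0.85, 0.78, 0.90], variances=[0.002, 0.001, 0.003, 0.002])
print(pooled, tau2, se)
```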

Application of artificial intelligence-based three dimensional digital reconstruction technology in precision treatment of complex total hip arthroplasty.

Zheng Q, She H, Zhang Y, Zhao P, Liu X, Xiang B

pubmed · May 10 2025
To evaluate the predictive ability of AI HIP in determining the size and position of prostheses during complex total hip arthroplasty (THA), and to investigate the factors influencing the accuracy of preoperative planning predictions. From April 2021 to December 2023, patients with complex hip joint diseases were divided into an AI preoperative planning group (n = 29) and an X-ray preoperative planning group (n = 27). Postoperative X-rays were used to measure the acetabular anteversion angle, abduction angle, and tip-to-sternum distance; intraoperative duration, blood loss, and planning time were recorded; and postoperative Harris Hip Scores (at 2 weeks, 3 months, and 6 months) and visual analogue scale (VAS) pain scores (at 2 weeks and at final follow-up) were collected to analyze clinical outcomes. On the acetabular side, the accuracy of AI preoperative planning was higher than that of X-ray preoperative planning (75.9% vs. 44.4%, P = 0.016). On the femoral side, AI preoperative planning likewise showed higher accuracy (85.2% vs. 59.3%, P = 0.033). The AI preoperative planning group showed superior outcomes in reducing bilateral leg length discrepancy (LLD), decreasing operative time and intraoperative blood loss, early postoperative recovery, and pain control compared with the X-ray preoperative planning group (P < 0.05). No significant differences were observed between the groups regarding bilateral femoral offset (FO) differences, bilateral combined offset (CO) differences, abduction angle, anteversion angle, or tip-to-sternum distance. Factors such as gender, age, affected side, comorbidities, body mass index (BMI) classification, and bone mineral density did not affect the prediction accuracy of AI HIP preoperative planning. Artificial intelligence-based 3D planning can be effectively utilized for preoperative planning in complex THA. Compared to X-ray templating, AI demonstrates superior accuracy in prosthesis measurement and provides significant clinical benefits, particularly in early postoperative recovery.
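As an illustrative reconstruction of the acetabular-side comparison (75.9% of 29 AI-planned vs. 44.4% of 27 X-ray-planned hips), a plain 2x2 Pearson chi-square test on the implied counts can be sketched as below; the abstract does not name the test actually used, so this choice is an assumption:

```python
import math

# Illustrative 2x2 chi-square check of the acetabular-side accuracy
# comparison reported above. The authors' exact statistical test is not
# stated in the abstract; this Pearson chi-square is an assumption.

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) and its p-value."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))   # chi-square(1 df) survival function
    return chi2, p

# accurate / inaccurate counts: AI group 22/7 (75.9%), X-ray group 12/15 (44.4%)
chi2, p = chi2_2x2(22, 7, 12, 15)
print(chi2, p)
```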

Preoperative radiomics models using CT and MRI for microsatellite instability in colorectal cancer: a systematic review and meta-analysis.

Capello Ingold G, Martins da Fonseca J, Kolenda Zloić S, Verdan Moreira S, Kago Marole K, Finnegan E, Yoshikawa MH, Daugėlaitė S, Souza E Silva TX, Soato Ratti MA

pubmed · May 10 2025
Microsatellite instability (MSI) is a novel predictive biomarker for chemotherapy and immunotherapy response, as well as a prognostic indicator in colorectal cancer (CRC). The current standard for MSI identification is polymerase chain reaction (PCR) testing or immunohistochemical analysis of tumor biopsy samples. However, tumor heterogeneity and procedural complications pose challenges to these techniques. CT- and MRI-based radiomics models offer a promising non-invasive alternative. A systematic search of PubMed, Embase, Cochrane Library, and Scopus was conducted to identify studies evaluating the diagnostic performance of CT- and MRI-based radiomics models for detecting MSI status in CRC. Pooled area under the curve (AUC), sensitivity, and specificity were calculated in RStudio using a random-effects model. Forest plots and a summary ROC curve were generated. Heterogeneity was assessed using the I² statistic and explored through sensitivity analyses, threshold effect assessment, subgroup analyses, and meta-regression. Seventeen studies with a total of 6,045 subjects were included in the analysis. All studies extracted radiomic features from CT or MRI images of CRC patients with confirmed MSI status to train machine learning models. The pooled AUC was 0.815 (95% CI: 0.784-0.840) for CT-based studies and 0.900 (95% CI: 0.819-0.943) for MRI-based studies. Significant heterogeneity was identified and addressed through extensive analysis. Radiomics models represent a novel and promising tool for predicting MSI status in CRC patients. These findings may serve as a foundation for future studies aimed at developing and validating improved models, ultimately enhancing the diagnosis, treatment, and prognosis of colorectal cancer.
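The I² statistic used above to quantify heterogeneity is the share of total variability attributable to between-study differences rather than chance, I² = max(0, (Q - df)/Q). A minimal sketch with invented study effects and variances:

```python
# Sketch of the I² heterogeneity statistic mentioned in the abstract above:
# Q is Cochran's heterogeneity statistic under fixed-effect weights, and
# I² = max(0, (Q - df) / Q). Study inputs here are invented.

def i_squared(effects, variances):
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

print(i_squared([0.78, 0.82, 0.90, 0.70], [0.001, 0.002, 0.001, 0.003]))
```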

Evaluating an information theoretic approach for selecting multimodal data fusion methods.

Zhang T, Ding R, Luong KD, Hsu W

pubmed · May 10 2025
Interest has grown in combining radiology, pathology, genomic, and clinical data to improve the accuracy of diagnostic and prognostic predictions toward precision health. However, most existing works choose their datasets and modeling approaches empirically and in an ad hoc manner. A prior study proposed four partial information decomposition (PID)-based metrics to provide a theoretical understanding of multimodal data interactions: redundancy, uniqueness of each modality, and synergy. However, these metrics have only been evaluated in a limited collection of biomedical data, and the existing work does not elucidate the effect of parameter selection when calculating the PID metrics. In this work, we evaluate PID metrics on a wider range of biomedical data, including clinical, radiology, pathology, and genomic data, and propose potential improvements to the PID metrics. We apply the PID metrics to seven different modality pairs across four distinct cohorts (datasets). We compare and interpret trends in the resulting PID metrics and downstream model performance in these multimodal cohorts. The downstream tasks being evaluated include predicting the prognosis (either overall survival or recurrence) of patients with non-small cell lung cancer, prostate cancer, and glioblastoma. We found that, while PID metrics are informative, solely relying on these metrics to decide on a fusion approach does not always yield a machine learning model with optimal performance. Of the seven different modality pairs, three had poor (0%), three had moderate (66%-89%), and only one had perfect (100%) consistency between the PID values and model performance. We propose two improvements to the PID metrics (determining the optimal parameters and uncertainty estimation) and identify areas where PID metrics could be further improved. The current PID metrics are not accurate enough for estimating the multimodal data interactions and need to be improved before they can serve as a reliable tool.
We propose improvements and provide suggestions for future work. Code: https://github.com/zhtyolivia/pid-multimodal.
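As a loose illustration of the PID framing, the sketch below computes interaction information I(X1, X2; Y) - I(X1; Y) - I(X2; Y) on a discrete toy example, whose positive values are often read as synergy and negative values as redundancy; this is a simpler quantity than the full PID decomposition evaluated in the paper:

```python
import math
from collections import Counter

# Loose sketch in the spirit of the PID metrics above: interaction
# information on a toy XOR example, where each input alone carries no
# information about y but the pair determines it completely (pure synergy).
# This is NOT the full PID decomposition used in the paper.

def mi(pairs):
    """Mutual information (nats) between the two coordinates of `pairs`."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # (x1, x2, y = x1 XOR x2)
i_x1 = mi([(x1, y) for x1, x2, y in data])
i_x2 = mi([(x2, y) for x1, x2, y in data])
i_joint = mi([((x1, x2), y) for x1, x2, y in data])
print(i_x1, i_x2, i_joint - i_x1 - i_x2)   # synergy term equals log(2)
```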

Improving Generalization of Medical Image Registration Foundation Model

Jing Hu, Kaiwei Yu, Hongjiang Xian, Shu Hu, Xin Wang

arxiv preprint · May 10 2025
Deformable registration is a fundamental task in medical image processing, aiming to achieve precise alignment by establishing nonlinear correspondences between images. Traditional methods offer good adaptability and interpretability but are limited by computational efficiency. Although deep learning approaches have significantly improved registration speed and accuracy, they often lack flexibility and generalizability across different datasets and tasks. In recent years, foundation models have emerged as a promising direction, leveraging large and diverse datasets to learn universal features and transformation patterns for image registration, thus demonstrating strong cross-task transferability. However, these models still face challenges in generalization and robustness when encountering novel anatomical structures, varying imaging conditions, or unseen modalities. To address these limitations, this paper incorporates Sharpness-Aware Minimization (SAM) into foundation models to enhance their generalization and robustness in medical image registration. By optimizing the flatness of the loss landscape, SAM improves model stability across diverse data distributions and strengthens its ability to handle complex clinical scenarios. Experimental results show that foundation models integrated with SAM achieve significant improvements in cross-dataset registration performance, offering new insights for the advancement of medical image registration technology. Our code is available at https://github.com/Promise13/fm_sam.
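The SAM update the paper integrates can be sketched independently of any registration model: take an ascent step of radius rho toward the locally worst-case weights, then descend using the gradient computed there. A toy version on a 2-D quadratic with analytic gradients (the learning rate and rho are arbitrary choices of this sketch, not values from the paper):

```python
# Rough sketch of a Sharpness-Aware Minimization (SAM) step: perturb the
# weights toward the locally worst direction, then descend using the
# gradient at the perturbed point. Shown on a toy quadratic loss.

def grad(w):
    # gradient of f(w) = w0**2 + 10 * w1**2
    return [2 * w[0], 20 * w[1]]

def sam_step(w, lr=0.02, rho=0.05):
    g = grad(w)
    norm = sum(gi * gi for gi in g) ** 0.5 or 1.0
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]   # ascent step
    g_adv = grad(w_adv)                                      # sharpness-aware gradient
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]        # descent step

w = [1.0, 1.0]
for _ in range(200):
    w = sam_step(w)
print(w)   # both coordinates shrink toward the minimum at the origin
```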

Deeply Explainable Artificial Neural Network

David Zucker

arxiv preprint · May 10 2025
While deep learning models have demonstrated remarkable success in numerous domains, their black-box nature remains a significant limitation, especially in critical fields such as medical image analysis and inference. Existing explainability methods, such as SHAP, LIME, and Grad-CAM, are typically applied post hoc, adding computational overhead and sometimes producing inconsistent or ambiguous results. In this paper, we present the Deeply Explainable Artificial Neural Network (DxANN), a novel deep learning architecture that embeds explainability ante hoc, directly into the training process. Unlike conventional models that require external interpretation methods, DxANN is designed to produce per-sample, per-feature explanations as part of the forward pass. Built on a flow-based framework, it enables both accurate predictions and transparent decision-making, and is particularly well-suited for image-based tasks. While our focus is on medical imaging, the DxANN architecture is readily adaptable to other data modalities, including tabular and sequential data. DxANN marks a step forward toward intrinsically interpretable deep learning, offering a practical solution for applications where trust and accountability are essential.
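The ante-hoc idea, explanations emitted by the forward pass itself rather than bolted on afterwards, can be illustrated with a deliberately simple additive scorer; DxANN itself is a flow-based deep architecture, which this sketch does not attempt to reproduce:

```python
# Toy illustration of the ante-hoc explainability idea described above:
# a model whose forward pass returns per-feature contributions alongside
# the prediction. A linear scorer is used purely for clarity; it is not
# the DxANN architecture.

class AdditiveExplainer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def forward(self, x):
        contributions = [w * xi for w, xi in zip(self.weights, x)]
        prediction = self.bias + sum(contributions)
        return prediction, contributions   # explanation is part of the output

model = AdditiveExplainer(weights=[0.5, -1.0, 2.0], bias=0.1)
pred, contribs = model.forward([2.0, 1.0, 0.5])
print(pred, contribs)   # contributions plus bias sum exactly to the prediction
```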