
Radiomics and deep learning characterisation of liver malignancies in CT images - A systematic review.

Yahaya BS, Osman ND, Karim NKA, Appalanaido GK, Isa IS

pubmed logopapersJun 3 2025
Computed tomography (CT) has been widely used as an effective tool for liver imaging due to its high spatial resolution and ability to differentiate tissue densities, which contribute to comprehensive image analysis. Recent advancements in artificial intelligence (AI) have promoted the role of machine learning (ML) in managing liver cancers by predicting or classifying tumours using mathematical algorithms. Deep learning (DL), a subset of ML, has expanded these capabilities through convolutional neural networks (CNNs) that analyse large datasets automatically. This review examines the methods, achievements, limitations, and performance outcomes of ML-based radiomics and DL models for liver malignancies in CT imaging. A systematic search for full-text articles in English on CT radiomics and DL in liver cancer analysis was conducted in the PubMed, Scopus, Science Citation Index, and Cochrane Library databases between 2020 and 2024 using the keywords machine learning, radiomics, deep learning, computed tomography, and liver cancer, together with associated MeSH terms. PRISMA guidelines were used to identify and screen studies for inclusion. A total of 49 studies were included: 17 radiomics, 24 DL, and 8 combined DL/radiomics studies. Radiomics has predominantly been used for predictive analysis, while DL has been applied extensively to automatic liver and tumour segmentation, with a recent surge in studies integrating both techniques. Despite the growing popularity of DL methods, classical radiomics models remain relevant and are often preferred when performance is similar, owing to their lower computational and data requirements. Model performance keeps improving, but challenges such as data scarcity and the lack of standardised protocols persist.
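The classical radiomics workflow the review describes (hand-crafted features, feature selection, a conventional low-capacity classifier) can be sketched as follows. All data here is synthetic and the feature selector and classifier are illustrative choices, not those of any reviewed study:

```python
# Minimal radiomics-style pipeline sketch: select the most discriminative
# features, then fit a simple classifier. Synthetic data stands in for
# lesion-level radiomic feature matrices.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))          # 120 lesions x 50 radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),    # keep 10 most discriminative features
    ("clf", LogisticRegression(max_iter=1000)),  # low data/compute needs, as the review notes
])
pipe.fit(X, y)
train_acc = pipe.score(X, y)
```

Pipelines like this are one reason classical radiomics remains competitive: the whole model fits on small cohorts without GPU training.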

Artificial intelligence in bone metastasis analysis: Current advancements, opportunities and challenges.

Afnouch M, Bougourzi F, Gaddour O, Dornaika F, Ahmed AT

pubmed logopapersJun 3 2025
Artificial intelligence is transforming medical imaging, particularly the analysis of bone metastases (BM), a serious complication of advanced cancers. Machine learning and deep learning techniques offer new opportunities to improve the detection, recognition, and segmentation of bone metastases, yet challenges such as limited data, interpretability, and clinical validation remain. Following PRISMA guidelines, we reviewed artificial intelligence methods and applications for bone metastasis analysis across major imaging modalities, including CT, MRI, PET, SPECT, and bone scintigraphy. The survey covers traditional machine learning models and modern deep learning architectures such as CNNs and transformers. We also examined available datasets and their role in developing artificial intelligence in this field. Artificial intelligence models have achieved strong performance across tasks and modalities, with Convolutional Neural Network (CNN) and Transformer architectures performing particularly well. However, limitations persist, including data imbalance, overfitting risks, and the need for greater transparency. Clinical translation is also challenged by regulatory and validation hurdles. Artificial intelligence holds strong potential to improve BM diagnosis and streamline radiology workflows. To reach clinical maturity, future work must address data diversity, model explainability, and large-scale validation, all critical steps toward trusted integration into routine oncology care.

Deep Learning-Based Opportunistic CT Osteoporosis Screening and Establishment of Normative Values

Westerhoff, M., Gyftopoulos, S., Dane, B., Vega, E., Murdock, D., Lindow, N., Herter, F., Bousabarah, K., Recht, M. P., Bredella, M. A.

medrxiv logopreprintJun 3 2025
Background: Osteoporosis is underdiagnosed and undertreated, prompting the exploration of opportunistic screening using CT and artificial intelligence (AI). Purpose: To develop a reproducible deep learning-based convolutional neural network that automatically places a 3D region of interest (ROI) in trabecular bone, to develop a correction method that normalizes attenuation across different CT protocols and scanner models, and to establish thresholds for osteoporosis in a large, diverse population. Methods: A deep learning-based method was developed to automatically quantify trabecular attenuation using a 3D ROI of the thoracic and lumbar spine on chest, abdomen, or spine CTs, adjusted for different tube voltages and scanner models. Normative values and osteoporosis thresholds for spinal trabecular attenuation were established across a diverse population, stratified by age, sex, race, and ethnicity, using the prevalence of osteoporosis reported by the WHO. Results: 538,946 CT examinations from 283,499 patients (mean age 65 ± 15 years; 51.2% women and 55.5% White), performed on 50 scanner models using six different tube voltages, were analyzed. Hounsfield units at 80 kVp versus 120 kVp differed by 23%, while different scanner models produced differences of less than 10%. Automated ROI placement in 1496 vertebrae was validated by manual radiologist review, demonstrating >99% agreement. Mean trabecular attenuation was higher in young women (<50 years) than in young men (p<.001) and decreased with age, with a steeper decline in postmenopausal women. In patients older than 50 years, trabecular attenuation was higher in males than in females (p<.001). Trabecular attenuation was highest in Blacks, followed by Asians, and lowest in Whites (p<.001). The threshold for diagnosing osteoporosis at L1 was 80 HU.
Conclusion: Deep learning-based automated opportunistic osteoporosis screening can identify patients with low bone mineral density who undergo CT scans for clinical purposes on different scanners and protocols. Key Results:
- In a study of 538,946 CT examinations performed in 283,499 patients using different scanner models and imaging protocols, an automated deep learning-based convolutional neural network was able to accurately place a three-dimensional region of interest within thoracic and lumbar vertebrae to measure trabecular attenuation.
- Tube voltage had a larger influence on attenuation values (23%) than scanner model (<10%).
- A threshold of 80 HU was identified for L1 to diagnose osteoporosis using an automated three-dimensional region of interest.
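The screening logic implied by the abstract can be sketched in a few lines: normalize an attenuation value measured at 80 kVp toward the 120 kVp scale (the abstract reports a 23% difference between the two), then flag osteoporosis below the 80 HU threshold at L1. The direction and form of the correction and all function names are illustrative assumptions, not the paper's actual method:

```python
# Hedged sketch of opportunistic osteoporosis flagging from trabecular HU.
# Assumes 80 kVp values read ~23% higher than 120 kVp values, which is an
# illustrative reading of the reported difference, not the paper's correction.
def normalize_hu(hu: float, kvp: int) -> float:
    """Map attenuation measured at 80 kVp onto the 120 kVp reference scale."""
    if kvp == 80:
        return hu / 1.23
    return hu

OSTEOPOROSIS_THRESHOLD_HU = 80.0  # L1 threshold reported in the abstract

def flag_osteoporosis(hu: float, kvp: int = 120) -> bool:
    return normalize_hu(hu, kvp) < OSTEOPOROSIS_THRESHOLD_HU
```

For example, a vertebra measuring 95 HU on an 80 kVp scan would normalize to roughly 77 HU and be flagged, while the same raw value at 120 kVp would not.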

SASWISE-UE: Segmentation and synthesis with interpretable scalable ensembles for uncertainty estimation.

Chen W, McMillan AB

pubmed logopapersJun 2 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, this framework enables end-users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty based on output disagreement. Implemented using U-Net and UNETR models for segmentation and synthesis tasks, the approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a mean absolute error of 88.17 HU in synthesis, improved from 89.43 HU through pruning. Additionally, the framework was evaluated under image corruption and data undersampling, maintaining the correlation between uncertainty and error, which highlights its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models in a range of imaging tasks.
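The core idea (run several sub-models on one input, fuse their outputs, and use their disagreement as an uncertainty map) can be sketched with plain NumPy. The "sub-models" below are stand-in functions, not the paper's U-Net/UNETR checkpoints:

```python
# Ensemble-disagreement uncertainty sketch: mean of sub-model outputs is the
# fused prediction; per-pixel standard deviation is the uncertainty map.
import numpy as np

def ensemble_predict(x, submodels):
    outputs = np.stack([m(x) for m in submodels])  # (n_models, *spatial)
    fused = outputs.mean(axis=0)                   # fused final output
    uncertainty = outputs.std(axis=0)              # disagreement map
    return fused, uncertainty

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8))                        # toy "image"
# Stand-in sub-models: the same input perturbed by different noise levels.
submodels = [lambda im, s=s: im + rng.normal(scale=s, size=im.shape)
             for s in (0.01, 0.02, 0.03)]
fused, unc = ensemble_predict(x, submodels)
```

Regions where the sub-models agree get near-zero uncertainty, which is exactly the signal an end-user would inspect before trusting a segmentation or synthesis output.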

ViTU-net: A hybrid deep learning model with patch-based LSB approach for medical image watermarking and authentication using a hybrid metaheuristic algorithm.

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

pubmed logopapersJun 2 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found that over 70% of healthcare institutions faced medical image data breaches. Yet current methods falter in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates cutting-edge techniques to address these multifaceted challenges in medical image security and analysis. The model's core component, the Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while the U-Net decoder enhances image reconstruction; both components leverage the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. Additionally, the patch-based LSB embedding mechanism embeds reversible fragile watermarks within each patch of the segmented non-diagnostic region (RONI), guided dynamically by adaptive masks derived from the attention mechanism; this minimizes the impact on diagnostic accuracy while maximizing precision and making optimal use of spatial information. The hybrid metaheuristic optimization algorithm, TuniBee Fusion, dynamically optimizes watermarking parameters, striking a balance between exploration and exploitation, thereby enhancing watermarking efficiency and robustness. The incorporation of advanced cryptographic techniques, including SHA-512 hashing and AES encryption, fortifies the model's security, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR value of 60.7 dB, along with an NCC value of 0.9999 and an SSIM value of 1.00, underscores its effectiveness in preserving image quality, security, and diagnostic accuracy. Robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
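Standard LSB watermarking, the building block the abstract names, writes watermark bits into the least significant bit of pixels inside a non-diagnostic region, leaving pixel values changed by at most 1. The mask layout below is illustrative, not ViTU-Net's attention-derived RONI:

```python
# LSB embedding/extraction sketch restricted to a masked region (here, a toy
# "RONI"). Diagnostic pixels (mask False) are never touched.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Write watermark bits into the LSB of pixels where mask is True."""
    out = image.copy()
    idx = np.flatnonzero(mask)[: bits.size]
    flat = out.reshape(-1)                    # view into out
    flat[idx] = (flat[idx] & 0xFE) | bits     # clear LSB, then set watermark bit
    return out

def extract_lsb(image: np.ndarray, mask: np.ndarray, n: int) -> np.ndarray:
    idx = np.flatnonzero(mask)[:n]
    return image.reshape(-1)[idx] & 1

img = np.full((4, 4), 200, dtype=np.uint8)    # toy 8-bit image
roni = np.zeros((4, 4), dtype=bool)
roni[0, :] = True                             # top row treated as non-diagnostic
wm = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = embed_lsb(img, wm, roni)
recovered = extract_lsb(stego, roni, wm.size)
```

Because only the LSB changes, the distortion bound is ±1 per embedded pixel, which is why LSB schemes score so highly on PSNR while remaining fragile to tampering.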

Inferring single-cell spatial gene expression with tissue morphology via explainable deep learning

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

biorxiv logopreprintJun 2 2025
Deep learning models trained on spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico spatial multi-cell gene expression methods using histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, that maps histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated on Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure the reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. SPiRiT model interpretation revealed the tissue areas and attention details it uses to predict the expression of genes such as markers of invasive cancer cells. In an apples-to-apples comparison with ST-Net, SPiRiT improved predictive accuracy by 40%. These gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT demonstrates the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.

Radiogenomics and Radiomics of Skull Base Chordoma: Classification of Novel Radiomic Subgroups and Prediction of Genetic Signatures and Clinical Outcomes.

Gersey ZC, Zenkin S, Mamindla P, Amjadzadeh M, Ak M, Plute T, Peddagangireddy V, Abdallah H, Muthiah N, Wang EW, Snyderman C, Gardner PA, Colen RR, Zenonos GA

pubmed logopapersJun 2 2025
Chordomas are rare, aggressive tumors of notochordal origin, commonly affecting the spine and skull base. Skull Base Chordomas (SBCs) comprise approximately 39% of cases, with an incidence of less than 1 per million annually in the U.S. Prognosis remains poor due to resistance to chemotherapy, often requiring extensive surgical resection and adjuvant radiotherapy. Current classification methods based on chromosomal deletions are invasive and costly, presenting a need for alternative diagnostic tools. Radiomics allows for non-invasive SBC diagnosis and treatment planning. We developed and validated radiomic-based models using MRI data to predict Overall Survival (OS) and Progression-Free Survival following Surgery (PFSS) in SBC patients. Machine learning classifiers, including eXtreme Gradient Boosting (XGBoost), were employed along with feature selection techniques. Unsupervised clustering identified radiomic-based subgroups, which were correlated with chromosomal deletions and clinical outcomes. Our XGBoost model demonstrated superior predictive performance, achieving an area under the curve (AUC) of 83.33% for OS and 80.36% for PFSS, outperforming other classifiers. Radiomic clustering revealed two SBC groups with differing survival and molecular characteristics, strongly correlating with chromosomal deletion profiles. These findings indicate that radiomics can non-invasively characterize SBC phenotypes and stratify patients by prognosis. Radiomics shows promise as a reliable, non-invasive tool for the prognostication and classification of SBCs, minimizing the need for invasive genetic testing and supporting personalized treatment strategies.
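The unsupervised step described (clustering radiomic feature vectors into two subgroups before correlating them with outcomes) follows a standard pattern. The sketch below uses synthetic features and k-means with k=2 to mirror the two reported SBC groups; it is a generic illustration, not the authors' pipeline:

```python
# Radiomic subgroup discovery sketch: standardize feature vectors, then
# cluster into two groups. Synthetic data with two latent populations stands
# in for the MRI radiomic features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
features = np.vstack([rng.normal(0.0, 1.0, size=(30, 20)),   # latent group A
                      rng.normal(3.0, 1.0, size=(30, 20))])  # latent group B
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

In the study's workflow, the resulting cluster labels would then be tested for association with chromosomal deletion profiles and survival, which is what makes the subgroups clinically meaningful.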

Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models.

Lian C, Zhou HY, Liang D, Qin J, Wang L

pubmed logopapersJun 2 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks, such as retrieval and zero-shot classification. However, conventional cross-modal contrastive learning (CLIP-based) methods suffer from suboptimal visual representation capabilities, which also limits their effectiveness in vision-language alignment. In contrast, although the models pretrained via multimodal masked modeling struggle with direct cross-modal matching, they excel in visual representation. To address this contradiction, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that utilizes only about 8% of the trainable parameters and less than 1/5 of the computational consumption required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4% absolute points in text-to-image accuracy and approximately 6% absolute points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
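The retrieval accuracies reported here are typically scored the same way across vision-language models: embed images and texts, compute cosine similarities, and count how often the matched pair ranks first. The sketch below illustrates that scoring with random stand-in embeddings, not ALTA's:

```python
# Top-1 retrieval accuracy sketch for image-text matching: row i of the
# similarity matrix should peak at column i when pairs are aligned.
import numpy as np

def top1_retrieval_acc(img_emb, txt_emb):
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T                      # (n_images, n_texts) cosine similarities
    return float((sim.argmax(axis=1) == np.arange(len(sim))).mean())

rng = np.random.default_rng(3)
emb = rng.normal(size=(16, 32))            # 16 stand-in image embeddings
# Matched "text" embeddings: the image embeddings plus small perturbations.
acc_matched = top1_retrieval_acc(emb, emb + 0.01 * rng.normal(size=emb.shape))
```

Well-aligned pairs drive this metric toward 1.0; the absolute-point gains ALTA reports are improvements in exactly this quantity.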

Exploring <i>SLC25A42</i> as a Radiogenomic Marker from the Perioperative Stage to Chemotherapy in Hepatitis-Related Hepatocellular Carcinoma.

Dou L, Jiang J, Yao H, Zhang B, Wang X

pubmed logopapersJun 2 2025
<b><i>Background:</i></b> The molecular mechanisms driving hepatocellular carcinoma (HCC) and the predictors of chemotherapy sensitivity remain unclear; identifying key biomarkers is therefore essential for the early diagnosis and treatment of HCC. <b><i>Method:</i></b> We collected and processed computed tomography (CT) and clinical data from 116 patients with autoimmune hepatitis (AIH) and HCC treated at our hospital's Liver Cancer Center. We then identified and extracted important characteristic features from patient images and correlated them with mitochondria-related genes using machine learning techniques such as multihead attention networks, lasso regression, principal component analysis (PCA), and support vector machines (SVM). These genes were integrated into radiomics signature models to explore their role in disease progression. We further correlated these results with clinical variables to screen for driver genes and to evaluate the ability of key genes to predict chemotherapy sensitivity in liver cancer (LC) patients. Finally, qPCR was used to validate the expression of the candidate gene in patient samples. <b><i>Results:</i></b> Our study used attention networks to identify disease regions in medical images with 97% accuracy and an AUC of 94%. We extracted 942 imaging features, identifying five key features through lasso regression that accurately differentiate AIH from HCC. Transcriptome analysis revealed 132 upregulated and 101 downregulated genes in AIH, with 45 significant genes identified by XGBoost. In the HCC analysis, PCA and random forest highlighted 11 key features. Among mitochondrial genes, <i>SLC25A42</i> correlated positively with imaging features of normal tissue but negatively with those of cancerous tissue and was identified as a driver gene. Low expression of <i>SLC25A42</i> was associated with chemotherapy sensitivity in HCC patients.
<b><i>Conclusions:</i></b> Machine learning modeling combined with genomic profiling provides a promising approach to identifying the driver gene <i>SLC25A42</i> in LC, which may help improve diagnostic accuracy and the prediction of chemotherapy sensitivity for this disease.
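The lasso-based feature-selection step the abstract describes can be sketched as follows: fit an L1-penalized model on imaging features and keep those with nonzero coefficients. The data is synthetic; the paper's 942 features and five selected features are not reproduced here:

```python
# Lasso feature-selection sketch: L1 regularization shrinks uninformative
# feature coefficients to exactly zero, leaving a sparse selected set.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 40))                 # 100 cases x 40 imaging features
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)         # indices of features lasso kept
```

Only the genuinely predictive features (here, columns 0 and 3 by construction) survive the penalty, which is what makes lasso attractive when hundreds of radiomic features are extracted from small cohorts.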

Disease-Grading Networks with Asymmetric Gaussian Distribution for Medical Imaging.

Tang W, Yang Z

pubmed logopapersJun 2 2025
Deep learning-based disease grading technologies facilitate timely medical intervention due to their high efficiency and accuracy. Recent advancements have enhanced grading performance by incorporating the ordinal relationships of disease labels. However, existing methods often assume the same probability distribution for disease labels across instances within the same category, overlooking variations in label distributions. Additionally, the hyperparameters of these distributions are typically determined empirically, which may not accurately reflect the true distribution. To address these limitations, we propose a disease grading network utilizing a sample-aware asymmetric Gaussian label distribution, termed DGN-AGLD. This approach includes a variance predictor designed to learn and predict the parameters that control the asymmetry of the Gaussian distribution, enabling distinct label distributions within the same category. This module can be seamlessly integrated into standard deep learning networks. Experimental results on four disease datasets validate the effectiveness and superiority of the proposed method, particularly on the IDRiD dataset, where it achieves a diabetic retinopathy grading accuracy of 77.67%. Furthermore, our method extends to joint disease grading tasks, yielding superior results and demonstrating significant generalization capabilities. Visual analysis indicates that our method more accurately captures the trend of disease progression by leveraging the asymmetry in the label distribution. Our code is publicly available at https://github.com/ahtwq/AGNet.
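An asymmetric Gaussian label distribution over ordinal grades can be sketched directly: use a different standard deviation on each side of the true grade and normalize. The sigma values below are fixed for illustration, whereas DGN-AGLD's variance predictor learns them per sample:

```python
# Asymmetric Gaussian soft-label sketch for ordinal disease grades: mass
# spreads further toward the side with the larger sigma.
import numpy as np

def asymmetric_gaussian_labels(true_grade, n_grades, sigma_left, sigma_right):
    grades = np.arange(n_grades, dtype=float)
    sigma = np.where(grades < true_grade, sigma_left, sigma_right)
    dist = np.exp(-0.5 * ((grades - true_grade) / sigma) ** 2)
    return dist / dist.sum()               # normalize to a probability distribution

# Grade 2 of 5, with more uncertainty toward higher (more severe) grades.
dist = asymmetric_gaussian_labels(true_grade=2, n_grades=5,
                                  sigma_left=0.5, sigma_right=1.5)
```

With these illustrative sigmas, the distribution still peaks at the true grade but assigns more probability to grade 3 than to grade 1, encoding a belief that over-grading is likelier than under-grading for this sample.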