Challenges in diagnosis of sarcoidosis.

Bączek K, Piotrowski WJ, Bonella F

PubMed · Sep 1 2025
Diagnosing sarcoidosis remains challenging. Histology findings and a variable clinical presentation can mimic other infectious, malignant, and autoimmune diseases. This review synthesizes current evidence on histopathology, sampling techniques, imaging modalities, and biomarkers and explores how emerging 'omics' and artificial intelligence tools may sharpen diagnostic accuracy. Within the typical granulomatous lesions, limited or 'burned-out' necrosis is an ancillary finding, which can be present in up to one-third of sarcoid biopsies and demands a careful differential diagnostic work-up. Endobronchial ultrasound-guided transbronchial needle aspiration of lymph nodes has replaced mediastinoscopy as the first-line sampling tool, while cryobiopsy is still under validation. Volumetric PET metrics such as total lung glycolysis and somatostatin-receptor tracers refine activity assessment; combined FDG PET/MRI improves detection of occult cardiac disease. Advanced bronchoalveolar lavage (BAL) immunophenotyping via flow cytometry and serum, BAL, and genetic biomarkers have been shown to correlate with inflammatory burden but have low diagnostic value. Multi-omics signatures and PET/CT radiomics, supported by deep-learning algorithms, show promising results for noninvasive diagnostic confirmation, phenotyping, and disease monitoring. No single test is conclusive for diagnosing sarcoidosis; an integrated, multidisciplinary strategy is needed. Large, multicenter, and multiethnic studies are essential to translate and validate findings from emerging AI tools and -omics research into routine clinical practice.

Explainable self-supervised learning for medical image diagnosis based on DINO V2 model and semantic search.

Hussien A, Elkhateb A, Saeed M, Elsabawy NM, Elnakeeb AE, Elrashidy N

PubMed · Sep 1 2025
Medical images have become indispensable for decision-making and significantly affect treatment planning. However, the growth of medical imaging has widened the gap between the volume of images and the availability of radiologists, leading to delays and diagnostic errors. Recent studies highlight the potential of deep learning (DL) in medical image diagnosis, but reliance on labelled data limits applicability across clinical settings. As a result, recent work explores self-supervised learning (SSL) to overcome these challenges. Our study addresses them by examining the performance of SSL across diverse medical image datasets and comparing it with traditional pre-trained supervised learning (SL) models. Unlike prior SSL methods that focus solely on classification, our framework leverages DINOv2's embeddings to enable semantic search in medical databases (via Qdrant), allowing clinicians to retrieve similar cases efficiently; this addresses a critical gap in clinical workflows where rapid case retrieval is essential. The results affirmed SSL's ability, especially DINOv2, to overcome the challenges associated with labelling data and to provide diagnostic accuracy superior to traditional SL: DINOv2 achieved classification accuracies of 100%, 99%, 99%, 100%, and 95% on the lung cancer, brain tumour, leukaemia, and eye retina disease datasets, respectively. While existing SSL models (e.g., BYOL, SimCLR) lack interpretability, we uniquely combine DINOv2 with ViT-CX, a causal explanation method tailored for transformers. This provides clinically actionable heatmaps revealing how the model localizes tumours and cellular patterns, a feature absent in prior SSL medical imaging studies. Furthermore, our research explores the impact of semantic search in the medical imaging domain and how it can revolutionize the querying process: after training, the embeddings of the developed model are stored in Qdrant, and cosine similarity measures the distance between the query image embedding and the stored embeddings. Our study aims to enhance the efficiency and accuracy of medical image analysis, ultimately improving the decision-making process.
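To make the retrieval step concrete, here is a minimal sketch of cosine-similarity semantic search over precomputed embeddings. It is an in-memory stand-in for the Qdrant vector store described above; the 768-dimensional embeddings, case IDs, and data are illustrative placeholders, not the authors' pipeline.

```python
# In-memory semantic search over precomputed image embeddings (stand-in for Qdrant).
import numpy as np

def cosine_similarity(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of stored vectors."""
    query = query / np.linalg.norm(query)
    stored = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    return stored @ query

def semantic_search(query_emb, db_embs, db_ids, top_k=5):
    scores = cosine_similarity(query_emb, db_embs)
    order = np.argsort(scores)[::-1][:top_k]   # highest similarity first
    return [(db_ids[i], float(scores[i])) for i in order]

# Hypothetical usage: 1,000 stored cases, 768-d embeddings (DINOv2 ViT-B size).
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 768))
ids = [f"case_{i}" for i in range(1000)]
print(semantic_search(rng.normal(size=768), db, ids))
```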

Uncovering novel functions of NUF2 in glioblastoma and MRI-based expression prediction.

Zhong RD, Liu YS, Li Q, Kou ZW, Chen FF, Wang H, Zhang N, Tang H, Zhang Y, Huang GD

PubMed · Sep 1 2025
Glioblastoma multiforme (GBM) is a lethal brain tumor with limited therapies. NUF2, a kinetochore protein involved in cell cycle regulation, shows oncogenic potential in various cancers; however, its role in GBM pathogenesis remains unclear. In this study, we investigated NUF2's function and mechanisms in GBM, developed an MRI-based machine learning model to predict its expression non-invasively, and evaluated its potential as a therapeutic target and prognostic biomarker. Functional assays (proliferation, colony formation, migration, and invasion) and cell cycle analysis were conducted using NUF2-knockdown U87/U251 cells. Western blotting was performed to assess the expression levels of β-catenin and MMP-9. Bioinformatic analyses included pathway enrichment, immune infiltration, and single-cell subtype characterization. Using preoperative contrast-enhanced T1-weighted (T1CE) MRI sequences from 61 patients, we extracted 1037 radiomic features and developed a predictive model using least absolute shrinkage and selection operator (LASSO) regression for feature selection and a random forest algorithm for classification, with rigorous cross-validation. NUF2 overexpression in GBM tissues and cells correlated with poor survival (p < 0.01). Knockdown of NUF2 significantly suppressed malignant phenotypes (p < 0.05), induced G0/G1 arrest (p < 0.01), and increased sensitivity to TMZ treatment via the β-catenin/MMP9 pathway. The radiomic model achieved strong NUF2 prediction (AUC = 0.897) using six optimized features. Key features were associated with MGMT methylation and 1p/19q co-deletion, serving as independent prognostic markers. NUF2 drives GBM progression through β-catenin/MMP9 activation, establishing its dual role as a therapeutic target and a prognostic biomarker. The developed radiogenomic model enables precise non-invasive NUF2 evaluation, advancing personalized GBM management. This study highlights the translational value of integrating molecular biology with artificial intelligence in neuro-oncology.
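A hedged sketch of the radiomics modeling step described above (LASSO feature selection followed by random forest classification with cross-validated AUC), using scikit-learn. The feature matrix, labels, and hyperparameters are illustrative assumptions, not the authors' data or settings.

```python
# LASSO feature selection + random forest classification, as in the pipeline above.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 1037))                        # radiomic features per patient
y = (X[:, :5].sum(axis=1) + rng.normal(size=200) > 0).astype(int)  # toy NUF2 high/low label

X_scaled = StandardScaler().fit_transform(X)

# LASSO shrinks uninformative feature coefficients to exactly zero.
lasso = LassoCV(cv=5, random_state=42).fit(X_scaled, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features retained")

# Random forest on the retained features, scored by cross-validated AUC.
rf = RandomForestClassifier(n_estimators=500, random_state=42)
auc = cross_val_score(rf, X_scaled[:, selected], y, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean())
```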

CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.

Reddy KD, Patil A

PubMed · Aug 31 2025
Chest X-ray (CXR) analysis is a challenging problem in automated medical diagnosis: complex visual patterns of thoracic diseases must be identified precisely through multi-label classification and lesion localization. Current approaches typically treat classification and localization in isolation, yielding piecemeal systems that do not exploit shared representations, offer limited clinical interpretability, and handle multi-label disease poorly. Multi-task learning frameworks such as DeepChest and CLN address this goal in principle but suffer from task interference and poor explainability, which limits their practical application in real-world clinical workflows. To address these limitations, we present CXR-MultiTaskNet, a unified multi-task deep learning framework for simultaneously classifying thoracic diseases and localizing lesions in chest X-rays. The framework comprises a ResNet50 feature extractor, two task-specific heads for multi-task learning, and a Grad-CAM-based explainability module that provides accurate predictions and enhances clinical explainability. We formulate a joint loss that balances the classification and localization objectives, compensating for extreme class imbalance and the varying detectability of different disease manifestations, and assigning higher weight to smaller lesions. A dual-attention-based hierarchical feature extraction approach makes the detection steps traceable through visual attention maps, rendering the whole pipeline more interpretable than a conventional CNN-embedding model, and yields both disease-level and pixel-level predictions that enable explainable, comprehensive analysis of each image and aid in localizing each detected abnormality. Experimental evaluations on a benchmark chest X-ray dataset demonstrate a macro F1-score of 0.965 (micro F1-score 0.968) for disease classification, a mean IoU of 0.851 (IoU@0.5) for disease localization, and a lesion localization score of 0.927; these results are consistently better than state-of-the-art single-task and multi-task baselines. The presented framework provides an integrated approach to chest X-ray analysis that is clinically useful, interpretable, and scalable for automation, allowing for efficient diagnostic pathways and enhanced clinical decision-making, and can serve as a foundation for next-generation explainable AI in radiology. Keywords: model interpretability; chest X-ray disease detection; detection region localization; weakly supervised transfer learning.
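As a concrete illustration of the shared-backbone, two-head design with a weighted joint loss, here is a minimal PyTorch sketch. It is an assumption-laden stand-in, not the authors' implementation: the head shapes (multi-label logits plus a single box per image), the disease count, and the loss weights are placeholders.

```python
# Shared ResNet50 backbone with classification and localization heads, joint loss.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CXRMultiTaskNet(nn.Module):
    def __init__(self, num_diseases: int = 14):   # 14 is a placeholder disease count
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.cls_head = nn.Linear(2048, num_diseases)   # multi-label disease logits
        self.loc_head = nn.Linear(2048, 4)              # one (x, y, w, h) box per image

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.cls_head(f), self.loc_head(f)

def joint_loss(cls_logits, boxes_pred, cls_target, boxes_target, w_cls=1.0, w_loc=1.0):
    # BCE-with-logits for multi-label classification; smooth L1 for box regression.
    l_cls = nn.functional.binary_cross_entropy_with_logits(cls_logits, cls_target)
    l_loc = nn.functional.smooth_l1_loss(boxes_pred, boxes_target)
    return w_cls * l_cls + w_loc * l_loc

model = CXRMultiTaskNet()
x = torch.randn(2, 3, 224, 224)
logits, boxes = model(x)
joint_loss(logits, boxes, torch.zeros_like(logits), torch.zeros_like(boxes)).backward()
```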

Resting-state fMRI Analysis using Quantum Time-series Transformer

Junghoon Justin Park, Jungwoo Seo, Sangyoon Bae, Samuel Yen-Chi Chen, Huan-Hsin Tseng, Jiook Cha, Shinjae Yoo

arXiv preprint · Aug 31 2025
Resting-state functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool for revealing intrinsic brain network connectivity and identifying neural biomarkers of neuropsychiatric conditions. However, classical self-attention transformer models, despite their formidable representational power, struggle with quadratic complexity, large parameter counts, and substantial data requirements. To address these barriers, we introduce a Quantum Time-series Transformer, a novel quantum-enhanced transformer architecture leveraging Linear Combination of Unitaries and Quantum Singular Value Transformation. Unlike classical transformers, Quantum Time-series Transformer operates with polylogarithmic computational complexity, markedly reducing training overhead and enabling robust performance even with fewer parameters and limited sample sizes. Empirical evaluation on the largest-scale fMRI datasets from the Adolescent Brain Cognitive Development Study and the UK Biobank demonstrates that Quantum Time-series Transformer achieves comparable or superior predictive performance compared to state-of-the-art classical transformer models, with especially pronounced gains in small-sample scenarios. Interpretability analyses using SHapley Additive exPlanations further reveal that Quantum Time-series Transformer reliably identifies clinically meaningful neural biomarkers of attention-deficit/hyperactivity disorder (ADHD). These findings underscore the promise of quantum-enhanced transformers in advancing computational neuroscience by more efficiently modeling complex spatio-temporal dynamics and improving clinical interpretability.
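Since neither the quantum transformer nor the study data is public, here is a minimal sketch of the SHAP interpretability step on a stand-in classifier over hypothetical connectivity features; the model, feature semantics, and labels are illustrative assumptions only.

```python
# SHAP attributions on a stand-in classifier; large |value| marks candidate biomarkers.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                                   # e.g., ROI connectivity features
y = (X[:, 3] - X[:, 7] + rng.normal(size=300) > 0).astype(int)   # toy ADHD-like label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.Explainer(model, X)        # dispatches to a tree explainer here
values = explainer(X[:50])

# Mean absolute attribution per feature: a simple global importance ranking.
print(np.abs(values.values).mean(axis=0))
```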

Synthesize contrast-enhanced ultrasound image of thyroid nodules via generative adversarial networks.

Lai M, Yao J, Zhou Y, Zhou L, Jiang T, Sui L, Tang J, Zhu X, Huang J, Wang Y, Liu J, Xu D

PubMed · Aug 30 2025
This study explores the feasibility of employing generative adversarial networks (GANs) to generate synthetic contrast-enhanced ultrasound (CEUS) from grayscale ultrasound images of patients with thyroid nodules, dispensing with the need for ultrasound contrast agent injection. Patients who underwent preoperative thyroid CEUS examinations between January 2020 and July 2022 were collected retrospectively. A cycle-GAN framework integrating paired and unpaired learning modules was employed to develop the non-invasive image generation process. Synthetic CEUS images were generated for three phases: pre-arterial, plateau, and venous. The evaluation included quantitative similarity metrics, classification performance, and qualitative assessment by radiologists. CEUS videos of 360 thyroid nodules from 314 patients (45 years ± 12 [SD]; 272 women) in the internal dataset and 202 thyroid nodules from 183 patients (46 years ± 13 [SD]; 148 women) in the external dataset were included. In the external testing dataset, quantitative analysis revealed a significant degree of similarity between real and synthetic CEUS images (structural similarity index, 0.89 ± 0.04; peak signal-to-noise ratio, 28.17 ± 2.42). Radiologists deemed 126 of 132 [95%] synthetic CEUS images diagnostically useful. The accuracy of radiologists in distinguishing between real and synthetic images was 55.6% (95% CI: 0.49, 0.63), with an AUC of 61.0% (95% CI: 0.65, 0.68). No statistically significant difference (p > 0.05) was observed when radiologists assessed peak intensity and enhancement patterns using real versus synthetic CEUS. Both quantitative analysis and radiologist evaluations showed that synthetic CEUS images generated by GANs were similar to real CEUS images. Question: Is it feasible to generate synthetic thyroid contrast-enhanced ultrasound images using generative adversarial networks without ultrasound contrast agent injection? Findings: Compared with real contrast-enhanced ultrasound images, synthetic contrast-enhanced ultrasound images exhibited high similarity and image quality. Clinical relevance: This non-invasive, intelligent transformation may reduce the requirement for ultrasound contrast agents in certain cases, particularly where their administration is contraindicated, such as in patients with allergies, poor tolerance, or limited access to resources.
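The similarity metrics reported above (structural similarity index and peak signal-to-noise ratio) can be computed with scikit-image; a minimal sketch follows, with random arrays standing in for real and synthetic CEUS frames.

```python
# SSIM and PSNR between a "real" frame and a slightly perturbed "synthetic" one.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
real = rng.random((256, 256)).astype(np.float64)
synthetic = np.clip(real + rng.normal(scale=0.05, size=real.shape), 0, 1)

ssim = structural_similarity(real, synthetic, data_range=1.0)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
print(f"SSIM={ssim:.3f}, PSNR={psnr:.2f} dB")
```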

Multi-DECT Image-based Interpretable Model Incorporating Habitat Radiomics and Vision Transformer Deep Learning for Preoperative Prediction of Muscle Invasion in Bladder Cancer.

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed · Aug 30 2025
This study evaluates the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperatively predicting muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, a habitat model, a ResNet-18 model, a ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared with the single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. Incorporating the DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model integrating habitat radiomics with ViT DL holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
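A minimal sketch of the habitat-generation step described above: K-means clustering of voxel intensities within the tumor mask partitions it into sub-regions ("habitats"). The volume, mask, and cluster count are illustrative assumptions, not the study's data.

```python
# K-means habitat sub-regions from voxel intensities inside a tumor mask.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 16))     # stand-in for a VMI crop (e.g., 70 keV)
mask = volume > 0.2                   # stand-in tumor segmentation

voxels = volume[mask].reshape(-1, 1)  # per-voxel intensity features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

habitat_map = np.zeros(volume.shape, dtype=int)
habitat_map[mask] = labels + 1        # 0 = background, 1..3 = habitat sub-regions
print(np.bincount(habitat_map.ravel()))
```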

MSFE-GallNet-X: a multi-scale feature extraction-based CNN Model for gallbladder disease analysis with enhanced explainability.

Nabil HR, Ahmed I, Das A, Mridha MF, Kabir MM, Aung Z

PubMed · Aug 30 2025
This study introduces MSFE-GallNet-X, a domain-adaptive deep learning model utilizing multi-scale feature extraction (MSFE) to improve the classification accuracy of gallbladder diseases from grayscale ultrasound images, while integrating explainable artificial intelligence (XAI) methods to enhance clinical interpretability. We developed a convolutional neural network-based architecture that automatically learns multi-scale features from a dataset comprising 10,692 high-resolution ultrasound images from 1,782 patients, covering nine gallbladder disease classes, including gallstones, cholecystitis, and carcinoma. The model incorporated Gradient-Weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME) to provide visual interpretability of diagnostic predictions. Model performance was evaluated using standard metrics, including accuracy and F1 score. MSFE-GallNet-X achieved a classification accuracy of 99.63% and an F1 score of 99.50% in gallbladder disease classification, outperforming state-of-the-art models including VGG-19 (98.89%) and DenseNet121 (91.81%) while maintaining greater parameter efficiency, with only 1.91M parameters. Visualization through Grad-CAM and LIME highlighted critical image regions influencing model predictions, supporting explainability for clinical use. MSFE-GallNet-X demonstrates strong performance on a controlled and balanced dataset, suggesting its potential as an AI-assisted tool for clinical decision-making in gallbladder disease management.
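To illustrate the Grad-CAM component, here is a minimal hand-rolled sketch in PyTorch using a generic ResNet-18 in place of MSFE-GallNet-X (whose architecture and weights are not public); the target layer and input are assumptions. Hooks capture the last conv block's feature maps and their gradients, and the CAM is the ReLU of their gradient-weighted sum.

```python
# Minimal Grad-CAM: gradient-weighted activation maps from the last conv block.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}

layer = model.layer4  # last conv block (placeholder target layer)
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)
score = model(x)[0].max()   # score of the top class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)          # GAP of gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True)) # weighted sum of maps
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear")  # upsample to input size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
print(cam.shape)  # heatmap aligned to the input image
```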

Synthetic data generation method improves risk prediction model for early tumor recurrence after surgery in patients with pancreatic cancer.

Jeong H, Lee JM, Kim HS, Chae H, Yoon SJ, Shin SH, Han IW, Heo JS, Min JH, Hyun SH, Kim H

PubMed · Aug 29 2025
Pancreatic cancer is aggressive with high recurrence rates, necessitating accurate prediction models for effective treatment planning, particularly when choosing between neoadjuvant chemotherapy and upfront surgery. This study explores the use of variational autoencoder (VAE)-generated synthetic data to predict early tumor recurrence (within six months) in pancreatic cancer patients who underwent upfront surgery. Preoperative data from 158 patients treated between January 2021 and December 2022 were analyzed, and machine learning models, including logistic regression, random forest (RF), gradient boosting machine (GBM), and deep neural networks (DNN), were trained on both the original and synthetic datasets. The VAE-generated dataset (n = 94) closely matched the original data (p > 0.05) and enhanced model performance, improving accuracy (GBM: 0.81 to 0.87; RF: 0.84 to 0.87) and sensitivity (GBM: 0.73 to 0.91; RF: 0.82 to 0.91). PET/CT-derived metabolic parameters were the strongest predictors, accounting for 54.7% of the model's predictive power, with maximum standardized uptake value (SUVmax) showing the highest importance (0.182, 95% CI: 0.165-0.199). This study demonstrates that synthetic data can significantly enhance predictive models for pancreatic cancer recurrence, especially in data-limited scenarios, offering a promising strategy for oncology prediction models.
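A hedged sketch of a tabular VAE for synthetic-patient generation, in the spirit of the approach above: encode preoperative features, sample a latent vector via the reparameterization trick, and decode. Layer sizes, the latent dimension, and the data are illustrative placeholders, not the authors' architecture.

```python
# Tabular VAE: reconstruction + KL loss, then sampling synthetic rows from the decoder.
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_features: int, latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, latent), nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")            # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())      # KL divergence
    return rec + kld

vae = TabularVAE(n_features=20)
x = torch.randn(158, 20)                     # stand-in preoperative features per patient
recon, mu, logvar = vae(x)
vae_loss(recon, x, mu, logvar).backward()    # one training step's loss

# After training, synthetic patients come from decoding latent samples:
with torch.no_grad():
    synthetic = vae.dec(torch.randn(94, 8))
```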

Integrating Pathology and CT Imaging for Personalized Recurrence Risk Prediction in Renal Cancer

Daniël Boeke, Cedrik Blommestijn, Rebecca N. Wray, Kalina Chupetlovska, Shangqi Gao, Zeyu Gao, Regina G. H. Beets-Tan, Mireia Crispin-Ortuzar, James O. Jones, Wilson Silva, Ines P. Machado

arXiv preprint · Aug 29 2025
Recurrence risk estimation in clear cell renal cell carcinoma (ccRCC) is essential for guiding postoperative surveillance and treatment. The Leibovich score remains widely used for stratifying distant recurrence risk but offers limited patient-level resolution and excludes imaging information. This study evaluates multimodal recurrence prediction by integrating preoperative computed tomography (CT) and postoperative histopathology whole-slide images (WSIs). A modular deep learning framework with pretrained encoders and Cox-based survival modeling was tested across unimodal, late fusion, and intermediate fusion setups. In a real-world ccRCC cohort, WSI-based models consistently outperformed CT-only models, underscoring the prognostic strength of pathology. Intermediate fusion further improved performance, with the best model (TITAN-CONCH with ResNet-18) approaching the adjusted Leibovich score. Random tie-breaking narrowed the gap between the clinical baseline and learned models, suggesting discretization may overstate individualized performance. Using simple embedding concatenation, radiology added value primarily through fusion. These findings demonstrate the feasibility of foundation model-based multimodal integration for personalized ccRCC risk prediction. Future work should explore more expressive fusion strategies, larger multimodal datasets, and general-purpose CT encoders to better match pathology modeling capacity.
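A minimal sketch of the intermediate-fusion plus Cox survival setup described above, assuming the lifelines library: CT and WSI embeddings are concatenated ("simple embedding concatenation") and fed to a penalized Cox proportional hazards model. The embedding dimensions and synthetic cohort are placeholders, not the study's encoders or data.

```python
# Fused CT + WSI embeddings feeding a Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
ct_emb = rng.normal(size=(n, 4))                  # stand-in CT encoder outputs
wsi_emb = rng.normal(size=(n, 4))                 # stand-in WSI encoder outputs
fused = np.concatenate([ct_emb, wsi_emb], axis=1) # simple embedding concatenation

df = pd.DataFrame(fused, columns=[f"f{i}" for i in range(fused.shape[1])])
df["time"] = rng.exponential(scale=36, size=n)    # months to recurrence or censoring
df["event"] = rng.integers(0, 2, size=n)          # 1 = recurrence observed

cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time", event_col="event")
print("concordance index:", cph.concordance_index_)
```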