
Spectral computed tomography thermometry for thermal ablation: applicability and needle artifact reduction.

Koetzier LR, Hendriks P, Heemskerk JWT, van der Werf NR, Selles M, van der Molen AJ, Smits MLJ, Goorden MC, Burgmans MC

PubMed | Aug 23, 2025
Effective thermal ablation of liver tumors requires precise monitoring of the ablation zone. Computed tomography (CT) thermometry can non-invasively monitor lethal temperatures but suffers from metal artifacts caused by ablation equipment. This study assesses the applicability of spectral CT thermometry during microwave ablation, comparing the reproducibility, precision, and accuracy of attenuation-based versus physical density-based thermometry. Furthermore, it identifies optimal metal artifact reduction (MAR) methods: O-MAR, deep learning-MAR, spectral CT, and combinations thereof. Four gel phantoms embedded with temperature sensors underwent a 10-minute, 60 W microwave ablation imaged with a dual-layer spectral CT scanner in 23 scans over time. For each scan, attenuation-based and physical density-based temperature maps were reconstructed. Attenuation-based and physical density-based thermometry models were tested for reproducibility over three repetitions; a fourth repetition focused on accuracy. MAR techniques were applied to one repetition to evaluate temperature precision in artifact-corrupted slices. The correlation between CT value and temperature was highly linear, with an R-squared value exceeding 96%. Model parameters for attenuation-based and physical density-based thermometry were -0.38 HU/°C and 0.00039 °C⁻¹, with coefficients of variation of 2.3% and 6.7%, respectively. Physical density maps improved temperature precision in the presence of needle artifacts by 73% compared to attenuation images. O-MAR improved temperature precision by 49% compared to no MAR. Attenuation-based thermometry yielded narrower Bland-Altman limits of agreement (-7.7 °C to 5.3 °C) than physical density-based thermometry. Spectral physical density-based CT thermometry at 150 keV, used alongside O-MAR, enhances temperature precision in the presence of metal artifacts and achieves reproducible temperature measurements with high accuracy.
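The reported calibration slopes lend themselves to a short worked example. The sketch below assumes the usual linear forms ΔHU = a·ΔT (attenuation) and Δρ/ρ₀ = -b·ΔT (physical density); the sign conventions and the 20 °C baseline are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

# Calibration constants reported in the abstract; the linear forms, sign
# conventions, and the 20 degC baseline are illustrative assumptions.
A_HU_PER_C = -0.38       # attenuation slope (HU per degC)
B_PER_C = 0.00039        # physical-density coefficient (per degC)
T_REF_C = 20.0           # assumed baseline phantom temperature

def temp_from_attenuation(hu, hu_ref):
    """Temperature from the change in CT number relative to baseline."""
    return T_REF_C + (np.asarray(hu) - np.asarray(hu_ref)) / A_HU_PER_C

def temp_from_density(rho, rho_ref):
    """Temperature from the relative change in physical density."""
    return T_REF_C - (np.asarray(rho) / np.asarray(rho_ref) - 1.0) / B_PER_C

# A voxel whose CT number drops by 15 HU maps to roughly 20 + 15/0.38 ≈ 59 degC.
print(temp_from_attenuation(-15.0, 0.0))
```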

Decoding MGMT Methylation: A Step Towards Precision Medicine in Glioblastoma

Hafeez Ur Rehman, Sumaiya Fazal, Moutaz Alazab, Ali Baydoun

arXiv preprint | Aug 22, 2025
Glioblastomas, constituting over 50% of malignant brain tumors, are highly aggressive and pose substantial treatment challenges due to their rapid progression and resistance to standard therapies. The methylation status of the O-6-Methylguanine-DNA Methyltransferase (MGMT) gene is a critical biomarker for predicting patient response to treatment, particularly with the alkylating agent temozolomide. However, accurately predicting MGMT methylation status using non-invasive imaging techniques remains challenging due to the complex and heterogeneous nature of glioblastomas, which includes uneven contrast, variability within lesions, and irregular enhancement patterns. This study introduces the Convolutional Autoencoders for MGMT Methylation Status Prediction (CAMP) framework, which is based on adaptive sparse penalties to enhance predictive accuracy. The CAMP framework operates in two phases: first, generating synthetic MRI slices through a tailored autoencoder that effectively captures and preserves intricate tissue and tumor structures across different MRI modalities; second, predicting MGMT methylation status using a convolutional neural network enhanced by adaptive sparse penalties. The adaptive sparse penalty dynamically adjusts to variations in the data, such as contrast differences and tumor locations in MR images. Our method excels in MRI image synthesis, preserving brain tissue, fat, and individual tumor structures across all MRI modalities. Validated on benchmark datasets, CAMP achieved an accuracy of 0.97, specificity of 0.98, and sensitivity of 0.97, significantly outperforming existing methods. These results demonstrate the potential of the CAMP framework to improve the interpretation of MRI data and contribute to more personalized treatment strategies for glioblastoma patients.
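The abstract does not spell out the functional form of the adaptive sparse penalty, so the sketch below shows one plausible reading: a cross-entropy classification loss plus an L1 penalty on intermediate feature maps whose weight is rescaled per image by its intensity contrast. The network, class counts, and contrast heuristic are hypothetical stand-ins, not the CAMP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Toy CNN standing in for the methylation-status classifier."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.head(feats.flatten(1)), feats

def adaptive_sparse_loss(logits, feats, labels, images, base_lambda=1e-4):
    ce = F.cross_entropy(logits, labels)
    # Assumed heuristic: scale the sparsity weight by per-image contrast
    # (intensity standard deviation), so low-contrast slices are penalized less.
    contrast = images.flatten(1).std(dim=1)                   # (B,)
    lam = base_lambda * contrast / (contrast.mean() + 1e-8)   # (B,)
    l1 = feats.abs().flatten(1).mean(dim=1)                   # (B,) L1 sparsity per image
    return ce + (lam * l1).mean()

model = TinyClassifier()
x = torch.randn(4, 1, 64, 64)            # synthetic MRI slices
y = torch.randint(0, 2, (4,))            # methylated vs. unmethylated labels
logits, feats = model(x)
loss = adaptive_sparse_loss(logits, feats, y, x)
loss.backward()
```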

Intra-axial primary brain tumor differentiation: comparing large language models on structured MRI reports vs. radiologists on images.

Nakaura T, Uetani H, Yoshida N, Kobayashi N, Nagayama Y, Kidoh M, Kuroda JI, Mukasa A, Hirai T

PubMed | Aug 22, 2025
This study aimed to evaluate the potential of large language models (LLMs) in differentiating intra-axial primary brain tumors using structured magnetic resonance imaging (MRI) reports and to compare their performance with radiologists. Structured reports of preoperative MRI findings from 137 surgically confirmed intra-axial primary brain tumors, including Glioblastoma (n = 77), Central Nervous System (CNS) Lymphoma (n = 22), Astrocytoma (n = 9), Oligodendroglioma (n = 9), and others (n = 20), were analyzed by multiple LLMs, including GPT-4, Claude-3-Opus, Claude-3-Sonnet, GPT-3.5, Llama-2-70B, Qwen1.5-72B, and Gemini-Pro-1.0. The models provided the top 5 differential diagnoses based on the preoperative MRI findings, and their top 1, 3, and 5 accuracies were compared with board-certified neuroradiologists' interpretations of the actual preoperative MRI images. Radiologists achieved top 1, 3, and 5 accuracies of 85.4%, 94.9%, and 94.9%, respectively. Among the LLMs, GPT-4 performed best, with top 1, 3, and 5 accuracies of 65.7%, 84.7%, and 90.5%, respectively. Notably, GPT-4's top 3 accuracy of 84.7% approached the radiologists' top 1 accuracy of 85.4%. Other LLMs showed varying performance levels, with average accuracies ranging from 62.3% to 75.9%. LLMs demonstrated high accuracy for Glioblastoma but struggled with CNS Lymphoma and other less common tumors, particularly in top 1 accuracy. LLMs show promise as assistive tools for differentiating intra-axial primary brain tumors using structured MRI reports. However, a significant gap remains between their performance and that of board-certified neuroradiologists interpreting actual images. The choice of LLM and tumor type significantly influences the results. Question: How do large language models (LLMs) perform when differentiating complex intra-axial primary brain tumors from structured MRI reports compared to radiologists interpreting images? Findings: Radiologists outperformed all tested LLMs in diagnostic accuracy. The best model, GPT-4, showed promise but lagged considerably behind radiologists, particularly for less common tumors. Clinical relevance: LLMs show potential as assistive tools for generating differential diagnoses from structured MRI reports, particularly for non-specialists, but they cannot currently replace the nuanced diagnostic expertise of a board-certified radiologist interpreting the primary image data.
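The comparison hinges on top-k accuracy over ranked differential lists. A minimal sketch of that metric is shown below; the diagnosis strings are purely illustrative.

```python
# Top-k accuracy over ranked differential-diagnosis lists versus the
# surgically confirmed diagnosis. Diagnosis strings are illustrative.
def top_k_accuracy(predictions, truths, k):
    """predictions: list of ranked differential lists; truths: confirmed diagnoses."""
    hits = sum(1 for ranked, truth in zip(predictions, truths) if truth in ranked[:k])
    return hits / len(truths)

preds = [
    ["Glioblastoma", "CNS lymphoma", "Metastasis", "Astrocytoma", "Abscess"],
    ["CNS lymphoma", "Glioblastoma", "Astrocytoma", "Oligodendroglioma", "Metastasis"],
]
truths = ["Glioblastoma", "Astrocytoma"]

for k in (1, 3, 5):
    print(f"top-{k} accuracy: {top_k_accuracy(preds, truths, k):.2f}")
```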

Covid-19 diagnosis using privacy-preserving data monitoring: an explainable AI deep learning model with blockchain security.

Bala K, Kumar KA, Venu D, Dudi BP, Veluri SP, Nirmala V

PubMed | Aug 22, 2025
The COVID-19 pandemic emphasised the necessity for prompt, precise diagnostics, secure data storage, and robust privacy protection in healthcare. Existing diagnostic systems often suffer from limited transparency, inadequate performance, and challenges in ensuring data security and privacy. This research proposes a novel privacy-preserving diagnostic framework, Heterogeneous Convolutional-recurrent attention Transfer learning based ResNeXt with Modified Greater Cane Rat optimisation (HCTR-MGR), that integrates deep learning, Explainable Artificial Intelligence (XAI), and blockchain technology. The HCTR model combines convolutional layers for spatial feature extraction, recurrent layers for capturing spatial dependencies, and attention mechanisms to highlight diagnostically significant regions. A ResNeXt-based transfer learning backbone enhances performance, while the MGR algorithm improves robustness and convergence. A trust-based permissioned blockchain stores encrypted patient metadata to ensure data security and integrity and to eliminate centralised vulnerabilities. The framework also incorporates SHAP and LIME for interpretable predictions. Experimental evaluation on two benchmark chest X-ray datasets demonstrates superior diagnostic performance, achieving 98-99% accuracy, 97-98% precision, 95-97% recall, 99% specificity, and 95-98% F1-score, offering a 2-6% improvement over conventional models such as ResNet, SARS-Net, and PneuNet. These results underscore the framework's potential for scalable, secure, and clinically trustworthy deployment in real-world healthcare systems.
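The abstract names convolutional, recurrent, and attention components on a ResNeXt transfer-learning backbone without specifying how they are wired together. The sketch below is one plausible arrangement for illustration only, not the authors' HCTR architecture, and it omits the MGR optimisation, blockchain, and XAI components entirely.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConvRecurrentAttention(nn.Module):
    """Hypothetical wiring: ResNeXt features -> GRU over spatial positions -> attention pooling."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnext50_32x4d(weights=None)  # load ImageNet weights for transfer learning
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H, W)
        self.gru = nn.GRU(input_size=2048, hidden_size=256,
                          batch_first=True, bidirectional=True)
        self.attn = nn.Linear(512, 1)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        fmap = self.features(x)                            # (B, 2048, H, W)
        seq = fmap.flatten(2).transpose(1, 2)              # spatial positions as a sequence
        hidden, _ = self.gru(seq)                          # (B, H*W, 512)
        weights = torch.softmax(self.attn(hidden), dim=1)  # attention over positions
        pooled = (weights * hidden).sum(dim=1)             # (B, 512)
        return self.classifier(pooled)

model = ConvRecurrentAttention()
logits = model(torch.randn(2, 3, 224, 224))               # X-rays replicated to 3 channels
print(logits.shape)                                        # torch.Size([2, 2])
```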

Learning Explainable Imaging-Genetics Associations Related to a Neurological Disorder

Jueqi Wang, Zachary Jacokes, John Darrell Van Horn, Michael C. Schatz, Kevin A. Pelphrey, Archana Venkataraman

arXiv preprint | Aug 22, 2025
While imaging-genetics holds great promise for unraveling the complex interplay between brain structure and genetic variation in neurological disorders, traditional methods are limited to simplistic linear models or to black-box techniques that lack interpretability. In this paper, we present NeuroPathX, an explainable deep learning framework that uses an early fusion strategy powered by cross-attention mechanisms to capture meaningful interactions between structural variations in the brain derived from MRI and established biological pathways derived from genetics data. To enhance interpretability and robustness, we introduce two loss functions over the attention matrix - a sparsity loss that focuses on the most salient interactions and a pathway similarity loss that enforces consistent representations across the cohort. We validate NeuroPathX on both autism spectrum disorder and Alzheimer's disease. Our results demonstrate that NeuroPathX outperforms competing baseline approaches and reveals biologically plausible associations linked to the disorder. These findings underscore the potential of NeuroPathX to advance our understanding of complex brain disorders. Code is available at https://github.com/jueqiw/NeuroPathX .
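The two attention-matrix regularizers are the core technical idea. The sketch below shows one plausible formulation, assuming an L1 sparsity term and a cohort-consistency term that pulls each subject's attention map toward the batch mean; the exact losses used by NeuroPathX may differ.

```python
import torch
import torch.nn.functional as F

def attention_regularizers(attn, sparsity_weight=1e-3, similarity_weight=1e-2):
    """attn: (batch, n_brain_regions, n_pathways) cross-attention weights."""
    # Sparsity loss: keep only the most salient imaging-pathway interactions.
    sparsity_loss = attn.abs().mean()
    # Pathway-similarity loss (assumed form): cosine distance of each subject's
    # flattened attention map from the cohort (batch) mean, enforcing
    # consistent representations across the cohort.
    flat = attn.flatten(1)
    cohort_mean = flat.mean(dim=0, keepdim=True)
    similarity_loss = (1.0 - F.cosine_similarity(flat, cohort_mean, dim=1)).mean()
    return sparsity_weight * sparsity_loss + similarity_weight * similarity_loss

attn = torch.softmax(torch.randn(8, 100, 50), dim=-1)  # 8 subjects, 100 regions, 50 pathways
print(attention_regularizers(attn))
```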

Advancements in deep learning for image-guided tumor ablation therapies: a comprehensive review.

Zhao Z, Hu Y, Xu LX, Sun J

PubMed | Aug 22, 2025
Image-guided tumor ablation (IGTA) has revolutionized modern oncological treatments by providing minimally invasive options that ensure precise tumor eradication with minimal patient discomfort. Traditional techniques such as ultrasound (US), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) have been instrumental in the planning, execution, and evaluation of ablation therapies. However, these methods often face limitations, including poor contrast, susceptibility to artifacts, and variability in operator expertise, which can undermine the accuracy of tumor targeting and therapeutic outcomes. Incorporating deep learning (DL) into IGTA represents a significant advancement that addresses these challenges. This review explores the role and potential of DL in different phases of tumor ablation therapy: preoperative, intraoperative, and postoperative. In the preoperative stage, DL excels in advanced image segmentation, enhancement, and synthesis, facilitating precise surgical planning and optimized treatment strategies. During the intraoperative phase, DL supports image registration and fusion, and real-time surgical planning, enhancing navigation accuracy and ensuring precise ablation while safeguarding surrounding healthy tissues. In the postoperative phase, DL is pivotal in automating the monitoring of treatment responses and in the early detection of recurrences through detailed analyses of follow-up imaging. This review highlights the essential role of deep learning in modernizing IGTA, showcasing its significant implications for procedural safety, efficacy, and patient outcomes in oncology. As deep learning technologies continue to evolve, they are poised to redefine the standards of care in tumor ablation therapies, making treatments more accurate, personalized, and patient-friendly.

Unlocking the potential of radiomics in identifying fibrosing and inflammatory patterns in interstitial lung disease.

Colligiani L, Marzi C, Uggenti V, Colantonio S, Tavanti L, Pistelli F, Alì G, Neri E, Romei C

PubMed | Aug 22, 2025
To differentiate interstitial lung diseases (ILDs) with fibrotic and inflammatory patterns using high-resolution computed tomography (HRCT) and a radiomics-based artificial intelligence (AI) pipeline. This single-center study included 84 patients: 50 with idiopathic pulmonary fibrosis (IPF), representative of the fibrotic pattern, and 34 with cellular non-specific interstitial pneumonia (NSIP) secondary to connective tissue disease (CTD), representative of a predominantly inflammatory pattern. For a secondary objective, we analyzed 50 additional patients with COVID-19 pneumonia. We performed semi-automatic segmentation of ILD regions using a deep learning model followed by manual review. From each segmented region, 103 radiomic features were extracted. Classification was performed using an XGBoost model with 1000 bootstrap repetitions, and SHapley Additive exPlanations (SHAP) were applied to identify the most predictive features. The model accurately distinguished the fibrotic ILD pattern from the inflammatory one, achieving an average test set accuracy of 0.91 and an AUROC of 0.98. The classification was driven by radiomic features capturing differences in lung morphology, intensity distribution, and textural heterogeneity between the two disease patterns. In differentiating cellular NSIP from COVID-19, the model achieved an average accuracy of 0.89. Inflammatory ILDs exhibited more uniform imaging patterns compared to the greater variability typically observed in viral pneumonia. Radiomics combined with explainable AI offers promising diagnostic support in distinguishing fibrotic from inflammatory ILD patterns and in differentiating inflammatory ILDs from viral pneumonias. This approach could enhance diagnostic precision and provide quantitative support for personalized ILD management.
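A minimal sketch of the classification stage described above: an XGBoost classifier over radiomic feature vectors, evaluated across repeated resamples (a stand-in for the 1000 bootstrap repetitions), with SHAP used to rank the most predictive features. The data are synthetic and the hyperparameters illustrative.

```python
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(84, 103))        # 84 patients x 103 radiomic features (synthetic)
y = rng.integers(0, 2, size=84)       # 0 = inflammatory (NSIP), 1 = fibrotic (IPF)

accuracies = []
for b in range(100):                  # reduced from 1000 repetitions for brevity
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=b, stratify=y)
    model = xgboost.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, model.predict(X_te)))

print(f"mean test accuracy over resamples: {np.mean(accuracies):.2f}")

# SHAP on the last fitted model to rank the most predictive radiomic features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
top_features = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:10]
print("top feature indices:", top_features)
```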

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed | Aug 21, 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly in oncology and ophthalmology. Other diseases are discussed only briefly due to the limited available literature. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. Multimodal fusion demonstrates accurate prediction of anti-human epidermal growth factor receptor 2 therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight the future directions of multimodal integration, including its expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which enhance accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.

Explainable Knowledge Distillation for Efficient Medical Image Classification

Aqib Nazir Mir, Danish Raza Rizvi

arXiv preprint | Aug 21, 2025
This study comprehensively explores knowledge distillation frameworks for COVID-19 and lung cancer classification using chest X-ray (CXR) images. We employ high-capacity teacher models, including VGG19 and lightweight Vision Transformers (Visformer-S and AutoFormer-V2-T), to guide the training of a compact, hardware-aware student model derived from the OFA-595 supernet. Our approach leverages hybrid supervision, combining ground-truth labels with teacher models' soft targets to balance accuracy and computational efficiency. We validate our models on two benchmark datasets, COVID-QU-Ex and LCS25000, covering multiple classes, including COVID-19, healthy, non-COVID pneumonia, lung cancer, and colon cancer. To interpret the spatial focus of the models, we employ Score-CAM-based visualizations, which provide insight into the reasoning process of both teacher and student networks. The results demonstrate that the distilled student model maintains high classification performance with significantly reduced parameters and inference time, making it an optimal choice in resource-constrained clinical environments. Our work underscores the importance of combining model efficiency with explainability for practical, trustworthy medical AI solutions.
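The hybrid supervision described above follows the standard knowledge-distillation objective: a weighted sum of cross-entropy with ground-truth labels and KL divergence with the teacher's temperature-softened outputs. A minimal sketch is shown below; the temperature and mixing weight are illustrative, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Hard supervision: cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Soft supervision: KL divergence against the teacher's softened distribution.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Scale by T^2 to keep gradient magnitudes comparable (standard KD practice).
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    return alpha * hard_loss + (1.0 - alpha) * soft_loss

student_logits = torch.randn(8, 5, requires_grad=True)  # 5 classes (COVID-19, healthy, ...)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```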

FedVGM: Enhancing Federated Learning Performance on Multi-Dataset Medical Images with XAI.

Tahosin MS, Sheakh MA, Alam MJ, Hassan MM, Bairagi AK, Abdulla S, Alshathri S, El-Shafai W

PubMed | Aug 20, 2025
Advances in deep learning have transformed medical imaging, yet progress is hindered by data privacy regulations and fragmented datasets across institutions. To address these challenges, we propose FedVGM, a privacy-preserving federated learning framework for multi-modal medical image analysis. FedVGM integrates four imaging modalities, including brain MRI, breast ultrasound, chest X-ray, and lung CT, across 14 diagnostic classes without centralizing patient data. Using transfer learning and an ensemble of VGG16 and MobileNetV2, FedVGM achieves 97.7% ± 0.01 accuracy on the combined dataset and 91.9-99.1% across individual modalities. We evaluated three aggregation strategies and demonstrated median aggregation to be the most effective. To ensure clinical interpretability, we apply explainable AI techniques and validate results through performance metrics, statistical analysis, and k-fold cross-validation. FedVGM offers a robust, scalable solution for collaborative medical diagnostics, supporting clinical deployment while preserving data privacy.
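Median aggregation, which the evaluation found most effective, replaces the FedAvg mean with a coordinate-wise median of client parameters. A minimal sketch under that assumption is shown below.

```python
import torch

def median_aggregate(client_state_dicts):
    """Coordinate-wise median across clients for every parameter tensor."""
    aggregated = {}
    for name in client_state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in client_state_dicts], dim=0)
        aggregated[name] = stacked.median(dim=0).values
    return aggregated

# Toy example: three clients sharing one linear layer's parameters.
clients = [
    {"fc.weight": torch.randn(4, 8), "fc.bias": torch.randn(4)}
    for _ in range(3)
]
global_weights = median_aggregate(clients)
print({k: v.shape for k, v in global_weights.items()})
```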