Page 4 of 78773 results

Application of deep learning-based convolutional neural networks in gastrointestinal disease endoscopic examination.

Wang YY, Liu B, Wang JH

pubmed · Sep 28 2025
Gastrointestinal (GI) diseases, including gastric and colorectal cancers, significantly impact global health, necessitating accurate and efficient diagnostic methods. Endoscopic examination is the primary diagnostic tool; however, its accuracy is limited by operator dependency and interobserver variability. Advancements in deep learning, particularly convolutional neural networks (CNNs), show great potential for enhancing GI disease detection and classification. This review explores the application of CNNs in endoscopic imaging, focusing on polyp and tumor detection, disease classification, endoscopic ultrasound, and capsule endoscopy analysis. We compare the performance of CNN models with that of traditional diagnostic methods, highlighting their advantages in accuracy and real-time decision support. Despite promising results, challenges remain, including data availability, model interpretability, and clinical integration. Future directions include improving model generalization, enhancing explainability, and conducting large-scale clinical trials. With continued advancements, CNN-powered artificial intelligence systems could revolutionize GI endoscopy by enhancing early disease detection, reducing diagnostic errors, and improving patient outcomes.

Q-FSRU: Quantum-Augmented Frequency-Spectral For Medical Visual Question Answering

Rakesh Thakur, Yusra Tariq, Rakesh Chandra Joshi

arxiv preprint · Sep 28 2025
Solving tough clinical questions that require both image and text understanding is still a major challenge in healthcare AI. In this work, we propose Q-FSRU, a new model that combines Frequency Spectrum Representation and Fusion (FSRU) with a method called Quantum Retrieval-Augmented Generation (Quantum RAG) for medical Visual Question Answering (VQA). The model takes in features from medical images and related text, then shifts them into the frequency domain using the Fast Fourier Transform (FFT). This helps it focus on more meaningful data and filter out noise or less useful information. To improve accuracy and ensure that answers are based on real knowledge, we add a quantum-inspired retrieval system. It fetches useful medical facts from external sources using quantum-based similarity techniques. These details are then merged with the frequency-based features for stronger reasoning. We evaluated our model using the VQA-RAD dataset, which includes real radiology images and questions. The results showed that Q-FSRU outperforms earlier models, especially on complex cases needing image-text reasoning. The mix of frequency and quantum information improves both performance and explainability. Overall, this approach offers a promising way to build smart, clear, and helpful AI tools for doctors.
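The frequency-domain step described above can be illustrated with a minimal numpy sketch: shift a feature vector into the frequency domain with an FFT, keep only the low-frequency coefficients, and transform back. The feature shape and the `keep_ratio` cutoff are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def fft_lowpass(features, keep_ratio=0.25):
    """Move a 1-D feature vector into the frequency domain, keep only the
    lowest-frequency coefficients, and transform back.
    keep_ratio is an illustrative hyperparameter, not from the paper."""
    spectrum = np.fft.rfft(features)                 # real-input FFT
    cutoff = max(1, int(len(spectrum) * keep_ratio))
    mask = np.zeros_like(spectrum)
    mask[:cutoff] = 1.0                              # retain low frequencies only
    return np.fft.irfft(spectrum * mask, n=len(features))

# toy example: a smooth "signal" plus high-frequency noise
t = np.linspace(0, 1, 128, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * np.sin(2 * np.pi * 40 * t)
denoised = fft_lowpass(noisy, keep_ratio=0.1)
```

With a cutoff below the noise frequency, the high-frequency component is suppressed while the underlying signal survives, which is the intuition behind filtering "less useful information" in the frequency domain.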

FedAgentBench: Towards Automating Real-world Federated Medical Image Analysis with Server-Client LLM Agents

Pramit Saha, Joshua Strong, Divyanshu Mishra, Cheng Ouyang, J. Alison Noble

arxiv preprint · Sep 28 2025
Federated learning (FL) allows collaborative model training across healthcare sites without sharing sensitive patient data. However, real-world FL deployment is often hindered by complex operational challenges that demand substantial human effort. This includes: (a) selecting appropriate clients (hospitals), (b) coordinating between the central server and clients, (c) client-level data pre-processing, (d) harmonizing non-standardized data and labels across clients, and (e) selecting FL algorithms based on user instructions and cross-client data characteristics. Existing FL work, however, overlooks these practical orchestration challenges. These operational bottlenecks motivate the need for autonomous, agent-driven FL systems, where intelligent agents at each hospital client and the central server agent collaboratively manage FL setup and model training with minimal human intervention. To this end, we first introduce an agent-driven FL framework that captures key phases of real-world FL workflows from client selection to training completion and a benchmark dubbed FedAgentBench that evaluates the ability of LLM agents to autonomously coordinate healthcare FL. Our framework incorporates 40 FL algorithms, each tailored to address diverse task-specific requirements and cross-client characteristics. Furthermore, we introduce a diverse set of complex tasks across 201 carefully curated datasets, simulating 6 modality-specific real-world healthcare environments, viz., Dermatoscopy, Ultrasound, Fundus, Histopathology, MRI, and X-Ray. We assess the agentic performance of 14 open-source and 10 proprietary LLMs spanning small, medium, and large model scales. While some agent cores such as GPT-4.1 and DeepSeek V3 can automate various stages of the FL pipeline, our results reveal that more complex, interdependent tasks based on implicit goals remain challenging for even the strongest models.
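The aggregation step at the heart of most of the FL algorithms such a framework orchestrates can be sketched with the canonical FedAvg baseline: each client trains locally, and the server averages parameters weighted by client dataset size. This is a generic sketch of FedAvg, not FedAgentBench's implementation; the toy clients and sizes are invented for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: dataset-size-weighted average of client models.
    Each element of client_weights is a list of numpy arrays, one per
    model parameter tensor."""
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights))
            for i in range(len(client_weights[0]))]

# three toy "hospital" clients sharing a single-parameter model
clients = [[np.array([1.0, 2.0])],
           [np.array([3.0, 4.0])],
           [np.array([5.0, 6.0])]]
sizes = [100, 100, 200]          # the third hospital holds twice the data
global_params = fed_avg(clients, sizes)
```

Weighting by dataset size keeps the global model from being dominated by small clients, one of the cross-client characteristics an orchestrating agent must account for when choosing an algorithm.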

Using deep learning to improve genetic studies of osteoporosis

Eriksson, T., Nakamori, C.

medrxiv preprint · Sep 28 2025
To evaluate how recent advances in deep learning can improve the construction of quantitative phenotypes for genome-wide association studies (GWAS), we focused on the context of osteoporosis and bone mineral density (BMD) measurements. We applied image classifiers and transformer models to three distinct tasks. First, we developed quantitative estimates of osteoporosis severity using bone X-ray images. Second, we compared standard approaches for handling confounding variables with a multi-factor strategy based on transformer models trained on UK Biobank data. Third, we investigated whether image-based models could predict how single nucleotide polymorphisms (SNPs) associated with BMD influence bone structure. While our results were promising, application of deep learning methods did not yield substantial improvements over established approaches. Nonetheless, our findings highlight the potential of integrating imaging and machine learning techniques to refine phenotype definitions in genetic studies.

Latent Representation Learning from 3D Brain MRI for Interpretable Prediction in Multiple Sclerosis

Trinh Ngoc Huynh, Nguyen Duc Kien, Nguyen Hai Anh, Dinh Tran Hiep, Manuela Vaneckova, Tomas Uher, Jeroen Van Schependom, Stijn Denissen, Tran Quoc Long, Nguyen Linh Trung, Guy Nagels

arxiv preprint · Sep 28 2025
We present InfoVAE-Med3D, a latent-representation learning approach for 3D brain MRI that targets interpretable biomarkers of cognitive decline. Standard statistical models and shallow machine learning often lack power, while most deep learning methods behave as black boxes. Our method extends InfoVAE to explicitly maximize mutual information between images and latent variables, producing compact, structured embeddings that retain clinically meaningful content. We evaluate on two cohorts: a large healthy-control dataset (n=6527) with chronological age, and a clinical multiple sclerosis dataset from Charles University in Prague (n=904) with age and Symbol Digit Modalities Test (SDMT) scores. The learned latents support accurate brain-age and SDMT regression, preserve key medical attributes, and form intuitive clusters that aid interpretation. Across reconstruction and downstream prediction tasks, InfoVAE-Med3D consistently outperforms other VAE variants, indicating stronger information capture in the embedding space. By uniting predictive performance with interpretability, InfoVAE-Med3D offers a practical path toward MRI-based biomarkers and more transparent analysis of cognitive deterioration in neurological disease.
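InfoVAE's mutual-information objective is typically implemented by replacing part of the KL term with a divergence between the aggregate posterior q(z) and the prior p(z); the original InfoVAE paper uses maximum mean discrepancy (MMD) for this. Below is a minimal numpy sketch of a biased RBF-kernel MMD estimate on toy 2-D latents; the sample sizes, bandwidth, and latent dimension are illustrative assumptions, not details from InfoVAE-Med3D.

```python
import numpy as np

def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy between two
    samples under an RBF kernel -- the divergence InfoVAE can use to match
    the aggregate posterior q(z) to the prior p(z)."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

rng = np.random.default_rng(0)
prior = rng.standard_normal((256, 2))          # samples from p(z) = N(0, I)
matched = rng.standard_normal((256, 2))        # latents that match the prior
shifted = rng.standard_normal((256, 2)) + 3.0  # latents far from the prior
```

Latents drawn from the prior produce a near-zero MMD, while shifted latents produce a large one, so minimizing this term pushes the embedding distribution toward the prior without collapsing the per-image information the encoder carries.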

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

pubmed · Sep 26 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. 
The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

NextGen lung disease diagnosis with explainable artificial intelligence.

Veeramani N, S A RS, S SP, S S, Jayaraman P

pubmed · Sep 26 2025
The COVID-19 pandemic has been the most catastrophic global health emergency of the 21st century, resulting in hundreds of millions of reported cases and five million deaths. Chest X-ray (CXR) images are highly valuable for early detection of lung diseases in monitoring and investigating pulmonary disorders such as COVID-19, pneumonia, and tuberculosis. These CXR images offer crucial features about the lung's health condition and can assist in making accurate diagnoses. Manual interpretation of CXR images is challenging even for expert radiologists due to overlapping radiological features. Therefore, Artificial Intelligence (AI)-based image processing has taken a central role in healthcare. However, the predictions of an AI model remain difficult to trust. This can be resolved by implementing explainable artificial intelligence (XAI) tools that transform a black-box AI into a glass-box model. In this research article, we propose a novel XAI-TRANS model with inception-based transfer learning, addressing the challenge of overlapping features in multiclass classification of CXR images. We also propose an improved U-Net lung segmentation model dedicated to extracting the radiological features used for classification. The proposed approach achieved a maximum precision of 98% and accuracy of 97% in multiclass lung disease classification. Leveraging XAI techniques, specifically LIME and Grad-CAM, with an evident improvement of 4.75%, the model provides detailed and accurate explanations for its predictions.
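The Grad-CAM explanations mentioned above reduce to a simple computation: pool the gradients of the class score over each feature map to get channel weights, take the weighted sum of the activation maps, and apply a ReLU. A minimal numpy sketch on synthetic activations and gradients, not the paper's XAI-TRANS pipeline:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's forward activations and the
    gradients of the class score w.r.t. those activations.
    Both arrays have shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

# toy example: one channel fires on the top-left patch and receives all the gradient
acts = np.zeros((2, 4, 4)); acts[0, :2, :2] = 1.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0
heatmap = grad_cam(acts, grads)
```

In a real pipeline the heatmap is upsampled to the input resolution and overlaid on the CXR, highlighting the lung regions that drove the class prediction.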

Model-driven individualized transcranial direct current stimulation for the treatment of insomnia disorder: protocol for a randomized, sham-controlled, double-blind study.

Wang Y, Jia W, Zhang Z, Bai T, Xu Q, Jiang J, Wang Z

pubmed · Sep 26 2025
Insomnia disorder is a prevalent condition associated with significant negative impacts on health and daily functioning. Transcranial direct current stimulation (tDCS) has emerged as a potential technique for improving sleep. However, questions remain regarding its clinical efficacy, and there is a lack of standardized individualized stimulation protocols. This study aims to evaluate the efficacy of model-driven, individualized tDCS for treating insomnia disorder through a randomized, double-blind, sham-controlled trial. A total of 40 patients diagnosed with insomnia disorder will be recruited and randomly assigned to either an active tDCS group or a sham stimulation group. Individualized stimulation parameters will be determined through machine learning-based electric field modeling incorporating structural MRI and EEG data. Participants will undergo 10 sessions of tDCS (5 days/week for 2 consecutive weeks), with follow-up assessments conducted at 2 and 4 weeks after treatment. The primary outcome is the reduction in the Insomnia Severity Index (ISI) score at two weeks post-treatment. Secondary outcomes include changes in sleep parameters, anxiety, and depression scores. This study is expected to provide evidence for the effectiveness of individualized tDCS in improving sleep quality and reducing insomnia symptoms. This integrative approach, combining advanced neuroimaging and electrophysiological biomarkers, has the potential to establish an evidence-based framework for individualized brain stimulation, optimizing therapeutic outcomes. This study is registered at ClinicalTrials.gov (Identifier: NCT06671457) and was registered on 4 November 2024. The online version contains supplementary material available at 10.1186/s12888-025-07347-5.

EqDiff-CT: Equivariant Conditional Diffusion model for CT Image Synthesis from CBCT

Alzahra Altalib, Chunhui Li, Alessandro Perelli

arxiv preprint · Sep 26 2025
Cone-beam computed tomography (CBCT) is widely used for image-guided radiotherapy (IGRT). It provides real-time visualization at low cost and dose. However, photon scattering and beam hardening cause artifacts in CBCT. These include inaccurate Hounsfield Units (HU), reducing reliability for dose calculation and adaptive planning. By contrast, computed tomography (CT) offers better image quality and accurate HU calibration but is usually acquired offline and fails to capture intra-treatment anatomical changes. Thus, accurate CBCT-to-CT synthesis is needed to close the imaging-quality gap in adaptive radiotherapy workflows. To address this, we propose a novel diffusion-based conditional generative model, coined EqDiff-CT, to synthesize high-quality CT images from CBCT. EqDiff-CT employs a denoising diffusion probabilistic model (DDPM) to iteratively inject noise and learn latent representations that enable reconstruction of anatomically consistent CT images. A group-equivariant conditional U-Net backbone, implemented with e2cnn steerable layers, enforces rotational equivariance (cyclic C4 symmetry), helping preserve fine structural details while minimizing noise and artifacts. The system was trained and validated on the SynthRAD2025 dataset, comprising CBCT-CT scans across multiple head-and-neck anatomical sites, and we compared it with advanced methods such as CycleGAN and DDPM. EqDiff-CT provided substantial gains in structural fidelity, HU accuracy, and quantitative metrics. Visual findings further confirm the improved recovery, sharper soft-tissue boundaries, and realistic bone reconstructions. The findings suggest that diffusion models offer a robust and generalizable framework for CBCT improvement. The proposed solution improves image quality as well as clinical confidence in CBCT-guided treatment planning and dose calculation.
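The DDPM building block underlying such models has a closed-form forward process: a sample at step t is x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 − ᾱ_t)·ε, with ᾱ_t the cumulative product of (1 − β). A minimal numpy sketch of this forward noising on a toy image follows; the linear β schedule is the standard DDPM choice, and the equivariant conditional U-Net that EqDiff-CT uses for the reverse (denoising) direction is omitted.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """DDPM forward process: sample x_t ~ q(x_t | x_0) in closed form,
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # standard linear noise schedule
x0 = np.ones((8, 8))                    # toy stand-in for a CBCT slice
x_early, _ = forward_diffuse(x0, 10, betas, rng)    # still close to x0
x_late, _ = forward_diffuse(x0, 999, betas, rng)    # essentially pure noise
```

Training teaches a network to predict ε from x_t (here, conditioned on the CBCT), and sampling runs the learned reverse chain from pure noise to a synthetic CT.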

The Evolution and Clinical Impact of Deep Learning Technologies in Breast MRI.

Fujioka T, Fujita S, Ueda D, Ito R, Kawamura M, Fushimi Y, Tsuboyama T, Yanagawa M, Yamada A, Tatsugami F, Kamagata K, Nozaki T, Matsui Y, Fujima N, Hirata K, Nakaura T, Tateishi U, Naganawa S

pubmed · Sep 26 2025
The integration of deep learning (DL) in breast MRI has revolutionized the field of medical imaging, notably enhancing diagnostic accuracy and efficiency. This review discusses the substantial influence of DL technologies across various facets of breast MRI, including image reconstruction, classification, object detection, segmentation, and prediction of clinical outcomes such as response to neoadjuvant chemotherapy and recurrence of breast cancer. Utilizing sophisticated models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, DL has improved image quality and precision, enabling more accurate differentiation between benign and malignant lesions and providing deeper insights into disease behavior and treatment responses. DL's predictive capabilities for patient-specific outcomes also suggest potential for more personalized treatment strategies. The advancements in DL are pioneering a new era in breast cancer diagnostics, promising more personalized and effective healthcare solutions. Nonetheless, the integration of this technology into clinical practice faces challenges, necessitating further research, validation, and development of legal and ethical frameworks to fully leverage its potential.