Page 55 of 99990 results

Novel UNet-SegNet and vision transformer architectures for efficient segmentation and classification in medical imaging.

Tongbram S, Shimray BA, Singh LS

PubMed · Jul 8 2025
Medical imaging has become an essential tool in the diagnosis and treatment of various diseases, and provides critical insights through ultrasound, MRI, and X-ray modalities. Despite its importance, challenges remain in the accurate segmentation and classification of complex structures owing to factors such as low contrast, noise, and irregular anatomical shapes. This study addresses these challenges by proposing a novel hybrid deep learning model that integrates the strengths of Convolutional Autoencoders (CAE), UNet, and SegNet architectures. In the preprocessing phase, a Convolutional Autoencoder is used to effectively reduce noise while preserving essential image details, ensuring that the images used for segmentation and classification are of high quality. The ability of CAE to denoise images while retaining critical features enhances the accuracy of the subsequent analysis. The developed model employs UNet for multiscale feature extraction and SegNet for precise boundary reconstruction, with Dynamic Feature Fusion integrated at each skip connection to dynamically weight and combine the feature maps from the encoder and decoder. This ensures that both global and local features are effectively captured, while emphasizing the critical regions for segmentation. To further enhance the model's performance, the Hybrid Emperor Penguin Optimizer (HEPO) was employed for feature selection, while the Hybrid Vision Transformer with Convolutional Embedding (HyViT-CE) was used for the classification task. This hybrid approach allows the model to maintain high accuracy across different medical imaging tasks. The model was evaluated using three major datasets: brain tumor MRI, breast ultrasound, and chest X-rays. The results demonstrate exceptional performance, achieving an accuracy of 99.92% for brain tumor segmentation, 99.67% for breast cancer detection, and 99.93% for chest X-ray classification. 
These outcomes highlight the ability of the model to deliver reliable and accurate diagnostics across various medical contexts, underscoring its potential as a valuable tool in clinical settings. The findings of this study will contribute to advancing deep learning applications in medical imaging, addressing existing research gaps, and offering a robust solution for improved patient care.
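The Dynamic Feature Fusion step described above can be pictured as a learned gate that convexly combines encoder and decoder feature maps at each skip connection. The following is a minimal NumPy sketch of that general idea only; the paper's exact formulation is not given in the abstract, and the gate projection `w`, bias `b`, and tensor shapes are illustrative assumptions.

```python
import numpy as np

def dynamic_feature_fusion(enc, dec, w, b):
    """Hypothetical sketch of dynamic skip-connection fusion: a learned,
    per-pixel gate weights encoder vs. decoder feature maps."""
    # Concatenate along the channel axis and project to a per-pixel gate.
    x = np.concatenate([enc, dec], axis=0)                           # (2C, H, W)
    logits = np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]  # (C, H, W)
    gate = 1.0 / (1.0 + np.exp(-logits))                             # sigmoid in (0, 1)
    return gate * enc + (1.0 - gate) * dec                           # convex combination

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
enc = rng.normal(size=(C, H, W))   # encoder feature map (global context)
dec = rng.normal(size=(C, H, W))   # decoder feature map (local detail)
w = rng.normal(size=(C, 2 * C)) * 0.1
b = np.zeros(C)
fused = dynamic_feature_fusion(enc, dec, w, b)
```

Because the gate lies in (0, 1), each fused value is bounded by the corresponding encoder and decoder activations, so neither the global nor the local signal can be discarded entirely.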

MTMedFormer: multi-task vision transformer for medical imaging with federated learning.

Nath A, Shukla S, Gupta P

PubMed · Jul 8 2025
Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers' ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation.

Deep Learning Approach for Biomedical Image Classification.

Doshi RV, Badhiye SS, Pinjarkar L

PubMed · Jul 8 2025
Biomedical image classification is of paramount importance in enhancing diagnostic precision and improving patient outcomes across diverse medical disciplines. In recent years, the advent of deep learning methodologies has significantly transformed this domain by facilitating notable advancements in image analysis and classification endeavors. This paper provides a thorough overview of the application of deep learning techniques in biomedical image classification, encompassing various types of healthcare data, including medical images derived from modalities such as mammography, histopathology, and radiology. A detailed discourse on deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and advanced models such as generative adversarial networks (GANs), is presented. Additionally, we delineate the distinctions between supervised, unsupervised, and reinforcement learning approaches, along with their respective roles within the context of biomedical imaging. This study systematically investigates 50 deep learning methodologies employed in the healthcare sector, elucidating their effectiveness in various tasks, including disease detection, image segmentation, and classification. It particularly emphasizes models that have been trained on publicly available datasets, thereby highlighting the significant role of open-access data in fostering advancements in AI-driven healthcare innovations. Furthermore, this review accentuates the transformative potential of deep learning in the realm of biomedical image analysis and delineates potential avenues for future research within this rapidly evolving field.

A confidence-guided Unsupervised domain adaptation network with pseudo-labeling and deformable CNN-transformer for medical image segmentation.

Zhou J, Xu Y, Liu Z, Pfaender F, Liu W

PubMed · Jul 8 2025
Unsupervised domain adaptation (UDA) methods have achieved significant progress in medical image segmentation. Nevertheless, the significant differences between the source and target domains remain a daunting barrier, creating an urgent need for more robust cross-domain solutions. Current UDA techniques generally employ a fixed, unvarying feature alignment procedure to reduce inter-domain differences throughout the training process. This rigidity disregards the shifting nature of feature distributions throughout the training process, leading to suboptimal performance in boundary delineation and detail retention on the target domain. A novel confidence-guided unsupervised domain adaptation network (CUDA-Net) is introduced to overcome persistent domain gaps, adapt to shifting feature distributions during training, and enhance boundary delineation in the target domain. This proposed network adaptively aligns features by tracking cross-domain distribution shifts throughout training, starting with adversarial alignment at early stages (coarse) and transitioning to pseudo-label-driven alignment at later stages (fine-grained), thereby leading to more accurate segmentation in the target domain. A confidence-weighted mechanism then refines these pseudo labels by prioritizing high-confidence regions while allowing low-confidence areas to be gradually explored, thereby enhancing both label reliability and overall model stability. Experiments on three representative medical image datasets, namely MMWHS17, BraTS2021, and VS-Seg, confirm the superiority of CUDA-Net. Notably, CUDA-Net outperforms eight leading methods in terms of overall segmentation accuracy (Dice) and boundary extraction precision (ASD), highlighting that it offers an efficient and reliable solution for cross-domain medical image segmentation.

[Standardization, digitalization, and intelligentization represent the future development direction of hip arthroscopy diagnosis and treatment technology].

Li CB, Zhang J, Wang L, Wang YT, Kang XQ, Wang MX

PubMed · Jul 8 2025
In recent years, hip arthroscopy has made great progress and has been extended to the treatment of intra-articular or periarticular diseases. However, the complex structure of the hip joint, high technical operation requirements and relatively long learning curve have hindered the popularization and development of hip arthroscopy in China. Therefore, on the one hand, it is necessary to promote the research and training of standardized techniques for the diagnosis of hip disease and the treatment of arthroscopic surgery, so as to improve the safety, effectiveness and popularization of the technology. On the other hand, our organization proactively leverages cutting-edge digitalization and intelligentization technologies, including medical image digitalization, medical big data analytics, artificial intelligence, surgical navigation and robotic control, virtual reality, telemedicine, and 5G communication technology. We conduct a range of innovative research and development initiatives such as intelligent-assisted diagnosis of hip diseases, digital preoperative planning, surgical intelligent navigation and robotic procedures, and smart rehabilitation solutions. These efforts aim to facilitate a digitalization and intelligentization leap in technology and continuously enhance the precision of diagnosis and treatment. In conclusion, standardization promotes the homogenization of diagnosis and treatment, while digitalization and intelligentization facilitate the precision of operations. The synergy of the two lays the foundation for personalized diagnosis and treatment and continuous innovation, ultimately driving the rapid development of hip arthroscopy technology.

Artificial intelligence in cardiac sarcoidosis: ECG, Echo, CPET and MRI.

Umeojiako WI, Lüscher T, Sharma R

PubMed · Jul 8 2025
Cardiac sarcoidosis is a form of inflammatory cardiomyopathy that varies in its clinical presentation. It is associated with significant clinical complications such as high-degree atrioventricular block, ventricular tachycardia, heart failure and sudden cardiac death. It is challenging to diagnose clinically, and its increasing detection rate may reflect both growing clinician awareness of the disease and a rising incidence. Prompt diagnosis and risk stratification reduce morbidity and mortality from cardiac sarcoidosis. Noninvasive diagnostic modalities such as ECG, echocardiography, PET/computed tomography (PET/CT) and cardiac MRI (cMRI) are playing increasingly important roles in cardiac sarcoidosis diagnosis. Artificial intelligence-driven applications are increasingly being applied to these diagnostic modalities to improve the detection of cardiac sarcoidosis. Review of the recent literature suggests that artificial intelligence-based algorithms in PET/CT and cMRI can predict cardiac sarcoidosis as accurately as trained experts; however, there are few published studies on artificial intelligence-based algorithms in ECG and echocardiography. The impressive advances in artificial intelligence have the potential to transform patient screening in cardiac sarcoidosis, aid prompt diagnosis and appropriate risk stratification, and change clinical practice.

Sequential Attention-based Sampling for Histopathological Analysis

Tarun G, Naman Malpani, Gugan Thoppe, Sridharan Devarajan

arXiv preprint · Jul 7 2025
Deep neural networks are increasingly applied for automated histopathology. Yet, whole-slide images (WSIs) are often acquired at gigapixel sizes, rendering it computationally infeasible to analyze them entirely at high resolution. Diagnostic labels are largely available only at the slide-level, because expert annotation of images at a finer (patch) level is both laborious and expensive. Moreover, regions with diagnostic information typically occupy only a small fraction of the WSI, making it inefficient to examine the entire slide at full resolution. Here, we propose SASHA (Sequential Attention-based Sampling for Histopathological Analysis), a deep reinforcement learning approach for efficient analysis of histopathological images. First, SASHA learns informative features with a lightweight hierarchical, attention-based multiple instance learning (MIL) model. Second, SASHA samples intelligently and zooms selectively into a small fraction (10-20%) of high-resolution patches, to achieve reliable diagnosis. We show that SASHA matches state-of-the-art methods that analyze the WSI fully at high-resolution, albeit at a fraction of their computational and memory costs. In addition, it significantly outperforms competing, sparse sampling methods. We propose SASHA as an intelligent sampling model for medical imaging challenges that involve automated diagnosis with exceptionally large images containing sparsely informative features.
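SASHA's selective zooming can be caricatured with a simple top-k rule: rank patches by an attention score from the low-resolution pass and keep only a small fraction for high-resolution analysis. The actual model chooses patches sequentially with reinforcement learning; this sketch, with made-up attention scores, conveys only the sampling-budget idea.

```python
import numpy as np

def select_patches_by_attention(attn_scores, fraction=0.15):
    """Keep the top `fraction` of patches ranked by attention score.
    A static stand-in for SASHA's sequential RL-based patch selection."""
    k = max(1, int(round(fraction * len(attn_scores))))
    return np.argsort(attn_scores)[::-1][:k]   # indices of top-k patches

# Ten patches from a low-resolution pass; two carry most of the signal.
attn = np.array([0.01, 0.90, 0.05, 0.70, 0.02, 0.03, 0.80, 0.04, 0.02, 0.01])
chosen = select_patches_by_attention(attn, fraction=0.2)
# Only the chosen patches would be fetched at full resolution.
```

With a 10-20% budget, the expensive high-resolution model runs on a handful of patches instead of the whole gigapixel slide, which is where the claimed compute and memory savings come from.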

Development and retrospective validation of an artificial intelligence system for diagnostic assessment of prostate biopsies: study protocol.

Mulliqi N, Blilie A, Ji X, Szolnoky K, Olsson H, Titus M, Martinez Gonzalez G, Boman SE, Valkonen M, Gudlaugsson E, Kjosavik SR, Asenjo J, Gambacorta M, Libretti P, Braun M, Kordek R, Łowicki R, Hotakainen K, Väre P, Pedersen BG, Sørensen KD, Ulhøi BP, Rantalainen M, Ruusuvuori P, Delahunt B, Samaratunga H, Tsuzuki T, Janssen EAM, Egevad L, Kartasalo K, Eklund M

PubMed · Jul 7 2025
Histopathological evaluation of prostate biopsies using the Gleason scoring system is critical for prostate cancer diagnosis and treatment selection. However, grading variability among pathologists can lead to inconsistent assessments, risking inappropriate treatment. Similar challenges complicate the assessment of other prognostic features like cribriform cancer morphology and perineural invasion. Many pathology departments are also facing an increasingly unsustainable workload due to rising prostate cancer incidence and a decreasing pathologist workforce coinciding with increasing requirements for more complex assessments and reporting. Digital pathology and artificial intelligence (AI) algorithms for analysing whole slide images show promise in improving the accuracy and efficiency of histopathological assessments. Studies have demonstrated AI's capability to diagnose and grade prostate cancer comparably to expert pathologists. However, external validations on diverse data sets have been limited and often show reduced performance. Historically, there have been no well-established guidelines for AI study designs and validation methods. Diagnostic assessments of AI systems often lack preregistered protocols and rigorous external cohort sampling, essential for reliable evidence of their safety and accuracy. This study protocol covers the retrospective validation of an AI system for prostate biopsy assessment. The primary objective of the study is to develop a high-performing and robust AI model for diagnosis and Gleason scoring of prostate cancer in core needle biopsies, and at scale evaluate whether it can generalise to fully external data from independent patients, pathology laboratories and digitalisation platforms. The secondary objectives cover AI performance in estimating cancer extent and detecting cribriform prostate cancer and perineural invasion. 
This protocol outlines the steps for data collection, predefined partitioning of data cohorts for AI model training and validation, model development and predetermined statistical analyses, ensuring systematic development and comprehensive validation of the system. The protocol adheres to Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis+AI (TRIPOD+AI), Protocol Items for External Cohort Evaluation of a Deep Learning System in Cancer Diagnostics (PIECES), Checklist for AI in Medical Imaging (CLAIM) and other relevant best practices. Data collection and usage were approved by the respective ethical review boards of each participating clinical laboratory, and centralised anonymised data handling was approved by the Swedish Ethical Review Authority. The study will be conducted in agreement with the Helsinki Declaration. The findings will be disseminated in peer-reviewed publications (open access).

An enhanced fusion of transfer learning models with optimization based clinical diagnosis of lung and colon cancer using biomedical imaging.

Vinoth NAS, Kalaivani J, Arieth RM, Sivasakthiselvan S, Park GC, Joshi GP, Cho W

PubMed · Jul 7 2025
Lung and colon cancers (LCC) are among the leading causes of human death and disease. Early diagnosis of these disorders involves various tests, namely ultrasound (US), magnetic resonance imaging (MRI), and computed tomography (CT). Beyond diagnostic imaging, histopathology is an effective method that delivers cell-level imaging of the tissue under inspection; its manual, labor-intensive nature, however, restricts the number of patients who receive a final diagnosis early enough for timely treatment. Furthermore, manual assessment is prone to inter-observer error. Clinical informatics is an interdisciplinary field that integrates healthcare, information technology, and data analytics to improve patient care, clinical decision-making, and medical research. Recently, deep learning (DL) has proved effective in the medical sector, and cancer diagnosis can be made automatically by utilizing the capabilities of artificial intelligence (AI), enabling faster analysis of more cases cost-effectively. With extensive technical developments, DL has emerged as an effective tool in medical settings, mainly in medical imaging. This study presents an Enhanced Fusion of Transfer Learning Models and Optimization-Based Clinical Biomedical Imaging for Accurate Lung and Colon Cancer Diagnosis (FTLMO-BILCCD) model. The main objective of the FTLMO-BILCCD technique is to develop an efficient method for LCC detection using clinical biomedical imaging. Initially, the image pre-processing stage applies the median filter (MF) model to eliminate unwanted noise from the input image data. Furthermore, fusion models such as CapsNet, EfficientNetV2, and MobileNet-V3 Large are employed for feature extraction. The FTLMO-BILCCD technique implements a hybrid of temporal pattern attention and bidirectional gated recurrent unit (TPA-BiGRU) for classification. 
Finally, the beluga whale optimization (BWO) technique tunes the hyperparameters of the TPA-BiGRU model, yielding greater classification performance. The FTLMO-BILCCD approach is evaluated on the LCC-HI dataset, where it showed a superior accuracy of 99.16% over existing models.
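The median-filter preprocessing step replaces each pixel with the median of its neighborhood, which removes impulse (salt-and-pepper) noise while preserving edges. A minimal NumPy sketch; the 3x3 window is an assumption, since the paper's filter parameters are not stated in the abstract:

```python
import numpy as np

def median_filter(img, size=3):
    """Naive median filter: each pixel becomes the median of its
    size x size window (edges handled by replicate padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A flat image with one salt-noise pixel: the filter removes the spike,
# because a single outlier can never be the median of a 9-value window.
img = np.ones((5, 5))
img[2, 2] = 255.0
clean = median_filter(img)
```

In practice one would use a vectorized library routine for speed; the loop form above just makes the windowed-median operation explicit.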

Prediction of Motor Symptom Progression of Parkinson's Disease Through Multimodal Imaging-Based Machine Learning.

Dai Y, Imami M, Hu R, Zhang C, Zhao L, Kargilis DC, Zhang H, Yu G, Liao WH, Jiao Z, Zhu C, Yang L, Bai HX

PubMed · Jul 7 2025
The unrelenting progression of Parkinson's disease (PD) leads to severely impaired quality of life, with considerable variability in progression rates among patients. Identifying biomarkers of PD progression could improve clinical monitoring and management. Radiomics, which facilitates data extraction from imaging for use in machine learning models, offers a promising approach to this challenge. This study investigated the use of multi-modality imaging, combining conventional magnetic resonance imaging (MRI) and dopamine transporter single photon emission computed tomography (DAT-SPECT), to predict motor progression in PD. Motor progression was measured by changes in the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) motor subscale scores. Radiomic features were selected from the midbrain region in MRI and caudate nucleus, putamen, and ventral striatum in DAT-SPECT. Patients were stratified into fast progression vs. slow progression based on change in MDS-UPDRS in follow-up. Various feature selection methods and machine learning classifiers were evaluated for each modality, and the best-performing models were combined into an ensemble. On the internal test set, the ensemble model, which integrated clinical information, T1WI, T2WI and DAT-SPECT, achieved a ROC AUC of 0.93 (95% CI: 0.80-1.00), PR AUC of 0.88 (95% CI: 0.61-1.00), accuracy of 0.85 (95% CI: 0.70-0.89), sensitivity of 0.72 (95% CI: 0.43-1.00), and specificity of 0.92 (95% CI: 0.77-1.00). On the external test set, the ensemble model outperformed single-modality models with a ROC AUC of 0.77 (95% CI: 0.53-0.93), PR AUC of 0.79 (95% CI: 0.56-0.95), accuracy of 0.68 (95% CI: 0.50-0.86), sensitivity of 0.53 (95% CI: 0.27-0.82), and specificity of 0.82 (95% CI: 0.55-1.00). In conclusion, this study developed an imaging-based model to identify baseline characteristics predictive of disease progression in PD patients. 
The findings highlight the strength of using multiple imaging modalities and integrating imaging data with clinical information to enhance the prediction of motor progression in PD.
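The ensemble step in a study like this can be sketched as a (possibly weighted) average of per-modality probabilities of fast progression. The study's actual fusion rule is not specified in the abstract, and the per-modality scores below are hypothetical.

```python
import numpy as np

def ensemble_predict(modality_probs, weights=None):
    """Average per-modality fast-progression probabilities into one
    ensemble score. Equal weights by default; a generic illustration,
    not the study's published combination rule."""
    probs = np.asarray(modality_probs, dtype=float)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    return float(np.dot(weights, probs))

# Hypothetical scores for one patient: clinical, T1WI, T2WI, DAT-SPECT.
p = ensemble_predict([0.9, 0.7, 0.6, 0.8])
fast_progressor = p >= 0.5   # threshold for the fast-progression class
```

Averaging calibrated probabilities across modalities tends to reduce variance relative to any single model, which is consistent with the ensemble outperforming single-modality models on the external test set.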