Deep Learning Approach for Biomedical Image Classification.

Doshi RV, Badhiye SS, Pinjarkar L

PubMed paper · Jul 8 2025
Biomedical image classification is of paramount importance in enhancing diagnostic precision and improving patient outcomes across diverse medical disciplines. In recent years, the advent of deep learning methodologies has significantly transformed this domain by facilitating notable advancements in image analysis and classification endeavors. This paper provides a thorough overview of the application of deep learning techniques in biomedical image classification, encompassing various types of healthcare data, including medical images derived from modalities such as mammography, histopathology, and radiology. A detailed discourse on deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and advanced models such as generative adversarial networks (GANs), is presented. Additionally, we delineate the distinctions between supervised, unsupervised, and reinforcement learning approaches, along with their respective roles within the context of biomedical imaging. This study systematically investigates 50 deep learning methodologies employed in the healthcare sector, elucidating their effectiveness in various tasks, including disease detection, image segmentation, and classification. It particularly emphasizes models that have been trained on publicly available datasets, thereby highlighting the significant role of open-access data in fostering advancements in AI-driven healthcare innovations. Furthermore, this review accentuates the transformative potential of deep learning in the realm of biomedical image analysis and delineates potential avenues for future research within this rapidly evolving field.
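For readers who want a concrete starting point, the sketch below shows a minimal CNN image classifier of the kind this review surveys, written in PyTorch; the single grayscale channel, 224x224 input size, and two-class output are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not from the review): a small CNN image classifier in PyTorch.
# The single grayscale channel, 224x224 input, and two classes are assumptions.
import torch
import torch.nn as nn

class SmallBiomedCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One hypothetical grayscale patch, e.g. from a mammogram or histopathology slide.
logits = SmallBiomedCNN(num_classes=2)(torch.randn(1, 1, 224, 224))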

Efficacy of Image Similarity as a Metric for Augmenting Small Dataset Retinal Image Segmentation

Thomas Wallace, Ik Siong Heng, Senad Subasic, Chris Messenger

arXiv preprint · Jul 7 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of various machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study we evaluate the relationship between this metric and the improvement that synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), provide when augmenting Diabetes-related Macular Edema (DME) intraretinal fluid segmentation performed by a U-Net model with limited amounts of training data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high-FID) datasets do not improve segmentation significantly. As the FID between the training and augmenting datasets decreases, the augmentation datasets are shown to contribute significant and robust improvements in image segmentation. Finally, we find significant evidence that synthetic and standard augmentations follow separate log-normal trends between FID and improvements in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) are more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
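As context for the abstract above, the sketch below computes FID from precomputed Inception features using the standard formula FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2(S_a S_b)^(1/2)); it is a generic illustration, not the authors' code.

# Sketch (not the authors' code): FID between two feature sets, following
# FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 * (S_a S_b)^(1/2)).
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    # feats_*: (n_images, n_features) activations from a fixed Inception network.
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))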

ViTaL: A Multimodality Dataset and Benchmark for Multi-pathological Ovarian Tumor Recognition

You Zhou, Lijiang Chen, Guangxia Cui, Wenpei Bai, Yu Guo, Shuchang Lyu, Guangliang Cheng, Qi Zhao

arXiv preprint · Jul 6 2025
Ovarian tumor, a common gynecological disease, can rapidly deteriorate into a serious health crisis when not detected early, posing significant threats to women's health. Deep neural networks have the potential to identify ovarian tumors and thereby reduce mortality rates, but limited public datasets hinder progress. To address this gap, we introduce a vital ovarian tumor pathological recognition dataset called ViTaL that contains Visual, Tabular and Linguistic modality data of 496 patients across six pathological categories. The ViTaL dataset comprises three subsets corresponding to different patient data modalities: visual data from 2216 two-dimensional ultrasound images, tabular data from medical examinations of 496 patients, and linguistic data from ultrasound reports of 496 patients. Merely distinguishing between benign and malignant ovarian tumors is insufficient in clinical practice. To enable multi-pathology classification of ovarian tumors, we propose ViTaL-Net, based on a Triplet Hierarchical Offset Attention Mechanism (THOAM) that minimizes the loss incurred during feature fusion of multi-modal data and effectively enhances the relevance and complementarity between information from different modalities. ViTaL-Net serves as a benchmark for the task of multi-pathology, multi-modality classification of ovarian tumors. In our comprehensive experiments, the proposed method exhibited satisfactory performance, achieving accuracies exceeding 90% on the two most common pathological types of ovarian tumor and an overall performance of 85%. Our dataset and code are available at https://github.com/GGbond-study/vitalnet.
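The THOAM fusion mechanism is specific to the paper, so the sketch below shows only a generic late-fusion baseline that concatenates image, tabular, and text embeddings before classification; the embedding sizes and six-class output are assumptions for illustration.

# Sketch: a generic late-fusion baseline, NOT the paper's THOAM. Embedding sizes
# (512 image, 32 tabular, 768 text) and six output classes are assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, img_dim=512, tab_dim=32, txt_dim=768, num_classes=6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + tab_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, img_emb, tab_feats, txt_emb):
        # Concatenate per-patient embeddings from the three modalities.
        return self.head(torch.cat([img_emb, tab_feats, txt_emb], dim=-1))

logits = LateFusionClassifier()(torch.randn(4, 512), torch.randn(4, 32), torch.randn(4, 768))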

Early warning and stratification of the elderly cardiopulmonary dysfunction-related diseases: multicentre prospective study protocol.

Zhou X, Jin Q, Xia Y, Guan Y, Zhang Z, Guo Z, Liu Z, Li C, Bai Y, Hou Y, Zhou M, Liao WH, Lin H, Wang P, Liu S, Fan L

PubMed paper · Jul 5 2025
In China, there is a lack of standardised clinical imaging databases for multidimensional evaluation of cardiopulmonary diseases. To address this gap, this study protocol describes a project to integrate clinical imaging technologies and build a multicentre database for early warning and stratification of cardiopulmonary dysfunction in the elderly. The study employs a cross-sectional design, enrolling over 6000 elderly participants from five regions across China to evaluate cardiopulmonary function and related diseases. Based on clinical criteria, participants are categorised into three groups: a healthy cardiopulmonary function group, a functional decrease group and an established cardiopulmonary diseases group. All subjects will undergo comprehensive assessments including chest CT scans, echocardiography, and laboratory examinations. Additionally, at least 50 subjects will undergo cardiopulmonary exercise testing (CPET). By leveraging artificial intelligence technology, multimodal data will be integrated to establish reference ranges for cardiopulmonary function in the elderly population, as well as to develop early-warning models and standardised severity-grading models. The study has been approved by the local ethics committee of Shanghai Changzheng Hospital (approval number: 2022SL069A). All participants will sign an informed consent form. The results will be disseminated through peer-reviewed publications and conferences.

SAMed-2: Selective Memory Enhanced Medical Segment Anything Model

Zhiling Yan, Sifan Song, Dingjie Song, Yiwei Li, Rong Zhou, Weixiang Sun, Zhennong Chen, Sekeun Kim, Hui Ren, Tianming Liu, Quanzheng Li, Xiang Li, Lifang He, Lichao Sun

arXiv preprint · Jul 4 2025
Recent "segment anything" efforts show promise by learning from large-scale data, but adapting such models directly to medical images remains challenging due to the complexity of medical data, noisy annotations, and continual learning requirements across diverse modalities and anatomical structures. In this work, we propose SAMed-2, a new foundation model for medical image segmentation built upon the SAM-2 architecture. Specifically, we introduce a temporal adapter into the image encoder to capture image correlations and a confidence-driven memory mechanism to store high-certainty features for later retrieval. This memory-based strategy counters the pervasive noise in large-scale medical datasets and mitigates catastrophic forgetting when encountering new tasks or modalities. To train and evaluate SAMed-2, we curate MedBank-100k, a comprehensive dataset spanning seven imaging modalities and 21 medical segmentation tasks. Our experiments on both internal benchmarks and 10 external datasets demonstrate superior performance over state-of-the-art baselines in multi-task scenarios. The code is available at: https://github.com/ZhilingYan/Medical-SAM-Bench.

PhotIQA: A photoacoustic image data set with image quality ratings

Anna Breger, Janek Gröhl, Clemens Karner, Thomas R Else, Ian Selby, Jonathan Weir-McCall, Carola-Bibiane Schönlieb

arXiv preprint · Jul 4 2025
Image quality assessment (IQA) is crucial in the evaluation stage of novel algorithms operating on images, including traditional and machine learning-based methods. Due to the lack of available quality-rated medical images, most commonly used IQA methods employing reference images (i.e. full-reference IQA) have been developed and tested for natural images. Reported application inconsistencies arising when employing such measures for medical images are not surprising, as medical images exhibit different properties than natural images. In photoacoustic imaging (PAI), especially, standard benchmarking approaches for assessing the quality of image reconstructions are lacking. PAI is a multi-physics imaging modality in which two inverse problems have to be solved, which makes the application of IQA measures uniquely challenging due to both acoustic and optical artifacts. To support the development and testing of full- and no-reference IQA measures, we assembled PhotIQA, a data set consisting of 1134 reconstructed photoacoustic (PA) images that were rated by two experts across five quality properties (overall quality, edge visibility, homogeneity, inclusion and background intensity); the detailed ratings enable usage beyond PAI. To allow full-reference assessment, highly characterised imaging test objects were used, providing a ground truth. Our baseline experiments show that HaarPSI_med significantly outperforms SSIM in correlating with the quality ratings (SRCC: 0.83 vs. 0.62). The dataset is publicly available at https://doi.org/10.5281/zenodo.13325196.
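To illustrate the kind of evaluation reported above (not the dataset's own scripts), the sketch below computes the Spearman rank correlation (SRCC) between an IQA measure's scores and expert ratings, using SSIM from scikit-image as the example measure.

# Sketch (not the dataset's evaluation code): SRCC between an IQA measure's scores
# and expert ratings, with SSIM from scikit-image as the example measure.
import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity

def srcc_for_ssim(references, reconstructions, expert_ratings):
    # references/reconstructions: equal-length lists of 2D arrays; expert_ratings: floats.
    scores = [
        structural_similarity(ref, rec, data_range=float(ref.max() - ref.min()))
        for ref, rec in zip(references, reconstructions)
    ]
    rho, _ = spearmanr(scores, expert_ratings)
    return rho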

A Pan-Organ Vision-Language Model for Generalizable 3D CT Representations.

Beeche C, Kim J, Tavolinejad H, Zhao B, Sharma R, Duda J, Gee J, Dako F, Verma A, Morse C, Hou B, Shen L, Sagreiya H, Davatzikos C, Damrauer S, Ritchie MD, Rader D, Long Q, Chen T, Kahn CE, Chirinos J, Witschey WR

PubMed paper · Jul 3 2025
Generalizable foundation models for computed tomographic (CT) medical imaging data are emerging AI tools anticipated to vastly improve clinical workflow efficiency. However, existing models are typically trained within narrow imaging contexts, including limited anatomical coverage, contrast settings, and clinical indications. These constraints reduce their ability to generalize across the broad spectrum of real-world presentations encountered in volumetric CT imaging data. We introduce Percival, a vision-language foundation model trained on over 400,000 CT volumes and paired radiology reports from more than 50,000 participants enrolled in the Penn Medicine BioBank. Percival employs a dual-encoder architecture with a transformer-based image encoder and a BERT-style language encoder, aligned via symmetric contrastive learning. Percival was validated on imaging data from over 20,000 participants, encompassing over 100,000 CT volumes. In image-text recall tasks, Percival outperforms models trained on limited anatomical windows. To assess Percival's clinical knowledge, we evaluated its biologic, phenotypic and prognostic relevance using laboratory-wide and phenome-wide association studies and survival analyses, uncovering a rich latent structure aligned with physiological measurements and disease phenotypes.
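The symmetric contrastive alignment described above follows the familiar dual-encoder (CLIP-style) recipe; the sketch below is a generic implementation of that objective, not Percival's released code.

# Sketch (not Percival's released code): the symmetric contrastive objective used by
# dual-encoder vision-language models, applied to a batch of paired embeddings.
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(image_emb, text_emb, temperature: float = 0.07):
    # image_emb, text_emb: (batch, dim); row i of each comes from the same study.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))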

A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation.

Jin H, Che H, He S, Chen H

PubMed paper · Jul 3 2025
Despite the progress of radiology report generation (RRG), existing works face two challenges: 1) The performance in clinical efficacy is unsatisfactory, especially for descriptions of lesion attributes; 2) The generated text lacks explainability, making it difficult for radiologists to trust the results. To address these challenges, we focus on a trustworthy RRG model, which not only generates accurate descriptions of abnormalities but also provides the basis for its predictions. To this end, we propose a framework named chain of diagnosis (CoD), which maintains a chain of diagnostic processes for clinically accurate and explainable RRG. It first generates question-answer (QA) pairs via diagnostic conversation to extract key findings, then prompts a large language model with QA diagnoses for accurate generation. To enhance explainability, a diagnosis grounding module is designed to match QA diagnoses and generated sentences, where the diagnoses act as a reference. Moreover, a lesion grounding module is designed to locate abnormalities in the image, further improving the working efficiency of radiologists. To facilitate label-efficient training, we propose an omni-supervised learning strategy with clinical consistency to leverage various types of annotations from different datasets. Our efforts lead to 1) an omni-labeled RRG dataset with QA pairs and lesion boxes; 2) an evaluation tool for assessing the accuracy of reports in describing lesion location and severity; and 3) extensive experiments demonstrating the effectiveness of CoD, which consistently outperforms both specialist and generalist models on two RRG benchmarks and shows promising explainability by accurately grounding generated sentences to QA diagnoses and images.

A computationally frugal open-source foundation model for thoracic disease detection in lung cancer screening programs

Niccolò McConnell, Pardeep Vasudev, Daisuke Yamada, Daryl Cheng, Mehran Azimbagirad, John McCabe, Shahab Aslani, Ahmed H. Shahin, Yukun Zhou, The SUMMIT Consortium, Andre Altmann, Yipeng Hu, Paul Taylor, Sam M. Janes, Daniel C. Alexander, Joseph Jacob

arXiv preprint · Jul 2 2025
Low-dose computed tomography (LDCT) imaging employed in lung cancer screening (LCS) programs is increasing in uptake worldwide. LCS programs herald a generational opportunity to simultaneously detect cancer and non-cancer-related early-stage lung disease. Yet these efforts are hampered by a shortage of radiologists to interpret scans at scale. Here, we present TANGERINE, a computationally frugal, open-source vision foundation model for volumetric LDCT analysis. Designed for broad accessibility and rapid adaptation, TANGERINE can be fine-tuned off the shelf for a wide range of disease-specific tasks with limited computational resources and training data. Relative to models trained from scratch, TANGERINE demonstrates fast convergence during fine-tuning, thereby requiring significantly fewer GPU hours, and displays strong label efficiency, achieving comparable or superior performance with a fraction of fine-tuning data. Pretrained using self-supervised learning on over 98,000 thoracic LDCTs, including the UK's largest LCS initiative to date and 27 public datasets, TANGERINE achieves state-of-the-art performance across 14 disease classification tasks, including lung cancer and multiple respiratory diseases, while generalising robustly across diverse clinical centres. By extending a masked autoencoder framework to 3D imaging, TANGERINE offers a scalable solution for LDCT analysis, departing from recent closed, resource-intensive models by combining architectural simplicity, public availability, and modest computational requirements. Its accessible, open-source lightweight design lays the foundation for rapid integration into next-generation medical imaging tools that could transform LCS initiatives, allowing them to pivot from a singular focus on lung cancer detection to comprehensive respiratory disease management in high-risk populations.
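To illustrate the masked-autoencoder idea extended to 3D (this is not TANGERINE's code), the sketch below randomly masks cubic patches of a CT volume and returns only the visible patches that an encoder would see; the patch size and mask ratio are assumptions.

# Sketch (not TANGERINE's code): random masking of cubic patches from a CT volume,
# the core step of a masked-autoencoder objective extended to 3D.
import torch

def mask_3d_patches(volume: torch.Tensor, patch: int = 16, mask_ratio: float = 0.75):
    # volume: (D, H, W) tensor whose dimensions are divisible by `patch`.
    d, h, w = (s // patch for s in volume.shape)
    patches = volume.reshape(d, patch, h, patch, w, patch).permute(0, 2, 4, 1, 3, 5)
    patches = patches.reshape(d * h * w, -1)      # (num_patches, patch**3)
    keep = int(patches.size(0) * (1 - mask_ratio))
    visible = torch.randperm(patches.size(0))[:keep]
    return patches[visible], visible              # visible patches and their indices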

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arXiv preprint · Jul 2 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, slice thickness, etc. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation compared to those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.
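For readers benchmarking on datasets like this one, the sketch below computes the Dice similarity coefficient, the standard voxel-overlap metric for segmentation; it is a generic implementation, not the PanTS evaluation code.

# Sketch (not the PanTS evaluation code): the Dice similarity coefficient, the standard
# voxel-overlap metric for reporting tumor and organ segmentation performance.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # pred, target: binary masks of identical shape, e.g. one CT volume each.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))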