
A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.

Khan SD, Basalamah S, Lbath A

pubmed · Jul 1 2025
Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods struggle to detect specific diseases accurately because of issues inherent to fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification, designed to achieve high accuracy in identifying various retinal diseases while overcoming the challenges inherent to fundus images. The framework consists of three main modules. The first module, a Densely Connected Multidilated Convolutional Neural Network (DCM-CNN), extracts global contextual information by integrating novel Casual Dilated Dense Convolutional Blocks (CDDCBs). The second module, a Local-Patch-based Convolutional Neural Network (LP-CNN), uses the Class Activation Map (CAM) obtained from DCM-CNN to extract local, fine-grained information. The third module is a synergic network that takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion to identify the correct class and minimize errors. The framework is evaluated through a comprehensive set of quantitative and qualitative experiments on two publicly available benchmark datasets, RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance than reference methods on both datasets.
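The abstract does not detail the fusion step; the following is a minimal PyTorch sketch of the synergic-fusion idea, assuming each branch emits a pooled feature vector. Dimensions, layer sizes and the class count are illustrative assumptions, not the authors' configuration.

```python
# Sketch: concatenate pooled features from a contextual branch (DCM-CNN role)
# and a CAM-guided local-patch branch (LP-CNN role), then classify via a
# fully connected head. All sizes are placeholders.
import torch
import torch.nn as nn

class SynergicHead(nn.Module):
    def __init__(self, global_dim=1280, local_dim=1280, num_classes=28):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(global_dim + local_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, global_feat, local_feat):
        # global_feat: (B, global_dim), local_feat: (B, local_dim)
        fused = torch.cat([global_feat, local_feat], dim=1)
        return self.classifier(fused)

head = SynergicHead()
logits = head(torch.randn(4, 1280), torch.randn(4, 1280))  # -> (4, 28)
```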

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

pubmed · Jul 1 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach that leverages a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and then extended to additional datasets to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the predicted noise from the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
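As a rough illustration of the input-fusion step described above (original image, predicted noise, and segmentation map fed to EfficientNet-B0), here is a hedged PyTorch sketch. The 5-channel layout, the torchvision backbone and the binary head are assumptions, and the transformer-decoder fusion stage is omitted.

```python
# Sketch: stack the RGB patch, the DDPM's predicted noise and the segmentation
# map as input channels, and adapt EfficientNet-B0's stem conv accordingly.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights=None)
old = model.features[0][0]  # stem Conv2d expecting 3 channels
model.features[0][0] = nn.Conv2d(5, old.out_channels,
                                 kernel_size=old.kernel_size,
                                 stride=old.stride,
                                 padding=old.padding,
                                 bias=False)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # assumed binary output

x = torch.cat([torch.randn(2, 3, 224, 224),   # histology patch (RGB)
               torch.randn(2, 1, 224, 224),   # predicted noise (assumed 1 channel)
               torch.randn(2, 1, 224, 224)],  # segmentation map
              dim=1)
logits = model(x)  # -> (2, 2)
```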

Integrating prior knowledge with deep learning for optimized quality control in corneal images: A multicenter study.

Li FF, Li GX, Yu XX, Zhang ZH, Fu YN, Wu SQ, Wang Y, Xiao C, Ye YF, Hu M, Dai Q

pubmed · Jul 1 2025
Artificial intelligence (AI) models are effective for analyzing high-quality slit-lamp images but often face challenges in real-world clinical settings due to image variability. This study aims to develop and evaluate a hybrid AI-based image quality control system to classify slit-lamp images, improving diagnostic accuracy and efficiency, particularly in telemedicine applications. This was a cross-sectional study. The internal dataset comprised 2982 slit-lamp images from Zhejiang Eye Hospital. Two external datasets were included: 13,554 images from the Aier Guangming Eye Hospital (AGEH) and 9853 images from the First People's Hospital of Aksu District in Xinjiang (FPH of Aksu). We developed Hybrid Prior-Net (HP-Net), a novel network that combines a ResNet-based classification branch with a prior knowledge branch leveraging the Hough circle transform and frequency-domain blur detection. The two branches' features are concatenated channel-wise at the fully connected layer, enhancing representational power and improving the network's ability to classify eligible, misaligned, blurred, and underexposed corneal images. Model performance was evaluated using accuracy, precision, recall, specificity, and F1-score, and compared against other deep learning models. HP-Net outperformed all other models, achieving an accuracy of 99.03%, precision of 98.21%, recall of 95.18%, specificity of 99.36%, and an F1-score of 96.54% in image classification. HP-Net was also highly effective in filtering slit-lamp images from the two external datasets, AGEH and FPH of Aksu, with accuracies of 97.23% and 96.97%, respectively. These results underscore the superior feature extraction and classification capabilities of HP-Net across all evaluated metrics. Our AI-based image quality control system offers a robust and efficient solution for classifying corneal images, with significant implications for telemedicine applications. By incorporating slightly blurred but diagnostically usable images into training datasets, the system enhances the reliability and adaptability of AI tools for medical imaging quality control, paving the way for more accurate and efficient diagnostic workflows.
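The two hand-crafted cues behind the prior knowledge branch (a Hough circle transform and frequency-domain blur detection) can be sketched as below; the parameters and the specific blur metric are illustrative assumptions rather than the HP-Net values, and the image path is hypothetical.

```python
# Sketch: detect a roughly circular cornea/limbus and score blur from the
# high-frequency content of the image spectrum.
import cv2
import numpy as np

def corneal_circle(gray):
    """Return the most prominent circle (x, y, r) or None."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=40,
                               minRadius=gray.shape[0] // 8,
                               maxRadius=gray.shape[0] // 2)
    return None if circles is None else circles[0, 0]

def blur_score(gray, radius=30):
    """Share of spectral energy outside a low-frequency disc; lower = blurrier."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag = np.abs(f)
    cy, cx = np.array(mag.shape) // 2
    yy, xx = np.ogrid[:mag.shape[0], :mag.shape[1]]
    low = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mag[~low].sum() / (mag.sum() + 1e-8)

img = cv2.imread("slit_lamp.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical path
if img is not None:
    print(corneal_circle(img), blur_score(img))
```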

Brain Tumor Detection through Thermal Imaging and MobileNET

Roham Maiti, Debasmita Bhoumik

arxiv preprint · Jun 30 2025
The brain plays a crucial role in regulating body functions and cognitive processes, and brain tumors pose significant risks to human health. Precise and prompt detection is a key factor in proper treatment and better patient outcomes. Traditional methods for detecting brain tumors, which include biopsies, MRI, and CT scans, often face challenges due to their high costs and the need for specialized medical expertise. Recent developments in machine learning (ML) and deep learning (DL) have exhibited strong capabilities in automating the identification and categorization of brain tumors from medical images, especially MRI scans. However, these classical ML models have limitations, such as high computational demands, the need for large datasets, and long training times, which hinder their accessibility and efficiency. Our research uses the MobileNet model for efficient detection of these tumors. The novelty of this work lies in building an accurate tumor detection model that uses fewer computing resources and runs in less time, followed by efficient decision making through image processing techniques for accurate results. The suggested method attained an average accuracy of 98.5%.
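A minimal transfer-learning sketch in the spirit of the approach above, assuming a torchvision MobileNetV2 backbone and a binary tumour/no-tumour head; the paper's exact MobileNet variant, preprocessing and training schedule are not specified here.

```python
# Sketch: freeze a pretrained MobileNetV2 feature extractor and train a
# lightweight binary classification head on MRI slices.
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.features.parameters():   # keep the pretrained features fixed
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over a DataLoader of (image, label) batches follows.
```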

Clinician-Led Code-Free Deep Learning for Detecting Papilloedema and Pseudopapilloedema Using Optic Disc Imaging

Shenoy, R., Samra, G. S., Sekhri, R., Yoon, H.-J., Teli, S., DeSilva, I., Tu, Z., Maconachie, G. D., Thomas, M. G.

medrxiv preprint · Jun 26 2025
Importance: Differentiating pseudopapilloedema from papilloedema is challenging but critical for prompt diagnosis and to avoid unnecessary invasive procedures. Following a diagnosis of papilloedema, objectively grading severity is important for determining urgency of management and therapeutic response. Automated machine learning (AutoML) has emerged as a promising tool for diagnosis in medical imaging and may provide accessible opportunities for consistent and accurate diagnosis and severity grading of papilloedema.
Objective: This study evaluates the feasibility of AutoML models for distinguishing the presence and severity of papilloedema using near-infrared reflectance (NIR) images obtained from standard optical coherence tomography (OCT), comparing the performance of different AutoML platforms.
Design, setting and participants: A retrospective cohort study was conducted using data from University Hospitals of Leicester NHS Trust. The study involved 289 adult and paediatric patients (813 images) who underwent optic nerve head-centred OCT imaging between 2021 and 2024. The dataset included patients with normal optic discs (69 patients, 185 images), papilloedema (135 patients, 372 images), and optic disc drusen (ODD) (85 patients, 256 images). Three AutoML platforms (Amazon Rekognition, Medic Mind (MM) and Google Vertex) were evaluated for their ability to classify and grade papilloedema severity.
Main outcomes and measures: Two classification tasks were performed: (1) distinguishing papilloedema from normal discs and ODD; (2) grading papilloedema severity (mild/moderate vs. severe). Model performance was evaluated using area under the curve (AUC), precision, recall, F1 score, and confusion matrices for all six models.
Results: Amazon Rekognition outperformed the other platforms, achieving the highest AUC (0.90) and F1 score (0.81) in distinguishing papilloedema from normal/ODD. For papilloedema severity grading, Amazon Rekognition also performed best, with an AUC of 0.90 and F1 score of 0.79. Google Vertex and Medic Mind demonstrated good performance but had slightly lower accuracy and higher misclassification rates.
Conclusions and relevance: This evaluation of three widely available AutoML platforms using NIR images obtained from standard OCT shows promise in distinguishing and grading papilloedema. These models provide an accessible, scalable solution for clinical teams without coding expertise to develop intelligent diagnostic systems that recognise and characterise papilloedema. Further external validation and prospective testing are needed to confirm their clinical utility and applicability in diverse settings.
Key Points: Question: Can clinician-led, code-free deep learning models using automated machine learning (AutoML) accurately differentiate papilloedema from pseudopapilloedema using optic disc imaging? Findings: Three widely available AutoML platforms were used to develop models that successfully distinguish the presence and severity of papilloedema on optic disc imaging, with Amazon Rekognition demonstrating the highest performance. Meaning: AutoML may assist clinical teams, even those with limited coding expertise, in diagnosing papilloedema, potentially reducing the need for invasive investigations.
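The metrics reported above (AUC, precision, recall, F1 and confusion matrices) can be computed for any binary task with scikit-learn, as in the short sketch below; the labels and scores are made-up placeholders, not outputs from the study.

```python
# Sketch: binary-task metrics of the kind used to compare the AutoML models,
# computed from hypothetical labels and predicted probabilities.
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])               # 1 = papilloedema
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])
y_pred = (y_prob >= 0.5).astype(int)

print("AUC:      ", roc_auc_score(y_true, y_prob))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```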

Comparative Analysis of Automated vs. Expert-Designed Machine Learning Models in Age-Related Macular Degeneration Detection and Classification.

Durmaz Engin C, Beşenk U, Özizmirliler D, Selver MA

pubmed · Jun 25 2025
To compare the effectiveness of expert-designed machine learning models and code-free automated machine learning (AutoML) models in classifying optical coherence tomography (OCT) images for detecting age-related macular degeneration (AMD) and distinguishing between its dry and wet forms. Custom models were developed by an artificial intelligence expert using the EfficientNet V2 architecture, while AutoML models were created by an ophthalmologist utilizing LobeAI with transfer learning via ResNet-50 V2. Both models were designed to differentiate normal OCT images from AMD and to also distinguish between dry and wet AMD. The models were trained and tested using an 80:20 split, with each diagnostic group containing 500 OCT images. Performance metrics, including sensitivity, specificity, accuracy, and F1 scores, were calculated and compared. The expert-designed model achieved an overall accuracy of 99.67% for classifying all images, with F1 scores of 0.99 or higher across all binary class comparisons. In contrast, the AutoML model achieved an overall accuracy of 89.00%, with F1 scores ranging from 0.86 to 0.90 in binary comparisons. Notably lower recall was observed for dry AMD vs. normal (0.85) in the AutoML model, indicating challenges in correctly identifying dry AMD. While the AutoML models demonstrated acceptable performance in identifying and classifying AMD cases, the expert-designed models significantly outperformed them. The use of advanced neural network architectures and rigorous optimization in the expert-developed models underscores the continued necessity of expert involvement in the development of high-precision diagnostic tools for medical image classification.
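A rough sketch of the kind of setup described above, i.e. an 80:20 split over three OCT classes (normal, dry AMD, wet AMD) and an EfficientNetV2 backbone with a three-class head; the folder layout, EfficientNetV2 variant and hyperparameters are assumptions, not the study's configuration.

```python
# Sketch: stratified-by-folder OCT dataset, 80:20 random split, and an
# EfficientNetV2-S backbone re-headed for three diagnostic classes.
import torch
import torch.nn as nn
from torch.utils.data import random_split
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("oct_images/", transform=tfm)  # normal/ dry_amd/ wet_amd/ (hypothetical)

n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train],
                                   generator=torch.Generator().manual_seed(42))

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)
```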

Artificial Intelligence for Early Detection and Prognosis Prediction of Diabetic Retinopathy

Budi Susilo, Y. K., Yuliana, D., Mahadi, M., Abdul Rahman, S., Ariffin, A. E.

medrxiv preprint · Jun 20 2025
This review explores the transformative role of artificial intelligence (AI) in the early detection and prognosis prediction of diabetic retinopathy (DR), a leading cause of vision loss in diabetic patients. AI, particularly deep learning and convolutional neural networks (CNNs), has demonstrated remarkable accuracy in analyzing retinal images, identifying early-stage DR with high sensitivity and specificity. These advancements address critical challenges such as intergrader variability in manual screening and the limited availability of specialists, especially in underserved regions. The integration of AI with telemedicine has further enhanced accessibility, enabling remote screening through portable devices and smartphone-based imaging. Economically, AI-based systems reduce healthcare costs by optimizing resource allocation and minimizing unnecessary referrals. Key findings highlight the dominance of Medicine (819 documents) and Computer Science (613 documents) in research output, reflecting the interdisciplinary nature of this field. Geographically, China, the United States, and India lead in contributions, underscoring global efforts to combat DR. Despite these successes, challenges such as algorithmic bias, data privacy, and the need for explainable AI (XAI) remain. Future research should focus on multi-center validation, diverse AI methodologies, and clinician-friendly tools to ensure equitable adoption. By addressing these gaps, AI can revolutionize DR management, reducing the global burden of diabetes-related blindness through early intervention and scalable solutions.

An Open-Source Generalizable Deep Learning Framework for Automated Corneal Segmentation in Anterior Segment Optical Coherence Tomography Imaging

Kandakji, L., Liu, S., Balal, S., Moghul, I., Allan, B., Tuft, S., Gore, D., Pontikos, N.

medrxiv preprint · Jun 20 2025
Purpose: To develop a deep learning model, the Cornea nnU-Net Extractor (CUNEX), for full-thickness corneal segmentation of anterior segment optical coherence tomography (AS-OCT) images and evaluate its utility in artificial intelligence (AI) research.
Methods: We trained and evaluated CUNEX using nnU-Net on 600 AS-OCT images (CSO MS-39) from 300 patients: 100 normal, 100 keratoconus (KC), and 100 Fuchs endothelial corneal dystrophy (FECD) eyes. To assess generalizability, we externally validated CUNEX on 1,168 AS-OCT images from an infectious keratitis dataset acquired with a different device (Casia SS-1000). We benchmarked CUNEX against two recent models, CorneaNet and ScLNet. We then applied CUNEX to our dataset of 194,599 scans from 37,499 patients as preprocessing for a classification model, evaluating whether segmentation improves AI prediction of age, sex, and disease staging (KC and FECD).
Results: CUNEX achieved Dice similarity coefficient (DSC) and intersection over union (IoU) scores of 94-95% and 90-99%, respectively, across healthy, KC, and FECD eyes. This was similar to ScLNet (within 3%) but better than CorneaNet (8-35% lower). On external validation, CUNEX maintained high performance (DSC 83%; IoU 71%) while ScLNet (DSC 14%; IoU 8%) and CorneaNet (DSC 16%; IoU 9%) failed to generalize. Unexpectedly, segmentation minimally impacted classification accuracy except for sex prediction, where accuracy dropped from 81% to 68%, suggesting sex-related features may lie outside the cornea.
Conclusion: CUNEX delivers the first open-source generalizable corneal segmentation model using the latest framework, supporting its use in clinical analysis and AI workflows across diseases and imaging platforms. It is available at https://github.com/lkandakji/CUNEX.
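The Dice similarity coefficient and intersection-over-union reported above can be computed from binary masks as in the short NumPy sketch below; random masks stand in for CUNEX predictions and ground truth.

```python
# Sketch: DSC and IoU between a predicted and a ground-truth corneal mask.
import numpy as np

def dice(pred, gt, eps=1e-8):
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-8):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5   # placeholder predicted mask
gt = rng.random((256, 256)) > 0.5     # placeholder ground-truth mask
print(f"DSC={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```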

Towards Classifying Histopathological Microscope Images as Time Series Data

Sungrae Hong, Hyeongmin Park, Youngsin Ko, Sol Lee, Bryan Wong, Mun Yong Yi

arxiv preprint · Jun 19 2025
As the frontline data for cancer diagnosis, microscopic pathology images are fundamental for providing patients with rapid and accurate treatment. However, despite their practical value, the deep learning community has largely overlooked their usage. This paper proposes a novel approach to classifying microscopy images as time series data, addressing the unique challenges posed by their manual acquisition and weakly labeled nature. The proposed method fits image sequences of varying lengths to a fixed-length target by leveraging Dynamic Time-series Warping (DTW). Attention-based pooling is employed to predict the class of the case simultaneously. We demonstrate the effectiveness of our approach by comparing performance with various baselines and showcasing the benefits of using various inference strategies in achieving stable and reliable results. Ablation studies further validate the contribution of each component. Our approach contributes to medical image analysis by not only embracing microscopic images but also lifting them to a trustworthy level of performance.
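Below is a compact PyTorch sketch of attention-based pooling over a variable-length sequence of per-image embeddings, producing one case-level prediction as described above; the embedding size and class count are assumptions, and the DTW length-alignment step is omitted.

```python
# Sketch: score each frame embedding, softmax the scores into attention
# weights, pool the weighted embeddings, and classify the case.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim=512, num_classes=2):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, seq):                        # seq: (T, dim) embeddings for one case
        w = torch.softmax(self.score(seq), dim=0)  # (T, 1) attention weights over frames
        pooled = (w * seq).sum(dim=0)              # (dim,) weighted case embedding
        return self.head(pooled), w.squeeze(-1)

pool = AttentionPooling()
logits, weights = pool(torch.randn(17, 512))       # e.g. 17 microscope frames in one case
```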

Artificial intelligence for age-related macular degeneration diagnosis in Australia: A Novel Qualitative Interview Study.

Ly A, Herse S, Williams MA, Stapleton F

pubmed · Jun 14 2025
Artificial intelligence (AI) systems for age-related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI translation extended version of the non-adoption, abandonment, scale-up, spread and sustainability (NASSS) framework for healthcare technologies to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia. Semi-structured, online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio-recorded, transcribed and analysed using directed and summative content analysis. Technological features influencing implementation were most frequently discussed, followed by the context or wider system, value proposition, adopters, organisations, the condition and, finally, embedding and adaptation. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisational factors. Overall, stakeholders supported a portable, device-independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised. This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice. It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.