Recent technological advances in video capsule endoscopy: a comprehensive review.

Kim M, Jang HJ

PubMed · Sep 29 2025
Video capsule endoscopy (VCE) revolutionized gastrointestinal imaging by providing a noninvasive method for evaluating small bowel diseases. Recent technological innovations, including enhanced imaging systems, artificial intelligence (AI), and improved localization, have significantly improved VCE's diagnostic accuracy, efficiency, and clinical utility. This review aims to summarize and evaluate recent technological advances in VCE, focusing on system comparisons, image enhancement, localization technologies, and AI-assisted lesion detection.

Machine and Deep Learning applied to Medical Microwave Imaging: a Scoping Review from Reconstruction to Classification.

Silva T, Conceicao RC, Godinho DM

PubMed · Sep 25 2025
Microwave Imaging (MWI) is a promising modality due to its noninvasive nature and lower cost compared to other medical imaging techniques. These characteristics make it a potential alternative to traditional imaging techniques. It has various medical applications, particularly in breast and brain imaging. Machine Learning (ML) has also been increasingly used for medical applications. This paper provides a scoping review of the role of ML in MWI, focusing on two key areas: image reconstruction and classification. The reconstruction section discusses various ML algorithms used to enhance image quality and computational efficiency, highlighting methods such as Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs). The classification section delves into the application of ML for distinguishing between different tissue types, including applications in breast cancer detection and neurological disorder classification. By analyzing the latest studies and methodologies, this review aims to summarize the current state of ML-enhanced MWI and shed light on its potential for clinical applications.
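The review surveys methods rather than prescribing one, but the learned-reconstruction idea it highlights can be illustrated with a toy example: a small CNN mapping a grid of scattered-field measurements (real and imaginary channels) to a permittivity image. The architecture, shapes, and channel layout below are illustrative assumptions, not taken from any study in the review.

```python
import torch
import torch.nn as nn

class TinyMWIReconstructor(nn.Module):
    """Toy CNN mapping scattered-field measurements to a permittivity map,
    illustrating the learned-reconstruction idea in general terms."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # real/imag channels
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),              # permittivity image
        )

    def forward(self, x):
        return self.net(x)

meas = torch.randn(1, 2, 32, 32)             # simulated antenna-array data
print(TinyMWIReconstructor()(meas).shape)    # (1, 1, 32, 32)
```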

Active-Supervised Model for Intestinal Ulcers Segmentation Using Fuzzy Labeling.

Chen J, Lin Y, Saeed F, Ding Z, Diyan M, Li J, Wang Z

PubMed · Sep 25 2025
Inflammatory bowel disease (IBD) is a chronic inflammatory condition of the intestines with a rising global incidence. Colonoscopy remains the gold standard for IBD diagnosis, but traditional image-scoring methods are subjective and complex, impacting diagnostic accuracy and efficiency. To address these limitations, this paper investigates machine learning techniques for intestinal ulcer segmentation, focusing on multi-category ulcer segmentation to enhance IBD diagnosis. We identified two primary challenges in intestinal ulcer segmentation: 1) labeling noise, where inaccuracies in medical image annotation introduce ambiguity, hindering model training, and 2) performance variability across datasets, where models struggle to maintain high accuracy due to medical image diversity. To address these challenges, we propose an active ulcer segmentation algorithm based on fuzzy labeling. A collaborative training segmentation model is designed to utilize pixel-wise confidence extracted from fuzzy labels, distinguishing high- and low-confidence regions, and enhancing robustness to noisy labels through network cooperation. To mitigate performance disparities, we introduce a data adaptation strategy leveraging active learning. By selecting high-information samples based on uncertainty and diversity, the strategy enables incremental model training, improving adaptability. Extensive experiments on public and hospital datasets validate the proposed methods. Our collaborative training model and active learning strategy show significant advantages in handling noisy labels and enhancing model performance across datasets, paving the way for more precise and efficient IBD diagnosis.
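The abstract does not give the exact loss, but the core idea, weighting a per-pixel segmentation loss by confidence derived from fuzzy labels so that ambiguous regions contribute less, can be sketched in PyTorch; the threshold value and the soft-label format below are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_seg_loss(logits, fuzzy_labels, threshold=0.7):
    """Per-pixel cross-entropy weighted by annotation confidence.

    logits:       (B, C, H, W) raw network outputs
    fuzzy_labels: (B, C, H, W) soft label maps in [0, 1]; the max over
                  classes is treated as per-pixel confidence
    """
    conf, hard_labels = fuzzy_labels.max(dim=1)              # (B, H, W)
    ce = F.cross_entropy(logits, hard_labels, reduction="none")
    # High-confidence pixels keep full weight; ambiguous ones are damped.
    weights = torch.where(conf >= threshold, torch.ones_like(conf), conf)
    return (weights * ce).mean()

# Toy usage: 2 classes, 4x4 maps.
logits = torch.randn(1, 2, 4, 4)
soft = torch.softmax(torch.randn(1, 2, 4, 4), dim=1)
print(confidence_weighted_seg_loss(logits, soft))
```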

Artificial Intelligence-Led Whole Coronary Artery OCT Analysis; Validation and Identification of Drug Efficacy and Higher-Risk Plaques.

Jessney B, Chen X, Gu S, Huang Y, Goddard M, Brown A, Obaid D, Mahmoudi M, Garcia Garcia HM, Hoole SP, Räber L, Prati F, Schönlieb CB, Roberts M, Bennett M

PubMed · Sep 25 2025
Intracoronary optical coherence tomography (OCT) can identify changes following drug/device treatment and high-risk plaques, but analysis requires expert clinician or core laboratory interpretation, while artifacts and limited sampling markedly impair reproducibility. Assistive technologies such as artificial intelligence-based analysis may therefore aid both detailed OCT interpretation and patient management. We determined whether artificial intelligence-based OCT analysis (AutoOCT) can rapidly process, optimize, and analyze OCT images, and identify plaque composition changes that predict drug success/failure and high-risk plaques. AutoOCT deep learning artificial intelligence modules were designed to correct segmentation errors from poor-quality or artifact-containing OCT images, identify tissue/plaque composition, classify plaque types, measure multiple parameters including lumen area, lipid and calcium arcs, and fibrous cap thickness, and output segmented images and clinically useful parameters. Model development used 36,212 frames (127 whole pullbacks, 106 patients). Internal validation of tissue and plaque classification and measurements used ex vivo OCT pullbacks from autopsy arteries, while external validation for plaque stabilization and identifying high-risk plaques used core laboratory analysis of the IBIS-4 (Integrated Biomarkers and Imaging Study-4) high-intensity statin (83 patients) and CLIMA (Relationship Between Coronary Plaque Morphology of Left Anterior Descending Artery and Long-Term Clinical Outcome Study; 62 patients) studies, respectively. AutoOCT recovered images containing common artifacts, with measurement and tissue and plaque classification accuracy of 83% versus histology, equivalent to expert clinician readers. AutoOCT replicated core laboratory plaque composition changes after high-intensity statin therapy, including reduced lesion lipid arc (13.3° versus 12.5°) and increased minimum fibrous cap thickness (18.9 µm versus 24.4 µm). AutoOCT also identified high-risk plaque features leading to patient events, including minimal lumen area <3.5 mm², lipid arc >180°, and fibrous cap thickness <75 µm, similar to the CLIMA core laboratory. AutoOCT-based analysis of whole coronary artery OCT identifies tissue and plaque types and measures features correlating with plaque stabilization and high-risk plaques. Artificial intelligence-based OCT analysis may augment clinician or core laboratory analysis of intracoronary OCT images for trials of drug/device efficacy and for identifying high-risk lesions.
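The three high-risk features above are plain thresholds, so a rule that flags plaques from them can be written directly; the field names and the any-criterion combination rule below are assumptions, since the abstract does not state how AutoOCT combines the criteria.

```python
from dataclasses import dataclass

@dataclass
class PlaqueMetrics:
    min_lumen_area_mm2: float   # minimal lumen area (mm^2)
    lipid_arc_deg: float        # maximal lipid arc (degrees)
    min_fct_um: float           # minimal fibrous cap thickness (micrometers)

def is_high_risk(p: PlaqueMetrics) -> bool:
    """High-risk features as reported: MLA < 3.5 mm^2, lipid arc > 180 deg,
    fibrous cap thickness < 75 um. Any single criterion flags the plaque
    here; the study's exact combination rule is an assumption."""
    return (p.min_lumen_area_mm2 < 3.5
            or p.lipid_arc_deg > 180.0
            or p.min_fct_um < 75.0)

print(is_high_risk(PlaqueMetrics(3.1, 190.0, 60.0)))  # True
```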

Enhancing Instance Feature Representation: A Foundation Model-Based Multi-Instance Approach for Neonatal Retinal Screening.

Guo J, Wang K, Tan G, Li G, Zhang X, Chen J, Hu J, Liang Y, Jiang B

PubMed · Sep 22 2025
Automated analysis of neonatal fundus images presents a uniquely intricate challenge in medical imaging. Existing methodologies predominantly focus on diagnosing abnormalities from individual images, often leading to inaccuracies due to the diverse and subtle nature of neonatal retinal features. Consequently, clinical standards frequently mandate the acquisition of retinal images from multiple angles to ensure the detection of minute lesions. To accommodate this, we propose leveraging multiple fundus images captured from various regions of the retina to comprehensively screen for a wide range of neonatal ocular pathologies. We employ Multiple Instance Learning (MIL) for this task, and introduce a simple yet effective learnable structure on top of the existing MIL method, called Learnable Dense to Global (LD2G-MIL). Different from other methods that focus on instance-to-bag feature aggregation, the proposed method focuses on generating better instance-level representations that are co-optimized with downstream MIL targets in a learnable way. Additionally, it incorporates a bag prior-based similarity loss (BP loss) mechanism, leveraging prior knowledge to enhance performance in neonatal retinal screening. To validate the efficacy of our LD2G-MIL method, we compiled the Neonatal Fundus Images (NFI) dataset, an extensive collection comprising 115,621 retinal images from 8,886 neonatal clinical episodes. Empirical evaluations on this dataset demonstrate that our approach consistently outperforms state-of-the-art (SOTA) generic and specialized methods. The code and trained models are publicly available at https://github.com/CVIU-CSU/LD2G-MIL.
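As a generic point of reference, a bare-bones MIL head with a learnable instance-level projection followed by mean aggregation into a bag vector might look like the sketch below; the actual LD2G module is in the linked repository, and the mean aggregation and feature dimension here are assumptions.

```python
import torch
import torch.nn as nn

class MeanMILHead(nn.Module):
    """Projects per-image (instance) features, averages them into a
    bag-level vector, and classifies the bag (one clinical episode)."""
    def __init__(self, in_dim=768, hid_dim=256, n_classes=2):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, inst_feats):          # (n_instances, in_dim)
        h = self.proj(inst_feats)           # learnable instance-level step
        bag = h.mean(dim=0)                 # instance -> bag aggregation
        return self.cls(bag)

# Toy bag: 12 fundus images embedded by a frozen foundation model
# (768-d embeddings are an assumption).
feats = torch.randn(12, 768)
print(MeanMILHead()(feats).shape)           # torch.Size([2])
```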

CQH-MPN: A Classical-Quantum Hybrid Prototype Network with Fuzzy Proximity-Based Classification for Early Glaucoma Diagnosis.

Liu W, Shao H, Deng X, Jiang Y

PubMed · Sep 17 2025
Glaucoma is the second leading cause of blindness worldwide and a leading cause of irreversible vision loss, making early and accurate diagnosis essential. Although deep learning has revolutionized medical image analysis, its dependence on large-scale annotated datasets poses a significant barrier, especially in clinical scenarios with limited labeled data. To address this challenge, we propose a Classical-Quantum Hybrid Mean Prototype Network (CQH-MPN) tailored for few-shot glaucoma diagnosis. CQH-MPN integrates a quantum feature encoder, which exploits quantum superposition and entanglement for enhanced global representation learning, with a classical convolutional encoder to capture local structural features. These dual encodings are fused and projected into a shared embedding space, where mean prototype representations are computed for each class. We introduce a fuzzy proximity-based metric that extends traditional prototype distance measures by incorporating intra-class variability and inter-class ambiguity, thereby improving classification sensitivity under uncertainty. Our model is evaluated on two public retinal fundus image datasets, ACRIMA and ORIGA, under 1-shot, 3-shot, and 5-shot settings. Results show that CQH-MPN consistently outperforms other models, achieving an accuracy of 94.50% ± 1.04% on the ACRIMA dataset under the 1-shot setting. Moreover, the proposed method demonstrates significant performance improvements across different shot configurations on both datasets. By effectively bridging the representational power of quantum computing with classical deep learning, CQH-MPN demonstrates robust generalization in data-scarce environments. This work lays the foundation for quantum-augmented few-shot learning in medical imaging and offers a viable solution for real-world, low-resource diagnostic applications.
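The fuzzy proximity metric is the paper's contribution and is not specified in the abstract; a classical stand-in with the same shape, mean prototypes per class plus fuzzy-c-means-style memberships over prototype distances, can be sketched as follows (the fuzziness exponent m and the embedding size are illustrative).

```python
import torch

def mean_prototypes(support, labels, n_classes):
    """Class prototypes = mean embedding per class (few-shot support set)."""
    return torch.stack([support[labels == c].mean(0) for c in range(n_classes)])

def fuzzy_memberships(query, prototypes, m=2.0, eps=1e-8):
    """Fuzzy-c-means-style memberships: closer prototypes get higher
    membership, with m controlling boundary softness. This is a classical
    stand-in for the paper's fuzzy proximity metric, not its definition."""
    d2 = torch.cdist(query, prototypes).pow(2) + eps       # (Q, C)
    inv = d2.pow(-1.0 / (m - 1.0))
    return inv / inv.sum(dim=1, keepdim=True)

support = torch.randn(10, 64)              # 5-shot, 2 classes, 64-d embeddings
labels = torch.tensor([0] * 5 + [1] * 5)
protos = mean_prototypes(support, labels, 2)
print(fuzzy_memberships(torch.randn(3, 64), protos))       # rows sum to 1
```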

Enhancing Oral Health Diagnostics With Hyperspectral Imaging and Computer Vision: Clinical Dataset Study.

Römer P, Ponciano JJ, Kloster K, Siegberg F, Plaß B, Vinayahalingam S, Al-Nawas B, Kämmerer PW, Klauer T, Thiem D

PubMed · Sep 11 2025
Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide due to their late diagnosis and the complicated differentiation of oral tissues. The combination of endoscopic hyperspectral imaging (HSI) and deep learning (DL) models offers a promising approach to the demand for modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues. This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods. A total of 226 participants (166 women [73.5%], 60 men [26.5%], aged 24-87 years) were examined using an endoscopic HSI system, capturing spectral data in the range of 500 to 1000 nm. Oral structures in both RGB and HSI scans were annotated using RectLabel Pro (by Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset, with 30% held out for evaluation. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types. DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, particularly excelling in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and authenticity under realistic clinical conditions. The presented dataset addresses a key gap in oral health imaging by enabling the development and validation of robust DL algorithms for endoscopic HSI data. It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.
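Adapting a stock DeepLabv3/ResNet-50 to hyperspectral input mainly means replacing the 3-channel stem; a minimal torchvision sketch is below, with the band count and class count as assumptions (the study's exact preprocessing is not described in this abstract).

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

N_BANDS = 32      # assumed spectral bands after binning the 500-1000 nm range
N_CLASSES = 5     # assumed number of oral tissue classes

model = deeplabv3_resnet50(weights=None, num_classes=N_CLASSES)
# Swap the RGB stem for an N-band stem so the network accepts HSI cubes.
model.backbone.conv1 = nn.Conv2d(N_BANDS, 64, kernel_size=7,
                                 stride=2, padding=3, bias=False)

x = torch.randn(1, N_BANDS, 256, 256)      # one hyperspectral tile
print(model(x)["out"].shape)               # (1, N_CLASSES, 256, 256)
```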

RetiGen: Framework leveraging domain generalization and test-time adaptation for multi-view retinal diagnostics.

Zhang G, Chen Z, Huo J, do Rio JN, Komninos C, Liu Y, Sparks R, Ourselin S, Bergeles C, Jackson TL

PubMed · Sep 10 2025
Domain generalization techniques involve training a model on one set of domains and evaluating its performance on different, unseen domains. In contrast, test-time adaptation optimizes the model specifically for the target domain during inference. Both approaches improve diagnostic accuracy in medical imaging models; however, no research to date has leveraged the advantages of both in an end-to-end fashion. Our paper introduces RetiGen, a test-time optimization framework designed to be integrated with existing domain generalization approaches. With an emphasis on the ophthalmic imaging domain, RetiGen leverages unlabeled multi-view color fundus photographs, a critical imaging technology in retinal diagnostics. By utilizing information from multiple viewing angles, our approach significantly enhances the robustness and accuracy of machine learning models when applied across different domains. By integrating class balancing, test-time adaptation, and a multi-view optimization strategy, RetiGen effectively addresses the persistent issue of domain shift, which often hinders the performance of imaging models. Experimental results demonstrate that our method outperforms state-of-the-art techniques in both domain generalization and test-time optimization. Specifically, RetiGen improves generalization on the MFIDDR dataset, raising the AUC from 0.751 to 0.872 (a 0.121 improvement). Similarly, on the DRTiD dataset, the AUC increased from 0.794 to 0.879 (a 0.085 improvement). The code for RetiGen is publicly available at https://github.com/RViMLab/RetiGen.
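RetiGen's full pipeline is in the linked repository; a minimal sketch of the multi-view test-time idea, fusing predictions across views of one eye and minimizing the entropy of the fused prediction (Tent-style, and omitting RetiGen's class balancing), could look like this.

```python
import torch
import torch.nn as nn

def multiview_tta_step(model, views, optimizer):
    """One test-time adaptation step: average softmax predictions over the
    views of one eye, then minimize the entropy of the fused prediction.
    Updating all parameters is a simplification; Tent-style methods
    typically adapt only normalization parameters."""
    probs = torch.stack([model(v).softmax(dim=1) for v in views]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return probs.detach()

# Toy model and three simulated fundus views of the same eye.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
views = [torch.randn(1, 3, 64, 64) for _ in range(3)]
print(multiview_tta_step(model, views, opt))
```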

Adverse cardiovascular events in coronary Plaques not undeRgoing pErcutaneous coronary intervention evaluateD with optIcal Coherence Tomography. The PREDICT-AI risk model.

Bruno F, Immobile Molaro M, Sperti M, Bianchini F, Chu M, Cardaci C, Wańha W, Gasior P, Zecchino S, Pavani M, Vergallo R, Biscaglia S, Cerrato E, Secco GG, Mennuni M, Mancone M, De Filippo O, Mattesini A, Canova P, Boi A, Ugo F, Scarsini R, Costa F, Fabris E, Campo G, Wojakowski W, Morbiducci U, Deriu M, Tu S, Piccolo R, D'Ascenzo F, Chiastra C, Burzotta F

PubMed · Aug 26 2025
Most acute coronary syndromes (ACS) originate from coronary plaques that are angiographically mild and not flow limiting. These lesions, often characterised by thin-cap fibroatheroma, large lipid cores and macrophage infiltration, are termed 'vulnerable plaques' and are associated with a heightened risk of future major adverse cardiovascular events (MACE). However, current imaging modalities lack robust predictive power, and treatment strategies for such plaques remain controversial. The PREDICT-AI study aims to develop and externally validate a machine learning (ML)-based risk score that integrates optical coherence tomography (OCT) plaque features and patient-level clinical data to predict the natural history of non-flow-limiting coronary lesions not treated with percutaneous coronary intervention (PCI). This is a multicentre, prospective, observational study enrolling 500 patients with recent ACS who undergo comprehensive three-vessel OCT imaging. Lesions not treated with PCI will be characterised using artificial intelligence (AI)-based plaque analysis (OctPlus software), including quantification of fibrous cap thickness, lipid arc, macrophage presence and other microstructural features. A three-step ML pipeline will be used to derive and validate a risk score predicting MACE at follow-up. Outcomes will be adjudicated blinded to OCT findings. The primary endpoint is MACE (composite of cardiovascular death, myocardial infarction, urgent revascularisation or target vessel revascularisation). Event prediction will be assessed at both the patient level and plaque level. The PREDICT-AI study will generate a clinically applicable, AI-driven risk stratification tool based on high-resolution intracoronary imaging. By identifying high-risk, non-obstructive coronary plaques, this model may enhance personalised management strategies and support the transition towards precision medicine in coronary artery disease.

Weighted loss for imbalanced glaucoma detection: Insights from visual explanations.

Nugraha DJ, Yudistira N, Widodo AW

PubMed · Aug 17 2025
Glaucoma is a leading cause of irreversible vision loss in ophthalmology, primarily resulting from damage to the optic nerve. Early detection is crucial but remains challenging due to the inherent class imbalance in glaucoma fundus image datasets. This study addresses this limitation by applying a weighted loss function to Convolutional Neural Networks (CNNs), evaluated on the standardized SMDG-19 dataset, which integrates data from 19 publicly available sources. Key performance metrics including recall, F1-score, precision, accuracy, and AUC were analyzed, and interpretability was assessed using Grad-CAM. The results demonstrate that recall increased from 60.3% to 87.3%, a relative improvement of 44.75%, while the F1-score improved from 66.5% to 71.4% (+7.25%). Minor trade-offs were observed in precision, which declined from 74.5% to 69.6% (-6.53%), and in accuracy, which dropped from 84.2% to 80.7% (-4.10%). In contrast, AUC rose from 84.2% to 87.4%, a relative gain of 3.21%. Grad-CAM visualizations showed consistent focus on clinically relevant regions of the optic nerve head, underscoring the effectiveness of the weighted loss strategy in improving both the performance and interpretability of CNN-based glaucoma detection systems.
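The abstract does not state the exact weighting scheme; a common choice, inverse-frequency class weights passed to PyTorch's cross-entropy loss, can be sketched as follows (the class counts are hypothetical).

```python
import torch
import torch.nn as nn

# Hypothetical class counts from an imbalanced fundus dataset
# (index 0 = non-glaucoma, 1 = glaucoma).
counts = torch.tensor([8000.0, 1500.0])
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 2)                        # a toy batch of CNN outputs
targets = torch.tensor([0, 1, 1, 0])
print(criterion(logits, targets))                 # minority errors cost more
```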