
Residual self-attention vision transformer for detecting acquired vitelliform lesions and age-related macular drusen.

Powroznik P, Skublewska-Paszkowska M, Nowomiejska K, Gajda-Deryło B, Brinkmann M, Concilio M, Toro MD, Rejdak R

PubMed · May 16, 2025
Recognition of retinal diseases remains a challenging task. Many deep learning classification methods and their modifications have been developed for medical imaging. Recently, Vision Transformers (ViT) have been applied to the classification of retinal diseases with great success. In this study a novel method, the Residual Self-Attention Vision Transformer (RS-A ViT), was therefore proposed for automatic detection of acquired vitelliform lesions (AVL) and macular drusen, as well as for distinguishing them from healthy cases. A Residual Self-Attention module was applied in place of standard Self-Attention to improve the model's performance. The new tool outperforms classical deep learning methods such as EfficientNet, InceptionV3, ResNet50 and VGG16. The RS-A ViT method also exceeds the baseline ViT, reaching 96.62% accuracy. For the purpose of this research a new dataset was created that combines AVL data gathered from two research centers with drusen and normal cases from the OCT dataset. Augmentation methods were applied to enlarge the sample set. The Grad-CAM interpretability method indicated that the model analyses the appropriate areas of optical coherence tomography images when detecting retinal diseases. The results showed that the presented RS-A ViT model has great potential for classifying retinal disorders with high accuracy and may thus be applied as a supportive tool for ophthalmologists.
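
The abstract does not detail the Residual Self-Attention module; one plausible reading is a self-attention operator wrapped in an explicit residual path. A minimal PyTorch sketch under that assumption (the class name, projection layer and head count are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class ResidualSelfAttention(nn.Module):
    """Hypothetical residual self-attention block: the attention output is
    projected and added back onto the input token sequence."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + self.proj(attn_out)  # residual path around attention
```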

How early can we detect diabetic retinopathy? A narrative review of imaging tools for structural assessment of the retina.

Vaughan M, Denmead P, Tay N, Rajendram R, Michaelides M, Patterson E

PubMed · May 16, 2025
Despite current screening models, enhanced imaging modalities, and treatment regimens, diabetic retinopathy (DR) remains one of the leading causes of vision loss in working-age adults. DR can result in irreversible structural and functional retinal damage, leading to visual impairment and reduced quality of life. Given potentially irreversible photoreceptor damage, diagnosis and treatment at the earliest stages provide the best opportunity to avoid visual disturbances or retinopathy progression. Herein we review the current structural imaging methods used for DR assessment and their capability to detect DR in the first stages of disease. Imaging tools such as fundus photography, optical coherence tomography, fundus fluorescein angiography, optical coherence tomography angiography and adaptive optics-assisted imaging will be reviewed. Finally, we describe the future of DR screening programmes and the introduction of artificial intelligence as an innovative approach to detecting subtle changes in the diabetic retina. CLINICAL TRIAL REGISTRATION NUMBER: N/A.

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arXiv preprint · May 16, 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce Diff-Unfolding, a principled framework for learning posterior score functions of conditional diffusion models by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by 22.7%, while being both compact (47M parameters) and efficient (0.72 seconds per 256 × 256 image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
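
The identity underlying posterior score learning is Bayes' rule applied at the score level; the unrolled architecture described above separates the two terms into modular blocks. The decomposition below is the standard one, and the linear-Gaussian likelihood approximation is a common choice in the literature, not necessarily the paper's exact formulation:

```latex
% Posterior score decomposition (Bayes' rule at the score level):
\nabla_{x_t} \log p_t(x_t \mid y)
  = \underbrace{\nabla_{x_t} \log p_t(x_t)}_{\text{learned image prior}}
  + \underbrace{\nabla_{x_t} \log p_t(y \mid x_t)}_{\text{measurement term}}
% For a linear forward model y = A x + n with n \sim \mathcal{N}(0, \sigma^2 I),
% a common approximation of the measurement term is
%   \nabla_{x_t} \log p_t(y \mid x_t)
%     \approx \tfrac{1}{\sigma^2} A^\top \big( y - A \, \hat{x}_0(x_t) \big),
% where \hat{x}_0(x_t) is the model's denoised estimate at step t.
```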

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arXiv preprint · May 16, 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super-resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
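
A minimal PyTorch sketch of the shared-encoder/multi-decoder idea and the test-time fitting step, assuming a toy convolutional architecture and a caller-supplied forward operator; module names, channel counts and hyperparameters are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class GroupDIP(nn.Module):
    """Shared (transferable) encoder trained jointly with M disentangled decoders."""
    def __init__(self, num_decoders: int, ch: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared, transferable weights
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.decoders = nn.ModuleList([        # M per-measurement decoders
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(ch, 1, 3, padding=1))
            for _ in range(num_decoders)
        ])

    def forward(self, z: torch.Tensor, i: int) -> torch.Tensor:
        return self.decoders[i](self.encoder(z))

def test_time_fit(model, z, y, forward_op, steps=500, lr=1e-3):
    """Freeze the shared encoder; optimize one decoder for measurement consistency."""
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.decoders[0].parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward_op(model(z, 0)) - y) ** 2).mean()  # data-fit loss
        loss.backward()
        opt.step()
    return model(z, 0).detach()
```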

Deep learning model based on ultrasound images predicts BRAF V600E mutation in papillary thyroid carcinoma.

Yu Y, Zhao C, Guo R, Zhang Y, Li X, Liu N, Lu Y, Han X, Tang X, Mao R, Peng C, Yu J, Zhou J

PubMed · May 16, 2025
BRAF V600E mutation status detection facilitates prognosis prediction in papillary thyroid carcinoma (PTC). We developed a deep-learning model to determine BRAF V600E status in PTC. PTC cases from three centers were collected as the training set (1341 patients), validation set (148 patients), and external test set (135 patients). After testing the performance of the ResNeSt-50, Vision Transformer, and Swin Transformer V2 (SwinT) models, SwinT was chosen as the optimal backbone. An integrated BrafSwinT model was developed by combining the backbone with a radiomics feature branch and a clinical parameter branch. BrafSwinT demonstrated an AUC of 0.869 in the external test set, outperforming the original SwinT, Vision Transformer, and ResNeSt-50 models (AUC: 0.782-0.824; p value: 0.017-0.041). BrafSwinT showed promising results in determining BRAF V600E mutation status in PTC based on routinely acquired ultrasound images and basic clinical information, thus facilitating risk stratification.
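
A sketch of a three-branch design of this kind, assuming the timm library for a Swin V2 backbone; the class name, branch widths and fusion-by-concatenation are illustrative, since the abstract does not specify how the branches are combined:

```python
import torch
import torch.nn as nn
import timm  # assumed dependency for the Swin Transformer V2 backbone

class MultiBranchClassifier(nn.Module):
    """Image backbone + radiomics branch + clinical branch, fused before a
    binary head (hypothetical reading of the BrafSwinT design)."""
    def __init__(self, n_radiomics: int, n_clinical: int):
        super().__init__()
        self.backbone = timm.create_model("swinv2_tiny_window8_256",
                                          pretrained=False, num_classes=0)
        feat_dim = self.backbone.num_features
        self.radiomics = nn.Sequential(nn.Linear(n_radiomics, 64), nn.ReLU())
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(feat_dim + 64 + 16, 2)  # mutated vs wild-type

    def forward(self, image, rad_feats, clin_feats):
        z = torch.cat([self.backbone(image),
                       self.radiomics(rad_feats),
                       self.clinical(clin_feats)], dim=1)
        return self.head(z)
```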

Escarcitys: A framework for enhancing medical image classification performance in scarcity of trainable samples scenarios.

Wang T, Dai Q, Xiong W

PubMed · May 16, 2025
In the field of healthcare, the acquisition and annotation of medical images present significant challenges, resulting in a scarcity of trainable samples. This data limitation hinders the performance of deep learning models, creating bottlenecks in clinical applications. To address this issue, we construct a framework (EScarcityS) aimed at enhancing the success rate of disease diagnosis in scenarios with scarce trainable medical images. First, considering that Transformer-based deep learning networks rely on large amounts of trainable data, this study takes into account the unique characteristics of pathological regions. By extracting the feature representations of all particles in medical images at different granularities, a multi-granularity Transformer network (MGVit) is designed. This network leverages additional prior knowledge to assist the Transformer network during training, thereby reducing the data requirement to some extent. Next, the importance maps of particles at different granularities, generated by MGVit, are fused to construct disease probability maps corresponding to the images. Based on these maps, a disease probability map-guided diffusion generation model is designed to generate more realistic and interpretable synthetic data. Subsequently, authentic and synthetic data are mixed and used to retrain MGVit, aiming to enhance the accuracy of medical image classification when trainable medical images are scarce. Finally, we conducted detailed experiments on four real medical image datasets to validate the effectiveness of EScarcityS and its specific modules.
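
A minimal sketch of the final retraining step, under the assumption that authentic and synthetic images are simply pooled into one training set; the function name, dataset objects and hyperparameters are placeholders, and any classifier (MGVit in the paper) can stand in for `model`:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def retrain_with_synthetic(model, real_ds, synth_ds, epochs=10, lr=1e-4):
    """Mix authentic and synthetic datasets, then retrain the classifier."""
    loader = DataLoader(ConcatDataset([real_ds, synth_ds]),
                        batch_size=32, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```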

Artificial intelligence in dentistry: awareness among dentists and computer scientists.

Costa ED, Vieira MA, Ambrosano GMB, Gaêta-Araujo H, Carneiro JA, Zancan BAG, Scaranti A, Macedo AA, Tirapelli C

PubMed · May 16, 2025
For the clinical application of artificial intelligence (AI) in dentistry, collaboration with computer scientists is necessary. This study aims to evaluate the knowledge of dentists and computer scientists regarding the utilization of AI in dentistry, especially in dentomaxillofacial radiology. 610 participants (374 dentists and 236 computer scientists) took part in a survey about AI in dentistry and radiographic imaging. Response options consisted of a Likert agreement/disagreement scale. Descriptive analyses of agreement scores were performed using quartiles (minimum value, first quartile, median, third quartile, and maximum value). The non-parametric Mann-Whitney test was used to compare response scores between the two categories (α = 5%). Academic dentists had higher agreement scores for the questions: "knowing the applications of AI in dentistry", "dentists taking the lead in AI research", "AI education should be part of teaching", "AI can increase the price of dental services", "AI can lead to errors in radiographic diagnosis", "AI can negatively interfere with the choice of Radiology specialty", "AI can cause a reduction in the employment of radiologists", "patient data can be hacked using AI" (p < 0.05). Computer scientists had higher concordance scores for the questions "having knowledge in AI" and "AI's potential to speed up and improve radiographic diagnosis". Although dentists acknowledge the potential benefits of AI in dentistry, they remain skeptical about its use and consider it important to integrate the topic of AI into the dental education curriculum. On the other hand, computer scientists confirm technical expertise in AI and recognize its potential in dentomaxillofacial radiology.

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

PubMed · May 16, 2025
Artificial intelligence (AI) can be applied in multiple subspecialties of craniomaxillofacial (CMF) surgery. This article overviews AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks used in CMF applications. The article then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery. It highlights AI-driven advancements in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges in AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Fluid fluctuations assessed with artificial intelligence during the maintenance phase impact anti-vascular endothelial growth factor visual outcomes in a multicentre, routine clinical care national age-related macular degeneration database.

Martin-Pinardel R, Izquierdo-Serra J, Bernal-Morales C, De Zanet S, Garay-Aramburu G, Puzo M, Arruabarrena C, Sararols L, Abraldes M, Broc L, Escobar-Barranco JJ, Figueroa M, Zapata MA, Ruiz-Moreno JM, Parrado-Carrillo A, Moll-Udina A, Alforja S, Figueras-Roca M, Gómez-Baldó L, Ciller C, Apostolopoulos S, Mishchuk A, Casaroli-Marano RP, Zarranz-Ventura J

PubMed · May 16, 2025
To evaluate the impact of fluid volume fluctuations, quantified with artificial intelligence in optical coherence tomography scans during the maintenance phase, on visual outcomes at 12 and 24 months in a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. Demographics, visual acuity (VA) and number of injections were collected using the Fight Retinal Blindness tool. Intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), total fluid (TF) and central subfield thickness (CST) were quantified using the RetinAI Discovery tool. Fluctuations were defined as the SD of within-eye quantified values, and eyes were distributed according to SD quartiles for each biomarker. A total of 452 naïve nAMD eyes were included. Eyes with the highest (Q4) versus lowest (Q1) fluid fluctuations showed significantly worse VA change (months 3-12) in IRF -3.91 versus 3.50 letters, PED -4.66 versus 3.29, TF -2.07 versus 2.97 and CST -1.85 versus 2.96 (all p<0.05), but not for SRF 0.66 versus 0.93 (p=0.91). Similar VA outcomes were observed at month 24 for PED -8.41 versus 4.98 (p<0.05), TF -7.38 versus 1.89 (p=0.07) and CST -10.58 versus 3.60 (p<0.05). The median number of injections (months 3-24) was significantly higher in Q4 versus Q1 eyes in IRF 9 versus 8, SRF 10 versus 8 and TF 10 versus 8 (all p<0.05). This multicentre study reports a negative effect of fluid volume fluctuations in specific fluid compartments on VA outcomes during the maintenance phase, suggesting that anatomical and functional treatment response patterns may be fluid-specific.
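
The fluctuation metric as described is simple to reproduce: the per-eye standard deviation of a biomarker across maintenance-phase visits, followed by quartile grouping. A short pandas sketch (column names "eye_id" and "fluid_volume" are illustrative, not from the study):

```python
import pandas as pd

def fluctuation_quartiles(df: pd.DataFrame) -> pd.DataFrame:
    """Per-eye SD of a fluid biomarker across visits, binned into quartiles."""
    sd = df.groupby("eye_id")["fluid_volume"].std().rename("fluctuation")
    out = sd.to_frame()
    out["quartile"] = pd.qcut(out["fluctuation"], 4,
                              labels=["Q1", "Q2", "Q3", "Q4"])
    return out
```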

Deep learning MRI-based radiomic models for predicting recurrence in locally advanced nasopharyngeal carcinoma after neoadjuvant chemoradiotherapy: a multi-center study.

Hu C, Xu C, Chen J, Huang Y, Meng Q, Lin Z, Huang X, Chen L

PubMed · May 15, 2025
Local recurrence and distant metastasis are common manifestations of locoregionally advanced nasopharyngeal carcinoma (LA-NPC) after neoadjuvant chemoradiotherapy (NACT). The aim was to validate the clinical value of deep learning-based MRI radiomic models for predicting recurrence in LA-NPC patients. A total of 328 NPC patients from four hospitals were retrospectively included and randomly divided into training (n = 229) and validation (n = 99) cohorts. From the contrast-enhanced T1-weighted (T1WI + C) and T2-weighted (T2WI) sequences, 975 traditional radiomic features and 1000 deep radiomic features were extracted, respectively. Least absolute shrinkage and selection operator (LASSO) regression was applied for feature selection. Five machine learning classifiers were used to develop three models for LA-NPC recurrence prediction in the training cohort: Model I, traditional radiomic features; Model II, deep radiomic features combined with Model I; and Model III, Model II combined with clinical features. The predictive performance of these models was evaluated by receiver operating characteristic (ROC) curve analysis, area under the curve (AUC), accuracy, sensitivity and specificity in both cohorts. The clinical characteristics of the two cohorts showed no significant differences. Fifteen radiomic features and 6 deep radiomic features were selected from T1WI + C, and 9 radiomic features and 6 deep radiomic features from T2WI. For T2WI, Model II based on random forest (RF) (AUC = 0.87) performed best among all models in the validation cohort. The traditional radiomic model combined with deep radiomic features shows excellent predictive performance and could be used to assist clinicians in predicting treatment response for LA-NPC patients after NACT.
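
A minimal scikit-learn sketch of the LASSO-selection-plus-random-forest step described above, assuming a numeric feature matrix X (patients × features) and binary recurrence labels y; the hyperparameters are placeholders, as the abstract does not report them:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier

def lasso_then_rf(X: np.ndarray, y: np.ndarray):
    """Select features via cross-validated LASSO, then fit a random forest."""
    lasso = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)  # indices with nonzero LASSO weight
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X[:, selected], y)               # assumes at least one feature survives
    return selected, rf
```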