Identifying Pathogenesis of Acute Coronary Syndromes using Sequence Contrastive Learning in Coronary Angiography.

Ma X, Shibata Y, Kurihara O, Kobayashi N, Takano M, Kurihara T

PubMed · Sep 1 2025
Advances in intracoronary imaging have made it possible to distinguish different pathological mechanisms underlying acute coronary syndrome (ACS) in vivo. Accurate identification of these mechanisms is increasingly recognized as essential for enabling tailored therapeutic strategies. ACS pathogenesis is primarily classified into 2 major types: plaque rupture (PR) and plaque erosion (PE). Patients with PR are treated with intracoronary stenting, whereas those with PE may potentially be managed conservatively without stenting. The aim of this study was to develop neural networks capable of distinguishing PR from PE using coronary angiography (CAG) alone. A total of 842 videos from 278 ACS patients (PR: 172, PE: 106) were included. To ensure the reliability of the ground truth for PR/PE classification, the ACS pathogenesis of each patient was confirmed using optical coherence tomography (OCT). To enhance the learning of discriminative features across consecutive frames and improve PR/PE classification performance, we propose Sequence Contrastive Learning (SeqCon), which addresses limitations inherent in conventional contrastive learning approaches. In the experiments, the external test set consisted of 18 PR patients (46 videos) and 11 PE patients (30 videos). SeqCon achieved an accuracy of 82.8%, sensitivity of 88.9%, specificity of 72.3%, positive predictive value of 84.2%, and negative predictive value of 80.0% at the patient level. This is the first report to use contrastive learning to diagnose the underlying mechanism of ACS from CAG, demonstrating that it is feasible to distinguish PR from PE without intracoronary imaging modalities.
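
The abstract does not spell out SeqCon's exact objective; a minimal sketch of a sequence-level contrastive loss over consecutive angiography frames, in the spirit of InfoNCE, could look like the following. The function name, the positive-pair scheme (adjacent frames of the same video as positives), and the temperature are assumptions, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def sequence_contrastive_loss(frame_embs: torch.Tensor, temperature: float = 0.07):
    """Hypothetical sequence-level contrastive loss.

    frame_embs: (N, T, D) embeddings for N videos of T consecutive frames.
    Treats each frame's embedding and that of the next frame in the same
    video as a positive pair; all other next-frame embeddings in the batch
    act as negatives (an assumption about SeqCon's design).
    """
    n, t, d = frame_embs.shape
    z = F.normalize(frame_embs, dim=-1)
    anchors = z[:, :-1].reshape(-1, d)             # frames 0 .. T-2
    positives = z[:, 1:].reshape(-1, d)            # the frame after each anchor
    logits = anchors @ positives.T / temperature   # (N*(T-1), N*(T-1)) similarity
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)        # match each anchor to its own positive
```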

Application of deep learning for detection of nasal bone fracture on X-ray nasal bone lateral view.

Mortezaei T, Dalili Kajan Z, Mirroshandel SA, Mehrpour M, Shahidzadeh S

PubMed · Sep 1 2025
This study aimed to assess the efficacy of deep learning for the detection of nasal bone fracture on X-ray nasal bone lateral views. In this retrospective observational study, 2968 X-ray nasal bone lateral views of trauma patients were collected from a radiology centre and randomly divided into training, validation, and test sets. Preprocessing included noise reduction using a Gaussian filter and image resizing. Edge detection was performed using the Canny edge detector. Feature extraction was conducted using the gray-level co-occurrence matrix (GLCM), histogram of oriented gradients (HOG), and local binary pattern (LBP) techniques. Several deep learning models, namely a CNN, VGG16, VGG19, MobileNet, Xception, ResNet50V2, InceptionV3, and Swin Transformer, were employed to classify images into 2 classes, normal and fracture. Accuracy was highest for VGG16 and Swin Transformer (0.79), followed by ResNet50V2 and InceptionV3 (0.74), Xception (0.72), and MobileNet (0.71). The AUC was highest for VGG16 (0.86), followed by VGG19 (0.84), MobileNet and Xception (0.83), and Swin Transformer (0.79). The tested deep learning models were capable of detecting nasal bone fractures on X-ray nasal bone lateral views with high accuracy; VGG16 performed best overall.
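
A sketch of the described preprocessing and hand-crafted feature pipeline, assuming OpenCV and scikit-image; the kernel size, Canny thresholds, and GLCM/LBP parameters are illustrative assumptions rather than the study's settings:

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog, local_binary_pattern

def extract_features(path: str, size=(224, 224)) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)   # noise reduction with a Gaussian filter
    img = cv2.resize(img, size)              # image resizing
    edges = cv2.Canny(img, 100, 200)         # Canny edge map

    # GLCM texture statistics
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    # HOG descriptor and LBP histogram
    hog_feats = hog(img, orientations=9, pixels_per_cell=(16, 16),
                    cells_per_block=(2, 2))
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([glcm_feats, hog_feats, lbp_hist, edges.ravel() / 255.0])
```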

TransForSeg: A Multitask Stereo ViT for Joint Stereo Segmentation and 3D Force Estimation in Catheterization

Pedram Fekri, Mehrdad Zadeh, Javad Dargahi

arXiv preprint · Sep 1 2025
Recently, the emergence of multitask deep learning models has enhanced catheterization procedures by providing tactile and visual perception data through an end-to-end architecture. This information is derived from a segmentation and force estimation head, which localizes the catheter in X-ray images and estimates the applied pressure based on its deflection within the image. These stereo vision architectures incorporate a CNN-based encoder-decoder that captures the dependencies between X-ray images from two viewpoints, enabling simultaneous 3D force estimation and stereo segmentation of the catheter. With these tasks in mind, this work approaches the problem from a new perspective. We propose a novel encoder-decoder Vision Transformer model that processes two input X-ray images as separate sequences. Given sequences of X-ray patches from two perspectives, the transformer captures long-range dependencies without the need to gradually expand the receptive field for either image. The embeddings generated by both the encoder and decoder are fed into two shared segmentation heads, while a regression head employs the fused information from the decoder for 3D force estimation. The proposed model is a stereo Vision Transformer capable of simultaneously segmenting the catheter from two angles while estimating the generated forces at its tip in 3D. This model has undergone extensive experiments on synthetic X-ray images with various noise levels and has been compared against state-of-the-art pure segmentation models, vision-based catheter force estimation methods, and a multitask catheter segmentation and force estimation approach. It outperforms existing models, setting a new state-of-the-art in both catheter segmentation and force estimation.
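
A minimal PyTorch sketch of such a stereo encoder-decoder ViT; the patch size, depth, head design, and the assignment of one view to the encoder and the other to the decoder are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class StereoViT(nn.Module):
    """Sketch of an encoder-decoder stereo Vision Transformer."""
    def __init__(self, img=256, patch=16, dim=256, n_cls=1):
        super().__init__()
        n_tok = (img // patch) ** 2
        self.embed = nn.Conv2d(1, dim, patch, stride=patch)  # patchify each view
        self.pos = nn.Parameter(torch.zeros(1, n_tok, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), 4)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, 8, batch_first=True), 4)
        # Shared head predicting per-patch mask logits (reshape to image later)
        self.seg_head = nn.Linear(dim, patch * patch * n_cls)
        self.force_head = nn.Linear(dim, 3)  # 3D force at the catheter tip

    def forward(self, view_a, view_b):
        ta = self.embed(view_a).flatten(2).transpose(1, 2) + self.pos
        tb = self.embed(view_b).flatten(2).transpose(1, 2) + self.pos
        enc = self.encoder(ta)          # view A as the encoder sequence
        dec = self.decoder(tb, enc)     # view B cross-attends to view A
        seg_a, seg_b = self.seg_head(enc), self.seg_head(dec)
        force = self.force_head(dec.mean(dim=1))  # fused decoder embedding
        return seg_a, seg_b, force
```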

Can super resolution via deep learning improve classification accuracy in dental radiography?

Çelik B, Mikaeili M, Genç MZ, Çelik ME

PubMed · Sep 1 2025
Deep learning-driven super resolution (SR) aims to enhance the quality and resolution of images, offering potential benefits in dental imaging. Although extensive research has focused on deep learning-based dental classification tasks, the impact of applying SR techniques on classification remains underexplored. This study addresses this gap by evaluating and comparing the performance of deep learning classification models on dental images with and without SR enhancement. An open-source dental image dataset was used to investigate the impact of SR on image classification performance. SR was applied using 2 models with scaling ratios of 2 and 4, while classification was performed by 4 deep learning models. Performance was evaluated with well-accepted metrics: structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), accuracy, recall, precision, and F1 score. The effect of SR on classification performance was interpreted through 2 different approaches. The 2 SR models yielded average SSIM and PSNR values of 0.904 and 36.71 across the 2 scaling ratios. Average accuracy and F1 score for classifiers trained and tested on SR-generated images were 0.859 and 0.873. In the first comparison, accuracy increased in at least half of the cases (8 out of 16) across the different models and scaling ratios; in the second, SR showed significantly higher performance in almost all cases (12 out of 16). This study demonstrated that classification with SR-generated images significantly improved outcomes. For the first time, the classification performance of dental radiographs with resolution improved by SR has been investigated, with significant gains observed compared to the case without SR.
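
The SSIM and PSNR figures above can be reproduced for any image pair with scikit-image; a small helper, assuming 8-bit grayscale inputs of equal size (higher values mean the SR output is closer to the ground-truth high-resolution image):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_quality(original: np.ndarray, upscaled: np.ndarray):
    """Compare an SR-generated image against the ground-truth original."""
    psnr = peak_signal_noise_ratio(original, upscaled, data_range=255)
    ssim = structural_similarity(original, upscaled, data_range=255)
    return psnr, ssim
```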

Synthetic Orthopantomography Image Generation Using Generative Adversarial Networks for Data Augmentation.

Waqas M, Hasan S, Ghori AF, Alfaraj A, Faheemuddin M, Khurshid Z

PubMed · Sep 1 2025
To overcome the scarcity of annotated dental X-ray datasets, this study presents a novel pipeline for generating high-resolution synthetic orthopantomography (OPG) images using customized generative adversarial networks (GANs). A total of 4777 real OPG images were collected from clinical centres in Pakistan, Thailand, and the U.S., covering diverse anatomical features. Twelve GAN models were initially trained, with four top-performing variants selected for further training on both combined and region-specific datasets. Synthetic images were generated at 2048 × 1024 pixels, maintaining fine anatomical detail. The evaluation was conducted using (1) a YOLO-based object detection model trained on real OPGs to assess feature representation via mean average precision, and (2) expert dentist scoring for anatomical and diagnostic realism. All selected models produced realistic synthetic OPGs. The YOLO detector achieved strong performance on these images, indicating accurate structural representation. Expert evaluations confirmed high anatomical plausibility, with models M1 and M3 achieving over 50% of the reference scores assigned to real OPGs. The developed GAN-based pipeline enables the ethical and scalable creation of synthetic OPG images, suitable for augmenting datasets used in artificial intelligence-driven dental diagnostics. This method provides a practical solution to data limitations in dental artificial intelligence, supporting model development in privacy-sensitive or low-resource environments.
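
The paper's customized GAN architectures and 2048 × 1024 output resolution are not reproduced here; a toy adversarial training step on flattened images, with hypothetical generator and discriminator sizes, illustrates the general recipe the pipeline builds on:

```python
import torch
import torch.nn as nn

# Hypothetical generator/discriminator; real OPG GANs would be convolutional.
G = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def gan_step(real: torch.Tensor):
    """One adversarial update on a batch of real images, flattened to vectors."""
    z = torch.randn(real.size(0), 128)
    fake = G(z)
    # Discriminator: push real toward 1, generated toward 0
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator into scoring fakes as real
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```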

CXR-MultiTaskNet a unified deep learning framework for joint disease localization and classification in chest radiographs.

Reddy KD, Patil A

PubMed · Aug 31 2025
Chest X-ray (CXR) analysis poses a challenging problem in automated medical diagnosis, where complex visual patterns of thoracic diseases must be precisely identified through multi-label classification and lesion localization. Current approaches typically treat classification and localization in isolation, resulting in piecemeal systems that do not exploit shared representations, offer limited clinical interpretability, and handle multi-label disease poorly. Although multi-task learning frameworks such as DeepChest and CLN aim to close this gap, they suffer from task interference and poor explainability, which limits their practical application in real-world clinical workflows. To address these limitations, we present a unified multi-task deep learning framework, CXR-MultiTaskNet, for simultaneously classifying thoracic diseases and localizing lesions in chest X-rays. Our framework comprises a ResNet50 feature extractor, two task-specific heads for multi-task learning, and a Grad-CAM-based explainability module that provides accurate predictions and enhances clinical explainability. We formulate a joint loss that balances the classification and localization objectives, weighting the localization term more heavily to cope with extreme class imbalance and the varying detectability of different disease manifestations; the objective was further tuned during training to give higher significance to smaller lesions. A dual-attention-based hierarchical feature extraction approach addresses inherent weaknesses of convolutional neural networks (CNNs) in this setting, and visual attention maps make the detection steps traceable, rendering the pipeline more interpretable than a traditional CNN-embedding model. The framework produces both disease-level and pixel-level predictions, enabling explainable, comprehensive analysis of each image and localization of every detected abnormality. Experimental evaluations on a benchmark chest X-ray dataset demonstrate a macro F1-score of 0.965 (micro F1-score 0.968) for disease classification, a mean IoU of 0.851 for disease localization, and a lesion-localization score of 0.927 at an IoU threshold of 0.5, consistently outperforming state-of-the-art single-task and multi-task baselines. Keywords: model interpretability, chest X-ray disease detection, lesion localization, weakly supervised transfer learning. The presented framework provides an integrated approach to chest X-ray analysis that is clinically useful, interpretable, and scalable, supporting efficient diagnostic pathways and enhanced clinical decision-making, and can serve as a foundation for next-generation explainable AI in radiology.
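
A sketch of the shared-backbone design described above, with a ResNet50 feature extractor feeding a classification head and a per-class localization head; the head shapes, number of disease classes, and loss weights are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CXRMultiTask(nn.Module):
    """Shared-backbone multi-task model in the spirit of CXR-MultiTaskNet."""
    def __init__(self, n_diseases=14):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop avgpool and fc to keep the (B, 2048, H/32, W/32) feature map
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(2048, n_diseases))
        self.loc_head = nn.Conv2d(2048, n_diseases, kernel_size=1)  # per-class heatmaps

    def forward(self, x):
        f = self.features(x)
        return self.cls_head(f), self.loc_head(f)

def joint_loss(cls_logits, heatmaps, labels, masks, w_cls=1.0, w_loc=2.0):
    # Weighted sum of the two task losses; the heavier localization weight
    # mimics the paper's emphasis on small lesions (exact scheme not given).
    l_cls = nn.functional.binary_cross_entropy_with_logits(cls_logits, labels.float())
    l_loc = nn.functional.binary_cross_entropy_with_logits(heatmaps, masks.float())
    return w_cls * l_cls + w_loc * l_loc
```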

Automated system of analysis to quantify pediatric hip morphology.

Gartland CN, Healy J, Lynham RS, Nowlan NC, Green C, Redmond SJ

PubMed · Aug 28 2025
Developmental dysplasia of the hip (DDH), a developmental deformity with an incidence of 0.1-3.4%, lacks an objective and reliable definition and assessment metric by which to conduct timely diagnosis. This work addresses this challenge by developing a system of analysis to accurately detect 22 key anatomical landmarks in anteroposterior pelvic radiographs of the juvenile hip, from which a range of novel salient morphological measures can be derived. A coarse-to-fine approach was implemented, comparing six variations of the U-Net deep neural network architecture for the coarse model and four variations for the fine model; variations differed in the data augmentation applied, image input size, network attention gates, and loss function design. The best-performing combination achieved a root-mean-square positional error in landmark detection of 3.79 mm, with a bias and precision of 0.03 ± 17.6 mm in the x-direction and 1.76 ± 22.5 mm in the y-direction in the image frame of reference. Average errors for each morphological metric are in line with the performance of clinical experts. Future work will use this system to perform a population analysis to accurately characterize hip joint morphology and develop an objective and reliable assessment metric for DDH.
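
Assuming the landmarks are decoded from per-landmark heatmaps, as is typical for U-Net landmark detectors, the decoding and RMSE evaluation might look like this (the heatmap representation itself is an assumption about the method):

```python
import numpy as np

def landmarks_from_heatmaps(heatmaps: np.ndarray, px_to_mm: float) -> np.ndarray:
    """Decode (22, H, W) heatmaps into (22, 2) x/y landmark coordinates
    via per-channel argmax, then convert pixels to millimetres."""
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    xy = np.stack([flat % w, flat // w], axis=1).astype(float)  # (x, y) per landmark
    return xy * px_to_mm

def landmark_rmse(pred_mm: np.ndarray, true_mm: np.ndarray) -> float:
    """Root-mean-square positional error across all landmarks."""
    return float(np.sqrt(np.mean(np.sum((pred_mm - true_mm) ** 2, axis=1))))
```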

Deep Active Learning for Lung Disease Severity Classification from Chest X-rays: Learning with Less Data in the Presence of Class Imbalance

Roy M. Gabriel, Mohammadreza Zandehshahvar, Marly van Assen, Nattakorn Kittisut, Kyle Peters, Carlo N. De Cecco, Ali Adibi

arXiv preprint · Aug 28 2025
To reduce the amount of required labeled data for lung disease severity classification from chest X-rays (CXRs) under class imbalance, this study applied deep active learning with a Bayesian Neural Network (BNN) approximation and weighted loss function. This retrospective study collected 2,319 CXRs from 963 patients (mean age, 59.2 $\pm$ 16.6 years; 481 female) at Emory Healthcare affiliated hospitals between January and November 2020. All patients had clinically confirmed COVID-19. Each CXR was independently labeled by 3 to 6 board-certified radiologists as normal, moderate, or severe. A deep neural network with Monte Carlo Dropout was trained using active learning to classify disease severity. Various acquisition functions were used to iteratively select the most informative samples from an unlabeled pool. Performance was evaluated using accuracy, area under the receiver operating characteristic curve (AU ROC), and area under the precision-recall curve (AU PRC). Training time and acquisition time were recorded. Statistical analysis included descriptive metrics and performance comparisons across acquisition strategies. Entropy Sampling achieved 93.7% accuracy (AU ROC, 0.91) in binary classification (normal vs. diseased) using 15.4% of the training data. In the multi-class setting, Mean STD sampling achieved 70.3% accuracy (AU ROC, 0.86) using 23.1% of the labeled data. These methods outperformed more complex and computationally expensive acquisition functions and significantly reduced labeling needs. Deep active learning with BNN approximation and weighted loss effectively reduces labeled data requirements while addressing class imbalance, maintaining or exceeding diagnostic performance.
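
A sketch of the entropy-based acquisition step with MC Dropout; the loader is assumed to yield (index, image) batches, and the number of stochastic passes and batch size are illustrative:

```python
import torch

@torch.no_grad()
def entropy_acquisition(model, pool_loader, n_passes=20, k=100):
    """Rank unlabeled CXRs by predictive entropy under MC Dropout.

    Dropout stays active at inference (model.train()) so that n_passes
    stochastic forward passes approximate the BNN posterior predictive.
    """
    model.train()  # keep dropout layers on
    scores, indices = [], []
    for idx, x in pool_loader:  # assumed to yield (index_tensor, image_batch)
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_passes)])
        mean_p = probs.mean(dim=0)                                  # (B, C)
        entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)
        scores.append(entropy); indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    return indices[scores.topk(k).indices]  # the k most informative samples
```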

Dual-model approach for accurate chest disease detection using GViT and swin transformer V2.

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

PubMed · Aug 28 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large, annotated datasets containing disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer due to overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-scores, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via Dice coefficient, boundary F1-score, and Hausdorff distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists decrease their mental workload while improving diagnostic precision in healthcare systems facing resource constraints. The results show strong promise for transformer-based architectures in medical imaging; future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
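
The reported IoU and Dice figures come from comparing predicted segmentation masks against ground truth; a minimal implementation of both metrics for binary masks (the edge-case handling for empty masks is a conventional choice, not from the paper):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray):
    """Overlap metrics for binary lung/lesion masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0      # both empty counts as perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```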

Automated segmentation of soft X-ray tomography: native cellular structure with sub-micron resolution at high throughput for whole-cell quantitative imaging in yeast.

Chen J, Mirvis M, Ekman A, Vanslembrouck B, Gros ML, Larabell C, Marshall WF

PubMed · Aug 28 2025
Soft X-ray tomography (SXT) is an invaluable tool for quantitatively analyzing cellular structures at sub-optical isotropic resolution. However, it has traditionally depended on manual segmentation, limiting its scalability for large datasets. Here, we leverage a deep learning-based auto-segmentation pipeline to segment and label cellular structures in hundreds of cells across three Saccharomyces cerevisiae strains. This task-based pipeline employs manual iterative refinement to improve segmentation accuracy for key structures, including the cell body, nucleus, vacuole, and lipid droplets, enabling high-throughput and precise phenotypic analysis. Using this approach, we quantitatively compared the 3D whole-cell morphometric characteristics of wild-type, VPH1-GFP, and vac14 strains, uncovering detailed strain-specific cell and organelle size and shape variations. We show the utility of SXT data for precise 3D curvature analysis of entire organelles and cells and detection of fine morphological features using surface meshes. Our approach facilitates comparative analyses with high spatial precision and statistical throughput, uncovering subtle morphological features at the single-cell and population level. This workflow significantly enhances our ability to characterize cell anatomy and supports scalable studies on the mesoscale, with applications in investigating cellular architecture, organelle biology, and genetic research across diverse biological contexts.
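
The morphometrics described here, organelle size and shape from 3D masks with surface meshes, can be sketched with scikit-image's marching cubes; the isotropic voxel size and the sphericity definition are standard assumptions, not the paper's exact analysis:

```python
import numpy as np
from skimage import measure

def organelle_morphometrics(mask: np.ndarray, voxel_um: float) -> dict:
    """Volume, surface area, and sphericity from a 3D binary organelle mask,
    assuming isotropic voxels with edge length voxel_um (in micrometres)."""
    volume = mask.sum() * voxel_um ** 3
    # Mesh the mask surface; spacing puts vertices in physical units
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=(voxel_um,) * 3)
    area = measure.mesh_surface_area(verts, faces)
    # Sphericity: surface area of an equal-volume sphere over the actual area
    sphericity = (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / area
    return {"volume_um3": volume, "surface_um2": area, "sphericity": sphericity}
```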