Page 131 of 4003995 results

Artificial intelligence based fully automatic 3D paranasal sinus segmentation.

Kaygısız Yiğit M, Pınarbaşı A, Etöz M, Duman ŞB, Bayrakdar İŞ

PubMed · Jul 25 2025
Precise 3D segmentation of paranasal sinuses is essential for accurate diagnosis and treatment. This study aimed to develop a fully automated segmentation algorithm for the paranasal sinuses using the nnU-Net v2 architecture. The nnU-Net v2-based segmentation algorithm was developed using Python 3.6.1 and the PyTorch library, and its performance was evaluated on a dataset of 97 cone-beam computed tomography (CBCT) scans. Ground truth annotations were manually generated by expert radiologists using the 3D Slicer software, employing a polygonal labeling technique across sagittal, coronal, and axial planes. Model performance was assessed using several quantitative metrics, including accuracy, Dice Coefficient (DC), sensitivity, precision, Jaccard Index, Area Under the Curve (AUC), and 95% Hausdorff Distance (95% HD). The nnU-Net v2-based algorithm demonstrated high segmentation performance across all paranasal sinuses. Dice Coefficient (DC) values were 0.94 for the frontal, 0.95 for the sphenoid, 0.97 for the maxillary, and 0.88 for the ethmoid sinuses. Accuracy scores exceeded 99% for all sinuses. The 95% Hausdorff Distance (95% HD) values were 0.51 mm for both the frontal and maxillary sinuses, 0.85 mm for the sphenoid sinus, and 1.17 mm for the ethmoid sinus. Jaccard indices were 0.90, 0.91, 0.94, and 0.80, respectively. This study highlights the high accuracy and precision of the nnU-Net v2-based CNN model in the fully automated segmentation of paranasal sinuses from CBCT images. The results suggest that the proposed model can significantly contribute to clinical decision-making processes, facilitating diagnostic and therapeutic procedures.
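The overlap metrics reported above are standard; as a rough illustration, a minimal NumPy sketch of the Dice coefficient and Jaccard index on toy 2D binary masks (the real evaluation runs on 3D CBCT segmentations):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Overlap between binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard_index(pred, gt):
    """Intersection over union of the two masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy masks standing in for sinus segmentations: two 4x4 squares
# offset by one column, so they share a 4x3 region.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True
print(dice_coefficient(pred, gt))  # 0.75
print(jaccard_index(pred, gt))     # 0.6
```

Dice weights the intersection twice, so it is always at least as large as the Jaccard index on the same pair of masks, consistent with the paired values reported above (e.g., 0.94 vs. 0.90 for the frontal sinus).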

Quantifying physiological variability and improving reproducibility in 4D-flow MRI cerebrovascular measurements with self-supervised deep learning.

Jolicoeur BW, Yardim ZS, Roberts GS, Rivera-Rivera LA, Eisenmenger LB, Johnson KM

PubMed · Jul 25 2025
To assess the efficacy of self-supervised deep learning (DL) denoising in reducing measurement variability in 4D-Flow MRI, and to clarify the contributions of physiological variation to cerebrovascular hemodynamics. A self-supervised DL denoising framework was trained on 3D radially sampled 4D-Flow MRI data. The model was evaluated in a prospective test-retest imaging study in which 10 participants underwent multiple 4D-Flow MRI scans. This included back-to-back scans and a single scan interleaved acquisition designed to isolate noise from physiological variations. The effectiveness of DL denoising was assessed by comparing pixelwise velocity and hemodynamic metrics before and after denoising. DL denoising significantly enhanced the reproducibility of 4D-Flow MRI measurements, reducing the 95% confidence interval of cardiac-resolved velocity from 215 to 142 mm/s in back-to-back scans and from 158 to 96 mm/s in interleaved scans, after adjusting for physiological variation. In derived parameters, DL denoising did not significantly improve integrated measures, such as flow rates, but did significantly improve noise sensitive measures, such as pulsatility index. Physiologic variation in back-to-back time-resolved scans contributed 26.37% ± 0.08% and 32.42% ± 0.05% of standard error before and after DL. Self-supervised DL denoising enhances the quantitative repeatability of 4D-Flow MRI by reducing technical noise; however, variations from physiology and post-processing are not removed. These findings underscore the importance of accounting for both technical and physiological variability in neurovascular flow imaging, particularly for studies aiming to establish biomarkers for neurodegenerative diseases with vascular contributions.
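The test-retest variability quantified above is in the spirit of a limits-of-agreement analysis; a toy sketch (entirely synthetic velocities, not the study's data) of how a 95% interval on pixelwise velocity differences between two scans could be computed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic pixelwise velocities (mm/s); the repeat scan adds
# independent technical noise on top of the first measurement.
velocity_scan1 = 300 + rng.normal(0, 60, size=1000)
velocity_scan2 = velocity_scan1 + rng.normal(0, 60, size=1000)

diff = velocity_scan2 - velocity_scan1
half_width_95 = 1.96 * diff.std(ddof=1)  # 95% interval half-width
print(round(half_width_95, 1))
```

Denoising that shrinks the technical-noise term narrows this interval, which is the mechanism behind the reported reduction from 215 to 142 mm/s; physiological variation between scans remains in `diff` and is not removed.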

Machine learning approach to DNA methylation and neuroimaging signatures as biomarkers for psychological resilience in young adults.

Lin SH, Chen YH, Yang MH, Lin CW, Lu AK, Yang CT, Chang YH, Chen BY, Hsieh S, Lin SH

PubMed · Jul 24 2025
Psychological resilience is influenced by both psychological and biological factors. However, the potential of using DNA methylation (DNAm) probes and brain imaging variables to predict psychological resilience remains unclear. This study aimed to investigate DNAm, structural magnetic resonance imaging (sMRI), and diffusion tensor imaging (DTI) as biomarkers for psychological resilience. Additionally, we evaluated the ability of epigenetic and imaging markers to distinguish between individuals with low and high resilience using machine learning algorithms. A total of 130 young adults assessed with the Connor-Davidson Resilience Scale (CD-RISC) were divided into high and low psychological resilience groups. We applied two feature selection algorithms, Boruta and variable selection using random forest (varSelRF), to identify important variables from nine DNAm probes, sixty-eight gray matter volume (GMV) measures from sMRI, and fifty-four DTI diffusion indices. We then constructed machine learning models to identify individuals with low resilience using the selected variables. Feature selection yielded thirteen variables (five DNAm, five GMV, and three DTI diffusion indices). Using these variables with 10-fold cross-validation, four machine learning models identified low resilience (AUC = 0.77-0.82). In interaction analysis, cg03013609 showed stronger interactions with cg17682313 and with the rostral middle frontal gyrus of the right hemisphere for psychological resilience. Our findings support the concept that DNAm, sMRI, and DTI signatures can identify individuals with low psychological resilience. These combined epigenetic and imaging markers demonstrated high discriminative ability for low psychological resilience in machine learning models.
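The AUCs quoted above can be read via the Mann-Whitney formulation: the probability that a randomly chosen low-resilience case is scored higher than a randomly chosen high-resilience one. A minimal sketch with made-up scores and labels:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores; 1 = low-resilience class.
labels = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
print(roc_auc(scores, labels))  # 8 of 9 pairs correct ~= 0.889
```

An AUC of 0.77-0.82, as reported, means roughly four out of five such pairs are ranked correctly by the combined epigenetic-imaging models.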

RealDeal: Enhancing Realism and Details in Brain Image Generation via Image-to-Image Diffusion Models

Shen Zhu, Yinzhu Jin, Tyler Spears, Ifrah Zawar, P. Thomas Fletcher

arXiv preprint · Jul 24 2025
We propose image-to-image diffusion models designed to enhance the realism and details of generated brain images by introducing sharp edges, fine textures, subtle anatomical features, and imaging noise. Generative models have been widely adopted in the biomedical domain, especially in image generation applications. Latent diffusion models (LDMs) achieve state-of-the-art results in generating brain MRIs. However, due to latent compression, images generated by these models are overly smooth, lacking the fine anatomical structures and scan acquisition noise typically seen in real images. This work formulates the realism-enhancing and detail-adding process as an image-to-image diffusion model that refines the quality of LDM-generated images. We employ commonly used metrics such as FID and LPIPS for image realism assessment. Furthermore, we introduce new metrics to demonstrate the realism of images generated by RealDeal in terms of image noise distribution, sharpness, and texture.

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation.
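The 0.86% trainable-parameter figure reflects LoRA's low-rank factorization: instead of updating a frozen d_out x d_in weight matrix, only two factors B (d_out x r) and A (r x d_in) are trained. A back-of-envelope sketch with made-up layer sizes (not SAM's actual dimensions):

```python
# LoRA trains r*(d_in + d_out) parameters per adapted linear layer
# instead of the full d_in*d_out weight matrix.
def lora_trainable_fraction(d_in, d_out, rank):
    full = d_in * d_out              # frozen base weights
    lora = rank * (d_in + d_out)     # trainable low-rank factors
    return lora / (full + lora)

# Hypothetical 1024x1024 attention projection, rank 4.
print(round(100 * lora_trainable_fraction(1024, 1024, 4), 2))  # 0.78 (%)
```

With small ranks the trainable share stays well under 1% per layer, which is how sub-1% totals like the 0.86% reported above arise.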

DEEP Q-NAS: A new algorithm based on neural architecture search and reinforcement learning for brain tumor identification from MRI.

Hasan MS, Komol MMR, Fahim F, Islam J, Pervin T, Hasan MM

PubMed · Jul 24 2025
A significant obstacle in brain tumor treatment planning is determining the tumor's actual size. Magnetic resonance imaging (MRI) is a first-line modality for brain tumor diagnosis. Manually delineating a brain tumor from 3D MRI volumes takes considerable effort and depends heavily on the operator's experience. Deep learning and computer-aided tumor detection methods have greatly advanced machine learning in this area. This study investigates the architecture of object detectors, focusing on search efficiency. More specifically, our goal is to effectively explore the Feature Pyramid Network (FPN) and prediction head of a straightforward anchor-free object detector called DEEP Q-NAS. The study utilized the BraTS 2021 dataset, which includes multi-parametric magnetic resonance imaging (mpMRI) scans. The architecture we found outperforms the latest object detection models (such as Fast R-CNN, YOLOv7, and YOLOv8) by 2.2 to 7 points of average precision (AP) on the MS COCO 2017 dataset, with a similar level of complexity and lower memory usage, which shows how effective the proposed NAS is for object detection. The DEEP Q-NAS with ResNeXt-152 model demonstrates the highest detection accuracy, achieving a rate of 99%.
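The paper's exact search procedure is not detailed in this abstract, but reinforcement-learning NAS is often sketched as Q-learning over discrete architecture choices; a toy epsilon-greedy example with a made-up reward standing in for validation AP:

```python
import random

random.seed(0)

# Hypothetical search space: candidate FPN depths. In real NAS the
# reward would come from training and evaluating each architecture;
# here a fixed lookup plus noise stands in for that evaluation.
choices = [2, 3, 4, 5]
true_reward = {2: 0.60, 3: 0.72, 4: 0.78, 5: 0.74}

Q = {c: 0.0 for c in choices}
alpha, eps = 0.3, 0.2  # learning rate, exploration probability

for step in range(500):
    # Epsilon-greedy: explore a random architecture, else exploit.
    a = random.choice(choices) if random.random() < eps else max(Q, key=Q.get)
    r = true_reward[a] + random.gauss(0, 0.02)  # noisy evaluation
    Q[a] += alpha * (r - Q[a])                  # Q-value update

best = max(Q, key=Q.get)
```

After enough evaluations the Q-values approach the mean rewards, and the greedy choice concentrates on the strongest architecture, which is the basic mechanism an RL-based search exploits.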

Contrast-Enhanced CT-Based Deep Learning and Habitat Radiomics for Analysing the Predictive Capability for Oral Squamous Cell Carcinoma.

Liu Q, Liang Z, Qi X, Yang S, Fu B, Dong H

PubMed · Jul 24 2025
This study aims to explore a novel approach for predicting cervical lymph node metastasis (CLNM) and pathological subtypes in oral squamous cell carcinoma (OSCC) by comparing deep learning (DL) and habitat analysis models based on contrast-enhanced CT (CECT). A retrospective analysis was conducted using CECT images from patients diagnosed with OSCC via paraffin pathology at the Second Affiliated Hospital of Dalian Medical University. All patients underwent primary tumor resection and cervical lymph node dissection, with a total of 132 cases included. A DL model was developed by analysing regions of interest (ROIs) in the CECT images using a convolutional neural network (CNN). For habitat analysis, the ROI images were segmented into 3 regions using K-means clustering, and features were selected through a fully connected neural network (FCNN) to build the model. A separate clinical model was constructed based on nine clinical features, including age, gender, and tumor location. Using LNM and pathological subtypes as endpoints, the predictive performance of the clinical model, DL model, habitat analysis model, and a combined clinical + habitat model was evaluated using confusion matrices and receiver operating characteristic (ROC) curves. For LNM prediction, the combined clinical + habitat model achieved an area under the ROC curve (AUC) of 0.97. For pathological subtype prediction, the AUC was 0.96. The DL model yielded an AUC of 0.83 for LNM prediction and 0.91 for pathological subtype classification. The clinical model alone achieved an AUC of 0.94 for predicting LNM. The integrated habitat-clinical model demonstrates improved predictive performance. Combining habitat analysis with clinical features offers a promising approach for the prediction of oral cancer. The habitat-clinical integrated model may assist clinicians in performing accurate preoperative prognostic assessments in patients with oral cancer.
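The habitat step partitions the ROI by intensity clustering; a minimal 1D Lloyd's-algorithm sketch on synthetic intensities (k = 3 as in the study; real habitat analysis clusters multi-dimensional voxel features):

```python
import numpy as np

def kmeans_1d(x, k=3, iters=50, seed=0):
    """Minimal Lloyd's algorithm on scalar intensities."""
    rng = np.random.default_rng(seed)
    # Initialize centers from distinct observed values.
    centers = rng.choice(np.unique(x), size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each voxel to its nearest center, then re-estimate.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Three synthetic "habitats" with well-separated mean intensities.
x = np.concatenate([np.full(30, 10.0), np.full(30, 50.0), np.full(30, 90.0)])
labels, centers = kmeans_1d(x)
```

Each cluster then yields its own radiomic feature set, which is what distinguishes habitat analysis from whole-ROI radiomics.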

Latent-k-space of Refinement Diffusion Model for Accelerated MRI Reconstruction.

Lu Y, Xie X, Wang S, Liu Q

PubMed · Jul 24 2025
Recent advances have applied diffusion model (DM) to magnetic resonance imaging (MRI) reconstruction, demonstrating impressive performance. However, current DM-based MRI reconstruction methods suffer from two critical limitations. First, they model image features at the pixel-level and require numerous iterations for the final image reconstruction, leading to high computational costs. Second, most of these methods operate in the image domain, which cannot avoid the introduction of secondary artifacts. To address these challenges, we propose a novel latent-k-space refinement diffusion model (LRDM) for MRI reconstruction. Specifically, we encode the original k-space data into a highly compact latent space to capture the primary features for accelerated acquisition and apply DM in the low-dimensional latent-k-space to generate prior knowledge. The compact latent space allows the DM to require only 4 iterations to generate accurate priors. To compensate for the inevitable loss of detail during latent-k-space diffusion, we incorporate an additional diffusion model focused exclusively on refining high-frequency structures and features. The results from both models are then decoded and combined to obtain the final reconstructed image. Experimental results demonstrate that the proposed method significantly reduces reconstruction time while delivering comparable image reconstruction quality to conventional DM-based approaches.
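To ground the k-space terminology, a toy sketch of retrospective undersampling and zero-filled reconstruction (illustrative only; the paper's latent-k-space encoding and diffusion prior are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))   # stand-in for an MR image
kspace = np.fft.fft2(img)         # full k-space

# Accelerated acquisition: keep every 4th phase-encode line plus a
# fully sampled low-frequency band (the dominant coarse structure a
# latent-k-space model would compress). A zero-filled inverse FFT
# gives the aliased baseline that reconstruction must improve on.
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True
mask[:4, :] = True
mask[-4:, :] = True
zero_filled = np.fft.ifft2(kspace * mask).real
```

The discarded lines are what make reconstruction ill-posed; diffusion-model priors, whether in the image domain or in a latent k-space as proposed above, supply the missing information.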

AI-Driven Framework for Automated Detection of Kidney Stones in CT Images: Integration of Deep Learning Architectures and Transformers.

Alshenaifi R, Alqahtani Y, Ma S, Umapathy S

PubMed · Jul 24 2025
Kidney stones, a prevalent urological condition associated with acute pain, require prompt and precise diagnosis for optimal therapeutic intervention. While computed tomography (CT) imaging remains the definitive diagnostic modality, manual interpretation of these images is a labor-intensive and error-prone process. This research introduces an artificial-intelligence-based methodology for the automated detection and classification of renal calculi in CT images. To identify CT images with kidney stones, a comprehensive exploration of various ML and DL architectures, along with rigorous experimentation with diverse hyperparameters, was undertaken to refine the model's performance. The proposed workflow involves two key stages: (1) precise segmentation of pathological regions of interest (ROIs) using DL algorithms, and (2) binary classification of the segmented ROIs using both ML and DL models. The SwinTResNet model, optimized using the RMSProp algorithm with a learning rate of 0.0001, demonstrated optimal performance, achieving a training accuracy of 97.27% and a validation accuracy of 96.16% in the segmentation task. The Vision Transformer (ViT) architecture, when coupled with the ADAM optimizer and a learning rate of 0.0001, exhibited robust convergence and consistently achieved the highest performance metrics. Specifically, the model attained a peak training accuracy of 96.63% and a validation accuracy of 95.67%. The results demonstrate the potential of this integrated framework to enhance diagnostic accuracy and efficiency, thereby supporting improved clinical decision-making in the management of kidney stones.

A Lightweight Hybrid DL Model for Multi-Class Chest X-ray Classification for Pulmonary Diseases.

Precious JG, S R, B SP, R R V, M SSM, Sapthagirivasan V

PubMed · Jul 24 2025
Pulmonary diseases have become one of the main causes of declining health, impacting millions of people worldwide. The rapid advancement of deep learning has significantly impacted medical image analysis by improving diagnostic accuracy and efficiency. Timely and precise diagnosis of these diseases is invaluable for effective treatment. Chest X-rays (CXR) play a pivotal role in diagnosing various respiratory diseases by offering valuable insights into the chest and lung regions. This study puts forth a hybrid approach for classifying CXR images into four classes: COVID-19, tuberculosis, pneumonia, and normal (healthy) cases. The presented method integrates a machine learning method, the Support Vector Machine (SVM), with a pre-trained deep learning model for improved classification accuracy and reduced training time. Data from a number of public sources, representing a wide range of demographics, were used in this study. Class weights were implemented during training to balance the contribution of each class and address the class imbalance. Several pre-trained architectures, namely DenseNet, MobileNet, EfficientNetB0, and EfficientNetB3, were investigated and their performance evaluated. Since MobileNet achieved the best classification accuracy of 94%, it was selected for the hybrid model, which combines MobileNet with an SVM classifier, increasing the accuracy to 97%. The results suggest that this approach is reliable and holds great promise for clinical applications.
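The class-weighting step mentioned above typically uses inverse-frequency weights; a small sketch with made-up per-class counts, mirroring scikit-learn's "balanced" heuristic (n_samples / (n_classes * n_c)):

```python
# Hypothetical per-class image counts; the study's real counts differ.
counts = {"covid": 300, "tuberculosis": 150, "pneumonia": 400, "normal": 150}
n_samples, n_classes = sum(counts.values()), len(counts)

# Rarer classes get proportionally larger weights so each class
# contributes equally to the training loss.
class_weights = {c: n_samples / (n_classes * m) for c, m in counts.items()}
print(round(class_weights["pneumonia"], 3),
      round(class_weights["tuberculosis"], 3))  # 0.625 1.667
```

Passing such a dictionary to the training loop scales each sample's loss by its class weight, preventing the majority class from dominating gradient updates.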