
RealDeal: Enhancing Realism and Details in Brain Image Generation via Image-to-Image Diffusion Models

Shen Zhu, Yinzhu Jin, Tyler Spears, Ifrah Zawar, P. Thomas Fletcher

arXiv preprint · Jul 24 2025
We propose image-to-image diffusion models that are designed to enhance the realism and details of generated brain images by introducing sharp edges, fine textures, subtle anatomical features, and imaging noise. Generative models have been widely adopted in the biomedical domain, especially in image generation applications. Latent diffusion models achieve state-of-the-art results in generating brain MRIs. However, due to latent compression, generated images from these models are overly smooth, lacking fine anatomical structures and scan acquisition noise that are typically seen in real images. This work formulates the realism enhancing and detail adding process as image-to-image diffusion models, which refines the quality of LDM-generated images. We employ commonly used metrics like FID and LPIPS for image realism assessment. Furthermore, we introduce new metrics to demonstrate the realism of images generated by RealDeal in terms of image noise distribution, sharpness, and texture.
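As a sketch of one of the realism metrics named above: FID compares the Gaussian statistics (mean and covariance) of feature sets extracted from real and generated images. The snippet below illustrates the formula on random stand-in features; in practice the features come from a pretrained network such as InceptionV3, and the sample counts and dimensions here are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two feature sets (rows = samples)."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(0, 1, (500, 16))  # stand-in "real" features
b = rng.normal(0, 1, (500, 16))  # stand-in "generated" features
print(fid(a, a))                 # identical sets: ~0 up to float error
print(fid(a, b))                 # same distribution: small positive value
```

LPIPS, by contrast, is a learned perceptual metric and needs a pretrained network (e.g. the `lpips` package), so it is not sketched here.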

Latent-k-space of Refinement Diffusion Model for Accelerated MRI Reconstruction.

Lu Y, Xie X, Wang S, Liu Q

PubMed · Jul 24 2025
Recent advances have applied diffusion models (DMs) to magnetic resonance imaging (MRI) reconstruction, demonstrating impressive performance. However, current DM-based MRI reconstruction methods suffer from two critical limitations. First, they model image features at the pixel level and require numerous iterations for the final image reconstruction, leading to high computational costs. Second, most of these methods operate in the image domain, which makes the introduction of secondary artifacts unavoidable. To address these challenges, we propose a novel latent-k-space refinement diffusion model (LRDM) for MRI reconstruction. Specifically, we encode the original k-space data into a highly compact latent space to capture the primary features for accelerated acquisition, and apply the DM in the low-dimensional latent k-space to generate prior knowledge. The compact latent space allows the DM to require only 4 iterations to generate accurate priors. To compensate for the inevitable loss of detail during latent-k-space diffusion, we incorporate an additional diffusion model focused exclusively on refining high-frequency structures and features. The results from both models are then decoded and combined to obtain the final reconstructed image. Experimental results demonstrate that the proposed method significantly reduces reconstruction time while delivering image reconstruction quality comparable to conventional DM-based approaches.

AI-Driven Framework for Automated Detection of Kidney Stones in CT Images: Integration of Deep Learning Architectures and Transformers.

Alshenaifi R, Alqahtani Y, Ma S, Umapathy S

PubMed · Jul 24 2025
Kidney stones, a prevalent urological condition associated with acute pain, require prompt and precise diagnosis for optimal therapeutic intervention. While computed tomography (CT) imaging remains the definitive diagnostic modality, manual interpretation of these images is a labor-intensive and error-prone process. This research introduces an artificial intelligence-based methodology for automated detection and classification of renal calculi in CT images. To identify CT images containing kidney stones, a comprehensive exploration of various ML and DL architectures, along with rigorous experimentation with diverse hyperparameters, was undertaken to refine the model's performance. The proposed workflow involves two key stages: (1) precise segmentation of pathological regions of interest (ROIs) using DL algorithms, and (2) binary classification of the segmented ROIs using both ML and DL models. The SwinTResNet model, optimized using the RMSProp algorithm with a learning rate of 0.0001, demonstrated optimal performance in the segmentation task, achieving a training accuracy of 97.27% and a validation accuracy of 96.16%. The Vision Transformer (ViT) architecture, when coupled with the ADAM optimizer and a learning rate of 0.0001, exhibited robust convergence and consistently achieved the highest performance metrics, attaining a peak training accuracy of 96.63% and a validation accuracy of 95.67%. These results demonstrate the potential of the integrated framework to enhance diagnostic accuracy and efficiency, thereby supporting improved clinical decision-making in the management of kidney stones.

A Lightweight Hybrid DL Model for Multi-Class Chest X-ray Classification for Pulmonary Diseases.

Precious JG, S R, B SP, R R V, M SSM, Sapthagirivasan V

PubMed · Jul 24 2025
Pulmonary diseases have become one of the main causes of declining health, affecting millions of people worldwide. The rapid advancement of deep learning has significantly impacted medical image analysis by improving diagnostic accuracy and efficiency. Timely and precise diagnosis of these diseases is invaluable for effective treatment. Chest X-rays (CXR) play a pivotal role in diagnosing various respiratory diseases by offering valuable insights into the chest and lung regions. This study puts forth a hybrid approach for classifying CXR images into four classes, namely COVID-19, tuberculosis, pneumonia, and normal (healthy) cases. The presented method integrates a machine learning method, the Support Vector Machine (SVM), with a pre-trained deep learning model for improved classification accuracy and reduced training time. Data from a number of public sources, representing a wide range of demographics, were used in this study. Class weights were applied during training to balance each class's contribution and address class imbalance. Several pre-trained architectures, namely DenseNet, MobileNet, EfficientNetB0, and EfficientNetB3, were investigated and their performance evaluated. Since MobileNet achieved the best classification accuracy of 94%, it was chosen for the hybrid model, which combines MobileNet with an SVM classifier, increasing the accuracy to 97%. The results suggest that this approach is reliable and holds great promise for clinical applications.
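A minimal sketch of the hybrid idea described above: deep features fed to an SVM classifier. The random feature matrix below stands in for MobileNet penultimate-layer embeddings of chest X-rays; the feature dimension, sample counts, and `class_weight` setting are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for deep features: in the paper's setup these would come from the
# penultimate layer of a pretrained MobileNet applied to each chest X-ray.
rng = np.random.default_rng(42)
n_per_class, n_feats = 50, 128
X = np.vstack([rng.normal(c, 1.0, (n_per_class, n_feats)) for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)  # COVID-19, TB, pneumonia, normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced")  # class weights counter imbalance
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

The synthetic classes here are well separated, so the SVM scores near-perfectly; the point is only the feature-extractor-plus-SVM wiring, not the reported 97%.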

Deep learning-based real-time detection of head and neck tumors during radiation therapy.

Gardner M, Ben Bouchta Y, Truant D, Mylonas A, Sykes JR, Sundaresan P, Keall PJ

PubMed · Jul 24 2025

Objective: Clinical drivers for real-time head and neck (H&N) tumor tracking during radiation therapy (RT) are to account for motion caused by changes in immobilization mask fit, and to reduce mask-related patient distress by replacing the masks with patient motion management methods. The purpose of this paper is to investigate a deep learning-based method to segment H&N tumors in patient kilovoltage (kV) x-ray images, enabling real-time H&N tumor tracking during RT.
Approach: An ethics-approved clinical study collected data from 17 H&N cancer patients undergoing conventional H&N RT. For each patient, personalized conditional Generative Adversarial Networks (cGANs) were trained to segment H&N tumors in kV x-ray images. Network training data were derived from each patient's planning CT and contoured gross tumor volumes (GTV). For each training epoch, the planning CT and GTV were deformed and forward projected to create the training dataset. The testing data consisted of kV x-ray images used to reconstruct the pre-treatment CBCT volume for the first, middle and end fractions. The ground truth tumor locations were derived by deformably registering the planning CT to the pre-treatment CBCT and then deforming the GTV and forward projecting the deformed GTV. The generated cGAN segmentations were compared to ground truth tumor segmentations using the absolute magnitude of the centroid error and the mean surface distance (MSD) metrics.
Main Results: The centroid error for the nasopharynx (n=4), oropharynx (n=9), and larynx (n=4) patients was 1.5±0.9 mm, 2.4±1.6 mm, and 3.5±2.2 mm respectively, and the MSD was 1.5±0.3 mm, 1.9±0.9 mm, and 2.3±1.0 mm respectively. There was a weak correlation between the centroid error and the LR tumor location (r=0.41), which was higher for oropharynx patients (r=0.77).
Significance: The paper reports markerless H&N tumor detection accuracy using kV images. Accurate tracking of H&N tumors can enable more precise RT, leading to mask-free treatment and better patient outcomes.
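The two evaluation metrics used above, centroid error and mean surface distance, can be sketched for binary masks with NumPy/SciPy. The toy 2D masks below are illustrative; the study evaluates tumor segmentations in kV projections.

```python
import numpy as np
from scipy import ndimage

def centroid_error(mask_a, mask_b):
    """Absolute distance between the centroids of two binary masks."""
    ca = np.array(ndimage.center_of_mass(mask_a))
    cb = np.array(ndimage.center_of_mass(mask_b))
    return float(np.linalg.norm(ca - cb))

def mean_surface_distance(mask_a, mask_b):
    """Symmetric mean distance between the boundary pixels of two masks."""
    def surface(m):
        eroded = ndimage.binary_erosion(m)
        return np.argwhere(m & ~eroded)
    sa, sb = surface(mask_a), surface(mask_b)
    d_ab = [np.min(np.linalg.norm(sb - p, axis=1)) for p in sa]
    d_ba = [np.min(np.linalg.norm(sa - p, axis=1)) for p in sb]
    return float((np.mean(d_ab) + np.mean(d_ba)) / 2)

a = np.zeros((20, 20), bool); a[5:10, 5:10] = True
b = np.zeros((20, 20), bool); b[6:11, 5:10] = True  # shifted down one row
print(centroid_error(a, b))  # 1.0
print(mean_surface_distance(a, b))
```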

Association of initial core volume on non-contrast CT using a deep learning algorithm with clinical outcomes in acute ischemic stroke: a potential tool for selection and prognosis?

Flores A, Ustrell X, Seró L, Suarez A, Avivar Y, Cruz-Criollo L, Galecio-Castillo M, Cespedes J, Cendrero J, Salvia V, Garcia-Tornel A, Olive Gadea M, Canals P, Ortega-Gutierrez S, Ribó M

PubMed · Jul 24 2025
In an extended time window, contrast-based neuroimaging is valuable for treatment selection or prognosis in patients with stroke undergoing reperfusion treatment. However, its immediate availability remains limited, especially in resource-constrained regions. We sought to evaluate the association of initial core volume (ICV) measured on non-contrast computed tomography (NCCT) by a deep learning-based algorithm with outcomes in patients undergoing reperfusion treatment. Consecutive patients who received reperfusion treatments were collected from a prospectively maintained registry in three comprehensive stroke centers from January 2021 to May 2024. ICV on admission was estimated on NCCT by a previously validated deep learning algorithm (Methinks). Outcomes of interest included favorable outcome (modified Rankin Scale score 0-2 at 90 days) and symptomatic intracranial hemorrhage (sICH). The study comprised 658 patients of mean (SD) age 72.7 (14.4) years and median (IQR) baseline National Institutes of Health Stroke Scale (NIHSS) score of 12 (6-19). Primary endovascular treatment was performed in 53.7% of patients and 24.9% received IV thrombolysis only. Patients with favorable outcomes had a lower mean (SD) automated ICV (aICV; 12.9 (26.9) mL vs 34.9 (40) mL, P<0.001). Lower aICV was associated with a favorable outcome (adjusted OR 0.983 (95% CI 0.975 to 0.992), P<0.001) after adjusted logistic regression. For every 1 mL increase in aICV, the odds of a favorable outcome decreased by 1.7%. Patients who experienced sICH had a higher mean (SD) aICV (47.8 (61.1) mL vs 20.5 (32) mL, P=0.001). Higher aICV was independently associated with sICH (adjusted OR 1.014 (95% CI 1.004 to 1.025), P=0.009) after adjusted logistic regression. For every 1 mL increase in aICV, the odds of sICH increased by 1.4%. In patients with stroke undergoing reperfusion therapy, aICV assessment on NCCT predicts long-term outcomes and sICH. 
Further studies determining the potential role of aICV assessment to safely expand and simplify reperfusion therapies based on AI interpretation of NCCT may be justified.
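The per-millilitre percentage changes quoted above follow directly from the adjusted odds ratios: an OR of 0.983 per mL corresponds to a 1.7% decrease in the odds of a favorable outcome, and an OR of 1.014 to a 1.4% increase in the odds of sICH. A quick check:

```python
def pct_change_per_unit(odds_ratio):
    """Percent change in odds per one-unit increase of the predictor."""
    return (odds_ratio - 1.0) * 100

print(f"favorable outcome: {pct_change_per_unit(0.983):+.1f}% per mL")  # -1.7% per mL
print(f"sICH:              {pct_change_per_unit(1.014):+.1f}% per mL")  # +1.4% per mL
```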

EXPEDITION: an Exploratory deep learning method to quantitatively predict hematoma progression after intracerebral hemorrhage.

Chen S, Li Z, Li Y, Mi D

PubMed · Jul 24 2025
This study aims to develop an Exploratory deep learning method to quantitatively predict hematoma progression (EXPEDITION for short) after intracerebral hemorrhage (ICH). Patients with primary ICH in the basal ganglia or thalamus were retrospectively enrolled, and their baseline non-contrast CT (NCCT) image, CT perfusion (CTP) images, and subsequent re-examination NCCT images from the 2nd to the 8th day after baseline CTP were collected. Subjects who had received three or more re-examination scans were assigned to the test data set, and the others to the training data set. Hematoma volume was estimated by manually outlining the lesion on each NCCT scan, and a cerebral venous hemodynamic feature was extracted from the CTP images; EXPEDITION was then trained. Bland-Altman analysis was used to assess prediction performance. A total of 126 patients were enrolled initially, and 73 were included in the final analysis, categorized into the training data set (58 patients with 93 scans) and the test data set (15 patients with 50 scans). For the test set, the mean difference [mean ± 1.96 SD] in hematoma volume between the EXPEDITION prediction and the reference was -0.96 [-9.64, +7.71] mL, indicating that EXPEDITION achieved the accuracy needed for quantitative prediction of hematoma progression. In summary, an exploratory deep learning method, EXPEDITION, was proposed to quantitatively predict hematoma progression after primary ICH in the basal ganglia or thalamus.
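The Bland-Altman analysis used above reduces to the mean difference (bias) and its 95% limits of agreement, mean ± 1.96 SD. A minimal sketch with made-up hematoma volumes (the numbers are illustrative, not from the study):

```python
import numpy as np

def bland_altman(pred, ref):
    """Bias and 95% limits of agreement (mean difference ± 1.96 SD)."""
    diff = np.asarray(pred, float) - np.asarray(ref, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative volumes (mL): predicted vs. manually outlined reference.
predicted = [12.1, 30.5, 8.2, 55.0, 21.3]
reference = [13.0, 29.8, 9.1, 58.2, 20.6]
bias, lo, hi = bland_altman(predicted, reference)
print(f"bias {bias:.2f} mL, LoA [{lo:.2f}, {hi:.2f}] mL")
```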

Enhanced HER-2 prediction in breast cancer through synergistic integration of deep learning, ultrasound radiomics, and clinical data.

Hu M, Zhang L, Wang X, Xiao X

PubMed · Jul 24 2025
This study integrates ultrasound Radiomics with clinical data to enhance the diagnostic accuracy of HER-2 expression status in breast cancer, aiming to provide more reliable treatment strategies for this aggressive disease. We included ultrasound images and clinicopathologic data from 210 female breast cancer patients, employing a Generative Adversarial Network (GAN) to enhance image clarity and segment the region of interest (ROI) for Radiomics feature extraction. Features were optimized through Z-score normalization and various statistical methods. We constructed and compared multiple machine learning models, including Linear Regression, Random Forest, and XGBoost, with deep learning models such as CNNs (ResNet101, VGG19) and Transformer technology. The Grad-CAM technique was used to visualize the decision-making process of the deep learning models. The Deep Learning Radiomics (DLR) model integrated Radiomics features with deep learning features, and a combined model further integrated clinical features to predict HER-2 status. The LightGBM and ResNet101 models showed high performance, but the combined model achieved the highest AUC values in both training and testing, demonstrating the effectiveness of integrating diverse data sources. The study successfully demonstrates that the fusion of deep learning with Radiomics analysis significantly improves the prediction accuracy of HER-2 status, offering a new strategy for personalized breast cancer treatment and prognostic assessments.
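The preprocessing step described above, Z-score normalization of each feature block before fusion, can be sketched with sklearn. The block sizes below are hypothetical stand-ins for the paper's radiomics, deep, and clinical features; only the patient count (210) comes from the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature blocks for 210 patients (dimensions are illustrative).
radiomics = rng.normal(50, 10, (210, 30))  # handcrafted ultrasound features
deep      = rng.normal(0, 1, (210, 64))    # e.g. CNN embeddings
clinical  = rng.normal(60, 15, (210, 5))   # age, lab values, ...

# Z-score each block so no single modality dominates, then concatenate.
blocks = [StandardScaler().fit_transform(b) for b in (radiomics, deep, clinical)]
fused = np.hstack(blocks)
print(fused.shape)  # (210, 99)
```

The fused matrix would then feed a downstream classifier (the paper compares several, including LightGBM-style gradient boosting).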

Disease probability-enhanced follow-up chest X-ray radiology report summary generation.

Wang Z, Deng Q, So TY, Chiu WH, Lee K, Hui ES

PubMed · Jul 24 2025
A chest X-ray radiology report describes not only abnormal findings from the X-ray obtained at a given examination, but also findings on disease progression or changes in device placement relative to the X-ray from the previous examination. The majority of efforts on automatic generation of radiology reports pertain to reporting the former, but not the latter, type of findings. To the best of the authors' knowledge, there is only one work dedicated to generating a summary of the latter findings, i.e., a follow-up radiology report summary. In this study, we propose a transformer-based framework to tackle this task. Motivated by our observations on the significance of the medical lexicon for the fidelity of report summary generation, we introduce two mechanisms to bestow clinical insight on our model, namely disease probability soft guidance and a masked entity modeling loss. The former employs a pretrained abnormality classifier to guide the presence level of specific abnormalities, while the latter directs the model's attention toward the medical lexicon. Extensive experiments demonstrate that the performance of our model exceeds the state of the art.
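The abstract does not give the exact form of the masked entity modeling loss; one plausible reading is a token-level negative log-likelihood that up-weights medical-lexicon tokens. The NumPy sketch below, with a hypothetical `entity_weight` parameter, illustrates that idea only and is not the paper's definition.

```python
import numpy as np

def masked_entity_loss(log_probs, targets, entity_mask, entity_weight=2.0):
    """Token-level NLL where medical-lexicon tokens get a larger weight.

    log_probs:   (seq_len, vocab) log-softmax outputs
    targets:     (seq_len,) gold token ids
    entity_mask: (seq_len,) 1 where the token is in the medical lexicon
    """
    nll = -log_probs[np.arange(len(targets)), targets]
    weights = np.where(entity_mask == 1, entity_weight, 1.0)
    return float((weights * nll).sum() / weights.sum())

# Toy example: a uniform model over a 4-token vocabulary.
vocab, seq = 4, 6
log_probs = np.full((seq, vocab), np.log(1 / vocab))
targets = np.array([0, 1, 2, 3, 0, 1])
mask = np.array([0, 1, 0, 1, 0, 0])  # two "medical lexicon" tokens
loss = masked_entity_loss(log_probs, targets, mask)
print(round(loss, 3))  # 1.386, i.e. log(4), since all tokens are equally likely
```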

Deep learning reconstruction of zero echo time magnetic resonance imaging: diagnostic performance in axial spondyloarthritis.

Yi J, Hahn S, Lee HJ, Lee S, Park S, Lee J, de Arcos J, Fung M

PubMed · Jul 24 2025
To compare the diagnostic performance of deep learning reconstruction (DLR) of zero echo time (ZTE) MRI for structural lesions in patients with axial spondyloarthritis, against T1WI and ZTE MRI without DLR, using CT as the reference standard. From February 2021 to December 2022, 26 patients (52 sacroiliac joints (SIJ) and 104 quadrants) underwent SIJ MRIs. Three readers assessed overall image quality and structural conspicuity, scoring SIJs for structural lesions on T1WI, ZTE, and ZTE DLR 50%, 75%, and 100%, respectively. Diagnostic performance was evaluated using CT as the reference standard, and inter-reader agreement was assessed using weighted kappa. ZTE DLR 100% showed the highest image quality scores for readers 1 and 2, and the best structural conspicuity scores for all three readers. In readers 2 and 3, ZTE DLR 75% showed the best diagnostic performance for bone sclerosis, outperforming T1WI and ZTE (all p < 0.05). In all readers, ZTE DLR 100% showed superior diagnostic performance for bone erosion compared to T1WI and ZTE (all p < 0.01). For bone sclerosis, ZTE DLR 50% showed the highest kappa coefficients between readers 1 and 2 and between readers 1 and 3. For bone erosion, ZTE DLR 100% showed the highest kappa coefficients between readers. ZTE MRI with DLR outperformed T1WI and ZTE MRI without DLR in diagnosing bone sclerosis and erosion of the SIJ, while offering similar subjective image quality and structural conspicuity.
Question: With zero echo time (ZTE) alone, small structural lesions, such as bone sclerosis and erosion, are challenging to confirm in axial spondyloarthritis.
Findings: ZTE deep learning reconstruction (DLR) showed higher diagnostic performance for detecting bone sclerosis and erosion, compared with T1WI and ZTE.
Clinical relevance: Applying DLR to ZTE enhances diagnostic capability for detecting bone sclerosis and erosion in the sacroiliac joint, aiding the early diagnosis of axial spondyloarthritis.
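Inter-reader agreement on ordinal lesion scores, as assessed in this study, is commonly computed as a weighted kappa; sklearn's `cohen_kappa_score` supports quadratic weights, which penalize large score disagreements more than adjacent ones. The reader scores below are illustrative, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Two readers' ordinal structural-lesion scores on the same joints (illustrative).
reader1 = [0, 1, 2, 2, 3, 1, 0, 2]
reader2 = [0, 1, 2, 3, 3, 1, 1, 2]
kappa = cohen_kappa_score(reader1, reader2, weights="quadratic")
print(f"weighted kappa: {kappa:.2f}")
```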
