Page 193 of 6526512 results

Wang C, Long J, Liu X, Xu W, Zhang H, Liu Z, Yu M, Wang C, Wu Y, Sun A, Xu K, Meng Y

PubMed · Sep 5 2025
Carotid artery disease is a major cause of stroke and is frequently evaluated using carotid CT angiography (CTA). However, the associated radiation exposure and contrast agent use raise concerns, particularly for high-risk patients. Recent advances in deep learning image reconstruction (DLIR) offer new potential to enhance image quality under low-dose conditions. This study aimed to evaluate the effectiveness of the DLIR-H algorithm in improving the quality of 40 keV virtual monoenergetic images (VMI) in dual-energy CTA (DE-CTA) while minimizing radiation dose and contrast agent usage. A total of 120 patients undergoing DE-CTA were prospectively divided into four groups: a control group using ASIR-V and three experimental groups using the DLIR-L, DLIR-M, and DLIR-H algorithms. All scans employed a "triple-low" protocol: low radiation dose, low contrast volume, and low injection rate. Objective image quality was assessed via CT values, image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR); subjective image quality was evaluated using a 5-point Likert scale. The DLIR-H group showed the greatest improvement in image quality, with significantly reduced noise and increased SNR and CNR, particularly at complex vascular sites such as the carotid bifurcation and internal carotid artery. Radiation dose and contrast volume were reduced by 15.6% and 17.5%, respectively, and DLIR-H also received the highest subjective image quality scores. DLIR-H significantly enhances DE-CTA image quality under ultra-low-dose conditions, preserving diagnostic detail while reducing patient risk. It supports safer and more effective carotid imaging, especially for high-risk groups such as patients with renal impairment and those requiring repeated scans, enabling wider clinical use of ultra-low-dose protocols.
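The objective metrics cited here follow the usual ROI-based definitions in CT image-quality studies: SNR as mean ROI attenuation over image noise (the background standard deviation), and CNR as the vessel-to-tissue attenuation difference over noise. A minimal sketch; the ROI values are hypothetical, and exact ROI placement varies by study:

```python
def snr(mean_roi, noise_sd):
    # SNR: mean attenuation of the region of interest divided by image noise (SD)
    return mean_roi / noise_sd

def cnr(mean_vessel, mean_tissue, noise_sd):
    # CNR: attenuation difference between vessel and adjacent tissue over noise
    return (mean_vessel - mean_tissue) / noise_sd

# Hypothetical ROI measurements in Hounsfield units, for illustration only:
# lower noise_sd (as with DLIR-H denoising) directly raises both metrics.
vessel_snr = snr(450.0, 10.0)
vessel_cnr = cnr(450.0, 60.0, 10.0)
```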

Thierry Judge, Nicolas Duchateau, Khuram Faraz, Pierre-Marc Jodoin, Olivier Bernard

arXiv preprint · Sep 5 2025
Simulated ultrasound image sequences are key for training and validating machine learning algorithms for left ventricular strain estimation. Several simulation pipelines have been proposed to generate sequences with corresponding ground truth motion, but they suffer from limited realism as they do not consider speckle decorrelation. In this work, we address this limitation by proposing an improved simulation framework that explicitly accounts for speckle decorrelation. Our method builds on an existing ultrasound simulation pipeline by incorporating a dynamic model of speckle variation. Starting from real ultrasound sequences and myocardial segmentations, we generate meshes that guide image formation. Instead of applying a fixed ratio of myocardial and background scatterers, we introduce a coherence map that adapts locally over time. This map is derived from correlation values measured directly from the real ultrasound data, ensuring that simulated sequences capture the characteristic temporal changes observed in practice. We evaluated the realism of our approach using ultrasound data from 98 patients in the CAMUS database. Performance was assessed by comparing correlation curves from real and simulated images. The proposed method achieved lower mean absolute error compared to the baseline pipeline, indicating that it more faithfully reproduces the decorrelation behavior seen in clinical data.
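The evaluation described, comparing correlation curves between real and simulated sequences, can be sketched with plain Pearson correlation. This is an illustrative reconstruction, not the authors' code; the frame vectors here stand in for patch-wise speckle intensities:

```python
from statistics import fmean

def pearson(x, y):
    # Pearson correlation between two equal-length intensity vectors
    mx, my = fmean(x), fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def decorrelation_curve(frames, ref=0):
    # Correlation of each frame's speckle pattern against a reference frame;
    # in real tissue this curve decays over time (speckle decorrelation).
    return [pearson(frames[ref], f) for f in frames]

def curve_mae(real_curve, sim_curve):
    # Mean absolute error between real and simulated correlation curves,
    # the realism metric the abstract reports.
    return fmean(abs(r - s) for r, s in zip(real_curve, sim_curve))
```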

Mainak Biswas, Ambedkar Dukkipati, Devarajan Sridharan

arXiv preprint · Sep 5 2025
Deep learning models deployed in real-world applications (e.g., medicine) face challenges because source models do not generalize well to domain-shifted target data. Many successful domain adaptation (DA) approaches require full access to source data. Yet, such requirements are unrealistic in scenarios where source data cannot be shared, either because of privacy concerns or because it is too large and incurs prohibitive storage or computational costs. Moreover, resource constraints may limit the availability of labeled targets. We illustrate this challenge in a neuroscience setting where source data are unavailable, labeled target data are meager, and predictions involve continuous-valued outputs. We build upon Contradistinguisher (CUDA), an efficient framework that learns a shared model across the labeled source and unlabeled target samples without intermediate representation alignment. Yet CUDA was designed for unsupervised DA, with full access to source data, and for classification tasks. We develop CRAFT, a Contradistinguisher-based Regularization Approach for Flexible Training, for source-free (SF), semi-supervised transfer of pretrained models in regression tasks. We showcase the efficacy of CRAFT in two neuroscience settings: gaze prediction with electroencephalography (EEG) data and "brain age" prediction with structural MRI data. For both datasets, CRAFT yielded up to 9% improvement in root-mean-squared error (RMSE) over fine-tuned models when labeled training examples were scarce. Moreover, CRAFT leveraged unlabeled target data and outperformed four competing state-of-the-art source-free domain adaptation models by more than 3%. Lastly, we demonstrate the efficacy of CRAFT on two other real-world regression benchmarks. We propose CRAFT as an efficient approach for source-free, semi-supervised deep transfer for regression, a setting that is ubiquitous in biology and medicine.
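The reported gains are relative RMSE reductions. A small helper pair, shown only to make the metric explicit; the numbers below are hypothetical, not the paper's results:

```python
from math import sqrt

def rmse(y_true, y_pred):
    # Root-mean-squared error for continuous-valued predictions (e.g. brain age)
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pct_improvement(rmse_baseline, rmse_model):
    # Relative RMSE reduction, as in "up to 9% improvement over fine-tuned models"
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

# Hypothetical example: a baseline RMSE of 10.0 vs a model RMSE of 9.1
# corresponds to a 9% relative improvement.
gain = pct_improvement(10.0, 9.1)
```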

Natascha Niessen, Carolin M. Pirkl, Ana Beatriz Solana, Hannah Eichhorn, Veronika Spieker, Wenqi Huang, Tim Sprenger, Marion I. Menzel, Julia A. Schnabel

arXiv preprint · Sep 5 2025
Multi-contrast MRI sequences allow for the acquisition of images with varying tissue contrast within a single scan. The resulting multi-contrast images can be used to extract quantitative information on tissue microstructure. To make such multi-contrast sequences feasible for clinical routine, the usually very long scan times need to be shortened, e.g., through undersampling in k-space. However, this comes with challenges for the reconstruction. In general, advanced reconstruction techniques such as compressed sensing or deep learning-based approaches can enable the acquisition of high-quality images despite the acceleration. In this work, we leverage the redundant anatomical information of multi-contrast sequences to achieve even higher acceleration rates. We use undersampling patterns that capture the contrast information located at the k-space center, while performing complementary undersampling across contrasts for high frequencies. To reconstruct this highly sparse k-space data, we propose an implicit neural representation (INR) network that jointly reconstructs all contrast images and is thus well suited to exploiting the complementary information acquired across contrasts. We demonstrate the benefits of our proposed INR method by applying it to multi-contrast MRI using the MPnRAGE sequence, where it outperforms the state-of-the-art parallel imaging compressed sensing (PICS) reconstruction method, even at higher acceleration factors.
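One plausible reading of the sampling scheme, a fully sampled k-space center for every contrast plus complementary high-frequency lines, can be sketched as follows. The round-robin line assignment and the center fraction are illustrative assumptions, not the paper's actual patterns:

```python
def complementary_masks(n_lines, n_contrasts, center_frac=0.1):
    # Fully sample a small k-space center band for every contrast (it carries
    # the contrast information), then spread the remaining high-frequency
    # lines round-robin so each line is acquired by exactly one contrast.
    half = max(1, int(n_lines * center_frac / 2))
    center = set(range(n_lines // 2 - half, n_lines // 2 + half))
    masks = []
    for c in range(n_contrasts):
        lines = set(center)
        lines.update(i for i in range(n_lines) if i % n_contrasts == c)
        masks.append(sorted(lines))
    return masks

# Across all contrasts, every phase-encode line is covered at least once,
# which is what lets a joint reconstruction share information between them.
masks = complementary_masks(32, 4)
```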

Mohammad Abbadi, Yassine Himeur, Shadi Atalla, Wathiq Mansoor

arXiv preprint · Sep 5 2025
Breast cancer remains a leading cause of cancer-related mortality among women worldwide. Ultrasound imaging, widely used due to its safety and cost-effectiveness, plays a key role in early detection, especially in patients with dense breast tissue. This paper presents a comprehensive study on the application of machine learning and deep learning techniques for breast cancer classification using ultrasound images. Using datasets such as BUSI, BUS-BRA, and BrEaST-Lesions USG, we evaluate classical machine learning models (SVM, KNN) and deep convolutional neural networks (ResNet-18, EfficientNet-B0, GoogLeNet). Experimental results show that ResNet-18 achieves the highest accuracy (99.7%) and perfect sensitivity for malignant lesions. Classical ML models, though outperformed by CNNs, achieve competitive performance when enhanced with deep feature extraction. Grad-CAM visualizations further improve model transparency by highlighting diagnostically relevant image regions. These findings support the integration of AI-based diagnostic tools into clinical workflows and demonstrate the feasibility of deploying high-performing, interpretable systems for ultrasound-based breast cancer detection.
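As an illustration of the classical-ML-on-deep-features pipeline mentioned above, a k-NN classifier over CNN-derived feature vectors might look like the sketch below; the feature vectors and labels are invented for demonstration:

```python
from collections import Counter
from math import dist

def knn_predict(train_feats, train_labels, query, k=3):
    # Classical k-NN applied to (hypothetical) deep feature vectors:
    # rank training samples by Euclidean distance, vote among the k nearest.
    nearest = sorted(zip(train_feats, train_labels),
                     key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D "deep features" standing in for CNN embeddings of ultrasound lesions
feats = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1)]
labels = ["benign", "benign", "malignant", "malignant"]
```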

Eini P, Eini P, Serpoush H, Rezayee M, Tremblay J

PubMed · Sep 5 2025
Carotid artery plaques, a hallmark of atherosclerosis, are key risk indicators for ischemic stroke, a major global health burden with 101 million cases and 6.65 million deaths in 2019. Early ultrasound detection is vital but hindered by the limitations of manual analysis. Machine learning (ML) offers a promising solution for automated plaque detection, yet its comparative performance is underexplored. This systematic review and meta-analysis evaluates ML models for carotid plaque detection using ultrasound. We searched PubMed, Scopus, Embase, Web of Science, and ProQuest for studies on ML-based carotid plaque detection with ultrasound, following PRISMA guidelines. Eligible studies reported diagnostic metrics and used a reference standard. Data on study characteristics, ML models, and performance were extracted, with risk of bias assessed via PROBAST+AI. Pooled sensitivity, specificity, and AUROC were calculated using STATA 18 with the MIDAS and METADTA modules. Of ten studies, eight were meta-analyzed (200-19,751 patients). The best models showed a pooled sensitivity of 0.94 (95% CI: 0.88-0.97), specificity of 0.95 (95% CI: 0.86-0.98), AUROC of 0.98 (95% CI: 0.97-0.99), and DOR of 302 (95% CI: 54-1684), with high heterogeneity (I² = 90%) and no publication bias. ML models show promise in carotid plaque detection, supporting potential clinical integration for stroke prevention, though high heterogeneity and potential bias highlight the need for standardized validation.
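The diagnostic odds ratio relates to sensitivity and specificity as DOR = (sens/(1-sens)) / ((1-spec)/spec). Plugging in the pooled point estimates gives roughly the reported magnitude, though the paper's DOR of 302 comes from a bivariate meta-analysis model, not this back-of-envelope form:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    # DOR: odds of a positive test in diseased vs non-diseased subjects,
    # expressed via sensitivity and specificity.
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# With the pooled estimates from the abstract (sens 0.94, spec 0.95),
# the point-estimate DOR is about 298, the same order as the pooled 302.
dor = diagnostic_odds_ratio(0.94, 0.95)
```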

Xu D, Liu H, Miao X, O'Connor D, Scholey JE, Yang W, Feng M, Ohliger M, Lin H, Ruan D, Yang Y, Sheng K

PubMed · Sep 5 2025
Accelerating MR acquisition is essential for image-guided therapeutic applications. Compressed sensing (CS) has been developed to minimize image artifacts in accelerated scans, but the required iterative reconstruction is computationally complex and difficult to generalize. Convolutional neural network (CNN)- and Transformer-based deep learning (DL) methods have emerged as a faster alternative but face challenges in modeling continuous k-space, a problem amplified by the non-Cartesian sampling commonly used in accelerated acquisition. In comparison, implicit neural representations can model continuous signals in the frequency domain and are thus compatible with arbitrary k-space sampling patterns. The current study develops a novel generative-adversarially trained implicit neural representation (k-GINR) for de novo undersampled non-Cartesian k-space reconstruction. k-GINR consists of two stages: 1) supervised training on an existing patient cohort; 2) self-supervised patient-specific optimization. The StarVIBE T1-weighted liver dataset, consisting of 118 prospectively acquired scans and corresponding coil data, was employed for testing. k-GINR was compared with two INR-based methods, NeRP and k-NeRP; an unrolled DL method, Deep Cascade CNN; and CS. k-GINR consistently outperformed the baselines, with a larger performance advantage observed at very high accelerations (PSNR 6.8%-15.2% higher at 3-fold, 15.1%-48.8% higher at 10-fold, and 29.3%-60.5% higher at 20-fold acceleration). The reconstruction times for k-GINR, NeRP, k-NeRP, CS, and Deep Cascade CNN were approximately 3 minutes, 4-10 minutes, 3 minutes, 4 minutes, and 3 seconds, respectively. k-GINR, an innovative two-stage INR network incorporating adversarial training, was designed for direct non-Cartesian k-space reconstruction for new incoming patients. It demonstrated superior image quality compared to CS and Deep Cascade CNN across a wide range of acceleration ratios.
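The PSNR comparisons above use the standard definition; a short helper for reference (the normalization to a peak value of 1.0 is an assumption, not stated in the abstract):

```python
from math import log10

def psnr(mse, max_val=1.0):
    # Peak signal-to-noise ratio in dB for a given mean squared error
    # against a reference image with peak intensity max_val.
    return 10.0 * log10(max_val ** 2 / mse)

# A tenfold reduction in MSE raises PSNR by 10 dB; halving the error in
# amplitude terms (MSE / 4) adds about 6 dB.
baseline_db = psnr(0.01)
```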

Jinhao Wang, Florian Vogl, Pascal Schütz, Saša Ćuković, William R. Taylor

arXiv preprint · Sep 5 2025
Veriserum is an open-source dataset designed to support the training of deep learning registration for dual-plane fluoroscopic analysis. It comprises approximately 110,000 X-ray images of 10 knee implant pair combinations (2 femur and 5 tibia implants) captured during 1,600 trials, incorporating poses associated with daily activities such as level gait and ramp descent. Each image is annotated with an automatically registered ground-truth pose, while 200 images include manually registered poses for benchmarking. Key features of Veriserum include dual-plane images and calibration tools. The dataset aims to support the development of applications such as 2D/3D image registration, image segmentation, X-ray distortion correction, and 3D reconstruction. Freely accessible, Veriserum provides a reproducible benchmark for algorithm development and evaluation in computer vision and medical imaging research. The Veriserum dataset used in this study is publicly available via https://movement.ethz.ch/data-repository/veriserum.html, with the data stored at the ETH Zürich Research Collections: https://doi.org/10.3929/ethz-b-000701146.

Ozcelik G, Erol S, Korkut S, Kose Cetinkaya A, Ozcelik H

PubMed · Sep 5 2025
Bronchopulmonary dysplasia (BPD) is a significant morbidity in premature infants. This study aimed to assess the accuracy of an artificial intelligence model's BPD predictions in comparison to clinical outcomes. Medical records of premature infants born at ≤ 28 weeks' gestation and weighing < 1250 g between January 1, 2020, and December 31, 2021, in the neonatal intensive care unit were obtained. In this retrospective model development and validation study, an artificial intelligence model was developed using the DenseNet121 deep learning architecture. The training and test sets consisted of chest radiographs obtained on postnatal day 1 as well as during the 2nd, 3rd, and 4th weeks. The model predicted the likelihood of developing no BPD, or mild, moderate, or severe BPD. The accuracy of the model was tested against the clinical outcomes of the patients. The study included 122 premature infants with a median birth weight of 990 g (range: 840-1120 g). Of these, 33 (27%) patients did not develop BPD, 24 (19.7%) had mild BPD, 28 (23%) had moderate BPD, and 37 (30%) had severe BPD. A total of 395 chest radiographs from these patients were used to develop the model. Area under the curve values, representing the accuracy of predicting severe, moderate, mild, and no BPD, were as follows: 0.79, 0.75, 0.82, and 0.82 for day 1 radiographs; 0.88, 0.82, 0.74, and 0.94 for week 2 radiographs; 0.87, 0.83, 0.88, and 0.96 for week 3 radiographs; and 0.90, 0.82, 0.86, and 0.97 for week 4 radiographs. The artificial intelligence model successfully identified BPD on chest radiographs and classified its severity. The accuracy of the model can be improved using larger control and external validation datasets.
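The area-under-the-curve values reported are areas under the ROC curve. The Mann-Whitney formulation makes the metric concrete: the probability that a positive case is scored above a negative one. The scores below are invented for illustration:

```python
def auroc(pos_scores, neg_scores):
    # Mann-Whitney U formulation of the area under the ROC curve:
    # count positive-negative pairs the model ranks correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for infants who did / did not develop severe BPD
auc = auroc([0.9, 0.8], [0.4, 0.7])
```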

Han, Y., Hanania, A. N., Siddiqui, Z. A., Ugarte, V., Zhou, B., Mohamed, A. S. R., Pathak, P., Hamstra, D. A., Sun, B.

medRxiv preprint · Sep 5 2025
Purpose/Objective: Current radiotherapy (RT) planning workflows rely on pre-treatment simulation CT (sCT), which can significantly delay treatment initiation, particularly in resource-constrained settings. While diagnostic CT (dCT) offers a potential alternative for expedited planning, inherent geometric discrepancies from sCT in patient positioning and table curvature limit its direct use for accurate RT planning. This study presents a novel AI-based method designed to overcome these limitations by generating synthetic simulation CT (ssCT) directly from standard dCT for spinal palliative RT, aiming to eliminate the need for sCT and accelerate the treatment workflow.
Materials/Methods: ssCTs were generated using two neural network models that adjust spine position and correct table curvature. Each network uses a three-layer structure with ReLU activation, optimized with Adam using an MSE loss and MAE metrics. The models were trained on paired dCT and sCT images from 30 patients undergoing palliative spine radiotherapy at a safety-net hospital, with 22 cases used for training and 8 for testing. To explore institutional dependence, the models were also tested on 7 patients from an academic medical center (AMC). To evaluate ssCT accuracy, both ssCT and dCT were aligned with sCT using the same frame-of-reference rigid registration on bone windows. Dosimetric differences were assessed by comparing dCT vs. sCT and ssCT vs. sCT, quantifying deviations in dose-volume histogram (DVH) metrics, including Dmean, Dmax, D95, D99, V100, and V107, as well as root-mean-square (RMS) differences. Image and plan quality were assessed by four radiation oncologists using a Likert score, and the Wilcoxon signed-rank test was used to determine whether the two methods differed significantly.
Results: For the safety-net hospital cases, the generated ssCT demonstrated significantly improved geometric and dosimetric accuracy compared to dCT. ssCT reduced the mean difference in key dosimetric parameters (e.g., the Dmean difference decreased from 2.0% for dCT vs. sCT to 0.57% for ssCT vs. sCT, a significant improvement under the Wilcoxon signed-rank test) and achieved a significant reduction in the RMS difference of DVH curves (from 6.4% to 2.2%). Furthermore, physician evaluations consistently rated ssCT as significantly superior for treatment planning (mean scores improving from "Acceptable" for dCT to "Good to Perfect" for ssCT), reflecting improved confidence in target and tissue positioning. In the academic medical center cohort, where technologists already apply meticulous pre-scan alignment, ssCT still yielded statistically significant, though smaller, improvements in several dosimetric endpoints and in observer ratings.
Conclusion: Our AI-driven approach successfully generates ssCT from dCT with geometric and dosimetric accuracy comparable to sCT for spinal palliative RT planning. By specifically addressing critical discrepancies such as spine position and table curvature, this method offers a robust way to bypass dedicated sCT simulations. This advancement has the potential to significantly streamline the RT workflow, reduce treatment uncertainties, and accelerate time to treatment, offering a highly promising solution for improving access to timely and accurate radiotherapy, especially in resource-limited environments.
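The DVH metrics compared here (Dmean, D95, D99, and so on) are read off cumulative dose-volume histograms. As a small illustration of how a point metric such as D95 is interpolated from a DVH curve (the curve values below are hypothetical):

```python
def dose_at_volume(doses, volumes, v_target):
    # Linearly interpolate a cumulative DVH (volumes monotonically decreasing
    # with dose) to find the dose received by at least v_target percent of the
    # structure volume; v_target=95 yields D95.
    for (d0, v0), (d1, v1) in zip(zip(doses, volumes),
                                  zip(doses[1:], volumes[1:])):
        if v1 <= v_target <= v0:
            return d0 + (d1 - d0) * (v0 - v_target) / (v0 - v1)
    raise ValueError("v_target outside DVH range")

# Hypothetical DVH: 100% of the volume gets >= 0 Gy, 90% gets >= 10 Gy, none
# gets >= 20 Gy. D95 falls on the first segment.
d95 = dose_at_volume([0.0, 10.0, 20.0], [100.0, 90.0, 0.0], 95.0)
```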