Page 5 of 1751742 results

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed | Jul 1 2025
Amide Proton Transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its wide clinical application is largely limited by its long acquisition time. One way to reduce scanning time is to acquire fewer frequency offset images. However, sparse frequency offset images are inadequate to fit the z-spectrum, the curve essential to quantifying the APT effect, which might compromise its quantification. In this study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into our model to evaluate the impact of each frequency offset on the reconstruction process; the weights learned for the offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm were the most significant. Experimental results demonstrate that our model effectively reconstructs dense frequency offsets (n = 29, from -7 to 7 ppm at 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening APT acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
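As a rough illustration of the sampling scheme and the headline metric, the sketch below (plain Python, not the authors' code) builds the dense 29-point offset grid described above and computes a peak signal-to-noise ratio for 1-D signals; the `peak=1.0` normalization is an assumption for the example:

```python
import math

def dense_offsets(lo=-7.0, hi=7.0, step=0.5):
    # The paper's dense grid: 29 offsets from -7 to 7 ppm at 0.5 ppm spacing.
    n = int(round((hi - lo) / step)) + 1
    return [round(lo + i * step, 2) for i in range(n)]

def psnr(ref, rec, peak=1.0):
    # Peak signal-to-noise ratio between a reference and a reconstruction,
    # here over flat lists of intensities normalized to [0, peak].
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

grid = dense_offsets()  # the 21 acquired offsets would be a subset of this grid
```

A reconstruction network maps the sparse subset of this grid back to all 29 points; PSNR and SSIM are then computed against the fully sampled acquisition.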

Breast tumour classification in DCE-MRI via cross-attention and discriminant correlation analysis enhanced feature fusion.

Pan F, Wu B, Jian X, Li C, Liu D, Zhang N

PubMed | Jul 1 2025
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has proven highly sensitive in diagnosing breast tumours, owing to the kinetic and volumetric features inherent in it. To utilise this kinetics-related and volume-related information, this paper aims to develop and validate a classification method for differentiating benign and malignant breast tumours based on DCE-MRI, through fusing deep features and cross-attention-encoded radiomics features using discriminant correlation analysis (DCA). Classification experiments were conducted on a dataset comprising 261 individuals who underwent DCE-MRI, including those with multiple tumours, yielding 137 benign and 163 malignant tumours. To strengthen the correlation between features and reduce feature redundancy, a novel fusion method that fuses deep features and encoded radiomics features based on DCA (eFF-DCA) is proposed. The eFF-DCA comprises three components: (1) a feature extraction module to capture kinetic information across phases, (2) a radiomics feature encoding module employing a cross-attention mechanism to enhance inter-phase feature correlation, and (3) a DCA-based fusion module that transforms features to maximise intra-class correlation while minimising inter-class redundancy, facilitating effective classification. The proposed eFF-DCA achieved an accuracy of 90.9% and an area under the receiver operating characteristic curve of 0.942, outperforming methods using single-modal features. The eFF-DCA exploits DCE-MRI kinetic-related and volume-related features to improve breast tumour diagnosis accuracy, but its non-end-to-end design limits multimodal fusion. Future research should explore unified end-to-end deep learning architectures that enable seamless multimodal feature fusion and joint optimisation of feature extraction and classification.
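The cross-attention mechanism in component (2) follows the standard scaled dot-product form, in which features from one DCE-MRI phase attend over features from another. A minimal plain-Python sketch of that generic mechanism (not the paper's exact module; vector sizes are illustrative):

```python
import math

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention over plain lists of vectors:
    # each query attends over all keys; the output is a softmax-weighted
    # sum of the value vectors. A generic sketch of the mechanism only.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                      # subtract max for numerical stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]               # softmax weights, summing to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

In the paper's setting, queries would come from one phase's radiomics features and keys/values from another, so that inter-phase correlations are encoded before DCA fusion.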

A Minimal Annotation Pipeline for Deep Learning Segmentation of Skeletal Muscles.

Baudin PY, Balsiger F, Beck L, Boisserie JM, Jouan S, Marty B, Reyngoudt H, Scheidegger O

PubMed | Jul 1 2025
Translating quantitative skeletal muscle MRI biomarkers into the clinic requires efficient automatic segmentation methods. The purpose of this work is to investigate a simple yet effective iterative methodology for building a high-quality automatic segmentation model while minimizing the manual annotation effort. We used a retrospective database of quantitative MRI examinations (n = 70) of healthy and pathological thighs to train a nnU-Net segmentation model. The cohort comprised healthy volunteers and patients with various neuromuscular diseases (NMDs), broadly categorized as dystrophic, inflammatory, neurogenic, and unlabeled. We designed an iterative procedure, progressively adding cases to the training set and using a simple visual five-level rating scale to judge the validity of generated segmentations for clinical use. On an independent test set (n = 20), we assessed segmentation quality in 13 individual thigh muscles using standard segmentation metrics, the Dice coefficient (DICE) and 95% Hausdorff distance (HD95), and quantitative biomarkers: cross-sectional area (CSA), fat fraction (FF), and water-T1/T2. We obtained high-quality segmentations (DICE = 0.88 ± 0.15/0.86 ± 0.14, HD95 = 6.35 ± 12.33/6.74 ± 11.57 mm), comparable to recent works despite a smaller training set (n = 30). Inter-rater agreement on the five-level scale was fair to moderate but improved progressively along the iterations. We observed limited differences from manually delineated segmentations on the quantitative outcomes (MAD: CSA = 65.2 mm<sup>2</sup>, FF = 1%, water-T1 = 8.4 ms, water-T2 = 0.35 ms), with variability comparable to manual delineations.
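For reference, the Dice coefficient reported above measures the overlap between two binary masks; a minimal sketch over voxel-index sets (an illustration, not the study's evaluation code):

```python
def dice(a, b):
    # Dice overlap between two binary masks represented as sets of voxel
    # indices: twice the intersection over the sum of the mask sizes.
    # Two empty masks are treated as a perfect match by convention.
    if not (a or b):
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A DICE of 0.88 thus means the automatic and manual masks share roughly 88% of their combined voxels, which for thigh muscles is generally adequate for extracting FF and water-T1/T2 biomarkers.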

The implementation of artificial intelligence in serial monitoring of post gamma knife vestibular schwannomas: A pilot study.

Singh M, Jester N, Lorr S, Briano A, Schwartz N, Mahajan A, Chiang V, Tommasini SM, Wiznia DH, Buono FD

PubMed | Jul 1 2025
Vestibular schwannomas (VS) are benign tumors that can lead to hearing loss, balance issues, and tinnitus. Gamma Knife radiosurgery (GKS) is a common treatment for VS, aimed at halting tumor growth and preserving neurological function. Accurate monitoring of VS volume before and after GKS is essential for assessing treatment efficacy. Our objective was to evaluate the accuracy of an artificial intelligence (AI) algorithm, originally developed to identify NF2-SWN-related VS, in segmenting non-NF2-SWN-related VS and detecting volume changes pre- and post-GKS. We hypothesized that this AI algorithm, trained on NF2-SWN-related VS data, would transfer accurately to non-NF2-SWN VS and to VS treated with GKS. In this retrospective cohort study, we reviewed an established Gamma Knife database and identified 16 patients who underwent GKS for VS and had pre- and post-GKS scans. Contrast-enhanced T1-weighted MRI scans were analyzed with both manual segmentation and the AI algorithm. DICE similarity coefficients were computed to compare AI and manual segmentations, and a paired t-test was used to assess statistical significance. Volume changes between pre- and post-GKS scans were calculated for both segmentation methods. The mean DICE score between AI and manual segmentations was 0.91 (range 0.79-0.97); pre- and post-GKS DICE scores were 0.91 (range 0.79-0.97) and 0.92 (range 0.81-0.97), respectively, indicating high spatial overlap. AI-segmented VS volumes pre- and post-GKS were consistent with manual measurements, and the pre- and post-GKS volume percentage changes were similar between the two methods, indicating that our AI algorithm can accurately detect changes in tumor growth. The AI algorithm processed scans within 5 min, suggesting it offers a reliable, efficient alternative for clinical monitoring.
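The volume change compared between segmentation methods is an ordinary percentage change between the two time points; a trivial sketch (the mm³ unit is an assumption for the example):

```python
def volume_change_pct(pre_mm3, post_mm3):
    # Percentage change in tumor volume from the pre-GKS scan to the
    # post-GKS scan; negative values indicate tumor shrinkage.
    return 100.0 * (post_mm3 - pre_mm3) / pre_mm3
```

Serial monitoring then reduces to comparing this percentage, computed from AI-derived volumes, against the same quantity from manual delineations.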

CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans.

Scardigno RM, Brunetti A, Marvulli PM, Carli R, Dotoli M, Bevilacqua V, Buongiorno D

PubMed | Jul 1 2025
High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, SSIM of 0.877, and Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (r=-0.797 with PSNR, p<0.01; r=-0.767 with MS-SSIM, p<0.01). This work advances DL-based artifact reduction into clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.
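The correlation used above to validate FID against pixel-based metrics on clinical images is an ordinary Pearson r; a minimal plain-Python sketch (illustrative data only, not the study's values):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two paired samples,
    # e.g. per-image FID-style scores against per-image PSNR values.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A strongly negative r (as in r = -0.797 with PSNR) is what one expects if lower FID indeed tracks higher reconstruction fidelity.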

Prediction of PD-L1 expression in NSCLC patients using PET/CT radiomics and prognostic modelling for immunotherapy in PD-L1-positive NSCLC patients.

Peng M, Wang M, Yang X, Wang Y, Xie L, An W, Ge F, Yang C, Wang K

PubMed | Jul 1 2025
To develop a positron emission tomography/computed tomography (PET/CT)-based radiomics model for predicting programmed cell death ligand 1 (PD-L1) expression in non-small cell lung cancer (NSCLC) patients and estimating progression-free survival (PFS) and overall survival (OS) in PD-L1-positive patients undergoing first-line immunotherapy. We retrospectively analysed 143 NSCLC patients who underwent pretreatment <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) PET/CT scans, of whom 86 were PD-L1-positive. Clinical data collected included gender, age, smoking history, Tumor-Node-Metastasis (TNM) stage, pathologic type, laboratory parameters, and PET metabolic parameters. Four machine learning algorithms, Bayes, logistic regression, random forest, and support vector machine (SVM), were used to build models. Predictive performance was validated using receiver operating characteristic (ROC) curves. Univariate and multivariate Cox analyses identified independent predictors of OS and PFS in PD-L1-positive patients undergoing immunotherapy, and a nomogram was created to predict OS. A total of 20 models were built for predicting PD-L1 expression. The combined clinical and PET/CT radiomics model based on the SVM algorithm performed best (area under the curve for the training and test sets: 0.914 and 0.877, respectively). The Cox analyses showed that smoking history independently predicted PFS. SUVmean, monocyte percentage, and white blood cell count were independent predictors of OS, and a nomogram was created to predict 1-year, 2-year, and 3-year OS based on these three factors. We developed PET/CT-based machine learning models to help predict PD-L1 expression in NSCLC patients and identified independent predictors of PFS and OS in PD-L1-positive patients receiving immunotherapy, thereby aiding precision treatment.
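The area under the ROC curve used to rank the 20 models equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney form, with ties counted as half); a minimal sketch, not the authors' pipeline:

```python
def auc(scores_pos, scores_neg):
    # Rank-based AUC: fraction of (positive, negative) pairs where the
    # positive case receives the higher model score; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.877 on the test set means roughly 88% of PD-L1-positive/negative pairs are ranked correctly by the SVM-based model.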

Novel artificial intelligence approach in neurointerventional practice: Preliminary findings on filter movement and ischemic lesions in carotid artery stenting.

Sagawa H, Sakakura Y, Hanazawa R, Takahashi S, Wakabayashi H, Fujii S, Fujita K, Hirai S, Hirakawa A, Kono K, Sumita K

PubMed | Jul 1 2025
Embolic protection devices (EPDs) used during carotid artery stenting (CAS) are crucial for reducing ischemic complications. Although minimizing the movement of filter-type EPDs is considered important, limited research has demonstrated the value of this practice. We used artificial intelligence (AI)-based device recognition technology to investigate the correlation between filter movement and ischemic complications. We retrospectively studied 28 consecutive patients who underwent CAS using FilterWire EZ (Boston Scientific, Marlborough, MA, USA) from April 2022 to September 2023. Clinical data, procedural videos, and postoperative magnetic resonance imaging were collected. An AI-based device detection function in the Neuro-Vascular Assist (iMed Technologies, Tokyo, Japan) was used to quantify filter movement. Multivariate proportional odds model analysis was performed to explore correlations between postoperative diffusion-weighted imaging (DWI) hyperintense lesions and potential ischemic risk factors, including filter movement. In total, 23 patients had sufficient information and were eligible for quantitative analysis. Fourteen patients (60.9%) showed postoperative DWI hyperintense lesions. Multivariate analysis revealed significant associations of filter movement distance (odds ratio, 1.01; 95% confidence interval, 1.00-1.02; p = 0.003) and of high-intensity signals on time-of-flight magnetic resonance angiography with DWI hyperintense lesions. Age, symptomatic status, and operative time were not significantly correlated. Increased filter movement during CAS was correlated with a higher incidence of postoperative DWI hyperintense lesions. AI-based quantitative evaluation of endovascular techniques may enable demonstration of previously unproven recommendations. To the best of our knowledge, this is the first study to use an AI system for quantitative evaluation to address real-world clinical issues.
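An odds ratio of 1.01 is per unit of filter movement, so in a proportional odds model the effect compounds multiplicatively with distance; a hypothetical illustration (the unit of movement and the distances chosen are assumptions, not stated in the abstract):

```python
def cumulative_odds_ratio(or_per_unit, units):
    # In a (proportional) odds model, per-unit odds ratios multiply across
    # the number of units moved: total OR = (per-unit OR) ** units.
    return or_per_unit ** units

# e.g. with OR = 1.01 per unit, ~70 units of movement roughly doubles the odds
doubled = cumulative_odds_ratio(1.01, 70)
```

This is why a seemingly small per-unit odds ratio can still be clinically meaningful over the full range of observed filter excursions.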

Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

PubMed | Jul 1 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. The objective was to evaluate the current state of AI adoption in US healthcare systems and to assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health systems that are members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment; the degree of reported success for AI use cases; and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case for which 100% of respondents reported adoption activities, and 53% reported a high degree of success with using AI for clinical documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations have deployed AI for clinical risk stratification such as early sepsis detection, but only 38% report high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%).
Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

A multi-task neural network for full waveform ultrasonic bone imaging.

Li P, Liu T, Ma H, Li D, Liu C, Ta D

PubMed | Jul 1 2025
Ultrasound imaging of bone is a challenging task, as bone tissue has a complex structure with high acoustic impedance and speed of sound (SOS). Recently, full waveform inversion (FWI) has shown promise for imaging musculoskeletal tissues. However, FWI has limited ability and tends to produce artifacts in bone imaging, because the inversion process is easily trapped in local minima when there is a large discrepancy in SOS distribution between bony and soft tissues. In addition, FWI carries a high computational burden and requires relatively many iterations. The objective of this study was to achieve high-resolution ultrasonic imaging of bone using a deep learning-based FWI approach. In this paper, we propose a novel network named CEDD-Unet. The CEDD-Unet adopts a dual-decoder architecture: the first decoder reconstructs the SOS model, and the second decoder finds the main boundaries between bony and soft tissues. To effectively capture multi-scale spatial-temporal features from ultrasound radio frequency (RF) signals, we integrated a Convolutional LSTM (ConvLSTM) module. Additionally, an Efficient Multi-scale Attention (EMA) module was incorporated into the encoder to enhance feature representation and improve reconstruction accuracy. Using an ultrasonic imaging setup with a ring-array transducer, the performance of CEDD-Unet was tested on SOS model datasets from human bones (Dataset1) and mouse bones (Dataset2), and compared with three classic reconstruction architectures (Unet, Unet++, and Att-Unet) and four state-of-the-art architectures (InversionNet, DD-Net, UPFWI, and DEFE-Unet). Experiments showed that CEDD-Unet outperforms all competing methods, achieving the lowest MAE (23.30 on Dataset1, 25.29 on Dataset2), the highest SSIM (0.9702 on Dataset1, 0.9550 on Dataset2), and the highest PSNR (30.60 dB on Dataset1, 32.87 dB on Dataset2).
Our method demonstrated superior reconstruction quality, with clearer bone boundaries, reduced artifacts, and improved consistency with the ground truth. Moreover, CEDD-Unet surpasses traditional FWI by producing sharper skeletal SOS reconstructions, reducing computational cost, and eliminating the reliance on an initial model. Ablation studies further confirm the effectiveness of each network component. The results suggest that CEDD-Unet is a promising deep learning-based FWI method for high-resolution bone imaging, with the potential to reconstruct accurate and sharp-edged skeletal SOS models.
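The MAE and SSIM figures quoted above can be sketched with simple definitions; note that the standard SSIM is computed over local windows with Gaussian weighting, so the global single-window variant below is only an illustration, not the study's evaluation code:

```python
def mae(ref, rec):
    # Mean absolute error between a reference SOS map and a reconstruction,
    # both flattened to plain lists of values.
    return sum(abs(a - b) for a, b in zip(ref, rec)) / len(ref)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-window (global) SSIM over flattened signals. Real SSIM averages
    # this statistic over local sliding windows; c1 and c2 are the usual
    # small stabilizing constants (values here are illustrative).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical inputs yield an SSIM of 1 and an MAE of 0, which is why the reported SSIM near 0.97 indicates reconstructions structurally very close to the ground-truth SOS models.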
