
From Speech to Sonography: Spectral Networks for Ultrasound Microstructure Classification.

November 27, 2025

Authors

Tehrani AKZ, Tang A, Ravanelli M, Cloutier G, Rafati I, Nguyen BN, Trinh QH, Rosado-Mendez I, Rivaz H

Abstract

The frequency dependence of backscattered radiofrequency (RF) signals produced by ultrasound scanners carries rich information about tissue microstructure (e.g., scatterer size and attenuation). This information can be used to classify tissues based on microstructural changes associated with disease onset and progression. Conventional convolutional neural networks (CNNs) can learn this information directly from RF data, but they often struggle to achieve adequate frequency selectivity, which increases model complexity and convergence time and limits generalization. To overcome these challenges, SincNet, originally developed for speech processing, was adapted to classify RF data based on differences in frequency properties. Rather than learning every filter coefficient, SincNet learns only each filter's low cutoff frequency and bandwidth, dramatically reducing the number of parameters and improving frequency resolution. For model interpretability, a Gradient-Weighted Filter Contribution is introduced, which highlights the importance of spectral bands. The approach was validated on three datasets: simulated data with different scatterer sizes, experimental phantom data, and in vivo data from rats fed a methionine- and choline-deficient diet to induce liver steatosis, inflammation, and fibrosis. The modified SincNet consistently achieved the best results in material/tissue classification.
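The core idea the abstract describes — parameterizing each band-pass filter by just a low cutoff and a bandwidth rather than learning every tap — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the standard SincNet-style construction of a windowed-sinc band-pass kernel (difference of two ideal low-pass filters), with frequencies normalized so that the Nyquist frequency is 0.5. The function name and filter-bank frequencies below are illustrative assumptions.

```python
import numpy as np

def sinc_bandpass_filter(f_low, bandwidth, kernel_size=129):
    """Band-pass FIR kernel defined by only two parameters, in the style
    of SincNet: the low cutoff frequency and the bandwidth (both as
    fractions of the sampling rate, so the Nyquist frequency is 0.5).
    """
    f_high = f_low + bandwidth
    # Time axis centred on zero; an odd kernel_size keeps the kernel symmetric.
    t = np.arange(kernel_size) - (kernel_size - 1) / 2
    # Ideal low-pass impulse response 2f * sinc(2ft); np.sinc(x) = sin(pi x)/(pi x).
    lowpass = lambda f: 2 * f * np.sinc(2 * f * t)
    # A band-pass filter is the difference of two low-pass filters.
    h = lowpass(f_high) - lowpass(f_low)
    # Hamming window tapers the truncated sinc to reduce spectral leakage.
    return h * np.hamming(kernel_size)

# Hypothetical three-filter bank: 2 learnable values per filter instead of
# kernel_size coefficients, which is the parameter saving the abstract cites.
bank = np.stack([sinc_bandpass_filter(f, 0.1) for f in (0.05, 0.20, 0.35)])
```

In a trainable layer, `f_low` and `bandwidth` would be the only learnable parameters per filter, with the kernel rebuilt from them at each forward pass before a standard 1-D convolution over the RF signal.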

Topics

Journal Article
