Self-Calibrating Acoustic Sensor Networks with Per-Channel Energy Normalization

Conference paper
Vincent Lostanlen
Euronoise
Publication year: 2021

The recent surge of machine learning models for wireless sensor networks brings new opportunities for environmental acoustics. Yet, these models are prone to statistical deviations, e.g., due to unforeseen changes in recording hardware or atmospheric conditions. In a supervised learning context, mitigating such deviations is all the more difficult as the area of coverage is vast. I propose to address this problem by applying a form of adaptive gain control in the time-frequency domain, known as Per-Channel Energy Normalization (PCEN). While PCEN has recently been introduced for keyword spotting in the smart home, I show that it is also beneficial for outdoor sensing applications. Specifically, I discuss the deployment of PCEN for terrestrial bioacoustics, marine bioacoustics, and urban acoustics. Finally, I formulate three unsolved problems regarding PCEN, approached from the different perspectives of signal processing, real-time systems, and deep learning.
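
Below is a minimal sketch of PCEN as an adaptive gain control in the time-frequency domain, using librosa; the file name, mel-spectrogram settings, and PCEN parameters are illustrative placeholders, not the configuration of the deployed sensor networks.

```python
# Minimal PCEN sketch with librosa; file name and parameters are placeholders.
import numpy as np
import librosa

y, sr = librosa.load("sensor_recording.wav", sr=22050)  # hypothetical recording

# Magnitude mel spectrogram (power=1.0), the usual input to PCEN.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, hop_length=512, power=1.0)

# PCEN: a temporal integrator (automatic gain control) followed by
# per-channel dynamic range compression.
P = librosa.pcen(S * (2**31), sr=sr, hop_length=512,
                 gain=0.98, bias=2.0, power=0.5, time_constant=0.400)

# Static log compression, for comparison with the adaptive normalization.
log_S = librosa.amplitude_to_db(S, ref=np.max)
```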

wav2shape: Hearing the Shape of a Drum Machine

Conference paper
Han Han, Vincent Lostanlen
Proceedings of Forum Acusticum
Publication year: 2020

Disentangling and recovering physical attributes, such as shape and material, from a few waveform examples is a challenging inverse problem in audio signal processing, with numerous applications in musical acoustics as well as structural engineering. We propose to address this problem via a combination of time-frequency analysis and supervised machine learning. We start by synthesizing a dataset of sounds using the functional transformation method. Then, we represent each percussive sound in terms of its time-invariant scattering transform coefficients and formulate the parametric estimation of the resonator as multidimensional regression with a deep convolutional neural network. We interpolate scattering coefficients over the surface of the drum as a surrogate for potentially missing data and study the response of the neural network to interpolated samples. Lastly, we resynthesize drum sounds from scattering coefficients, therefore paving the way towards a deep generative model of drum sounds whose latent variables are physically interpretable.
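
A minimal sketch of the estimation stage under simplifying assumptions: time-averaged scattering coefficients computed with Kymatio are fed to a ridge regressor standing in for the deep convolutional network of the paper; the waveforms and physical parameters below are random placeholders.

```python
# Sketch: time-invariant scattering features -> regression of resonator parameters.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.linear_model import Ridge

T = 2**14                                   # samples per percussive sound (placeholder)
scattering = Scattering1D(J=8, shape=T, Q=8)

def scattering_features(x):
    """Time-invariant scattering: log-compress and average coefficients over time."""
    Sx = scattering(x)                      # shape: (n_paths, n_frames)
    return np.log1p(Sx).mean(axis=-1)

# X_audio: synthesized waveforms; y_params: physical parameters of the resonator.
# Both are random placeholders here.
X_audio = np.random.randn(32, T).astype(np.float32)
y_params = np.random.rand(32, 4)

X_features = np.stack([scattering_features(x) for x in X_audio])
regressor = Ridge(alpha=1.0).fit(X_features, y_params)
theta_hat = regressor.predict(X_features[:1])   # estimated shape/material parameters
```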

Playing Technique Recognition by Joint Time–Frequency Scattering

Conference paper
Changhong Wang, Vincent Lostanlen, Emmanouil Benetos, and Elaine Chew
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
Publication year: 2020

Playing techniques are important expressive elements in music signals. In this paper, we propose a recognition system based on the joint time–frequency scattering transform (jTFST) for pitch evolution-based playing techniques (PETs), a group of playing techniques with monotonic pitch changes over time. The jTFST represents spectro-temporal patterns in the time–frequency domain, capturing discriminative information of PETs. As a case study, we analyse three commonly used PETs of the Chinese bamboo flute: acciacatura, portamento, and glissando, and encode their characteristics using the jTFST. To verify the proposed approach, we create a new dataset, the CBF-petsDB, containing PETs played in isolation as well as in the context of whole pieces performed and annotated by professional players. Feeding the jTFST to a machine learning classifier, we obtain F-measures of 71% for acciacatura, 59% for portamento, and 83% for glissando detection, and provide explanatory visualisations of scattering coefficients for each technique.
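
A minimal sketch of the decision stage mentioned above: frame-wise jTFST coefficients (assumed precomputed) are fed to a support vector machine and evaluated with the F-measure; the arrays X and y are random placeholders, and the feature extraction itself is not shown.

```python
# Sketch: jTFST features -> SVM classifier -> F-measure, with placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# X: (n_frames, n_scattering_paths) jTFST coefficients; y: binary labels marking
# the presence of one playing technique (e.g., glissando). Placeholders only.
X = np.random.randn(2000, 128)
y = np.random.randint(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
classifier.fit(X_train, y_train)
print("F-measure:", f1_score(y_test, classifier.predict(X_test)))
```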

OrchideaSOL: A Dataset of Instrumental Samples with Extended Techniques

Conference paper
Carmine Emanuele Cella, Daniele Ghisi, Vincent Lostanlen, Fabien Lévy, Joshua Fineberg, Yan Maresz
Proceedings of the International Computer Music Conference (ICMC)
Publication year: 2020

This paper introduces OrchideaSOL, a free dataset of samples of extended instrumental playing techniques, designed to be used as the default dataset for the Orchidea framework for target-based computer-aided orchestration.

OrchideaSOL is a reduced and modified subset of Studio On Line, or SOL for short, a dataset developed at Ircam between 1996 and 1998. We motivate the reasons behind OrchideaSOL and describe the differences between the original SOL and our dataset. We also present the work done to improve the dynamic ranges of orchestral families and other aspects of the data.

One or Two Frequencies? The Scattering Transform Answers

Conference paper
Vincent Lostanlen, Alice Cohen-Hadria, Juan Pablo Bello
Proceedings of the European Signal Processing Conference (EUSIPCO)
Publication year: 2020

With the aim of constructing a biologically plausible model of machine listening, we study the representation of a multicomponent stationary signal by a wavelet scattering network. First, we show that renormalizing second-order nodes by their first-order parents gives a simple numerical criterion to assess whether two neighboring components will interfere psychoacoustically. Secondly, we run a manifold learning algorithm (Isomap) on scattering coefficients to visualize the similarity space underlying parametric additive synthesis. Thirdly, we generalize the “one or two components” framework to three sine waves or more and prove that the effective scattering depth of a Fourier series grows in logarithmic proportion to its bandwidth.
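
A minimal numeric illustration of the resolution argument behind the "one or two components" question: two sine waves interfere in a constant-Q filter bank when their frequency difference falls below the bandwidth of the band that contains them. The actual criterion of the paper renormalizes second-order scattering coefficients by their first-order parents; the value Q=8 below is only an example.

```python
# Sketch: are two sines resolved by a constant-Q filter bank of quality factor Q?
def interfere(f1, f2, Q=8):
    """Return True if the two components fall within the same wavelet band."""
    f_center = 0.5 * (f1 + f2)
    bandwidth = f_center / Q
    return abs(f1 - f2) < bandwidth

print(interfere(440.0, 466.2))   # a semitone apart: unresolved at Q=8 -> beats
print(interfere(440.0, 880.0))   # an octave apart: resolved -> two components
```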

Learning the Helix Topology of Musical Pitch

Conference paper
Vincent Lostanlen, Sripathi Sridhar, Brian McFee, Andrew Farnsworth, and Juan Pablo Bello
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Publication year: 2020

To explain the consonance of octaves, music psychologists represent pitch as a helix where azimuth and axial coordinate correspond to pitch class and pitch height respectively. This article addresses the problem of discovering this helical structure from unlabeled audio data. We measure Pearson correlations in the constant-Q transform (CQT) domain to build a K-nearest neighbor graph between frequency subbands. Then, we run the Isomap manifold learning algorithm to represent this graph in a three-dimensional space in which straight lines approximate graph geodesics. Experiments on isolated musical notes demonstrate that the resulting manifold resembles a helix which makes a full turn at every octave. A circular shape is also found in English speech, but not in urban noise. We discuss the impact of various design choices on the visualization: instrumentarium, loudness mapping function, and number of neighbors K.
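
A minimal sketch of this pipeline, assuming a single audio signal in place of the full isolated-note corpus: Pearson correlations between CQT subbands are converted to distances and embedded in three dimensions by Isomap, which builds the K-nearest-neighbor graph internally. All settings (bins per octave, number of bins, K) are illustrative.

```python
# Sketch: CQT subband correlations -> distance matrix -> Isomap embedding in 3-D.
import numpy as np
import librosa
from sklearn.manifold import Isomap

y, sr = librosa.load(librosa.ex("trumpet"))            # placeholder audio
C = np.abs(librosa.cqt(y, sr=sr, bins_per_octave=24, n_bins=24 * 7))

R = np.corrcoef(C)                                     # Pearson correlation between subbands
D = np.sqrt(np.clip(2.0 * (1.0 - R), 0.0, None))       # correlation distance

embedding = Isomap(n_neighbors=5, n_components=3, metric="precomputed")
coords = embedding.fit_transform(D)                    # one 3-D point per frequency subband
```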

Learning a Lie Algebra from Unlabeled Data Pairs

Conference paper
Chris Ick, Vincent Lostanlen
Proceedings of the 1st DeepMath conference, New York City, NY, USA, November 2020
Publication year: 2020

This article proposes a machine learning method to discover a nonlinear transformation which maps a collection of source vectors onto a collection of target vectors. The key idea is to learn the Lie algebra associated to the underlying one-parameter subgroup of the general linear group. This method has the advantage of not requiring any human intervention other than collecting data samples by pairs, i.e., before and after the action of the group.
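
A minimal sketch of this idea in the simplest setting: when the unknown transformation is linear, the group element can be estimated from source/target pairs by least squares, and a generator of the one-parameter subgroup is recovered as its matrix logarithm. The simulated generator, dimensions, and sample size below are placeholders.

```python
# Sketch: estimate a Lie algebra element from (source, target) vector pairs.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
d = 3

A_true = 0.1 * rng.standard_normal((d, d))   # ground-truth generator (simulation only)
X = rng.standard_normal((100, d))            # source vectors
Y = X @ expm(A_true).T                       # target vectors: y_i = expm(A_true) @ x_i

B, *_ = np.linalg.lstsq(X, Y, rcond=None)    # least squares: X @ B ~= Y
A_hat = logm(B.T)                            # estimated Lie algebra element

print(np.allclose(A_hat.real, A_true, atol=1e-6))
```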

Chirping Up the Right Tree: Incorporating Biological Taxonomies into Bioacoustic Classifiers

Conference paper
Jason Cramer, Vincent Lostanlen, Andrew Farnsworth, Justin Salamon, and Juan Pablo Bello
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Publication year: 2020

Class imbalance in the training data hinders the generalization ability of machine listening systems. In the context of bioacoustics, this issue may be circumvented by aggregating species labels into super-groups of higher taxonomic rank: genus, family, order, and so forth. However, different applications of machine listening to wildlife monitoring may require different levels of granularity. This paper introduces TaxoNet, a deep neural network for structured classification of signals from living organisms. TaxoNet is trained as a multitask and multilabel model, following a new architectural principle in end-to-end learning named “hierarchical composition”: shallow layers extract a shared representation to predict a root taxon, while deeper layers specialize recursively to lower-rank taxa. In this way, TaxoNet is capable of handling taxonomic uncertainty, out-of-vocabulary labels, and open-set deployment settings. An experimental benchmark on two new bioacoustic datasets (ANAFCC and BirdVox-14SD) leads to state-of-the-art results in bird species classification. Furthermore, on a task of coarse-grained classification, TaxoNet also outperforms a flat single-task model trained on aggregate labels.
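
A minimal PyTorch sketch of the "hierarchical composition" principle: shallow layers feed a coarse (root-taxon) classifier, and deeper layers built on the shared representation feed a finer-grained classifier, with both losses summed during training. Layer sizes and numbers of classes are placeholders, not those of TaxoNet.

```python
# Sketch of hierarchical composition: coarse head on shallow layers, fine head on deeper layers.
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    def __init__(self, n_features, n_coarse, n_fine):
        super().__init__()
        self.shallow = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.coarse_head = nn.Linear(64, n_coarse)     # e.g., taxonomic order or family
        self.deep = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.fine_head = nn.Linear(64, n_fine)         # e.g., species

    def forward(self, x):
        h = self.shallow(x)
        return self.coarse_head(h), self.fine_head(self.deep(h))

model = HierarchicalClassifier(n_features=128, n_coarse=5, n_fine=14)
x = torch.randn(8, 128)                                # batch of acoustic features (placeholder)
coarse_logits, fine_logits = model(x)

# Multitask objective: sum the losses at both taxonomic levels.
criterion = nn.CrossEntropyLoss()
loss = (criterion(coarse_logits, torch.randint(0, 5, (8,)))
        + criterion(fine_logits, torch.randint(0, 14, (8,))))
loss.backward()
```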

Arrhythmia Classification of 12-lead Electrocardiograms by Hybrid Scattering–LSTM Networks

Conference paper
Philip A. Warrick, Vincent Lostanlen, Michael Eickenberg, Joakim Andén, Masun Nabhan Homsi
Proceedings of the Computing in Cardiology (CinC) Conference
Publication year: 2020

Electrocardiogram (ECG) analysis is the standard of care for the diagnosis of irregular heartbeat patterns, known as arrhythmias. This paper presents a deep learning system for the automatic detection and multilabel classification of arrhythmias in ECG recordings. Our system composes three differentiable operators: a scattering transform (ST), a depthwise separable convolutional network (DSC), and a bidirectional long short-term memory network (BiLSTM). The originality of our approach is that all three operators are implemented in Python. This is in contrast to previous publications, which pre-computed ST coefficients in MATLAB. The implementation of ST in Python was made possible by a new software library for the scattering transform named Kymatio. This paper presents the first successful application of Kymatio to the analysis of biomedical signals. As part of the PhysioNet/Computing in Cardiology Challenge 2020, we trained our hybrid Scattering–LSTM model to classify 27 cardiac arrhythmias from two databases of 12-lead ECGs: CPSC2018 and PTB-XL, comprising 32k recordings in total. Our team “BitScattered” achieved a Challenge metric of 0.536±0.012 over ten folds of cross-validation.
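
A minimal PyTorch sketch of the hybrid front end described above: a Kymatio scattering transform followed by a bidirectional LSTM over the scattering time frames. The depthwise separable convolution stage and the multilabel output layer are omitted, the input is a single channel rather than 12 leads, and all sizes are placeholders.

```python
# Sketch: Kymatio Scattering1D -> BiLSTM over scattering frames.
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

T = 2**13                                    # samples per ECG segment (placeholder)
scattering = Scattering1D(J=6, shape=T, Q=1)

x = torch.randn(4, T)                        # batch of single-channel signals (placeholder)
Sx = scattering(x)                           # (batch, n_paths, n_frames)
Sx = torch.log1p(Sx).permute(0, 2, 1)        # (batch, n_frames, n_paths)

bilstm = nn.LSTM(input_size=Sx.shape[-1], hidden_size=64,
                 batch_first=True, bidirectional=True)
output, _ = bilstm(Sx)                       # (batch, n_frames, 128), ready for a classifier head
```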

The Shape of RemiXXXes to Come: Audio Texture Synthesis with Time–frequency Scattering

Conference paper
Vincent Lostanlen, Florian Hecker
Proceedings of the International Conference on Digital Audio Effects (DAFx), 2019
Publication year: 2019

This article explains how to apply time–frequency scattering, a convolutional operator extracting modulations in the time–frequency domain at different rates and scales, to the re-synthesis and manipulation of audio textures.
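
A minimal sketch of scattering-based resynthesis: starting from noise, a signal is optimized by gradient descent so that its scattering coefficients match those of a target texture. The paper relies on time–frequency scattering; plain Scattering1D from Kymatio is used below for brevity, and the target, transform settings, and optimizer settings are placeholders.

```python
# Sketch: gradient-descent resynthesis of a texture from its scattering coefficients.
import torch
from kymatio.torch import Scattering1D

T = 2**14
scattering = Scattering1D(J=10, shape=T, Q=12)

target = torch.randn(T)                      # placeholder target texture
with torch.no_grad():
    S_target = scattering(target)

x = torch.randn(T, requires_grad=True)       # initialization: white noise
optimizer = torch.optim.Adam([x], lr=0.1)
for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(scattering(x), S_target)
    loss.backward()
    optimizer.step()
```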

Adaptive Time–Frequency Scattering for Periodic Modulation Recognition in Music Signals

Conference paper
Changhong Wang, Vincent Lostanlen, Emmanouil Benetos, Elaine Chew
Proceedings of the International Society on Music Information Retrieval (ISMIR) Conference
Publication year: 2019

Vibratos, tremolos, trills, and flutter-tongue are techniques frequently found in vocal and instrumental music. A common feature of these techniques is the periodic modulation in the time–frequency domain. We propose a representation based on time–frequency scattering to model the interclass variability for fine discrimination of these periodic modulations. Time–frequency scattering is an instance of the scattering transform, an approach for building invariant, stable, and informative signal representations. The proposed representation is calculated around the wavelet subband of maximal acoustic energy, rather than over all the wavelet bands. To demonstrate the feasibility of this approach, we build a system that computes the representation as input to a machine learning classifier. Whereas previously published datasets for playing technique analysis focus primarily on techniques recorded in isolation, for ecological validity, we create a new dataset to evaluate the system. The dataset, named CBF-periDB, contains full-length expert performances on the Chinese bamboo flute that have been thoroughly annotated by the players themselves. We report F-measures of 99% for flutter-tongue, 82% for trill, 69% for vibrato, and 51% for tremolo detection, and provide explanatory visualisations of scattering coefficients for each of these techniques.
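
A minimal sketch of the adaptive aspect described above: rather than analysing all wavelet subbands, select the constant-Q subband of maximal energy and measure the modulation of its amplitude envelope, where periodic techniques such as vibrato or flutter-tongue appear as spectral peaks at their modulation rate. A plain Fourier transform stands in for the second wavelet decomposition of time–frequency scattering, and the audio and settings are placeholders.

```python
# Sketch: modulation analysis around the subband of maximal acoustic energy.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))               # placeholder audio
C = np.abs(librosa.cqt(y, sr=sr, bins_per_octave=12, n_bins=84))

k_max = np.argmax(C.sum(axis=1))                           # subband of maximal energy
envelope = C[k_max]                                        # amplitude envelope of that subband

frame_rate = sr / 512                                      # default hop_length of librosa.cqt
mod_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
mod_freqs = np.fft.rfftfreq(len(envelope), d=1.0 / frame_rate)
print("Dominant modulation rate (Hz):", mod_freqs[np.argmax(mod_spectrum)])
```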

BirdVox-full-night: A Dataset and Benchmark for Avian Flight Call Detection

Conference paper
Vincent Lostanlen, Justin Salamon, Andrew Farnsworth, Steve Kelling, Juan Pablo Bello
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
Publication year: 2018

This article addresses the automatic detection of vocal, nocturnally migrating birds from a network of acoustic sensors. Thus far, owing to the lack of annotated continuous recordings, existing methods have been benchmarked in a binary classification setting (presence vs. absence). Instead, with the aim of comparing them in event detection, we release BirdVox-full-night, a dataset of 62 hours of audio comprising 35,402 flight calls of nocturnally migrating birds, as recorded from 6 sensors. We find a large performance gap between energy-based detection functions and data-driven machine listening. The best model is a deep convolutional neural network trained with data augmentation. We correlate recall with the density of flight calls over time and frequency and identify the main causes of false alarm.
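
A minimal sketch of an energy-based detection function of the kind used as a baseline above: band-limited spectral energy over time, followed by peak picking. The file name, band edges, and threshold are illustrative, not those of the paper.

```python
# Sketch: band-limited energy detection function with peak picking.
import numpy as np
import scipy.signal
import librosa

y, sr = librosa.load("sensor_night.wav", sr=22050)         # hypothetical recording
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
freqs = librosa.fft_frequencies(sr=sr, n_fft=1024)

band = (freqs >= 2000) & (freqs <= 10000)                   # rough flight-call band
detection_function = S[band].sum(axis=0)

peaks, _ = scipy.signal.find_peaks(detection_function,
                                   height=3 * np.median(detection_function))
event_times = librosa.frames_to_time(peaks, sr=sr, hop_length=512)
```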

Deep Convolutional Networks on the Pitch Spiral for Musical Instrument Recognition

Conference paper
Vincent Lostanlen, Carmine-Emanuele Cella
Proceedings of the International Society on Music Information Retrieval (ISMIR) Conference
Publication year: 2016

Musical performance combines a wide range of pitches, nuances, and expressive techniques. Audio-based classification of musical instruments thus requires building signal representations that are invariant to such transformations. This article investigates the construction of learned convolutional architectures for instrument recognition, given a limited amount of annotated training data. In this context, we benchmark three different weight sharing strategies for deep convolutional networks in the time-frequency domain: temporal kernels; time-frequency kernels; and a linear combination of time-frequency kernels which are one octave apart, akin to a Shepard pitch spiral. We provide an acoustical interpretation of these strategies within the source-filter framework of quasi-harmonic sounds with a fixed spectral envelope, which are archetypal of musical notes. The best classification accuracy is obtained by hybridizing all three convolutional layers into a single deep learning architecture.
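
A minimal PyTorch sketch of the three weight-sharing strategies compared above, on a constant-Q input of shape (bins_per_octave × n_octaves, time): temporal kernels, time-frequency kernels, and spiral kernels that linearly combine responses one octave apart by folding the octave index into the channel dimension. Kernel sizes and channel counts are placeholders.

```python
# Sketch: temporal, time-frequency, and pitch-spiral convolutions on a CQT-like input.
import torch
import torch.nn as nn

bins_per_octave, n_octaves, n_frames = 12, 8, 128
cqt = torch.randn(1, 1, bins_per_octave * n_octaves, n_frames)        # placeholder CQT

conv_time = nn.Conv2d(1, 16, kernel_size=(1, 9), padding=(0, 4))      # (a) temporal kernels
conv_tf = nn.Conv2d(1, 16, kernel_size=(5, 9), padding=(2, 4))        # (b) time-frequency kernels

# (c) spiral kernels: fold octaves into channels, so the convolution combines
# frequency bins that are exactly one octave apart.
spiral_input = cqt.reshape(1, n_octaves, bins_per_octave, n_frames)
conv_spiral = nn.Conv2d(n_octaves, 16, kernel_size=(5, 9), padding=(2, 4))

y_time = conv_time(cqt)
y_tf = conv_tf(cqt)
y_spiral = conv_spiral(spiral_input)
```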

Wavelet Scattering on the Pitch Spiral

Conference paper
Vincent Lostanlen, Stéphane Mallat
Proceedings of the International Conference on Digital Audio Effects (DAFx)
Publication year: 2015

We present a new representation of harmonic sounds that linearizes the dynamics of pitch and spectral envelope, while remaining stable to deformations in the time–frequency plane. It is an instance of the scattering transform, a generic operator which cascades wavelet convolutions and modulus nonlinearities. It is derived from the pitch spiral, in that convolutions are successively performed in time, log-frequency, and octave index. We give a closed-form approximation of spiral scattering coefficients for a nonstationary generalization of the harmonic source–filter model.