
16th International Conference
on Digital Audio Effects
DAFx-13
September 2-5, 2013
Maynooth, Ireland

DAFx-13 Program

DAFx-13 at a glance:

Keynote Speakers:

We are very happy to introduce our three keynote speakers for DAFx-13, all leaders in their respective fields:

(Tues 3) John ffitch - Parallel Computing and Audio Processing

Abstract:

With the change in hardware development from faster clock speeds to more cores, it is imperative that the audio computing community determine how best to make use of this change. Drawing on a range of experiments by myself and my collaborators, and by others, this talk explores a number of ways this could be achieved. In particular, it will consider thread-level parallelism, as in the Csound multi-core system, and a range of GPU programs for spectral transformations and filtering.
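The thread-level parallelism discussed here can be illustrated with a small sketch (a toy illustration of the general idea, not Csound's actual multi-core scheduler; all names below are mine): channels with no data dependencies are handed to a pool of worker threads.

```python
# Illustrative sketch of thread-level parallelism in audio processing:
# independent channels are processed concurrently by worker threads.
from concurrent.futures import ThreadPoolExecutor

def apply_gain(block, gain):
    """Per-channel DSP stand-in: scale every sample in one channel's block."""
    return [s * gain for s in block]

def process_parallel(channels, gain, workers=4):
    # Channels have no data dependencies on each other, so their blocks
    # can be dispatched to the pool and collected back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda ch: apply_gain(ch, gain), channels))
```

In CPython the global interpreter lock limits true parallelism for pure-Python arithmetic; production audio engines do this with native threads, as in the Csound system the talk describes.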

Bio:

Professor John ffitch, Adjunct Professor in the Department of Music at NUI Maynooth, is a computer scientist, mathematician and composer. Born in Barnsley, Yorkshire, England in 1945, he was educated at Cambridge, where he obtained his undergraduate degree and completed his PhD. His research spans relativity, planetary astronomy, computer algebra and the Lisp language. He was the recipient of the 1975 Adams Prize for Mathematics. He is a former Chair of Software Engineering at Bath, having retired in 2011.


(Weds 4) Dave Smith - Synthesizers from Analog to Digital, from Hardware to Software, and back to Analog

Abstract:

Analog and digital, hardware and software: the progression of synthesizer technology over the last 40 years has seen interesting developments, as product designs have both changed drastically and come full circle, implementing exact replicas of earlier instruments. Starting with all-analog subtractive monophonic instruments in the early 1970s, the first big jump came with the Prophet-5 in 1978, the first musical instrument with an embedded microprocessor. Other digitally controlled analog synths soon appeared, and the MIDI standard was developed to allow digital communication between these instruments. The first digital instruments were unveiled in 1983 and changed the environment significantly, with lower costs, higher voice counts, and a different sound. In the mid-1990s, the first software synths were developed, as general-purpose computers became fast enough to handle the digital signal processing natively. At the same time, the first digital emulations of analog subtractive synthesizers became available. In the last 10 years, instruments with real analog circuitry have reappeared, completing the loop. This talk will review this progression of synthesizer technology through personal reflections, covering design decisions, musician preferences, impact on musical styles, and the challenges of musical instrument design with such a large range of technologies available.

Bio:

Dave Smith founded Sequential Circuits, the premier manufacturer of professional music synthesizers, in the mid-70s. In 1977, he designed the Prophet-5, the world's first microprocessor-based musical instrument. This revolutionary product was the world's first polyphonic and programmable synth, and set the standard for all synth designs that have followed. The Prophet instruments played a major part in the recordings of all popular music styles, and are still prized by musicians today.


(Thurs 5) Vesa Välimäki - Virtual Analog Modelling

Abstract:

This keynote talk focuses on signal processing techniques for modeling analog audio systems used in music technology. Many analog music systems produce a distinctive and desirable sound, but the original devices may be expensive or hard to access and maintain. Examples include classic synthesizer modules and vintage guitar amplifiers. It is therefore of interest to give such systems a new life as software simulations, which will be accessible to many. Virtual analog modeling approaches can be divided into three categories: 1. reduction of artifacts in digital signal processing, 2. introducing analog ‘feel’ to digital signal processing, and 3. emulation of specific analog equipment. Examples of the first category include the replacement of discrete unit delays with smoothly varying interpolated delays, and the reduction of aliasing occurring in oscillators and nonlinearities. Analog ‘feel’ comes from the simulation of typical characteristics or limitations of analog systems, such as limited bandwidth, distortion, parameter drift, and added noise. Emulation is the most demanding task, because it refers to the detailed imitation of the response of a particular device, whose behavior is often nonlinear. Emulation can be based on physical modeling of an analog circuit, or on the black-box method, which models the system based on observing its input and output relations. An overview of recent research in the area of virtual analog modeling will be presented. Topics include antialiasing oscillator algorithms, virtual analog synthesizer filters, modeling of guitar pickups, spring reverberation units, ring modulators, carbon microphones, and audio antiquing. Virtual analog research can also open new opportunities beyond software versions of old technology. This talk will mention some examples of such possibilities.
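One widely used family of antialiasing oscillator algorithms of the kind surveyed in this talk is PolyBLEP, which smooths the sawtooth's discontinuity with a short polynomial segment. The sketch below is a minimal textbook version (function and parameter names are mine, not from any specific implementation):

```python
def poly_blep(t, dt):
    """Two-point polynomial correction applied around each phase wrap."""
    if t < dt:                       # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    elif t > 1.0 - dt:               # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def polyblep_saw(freq, sr, n):
    """Sawtooth in roughly [-1, 1] with the step smoothed by PolyBLEP."""
    dt = freq / sr                   # phase increment per sample
    phase, out = 0.0, []
    for _ in range(n):
        naive = 2.0 * phase - 1.0    # naive (aliasing) sawtooth
        out.append(naive - poly_blep(phase, dt))
        phase = (phase + dt) % 1.0
    return out
```

The correction cancels most of the aliased energy produced by the hard step while leaving the rest of the waveform untouched, which is why the approach is cheap enough for real-time synthesis.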

Bio:

Vesa Välimäki received the degrees of Master of Science in Technology (1992), Licentiate of Science in Technology (1994), and Doctor of Science in Technology (1995), all in electrical engineering, from the Helsinki University of Technology (TKK), Espoo, Finland. His doctoral thesis dealt with fractional delay filters and discrete-time simulation of acoustic tubes for sound synthesis. Prof. Välimäki is a Fellow of the Audio Engineering Society, a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE Signal Processing Society), and a Life Member of the Acoustical Society of Finland. In 2003, he was appointed Docent in signal processing at the Pori School of Technology and Economics, Tampere University of Technology. During the academic year 2008-2009 he was on leave as a Senior Researcher under a grant from the Academy of Finland and spent part of the year as a Visiting Scholar at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, CA. His research interests are in digital processing of audio and musical signals. He is the author or co-author of more than 300 scientific or technical papers and is responsible for four patents.


Workshop/Tutorial Sessions:

The main DAFx program is supported by invited workshop/tutorial sessions on key topics of interest to the DAFx audience, held on the Monday directly before the three days of oral presentations and poster sessions. These workshops are open to all DAFx-13 delegates:

[Mon 2 - #1] Head-Related Transfer Function Measurement - Prof. Tianshu Qu, Peking University

Abstract:

This workshop considers the measurement and analysis of head-related transfer functions (HRTFs). HRTFs describe the sound transmission characteristics from a sound source to the eardrum in a free field. They are widely applied in room acoustics simulation, three-dimensional (3D) sound visualization, and sound localization in virtual reality technologies. Most HRTF measurements have been conducted under distal-region conditions, where the sound source is at least 1 m from the listener. However, in the proximal region (sound source within 1 m of the listener), the situation is quite different, because the distance-related effects on HRTFs and binaural cues become significant when the sound source is close to the head. Thus, measuring HRTFs requires care, and this tutorial will examine the issues that influence the procedures.
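Once measured, an HRTF pair is typically applied by convolving the source signal with the left- and right-ear impulse responses (HRIRs). A minimal sketch, with made-up two-tap responses standing in for measured data:

```python
def convolve(x, h):
    """Direct-form convolution: y[n] = sum_k h[k] * x[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xv in enumerate(x):
        for k, hv in enumerate(h):
            y[n + k] += xv * hv
    return y

def binaural_render(mono, hrir_left, hrir_right):
    # One convolution per ear yields the two-channel binaural signal.
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs: the right ear receives a delayed, attenuated copy,
# as it would for a source located to the listener's left.
left, right = binaural_render([1.0, 0.5], [1.0], [0.0, 0.6])
```

Real HRIRs run to hundreds of taps per ear and vary with source direction and, in the proximal region discussed above, with source distance.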

Bio:

Tianshu Qu received B.Eng. and M.Eng. degrees from Jilin University of Technology in 1993 and 1999, and a Ph.D. from Jilin University in 2002. He then joined Peking University as a postdoctoral researcher, was appointed assistant professor there in 2004, and is now an associate professor. From March 2011 to March 2012 he was a visiting scholar at Michigan State University. His principal interests are acoustic signal processing, binaural auditory models and virtual sound.


[Mon 2 - #2] Dynamic Convolution Techniques in Live Performance - Prof. Sigurd Saue, Norwegian University of Science and Technology, Trondheim, Norway.

Abstract:

Convolution plays an important role in digital audio processing, typically to impose the spectral and temporal characteristics of a given impulse response onto an audio signal. In this workshop we will discuss how convolution can be utilized as a live interprocessing technique, for instance in the context of improvised electroacoustic music, working on two live inputs or on dynamically updated “impulse responses”. After covering some basic challenges with convolution as a live performance tool, the tutorial will present a number of approaches to increase the dynamic control of convolution. Topics include live sampling of impulse responses, transient analysis for control of smearing and rhythmic precision, and multilayered cross-convolution. The workshop will touch upon work in progress and participants are welcome to contribute with comments and suggestions. The workshop will include several audio examples and live demonstrations of our convolver plugins implemented in the open source software Csound and Cabbage.
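The dynamically updated impulse responses mentioned above can be sketched as block-wise overlap-add convolution in which the impulse response may be swapped between blocks. This is a toy illustration under my own naming, not the authors' Csound/Cabbage plugins:

```python
def block_convolve(blocks, get_ir):
    """Overlap-add convolution where the impulse response is re-fetched
    for every block, sketching a 'dynamically updated' IR."""
    out, tail = [], []
    for block in blocks:
        h = get_ir()                          # IR sampled anew per block
        y = [0.0] * (len(block) + len(h) - 1)
        for n, xv in enumerate(block):
            for k, hv in enumerate(h):
                y[n + k] += xv * hv
        for i, t in enumerate(tail):          # add previous block's tail
            y[i] += t
        out.extend(y[:len(block)])
        tail = y[len(block):]
    return out + tail
```

With a constant impulse response this reduces to ordinary convolution; passing a `get_ir` that returns freshly sampled audio is the live-performance twist, at the cost of audible discontinuities that the talk's transient-analysis techniques aim to control.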

Bio:

Sigurd Saue is currently an associate professor in Music Technology at the NTNU Department of Music, where he also heads the Music Technology program. He has an MSc in Acoustics and Electrical Engineering from NTNU and worked on a PhD thesis on sonification before spending more than 10 years as a software developer in the oil and gas industry, working specifically with signal processing of seismic data. Alongside this, he has created a number of permanent sound art installations across Norway.


[Mon 2 - #3] Upmixing from Mono using Sound Source Separation: A Real World Case Study - Dr. Derry Fitzgerald, Audio Research Group, Dublin Institute of Technology, Ireland.

Abstract:

Recently, sound source separation technologies were used to create stereo mixes from the original mono recordings of a number of songs by the Beach Boys; these were issued on several album reissues in 2012. For these songs, stereo mixes could not otherwise be created, as some or all of the multitrack tapes were missing, or significant parts had been added live during mixdown. This tutorial will provide an overview of the technologies used, including factorisation-based techniques and median-filtering-based techniques for both vocal and percussion separation, as well as user-assisted separation in cases where parts of the multitracks were available. It will also highlight some of the issues encountered when deploying sound source separation technologies for upmixing in the real world.
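The median-filtering idea used in such separation systems can be seen in one dimension: a running median tracks the steady level and rejects brief transients, and the residual isolates what was rejected. In the actual systems the filter runs along the time and frequency axes of a spectrogram; this sketch (all names mine) shows only the core operation:

```python
def running_median(seq, width=3):
    """Centered running median; windows are truncated at the edges,
    and even-length windows take the upper of the two middle values."""
    half = width // 2
    out = []
    for i in range(len(seq)):
        window = sorted(seq[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

# A steady 'harmonic' level with one transient spike: the median filter
# suppresses the spike, and the residual isolates the 'percussive' part.
frame_energy = [1.0, 1.0, 9.0, 1.0, 1.0]
harmonic = running_median(frame_energy)
percussive = [x - h for x, h in zip(frame_energy, harmonic)]
```

Applied across spectrogram rows and columns, the same operation yields the harmonic/percussive masks used in median-filtering-based separation.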

Bio:

Derry FitzGerald is Stokes Lecturer at the Audio Research Group in DIT. Prior to this he worked as a post-doctoral researcher in the Dept. of Electronic Engineering at Cork Institute of Technology, having previously completed a Ph.D. and an M.A. at Dublin Institute of Technology. He has also worked as a researcher on the DITME project at DIT, and spent some years as a Chemical Engineer.
In the field of music and audio, he has worked as a sound engineer and has written scores for theatre.
His research interests are in the areas of automatic music transcription, sound source separation, tensor factorizations, and music information retrieval systems. He recently used his sound source separation technologies to create the first officially released stereo mixes of several Beach Boys songs, including Good Vibrations, Help Me, Rhonda and I Get Around.


[Mon 2 - #4] The making of RAZOR, an additive synthesizer by Errorsmith - Errorsmith, Musician and External Reaktor Instrument Builder for Native Instruments, Berlin

Abstract:

RAZOR is an additive synthesizer created by Errorsmith and Native Instruments using Reaktor, a graphical programming environment for instruments and effects. Errorsmith will talk about the work process that led from the first idea to the finished product. He will explain the concept behind the synthesizer, the design decisions that were taken and the technical challenges he faced, and will show implementation details in Reaktor. Some of RAZOR's unique features, such as its 'additive' reverb, dissonance effects and special filter types, will be demonstrated and explained in detail, showing a new twist on additive synthesis.
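The additive core underlying a synthesizer like RAZOR is a sum of sinusoidal partials with individually controlled amplitudes. The sketch below is the textbook recipe only, not RAZOR's Reaktor implementation (all names are mine):

```python
import math

def additive_frame(freq, amps, sr, n):
    """Sum of harmonic partials: the basic additive-synthesis recipe.
    amps[k] is the amplitude of partial k + 1; a synth like RAZOR
    manipulates hundreds of such partials per voice in real time."""
    out = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.sin(2 * math.pi * freq * (k + 1) * t)
                for k, a in enumerate(amps))
        out.append(s)
    return out
```

Effects such as an 'additive' reverb or dissonance then operate by reshaping the per-partial amplitudes and frequencies directly, rather than by filtering the summed signal.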

Bio:

Errorsmith (real name Erik Wiegand) is a Berlin-based musician and instrument developer. He is part of MMM and Smith n Hack, musical collaborations with friends. He studied communication and computer science in the 90s but dropped out of college to work on music and to develop his own software instruments. He worked at Native Instruments from 1999 to 2004, first as a quality manager and then as an instrument designer, and has since worked for them sporadically as a freelancer. In 2009 he and Native Instruments started work on RAZOR, an innovative additive synthesizer, which was released in March 2011.



DAFx-13 Complete Program:

Monday 2nd September
09:00 - 18:00
Registration
[Iontas - Foyer]
Workshop 1:

[Iontas - TH 0.003]
09:30 - 11:00
Head-Related Transfer Function Measurement
Prof. Tianshu Qu (Peking University)
11:00 - 11:30
Coffee
[Iontas - Foyer]
Workshop 2:

[Iontas - TH 0.003]
11:30 - 13:00
Dynamic Convolution Techniques in Live Performance
Prof. Sigurd Saue (Norwegian University of Science and Technology, Trondheim)
13:00 - 14:30
Lunch
[Phoenix Restaurant]
Workshop 3:

[Iontas - TH 0.003]
14:30 - 16:00
Upmixing from Mono using Sound Source Separation: A Real World Case Study
Dr. Derry Fitzgerald (Dublin Institute of Technology)
16:00 - 16:30
Coffee
[Iontas - Foyer]
Workshop 4:

[Iontas - TH 0.003]
16:30 - 18:00
The making of RAZOR, an additive synthesizer by Errorsmith
Errorsmith, Musician and External Reaktor Instrument Builder for Native Instruments, Berlin
19:00 - 21:30
Welcome Reception
[Pugin Hall - South Campus]

Tuesday 3rd September
08:30 - 09:00
Registration
[Iontas - Foyer]
09:00 - 09:10
Opening
[Iontas - TH 0.003]
Keynote 1:

[Iontas - TH 0.003]
09:10 - 10:20
Parallel Computing and Audio Processing
John ffitch
Oral Session 1:
Music Information Retrieval and Parameter Estimation
Chair: Derry Fitzgerald

[Iontas - TH 0.003]
10:20 - 10:40
Expressive Oriented Time-Scale Adjustment for Mis-Played Musical Signals Based on Tempo Curve Estimations
Yuma Koizumi, Katunobu Itou
10:40 - 11:00
Information Retrieval of Marovany Zither Music Based on an Original Optical-Based System
Dorian Cazau, Olivier Adam, Marc Chemillier
11:00 - 11:20
The Tonalness Spectrum: Feature-Based Estimation of Tonal Components
Sebastian Kraft, Alexander Lerch, Udo Zölzer
11:20 - 11:30
Poster Presentations 1:
[Iontas - TH 0.003]

Perception & Evaluation of Audio Quality in Music Production
Alex Wilson, Bruno Fazenda

Unsupervised Audio Key and Chord Recognition
Yun-Sheng Wang, Harry Wechsler

Efficient DSP Implementation of Median Filtering for Real-Time Audio Noise Reduction
Stephan Herzog

Separation of Unvoiced Fricatives in Singing Voice Mixtures with Semi-Supervised NMF
Jordi Janer, Richard Marxer
11:30 - 12:00
Coffee/Poster Session
[Iontas - Foyer]
Oral Session II:
Parameter Estimation and Generative Processes
Chair: Jez Wells

[Iontas - TH 0.003]
12:00 - 12:20
A Complex Wavelet Based Fundamental Frequency Estimator in Single-Channel Polyphonic Signals
Jesús Ponce de León, Fernando Beltrán, José R. Beltrán
12:20 - 12:40
Maximum Filter Vibrato Suppression for Onset Detection
Sebastian Böck and Gerhard Widmer
12:40 - 13:00
Generating Musical Accompaniment Using Finite State Transducers
Jonathan P. Forsyth, Juan P. Bello
13:00 - 14:30
Lunch
[Phoenix Restaurant]
Oral Session III:
Source Separation
Chair: Vesa Välimäki

[Iontas - TH 0.003]
14:30 - 14:50
Re-Thinking Sound Separation: Prior Information and Additivity Constraint in Separation Algorithms
Estefanía Cano, Christian Dittmar, Gerald Schuller
14:50 - 15:10
Study of Regularizations and Constraints in NMF-Based Drums Monaural Separation
Ricard Marxer, Jordi Janer
15:10 - 15:30
Reverse Engineering Stereo Music Recordings Pursuing an Informed Two-Stage Approach
Stanislaw Gorlow, Sylvain Marchand
15:30 - 15:50
Source Separation and Analysis of Piano Music Signals Using Instrument-Specific Sinusoidal Model
Wain Man Szeto, Kin Hong Wong
15:50 - 16:00
Poster Presentations II:
[Iontas - TH 0.003]

Low-Latency Bass Separation Using Harmonic-Percussion Decomposition
Ricard Marxer, Jordi Janer

A 3D Multi-Plate Environment for Sound Synthesis
Alberto Torin, Stefan Bilbao

Pure Data External for Reactive HMM-Based Speech and Singing Synthesis
Maria Astrinaki, Alexis Moinet, Nicolas d’Alessandro, Thierry Dutoit

Simulation of Textured Audio Harmonics Using Random Fractal Phaselets
J. Blackledge, D. Fitzgerald, R. Hickson
16:00 - 16:30
Coffee/Poster Session
[Iontas - Foyer]
Oral Session IV:
Sound Synthesis and Processing
Chair : Philippe Depalle

[Iontas - TH 0.003]
16:30 - 16:50
Source Filter Model For Expressive Gu-Qin Synthesis and its iOS App
Pei-Ching Li, Wei-Chen Chang, Tien-Min Wang, Ya-Han Kuo, Alvin W. Y. Su
16:50 - 17:10
Extended Source-Filter Model for Harmonic Instruments for Expressive Control of Sound Synthesis and Transformation
Henrik Hahn, Axel Röbel
17:10 - 17:30
Bit Bending: an Introduction
Kurt James Werner, Mayank Sanganeria
20:00 - 22:30
Concert
[TBA]



Wednesday 4th September
08:45 - 09:00
Registration
[Iontas - Foyer]
Keynote 2:

[Iontas - TH 0.003]
09:00 - 10:20
Synthesizers from Analog to Digital, from Hardware to Software, and back to Analog
Dave Smith
Oral Session V:
Physical Models
Chair : Sascha Disch

[Iontas - TH 0.003]
10:20 - 10:40
Numerical Simulation of Spring Reverberation
Stefan Bilbao
10:40 - 11:00
Fourth-Order and Optimised Finite Difference Schemes for the 2-D Wave Equation
Brian Hamilton and Stefan Bilbao
11:00 - 11:20
Parametric Audio Coding of Bass Guitar Recordings Using a Tuned Physical Modeling Algorithm
Jakob Abeßer, Patrick Kramer, Christian Dittmar, Gerald Schuller
11:20 - 11:30
Poster Presentations III:
[Iontas - TH 0.003]

3D Particle Systems for Audio Applications
Nuno Fonseca

Physically Informed Synthesis of Jackhammer Tool Impact Sounds
Sami Oksanen, Julian Parker, Vesa Välimäki


Guitar Preamp Simulation Using Connection Currents
Jaromir Macak

Chromax, the Other Side of the Spectral Delay Between Signal Processing and Composition
Arshia Cont, Carlo Laurenzi, Marco Stroppa
11:30 - 12:00
Coffee/Poster Session
[Iontas - Foyer]
Oral Session VI:
Analysis-Synthesis Methods
Chair : Stefan Bilbao

[Iontas - TH 0.003]
12:00 - 12:20
Analysis/Synthesis Using Time-Varying Windows and Chirped Atoms
Corey Kereliuk, Philippe Depalle
12:20 - 12:40
On the Modeling of Sound Textures Based on the STFT Representation
Wei-Hsiang Liao, Axel Röbel, Alvin W.Y. Su
12:40 - 13:00
A Streaming Audio Mosaicing Vocoder Implementation
Edward Costello, Victor Lazzarini, Joseph Timoney
13:00 - 14:30
Lunch
[Phoenix Restaurant]
Oral Session VII: Interaction and Control
Chair : Simon Lui

[Iontas - TH 0.003]
14:30 - 14:50
Navigating in a Space of Synthesized Interaction-Sounds: Rubbing, Scratching and Rolling Sounds
S. Conan, E. Thoret, M. Aramaki, O. Derrien, C. Gondre, R. Kronland-Martinet, S. Ystad
14:50 - 15:10
A Modeller-Simulator for Instrumental Playing of Virtual Musical Instruments
James Leonard, Nicolas Castagné, Claude Cadoz, Jean-Loup Florens
15:10 - 15:30
Rumbator: a Flamenco Rumba Cover Version Generator Based on Audio Processing at Note Level
Carles Roig, Isabel Barbancho, Emilio Molina, Lorenzo J. Tardón, Ana María Barbancho
15:30 - 15:50
Controlling a Non Linear Friction Model for Evocative Sound Synthesis Applications
Etienne Thoret, Mitsuko Aramaki, Charles Gondre, Richard Kronland-Martinet, Sølvi Ystad
15:50 - 16:00
Poster Presentations IV:
[Iontas - TH 0.003]

TELTPC Based Re-Synthesis Method for Isolated Notes of Polyphonic Instrumental Music Recordings
Ya-Han Kuo, Wei-Chen Chang, Tien-Ming Wang, Alvin W.Y. Su

Time-Frequency Analysis of Musical Signals using the Phase Coherence
Alessio Degani, Marco Dalai, Riccardo Leonardi, Pierangelo Migliorati

Modelling and Separation of Singing Voice Breathiness in Polyphonic Mixtures
Ricard Marxer, Jordi Janer

Audio-Tactile Glove
Gareth Young, David Murphy, Jeffrey Weeter
16:00 - 16:30
Coffee/Poster Session
[Iontas - Foyer]
Oral Session VIII:
Source Separation and Restoration
Chair : Tom Lysaght

[Iontas - TH 0.003]
16:30 - 16:50
Stereo Vocal Extraction Using ADRess and Nearest Neighbours Median Filtering
Derry FitzGerald
16:50 - 17:10
Timbre-Constrained Recursive Time-Varying Analysis for Musical Note Separation
Yiju Lin, Wei-Chen Chang, Tien-Ming Wang, Alvin W.Y. Su, Wei-Hsiang Liao
17:10 - 17:30
Comparison of Various Predictors for Audio Extrapolation
Marco Fink, Martin Holters, Udo Zölzer
19:00 - 19:30
Buses to Barberstown Castle
[NUIM]
19:30 - 22:30
Conference Banquet
[Barberstown Castle]

Thursday 5th September
08:45 - 09:00 Registration
[Iontas - Foyer]
Keynote 3:
[Iontas - TH 0.003]
09:00 - 10:20
Virtual Analog Modelling
Vesa Välimäki
Oral Session IX:
Audio Effects
Chair : Udo Zölzer

[Iontas - TH 0.003]
10:20 - 10:40
Kronos VST – The Programmable Effect Plugin
Vesa Norilo
10:40 - 11:00
A Digital Model of the Buchla Lowpass-Gate
Julian Parker, Stefano D’Angelo
11:00 - 11:20
Doppler Effects without Equations
Peter Brinkmann, Michael Gogins
11:20 - 11:30
Poster Presentations V:
[Iontas - TH 0.003]

Csoundo For Android
Rory Walsh, Conor Robotham

Digital Audio Device Creation by the use of a Domain Specific Language and a Hardware Abstraction Layer
Stefan Jaritz

Faust2android: a Faust Architecture For Android
Romain Michon

Incremental Functional Reactive Programming for Interactive Music Signal Processing
Caleb Reach
11:30 - 12:00
Coffee/Poster Session
[Iontas - Foyer]
Oral Session X:
Audio Effects and Coding
Chair : Sylvain Marchand

[Iontas - TH 0.003]
12:00 - 12:20
Music Dereverberation by Spectral Linear Prediction in Live Recordings
Katariina Mahkonen, Antti Eronen, Tuomas Virtanen, Elina Helander, Victor Popa, Jussi Leppänen, Igor D.D. Curcio
12:20 - 12:40
Audio Time-Scaling for Slow Motion Sports Videos
Alexis Moinet, Thierry Dutoit, Philippe Latour
12:40 - 13:00
Error Robust Delay-Free Lossy Audio Coding Based on ADPCM
Gediminas Simkus, Martin Holters, Udo Zölzer
13:00 - 14:30
Lunch
[Phoenix Restaurant]
Oral Session XI:
Spatial Audio and Room Acoustics
Chair : Sigurd Saue

[Iontas - TH 0.003]
14:30 - 14:50
Selection and Interpolation of Head-Related Transfer Functions for Rendering Moving Virtual Sound Sources
Hannes Gamper
14:50 - 15:10
Room Acoustics Modelling Using GPU-Accelerated Finite Difference and Finite Volume Methods on a Face-Centered Cubic Grid
Brian Hamilton, Craig J. Webb
15:10 - 15:30
A New Reverberator Based on Variable Sparsity Convolution
Bo Holm-Rasmussen, Heidi-Maria Lehtonen, Vesa Välimäki
15:30 - 15:40
Poster Presentations VI:
[Iontas - TH 0.003]

B-Format Acoustic Impulse Response Measurement and Analysis In the Forest at Koli National Park, Finland
Simon Shelley, Damian Murphy and Andrew Chadwick

A Scalable Architecture for General Real-Time Array-Based DSP on FPGAs with Application to the Wave Equation
Ross P. Kirk, Jeremy J. Wells

Center Signal Scaling using Signal-to-Downmix Ratios
Christian Uhle

Real-Time Dynamic Image-Source Implementation For Auralisation
André Oliveira, Guilherme Campos, Paulo Dias, Damian Murphy, José Vieira, Catarina Mendonça, Jorge Santos
15:40 - 16:00
Coffee/Poster Session
[Iontas - Foyer]
16:00 - 16:30
DAFx Scientific Committee Meeting
Oral Session XII:
Emotion and Perception
Chair : Joe Timoney

[Iontas - TH 0.003]
16:30 - 16:50
A Preliminary Analysis of the Continuous Axis Value of the Three-Dimensional PAD Speech Emotional State Model
Simon Lui
16:50 - 17:10
Perceptual Investigation of Image Placement with Ambisonics for Non-Centred Listeners
Peter Stitt, Stéphanie Bertet, Maarten van Walstijn
17:10 - 17:30
Closing - Handover to DAFx14 - Erlangen





dafx13.nuim.ie
Last modified: 2012-09-03 17:13:04