I head the Interactive Audio Lab. We make machines that hear, understand, and manipulate sound. To do this, we develop new methods in machine learning (e.g., automated gradient clipping), signal processing (e.g., the Multi-scale Common-fate Transform), and human-computer interaction. Example applications include a search engine that lets you explore a collection of sounds using a sound as the search key, tools that automatically separate and transcribe music, and tools that let you apply audio effects by specifying the goal in natural language (e.g., make the audio sound "warmer").

When I'm not researching, you may find me teaching courses like Digital Luthier or Deep Learning.

When I'm not being an academic, I'm usually making music with groups like Son Monarcas and The East Loop and, back in the day, Balkano.

Prof. Bryan Pardo

pardo at northwestern.edu

Computer Science
Northwestern University
Seely Mudd Building, Room 3115
2233 Tech Drive
Evanston, IL 60208
USA


Interactive Audio Lab
Google Scholar