I head the Interactive Audio Lab. We build machines that hear, understand, and manipulate sound. To do this, we develop new methods in machine learning (e.g., automated gradient clipping), signal processing (e.g., the Multi-scale Common-fate Transform), and human-computer interaction. Example applications include a search engine that lets you explore a collection of sounds using a sound as the search key, tools that automatically separate and transcribe music, and tools that let you apply audio effects by specifying the goal in natural language (e.g., make the audio sound "warmer").