In addition to being co-director of the Northwestern University Center for Human Computer Interaction + Design, I head the Interactive Audio Lab. We make machines that understand and manipulate sound. To do this, we develop new methods in machine learning (e.g., automated gradient clipping) and signal processing (e.g., the Multi-scale Common-fate Transform) and apply them to new kinds of user interfaces and interactions. Examples include:
When I'm not being an academic, I make music with the Latin folk fusion ensemble Son Monarcas.