About
I am an applied machine learning scientist and recent PhD graduate of the Harvard-MIT Speech and Hearing Bioscience and Technology (SHBT) Program, where I worked with Satra Ghosh in the Senseable Intelligence Group at MIT. My work focuses on building and evaluating multimodal and perception-driven systems using large-scale naturalistic datasets, including whole-brain fMRI encoding models and large pediatric speech corpora.
Previously, I was a lab manager in the Robertson Lab at Dartmouth, splitting my time between it and the Kanwisher Lab at MIT, where I designed and implemented VR eye-tracking and salience-modeling pipelines to study visual attention. I hold a master’s degree in Digital Musics from Dartmouth, where I worked with Michael Casey on machine learning approaches to neural stimulus reconstruction and music information retrieval.
My broader research experience spans Alzheimer’s disease clinical trials, deep-sea marine biology, microelectronics, and genomics. Across domains, a common thread has been the use of quantitative and data-driven methods to model complex biological and physical systems.
Outside of research, I photograph a Tibetan spaniel named François and enjoy writing and producing music.