About
I am an applied machine learning researcher and computational neuroscientist, and a recent graduate of the Harvard-MIT Speech and Hearing Bioscience and Technology (SHBT) Program, where I worked with Satra Ghosh in the Senseable Intelligence Group at MIT.
My work focuses on building and evaluating models for complex real-world signals: speech, audio, movies, gaze, behavior, and brain data. I like problems where the data are rich, messy, human, and not easily reduced to a tidy benchmark.
At MIT and Harvard, I built large-scale predictive modeling workflows for naturalistic fMRI, multimodal feature-extraction pipelines for movie stimuli, pediatric speech datasets for machine learning challenges, and research tools for annotating audiovisual data. A through-line across this work is turning ambiguous scientific questions into concrete datasets, models, validation strategies, and interpretable evidence.
Previously, I was a lab manager in the Robertson Lab at Dartmouth, splitting my time between that lab and the Kanwisher Lab at MIT, where I designed and implemented VR eye-tracking and salience-modeling pipelines to study visual attention. I hold a master’s degree in Digital Musics from Dartmouth, where I worked with Michael Casey on machine learning approaches to neural stimulus reconstruction and music information retrieval.
My broader research experience spans Alzheimer’s disease clinical trials, deep-sea marine biology, microelectronics, and genomics. Across these domains, the common thread has been quantitative, data-driven modeling of complex biological and physical systems.
Outside of research, I take pictures of a Tibetan spaniel named François and write and produce music.