She/Her · nematzadeh at deepmind.com
I am a staff research scientist at DeepMind. Before joining DeepMind, I was a postdoctoral researcher at UC Berkeley advised by Tom Griffiths and affiliated with the Computational Cognitive Science Lab and BAIR. I received a PhD and an MSc in Computer Science from the University of Toronto. I was advised by Suzanne Stevenson and Afsaneh Fazly, and was a member of the Computational Linguistics group.
During my PhD, I studied how children learn, represent, and search for semantic information through computational (cognitive) modeling. Here are my PhD thesis and its précis. Broadly, my research interests lie at the intersection of computational linguistics, cognitive science, and machine learning.
My recent work has focused on the evaluation and analysis of neural representations. In particular, I am interested in understanding the strengths and shortcomings of pre-trained language and vision-language models. I study these models using two general approaches: (a) probing for specific capabilities, such as verb understanding, number discrimination, theory of mind, common-sense reasoning, and action segmentation; and (b) evaluating models in transfer settings while controlling for data and modeling choices (e.g., multimodal transformers and VQA).
Here is a short talk summarizing some of this work; a list of my talks by topic can be found here.