Tyler Chang
Hello!
I am a research scientist at Google DeepMind working on multilinguality for [Gemini].
I'm also interested in model interpretability, pretraining dynamics, and the science of model behavior more broadly.
For details, see my [publications] or [CV].

I did my undergrad in math and cognitive science at Carleton College in Northfield, Minnesota, and my PhD in cognitive science at UC San Diego.
Outside of research, I enjoy playing piano, running, and taking blurry photos in the ocean. For questions about my research, feel free to contact me at tachang@ucsd.edu!
Recent Highlights
- [MRL 2025] is organizing a shared task to develop a multilingual physical commonsense reasoning evaluation dataset! Details on how to submit are [here].
- We published a [blog post], [preprint], and [demo] for our work at Google DeepMind scaling training data attribution methods to LLM pretraining! I'll be presenting this work at ICLR 2025.
- We released [Goldfish], a suite of small, comparable monolingual language models for 350 languages!