Gabe Grand

Research Profile

I’m a PhD student at MIT CSAIL co-advised by Jacob Andreas and Josh Tenenbaum.

I’m broadly interested in the interface between language and thinking. My research combines techniques from natural language processing, program synthesis, and classical symbolic methods. I’m working closely with researchers at MIT and other institutions to build a community around neurosymbolic programming.

My research is supported by an MIT Presidential Fellowship and the NSF Graduate Research Fellowship.

Full bio

Selected Publications

LILO: Learning Interpretable Libraries by Compressing and Documenting Code. Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas. arXiv:2310.19791 (2023).
[arXiv] [GitHub]

From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. Lionel Wong*, Gabriel Grand*, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum. arXiv:2306.12672 (2023).
[arXiv] [GitHub]

Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs. Alexander K. Lew, Tan Zhi-Xuan, Gabriel Grand, Vikash K. Mansinghka. arXiv:2306.03081 (2023).
[arXiv] [GitHub] [Docs]

Identifying concept libraries from language about object structure. Catherine Wong*, William P. McCarthy*, Gabriel Grand*, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, and Judith E. Fan. CogSci (2022).
[arXiv] [website] [GitHub]

“Semantic projection” recovers rich human knowledge of multiple object features from word embeddings. Gabriel Grand, Idan Blank, Francisco Pereira, and Evelina Fedorenko. Nature Human Behaviour (2022).
[Nature] [MIT McGovern Institute] [arXiv]

Full list of publications

Recent News

🚀 10/2023: LILO preprint and code now released! arXiv:2310.19791

🎬 06/2023: Presented “From Word Models to World Models” at the “LLMs meet CogSci” virtual townhall at CogSci 2023. Link to talk recording (15 mins)

🧠 06/2023: Gave a contributed talk with Lio Wong at SPP 2023 in Pittsburgh on our new work: From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. arXiv:2306.12672

λ 01/2023: This week, I’m attending POPL 2023 here in Boston!

☀️ 06/2022: This summer, I’ll be attending the Neurosymbolic Summer School at Caltech as well as CogSci 2022 in Toronto.

🤖 05/2021: I will be starting my PhD at MIT EECS and CSAIL this fall! I’m thrilled to continue my research career under the co-mentorship of Jacob Andreas and Josh Tenenbaum.

🎓 04/2021: I am honored to have been awarded an NSF Graduate Research Fellowship in support of my PhD research.

Last updated on October 30, 2023