I’m a PhD student at MIT, studying AI, machine learning, and program synthesis.
I work on the idea that learning programs from data can produce machines with more human-like intelligence. My research has two aims:
- Developing novel methods for program synthesis that combine deep learning and symbolic techniques.
- Applying program synthesis to build human-like AI models for concept learning and reasoning tasks.
I’ve also spent time as an intern at Facebook AI Research. Before MIT, I received a BA in physics from Harvard.
- We posted our new “scratchpad” paper on arXiv: [paper] [tweet].
- Summer 2021: Internship at Google Brain with Augustus Odena and Charles Sutton. Check out our paper on program synthesis with large language models!
- Our paper Representing Partial Programs with Blended Abstract Semantics was accepted to ICLR 2021
- Presented Learning Compositional Rules via Neural Program Synthesis at NeurIPS 2020
- Summer 2020: Internship at Facebook AI with Brenden Lake
- Presented Write, Execute, Assess: Program Synthesis with a REPL at NeurIPS 2019
- Our paper Learning to Infer Program Sketches was accepted to ICML 2019 and featured on MIT News
An up-to-date list can be found on Google Scholar.
Show Your Work: Scratchpads for Intermediate Computation with Language Models
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena.
Program Synthesis with Large Language Models.
Jacob Austin*, Augustus Odena*, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton.
Implicit Representations of Meaning in Neural Language Models.
Belinda Z. Li, Maxwell Nye, Jacob Andreas.
ACL 2021. [code]
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning.
Maxwell Nye, Michael Henry Tessler, Josh Tenenbaum, Brenden Lake.
Communicating Natural Programs to Humans and Machines.
Samuel Acquaviva, Yewen Pu, Marta Kryven, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Theodoros Sechopoulos, Michael Henry Tessler, Josh Tenenbaum.
In submission. [data]
A Large-Scale Benchmark for Few-Shot Program Induction and Synthesis.
Ferran Alet*, Javier Lopez-Contreras*, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomas Lozano-Perez, Leslie Kaelbling, Josh Tenenbaum.
ICML 2021. Spotlight.
Representing Partial Programs with Blended Abstract Semantics.
Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Josh Tenenbaum, Armando Solar-Lezama.
ICLR 2021. Also presented at NeurIPS 2020 Workshop on Computer-Assisted Programming.
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning.
Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sablé-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Josh Tenenbaum.
Learning Compositional Rules via Neural Program Synthesis.
Maxwell Nye, Armando Solar-Lezama, Josh Tenenbaum, Brenden Lake.
NeurIPS 2020. [code]
Also presented at NeurIPS 2019 Workshop on Context and Compositionality as a spotlight talk.
Write, Execute, Assess: Program Synthesis with a REPL.
Kevin Ellis*, Maxwell Nye*, Yewen Pu*, Felix Sosa*, Josh Tenenbaum, Armando Solar-Lezama.
NeurIPS 2019.
The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples.
Luke Hewitt, Maxwell Nye, Andreea Gane, Tommi Jaakkola, Josh Tenenbaum.
UAI 2018, oral presentation. [code]
Are Efficient Deep Representations Learnable?
Maxwell Nye, Andrew Saxe.
ICLR 2018, workshop.
- MIT (2017 - present)
PhD. Research Areas: Machine Learning and Program Synthesis
Advisors: Josh Tenenbaum and Armando Solar-Lezama
- Harvard University (2013 - 2017)
BA, Physics
Advisor: Haim Sompolinsky