I’m a machine learning researcher trying to get computers to automatically write code. I’m working on something new; more details soon!
I recently received a PhD from MIT, where I was advised by Josh Tenenbaum and Armando Solar-Lezama. I also worked with Brenden Lake (NYU/Facebook AI) and Jacob Andreas. My research focus at MIT was program synthesis: using deep learning as well as symbolic techniques to automatically write programs.
I interned at Google Brain, working with Augustus Odena and Charles Sutton. We used very large language models (>100 billion parameters) to write and understand Python programs.
I’ve also spent time as an intern at Facebook AI Research. Before MIT, I studied physics at Harvard.
Updates
- I defended my thesis and graduated from MIT! Thanks so much to everyone I worked with there.
- We posted our new “scratchpad” paper on arXiv: paper, tweet.
- Summer 2021: Internship at Google Brain with Augustus Odena and Charles Sutton. Check out our paper on program synthesis with large language models!
- Our paper Representing Partial Programs with Blended Abstract Semantics was accepted to ICLR 2021
- Presented Learning Compositional Rules via Neural Program Synthesis at NeurIPS 2020
- Summer 2020: Internship at Facebook AI with Brenden Lake
- Presented Write, Execute, Assess: Program Synthesis with a REPL at NeurIPS 2019
- Our paper Learning to Infer Program Sketches was accepted to ICML 2019 and featured on MIT News
Publications
An up-to-date list may be found on Google Scholar.
- Show Your Work: Scratchpads for Intermediate Computation with Language Models.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena.
2021.
- Program Synthesis with Large Language Models.
Jacob Austin*, Augustus Odena*, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton.
2021. [data]
- Implicit Representations of Meaning in Neural Language Models.
Belinda Z. Li, Maxwell Nye, Jacob Andreas.
ACL 2021. [code]
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning.
Maxwell Nye, Michael Henry Tessler, Josh Tenenbaum, Brenden Lake.
NeurIPS 2021.
- Communicating Natural Programs to Humans and Machines.
Samuel Acquaviva, Yewen Pu, Marta Kryven, Catherine Wong, Gabrielle E. Ecanow, Maxwell Nye, Theodoros Sechopoulos, Michael Henry Tessler, Josh Tenenbaum.
In submission. [data]
- A Large-Scale Benchmark for Few-Shot Program Induction and Synthesis.
Ferran Alet*, Javier Lopez-Contreras*, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomas Lozano-Perez, Leslie Kaelbling, Josh Tenenbaum.
ICML 2021. Spotlight.
- Representing Partial Programs with Blended Abstract Semantics.
Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Josh Tenenbaum, Armando Solar-Lezama.
ICLR 2021. Also presented at the NeurIPS 2020 Workshop on Computer-Assisted Programming.
- DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning.
Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sablé-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Josh Tenenbaum.
PLDI 2021.
- Learning Compositional Rules via Neural Program Synthesis.
Maxwell Nye, Armando Solar-Lezama, Josh Tenenbaum, Brenden Lake.
NeurIPS 2020. [code]
Also presented as a spotlight talk at the NeurIPS 2019 Workshop on Context and Compositionality.
- Write, Execute, Assess: Program Synthesis with a REPL.
Kevin Ellis*, Maxwell Nye*, Yewen Pu*, Felix Sosa*, Josh Tenenbaum, Armando Solar-Lezama.
NeurIPS 2019.
- Learning to Infer Program Sketches.
Maxwell Nye, Luke Hewitt, Josh Tenenbaum, Armando Solar-Lezama.
ICML 2019. [code]
Press: MIT News
- The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples.
Luke Hewitt, Maxwell Nye, Andreea Gane, Tommi Jaakkola, Josh Tenenbaum.
UAI 2018, oral presentation. [code]
- Are Efficient Deep Representations Learnable?
Maxwell Nye, Andrew Saxe.
ICLR 2018, workshop.
*equal contribution
Education
- MIT (2017 - 2022)
PhD. Research Areas: Machine Learning and Program Synthesis
Advisors: Josh Tenenbaum and Armando Solar-Lezama
- Harvard University (2013 - 2017)
BA, Physics
Advisor: Haim Sompolinsky