Michael Hu
{NLP, training data, cognitive science}
I am a third-year PhD student at the NYU Center for Data Science, advised by Kyunghyun Cho and Tal Linzen. I work on algorithms that optimize the training data of language models. My work is supported by an NSF Graduate Research Fellowship.
In my spare time, I enjoy cooking, running, and playing basketball.
Previously, I completed a BSE at Princeton CS, where I spent two lovely years working with Karthik Narasimhan and Tom Griffiths.
news
| Date | News |
|---|---|
| Jul 17, 2024 | New preprint: “The importance of human-scale language modeling for psycholinguistics.” |
| Nov 21, 2023 | “Latent State Models of Training Dynamics” accepted to TMLR. |
| Dec 9, 2022 | “Using natural language and program abstractions to instill human inductive biases in machines” received an Outstanding Paper Award at NeurIPS 2022! 🏅 |
| Sep 1, 2022 | Started a PhD at the NYU Center for Data Science as an NSF GRFP Scholar. |
| Oct 6, 2021 | “Safe RL with Natural Language Constraints” accepted to NeurIPS 2021 as a spotlight presentation! |
selected publications
2023
- TMLR