Prithviraj (or Prithvi or Raj) focuses on all things interactive and situated language learning: how to get agents to generate contextually relevant language in service of a goal. He received his BS and PhD from Georgia Tech. When not dreaming of agents, he splits his time between martial arts, theatre, tabletop games, and general outdoorsy activities.
I am a part-time Postdoctoral Young Investigator at AI2 and a postdoctoral researcher at Stanford University working with Jure Leskovec, Chris Manning, and the SNAP and NLP groups. I completed my PhD at the University of Washington advised by Professor Yejin Choi. My research centers on developing natural language agents that can model, represent, and reason about knowledge.
Yonatan's research focuses on the areas of Natural Language Processing and Computational Linguistics. His primary interest is in training computers to understand human language via naturalistic sources of supervision. One way he approaches this is by uncovering linguistic structure from raw text. More recently he has also been pursuing learning via grounded interaction.
Jeff Da is an undergraduate student at the University of Washington.
My research focuses on the computational foundations of intelligent behavior, through the lens of natural language. My goal is to explore the different ways in which we can develop and evaluate systems that understand and reason with (and about) natural language in different contexts.
Xiang Li (Lorraine) is a young investigator with the Mosaic team. She defended her Ph.D. in Computer Science at UMass Amherst in August 2022 and will join the Department of Computer Science at the University of Pittsburgh as an assistant professor in the fall of 2023. Her research is at the intersection of NLP, ML, commonsense reasoning, and knowledge representation. More specifically, she focuses on designing probabilistic models and evaluation methods for implicit commonsense knowledge in language. When not doing research, she enjoys traveling, hiking, and wandering around farmers' markets on the weekend.
Jenny Liang is a senior at the University of Washington studying computer science and informatics.
Chaitanya Malaviya completed a master's in language technologies at the School of Computer Science at Carnegie Mellon University in August 2018. Before that, he finished a bachelor's in computer engineering in Singapore. His research interests are in the areas of probabilistic graphical models, multilingual NLP, and deep learning applied to NLP. When he's not working, he enjoys hiking, running, listening to music, and learning new languages.
Rachel Rudinger is a Postdoctoral Young Investigator at AI2. She earned her Ph.D. in Computer Science from Johns Hopkins University in 2019. Her research interests include common-sense reasoning, semantic representations and issues of bias and fairness in natural language processing. In 2020, Rachel will join the University of Maryland, College Park as an assistant professor of Computer Science. Outside of research, Rachel enjoys studying foreign languages, experiencing new foods and participating in the sport of curling.
Keisuke Sakaguchi is a Research Scientist at AI2. He received his Ph.D. in Computer Science at Johns Hopkins University in 2018 advised by Benjamin Van Durme and Matt Post. He is broadly interested in computational and cognitive aspects of language processing.
Vered completed her PhD in Computer Science at Bar-Ilan University's NLP lab in Israel in 2019 (advisor: Ido Dagan). Her research interests are lexical semantics, multiword expressions, and revealing implicit information not mentioned in texts. Besides research, she likes to go to the gym, and she is slowly learning to speak Italian and play electric guitar.
I am currently a Young Investigator at AI2 on the Mosaic team (led by Yejin Choi), based in Seattle, WA. In July 2023, I will join UC Berkeley EECS as an Assistant Professor. I am also technically an ABD (all-but-dissertation) PhD candidate in Computer Science at Cornell University, based at Cornell Tech in New York, NY, and advised by Yoav Artzi. In 2016, I graduated from Ohio State University with a BS in Computer Science and Engineering and a minor in Linguistics. My research spans natural language processing, machine learning, and computer vision. I build systems that use language to interact with people, e.g., in collaborative interactions (like CerealBar). I design models and datasets that address and represent problems in language grounding (e.g., NLVR). I also develop learning algorithms for systems that learn language through interaction.
Swabha received her PhD in Language and Information Technologies from CMU in May 2019, completing part of her degree in Seattle, where she was a visiting student at UW. Her current research interests include structured prediction and transfer learning. Among other places, she interned at AI2 during the summer and fall of 2018. Outside of research, she loves to dance, cook and socialize.
Youngjae holds a B.S. and a Ph.D. from Seoul National University. His research interests include multimodal representation learning and language grounding in multimedia such as film, YouTube, and VR videos. Outside of work, he likes to travel with his family and play basketball with friends.
I am Wangchunshu Zhou, an incoming CS PhD student at Stanford. I received my master’s degree from the Sino-French Engineering School, Beihang University, advised by Professor Ke Xu. My primary research goal is to apply deep learning to natural language processing and develop language technology for all. To achieve this goal and make language technology accessible in most people’s lives, I have identified two major research topics that I’m interested in: the efficiency and trustworthiness of NLP models. Efficiency involves both the amount of computation and the amount of data required for (pre-)training and using NLP models. Trustworthiness involves interpretability, fairness, and robustness to adversarial attacks and out-of-distribution samples.
Jize Cao is an undergraduate student at the University of Washington.
Yue Dong is a fourth-year Ph.D. student in Computer Science at McGill University and MILA. Her primary research interests are in text summarization and conditional text generation. Her work includes designing models that balance content and discourse across genres (e.g., news, medical documents, and scientific text) in extractive summarization, and designing systems that ensure factual consistency in abstractive summarization. Yue’s research internship at AI2 focused on long-range dependency in language models through memory modules and attention-based decoding.
Denis is pursuing his PhD in Informatics at the University of Edinburgh in Scotland, where his research is situated at the intersection of machine translation, natural language understanding, and generation. During his internship with the Mosaic team, he investigated the capacity of generative models to perform grounded, goal-oriented social reasoning under constraints imposed by commonsense morality.
Max Forbes is a PhD student at the University of Washington working in the Natural Language Processing Group.
Saadia is a fourth-year PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is advised by Prof. Yejin Choi. Her research area is natural language understanding and generation. She is excited about machine learning techniques and deep-learning models for social commonsense and logical reasoning in text.
Ari Holtzman is a PhD student at the University of Washington advised by Yejin Choi. He works on creating [discursive machines](https://www.youtube.com/watch?v=bLho19gWz_o).
I work on natural language processing, machine learning, social interactions, and computer vision. Previously, I earned a PhD in Computer Science at Cornell University.
Lifu Huang is a fifth-year Ph.D. student at Rensselaer Polytechnic Institute. His research mainly focuses on information extraction and question answering.
Alisa is a PhD student at the University of Washington, having graduated from Northwestern University with majors in computer science and math. She is broadly interested in devising methods for improving and evaluating model robustness and in exploring how to align NLP with the end goals of human users. During her internship at Mosaic, she worked on the detoxification of language models.
Julia is a PhD student at the University of Michigan School of Information, advised by Ceren Budak and supported by a Google PhD Fellowship. She completed her BA in Linguistics (2018) and MS in Computer Science (2019) at Stanford University, where she was advised by Dan Jurafsky. Her research interests include natural language processing, sociolinguistics, and computational social science.
James is a PhD student at the University of Washington. He graduated from U.C. Berkeley in 2019 with a B.S. in Electrical Engineering & Computer Science. His research interests are vision & language, language generation, knowledge acquisition, and compositionality.
Hannah Rashkin is a fifth-year PhD student at the University of Washington working on natural language processing with Professor Yejin Choi. Her research focuses on integrating commonsense reasoning about social relationships into NLP tasks, as well as computational social science applications of NLP tools.
Sebastin is a PhD student at the University of Washington advised by Katharina Reinecke. His broad interests lie at the intersection of natural language processing and human-computer interaction. His current research focuses on making NLP systems more culturally inclusive so that they work for populations around the world.
Alex is a PhD student at New York University studying computer science, focusing on machine learning for natural language. He is jointly advised by Professors Sam Bowman and Kyunghyun Cho and is part of the Machine Learning for Language group at NYU. He graduated from Harvard University with a bachelor’s in applied mathematics and a master’s in computer science, where he was advised by Alexander Rush and spent time with the Harvard Natural Language Processing group.
Peter West received his B.Sc. in Honors Computer Science from the University of British Columbia, and is now a graduate researcher with Yejin Choi at the University of Washington. His current research is focused on creative methods for unsupervised learning in NLP.
Pei (Leo) Zhou is a third-year Ph.D. student in Computer Science at the University of Southern California (USC) and the Information Sciences Institute (ISI), supported by an Annenberg Fellowship and co-advised by Professors Jay Pujara and Xiang Ren. Pei graduated with a Bachelor of Science degree in Mathematics of Computation from UCLA in 2019, where he worked closely with Prof. Kai-Wei Chang and Prof. Yizhou Sun. In the summers of 2020 and 2021, he interned as an applied scientist and a researcher at Amazon Alexa AI.