Research Vision

Can an elephant fit through a doorway?

When trying to test whether a machine has common sense, you might think to ask it a simple question like "Can an elephant fit through a doorway?" Such intuitive explorations can prove immensely useful; however, machines often find the right answer for the wrong reason. Thus, defining what it means for machines to have common sense is one of the primary challenges in the area. As part of our work, we're invested in exploring and iterating on a strong suite of core commonsense tasks against which we can measure progress.
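To make "the right answer for the wrong reason" concrete: most of the benchmarks below are multiple-choice tasks scored by accuracy, and a model can score well by exploiting surface cues rather than reasoning. The sketch below illustrates this with toy items and a deliberately shallow baseline (always pick the longest answer); the items and the baseline are invented for illustration, not drawn from any of the datasets listed here.

```python
# A minimal sketch of scoring a multiple-choice commonsense benchmark
# (SocialIQA- or PIQA-style). The items and the "model" are toy
# illustrations, not real benchmark data.

def evaluate(examples, predict):
    """Return accuracy of `predict` over a list of benchmark items."""
    correct = sum(1 for ex in examples if predict(ex) == ex["label"])
    return correct / len(examples)

# Toy items: each has a question, candidate answers, and a gold label.
examples = [
    {"question": "Can an elephant fit through a doorway?",
     "choices": ["Yes, easily", "No, a doorway is far too small"],
     "label": 1},
    {"question": "Can a whale fit through a doorway?",
     "choices": ["No", "Yes, whales are quite flexible and often do"],
     "label": 0},
]

# A shallow baseline that ignores the question entirely and picks the
# longest answer. Cues like this are exactly what adversarial dataset
# construction (as in SWAG, HellaSwag, and WinoGrande) tries to defeat.
def longest_choice(ex):
    return max(range(len(ex["choices"])),
               key=lambda i: len(ex["choices"][i]))

accuracy = evaluate(examples, longest_choice)  # right once, wrong once
```

The baseline gets the first item right and the second wrong (accuracy 0.5), showing how a model can appear to answer correctly without any grasp of the question.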


SocialIQA: Commonsense Reasoning about Social Interactions

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. EMNLP 2019.

PIQA: Reasoning about Physical Commonsense in Natural Language

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. AAAI 2020.

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. AAAI 2020.

Abductive Commonsense Reasoning

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, and 3 more... ICLR 2020.

COSMOS QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. EMNLP 2019.

SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. EMNLP 2018.

HellaSwag: Can a Machine Really Finish Your Sentence?

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. ACL 2019.

From Recognition to Cognition: Visual Commonsense Reasoning

Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. CVPR 2019.