Bio

Bio for Invited Talks: Zhijing Jin (she/her) is an incoming Assistant Professor at the University of Toronto, and currently a postdoc at the Max Planck Institute in Germany. Her research focuses on Causal Inference for NLP, AI Safety in Multi-Agent LLMs, and AI for Causal Science. She has received three Rising Star awards, two Best Paper awards at NeurIPS 2024 Workshops, two PhD Fellowships, and a postdoc fellowship. She has authored over 65 papers, many of which appear at top AI conferences (e.g., ACL, EMNLP, NAACL, NeurIPS, ICLR, AAAI), and her work has been featured in CHIP Magazine, WIRED, and MIT News. She co-organizes many workshops (e.g., several NLP for Positive Impact Workshops at ACL and EMNLP, and the Causal Representation Learning Workshop at NeurIPS 2024), and led the Tutorial on Causality for LLMs at NeurIPS 2024 and the Tutorial on CausalNLP at EMNLP 2022. To support diversity, she organizes the ACL Year-Round Mentorship. More information can be found on her personal website: zhijing-jin.com

Talk Abstract

Can LLMs Achieve Causal Reasoning and Cooperation?

Causal reasoning is a cornerstone of human intelligence and a critical capability for artificial systems aiming to achieve advanced understanding and decision-making. While large language models (LLMs) excel at many tasks, a key question remains: How can these models reason better about causality? Causal questions that humans can pose span a wide range of fields, from Newton’s fundamental question, “Why do apples fall?”, which LLMs can now answer from standard textbook knowledge, to complex inquiries such as “What are the causal effects of minimum wage introduction?”, a topic recognized with the 2021 Nobel Prize in Economics. My research focuses on automating causal reasoning across all types of questions. To achieve this, I explore the causal reasoning capabilities that have emerged in state-of-the-art LLMs, and enhance their ability to perform causal inference by guiding them through structured, formal steps. I will also show how the causality of individual behavior links to group outcomes, covering findings from our multi-agent simulacra work on whether LLMs learn to cooperate. Finally, I will outline a future research agenda for building the next generation of LLMs capable of scientific-level causal reasoning.

Photo

Here are two photos of me: (1) a talk photo, and (2) a close-up profile photo. Feel free to select the one that best suits your use case, and don’t hesitate to crop it if necessary.

Our Funders

We are grateful to the organizations that fund our research.