Bio for Invited Talks: Zhijing Jin (she/her) is an incoming Assistant Professor in Computer Science at the University of Toronto, and currently a postdoctoral research scientist at the Max Planck Institute in Germany. She is a faculty member at the Vector Institute, a CIFAR AI Chair, an ELLIS advisor, and a faculty affiliate at the Schwartz Reisman Institute in Toronto. She co-chairs the ACL Ethics Committee and the ACL Year-Round Mentorship program. Her research focuses on Causal Reasoning with LLMs and AI Safety in Multi-Agent LLMs. She has received three Rising Star awards, two Best Paper awards at NeurIPS 2024 workshops, two PhD fellowships, and a postdoc fellowship. She has authored over 80 papers, many of which appear at top AI conferences (e.g., ACL, EMNLP, NAACL, NeurIPS, ICLR, AAAI), and her work has been featured in CHIP Magazine, WIRED, and MIT News. She co-organizes many workshops (e.g., several NLP for Positive Impact workshops at ACL and EMNLP, and the Causal Representation Learning Workshop at NeurIPS 2024), and has led the Tutorial on Causality for LLMs at NeurIPS 2024 and the Tutorial on CausalNLP at EMNLP 2022. See more info at zhijing-jin.com.
Shorter Bio: Zhijing Jin (she/her) is an incoming Assistant Professor at the University of Toronto. She serves as a CIFAR AI Chair, an ELLIS advisor, and a faculty member at the Vector Institute and the Schwartz Reisman Institute. She co-chairs the ACL Ethics Committee and the ACL Year-Round Mentorship program. Her research focuses on Causal Reasoning with LLMs and AI Safety in Multi-Agent LLMs. She has published over 80 papers and has received three Rising Star awards and two Best Paper awards at NeurIPS 2024 workshops.
Talk Abstract
Can LLMs Achieve Causal Reasoning and Cooperation?
Causal reasoning is a cornerstone of human intelligence and a critical capability for artificial systems aiming to achieve advanced understanding and decision-making. While large language models (LLMs) excel at many tasks, a key question remains: how can these models reason better about causality? The causal questions humans pose span a wide range of fields, from Newton’s fundamental question, “Why do apples fall?”, whose answer LLMs can now retrieve from standard textbook knowledge, to complex inquiries such as “What are the causal effects of introducing a minimum wage?”, a topic recognized with the 2021 Nobel Prize in Economics. My research aims to automate causal reasoning across all such questions. To this end, I explore the causal reasoning capabilities that have emerged in state-of-the-art LLMs, and enhance their ability to perform causal inference by guiding them through structured, formal steps. I then show how the causality of individual behavior links to group outcomes, drawing on findings from our multi-agent simulacra work on whether LLMs learn to cooperate. Finally, I will outline a future research agenda for building the next generation of LLMs capable of scientific-level causal reasoning.
Photo
Here are two photos of me: (1) a talk photo and (2) a close-up profile photo. Feel free to select whichever best suits your use case, and don’t hesitate to crop it if necessary.
Our Funders
We are grateful to the organizations that fund our research.


