Zhijing Jin

Incoming Assistant Professor at the University of Toronto

Email: zjin.admin@cs.toronto.edu      Research: Google Scholar | CV
𝕏: @ZhijingJin 🦋: ZhijingJin (Pronounced like “G-Gin Gin”)

I am an incoming Assistant Professor at the University of Toronto and currently a research scientist at the Max Planck Institute with Bernhard Schoelkopf, based in Europe. I am also a CIFAR AI Chair, a faculty member at the Vector Institute, an ELLIS advisor, and a faculty affiliate at the Schwartz Reisman Institute.

My research areas are Large Language Models (LLMs), Causal Inference, and Responsible AI. Specifically, my vertical work focuses on Causal Reasoning with LLMs (Causal AI Scientist, Corr2Cause, CLadder, Quriosity, Survey), Multi-Agent LLMs (GovSim, SanctSim, MoralSim [Slides] [Blogpost]), and Moral Reasoning in LLMs (TrolleyProblems, MoralLens, MoralExceptQA). Supporting this vertical work, my horizontal work brings in Mechanistic Interpretability (CompMechs, Mem vs Reasoning) and Adversarial Robustness (CRL Defense, TextFooler, AccidentalVulnerability, RouterAttack). My research contributes to AI Safety and AI for Science. Here are my slides on NLP and Democracy Defense.

I am the recipient of 3 Rising Star awards, 2 Best Paper Awards at NeurIPS 2024 Workshops, and several fellowships from Open Philanthropy and the Future of Life Institute. In the international academic community, I serve as a co-chair of the ACL Ethics Committee, a co-organizer of the ACL Year-Round Mentorship, and a main supporter of the NLP for Positive Impact Workshop series. My work has been covered by CHIP Magazine, WIRED, and MIT News.

Our Jinesis AI Lab

We conduct frontier research on AI, Large Language Models, and Causality. See application details below.

Yongjin Yang (PhD)

Multi-Agent LLMs | AI Alignment

Rohan Subramani (PhD)

AI Safety | LLM Agents

Ryan Faulkner (PhD)

Multi-Agent LLMs

Andrei Muresanu (PhD)

Interpretability | AI Safety

Yahang Qi (PhD)

Causality | LLMs

Furkan Danisman (PhD)

LLMs | Statistics

Samuel Simko (RA)

Adversarial Robustness


David Guzman (RA)

Multi-Agent LLMs | LLM Bias

Changling Li (MSc)

Multi-Agent LLMs | AI Safety

Pepijn Cobben (MSc)

Multi-Agent LLMs | Game Theory

Jiarui Liu (Mentee)

Socially Responsible LLMs

Angelo Huang (MSc)

Multi-Agent LLMs

Vishal Verma (MSc)

Causal LLMs

Punya Syon Pandey (BS)

Jailbreaking | RL

Irene Strauss (MSc)

LLMs | Philosophy

Sawal Acharya (MSc)

Causal LLMs

Joeun Yook (BS)

LLMs and Democracy Defense

Check out the complete list of students and alumni on my CV.

*“Jinesis” is pronounced like “Genesis”, in memory of Prof. Patrick Winston.

Research Overview

My technical work focuses on causal inference methods for NLP, specifically to address the robustness [1,2,3,4], interpretability [4,5], and causal/logical reasoning [6,7,8,9] of LLMs. See my Tutorial@NeurIPS 2024, Keynote@EMNLP 2023 BlackboxNLP Workshop, and Tutorial@EMNLP 2022.

I also extend the broader impact of Causal NLP to social good applications, with foundational work on the NLP for Social Good (NLP4SG) framework [10,11] (MIT News), social policy analysis [12,13], gender bias [14,15], and healthcare [16,17,18]. See my Talk@EMNLP 2022 NLP for Positive Impact Workshop and the 5 related workshops I’ve co-organized.

For community service, I co-organize the ACL Year-Round Mentorship (a network of 650+ mentees and 90+ NLP mentors) and provide research guidance [19] and career advice.

“Harness the power of AI to make the world a better place.”

Vision of the Jinesis AI Lab@UToronto

Founded in 2025

Watch & Read

Keep in Contact

Stay posted on our latest research & activities by subscribing to the mailing list.