Zhijing Jin

Assistant Professor at the University of Toronto

Email: zjin@cs.toronto.edu      Research: Google Scholar | CV
Social Media: 𝕏 @ZhijingJin | 🦋 ZhijingJin (Pronounced like “G-Gin Gin”)

I am an incoming Assistant Professor at the University of Toronto, starting in Fall 2025, and currently a postdoc at the Max Planck Institute with Bernhard Schölkopf, based in Europe. I will also be a CIFAR AI Chair, an ELLIS advisor, and a faculty member at the Vector Institute.

My research areas are Large Language Models (LLMs), Natural Language Processing (NLP), and Causal Inference. I work on Causal Reasoning with LLMs (Corr2Cause, CLadder, Quriosity & Survey), Multi-Agent AI Safety (GovSim, SanctSim & my OECD talk), Moral Reasoning in LLMs (TrolleyProblems, MoralExceptQA), Mechanistic Interpretability (CompMechs, Mem vs Reasoning), and Adversarial Attacks (TextFooler, RouterAttack). My technical research contributes to AI Safety and AI for Science.

I am the recipient of 3 Rising Star awards, 2 Best Paper Awards at NeurIPS 2024 workshops, and several fellowships from Open Philanthropy and the Future of Life Institute. My work has been covered by CHIP Magazine, WIRED, and MIT News. My research is funded by NSERC, Schmidt Sciences, MPI, UofT, and the Cooperative AI Foundation.

Our Jinesis AI Lab

We conduct frontier research on AI, large language models, and causal reasoning. We regularly have openings for student researchers (at UofT, ETH, UMich, and many other affiliations), and we provide remote research mentorship for people at any career stage, anywhere in the world.

Yongjin Yang (PhD)

Multi-Agent LLMs | AI Alignment

Rohan Subramani (PhD)

AI Safety | LLM Agents

Andrei Muresanu (PhD)

Interpretability | AI Safety

Yahang Qi (PhD)

Causality | LLMs | Statistics

David Guzman (MSc)

Multi-Agent LLMs | LLM Bias

Samuel Simko (MSc)

Causal LLMs | Jailbreaking

Steffen Backmann (MSc)

Multi-Agent LLMs | Game Theory

Shrey Mittal (MSc)

Multi-Agent LLMs | Moral Reasoning in LLMs | Mechanistic Interpretability

Irene Strauss (MSc)

Causal LLMs | Philosophy

Punya Syon Pandey (BS)

Jailbreaking | Finetuning

Jiarui Liu (Mentee)

Socially Responsible LLMs

Paul He (BS)

Causal LLMs | Formal Language

Check out the complete list of students and alumni on my CV.

*The Jinesis AI Lab is pronounced as “Genesis”, in memory of Prof. Patrick Winston.

Research Overview

My technical work focuses on causal inference methods for NLP, specifically to improve the robustness [1,2,3,4], interpretability [4,5], and causal/logical reasoning [6,7,8,9] of LLMs. See my Tutorial@NeurIPS 2024, Keynote@EMNLP 2023 BlackboxNLP Workshop, and Tutorial@EMNLP 2022.

I also extend the broader impact of Causal NLP to social good applications, with foundational work on the NLP for Social Good (NLP4SG) framework [10,11] (MIT News), social policy analysis [12,13], gender bias [14,15], and healthcare [16,17,18]. See my Talk@EMNLP 2022 NLP for Positive Impact Workshop, and the 5 related workshops I have co-organized.

For community service, I co-organize the ACL Year-Round Mentorship program (a network of 650+ mentees and 90+ NLP mentors), and I provide research guidance [19] and career advice.

“Harness the power of AI to make the world a better place.”

Vision of the Jinesis AI Lab@UToronto

Founded in 2025

Watch & Read

Keep in Contact

Stay posted on our latest research & activities by subscribing to the mailing list.