I am currently a PhD student supervised by Prof Bernhard Schoelkopf at the Max Planck Institute & ETH, mentored by Prof Rada Mihalcea (UMich), and co-supervised through the ELLIS PhD Program by Prof Mrinmaya Sachan and Prof Ryan Cotterell at ETH. My research is supported by the Open Philanthropy AI Fellowship and the Future of Life Institute PhD Fellowship. During my internship at Meta AI, I worked with Prof Mona Diab.
My research focuses on socially responsible NLP through causal inference. Specifically, I am active in (1) promoting NLP for social good, and (2) developing CausalNLP to improve the robustness, fairness, and interpretability of NLP models, as well as to analyze the causes of social problems. (Quick pointers: my bio and publications.)
Job Market Updates
- I’m on the academic job market for Assistant Professorship positions. See my research statement, teaching statement, diversity statement, and CV.
- Here is a poster of my research highlights.
- I will be presenting my research in person at EMNLP in Singapore during Dec 5-10, and at NeurIPS in New Orleans during Dec 11-17. Happy to chat!
- For my NLP for Social Good line of work:
- Check out our overview work: our NLP4SG website, framework of NLP4SG (ACL 2021), tracking how NLP papers align with UN SDGs (EMNLP 2023 Findings), our Workshops on NLP for Positive Impact (EMNLP 2022, ACL 2021), and my talk (video, slides) at EMNLP 2022.
- Check out our specific work on COVID policies (EMNLP 2021 Findings), moral exceptions (NeurIPS 2022 Oral), misinformation in climate claims (EMNLP 2022 Findings), healthcare (AAHPM 2020, JPSM 2020), and policymaking (book chapter, 2023).
- For my CausalNLP line of work:
- For my efforts to establish a global supportive network for NLP researchers: Check out our ACL Year-Round Mentorship (a network of 650+ mentees and 90+ NLP mentors), and my list of useful resources for career advice in NLP/AI.
For Master's and undergraduate students looking for research experience: I will probably open new slots after the end of March next year. You can check the eligibility criteria and apply here.
When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
Zhijing Jin*, Sydney Levine*, Fernando Gonzalez*, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan†, Rada Mihalcea†, Josh Tenenbaum†, Bernhard Schoelkopf†
NeurIPS 2022 (Oral) / CogSci 2022 (Disciplinary Diversity and Integration Award) / Paper / Code / Slides / 5-Min Talk@NeurIPS / Twitter Thread / Invited Talk@CyberValley, Germany
Natural Language Processing for Policymaking
Zhijing Jin, Rada Mihalcea
Book Chapter of Handbook of Computational Social Science for Policy 2023 (European Commission Joint Research Center)
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
Di Jin*, Zhijing Jin*, Joey Tianyi Zhou, Peter Szolovits
AAAI 2020 (acceptance rate: 20.6%) / Paper / Code / Slides / Poster / MIT News / ACM TechNews / WeVolver / VentureBeat / Synced (Chinese)
To follow talks on my research, please check out my YouTube channel.