I am an enthusiastic AI researcher and advocate for social good. I am currently a second-year PhD student (started in 2020), supervised by Prof Bernhard Schoelkopf at the Max Planck Institute & ETH, mentored by Prof Rada Mihalcea (UMich), and co-supervised through the ELLIS PhD Program by Prof Mrinmaya Sachan and Prof Ryan Cotterell at ETH.
My research has two goals: (1) to promote NLP for social good, and (2) to improve AI by connecting NLP with causality. My career goal is to become a professor in NLP. (Quick pointers: my bio, mentors & mentees, and publications.)
- If you are a master's or undergraduate student looking for research experience, feel free to read the eligibility criteria in this application form and fill it out. My current projects need a student or collaborator who is highly familiar with Twitter (e.g., what gets popular, what gets banned, and what the controversial social topics are); a technical background is not required. Feel free to email me directly if you are a good fit.
- For my NLP for Social Good line of work (5+ papers, 3 workshops):
- Check out my position papers: “How Good Is NLP?” (ACL 2021 Findings) and NLP Alignment for AI Safety (2021).
- Check out my research on NLP for COVID policies (EMNLP 2021 Findings), for misinformation in climate claims (arXiv 2022), for healthcare (AAHPM 2020, JPSM 2020), and for political analysis (book chapter, 2021), with an overview here.
- For my Causality+NLP line of work (6+ papers; 3 workshops, conferences, and tutorials):
- Check out (1) my new arXiv paper on logical fallacy detection,
- (2) causal insights to improve NLP modeling (causal direction of data collection matters, EMNLP 2021 oral),
- (3) causal methods to analyze linguistic phenomena (Slangvolution, ACL 2022) and policies (COVID policies, EMNLP 2021 Findings).
- Upcoming: Stay tuned for my CausalNLP tutorial at EMNLP 2022.
- For my efforts to establish a global supportive network for NLP researchers: Check out our ACL Year-Round Mentorship (a network of 650+ mentees and 90+ NLP mentors), my list of useful resources for career advice in NLP/AI, and other resources.
[Robust] Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
Di Jin*, Zhijing Jin*, Joey Tianyi Zhou, Peter Szolovits
AAAI 2020 (acceptance rate: 20.6%) / Paper / Code / Slides / Poster / MIT News / ACM TechNews / WeVolver / VentureBeat / Synced (Chinese)
[CogSci] Prediction of Story Ending with Optimistic and Pessimistic Mindsets
Zhijing Jin, Prof. Patrick Winston (1943-2019)
[Tool] Word Representations for Computing Semantic Relatedness
Zhijing Jin, Xiaolong Gong, Linpeng Huang
Developed in support of the AAAI 2018 paper.
Tutorial & Survey
[NLP] Graph Neural Net Applications for Natural Language Processing
Xipeng Qiu, Zhijing Jin, Xiangkun Hu