I am a Ph.D. student under the supervision of Prof. Yoav Goldberg at the Natural Language Processing lab of Bar-Ilan University.
My research interests are mainly in explainable artificial intelligence and neural network interpretability, with a focus on natural language processing. I also work on learning from imperfect supervision, such as positive-unlabeled learning, in the context of NLP.
- Mar 2021: Uploaded a new paper, “Contrastive Explanations for Model Interpretability”, following my internship at AI2’s MOSAIC team.
- Jan 2021: “Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning” has been accepted to EACL 2021 :)
- Dec 2020: Two new papers accepted :)) “Aligning Faithful Interpretations with their Social Attribution” to TACL 2020, and “Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI” to FAccT 2021! (camera-ready versions available)
- Nov 2020: Our paper “Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals” was accepted to TACL 2020!
- Oct 2020: I gave a talk about “Formalizing Properties of Interpretability in NLP” at the Data Science Bond seminar, Microsoft.
- Oct 2020: Uploaded a new paper on Formalizing Trust in AI.
- Oct 2020: I’ve started my internship on contrastive explainability at AI2’s MOSAIC team.
- Sep 2020: “Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data” was accepted to EMNLP 2020!
- Aug 2020: I presented our paper on updating task-oriented dialogue systems using logs (video) at the KDD Converse workshop, KDD 2020.
- Jul 2020: I presented our survey-position paper on faithful interpretability at ACL 2020.
- Jun 2020: We uploaded two new papers: “Aligning Faithful Interpretations with their Social Attribution” and “Amnesic Probing of Linguistic Properties and MLM Predictions”.