Xiaodan Zhu
Queen's University
52 Publications
25 H-index
2,515 Citations
Publications (52)
Nov 3, 2019 in IJCNLP (International Joint Conference on Natural Language Processing)
#1 Jia-Chen Gu (USTC: University of Science and Technology of China), H-index: 2
#2 Zhen-Hua Ling (USTC: University of Science and Technology of China), H-index: 25
Last: Quan Liu (USTC: University of Science and Technology of China), H-index: 4
(4 authors in total)
#1 Parinaz Sobhani (U of O: University of Ottawa), H-index: 10
#2 Diana Inkpen (U of O: University of Ottawa), H-index: 25
Last: Xiaodan Zhu (Queen's University), H-index: 25
(3 authors in total)
3 Citations
#1 Yu-Ping Ruan, H-index: 2
#2 Xiaodan Zhu, H-index: 25
Last: Si Wei, H-index: 15
(6 authors in total)
The Winograd Schema Challenge (WSC) was proposed as an AI-hard problem for testing computers' intelligence in common-sense representation and reasoning. This paper presents the new state-of-the-art on WSC, achieving an accuracy of 71.1%. We demonstrate that the leading performance benefits from jointly modelling sentence structures, utilizing knowledge learned from cutting-edge pretraining models, and performing fine-tuning. We conduct detailed analyses, showing that fine-tuning is critical for achiev...
2 Citations
#1 Yu-Ping Ruan (USTC: University of Science and Technology of China), H-index: 2
#2 Zhen-Hua Ling (USTC: University of Science and Technology of China), H-index: 25
Last: Xiaodan Zhu (Queen's University), H-index: 25
(5 authors in total)
We present our work on Track 2 of the Dialog System Technology Challenges 7 (DSTC7). DSTC7 Track 2 aims to evaluate the response generation of fully data-driven conversation models in knowledge-grounded settings, which provide contextually relevant factual texts. Sequence-to-Sequence models have been widely used for end-to-end generative conversation modelling and have achieved impressive results. However, in previous studies they tend to output dull and repeated responses. Our work aims to...
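The knowledge-grounded setting described above pairs a dialogue context with relevant factual texts. A minimal sketch of that pairing step, assuming a crude token-overlap scorer as a stand-in for the attention a neural model would learn (the facts and scorer here are hypothetical, not the DSTC7 system):

```python
def select_fact(context, facts):
    """Pick the fact with the largest token overlap with the dialogue
    context; a crude stand-in for learned, knowledge-grounded attention."""
    ctx = set(context.lower().split())
    return max(facts, key=lambda f: len(ctx & set(f.lower().split())))

facts = [
    "the eiffel tower is in paris",
    "mount fuji is the highest mountain in japan",
]
# The context mentions "paris", so the first fact wins the overlap score.
print(select_fact("have you ever visited paris", facts))
```

A real system would replace the overlap score with a learned relevance model and condition the response decoder on the selected facts.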
Jan 1, 2019 in NAACL (North American Chapter of the Association for Computational Linguistics)
#1 Jonathan May (ISI: Information Sciences Institute), H-index: 15
#2 Ekaterina Shutova (UvA: University of Amsterdam), H-index: 12
Last: Saif M. Mohammad (National Research Council), H-index: 35
(6 authors in total)
Feb 15, 2018 in ICLR (International Conference on Learning Representations)
#1 Qian Chen (USTC: University of Science and Technology of China), H-index: 11
#2 Xiaodan Zhu (Queen's University), H-index: 25
Last: Diana Inkpen (U of O: University of Ottawa), H-index: 25
(4 authors in total)
Modeling informal inference in natural language is very challenging. With the recent availability of large annotated datasets, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), and such models have achieved state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models ...
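The external-knowledge question raised in the abstract can be illustrated with a toy sketch. Everything here is hypothetical: a tiny antonym lexicon stands in for the lexical-knowledge resources a real NLI model would consume, and the rules stand in for learned inference:

```python
# Hypothetical external knowledge: a tiny antonym lexicon.
ANTONYMS = {("hot", "cold"), ("open", "closed"), ("big", "small")}

def toy_nli(premise, hypothesis):
    """Label a premise/hypothesis pair with a crude heuristic:
    contradiction if an antonym pair spans the two sentences,
    entailment if every hypothesis token appears in the premise,
    neutral otherwise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    for a, b in ANTONYMS:
        if (a in p and b in h) or (b in p and a in h):
            return "contradiction"
    if h <= p:
        return "entailment"
    return "neutral"

print(toy_nli("the door is open", "the door is closed"))        # contradiction
print(toy_nli("the big red door is open", "the door is open"))  # entailment
print(toy_nli("the door is open", "the cat sleeps"))            # neutral
```

The point of the sketch is only that pairs like open/closed cannot be decided from word overlap alone; some signal has to come from outside the training data, which is the role external knowledge plays in the paper's setting.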
#1 Junbei Zhang (USTC: University of Science and Technology of China), H-index: 1
#2 Xiaodan Zhu (Queen's University), H-index: 25
Last: Hui Jiang (York University), H-index: 27
(7 authors in total)
Neural networks have recently been intensively explored for machine comprehension and question answering. Central to these problems is the involvement of questions and hence the understanding of them: questions play a key role in machine comprehension, question answering, and many other problems (e.g., information retrieval and query-based summarization). In this paper, we explore better question understanding and representation. First, we propose an enriched question representation by encoding sy...
1 Citation
#1 Qian Chen, H-index: 11
#2 Xiaodan Zhu, H-index: 25
Last: Si Wei, H-index: 15
(5 authors in total)
17 Citations
#1 Qian Chen, H-index: 11
#2 Xiaodan Zhu, H-index: 25
Last: Diana Inkpen, H-index: 25
(6 authors in total)
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test set (also attaining a 74.9% accuracy), d...
32 Citations
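The RepEval setup above scores a fixed-length sentence vector by how well it supports an NLI classifier. A minimal sketch of that pipeline, where the hash-based "embeddings" and mean pooling are hypothetical placeholders for the neural encoder, and the matching features are a common heuristic for sentence-vector pairs:

```python
import hashlib

DIM = 8  # toy embedding width; real systems use hundreds of dimensions

def word_vec(w):
    # Deterministic pseudo-embedding: hash bytes scaled to [-0.5, 0.5].
    digest = hashlib.md5(w.encode()).digest()
    return [b / 255.0 - 0.5 for b in digest[:DIM]]

def sentence_vec(sentence):
    # Mean-pool word vectors into one fixed-length sentence vector.
    vecs = [word_vec(w) for w in sentence.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def match_features(u, v):
    # Combine a sentence-vector pair for a downstream NLI classifier:
    # concatenation, absolute difference, and element-wise product.
    return u + v + [abs(a - b) for a, b in zip(u, v)] \
                 + [a * b for a, b in zip(u, v)]

u = sentence_vec("a man is playing guitar")
v = sentence_vec("a man is playing music")
feats = match_features(u, v)
print(len(u), len(feats))  # 8 32
```

In the shared-task setting, `feats` would feed a trained classifier over the three NLI labels; the evaluation then measures how much of the inference signal survives compression into the fixed-length vectors.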
Jan 1, 2017 in EACL (Conference of the European Chapter of the Association for Computational Linguistics)
#1 Parinaz Sobhani (U of O: University of Ottawa), H-index: 10
#2 Diana Inkpen (U of O: University of Ottawa), H-index: 25
Last: Xiaodan Zhu (National Research Council), H-index: 25
(3 authors in total)
12 Citations