Mt. Baldy, CA
August 17, 2019

"THE BEST WAY TO PREDICT THE FUTURE IS TO IMPLEMENT IT."

Yuchen Eleanor Jiang

ABOUT ME

Eleanor is the founder & CEO of AIWaves Inc. She was a Ph.D. researcher at the Institute for Machine Learning at ETH Zürich, where she was supervised by Ryan Cotterell and Mrinmaya Sachan. She was in the Direct Doctorate program and also obtained her Master's in Computer Science at ETH Zürich. Previously, she did her undergraduate studies at Zhejiang University's Chu Kochen Honors College and spent some time at UCLA, supervised by Kai-Wei Chang. Before founding AIWaves, she was fortunate enough to work at Microsoft Research Asia (MSRA) in Beijing and the National Institute of Advanced Industrial Science and Technology (AIST) in Tokyo, collaborating with brilliant minds like Ming Zhou, Dongdong Zhang, and Hiroya Takamura.

 

Her research focuses on Natural Language Processing and Machine Learning. At the moment, she is mainly interested in generating and understanding long-form structured content (e.g., novels, essays, and blogs). To this end, she has recently been working on long-form text generation, controlled text generation, document-level machine translation, and creative writing. She is also a big fan of fun applications of natural language generation (lyrics generation, etc.).

When Eleanor is not busy honing large language models, she indulges in hobbies like skiing, hiking, playing the piano, and learning new languages.

Drop her a line at eleanorjiang630@gmail.com if you're up for building a cool NLP product together.

  • Google Scholar
  • GitHub
  • Twitter
  • LinkedIn
  • Blog

NEWS


Publications

BlonDe: An Automatic Evaluation Metric for Document-level Machine Translation.

Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Jian Yang, Haoyang Huang, Rico Sennrich, Mrinmaya Sachan, Ryan Cotterell, Ming Zhou. [NAACL 2022 oral]

Keywords: machine translation; evaluation

A Structured Span Selector.

Tianyu Liu, Yuchen Eleanor Jiang, Mrinmaya Sachan, Ryan Cotterell. [NAACL 2022 oral]

  • A structured model which directly learns to select an optimal set of spans for various span selection problems, e.g. coreference resolution and semantic role labeling. [pdf][code]

Keywords: coreference resolution; semantic role labeling

Autoregressive Structured Prediction with Language Models

Tianyu Liu, Yuchen Eleanor Jiang, Nicholas Monath, Mrinmaya Sachan, Ryan Cotterell. [EMNLP 2022 findings]

  • An approach to model structures as sequences of actions in an autoregressive manner with pretrained language models, allowing in-structure dependencies to be learned without any loss. [pdf][code]

Keywords: entity and relation extraction; coreference resolution; NER

Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference

Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, Mrinmaya Sachan. [EACL 2023 oral]

  • We propose an approach that leverages monolingual data to boost machine translation quality estimation: pre-training a quality estimation model on the metric estimation task. [pdf][code]

Keywords: machine translation; quality estimation

A Bilingual Parallel Corpus with Discourse Annotations

Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Ryan Cotterell, Mrinmaya Sachan.

  • The BWB corpus consists of Chinese novels translated by experts into English, and the annotated test set is designed to probe the ability of machine translation systems to model various discourse phenomena. [pdf][dataset]

Keywords: machine translation; discourse; narrative

Investigating the Role of Centering Theory in Neural Coreference Resolution.

Yuchen Eleanor Jiang, Mrinmaya Sachan, Ryan Cotterell. 

  • We investigate the connection between centering theory (the best-known linguistic model for discourse coherence) and modern coreference resolution systems. [pdf][code]

Keywords: coreference resolution; discourse coherence

Deconstructing the Reading Time--Surprisal Relationship.

Yuchen Eleanor Jiang, Clara Isabel Meister, Tiago Pimentel, Tianyu Liu, Mrinmaya Sachan, Ryan Cotterell, Roger P. Levy. 

  • We investigate the non-linear relationship between surprisal (a language model's predictive uncertainty about a word) and human cognitive processing load as measured by reading times. [pdf][code]

Keywords: reading comprehension; human language processing

Learning Directional Sentence-Pair Embedding for Natural Language Reasoning.

Yuchen Jiang, Zhenxin Xiao, Kai-Wei Chang. [AAAI 2020 (SA)]

  • A mutual attention mechanism for modeling directional inter-sentence relations. [pdf][code]

Keywords: natural language reasoning

Learning Language-agnostic Entity Prototype for Zero-shot Cross-lingual Entity Linking.

Haihong Yang*, Yuchen Jiang*, Zhongkai Hu, Wei Zhang, Yangbin Shi, Boxing Chen, Huajun Chen. [Arxiv]

  • A cross-lingual entity linking model and a knowledge-augmented dataset for cross-lingual entity linking. [pdf][code][dataset]

Keywords: entity linking; cross-lingual language model

BOSH: An efficient meta-algorithm for decision-based attacks.

Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh. [Arxiv 2019]

  • An efficient meta-algorithm, which improves existing hard-label black-box attack algorithms through Bayesian Optimization (BO) and Successive Halving (SH). [pdf][code]

Keywords: machine learning; optimization; robustness 
