[Photo: Mt Baldy, CA — August 17, 2019]






Eleanor is the founder & CEO of AIWaves Inc. She was a Ph.D. researcher at the Institute of Machine Learning at ETH Zürich, where she was supervised by Ryan Cotterell and Mrinmaya Sachan. She was in the Direct Doctorate program and also obtained her Master's in Computer Science at ETH Zürich. Previously, she did her undergraduate studies at Zhejiang University's Chu Kochen Honors College and spent some time at UCLA, supervised by Kai-Wei Chang. Before founding AIWaves, she was fortunate enough to work at Microsoft Research Asia (MSRA) in Beijing and the National Institute of Advanced Industrial Science and Technology (AIST) in Tokyo, collaborating with brilliant minds like Ming Zhou, Dongdong Zhang, and Hiroya Takamura.


Her research focuses on Natural Language Processing and Machine Learning. At the moment, she is mainly interested in generating and understanding long-form structured content (e.g., novels and dialogues). To this end, she has recently been dabbling in interactive generation, controlled text generation, document-level machine translation, coreference resolution, and coherence modeling. She is also a big fan of fun applications of natural language generation (ChatGPT, lyrics generation, etc.).

When Eleanor is not busy honing large language models, she indulges in hobbies like skiing, hiking, playing the piano, and learning new languages.

Drop her a line if you're up for building a cool NLP product together.

  • Google Scholar
  • GitHub
  • Twitter
  • LinkedIn
  • Blog


  • [Oct. 2023] 🎉 Launched our first writing agent product Wawawrite! Give it a try if you speak Chinese!
  • [Oct. 2023] 🎉 AIWaves secured a multimillion-dollar Pre-A funding, led by BlueRun Ventures China.
  • [Sept. 2023] 🎉 We released Agents, and it is trending on GitHub!
  • [May 2023] 🎉 Two papers submitted to NeurIPS 2023:
    • RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. [demo]

    • Efficient Prompting via Dynamic In-Context Learning. 

  • [May 2023] 🎉 Two papers accepted at ACL 2023:

    • Don't Group, Just Rescore: A Simpler Alternative to Constrained Beam Search.

    • Discourse-Centric Evaluation of Machine Translation with a Densely Annotated Parallel Corpus.

  • [May 2023] 🎉 AIWaves secured a multi-million-yuan angel investment, led by Ofound Ventures.

  • [Apr. 2023] 🎉 One paper accepted at ICML 2023:

    • Controlled Text Generation with Natural Language Instructions.

  • [Apr. 2023] Founded AIWaves Inc., focusing on building AI Agents for Content Creation!

  • [Feb. 2023] Back to Microsoft Research Asia! Ready for some serious waves with ChatGPT!

  • [Jan. 2023] 🎉 One paper accepted at EACL 2023:

    • Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference.

  • [Dec. 2022] Attending EMNLP 2022 in Abu Dhabi🇦🇪!

  • [Oct. 2022] 🎉 One paper accepted at EMNLP 2022:

    • Autoregressive Structured Prediction with Language Models.

  • [Sept. 2022] Invited talk at the University of Tokyo, hosted by Miyao Group.

  • [Sept. 2022] Invited talk at Tokyo Institute of Technology, hosted by Okazaki Lab.

  • [August 2022] Visiting the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, working with Dr. Hiroya Takamura on building language models for finance.

  • [July 2022] Invited talk at TextShuttle.

  • [July 2022] Attending NAACL 2022 in Seattle!

  • [May 2022] 🎉 Two papers (both oral) accepted at NAACL 2022:

    • BlonDe: An Automatic Evaluation Metric for Document-level Machine Translation.

    • A Structured Span Selector.

  • [March 2022] TAing for Advanced Formal Language Theory (263-5352-00L).

  • [Feb. 2022] 🎉 Officially got my MSc degree in CS. Glad to be full-time on NLP research & teaching!

  • [Sept. 2020] Joining Rycolab and Mrinmaya's Lab!



BlonDe: An Automatic Evaluation Metric for Document-level Machine Translation.

Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Jian Yang, Haoyang Huang, Rico Sennrich, Mrinmaya Sachan, Ryan Cotterell, Ming Zhou. [NAACL 2022 oral]

Keywords: machine translation; evaluation

A Structured Span Selector.

Tianyu Liu, Yuchen Eleanor Jiang, Mrinmaya Sachan, Ryan Cotterell. [NAACL 2022 oral]

  • A structured model which directly learns to select an optimal set of spans for various span selection problems, e.g. coreference resolution and semantic role labeling. [pdf][code]

Keywords: coreference resolution; semantic role labeling

Autoregressive Structured Prediction with Language Models.

Tianyu Liu, Yuchen Eleanor Jiang, Nicholas Monath, Mrinmaya Sachan, Ryan Cotterell. [EMNLP 2022 findings]

  • An approach to model structures as sequences of actions in an autoregressive manner with pretrained language models, allowing in-structure dependencies to be learned without any loss. [pdf][code]

Keywords: entity and relation extraction; coreference resolution; NER

Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference.

Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, Mrinmaya Sachan. [EACL 2023 oral]

  • We propose a better approach to leverage monolingual data to boost machine translation performance, i.e. leveraging the metric estimation task for pre-training a quality estimation model.  [pdf][code]

Keywords: machine translation; quality estimation

A Bilingual Parallel Corpus with Discourse Annotations.

Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Ryan Cotterell, Mrinmaya Sachan.

  • The BWB corpus consists of Chinese novels translated by experts into English, and the annotated test set is designed to probe the ability of machine translation systems to model various discourse phenomena. [pdf][dataset]

Keywords: machine translation; discourse; narrative

Investigating the Role of Centering Theory in Neural Coreference Resolution.

Yuchen Eleanor Jiang, Mrinmaya Sachan, Ryan Cotterell. 

  • We investigate the connection between centering theory (the best-known linguistic model for discourse coherence) and modern coreference resolution systems. [pdf][code]

Keywords: coreference resolution; discourse coherence

Deconstructing the Reading Time--Surprisal Relationship.

Yuchen Eleanor Jiang, Clara Isabel Meister, Tiago Pimentel, Tianyu Liu, Mrinmaya Sachan, Ryan D Cotterell, Roger P. Levy. 

  • We investigate the non-linear relationship between surprisal (a measure of a language model's predictive power) and cognitive processing load. [pdf][code]

Keywords: reading comprehension; human language processing

Learning Directional Sentence-Pair Embedding for Natural Language Reasoning.

Yuchen Jiang, Zhenxin Xiao, Kai-Wei Chang. [AAAI 2020 (SA)]

  • A mutual attention mechanism for modeling directional inter-sentence relations. [pdf][code]

Keywords: natural language reasoning

Learning Language-agnostic Entity Prototype for Zero-shot Cross-lingual Entity Linking.

Haihong Yang*, Yuchen Jiang*, Zhongkai Hu, Wei Zhang, Yangbin Shi, Boxing Chen, Huajun Chen. [arXiv]

  • A cross-lingual entity linking model and a knowledge-augmented dataset for cross-lingual entity linking. [pdf][code][dataset]

Keywords: entity linking; cross-lingual language model

BOSH: An efficient meta-algorithm for decision-based attacks.

Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh. [arXiv 2019]

  • An efficient meta-algorithm, which improves existing hard-label black-box attack algorithms through Bayesian Optimization (BO) and Successive Halving (SH). [pdf][code]

Keywords: machine learning; optimization; robustness 
