
NAACL 2019 Highlights


This post discusses highlights of NAACL 2019. It covers transfer learning, common sense reasoning, natural language generation, bias, non-English languages, and diversity and inclusion.


Update 19.04.20: Added a translation of this post in Spanish.

This post discusses highlights of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019).

You can find past highlights of conferences here. The conference accepted 424 papers (which you can find here) and had 1,575 participants (see the opening session slides for more details). These are the topics that stood out most for me:

Transfer learning

The room at the Transfer Learning in NLP tutorial (Image credit: Luca Soldaini)

Interest in transfer learning remains high. The Transfer Learning in NLP tutorial (pictured above and organized by Matthew Peters, Swabha Swayamdipta, Thomas Wolf, and me) was packed. NAACL 2019 awarded the best long paper award to BERT, arguably the most impactful recent transfer learning method. Despite its recency, conference papers already leveraged BERT for aspect-based sentiment analysis, review reading comprehension, common sense reasoning, and open-domain question answering.

At the RepEval workshop, Kristina Toutanova discussed how to use transfer learning for open-domain question answering. With appropriate pretraining using an Inverse Cloze Task, the retriever and reader can be fine-tuned directly on QA pairs without an intermediate IR system. This demonstrates that careful initialization + fine-tuning are two key ingredients for transfer learning and work even on challenging tasks. This has also been shown in the past for learning cross-lingual word embeddings and unsupervised MT. She also made the point that single-vector sentence/paragraph representations are very useful for retrieval, and that we should continue to work on them. Overall, there are many exciting research directions in transfer learning in NLP, some of which we outlined at the end of our tutorial. My other highlights include:

  • Single-step Auxiliary loss Transfer Learning (SiATL; Chronopoulou et al.), an "embarrassingly simple" approach that reduces some of the complexity of ULMFiT via multi-task learning and exponentially decaying the auxiliary loss (see the sketch after this list).
  • AutoSeM (Guo et al.), a two-stage pipeline for multi-task learning that uses multi-armed bandits and Bayesian optimization to learn the best auxiliary task and the best task mixing ratio respectively.
  • An evaluation of contextual representations across 16 tasks (Liu et al.) that shows that they are bad at capturing fine-grained linguistic knowledge and that higher layers in RNNs are more task-specific than in Transformers.
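
The auxiliary-loss idea in SiATL can be sketched in a few lines: the language modelling loss is kept as a second term whose weight decays exponentially over training, so the pretrained signal regularizes early fine-tuning without dominating later. A minimal sketch in Python; the decay rate and per-step schedule are my own illustrative assumptions, not the exact formulation from the paper:

```python
def siatl_style_loss(task_loss, lm_loss, step, gamma=0.99):
    """Combine the target-task loss with an auxiliary language modelling loss
    whose weight decays exponentially with the training step.
    gamma and the per-step schedule are illustrative assumptions."""
    aux_weight = gamma ** step
    return task_loss + aux_weight * lm_loss

# Usage inside a training loop (losses would typically be framework tensors):
# loss = siatl_style_loss(classifier_loss, language_model_loss, global_step)
```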

Common sense reasoning

Language modelling is a pretraining task that has been shown to learn generally useful representations at scale. However, there are some things that are simply never written down, even in billions of tokens. Overcoming this reporting bias is a key challenge in adapting language models to more complex tasks. To test reasoning with knowledge that is typically left unsaid, the best resource paper used the common sense knowledge base ConceptNet as "seed". They created CommonsenseQA, a dataset of multiple-choice questions where most answers have the same relation to the target concept (see below).

Example question-answer pairs in CommonsenseQA (Source: Talmor et al.)

This requires the model to use common sense rather than just relational or co-occurrence information to answer the question. BERT achieves 55.9% accuracy on this dataset, and is estimated to reach around 75% with 100k examples, still well below human performance of 88.9%. What does it take to get to those 88.9%? Most likely structured knowledge, interactive and multimodal learning. In his talk at the Workshop on Shortcomings in Vision and Language (SiVL), Yoav Artzi discussed language diversity in grounded NLU, noting that we need to move from synthetic to more realistic images for learning grounded representations.

Another prerequisite for natural language understanding is compositional reasoning. The Deep Learning for Natural Language Inference tutorial discussed natural language inference, a common benchmark for evaluating such forms of reasoning, in depth. I particularly liked the following papers:

  • A label consistency framework for procedural text comprehension (Du et al.) that encourages consistency between predictions from descriptions of the same process. This is a clever way to use intuition and additional data to incorporate an inductive bias into the model.
  • Discrete Reasoning Over the content of Paragraphs (DROP; Dua et al.), which requires models to resolve references in a question and perform discrete operations (e.g. addition, counting, sorting) over multiple referents in the text.

Natural language generation

At the NeuralGen workshop, Graham Neubig discussed methods to optimize a non-differentiable objective function such as BLEU directly, including minimum risk training and REINFORCE, and how to deal with their instability and get them to work. While we had touched on transfer learning for natural language generation (NLG) in our tutorial, Sasha Rush provided many more details and discussed different ways of using language models to improve NLG quality. Another way to improve sample quality is to focus on decoding. Yejin Choi discussed a new sampling method that samples from the top of the distribution and leads to better text quality. She also discussed the generation of fake news and how large pretrained language models such as Grover can be used to defend against them.
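
A minimal sketch of this kind of truncated sampling (often called nucleus or top-p sampling), which samples only from the smallest set of tokens covering most of the probability mass; this is my own illustration rather than the exact method from the talk:

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    """Sample a token id from the head of the distribution:
    the smallest set of tokens whose cumulative probability exceeds p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                    # token ids, most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]                           # keep only the head
    renormed = probs[nucleus] / probs[nucleus].sum()   # renormalize over the nucleus
    return rng.choice(nucleus, p=renormed)

# Toy next-token distribution over a 5-token vocabulary
probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
print(top_p_sample(probs, p=0.9))
```

Lowering p makes generation safer but less diverse, which is exactly the quality-diversity trade-off that the HUSE paper below tries to measure.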

Generative adversarial networks (GANs) are a popular way to generate images, but so far have underperformed for language. The Deep Adversarial Learning for NLP tutorial argued that we should not give up on them, as the unsupervised or self-supervised learning performed by GANs has many applications in NLP.

Another compelling aspect of generation is enabling multiple agents to communicate effectively. Besides providing a window into how language emerges, it may be crucial for interactive learning and for transferring knowledge among agents. Angeliki Lazaridou noted in her SiVL workshop talk that deep reinforcement learning tools seem to work well for this setting, but argued that better biases are needed. In addition, it is still difficult to interface emergent language with natural language.

Trade-offs between quality and diversity of different models (circles) on NLG tasks (Image credit: Hashimoto et al.)

I also enjoyed the following papers:

  • Human Unified with Statistical Evaluation (HUSE; Hashimoto et al.), a new metric for natural language generation that can consider both diversity and quality and yields a Pareto frontier by trading off one for the other (see above). Methods such as temperature annealing result in higher quality, but reduce diversity.
  • Separating planning from realization (Moryossef et al.) can improve the quality of text generated from structured data such as RDF triples, as there are often multiple ways structured information can be realized in text.
  • Decoupling syntax and surface form generation (Cao & Clark) is another way to deal with the underspecification problem of text generation from structured data (in this case, abstract meaning representations).
  • A systematic analysis that probes how useful the visual modality really is for multimodal translation (Caglayan et al.), which was awarded the best short paper award. It observes that models with less textual information rely more strongly on the visual context, contrary to current beliefs.

Bias

The theme of the conference was model bias. The diverse set of keynotes fit this theme very well. The first keynote by Arvind Narayanan in particular highlighted one under-appreciated aspect of bias, namely that we can leverage the bias in our models to improve our understanding of human culture.

On the whole, there is a fine line between desirable and undesirable bias. We often try to encode inductive bias about how the world works, such as objects being invariant to translation. On the other hand, we do not want our models to learn superficial cues or relations that are not part of our presumably idealized notion of the world, such as gender bias. Ultimately, super-human performance should not just entail that models outperform humans quantitatively but also that they are less biased and fallible.

Lastly, we should be aware that technology has lasting impact in the real world. As one vivid example of this, Kieran Snyder recounted in her keynote the time when she had to design a sorting algorithm for Sinhala (see below). Sorting Sinhalese names was crucial for the Sri Lankan government to be able to search for survivors in the aftermath of the 2004 tsunami. Her decision on how to alphabetize the language later became part of an official government policy.

Vowels in Sinhala (Image credit: Omniglot)

Some of my favorite papers on bias include:

  • Debiasing methods only superficially remove bias in word embeddings (Gonen & Goldberg); bias is still reflected in, and can be recovered from, the distances in the debiased embeddings (a rough illustration of this probe follows this list).
  • An evaluation of bias in contextualized word embeddings (Zhao et al.) finds that ELMo systematically and unequally encodes gender information and, more importantly, that this bias is inherited by downstream models, such as a coreference system.
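
A rough illustration of the kind of probe Gonen & Goldberg use: even after debiasing, a simple classifier trained on the embeddings of words that were originally male- or female-associated can often still separate the two groups. The word lists and the dictionary of debiased vectors below are hypothetical placeholders, not the paper's exact setup:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def bias_recoverable(debiased_vectors, male_assoc_words, female_assoc_words):
    """Estimate how well previously gender-associated words can be separated
    using only their *debiased* embeddings (in the spirit of Gonen & Goldberg).
    debiased_vectors: dict mapping word -> embedding (np.ndarray)."""
    words = male_assoc_words + female_assoc_words
    X = np.stack([debiased_vectors[w] for w in words])
    y = np.array([1] * len(male_assoc_words) + [0] * len(female_assoc_words))
    # High cross-validated accuracy means the bias is still recoverable.
    return cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
```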

Non-English languages

On the subject of diverse languages, throughout the conference the "Bender Rule", named after Emily Bender who is known for her advocacy for multilingual language processing, among other things, was frequently invoked after presentations. In brief, the rule states: "Always name the language(s) you're working on." Not explicitly identifying the language under consideration leads to English being perceived as the default and as a proxy for other languages, which is problematic in many ways (see Emily's slides for a thorough rationale).

In this vein, some of my favorite papers from the conference study how the performance of our models changes as we apply them to different languages:

  • Polyglot contextual representations (Mulcaire et al.) that are trained on English and an additional language by initializing word embeddings with cross-lingual representations. For some settings (Chinese SRL, Arabic NER), cross-lingual training yields large improvements.
  • A study on the transfer of dependency parsers trained on English to 30 other languages (Ahmad et al.) finds that RNNs trained on English transfer well to languages close to English, but self-attention models transfer better to distant languages.
  • An unsupervised POS tagger for low-resource languages (Cardenas et al.) that "deciphers" Brown cluster ids in order to generate the POS sequence and achieves state-of-the-art performance on Sinhalese (see above).

Diversity and inclusion

Badge stickers at NAACL 2019 (Image credit: Natalie Schluter)

As the community is growing, it is important that new members feel included and that their voices are heard. NAACL 2019 put in place a number of initiatives in this regard, from thoughtful touches such as badge stickers (see above) to matching newcomers with mentors and "big siblings", to fundamental ones such as childcare (see below) and live captions. I particularly appreciated the live tweeting, which made the conference accessible to people who could not attend.

Childcare room at NAACL 2019 (Image credit: Kieran Snyder)

Translations

This post has been translated into Spanish (see the update above).

Cover image: The room at the Transfer Learning in NLP tutorial (Image credit: Dan Simonson)


