Character-Aware Neural Language Models

Yoon Kim · Yacine Jernite · David Sontag · Alexander Rush

Harvard SEAS · New York University

Code: https://github.com/yoonkim/lstm-char-cnn

Language Model

Language Model (LM): probability distribution over a sequence of words.

p(w_1, …, w_T) for any sequence of length T from a vocabulary V (with w_i ∈ V for all i).

Important for many downstream applications:

machine translation

speech recognition

text generation

Count-based Language Models

By the chain rule, any distribution can be factorized as

p(w_1, …, w_T) = ∏_{t=1}^{T} p(w_t | w_1, …, w_{t−1})

Count-based n-gram language models make a Markov assumption:

p(w_t | w_1, …, w_{t−1}) ≈ p(w_t | w_{t−n}, …, w_{t−1})

Need smoothing to deal with rare n-grams.
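As a concrete illustration, here is a minimal Python sketch of a count-based bigram model (n = 1); add-k smoothing is a deliberately simple stand-in for the stronger Kneser-Ney smoothing used in practice, and all names are illustrative.

    from collections import Counter

    def train_bigram_lm(tokens, k=0.5):
        # Count unigrams and bigrams from the training tokens.
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        vocab_size = len(unigrams)

        def prob(w, prev):
            # p(w | prev) with add-k smoothing to handle unseen bigrams.
            return (bigrams[(prev, w)] + k) / (unigrams[prev] + k * vocab_size)

        return prob

    tokens = "the cat sat on the mat".split()
    p = train_bigram_lm(tokens)
    print(p("cat", "the"))   # smoothed estimate of p(cat | the)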

Neural Language Models

Neural Language Models (NLM)

Represent words as dense vectors in Rn (word embeddings).

w_t ∈ R^{|V|} : One-hot representation of word ∈ V at time t
⇒ x_t = X w_t : Word embedding (X ∈ R^{n×|V|}, n < |V|)

Train a neural net that composes history to predict next word.

p(w_t = j | w_1, …, w_{t−1}) = exp(p_j · g(x_1, …, x_{t−1}) + q_j) / Σ_{j′ ∈ V} exp(p_{j′} · g(x_1, …, x_{t−1}) + q_{j′})
                             = softmax(P g(x_1, …, x_{t−1}) + q)_j

p_j ∈ R^m, q_j ∈ R : Output word embedding/bias for word j ∈ V
g : Composition function
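A minimal NumPy sketch of this output layer, assuming a toy composition function g (the slides leave g abstract; here it is a tanh of the mean history embedding, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    V, n, m = 10_000, 128, 256            # vocab size, embedding dim, hidden dim

    X  = 0.01 * rng.normal(size=(n, V))   # input embeddings (column j is word j)
    P  = 0.01 * rng.normal(size=(V, m))   # output embeddings p_j (row j)
    q  = np.zeros(V)                      # output biases q_j
    Wg = 0.01 * rng.normal(size=(m, n))   # weights of the toy composition g

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def next_word_dist(history_ids):
        xs = X[:, history_ids]             # embed the history: x_1, ..., x_{t-1}
        h = np.tanh(Wg @ xs.mean(axis=1))  # toy g: compress history into R^m
        return softmax(P @ h + q)          # p(w_t = j | history) for all j

    dist = next_word_dist([3, 17, 42])     # arbitrary word ids
    print(dist.shape, dist.sum())          # (10000,) 1.0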

Feed-forward NLM (Bengio, Ducharme, and Vincent 2003)

Recurrent Neural Network LM (Mikolov et al. 2011)

Maintain a hidden state vector ht that is recursively calculated.

h_t = f(W x_t + U h_{t−1} + b)

h_t ∈ R^m : Hidden state at time t (summary of history)
W ∈ R^{m×n} : Input-to-hidden transformation
U ∈ R^{m×m} : Hidden-to-hidden transformation
f(·) : Non-linearity

Apply softmax to h_t.
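A sketch of one step of this recurrence in NumPy (f = tanh here; the softmax projection P, q mirrors the previous slide, and all parameter values are random placeholders):

    import numpy as np

    def rnn_lm_step(x_t, h_prev, W, U, b, P, q):
        # h_t = f(W x_t + U h_{t-1} + b), with f = tanh
        h_t = np.tanh(W @ x_t + U @ h_prev + b)
        logits = P @ h_t + q                 # project h_t to the vocabulary
        e = np.exp(logits - logits.max())    # softmax over next words
        return h_t, e / e.sum()

    rng = np.random.default_rng(0)
    n, m, V = 64, 128, 5000
    W, U, b = rng.normal(size=(m, n)), rng.normal(size=(m, m)), np.zeros(m)
    P, q = rng.normal(size=(V, m)), np.zeros(V)
    h, probs = rnn_lm_step(rng.normal(size=n), np.zeros(m), W, U, b, P, q)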

Word Embeddings (Collobert et al. 2011; Mikolov et al. 2012)

Key ingredient in Neural Language Models.

After training, similar words are close in the vector space.

(Not unique to NLMs)

NLM Performance (on Penn Treebank)

Difficult/expensive to train, but performs well.

Language Model                                       Perplexity
5-gram count-based (Mikolov and Zweig 2012)          141.2
RNN (Mikolov and Zweig 2012)                         124.7
Deep RNN (Pascanu et al. 2013)                       107.5
LSTM (Zaremba, Sutskever, and Vinyals 2014)           78.4

Renewed interest in language modeling.

NLM Issue

Issue: The fundamental unit of information is still the word

Separate embeddings for “trading”, “leading”, “training”, etc.

Separate embeddings for “trading”, “trade”, “trades”, etc.

No parameter sharing across orthographically similar words.

Orthography contains much semantic/syntactic information.

How can we leverage subword information for language modeling?

Previous (NLM-based) Work

Use morphological segmenter as a preprocessing step

unfortunately ⇒ un_PRE + fortunate_STM + ly_SUF

Luong, Socher, and Manning 2013: Recursive Neural Network over morpheme embeddings

Botha and Blunsom 2014: Sum over word/morpheme embeddings

This Work

Main Idea: No morphology, use characters directly.

Convolutional Neural Networks (CNN) (LeCun et al. 1989)

Central to deep learning systems in vision.

Shown to be effective for NLP tasks (Collobert et al. 2011).

CNNs in NLP typically involve temporal (rather than spatial) convolutions over words.

Network Architecture: Overview

[Figure: CharCNN over characters → highway layers → LSTM → softmax over the word vocabulary.]

Character-level CNN (CharCNN)

C ∈ R^{d×l} : Matrix representation of word (of length l)
H ∈ R^{d×w} : Convolutional filter matrix
d : Dimensionality of character embeddings (e.g. 15)
w : Width of convolution filter (e.g. 1–7)

1. Apply a convolution between C and H to obtain a vector f ∈ R^{l−w+1}:

   f[i] = ⟨C[∗, i : i + w − 1], H⟩

   where ⟨A, B⟩ = Tr(AB^T) is the Frobenius inner product.

2. Take the max-over-time (with bias and nonlinearity)

   y = tanh(max_i f[i] + b)

   as the feature corresponding to the filter H (for a particular word).
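The two steps above translate almost line for line into NumPy; a minimal sketch for a single filter (random values stand in for learned embeddings and filters, and the Frobenius inner product is just the elementwise product summed):

    import numpy as np

    def charcnn_feature(C, H, b=0.0):
        # C: (d, l) character embeddings of one word; H: (d, w) filter matrix.
        d, l = C.shape
        w = H.shape[1]
        # Step 1: narrow convolution, f[i] = <C[:, i:i+w], H>, f in R^{l-w+1}
        f = np.array([np.sum(C[:, i:i + w] * H) for i in range(l - w + 1)])
        # Step 2: max-over-time pooling, then bias and nonlinearity
        return np.tanh(f.max() + b)

    rng = np.random.default_rng(0)
    C = rng.normal(size=(15, 9))    # d = 15, "absurdity" has l = 9 characters
    H = rng.normal(size=(15, 3))    # one filter of width w = 3
    print(charcnn_feature(C, H))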

Walkthrough for the word "absurdity", one filter at a time:

C ∈ R^{d×l} : representation of "absurdity"
H ∈ R^{d×w} : convolutional filter matrix of width w = 3

f[1] = ⟨C[∗, 1 : 3], H⟩
f[2] = ⟨C[∗, 2 : 4], H⟩
…
f[l − 2] = ⟨C[∗, l − 2 : l], H⟩

y[1] = max_i f[i]

Each filter picks out a character n-gram. A second filter H′ of width 2 gives

f′[1] = ⟨C[∗, 1 : 2], H′⟩
y[2] = max_i f′[i]

Many filter matrices (25–200) per width (1–7). Add bias, apply nonlinearity.

Before: word embedding → PTB perplexity 85.4
Now: output from CharCNN → PTB perplexity 84.6

CharCNN is slower, but convolution operations are highly optimized on GPUs.

Can we model more complex interactions between the character n-grams picked up by the filters?

Highway Network

y : output from CharCNN

Multilayer Perceptron:
z = g(W y + b)

Highway Network (Srivastava, Greff, and Schmidhuber 2015):
z = t ⊙ g(W_H y + b_H) + (1 − t) ⊙ y

W_H, b_H : Affine transformation
t = σ(W_T y + b_T) : transform gate
1 − t : carry gate

Hierarchical, adaptive composition of character n-grams.
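A one-layer sketch in NumPy (g = ReLU, matching the hyperparameter table in the appendix; initializing b_T negative so the layer starts by carrying its input follows the highway-networks paper, but is an assumption here):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def highway_layer(y, WH, bH, WT, bT):
        t = sigmoid(WT @ y + bT)           # transform gate
        g = np.maximum(0.0, WH @ y + bH)   # g = ReLU
        return t * g + (1.0 - t) * y       # carry gate is 1 - t

    rng = np.random.default_rng(0)
    dim = 8                                # input and output dims must match
    y = rng.normal(size=dim)
    WH = rng.normal(size=(dim, dim))
    WT = rng.normal(size=(dim, dim))
    z = highway_layer(y, np.zeros(dim) + y * 0 + WH @ y * 0 + 0, None, None, None) if False else \
        highway_layer(y, WH, np.zeros(dim), WT, -2.0 * np.ones(dim))
    print(z.shape)                         # (8,)

Note the layer is dimension-preserving: the carry path adds y itself, which is what lets information (and gradients) pass through unchanged.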

[Figure: the output from the CharCNN is the input to the highway layers; the highway output is the input to the LSTM.]

Model                  Perplexity
Word Model             85.4
No Highway Layers      84.6
One MLP Layer          92.6
One Highway Layer      79.7
Two Highway Layers     78.9

No more gains with 2+ layers.

Results: English Penn Treebank

                                                     PPL    Size
KN-5 (Mikolov et al. 2012)                           141.2   2 m
RNN (Mikolov et al. 2012)                            124.7   6 m
Deep RNN (Pascanu et al. 2013)                       107.5   6 m
Sum-Prod Net (Cheng et al. 2014)                     100.0   5 m
LSTM-Medium (Zaremba, Sutskever, and Vinyals 2014)    82.7  20 m
LSTM-Huge (Zaremba, Sutskever, and Vinyals 2014)      78.4  52 m

LSTM-Word-Small                                       97.6   5 m
LSTM-Char-Small                                       92.3   5 m
LSTM-Word-Large                                       85.4  20 m
LSTM-Char-Large                                       78.9  19 m

Data

               Data-s                Data-l
               |V|    |C|    T       |V|     |C|    T
English (En)   10 k   51     1 m     60 k    197    20 m
Czech (Cs)     46 k   101    1 m     206 k   195    17 m
German (De)    37 k   74     1 m     339 k   260    51 m
Spanish (Es)   27 k   72     1 m     152 k   222    56 m
French (Fr)    25 k   76     1 m     137 k   225    57 m
Russian (Ru)   62 k   62     1 m     497 k   111    25 m

|V| = word vocab size, |C| = character vocab size, T = number of tokens in training set.

|V| varies quite a bit by language.

(We effectively use the full vocabulary.)

Baselines

Kneser-Ney LM: Count-based baseline

Word LSTM: Word embeddings as input

Morpheme LBL (Botha and Blunsom 2014)

Input for word k is

x_k + Σ_{j ∈ M_k} m_j

i.e. its word embedding plus the sum of its morpheme embeddings (see the sketch after this list).

Morpheme LSTM: Same input as above, but with LSTM architecture

Morphemes obtained from running an unsupervised morphological tagger, Morfessor Cat-MAP (Creutz and Lagus 2007).
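For concreteness, a tiny sketch of the Botha and Blunsom input (the segmentation dict stands in for Morfessor output, and all embeddings are random placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    word_emb = {"unfortunately": rng.normal(size=n)}
    morph_emb = {m: rng.normal(size=n) for m in ("un", "fortunate", "ly")}
    segments = {"unfortunately": ["un", "fortunate", "ly"]}  # Morfessor output

    def lbl_input(word):
        # x_k plus the sum of the embeddings of word k's morphemes M_k
        return word_emb[word] + sum(morph_emb[m] for m in segments[word])

    print(lbl_input("unfortunately").shape)   # (100,)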

Perplexity on Data-S (1 M Tokens)

[Charts: perplexity results; full numbers are in "Appendix: Results on Data-S".]

Perplexity on Data-L (17-57 M Tokens)

[Chart: perplexity results; full numbers are in "Appendix: Results on Data-L".]

Learned Word Representations (In Vocab)

(Based on cosine similarity)

                  while         his     you            richard    trading

Word              although      your    conservatives  jonathan   advertised
Embedding         letting       her     we             robert     advertising
                  though        my      guys           neil       turnover
                  minute        their   i              nancy      turnover

Characters        chile         this    your           hard       heading
(before highway)  whole         hhs     young          rich       training
                  meanwhile     is      four           richer     reading
                  white         has     youth          richter    leading

Characters        meanwhile     hhs     we             eduard     trade
(after highway)   whole         this    your           gerard     training
                  though        their   doug           edward     traded
                  nevertheless  your    i              carl       trader
Learned Word Representations (OOV)

                  computer-aided   misinformed   looooook

Characters        computer-guided  informed      look
(before highway)  computerized     performed     cook
                  disk-drive       transformed   looks
                  computer         inform        shook

Characters        computer-guided  informed      look
(after highway)   computer-driven  performed     looks
                  computerized     outperformed  looked
                  computer         transformed   looking

Convolutional Layer

Does each filter truly pick out a character n-gram?

[Figure: the character embeddings of "a b s u r d i t y" are concatenated into C; a single filter slides over C; max-over-time pooling gives the single filter output.]

Convolutional Filters

For each filter, visualize 100 substrings with the highest filter response

Character N-gram Representations

Prefixes, Suffixes, Hyphenated, Others

Prefixes: character n-grams that start with the start-of-word character, such as {un, {mis. Suffixes defined similarly (with the end-of-word character).

Conclusion

A character-aware language model that relies only on character-level inputs: CNN over characters + LSTM.

Outperforms strong word/morpheme LSTM baselines.

Much recent work on character inputs:

Santos and Zadrozny 2014: CNN over characters concatenated with word embeddings into CRF.

Zhang and LeCun 2015: Deep CNN over characters for document classification.

Ballesteros, Dyer, and Smith 2015: LSTM over characters for parsing.

Ling et al. 2015: LSTM over characters into another LSTM for language modeling/POS-tagging.

Future Work

Subword information on the output.

As an encoder/decoder in neural machine translation.

CharCNN + Highway layers for representation learning (e.g. as input into word2vec).

Appendix: Performance vs Corpus/Vocab Size

How does relative performance vary as corpus/vocabulary sizes vary?

Experiment on German large dataset:

Use the first T tokens of the training set.

Take the most frequent K words as the vocabulary and replace the rest with <unk>.

Compare % perplexity reduction going from word to character LSTM.

                       Vocabulary Size
                10 k   25 k   50 k   100 k
Training   1 m  17     16     21     –
Size       5 m   8     14     16     21
          10 m   9      9     12     15
          25 m   9      8      9     10

Character model outperforms word model in all scenarios.

Appendix: Hyperparameters

               Small                 Large
CNN     d      15                    15
        w      [1, 2, 3, 4, 5, 6]    [1, 2, 3, 4, 5, 6, 7]
        h      [25 · w]              [min{200, 50 · w}]
        f      tanh                  tanh
HW-Net  l      1                     2
        g      ReLU                  ReLU
LSTM    l      2                     2
        m      300                   650

Appendix: Results on Data-S

               Cs    De    Es    Fr    Ru
B&B    KN-4    545   366   241   274   396
       MLBL    465   296   200   225   304
Small  Word    503   305   212   229   352
       Morph   414   278   197   216   290
       Char    401   260   182   189   278
Large  Word    493   286   200   222   357
       Morph   398   263   177   196   271
       Char    371   239   165   184   261

Appendix: Results on Data-L

               Cs    De    Es    Fr    Ru    En
B&B    KN-4    862   463   219   243   390   291
       MLBL    643   404   203   227   300   273
Small  Word    701   347   186   202   353   236
       Morph   615   331   189   209   331   233
       Char    578   305   169   190   313   216

Appendix: Effect of Highway Layers (PTB)

                       Small Model   Large Model
No Highway Layers      100.3          84.6
One Highway Layer       92.3          79.7
Two Highway Layers      90.1          78.9
Multilayer Perceptron  111.2          92.6

Appendix: LSTM (Hochreiter and Schmidhuber 1997)

Long short-term memory (LSTM) (Hochreiter and Schmidhuber 1997): augment the RNN with (latent) cell vectors to allow for learning of long-range dependencies.

i_t = σ(W_i x_t + U_i h_{t−1} + b_i)
f_t = σ(W_f x_t + U_f h_{t−1} + b_f)
o_t = σ(W_o x_t + U_o h_{t−1} + b_o)
g_t = tanh(W_g x_t + U_g h_{t−1} + b_g)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(c_t)
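A direct NumPy transcription of one step of these equations (the parameter dict p holds the W, U, b for each gate; shapes follow the RNN slide, and all values are random placeholders):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, p):
        i = sigmoid(p["Wi"] @ x_t + p["Ui"] @ h_prev + p["bi"])   # input gate
        f = sigmoid(p["Wf"] @ x_t + p["Uf"] @ h_prev + p["bf"])   # forget gate
        o = sigmoid(p["Wo"] @ x_t + p["Uo"] @ h_prev + p["bo"])   # output gate
        g = np.tanh(p["Wg"] @ x_t + p["Ug"] @ h_prev + p["bg"])   # candidate cell
        c = f * c_prev + i * g                                    # cell state
        h = o * np.tanh(c)                                        # hidden state
        return h, c

    rng = np.random.default_rng(0)
    n, m = 4, 3
    p = {f"W{k}": rng.normal(size=(m, n)) for k in "ifog"}
    p |= {f"U{k}": rng.normal(size=(m, m)) for k in "ifog"}
    p |= {f"b{k}": np.zeros(m) for k in "ifog"}
    h, c = lstm_step(rng.normal(size=n), np.zeros(m), np.zeros(m), p)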

References I

Bengio, Yoshua, Réjean Ducharme, and Pascal Vincent (2003). “A Neural Probabilistic Language Model”. In: Journal of Machine Learning Research 3, pp. 1137–1155.

Mikolov, Tomas et al. (2011). “Empirical Evaluation and Combination of Advanced Language Modeling Techniques”. In: Proceedings of INTERSPEECH.

Collobert, Ronan et al. (2011). “Natural Language Processing (almost) from Scratch”. In: Journal of Machine Learning Research 12, pp. 2493–2537.

Mikolov, Tomas et al. (2012). “Subword Language Modeling with Neural Networks”. In: preprint, www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf.

Mikolov, Tomas and Geoffrey Zweig (2012). “Context Dependent Recurrent Neural Network Language Model”. In: Proceedings of SLT.

Pascanu, Razvan et al. (2013). “How to Construct Deep Recurrent Neural Networks”. In: arXiv:1312.6026.

References II

Zaremba, Wojciech, Ilya Sutskever, and Oriol Vinyals (2014). “Recurrent Neural Network Regularization”. In: arXiv:1409.2329.

Luong, Minh-Thang, Richard Socher, and Chris Manning (2013). “Better Word Representations with Recursive Neural Networks for Morphology”. In: Proceedings of CoNLL.

Botha, Jan and Phil Blunsom (2014). “Compositional Morphology for Word Representations and Language Modelling”. In: Proceedings of ICML.

LeCun, Yann et al. (1989). “Handwritten Digit Recognition with a Backpropagation Network”. In: Proceedings of NIPS.

Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber (2015). “Training Very Deep Networks”. In: arXiv:1507.06228.

Creutz, Mathias and Krista Lagus (2007). “Unsupervised Models for Morpheme Segmentation and Morphology Learning”. In: ACM Transactions on Speech and Language Processing.

References III

Cheng, Wei-Chen et al. (2014). “Language Modeling with Sum-Product Networks”. In: Proceedings of INTERSPEECH.

Santos, Cicero Nogueira dos and Bianca Zadrozny (2014). “Learning Character-level Representations for Part-of-Speech Tagging”. In: Proceedings of ICML.

Zhang, Xiang and Yann LeCun (2015). “Text Understanding From Scratch”. In: arXiv:1502.01710.

Ballesteros, Miguel, Chris Dyer, and Noah A. Smith (2015). “Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs”. In: Proceedings of EMNLP.

Ling, Wang et al. (2015). “Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation”. In: Proceedings of EMNLP.

Hochreiter, Sepp and Jürgen Schmidhuber (1997). “Long Short-Term Memory”. In: Neural Computation 9, pp. 1735–1780.
