
Lecture 10 NLTK POS Tagging Part 3

Topics: Taggers, Rule-Based Taggers, Probabilistic Taggers, Transformation-Based Taggers (Brill), Supervised Learning

Readings: Chapter 5.4-?

February 18, 2013

CSCE 771 Natural Language Processing


Overview

Last Time: Overview of POS Tags

Today: Part of Speech Tagging, Parts of Speech, Rule-Based Taggers, Stochastic Taggers, Transformational Taggers

Readings: Chapter 5.4-5.?


brown_lrnd_tagged = brown.tagged_words(categories='learned', simplify_tags=True)

tags = [b[1] for (a, b) in nltk.bigrams(brown_lrnd_tagged) if a[0] == 'often']

fd = nltk.FreqDist(tags)

fd.tabulate()   # tabulate() prints the table itself and returns None

 VN    V   VD  ADJ  DET  ADV    P    ,  CNJ    .   TO  VBZ   VG   WH
 15   12    8    5    5    4    4    3    3    1    1    1    1    1


Highly Ambiguous Words

>>> brown_news_tagged = brown.tagged_words(categories='news', simplify_tags=True)

>>> data = nltk.ConditionalFreqDist((word.lower(), tag)
...                                 for (word, tag) in brown_news_tagged)

>>> for word in data.conditions():
...     if len(data[word]) > 3:
...         tags = data[word].keys()
...         print word, ' '.join(tags)
...

best ADJ ADV NP V

better ADJ ADV V DET

...


Tag Package

http://nltk.org/api/nltk.tag.html#module-nltk.tag
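As a quick illustration of the package (not on the original slide), nltk.pos_tag applies NLTK's recommended off-the-shelf tagger to a tokenized sentence; a minimal sketch in the Python 2 / NLTK 2.x style of these slides:

import nltk

# tokenize, then tag; pos_tag uses NLTK's currently recommended tagger
tokens = nltk.word_tokenize("They refuse to permit us to obtain the refuse permit")
print nltk.pos_tag(tokens)
# [('They', 'PRP'), ('refuse', 'VBP'), ('to', 'TO'), ('permit', 'VB'), ...]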


Python's Dictionary Methods:
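The body of this slide did not survive extraction; as a hedged reconstruction, section 5.3 of the book summarizes the dictionary idioms that the taggers below rely on:

from collections import defaultdict

d = {}                      # create an empty dictionary
d['colorless'] = 'ADJ'      # assign a value to a key
print d['colorless']        # look up a key -> 'ADJ'
print 'colorless' in d      # test membership -> True
print d.keys()              # list of keys
print d.items()             # list of (key, value) pairs
d.update({'ideas': 'N'})    # merge in another dictionary

# defaultdict supplies a value for missing keys -- handy for tagging,
# where unseen words can default to 'NN'
pos = defaultdict(lambda: 'NN')
print pos['blog']           # -> 'NN' even though 'blog' was never assigned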


5.4  Automatic Tagging

Training set

Test set

### setup

import nltk, re, pprint

from nltk.corpus import brown

brown_tagged_sents = brown.tagged_sents(categories='news')

brown_sents = brown.sents(categories='news')


DefaultTagger NN

tags = [tag for (word, tag) in brown.tagged_words(categories='news')]

print nltk.FreqDist(tags).max()

raw = 'I do not like green eggs and ham, I …Sam I am!'

tokens = nltk.word_tokenize(raw)

default_tagger = nltk.DefaultTagger('NN')

print default_tagger.tag(tokens)

[('I', 'NN'), ('do', 'NN'), ('not', 'NN'), ('like', 'NN'), …

print default_tagger.evaluate(brown_tagged_sents)

0.130894842572


Tagger2: regexp_tagger

patterns = [
    (r'.*ing$', 'VBG'),               # gerunds
    (r'.*ed$', 'VBD'),                # simple past
    (r'.*es$', 'VBZ'),                # 3rd singular present
    (r'.*ould$', 'MD'),               # modals
    (r'.*\'s$', 'NN$'),               # possessive nouns
    (r'.*s$', 'NNS'),                 # plural nouns
    (r'^-?[0-9]+(\.[0-9]+)?$', 'CD'), # cardinal numbers (dot escaped here; an unescaped dot would match any character)
    (r'.*', 'NN')                     # nouns (default)
]

regexp_tagger = nltk.RegexpTagger(patterns)


Evaluate regexp_tagger

regexp_tagger = nltk.RegexpTagger(patterns)

print regexp_tagger.tag(brown_sents[3])

[('``', 'NN'), ('Only', 'NN'), ('a', 'NN'), ('relative', 'NN'), …

print regexp_tagger.evaluate(brown_tagged_sents)

0.203263917895


Unigram Tagger: 100 Most Freq Tag

fd = nltk.FreqDist(brown.words(categories='news'))

cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))

most_freq_words = fd.keys()[:100]   # NLTK 2.x: keys() are sorted by decreasing frequency

likely_tags = dict((word, cfd[word].max()) for word in most_freq_words)

baseline_tagger = nltk.UnigramTagger(model=likely_tags)

print baseline_tagger.evaluate(brown_tagged_sents)

0.455784951369


Likely_tags; Backoff to NN

sent = brown.sents(categories='news')[3]

print baseline_tagger.tag(sent)

[('Only', 'NN'), ('a', 'NN'), ('relative', 'NN'), ('handful', 'NN'), ('of', 'NN'), …

baseline_tagger = nltk.UnigramTagger(model=likely_tags, backoff=nltk.DefaultTagger('NN'))

print baseline_tagger.tag(sent)

[('Only', 'NN'), ('a', 'AT'), ('relative', 'NN'), ('handful', 'NN'), ('of', 'IN'), …

print baseline_tagger.evaluate(brown_tagged_sents)

0.581776955666


Performance of Easy Taggers

Tagger                        Performance
NN tagger                     0.13
Regexp tagger                 0.20
100 Most Freq tag             0.46
Likely_tags; backoff to NN    0.58


def performance(cfd, wordlist):
    lt = dict((word, cfd[word].max()) for word in wordlist)
    baseline_tagger = nltk.UnigramTagger(model=lt, backoff=nltk.DefaultTagger('NN'))
    return baseline_tagger.evaluate(brown.tagged_sents(categories='news'))


Display

def display():
    import pylab
    words_by_freq = list(nltk.FreqDist(brown.words(categories='news')))
    cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
    sizes = 2 ** pylab.arange(15)
    perfs = [performance(cfd, words_by_freq[:size]) for size in sizes]
    pylab.plot(sizes, perfs, '-bo')
    pylab.title('Lookup Tagger Perf. vs Model Size')
    pylab.xlabel('Model Size')
    pylab.ylabel('Performance')
    pylab.show()


Error !?

Traceback (most recent call last):
  File "C:/Users/mmm/Documents/Courses/771/Python771/ch05.4.py", line 70, in <module>
    import pylab
ImportError: No module named pylab

Fix: pylab is provided by the matplotlib package (often installed alongside numpy/scipy); installing matplotlib makes the import succeed.


5.5 N-gram Tagging

from nltk.corpus import brown

brown_tagged_sents = brown.tagged_sents(categories='news')

brown_sents = brown.sents(categories='news')

unigram_tagger = nltk.UnigramTagger(brown_tagged_sents)

print unigram_tagger.tag(brown_sents[2007])

[('Various', 'JJ'), ('of', 'IN'), ('the', 'AT'), ('apartments', 'NNS'), ('are', 'BER'), ('of', 'IN'), …

print unigram_tagger.evaluate(brown_tagged_sents)

0.934900650397


Dividing into Training/Test Sets

size = int(len(brown_tagged_sents) * 0.9)

print size

4160

train_sents = brown_tagged_sents[:size]

test_sents = brown_tagged_sents[size:]

unigram_tagger = nltk.UnigramTagger(train_sents)

print unigram_tagger.evaluate(test_sents)

0.811023622047


bigram_tagger: 1st try

bigram_tagger = nltk.BigramTagger(train_sents)

print "bigram_tagger.tag-2007", bigram_tagger.tag(brown_sents[2007])

bigram_tagger.tag-2007 [('Various', 'JJ'), ('of', 'IN'), ('the', 'AT'), ('apartments', 'NNS'), ('are', 'BER'), …

unseen_sent = brown_sents[4203]

print "bigram_tagger.tag-4203", bigram_tagger.tag(unseen_sent)

bigram_tagger.tag-4203 [('The', 'AT'), ('is', 'BEZ'), ('13.5', None), ('million', None), (',', None), ('divided', None), …

print bigram_tagger.evaluate(test_sents)

0.102162862554   -- not too good


Backoff: bigram → unigram → NN

t0 = nltk.DefaultTagger('NN')

t1 = nltk.UnigramTagger(train_sents, backoff=t0)

t2 = nltk.BigramTagger(train_sents, backoff=t1)

print t2.evaluate(test_sents)

0.844712448919


Your turn: tri → bi → uni → NN (one possible solution is sketched below)
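A possible solution, not from the slides: extend the backoff chain of the previous slide with a trigram tagger. A minimal sketch, assuming the train_sents/test_sents split from the Dividing into Training/Test Sets slide:

t0 = nltk.DefaultTagger('NN')                     # everything defaults to NN
t1 = nltk.UnigramTagger(train_sents, backoff=t0)  # word identity
t2 = nltk.BigramTagger(train_sents, backoff=t1)   # one previous tag as context
t3 = nltk.TrigramTagger(train_sents, backoff=t2)  # two previous tags as context

print t3.evaluate(test_sents)   # typically lands close to the bigram chain's 0.84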


Tagging Unknown Words

Our approach to tagging unknown words still uses backoff to a regular-expression tagger or a default tagger. These are unable to make use of context. Thus, if our tagger encountered the word blog, not seen during training, it would assign it the same tag, regardless of whether this word appeared in the context the blog or to blog. How can we do better with these unknown words, or out-of-vocabulary items?

A useful method to tag unknown words based on context is to limit the vocabulary of a tagger to the most frequent n words, and to replace every other word with a special word UNK using the method shown in 5.3. During training, a unigram tagger will probably learn that UNK is usually a noun. However, the n-gram taggers will detect contexts in which it has some other tag. For example, if the preceding word is to (tagged TO), then UNK will probably be tagged as a verb. A sketch of this preprocessing follows.
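A minimal sketch of the UNK idea, not from the slides, assuming the news-category train/test split from earlier and a hypothetical vocabulary cutoff of 1000 words:

# map every word outside the n most frequent to 'UNK',
# then train the usual backoff chain on the mapped corpus
fd = nltk.FreqDist(brown.words(categories='news'))
vocab = set(fd.keys()[:1000])   # NLTK 2.x: keys() sorted by decreasing frequency

def unkify(tagged_sent):
    return [(w if w in vocab else 'UNK', t) for (w, t) in tagged_sent]

unk_train = [unkify(s) for s in train_sents]
unk_test = [unkify(s) for s in test_sents]

t0 = nltk.DefaultTagger('NN')
t1 = nltk.UnigramTagger(unk_train, backoff=t0)
t2 = nltk.BigramTagger(unk_train, backoff=t1)
print t2.evaluate(unk_test)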


Serialization = pickle

Saving (object serialization)

from cPickle import dump

output = open('t2.pkl', 'wb')

dump(t2, output, -1)

output.close()

Loading

from cPickle import load

input = open('t2.pkl', 'rb')

tagger = load(input)

input.close()


Performance Limitations


text = """The board's action shows what free enterprisetext = """The board's action shows what free enterprise

is up against in our complex maze of regulatory is up against in our complex maze of regulatory laws ."""laws ."""

tokens = text.split()tokens = text.split()

tagger.tag(tokens)tagger.tag(tokens)

cfd = nltk.ConditionalFreqDist(cfd = nltk.ConditionalFreqDist(

((x[1], y[1], z[0]), z[1])((x[1], y[1], z[0]), z[1])

for sent in brown_tagged_sentsfor sent in brown_tagged_sents

for x, y, z in nltk.trigrams(sent))for x, y, z in nltk.trigrams(sent))

ambiguous_contexts = [c for c in cfd.conditions() if ambiguous_contexts = [c for c in cfd.conditions() if len(cfd[c]) > 1]len(cfd[c]) > 1]

print sum(cfd[c].N() for c in ambiguous_contexts) / cfd.N()print sum(cfd[c].N() for c in ambiguous_contexts) / cfd.N()


Confusion Matrix

test_tags = [tag for sent in brown.sents(categories='editorial')
                 for (word, tag) in t2.tag(sent)]

gold_tags = [tag for (word, tag) in brown.tagged_words(categories='editorial')]

print nltk.ConfusionMatrix(gold_tags, test_tags)

(overwhelming output)
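One way to make the output digestible (not from the slides): tabulate only the most frequent confusions; a sketch assuming gold_tags and test_tags from above:

# count (gold, predicted) pairs that disagree; FreqDist.items() in
# NLTK 2.x returns pairs sorted by decreasing count
errors = nltk.FreqDist((g, t) for (g, t) in zip(gold_tags, test_tags) if g != t)
for (gold, test), count in errors.items()[:10]:
    print gold, '->', test, count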


nltk.tag.brill.demo()

Loading tagged data...
Done loading.
Training unigram tagger:
  [accuracy: 0.832151]
Training bigram tagger:
  [accuracy: 0.837930]
Training Brill tagger on 1600 sentences...
Finding initial useful rules...
  Found 9757 useful rules.


  S   F   r   O  |        Score = Fixed - Broken
  c   i   o   t  |  R     Fixed = num tags changed incorrect -> correct
  o   x   k   h  |  u     Broken = num tags changed correct -> incorrect
  r   e   e   e  |  l     Other = num tags changed incorrect -> incorrect
  e   d   n   r  |  e
-----------------+-------------------------------------------------------
 11  15   4   0  | WDT -> IN if the tag of words i+1...i+2 is 'DT'
 10  12   2   0  | IN -> RB if the text of the following word is 'well'
  9   9   0   0  | WDT -> IN if the tag of the preceding word is 'NN',
                 |   and the tag of the following word is 'NNP'
  7   9   2   0  | RBR -> JJR if the tag of words i+1...i+2 is 'NNS'
  7  10   3   0  | WDT -> IN if the tag of words i+1...i+2 is 'NNS'


  5   5   0   0  | WDT -> IN if the tag of the preceding word is 'NN',
                 |   and the tag of the following word is 'PRP'
  4   4   0   1  | WDT -> IN if the tag of words i+1...i+3 is 'VBG'
  3   3   0   0  | RB -> IN if the tag of the preceding word is 'NN',
                 |   and the tag of the following word is 'DT'
  3   3   0   0  | RBR -> JJR if the tag of the following word is 'NN'
  3   3   0   0  | VBP -> VB if the tag of words i-3...i-1 is 'MD'
  3   3   0   0  | NNS -> NN if the text of the preceding word is 'one'
  3   3   0   0  | RP -> RB if the text of words i-3...i-1 is 'were'
  3   3   0   0  | VBP -> VB if the text of words i-2...i-1 is "n't"

Brill accuracy: 0.839156

Done; rules and errors saved to rules.yaml and errors.out.
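For reference (not in the slides): the demo above can also be reproduced programmatically with the NLTK 2.x brill module; a hedged sketch, assuming the t2 backoff tagger and the train/test split from earlier:

from nltk.tag.brill import (SymmetricProximateTokensTemplate,
                            ProximateTagsRule, ProximateWordsRule,
                            FastBrillTaggerTrainer)

# rule templates: consider tags/words within 1-2 positions of the target
templates = [
    SymmetricProximateTokensTemplate(ProximateTagsRule, (1, 1)),
    SymmetricProximateTokensTemplate(ProximateTagsRule, (2, 2)),
    SymmetricProximateTokensTemplate(ProximateWordsRule, (1, 1)),
]

# start from the n-gram backoff tagger and learn transformation rules
trainer = FastBrillTaggerTrainer(initial_tagger=t2, templates=templates, trace=3)
brill_tagger = trainer.train(train_sents, max_rules=10)
print brill_tagger.evaluate(test_sents)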
