
Statistical NLP, Spring 2010

Lecture 22: Summarization

Dan Klein – UC Berkeley

Includes slides from Aria Haghighi, Dan Gillick


Selection

• Maximum Marginal Relevance

mid-‘90s → present

Maximize similarity to the query

Minimize redundancy

[Carbonell and Goldstein, 1998]

[Figure: candidate sentences s1–s4 and query Q]

Greedy search over sentences
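A minimal sketch of the greedy MMR loop described above, assuming a generic similarity function sim over sentence and query representations (for example, cosine over term vectors); the function name, the trade-off parameter lam, and the budget k are illustrative choices, not from the slides.

    # Greedy MMR selection: trade off query relevance against redundancy
    # with sentences already chosen (Carbonell and Goldstein style).
    def mmr_select(sentences, query, sim, k, lam=0.7):
        selected = []
        candidates = list(sentences)
        while candidates and len(selected) < k:
            def mmr_score(s):
                # Redundancy = similarity to the closest already-selected sentence.
                redundancy = max((sim(s, t) for t in selected), default=0.0)
                return lam * sim(s, query) - (1 - lam) * redundancy
            best = max(candidates, key=mmr_score)
            selected.append(best)
            candidates.remove(best)
        return selected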


• Maximum Marginal Relevance

• Graph algorithms [Mihalcea 05++]

mid-‘90s → present

Selection

• Maximum Marginal Relevance

• Graph algorithms

mid-‘90s → present

[Figure: sentence graph over s1–s4]

Nodes are sentences

Selection


• Maximum Marginal Relevance

• Graph algorithms

mid-‘90s → present

[Figure: sentence graph over s1–s4]

Nodes are sentences

Edges are similarities

Selection

• Maximum Marginal Relevance

• Graph algorithms

mid-‘90s → present

[Figure: sentence graph over s1–s4]

Nodes are sentences

Edges are similarities

Stationary distribution represents node centrality

Selection
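A sketch of the graph-based idea (the LexRank/TextRank family): build a sentence-similarity matrix, then take the stationary distribution of a damped random walk over it as each sentence's centrality. The damping value and the power-iteration setup are standard PageRank-style choices assumed here for illustration.

    import numpy as np

    # Centrality scores for sentences from a similarity graph:
    # power iteration on a damped, row-normalized similarity matrix.
    def centrality_scores(sim_matrix, damping=0.85, iters=100):
        sim = np.asarray(sim_matrix, dtype=float)
        n = sim.shape[0]
        # Turn similarities into a row-stochastic transition matrix.
        transition = sim / sim.sum(axis=1, keepdims=True)
        scores = np.full(n, 1.0 / n)
        for _ in range(iters):
            scores = (1 - damping) / n + damping * (transition.T @ scores)
        return scores  # higher score = more central sentence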


• Maximum Marginal Relevance

• Graph algorithms

• Word distribution models

mid-‘90s → present

Input document distribution ~ Summary distribution

Document:                    Summary:

w          P_D(w)            w          P_A(w)
Obama      0.017             Obama      ?
speech     0.024             speech     ?
health     0.009             health     ?
Montana    0.002             Montana    ?
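One common way to make "document distribution ~ summary distribution" precise is to choose the summary A whose word distribution minimizes KL(P_D || P_A) = Σ_w P_D(w) log [ P_D(w) / P_A(w) ]; this is, for instance, the KLSum criterion discussed in Haghighi and Vanderwende (2009), cited below. The specific divergence is one choice among several and is shown here only for illustration.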

Selection

• Maximum Marginal Relevance

• Graph algorithms

• Word distribution models

mid-‘90s → present

SumBasic [Nenkova and Vanderwende, 2005]

Value(w_i) = P_D(w_i)

Value(s_i) = sum of its word values

Choose s_i with largest value

Adjust P_D(w)

Repeat until length constraint
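A sketch of the SumBasic loop above. Tokenization and the length budget are simplified, and the down-weighting step uses the squared-probability update from the original paper; the slide only says "adjust".

    from collections import Counter

    # SumBasic (Nenkova and Vanderwende, 2005): score sentences by word
    # probabilities, pick greedily, down-weight used words, repeat.
    def sumbasic(sentences, max_words=100):
        tokens = [w.lower() for s in sentences for w in s.split()]
        p = {w: c / len(tokens) for w, c in Counter(tokens).items()}

        summary, used_words, remaining = [], 0, list(sentences)
        while remaining and used_words < max_words:
            # Value(s_i) = sum of its word values, per the slide
            # (the original paper averages instead of summing).
            best = max(remaining, key=lambda s: sum(p[w.lower()] for w in s.split()))
            summary.append(best)
            used_words += len(best.split())
            remaining.remove(best)
            # Adjust P_D(w): squash probabilities of words now covered.
            for w in set(w.lower() for w in best.split()):
                p[w] = p[w] ** 2
        return summary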

Selection


• Maximum Marginal Relevance

• Graph algorithms

• Word distribution models

• Regression models

mid-‘90s → present

[Figure: sentences s1–s3 mapped to feature vectors and scored by a learned function F(x)]

       word values   position   length
s1     12            1          24
s2     4             2          14
s3     6             3          18

frequency is just one of many features
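A sketch of the regression view: each sentence becomes a small feature vector (word values, position, length, as in the toy table above) and a learned function F(x) scores it. The linear form and the weights below are placeholders for illustration; real systems learn the weights from data and use many more features.

    # A linear scorer F(x) = w . x over simple per-sentence features.
    # The weights are placeholders; a real system learns them from data.
    def sentence_features(word_value_sum, position, length):
        return (word_value_sum, position, length)

    def F(x, weights=(1.0, -0.5, 0.0)):
        return sum(w * xi for w, xi in zip(weights, x))

    # Toy rows from the table above: s1 = (12, 1, 24), s2 = (4, 2, 14), s3 = (6, 3, 18)
    print(F(sentence_features(12, 1, 24)))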

Selection

• Maximum Marginal Relevance

• Graph algorithms

• Word distribution models

• Regression models

• Topic model-based [Haghighi and Vanderwende, 2009]

mid-‘90s → present

Selection

[Several figure slides not preserved in this transcript; the only recoverable labels are H & V 09 and PYTHY]

• Maximum Marginal Relevance

• Graph algorithms

• Word distribution models

• Regression models

• Topic models

• Globally optimal search

mid-‘90s → present

[McDonald, 2007]

[Figure: candidate sentences s1–s4 and query Q]

Optimal search using MMR

Integer Linear Program
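To the best of my recollection of McDonald's formulation (treat this as a sketch rather than the paper's exact objective), the idea is to score a candidate summary by summed sentence relevance minus pairwise redundancy between selected sentences, subject to a length limit, and to linearize the pairwise terms with auxiliary indicator variables so the whole search can be handed to an ILP solver instead of the greedy MMR loop.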

Selection

[Gillick and Favre, 2008]

s1: Universal health care is a divisive issue.
s2: Obama addressed the House on Tuesday.
s3: President Obama remained calm.
s4: The health care bill is a major test for the Obama administration.

concept   value

Selection


[Gillick and Favre, 2008]

s1: Universal health care is a divisive issue.
s2: Obama addressed the House on Tuesday.
s3: President Obama remained calm.
s4: The health care bill is a major test for the Obama administration.

concept   value
obama     3

Selection

[Gillick and Favre, 2008]

s1: Universal health care is a divisive issue.
s2: Obama addressed the House on Tuesday.
s3: President Obama remained calm.
s4: The health care bill is a major test for the Obama administration.

concept   value
obama     3
health    2

Selection


[Gillick and Favre, 2008]

s1: Universal health care is a divisive issue.
s2: Obama addressed the House on Tuesday.
s3: President Obama remained calm.
s4: The health care bill is a major test for the Obama administration.

concept   value
obama     3
health    2
house     1

Selection

[Gillick and Favre, 2008]

s1: Universal health care is a divisive issue.
s2: Obama addressed the House on Tuesday.
s3: President Obama remained calm.
s4: The health care bill is a major test for the Obama administration.

concept   value
obama     3
health    2
house     1

Length limit: 18 words

summary         length   value
{s1, s3}        17       5       (greedy)
{s2, s3, s4}    17       6       (optimal)

Selection


[Gillick, Riedhammer, Favre, Hakkani-Tur, 2008]

Objective: total concept value

Constraint: summary length limit

Constraints: maintain consistency between selected sentences and concepts

Integer Linear Program for the maximum coverage model

Selection
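The ILP itself appears in a figure that this transcript does not preserve. A sketch of the maximum-coverage formulation as given in Gillick and Favre's papers, in my own notation (c_i = concept i appears in the summary, s_j = sentence j is selected, v_i = concept value, l_j = sentence length in words, Occ_ij = 1 if concept i occurs in sentence j, L = length limit):

    maximize    Σ_i  v_i · c_i                           (total concept value)
    subject to  Σ_j  l_j · s_j  ≤  L                     (summary length limit)
                s_j · Occ_ij  ≤  c_i      for all i, j   (selecting a sentence credits its concepts)
                Σ_j  s_j · Occ_ij  ≥  c_i  for all i      (a credited concept must appear in some selected sentence)
                c_i, s_j ∈ {0, 1}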

[Gillick and Favre, 2009]

This ILP is tractable for reasonable problems
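One way to see the tractability claim is that the model above drops directly into an off-the-shelf solver. A sketch using the PuLP library, chosen here purely for convenience and not the authors' implementation:

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    # Maximum-coverage summarization ILP: concepts have values, sentences have
    # lengths, occ[i][j] = 1 if concept i occurs in sentence j.
    def solve_summary(values, lengths, occ, limit):
        n_concepts, n_sents = len(values), len(lengths)
        prob = LpProblem("summary", LpMaximize)
        c = [LpVariable(f"c{i}", cat="Binary") for i in range(n_concepts)]
        s = [LpVariable(f"s{j}", cat="Binary") for j in range(n_sents)]

        # Objective: total value of covered concepts.
        prob += lpSum(values[i] * c[i] for i in range(n_concepts))
        # Length limit on the selected sentences.
        prob += lpSum(lengths[j] * s[j] for j in range(n_sents)) <= limit
        # Consistency: a concept is covered iff some selected sentence contains it.
        for i in range(n_concepts):
            prob += lpSum(occ[i][j] * s[j] for j in range(n_sents)) >= c[i]
            for j in range(n_sents):
                if occ[i][j]:
                    prob += s[j] <= c[i]
        prob.solve()
        return [j for j in range(n_sents) if s[j].value() == 1]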

Selection


How to include sentence position?

First sentences are unique

Selection

How to include sentence position?

• Only allow first sentences in the summary (surprisingly strong baseline)

• Up-weight concepts appearing in first sentences (included in TAC 2009 system)

• Identify more sentences that look like first sentences (first sentence classifier is not reliable enough yet)

Selection


Some interesting work on sentence ordering

[Barzilay et al., 1997; 2002]

But choosing independent sentences is easier

• First sentences usually stand alone well

• Sentences without unresolved pronouns

• Classifier trained on OntoNotes: <10% error rate

Baseline ordering module (chronological) is not obviously worse than anything fancier

Selection

• 52 submissions

• 27 teams

• 44 topics

• 10 input docs

• 100 word summaries

[Figures: four TAC 2009 results charts with the Gillick & Favre system marked; two use a 1-10 rating scale (human summaries fall in [8.3, 9.3] and [8.5, 9.3]) and two use a 0-1 scale (humans in [0.62, 0.77] and [0.11, 0.15])]

Results [G & F, 2009]


[Gillick and Favre, 2008]

Error Breakdown?

Sentence extraction is limiting

... and boring!

But abstractive summaries are much harder to generate…

in 25 words?

Beyond Extraction?


http://www.rinkworks.com/bookaminute/
