
A Mathematical Comment on the Fundamental Difference Between Scientific Theory Formation and Legal Theory Formation

Ronald P. Loui

St. Louis

USA

Why? Who?

• Philosophers of science (students of generalized inductive reasoning) should find the legal theory formation problem (generalized moral reasoning) interesting now that there are new tools:

– Defeasible conditionals
– A record of arguments
– Models of procedures
• Diachronic models: confirmational conditionalization, belief revision, derogation, contraction, choice
• Models of legislative compromise, linguistic interpretation and determination

Why? Who?

• What are the similarities, dissimilarities?
– Obviously: attitude toward error
– What else?
– What formal ramifications?

• Could the LTF problem be expressed as simply as the STF problem?

Further Motivation

• Is Machine Learning too quick to simplify the problem?

• Can the important nuances of LTF and STF be written in a mathematically brief way?

Legal Theory Formation: LTF

• Case 1:
– Facts: a b c d e
– Decision: h

• Case 2:
– Facts: a b c d e f
– Decision: !h

• Induced rule(s):
– Defeasibly, a b c d e >__ h
– Defeasibly, a b c d e f >__ !h

Why not:

a >__ h

a f >__ !h
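To make the LTF induction step above concrete, here is a minimal Python sketch (the names and representation are my own, not from the slides): each case contributes a maximally specific defeasible rule whose antecedent is the case's full fact set, in contrast to the shortened rules a >__ h and a f >__ !h that the slide questions.

```python
# Minimal sketch of the LTF induction above: each case yields a maximally
# specific defeasible rule (full fact set >__ decision). Representation
# is illustrative, not from the original slides.

cases = [
    (frozenset("abcde"), "h"),    # Case 1: facts a b c d e, decision h
    (frozenset("abcdef"), "!h"),  # Case 2: facts a b c d e f, decision !h
]

def induce_defeasible_rules(cases):
    """One defeasible rule per case: its full fact set defeasibly implies its decision."""
    return [(facts, decision) for facts, decision in cases]

for antecedent, consequent in induce_defeasible_rules(cases):
    print(" ".join(sorted(antecedent)), ">__", consequent)
# a b c d e >__ h
# a b c d e f >__ !h
# The questioned shorter alternative:  a >__ h   and   a f >__ !h
```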

Scientific Theory Formation: STF

• Case 1:
– Facts: a b c d e
– Decision: h

• Case 2:
– Facts: a b c d e f
– Decision: !h

• Induced rule(s):
– Deductively, a b c d e !f → h
– Deductively, a b c d e f → !h

Why not:

!f → h

f → !h
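For contrast, a sketch of the STF reading above under a closed-world assumption: each case is a complete state, so Case 1 also asserts !f, and the induced conditionals are material (deductive). The representation is illustrative.

```python
# Sketch of the STF reading above: cases are complete states (closed
# world), so Case 1 also asserts !f, and the induced conditionals are
# material (deductive). Representation is illustrative.

vocabulary = "abcdef"

def complete_state(positive_facts):
    """Close the world: every vocabulary item not listed is negated."""
    return {v: (v in positive_facts) for v in vocabulary}

case1 = (complete_state(set("abcde")), "h")     # includes !f
case2 = (complete_state(set("abcdef")), "!h")

def as_conditional(state, decision):
    literals = [v if holds else "!" + v for v, holds in state.items()]
    return " ".join(literals) + " -> " + decision

print(as_conditional(*case1))   # a b c d e !f -> h
print(as_conditional(*case2))   # a b c d e f -> !h
# The simpler candidates the slide asks about:  !f -> h   and   f -> !h
```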

STF vs. LTF

• Conditionals:
– Deductive vs.
– Defeasible

• Bias:
– What is simpler? vs.
– What is motivated by argument?

• Input:
– State (complete closed world) vs.
– Partial (incomplete) Description

• STF, LTF vs. belief revision (AGM):
– too much (= epistemic state + constraints on change) vs.
– too little (= not enough guidance among choices)

Curve-Fitting: assign error as required

Spline-Fitting: complexify as required
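A small numerical illustration of the two biases named above, using NumPy and made-up data: the curve-fitting bias keeps the model simple and books the misfit as quantified error, while the spline-fitting bias raises the model's complexity (here, the polynomial degree) until the data are matched.

```python
# Illustration of the two biases: "assign error as required" (low-order
# fit with residual error) vs. "complexify as required" (raise the model
# order until the data are fit exactly). Data are made up.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 4.2, 8.8, 16.3])   # roughly quadratic

# Curve-fitting bias: keep the model simple, quantify the leftover error.
line = np.polyfit(x, y, deg=1)
line_error = np.sum((np.polyval(line, x) - y) ** 2)

# Spline-like bias: complexify (here, raise the degree) until it fits.
exact = np.polyfit(x, y, deg=len(x) - 1)
exact_error = np.sum((np.polyval(exact, x) - y) ** 2)

print(f"degree 1 fit, squared error {line_error:.3f}")
print(f"degree {len(x)-1} fit, squared error {exact_error:.6f}")  # ~0
```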

2-DNF Fitting

• Data:
– Case 1: a b c d
– Case 2: !a b c !d
– Case 3: a !b !c d

• Formula:
– (a v b) ^ (c v d)
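A quick check, in Python, that the induced formula (a v b) ^ (c v d) fits all three cases; the dictionary encoding of the cases is mine.

```python
# Check that the induced formula (a v b) ^ (c v d) fits all three cases.
# Each case assigns True/False to a, b, c, d; encoding is illustrative.

cases = [
    {"a": True,  "b": True,  "c": True,  "d": True},   # Case 1: a b c d
    {"a": False, "b": True,  "c": True,  "d": False},  # Case 2: !a b c !d
    {"a": True,  "b": False, "c": False, "d": True},   # Case 3: a !b !c d
]

def formula(v):
    return (v["a"] or v["b"]) and (v["c"] or v["d"])

print(all(formula(v) for v in cases))   # True: the formula covers every case
```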

Transitive fitting

• Reports of indifference, preference
• A ~ B
• B > C
• A ~ C
• C ~ D
• A ~ D
• Error: remove B > C, actually B ~ C (1 of 5)
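A small sketch of the transitive-fitting repair just described: grouping the indifference reports with a union-find puts A, B, C, D into one indifference class, which exposes B > C as the single report (1 of 5) to relabel as B ~ C. The code and its helpers are illustrative.

```python
# Transitive fitting: the indifference reports A~B, A~C, C~D, A~D place
# A, B, C, D in one indifference class, so the preference report B > C
# is inconsistent and is the single report to correct (B ~ C, 1 of 5).

indifferent = [("A", "B"), ("A", "C"), ("C", "D"), ("A", "D")]
preferred = [("B", "C")]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for x, y in indifferent:
    union(x, y)

errors = [(x, y) for x, y in preferred if find(x) == find(y)]
print(errors)   # [('B', 'C')] -- the one report to relabel as indifference
```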

STF vs. LTF

• Fit:
– Quantify error (like overturning precedent in LTF) vs.
– Distinguish as needed (like auxiliary hypotheses in STF)

• SO FAR, ALL THIS IS OBVIOUS

More Nuanced Model of STF

• Kyburg:
– Corpus of accepted beliefs K
– Probability of s given K: PK(s)
– s is acceptable? PK(s) > 1 - ε
– Theory is U: U ⊆ K = D-Thm(K0 ∪ U)
– STF: choose U* to “fit” K0

• Best fit of U* gives the largest PI-Thm(K)
• PI-Thm(K) = K ∪ { s | PK(s) > 1 - ε }

– Trades power (simplicity) against error (fit)
• If U is too simple, it doesn’t fit, hence all PK(s) are small
• If U is too complicated, D-Thm(K0 ∪ U) is small
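A toy rendering of the acceptance rule above, with made-up probabilities: a sentence s joins the practically accepted corpus when PK(s) > 1 - ε. This is only an illustration of the threshold rule, not Kyburg's evidential-probability calculus.

```python
# Toy rendering of the acceptance rule above: a sentence s enters the
# practically accepted corpus when P_K(s) > 1 - epsilon. Probabilities
# are made up; this is not Kyburg's full evidential-probability model.

EPSILON = 0.05

K = {"a", "b", "a -> h"}                     # corpus of accepted beliefs
P_K = {"h": 0.97, "g": 0.80, "!f": 0.96}     # P_K(s) for candidate sentences

def pi_thm(K, P_K, eps):
    """K plus every sentence whose probability given K exceeds 1 - eps."""
    return set(K) | {s for s, p in P_K.items() if p > 1 - eps}

print(sorted(pi_thm(K, P_K, EPSILON)))   # h and !f are accepted, g is not
```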

More Nuanced Model of LTF

• Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth)
– A case has arguments A1, …, Ak, B1, …, Bk-1
– Arguments have structure
• Trees, labeled with propositions
• Argument for h, h is root
• Leaves are uncontested “facts”
• Internal nodes are “intermediary conclusions”
• Defeasible rules: Children(p) >__ p

Argument for h

[Figure: argument tree — root h; intermediary conclusions p and q; leaves a (supporting p) and b, c, d (supporting q)]

Defeasibly,

a >__ p

b c d >__ q

p q >__ h
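A sketch of the argument tree just shown, as nested (label, children) pairs; reading off one defeasible rule Children(p) >__ p per internal node recovers the three rules above. The representation is illustrative.

```python
# Sketch of the argument tree above: root h, intermediaries p and q,
# uncontested leaves a, b, c, d. Each internal node p yields the
# defeasible rule Children(p) >__ p. Representation is illustrative.

tree = ("h", [("p", [("a", [])]),
              ("q", [("b", []), ("c", []), ("d", [])])])

def defeasible_rules(node):
    label, children = node
    if not children:                       # leaf: an uncontested fact
        return []
    rules = [(tuple(c[0] for c in children), label)]
    for child in children:
        rules.extend(defeasible_rules(child))
    return rules

for body, head in defeasible_rules(tree):
    print(" ".join(body), ">__", head)
# p q >__ h
# a >__ p
# b c d >__ q
```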

Dialectical Tree

[Figure: dialectical tree of the case — arguments A1 (for h), A2 (for !q), A3 (for !q), B1 (for !p), B2 (for !r); petitioner and respondent alternate moves, with edges labeled “defeats” and “interferes”]
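The standard way to evaluate such a dialectical tree is bottom-up: an argument stands iff every argument that defeats it is itself defeated. The transcript does not fully specify how A1, A2, A3, B1, B2 attack one another, so the defeat edges in this sketch are hypothetical.

```python
# Bottom-up evaluation of a dialectical tree: an argument is undefeated
# iff every argument that defeats it is itself defeated. The defeat
# edges below are hypothetical; the transcript does not fully specify
# how A1, A2, A3, B1, B2 attack one another.

defeaters = {
    "A1": ["B1"],        # B1 (for !p) defeats A1 (for h)
    "B1": ["A2", "A3"],  # A2 and A3 reply to B1
    "A2": ["B2"],        # B2 (for !r) defeats A2
    "A3": [],
    "B2": [],
}

def undefeated(arg):
    return all(not undefeated(d) for d in defeaters[arg])

for arg in defeaters:
    print(arg, "undefeated" if undefeated(arg) else "defeated")
# A1 is undefeated: B1 is defeated because A3 survives.
```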

More Nuanced Model of LTF

• Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth)
– A case has arguments A1, …, Ak, B1, …, Bk-1
– Arguments have structure
– Induced rules must be grounded in
• cases Δ (e.g., c1 = ({a,b,c,d,e}, h, {(h, {(p,{a}), (q,{b,c,d})}), …})) or
• background sources Ω (e.g., p q >__ h, r17 = ({p,q}, h))
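A sketch of the two grounding sources listed above: a case record c1 pairing facts, decision, and argument structure, and a background rule r17 = ({p,q}, h). The exact tuple layout is my reconstruction from the slide, not a fixed format.

```python
# Sketch of the two grounding sources above. The tuple layout is
# reconstructed from the slide and is an assumption, not a fixed format.

# A case: (facts, decision, arguments); each argument is a nested pair
# (conclusion, [(sub-conclusion, supporting facts), ...]).
c1 = (
    frozenset({"a", "b", "c", "d", "e"}),            # facts
    "h",                                             # decision
    [("h", [("p", {"a"}), ("q", {"b", "c", "d"})])], # argument for h
)

# A background rule from Omega:  p q >__ h
r17 = (frozenset({"p", "q"}), "h")

facts, decision, arguments = c1
print(decision, "decided on facts", sorted(facts))
print("rule r17:", " ".join(sorted(r17[0])), ">__", r17[1])
```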

STF vs. LTF

• Invention:
– Out of (mathematical) thin air vs.
– Possible interpretations of cases

• Purpose:
– To discover (nomological) rules from (accident of) cases
– To summarize (wisdom of) cases as (linguistic) rules

What is grounded?

• Case: a b c d e ]__ h

• φ = {a, b, c, d, e}

• Any C ⊆ φ as lhs for rule for h?

• What if d was used only to argue against h?

• d >__ h

• Really? (Even Ashley disallows this)

• What if e was used only to rebut d-based argument?

• a b c e >__ h

• Really? e isn't relevant except to undercut d.
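One way to render the grounding constraint this slide argues for: a candidate antecedent C ⊆ φ is admitted only if each of its members actually supported h in the case's arguments, which rules out both d >__ h and a b c e >__ h. The role annotations and the admission test are my illustration of the point, not a definition from the slides.

```python
# Grounding check suggested by the slide: a candidate antecedent C (a
# subset of the facts phi) is admitted only if each member actually
# supported h. The role labels are illustrative annotations.

phi = {"a", "b", "c", "d", "e"}
role = {
    "a": "supports h", "b": "supports h", "c": "supports h",
    "d": "argued against h",
    "e": "rebuts the d-based argument",
}

def grounded_antecedent(C):
    return C <= phi and all(role[x] == "supports h" for x in C)

print(grounded_antecedent({"d"}))                   # False: d >__ h rejected
print(grounded_antecedent({"a", "b", "c", "e"}))    # False: e only undercuts d
print(grounded_antecedent({"a", "b", "c"}))         # True
```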

Proper Elisions I: Argument Trees

[Figure: argument tree for h — root h, intermediaries p (supported by a) and q (supported by b, c, d)]

p b c d >__ h

Proper Elisions I: Argument Trees

[Figure: the same argument tree for h, now with a counterargument for !q supported by a, b, f]

p b c d >__ h
p b c d f >__ h ?
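A sketch of the elision operation shown above: cutting the argument tree at p and collecting the remaining leaves yields the summarized rule p b c d >__ h; whether the counterargument's extra fact f must also be carried along (p b c d f >__ h ?) is the question the slide leaves open. Representation and helper names are illustrative.

```python
# Proper elision I: replace a subtree of the argument by its root
# conclusion, so the argument for h summarizes to  p b c d >__ h.
# The representation and helpers are illustrative.

tree = ("h", [("p", [("a", [])]),
              ("q", [("b", []), ("c", []), ("d", [])])])

def elide(node, keep_as_leaf):
    """Cut the tree at any node named in keep_as_leaf, keeping its label."""
    label, children = node
    if label in keep_as_leaf:
        return (label, [])
    return (label, [elide(c, keep_as_leaf) for c in children])

def antecedent(node):
    """Collect the leaves of the (possibly elided) argument tree."""
    label, children = node
    if not children:
        return [label]
    return [x for c in children for x in antecedent(c)]

elided = elide(tree, keep_as_leaf={"p"})
print(" ".join(antecedent(elided)), ">__", tree[0])   # p b c d >__ h
```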

Proper Elisions II: Dialectical Trees

[Figure, repeated over three slides: the dialectical tree above — A1 (for h), A2 (for !q), A3 (for !q), B1 (for !p), B2 (for !r), with “defeats” and “interferes” edges; the markup distinguishing the three versions did not survive extraction]

STF vs. LTF

LTF:
1. Defeasible
2. Differences distinguished
3. Cases summarized/organized
4. Argument is crucial
5. Justification obsessed
6. Loui: Arguments, Grounding, Proper Elision, Principles

STF:
1. Deductive
2. Error quantified
3. Rules discovered
4. Probability is key
5. Simplicity biased
6. Kyburg: Acceptance, Error, Inference, Coherence

More Nuanced Model of STF

• Kyburg:
– Corpus of accepted beliefs K
– Probability of s given K: PK(s)
– s is acceptable? PK(s) > 1 - ε
– Theory is U: U ⊆ K = D-Thm(K0 ∪ U)
– STF: choose U* to “fit” K0

• Best fit of U* gives the largest PI-Thm(K)
• PI-Thm(K) = K ∪ { s | PK(s) > 1 - ε }

– Trades power (simplicity) against error (fit)
• If U is too simple, it doesn’t fit, hence all PK(s) are small
• If U is too complicated, D-Thm(K0 ∪ U) is small

More Nuanced Model of LTF

• Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth)
– A case has arguments A1, …, Ak, B1, …, Bk-1
– Arguments have structure
– Induced rules must be grounded in
• cases Δ (e.g., c1 = ({a,b,c,d,e}, h, {(h, {(p,{a}), (q,{b,c,d})}), …})) or
• background sources Ω (e.g., p q >__ h, r17 = ({p,q}, h))
– And proper elisions

Machine Learning?

• Models are too simple

• The problem is in the modeling, not the algorithm

• SVM is especially insulting

Acknowledgements

• Henry Kyburg

• Ernest Nagel, Morris Cohen

• Jeff Norman

• Guillermo Simari

• Ana Maguitman, Carlos Chesñevar, Alejandro García

• John Pollock, Thorne McCarty, Henry Prakken
