
Problem Solving and Adaptive Logics.

A Logico-Philosophical Study

Diderik Batens

Centre for Logic and Philosophy of Science

Ghent University, Belgium

[email protected]

http://logica.UGent.be/dirk/

http://logica.UGent.be/centrum/


CONTENTS

1 The Problem, the Claim and the Plan

2 Prospective Dynamics: Pushing the Heuristics into the Proofs

3 Problem-Solving Processes

4 Enter Adaptive Logics

5 Prospective Dynamics for Adaptive Logics

6 Extensions, Open Problems, and the Bright Side of Life

1 The Problem, the Claim and the Plan

1.1 On Solving Problems

1.2 Worries from the Philosophy of Science and from Erotetic Logic

1.3 Mastering Proof Heuristics

1.4 Unusual Logics Needed

1.5 The Traditional View On Logic

1.6 Logical Systems vs. Logical Procedures

1.7 The Plan



1.1 On Solving Problems

problem solving is central for understanding the sciences

in philosophy of science: since Kuhn, . . . , Laudan

from 1980s on: scientific discovery is specific kind of problem solving

(cf. also scientific creativity)

two kinds of contributions:

(i) A.I.: set of computer programs

too specific

(ii) philosophy of science:

informal, often vague (Kuhn > Laudan > Nickles)

Nickles: role of constraints (+ change + rational violation)

nothing on the process: how to proceed in order to solve


we need (again) a general approach

here proposed: a formal approach (similar to a formal logic)

Is this possible?

main worries discussed in 1.2

first some more on problems


“problem” in broad sense:

in principle all kinds & all domains

scientific and everyday (same kind of reasoning behind them)

problems: difficulties vs. questions

justified questions derive from difficulties

questions answered from knowledge system / by extending it

knowledge system may involve / run into difficulties

whether a question is difficult to answer does not depend

on whether it derives from a difficulty


problem: will be written as a set of questions

consider:

original problem is ?A, ∼A

if B, C and D, then A

leads to questions ?B, ∼B, ?C, ∼C and ?D, ∼D

but these are connected: if one of them receives the wrong answer,

answering the others is useless with respect to the original problem

so (in this context) they form a single problem:

?B, ∼B, ?C, ∼C, ?D, ∼D

which is dropped as a whole if one of the questions has an unsuitable answer

actually: problem = set of questions + set of pursued answers

(but this will appear from the context)
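The bookkeeping just described can be sketched as a small data structure (a hypothetical illustration, not the book's formalism; all names and the representation are my assumptions): a problem holds a set of yes–no questions together with the answer pursued for each, and is dropped as a whole as soon as one question receives a non-pursued answer.

```python
# Sketch: a problem as a set of yes-no questions plus pursued answers.
# Hypothetical illustration only; names and representation are assumptions.

class Problem:
    def __init__(self, pursued):
        # pursued: dict mapping question label -> the answer pursued for it,
        # e.g. {"B": True, "C": True, "D": True} for ?{B, ~B} etc.
        self.pursued = pursued
        self.dropped = False

    def answer(self, question, value):
        """Record an answer; drop the whole problem if it is not the pursued one."""
        if value != self.pursued[question]:
            self.dropped = True  # one unsuitable answer kills the whole problem
        return self.dropped

# derived problem for ?A via "if B, C and D, then A"
p = Problem({"B": True, "C": True, "D": True})
p.answer("B", True)   # suitable answer: problem stays alive
p.answer("C", False)  # unsuitable: the problem is dropped as a whole
assert p.dropped
```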


a problem solving process (psp) has two important features:

(1) it contains subsidiary and/or derived problems

(derived from a previous problem

derived from previous problem + premises)

(2) it is goal-directed (unlike a proof on the standard definition)

all steps are sensible in view of the goal (the problem solution)

Note: a step may be sensible because it contributes to the solution of

the problem, or because it shows that a certain road to that solution is

a dead end


An example

Galilei looking for the law of the free fall

absence of adequate measuring instruments!

[figure: ball rolling down an inclined plane]

the same force that makes the ball fall, makes it roll down the slope

measuring the times?

weigh the amount of water flowing in a vessel from the start to the point where the ball hits the wooden block

compare the weights for different positions of the block

(only the ratios matter)


interesting example:

• admittedly: no conceptual changes involved

• some sophistication

· solution is a generalization (not a singular statement)

· new empirical data required

· experiments required

· experiments had to be devised



1.2 Worries from the Philosophy of Science and from Erotetic Logic

aim: devise formal procedure that explicates problem solving

outdated? cf. Vienna Circle

Nickles: no logic of discovery, only local logics of discovery

touchy: how do (changing) constraints surface in a formal psp?

· changing premises

· changing logics

standard erotetic logic

· insufficiently goal directed

· too restrictive (except for yes–no questions)



1.3 Mastering Proof Heuristics

logicians: good practice in solving a specific type of problem: Γ ⊢ A?

find a proof if there is one (in most cases)

see when there is no proof (in most cases)

demonstrate that there is no proof if there is none (in most cases)

tableau methods and other kinds of procedures (see later)

CL is not decidable; there is only a positive test (derivability is partially recursive)

so non-derivability cannot always be demonstrated

the usual positive tests are rather distant from proofs, and so are the (partial) methods for showing non-derivability



Ghent result: push (most of) the proof heuristics into the proof ⇒ side effect of dynamic logics (prospective dynamics)

simple idea: if you want to obtain A, and B ⊃ A is available, look for B

⇒ add to the proof: [B] A

if you want to obtain A, and A ∨ B is available, look for ∼B

⇒ add to the proof: [∼B] A

etc.
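The two backward steps just listed can be sketched in code (a toy Python illustration; the tuple encoding of formulas and the function name are my assumptions, not the book's notation):

```python
# Toy sketch of prospective steps: given a goal A and an available formula,
# produce the conditional line "[condition] A" to add to the proof.
# Formulas are tuples: ("imp", B, A) for B ⊃ A, ("or", A, B) for A ∨ B,
# ("not", B) for ∼B, or a plain string for an atom. (Encoding is assumed.)

def neg(f):
    return ("not", f)

def prospective_step(goal, available):
    """Return (condition, goal) if `available` lets us pursue `goal`."""
    tag = available[0] if isinstance(available, tuple) else None
    if tag == "imp" and available[2] == goal:      # B ⊃ A: look for B
        return (available[1], goal)                # add [B] A
    if tag == "or" and goal in available[1:]:      # A ∨ B: look for ∼(other)
        other = available[2] if available[1] == goal else available[1]
        return (neg(other), goal)                  # add [∼B] A
    return None

assert prospective_step("A", ("imp", "B", "A")) == ("B", "A")
assert prospective_step("A", ("or", "A", "B")) == (("not", "B"), "A")
assert prospective_step("A", ("imp", "A", "B")) is None
```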


result: a procedure (see later) with the properties:

(1) if Γ ⊢CL A, then the procedure leads to a proof of A from Γ

(2) if the procedure leads to a proof of A from Γ, then Γ ⊢CL A

(3) if the procedure stops, not providing a proof, then Γ ⊬CL A

(4) for decidable fragments of CL: if Γ ⊬CL A, then the procedure stops

casual comments:

no way to strengthen (4)

algorithm for turning the prospective proof into a standard proof

other (standard) logics:

rather straightforward way to turn inference rules into prospective rules

and to turn prospective proofs into standard proofs



1.4 Unusual Logics Needed

problem solving requires reasoning processes for which there is no

positive test

(= that are not even partially recursive)

inductive generalization, abduction to the best explanation, etc.

traditionally seen as beyond the scope of logic

adaptive logics are capable of explicating such reasoning processes

the claim:

formulating prospective proofs for adaptive logics provides us with a

formal approach to problem solving



1.5 The Traditional View On Logic

main point:

adaptive logics do not suit the standard view on logic

no logic (not even CL) fits the standard view on logic of 1900

because that view was provably mistaken

(and was proven to be mistaken)

I do not claim that logics that fit the present standard view are not

sensible

I only claim that, in departing slightly from the standard view, one is

able to decently explicate forms of reasoning that

(i) are extremely important in human (scientific and other) reasoning

(ii) do not fit the standard view



1.6 Logical Systems vs. Logical Procedures

standard definition of logical system: set of rules, governing proofs

any extension of a proof with an application of a rule is a proof

procedure:

· set of rules

· for each rule: permission/obligation depending on stage of proof

standard definition: rules + universal permission

this is not a sensible explication of human reasoning (goal directed)

example:

on the prospective-dynamics procedure, a premise cannot be added to

the proof unless a present target can be obtained from the premise by

means of subformulas and negations of subformulas of the premise

if the target is p, p ⊃ q cannot be added, but q ⊃ p can


1.7 The Plan

comment on table of contents


2 Prospective Dynamics: Pushing the Heuristics into the Proofs

2.1 Proofs and their Explications

2.2 Instructions vs. rules

2.3 Prospective dynamics: idea and examples

2.4 Prospective dynamics: characterization

2.5 Where went Ex Falso Quodlibet?

2.6 Some properties of CL−

2.7 Afterthought



2.1 Proofs and their Explications

CL is claimed to explicate actual proofs, for example in mathematics

This presupposes:

(1) specific meaning of the logical symbols in those contexts

not discussed here

(2) correct proofs classified as correct OK

proofs classified as correct are correct yes, but . . .


Actual proofs: result from a goal-directed process

actually produced (the result of a search process)
−− skip dead ends, skip detours, skip obvious steps, . . . −→
actually published (the result of presentation)


Neither produced nor published proofs are explicated adequately by CL:

CL is too permissive, viz. not goal directed

for example

1 p Prem

2 p ∨ q 1; Add

3 p ∨ r 1; Add

4 p ∨ s 1; Add

. . . . . .



2.2 Instructions vs. rules

rule: preserves truth

instruction: permission/obligation to apply a rule (depending on the stage of the proof)

official proof: procedure = rules + universal permission

− not goal-directed

− does not explicate actually produced / published proofs

− is border case of procedure

some procedures explicate actual proofs



2.3 Prospective dynamics: idea and examples

− idea:

if one looks for A

and, e.g., B ⊃ A was derived

then look for B

− pushing (part of) the heuristics into the proof:

if one looks for A

and, e.g., B ⊃ A was derived

then derive [B] A

indicating that one should look for B

(given the premises, obtaining B is sufficient to obtain A)


t ∨ q, p ⊃ (q ∨ ∼r), r ∧ s, s ⊃ p ⊢ q

1 [q] q Goal R14
2 t ∨ q Prem
3 [∼t] q 2; ∨E ∼t|
4 p ⊃ (q ∨ ∼r) Prem
5 [p] q ∨ ∼r 4; ⊃E R11
6 s ⊃ p Prem
7 [s] p 6; ⊃E R10
8 r ∧ s Prem
9 s 8; ∧E
10 p 7, 9; Trans
11 q ∨ ∼r 5, 10; Trans
12 [r] q 11; ∨E R14
13 r 8; ∧E
14 q 12, 13; Trans

Incidentally: algorithm: prospective proofs ⇒ Fitch-style proofs

1 p ⊃ (q ∨ ∼r) Prem

2 s ⊃ p Prem

3 r ∧ s Prem

4 s 3; Sim

5 p 2, 4; MP

6 q ∨ ∼r 1, 5; MP

7 r 3; Sim

8 q 6, 7; DS


∼p ∨ q ⊢ p ⊃ q

1 [p ⊃ q] p ⊃ q Goal R8
2 [q] p ⊃ q 1; C⊃E q| R8
3 ∼p ∨ q Prem
4 [p] q 3; ∨E p|
5 [∼p] p ⊃ q 1; C⊃E ∼p| R8
6 [∼q] ∼p 3; ∨E ∼q|
7 [p] p ⊃ q 2, 4; Trans p| R8
8 p ⊃ q 5, 7; EM

obtain the Goal on all non-redundant conditions


2.4 Prospective dynamics: characterization

Rules (prospective proof for Γ ⊢ G)

Goal To introduce [G] G.

Prem To introduce A for an A ∈ Γ.

Trans From [∆ ∪ B] A and [∆′] B, derive [∆ ∪ ∆′] A.

EM From [∆ ∪ B] A and [∆′ ∪ ∼B] A, derive [∆ ∪ ∆′] A.


Note: the complement of a formula:

if A has the form ∼B, then ∗A = B

otherwise ∗A = ∼A

∗p = ∼p      ∗∗p = p
∗∼p = p      ∗∗∼p = ∼p
∗∼∼p = ∼p    ∗∗∼∼p = p
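A minimal sketch of the complement operator in Python (the string encoding of ∼ as "~" and the function name are my assumptions):

```python
# Sketch of the complement operator *A: strip one leading negation if present,
# otherwise prepend one. Formulas are plain strings with "~" for ∼ (assumed encoding).

def comp(a: str) -> str:
    """Return *A: B if A is ~B, else ~A."""
    return a[1:] if a.startswith("~") else "~" + a

assert comp("p") == "~p"          # *p = ~p
assert comp("~p") == "p"          # *~p = p
assert comp("~~p") == "~p"        # *~~p = ~p
assert comp(comp("~~p")) == "p"   # **~~p = p
```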


α         α1      α2         β          β1          β2
A ∧ B     A       B          ∼(A ∧ B)   ∗A          ∗B
A ≡ B     A ⊃ B   B ⊃ A      ∼(A ≡ B)   ∼(A ⊃ B)    ∼(B ⊃ A)
∼(A ∨ B)  ∗A      ∗B         A ∨ B      A           B
∼(A ⊃ B)  A       ∗B         A ⊃ B      ∗A          B
∼∼A       A       A

Formula analysing rules:

From [∆] α, derive [∆] α1 and [∆] α2.

From [∆] β, derive [∆ ∪ ∗β2] β1 and [∆ ∪ ∗β1] β2.

Example:

From [∆] p ∧ q, derive [∆] p and [∆] q.

From [∆] p ∨ q, derive [∆ ∪ ∼q] p and [∆ ∪ ∼p] q.
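The α/β classification can be rendered as a toy Python function (the tuple encoding, the helper names, and the omission of ≡ are my assumptions):

```python
# Toy sketch of the alpha/beta table: classify a formula and return its
# components. Formulas are tuples ("and", A, B), ("or", A, B), ("imp", A, B),
# ("not", A), or atom strings. (Encoding assumed; the ≡ rows are omitted.)

def comp(a):
    """Complement *A."""
    return a[1] if isinstance(a, tuple) and a[0] == "not" else ("not", a)

def classify(f):
    """Return ("alpha", a1, a2) or ("beta", b1, b2), or None for a literal."""
    if isinstance(f, tuple) and f[0] == "and":
        return ("alpha", f[1], f[2])                   # A ∧ B
    if isinstance(f, tuple) and f[0] == "or":
        return ("beta", f[1], f[2])                    # A ∨ B
    if isinstance(f, tuple) and f[0] == "imp":
        return ("beta", comp(f[1]), f[2])              # A ⊃ B: *A, B
    if isinstance(f, tuple) and f[0] == "not":
        g = f[1]
        if isinstance(g, tuple) and g[0] == "and":
            return ("beta", comp(g[1]), comp(g[2]))    # ∼(A ∧ B)
        if isinstance(g, tuple) and g[0] == "or":
            return ("alpha", comp(g[1]), comp(g[2]))   # ∼(A ∨ B)
        if isinstance(g, tuple) and g[0] == "imp":
            return ("alpha", g[1], comp(g[2]))         # ∼(A ⊃ B)
        if isinstance(g, tuple) and g[0] == "not":
            return ("alpha", g[1], g[1])               # ∼∼A
    return None  # literal

assert classify(("and", "p", "q")) == ("alpha", "p", "q")
assert classify(("not", ("or", "p", "q"))) == ("alpha", ("not", "p"), ("not", "q"))
assert classify(("imp", "p", "q")) == ("beta", ("not", "p"), "q")
```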


Condition analysing rules:

From [∆ ∪ α] A, derive [∆ ∪ α1, α2] A.

From [∆ ∪ β] A, derive [∆ ∪ β1] A and [∆ ∪ β2] A.

Example:

From [∆ ∪ q ∧ r] p, derive [∆ ∪ q, r] p.

From [∆ ∪ q ∨ r] p, derive [∆ ∪ q] p and [∆ ∪ r] p.
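One condition-analysing step can be sketched as follows (a toy Python illustration handling only ∧ and ∨; the frozenset encoding and all names are my assumptions):

```python
# Toy sketch of one condition-analysing step: decompose a non-literal member
# of a line's condition. Conditions are frozensets; compounds are tuples
# ("and", A, B) / ("or", A, B). (Encoding and names are assumptions.)

def analyse_condition(condition, formula):
    """Return the lines replacing ([condition] formula) after one step:
    a conjunction in the condition splits in place; a disjunction branches."""
    for c in condition:
        if isinstance(c, tuple) and c[0] == "and":
            rest = condition - {c}
            return [(rest | {c[1], c[2]}, formula)]   # [∆ ∪ α] A ⇒ [∆ ∪ α1, α2] A
        if isinstance(c, tuple) and c[0] == "or":
            rest = condition - {c}
            return [(rest | {c[1]}, formula),         # [∆ ∪ β] A ⇒ [∆ ∪ β1] A
                    (rest | {c[2]}, formula)]         #           and [∆ ∪ β2] A
    return [(condition, formula)]                     # nothing to analyse

assert analyse_condition(frozenset({("and", "q", "r")}), "p") == \
    [(frozenset({"q", "r"}), "p")]
assert analyse_condition(frozenset({("or", "q", "r")}), "p") == \
    [(frozenset({"q"}), "p"), (frozenset({"r"}), "p")]
```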


The permissions and obligations

positive part:

1. pp(A, A).

2. pp(A, α) if pp(A, α1) or pp(A, α2).

3. pp(A, β) if pp(A, β1) or pp(A, β2).

A line with second element [∆] A is marked as a dead end iff an element of ∆ is not a pp of any premise.

A line with second element [∆] A is marked as redundant iff
(i) A ∈ ∆ (not the Goal line) or
(ii) a line with second element [∆′] A occurs and ∆′ ⊂ ∆.

more marks possible (e.g., inconsistent paths)

The target is the first formula in the condition of the last unmarked

line. (alternatives possible)
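The two marking checks can be sketched in code (a toy Python illustration; the precomputed pp set over string formulas and all names are my assumptions):

```python
# Toy sketch of marking: dead-end and redundancy checks on prospective lines.
# A line's second element is (condition, formula); pp_of_premises is the set
# of formulas occurring as a positive part of some premise (assumed precomputed).

def dead_end(condition, pp_of_premises):
    """Marked as a dead end iff some condition member is not a pp of any premise."""
    return any(c not in pp_of_premises for c in condition)

def redundant(condition, formula, lines, is_goal_line=False):
    """Marked as redundant iff the formula is in its own condition (Goal line
    excepted), or another line derives it on a strictly smaller condition."""
    if formula in condition and not is_goal_line:
        return True
    return any(f == formula and c < condition for c, f in lines)

pp = {"p", "q", "r", "s", "t"}            # hypothetical pp's of the premises
assert dead_end(frozenset({"~t"}), pp)    # ∼t is not a pp of any premise
assert not dead_end(frozenset({"s"}), pp)
lines = [(frozenset(), "q")]              # q already derived on the empty condition
assert redundant(frozenset({"r"}), "q", lines)
```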


Phase 1:

− start with the Goal rule

− apply FAR only to the formula of a line that has a Prem-line in its path

− derive [B1, . . . , Bn] A by FAR only if the target is a pp of A

− next, introduce a new premise A iff the target is a pp of A

− apply CAR only to the target A after Prem and FAR are exhausted

− apply Trans only if ∆′ is empty

Phase 2:

− only: new [∆] G by EM, Trans or CAR from R-unmarked lines

next return to phase 1



2.5 Where went Ex Falso Quodlibet?

Let the logic defined by the procedure be pCL−

p, ∼p ⊬pCL− q

EFQ requires, for example:

EFQ To introduce [∼A] G for an A ∈ Γ.

This rule may be applied to every A ∈ Γ.

EFQ is an isolated, unnatural and ad hoc rule.


Where pCL is (propositional) pCL− + EFQ:

Γ ⊢pCL A iff Γ ⊢CL A

That is:

If Γ ⊢CL A, the procedure will lead to a proof of A from Γ.

If Γ ⊬CL A, the procedure will stop without A being derived.



2.6 Some properties of CL− H

− natural explication of all sensible classical proofs

− EFQ is absent, whence isolated, and unnatural

− assigns same consequences as CL to consistent Γ

(the intended domain of application of CL)

− derives a contradiction from all inconsistent Γ,

but not triviality (except in border cases)

⇒ assigns sensible consequence set to inconsistent Γ

− resulting consequence relation:

? characterized by a semantics (and tableau method)

? reflexive and monotonic

? not transitive (even weak cut does not hold)

but transitive if restricted to consistent premise sets

2.6 Some properties of CL− H

− natural explication of all sensible classical proofs

− EFQ is absent, whence isolated, and unnatural

− assigns same consequences as CL to consistent Γ

(the intended domain of application of CL)

− derives a contradiction from all inconsistent Γ,

but not triviality (except in border cases)

⇒ assigns sensible consequence set to inconsistent Γ

− resulting consequence relation:

? characterized by a semantics (and tableau method)

? reflexive and monotonic

? not transitive (even weak cut does not hold)

but transitive if restricted to consistent premise sets

? in an interesting (specific) sense relevant

2.6 Some properties of CL− H

− natural explication of all sensible classical proofs

− EFQ is absent, whence isolated, and unnatural

− assigns same consequences as CL to consistent Γ

(the intended domain of application of CL)

− derives a contradiction from all inconsistent Γ,

but not triviality (except in border cases)

⇒ assigns sensible consequence set to inconsistent Γ

− resulting consequence relation:

? characterized by a semantics (and tableau method)

? reflexive and monotonic

? not transitive (even weak cut does not hold)

but transitive if restricted to consistent premise sets

? in an interesting (specific) sense relevant

? exactly the same theorems as CL

2.6 Some properties of CL− H

− natural explication of all sensible classical proofs

− EFQ is absent, whence isolated, and unnatural

− assigns same consequences as CL to consistent Γ

(the intended domain of application of CL)

− derives a contradiction from all inconsistent Γ,

but not triviality (except in border cases)

⇒ assigns sensible consequence set to inconsistent Γ

− resulting consequence relation:

? characterized by a semantics (and tableau method)

? reflexive and monotonic

? not transitive (even weak cut does not hold)

but transitive if restricted to consistent premise sets

? in an interesting (specific) sense relevant

? exactly the same theorems as CL

? adequate w.r.t. CL-semantics if restricted to consistent Γ


Sensible

p ∨ q, ∼p, ∼q ⊢ p ∧ ∼p

p ∨ q, ∼p, ∼q ⊢ q ∧ ∼q

To derive Russell’s paradox from Frege’s set theory.

not sensible

p ∨ q, ∼p, ∼q ⊢ r ∧ ∼r

To derive from Frege’s set theory that the moon is and is not a blue

cheese (or that ℘(∅) = ∅ ∧ ℘(∅) ≠ ∅).

In problem-solving processes, CL− needs to be applied.


A semantics (Suszko: every logic has a 2-valued semantics)

v : W ↦ {0, 1} is a partial function

1. if v(A) ∈ {0, 1} and sub(B, A), then v(B), v(∗B) ∈ {0, 1}
2. if v(A ∧ B) = 1 then v(A) = 1 and v(B) = 1
3. if v(A ∧ B) = 0 then v(A) = 0 or v(B) = 0
4. if v(A ≡ B) = 1 then v(A ⊃ B) = 1 and v(B ⊃ A) = 1
5. if v(A ≡ B) = 0 then v(A ⊃ B) = 0 or v(B ⊃ A) = 0
6. if v(∼(A ∨ B)) = 1 then v(∗A) = 1 and v(∗B) = 1
7. if v(∼(A ∨ B)) = 0 then v(∗A) = 0 or v(∗B) = 0
8. if v(∼(A ⊃ B)) = 1 then v(A) = 1 and v(∗B) = 1
9. if v(∼(A ⊃ B)) = 0 then v(A) = 0 or v(∗B) = 0
10. if v(∼∼A) = 1 then v(A) = 1
11. if v(∼∼A) = 0 then v(A) = 0
12. if v(A ∨ B) = 1 then v(∗A) = 0 or v(B) = 1
13. if v(A ∨ B) = 1 then v(A) = 1 or v(∗B) = 0
14. if v(A ∨ B) = 0 then v(A) = 0 and v(B) = 0
15. if v(A ⊃ B) = 1 then v(A) = 0 or v(B) = 1
16. if v(A ⊃ B) = 1 then v(∗A) = 1 or v(∗B) = 0
17. if v(A ⊃ B) = 0 then v(∗A) = 0 and v(B) = 0
18. if v(∼(A ∧ B)) = 1 then v(A) = 0 or v(∗B) = 1
19. if v(∼(A ∧ B)) = 1 then v(∗A) = 1 or v(B) = 0
20. if v(∼(A ∧ B)) = 0 then v(∗A) = 0 and v(∗B) = 0
21. if v(∼(A ≡ B)) = 1 then v(A ⊃ B) = 0 or v(∼(B ⊃ A)) = 1
22. if v(∼(A ≡ B)) = 1 then v(∼(A ⊃ B)) = 1 or v(B ⊃ A) = 0
23. if v(∼(A ≡ B)) = 0 then v(∼(A ⊃ B)) = v(∼(B ⊃ A)) = 0
24. if v(A) = 0 then v(∗A) = 1
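Clauses of this form lend themselves to mechanical checking. A minimal sketch (not from the slides; the tuple encoding of formulas and the dict representation of the partial valuation v are assumptions) that tests only clauses 2 and 3, the conjunction clauses:

```python
# Sketch: a partial valuation v is a dict from formulas to 0/1; formulas
# are atoms (strings) or tuples such as ('and', A, B). Formulas outside
# the dict are undetermined, as befits a partial function.

def violates_conjunction_clauses(v):
    """Return True iff v breaks clause 2 or clause 3 for some A ∧ B."""
    for f, val in v.items():
        if isinstance(f, tuple) and f[0] == 'and':
            _, a, b = f
            # clause 2: if v(A ∧ B) = 1 then v(A) = 1 and v(B) = 1
            if val == 1 and not (v.get(a) == 1 and v.get(b) == 1):
                return True
            # clause 3: if v(A ∧ B) = 0 then v(A) = 0 or v(B) = 0
            if val == 0 and not (v.get(a) == 0 or v.get(b) == 0):
                return True
    return False

print(violates_conjunction_clauses({('and', 'p', 'q'): 1, 'p': 1, 'q': 1}))  # False
print(violates_conjunction_clauses({('and', 'p', 'q'): 1, 'p': 0, 'q': 1}))  # True
```

The remaining clauses would be handled the same way, one guard per clause.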



Definition

A1, . . . , An ⊨ B (B is a semantic consequence of A1, . . . , An)

iff

all valuations that verify A1, . . . , An

and for which B is determined,

verify B.

That is:

Definition

A1, . . . , An ⊨ B (B is a semantic consequence of A1, . . . , An)

iff no valuation that verifies A1, . . . , An falsifies B.

PM: three-valued truth-functional semantics

Theorem

If [A1, . . . , An] B is derived in a pCL−-proof for Γ ⊢ G,

then v(B) = 1

whenever v(A1) = . . . = v(An) = 1 and v(B) ∈ {0, 1}.

Corollary

If G is derived in a pCL−-proof for Γ ⊢ G,

then Γ ⊨CL− G. (Soundness)

Theorem

If a prospective proof for Γ ⊢ G halts without G being derived,

then Γ ⊭CL− G. (Completeness)

Note: Tableau method

T A ∧ B
T A
T B

T A ∨ B
F ∗A, T B   |   T A, F ∗B

etc. (read off from semantic clauses)

Non-logicians sometimes apply CL to inconsistent premises.

They consider EFQ (explosion) as a logicians' trick.

Logicians know: EFQ cannot be isolated in CL

avoiding EFQ requires avoiding:

− Addition or Disjunctive Syllogism

− A / B ⊃ A or ∼A ⊃ (B ∧ ∼B) / A

− A / B ⊃ A or A ⊃ (B ∧ ∼B) / ∼A or ∼∼A / A

− etc.

However:

That EFQ cannot be isolated in CL

depends on our view on logic (mere rules vs. procedures).


The upgrade to predicate logic (minus EFQ):

is straightforward

if the procedure stops (not for all Γ and A),

with A derived: then Γ `CL− A

with A not derived: then Γ 0CL− A


2.7 Afterthought

Hintikka:

distinction between rules and heuristics

comparison with game of chess

This is a mistake:

· heuristic reasoning leads to sensible proofs

· part of this reasoning can be pushed into the (object-language) proofs

Moreover: truth-in-a-model is a touchy matter:

· given CL-models one can distinguish valid consequences from sensible

consequences

BUT:

· there is a semantics that is adequate for sensible reasoning in CL

A is a CL−-consequence of Γ (no CL−-model of Γ falsifies A)

iff

A is a sensible CL-consequence of Γ

in other words:

sensibility can be incorporated into truth


3 Problem-solving processes

3.1 Aim and introductory remarks

3.2 Problem-solving processes: first elements

3.3 An example

3.4 The rules and the permissions and obligations

3.5 Answerable questions

3.6 Variants, extensions and comments


3.1 Aim and introductory remarks

backbone of formal approach to problem solving

· aim: explication of problem solving processes (psps)

· backbone: solve ?A, ∼A by deriving A or ∼A from Γ by CL

· empirical means (observation and experiment)

· + new available information (not originally seen as relevant)

(easy extension)

· + corrective and ampliative logics, handling inconsistency, . . .

includes forming new hypotheses

adaptive logics: control by conditions and marking definition

(easy extension)

· + devise new empirical means: future research

(seems within reach)

· + model-based reasoning, . . . : future research


Plan

given a logic (or a set of logics) L, we can handle the heuristics

(see previous lecture)

viz. define the procedure

for solving ?A, ∼A by deriving A or ∼A from Γ by L

adaptive logics enable us to explicate the reasoning behind many psps

(see next lecture)


Background

• philosophy of science: Nickles, Meheus, Batens

• erotetic logic (varying on Wisniewski)

• logic · adaptive logics

· prospective dynamics

· procedures

problem determined by (changing) constraints

· conditions on the solution

· methodological instructions / heuristics / examples

· certainties (conceptual system . . . )


Formal approach

formal but not logic with the usual connotations

proofs vs. psps (problem solving processes):

· success vs. success?

· arbitrary sequence of applications of rules vs. goal directed

· infinite consequence set vs. unique aim (possibly unspecified at outset)

· useless subsequences vs. unsuccessful subsequences

· deductive vs. also other forms of reasoning

· CL vs. multiplicity of logics


differences partly rely on confusion

· proof search is goal-directed process (and is a psp)

· proof search is not always successful

· no arbitrary sequences result from proof search

· proof search for one formula from given premises

(but set of problems solvable by certain means)

· unsuccessful subsequences in proof search

no ‘useless’ subsequences in goal-directed proofs

· that all logic is deductive (or is CL) is a plain prejudice


3.2 Problem-solving processes: first elements

terminology: psp refers to explicandum and to explicatum

• psps contain unsuccessful subsequences

· justified at some point in the psp

· not justified any more at later point

· and vice versa

‘unsuccessful’ is a dynamic property


• psps require prospective dynamics + derived problems

· prospective dynamics (previous lecture)

now breadth first (better w.r.t. problems)

· derived problems:

?A, ∼A (problem)
. . .
[B1, . . . , Bn] A (if B1, . . . , Bn are true, then so is A)
?B1, ∼B1, . . . , ?Bn, ∼Bn (derived problem)
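The step from a conditional line to a derived problem can be sketched as follows (a hypothetical encoding, not from the slides: a declarative line is a pair of a condition list and a formula, and questions are rendered as strings):

```python
# Sketch of deriving a problem: from target A and a line [B1, ..., Bn] A,
# derive the problem ?B1, ~B1, ..., ?Bn, ~Bn.

def derive_problem(target, line):
    condition, formula = line
    if formula != target or not condition:
        # the rule needs a line ending in the target, with a non-empty condition
        return None
    return [f'?{b}, ~{b}' for b in condition]

print(derive_problem('A', (['B1', 'B2'], 'A')))  # ['?B1, ~B1', '?B2, ~B2']
```

Solving any of the derived questions then lets the condition be discharged step by step.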

Lines occurring in a psp

problem lines: problem = non-empty set of questions

declarative lines

· conditional: [B1, . . . , Bn] A

· unconditional: [∅] A, viz. A


a stage of a psp: sequence of lines

a psp: chain of stages

next stage obtained by adding new line

marks may change (governed by marking definitions)

relation between stages governed by procedure


rehearsal

· the complement of a formula

· a-formulas and b-formulas (a and b)

· formula analysing rules and condition analysing rules

· pp(A, B) (A is a positive part of B)

· the Prem rule

· EM, EM0 and Trans

· direct answer to a question / problem


Specific rules

Where ?M, ∼M is the main (or original) problem:

Main Start a psp with the line:

1 ?M, ∼M Main

Target rule (to choose a target that one tries to obtain)

Target If P is the problem of an unmarked problem line, and A is a

direct answer of a member of P, then one may add:

k [A] A Target

Derive problems:

DP If A is an unmarked target from problem line i

and [B1, . . . , Bn] A is the formula of an unmarked line j,

then one may add:

k ?B1, ∼B1, . . . , ?Bn, ∼Bn i, j; DP


3.3 An example

main problem: ?p ∨ q, ∼(p ∨ q)

premise set: ∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u

∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u

1 ?p ∨ q, ∼(p ∨ q) Main

2 [∼(p ∨ q)] ∼(p ∨ q) Target D3

3 [∼p, ∼q] ∼(p ∨ q) 2; C∼∨E D3

4 [p ∨ q] p ∨ q Target

5 [p] p ∨ q 4; C∨E D5

6 [q] p ∨ q 4; C∨E

∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u

1 ?p ∨ q, ∼(p ∨ q) Main

4 [p ∨ q] p ∨ q Target

6 [q] p ∨ q 4; C∨E

7 ?q, ∼q 4, 6; DP

pursued answer: q

8 [q] q Target

9 (q ∨ u) ⊃ (∼t ∨ q) Prem

4 and 6 have no premise in their path

10 [q ∨ u] ∼t ∨ q 9; ⊃E

11 [q] ∼t ∨ q 10; C∨E D12

12 [q, t] q 11; ∨E I12

12: inoperative line

13 [u] ∼t ∨ q 10; C∨E

14 [u, t] q 13; ∨E

15 ?u, ∼u, ?t, ∼t 8, 14; DP

16 [t] t Target

cleaning up for lack of space

∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u

8 [q] q Target

. . . . . . . . .

14 [u, t] q 13; ∨E S23 R24

15 ?u, ∼u, ?t, ∼t 8, 14; DP R23

16 [t] t Target R23

17 (r ∧ t) ∨ s Prem

18 [∼s] r ∧ t 17; ∨E

19 [∼s] t 18; ∧E S22 R23

20 ?s, ∼s 16, 19; DP R22

21 [∼s] ∼s Target R22

22 ∼s Prem

23 t 19, 22; Trans

24 [u] q 14, 23; Trans

∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u H

8 [q] q Target

. . . . . . . . .

14 [u, t] q 13; ∨E S23 R24

15 ?u, ∼u, ?t, ∼t 8, 14; DP R23

16 [t] t Target R23

17 (r ∧ t) ∨ s Prem

18 [∼s] r ∧ t 17; ∨E

19 [∼s] t 18; ∧E S22 R23

20 ?s, ∼s 16, 19; DP R22

21 [∼s] ∼s Target R22

22 ∼s Prem

23 t 19, 22; Trans

24 [u] q 14, 23; Trans

25 ?u, ∼u 8, 24; DP

cleaning up for lack of space

∼s, ∼u ∨ r, (r ∧ t) ∨ s, (q ∨ u) ⊃ (∼t ∨ q), t ⊃ u

1 ?p ∨ q, ∼(p ∨ q) Main R31

. . .

6 [q] p ∨ q 4; C∨E S30 R31

. . .

23 t 19, 22; Trans

24 [u] q 14, 23; Trans S29 R30

25 ?u, ∼u 8, 24; DP R29

26 [u] u Target R29

27 t ⊃ u Prem

28 [t] u 27; ⊃E S28 R29

29 u 23, 28; Trans

30 q 24, 29; Trans

31 p ∨ q 6, 30; Trans


3.4 The rules and the permissions and obligations

Prem, FAR, CAR, EM, EM0, Trans, Main, Target, DP

permissions + further comments on some rules

(marking definitions follow)


Main Start a psp with the line:

1 ?M, ∼M Main

Target If P is the problem of an unmarked problem line,

and A is a direct answer of a member of P,

then one may add:

k [A] A Target

Prem If A is an unmarked target, B ∈ Γ, and pp(A, B),

then one may add:

k B Prem

Formula analysing rules (bring one closer to a target):

[∆] a / [∆] a1 and [∆] a2

[∆] b / [∆ ∪ {∗b2}] b1 and [∆ ∪ {∗b1}] b2

FAR If C is an unmarked target,

[∆] A is the formula of an unmarked line i,

[∆] A / [∆ ∪ ∆′] B is a formula analysing rule,

and pp(C, B),

then one may add:

k [∆ ∪ ∆′] B i; R

in which R is the name of the formula analysing rule.
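As an illustration, one formula analysing rule, ∨E, can be sketched like this (an assumed tuple encoding of formulas, not from the slides; ∗ is the complement, here implemented by toggling an outer negation, and the pp(C, B) target check of FAR is omitted):

```python
# Sketch of ∨E as a formula analysing rule:
# [∆] A ∨ B  /  [∆ ∪ {∗B}] A  and  [∆ ∪ {∗A}] B

def complement(f):
    # ∗ toggles an outer negation: ∗(∼A) = A, ∗A = ∼A
    return f[1] if isinstance(f, tuple) and f[0] == 'not' else ('not', f)

def or_elim(line):
    condition, f = line  # a line is a (condition set, formula) pair
    if not (isinstance(f, tuple) and f[0] == 'or'):
        return []
    _, a, b = f
    return [(condition | {complement(b)}, a),
            (condition | {complement(a)}, b)]

# mirrors line 18 of the example: from (r ∧ t) ∨ s derive [∼s] r ∧ t
print(or_elim((frozenset(), ('or', ('and', 'r', 't'), 's'))))
```

In the worked example this is exactly the move from the premise (r ∧ t) ∨ s to the conditional line [∼s] r ∧ t.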

Condition analysing rules (reveal other means to reach a target):

[∆ ∪ {a}] A / [∆ ∪ {a1, a2}] A

[∆ ∪ {b}] A / [∆ ∪ {b1}] A and [∆ ∪ {b2}] A

CAR If A is an unmarked target,

[∆ ∪ B] A is the formula of an unmarked line i,

and [∆ ∪ B] A / [∆ ∪ ∆′] A is a condition analysing rule,

then one may add:

k [∆ ∪ ∆′] A i; R

in which R is the name of the condition analysing rule.


Eliminate some problems without answering them:

EM0 If [∆ ∪ ∗A] A is the formula of a line i that is neither

R-marked nor I-marked, then one may add:

k [∆] A i; EM0

EM If A is an unmarked target,

[∆ ∪ B] A and [∆′ ∪ ∼B] A are the respective formulas of

the unmarked or only D-marked lines i and j,

and ∆ ⊆ ∆′ or ∆′ ⊆ ∆,

then one may add:

k [∆ ∪ ∆′] A i, j; EM


eliminate obtained elements from a condition (and solved questions

from a problem)

and

summarize remaining problems (and paths):

Trans If A is an unmarked target,

and [∆ ∪ B] A and [∆′] B are the respective formulas of the

at most S-marked (not R-, I- or D-marked) lines i and j,

then one may add:

k [∆ ∪ ∆′] A i, j; Trans
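Under an assumed encoding of lines as (condition, formula) pairs with set-valued conditions (not from the slides), Trans can be sketched as:

```python
# Sketch of Trans: from [∆ ∪ {B}] A (line i) and [∆′] B (line j),
# derive [∆ ∪ ∆′] A. The marking checks on lines i and j are omitted.

def trans(line_i, line_j):
    cond_i, a = line_i
    cond_j, b = line_j
    if b not in cond_i:
        return None  # Trans only applies when B occurs in the condition of line i
    return ((cond_i - {b}) | cond_j, a)

# mirrors the example: line 14 is [u, t] q, line 23 is t (condition ∅);
# Trans yields line 24, [u] q
print(trans(({'u', 't'}, 'q'), (set(), 't')))  # ({'u'}, 'q')
```

When line j is unconditional, as in the example, the effect is simply to discharge one element of the condition.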


handle derived problems:

DP If A is an unmarked target from problem line i

and [B1, . . . , Bn] A is the formula of an unmarked line j,

then one may add:

k ?B1, ∼B1, . . . , ?Bn, ∼Bn i, j; DP

no instruction for applying EFQ

in view of the intended applications

(deriving predictions, finding explanations, etc.)

the only exception seems to be: answering ?Γ ⊢CL A, Γ ⊬CL A

but every possible application seems to require CL−.


Marking definitions

redundant lines are R-marked: (unconditional A identified with [∅] A)

Definition 1 An at most S-marked declarative line i that has [∆] A as

its formula is R-marked at a stage iff, at that stage, [Θ] A is the formula

of a line for some Θ ⊂ ∆.

Definition 2 An unmarked problem line i is R-marked at a stage iff, at

that stage, a direct answer A of a question of line i is the formula of a

line.
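Definition 1 amounts to a proper-subset test. A minimal sketch (assumed encoding, not from the slides: conditions as sets, an unconditional formula as [∅] A, marking checks omitted):

```python
# Sketch of Definition 1: a declarative line with formula [∆] A is
# R-marked iff some line has formula [Θ] A with Θ a proper subset of ∆.

def r_marked(line, other_lines):
    cond, formula = line
    return any(theta < cond and f == formula  # '<' is proper subset on sets
               for theta, f in other_lines)

stage = [(set(), 'p'), ({'q'}, 'r')]
print(r_marked(({'q'}, 'p'), stage))  # True: [∅] p makes [q] p redundant
print(r_marked(({'q'}, 'r'), stage))  # False: no proper subset of {q} yields r
```

The proper-subset requirement matters: a line is not made redundant by itself or by an exact duplicate, only by a strictly less demanding derivation of the same formula.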


the following definitions require:

· target from a problem line

· resolution line

· direct target from

· target sequence

· grounded target sequence


inoperative lines are I-marked (not useful for extant problem):

Definition 3 An at most S-marked target line that has [A] A as its

formula is I-marked at a stage iff every problem line from which A is a

target is marked at that stage.

Definition 4 An at most S-marked resolution line of which [∆1] A1 is

the formula for some ∆1 ≠ ∅ is I-marked at a stage iff, at that stage,

for every grounded target sequence 〈[∆n] An, . . . , [∆1] A1〉,

(i) some target [Ai] Ai (1 ≤ i ≤ n) is marked, or

(ii) {An, . . . , A1} ∩ ∆1 ≠ ∅, or

(iii) ∆1 ∪ . . . ∪ ∆n ∪ Γs is flatly inconsistent.

Definition 5 An unmarked problem line is I-marked iff no unmarked

resolution line generates it.


Dead end lines are D-marked (no further action from such line)

· A is a dead end (A is literal and not a positive part of a premise)

· CAR-descendant of [∆ ∪ A] B

Definition 6 An at most S-marked resolution line with formula [∆] A is

D-marked at a stage iff some B ∈ ∆ is a dead end or, at that stage, all

CAR-descendants of [∆] A occur in the psp and are D-marked.

Definition 7 An at most S-marked target line with formula [A] A is

D-marked at a stage iff A is a dead end or no further action can be

taken in view of target A.



it can be shown that, for all consistent Γ: H

(i) the procedure applied to Γ and ?{A, ∼A} results in the answer A

iff Γ ⊢CL A

and

(ii) the procedure applied to Γ and ?{A, ∼A} stops without the main

problem being answered, or results in the answer ∼A, iff Γ ⊬CL A

for the predicative case:

if the procedure applied to Γ and ?{A, ∼A} stops without the main

problem being answered, or results in the answer ∼A, then Γ ⊬CL A

because

(ii′) the procedure applied to Γ and ?{A, ∼A} stops without the main

problem being answered, or results in the answer ∼A, or does not stop,

iff Γ ⊬CL A

H

Speed up the procedure by S-marks

· Γs: union of Γ and of the set of conditionless formulas

that occur at stage s of the psp

Definition 8 An R-unmarked resolution line in which [∆1] A1 is derived is

S-marked iff

(i) ∆1 ∩ Γs ≠ ∅, or

(ii) for some target sequence 〈[∆n] An, . . . , [∆1] A1〉, {An} ∪ ∆1 is

flatly inconsistent whereas ∆1 is not flatly inconsistent, or

(iii) ∆1 ⊂ ∆n ∪ . . . ∪ ∆2 for some target sequence

〈[∆n] An, . . . , [∆1] A1〉.



3.5 Answerable questions H

A is a set of questions that can be answered by standard means

(for example: observation or experiment)

idea: whenever an unmarked target is a positive part of a direct answer

of a member of A, that question can be answered (outside the proof)

and the answer can be introduced as a new premise

whether some ?{A, ∼A} ∈ A is launched depends on pragmatic

(economic) considerations

H


H

“dead end” needs to be redefined: A is a dead end iff it is not a

positive part of a premise or of a direct answer to a member of A.

a target A justifies asking the answerable question

· ?{A, ∼A}
· ?{A ∧ B, ∼(A ∧ B)}
· ?{A ∨ B, ∼(A ∨ B)}
· ?{B ⊃ A, ∼(B ⊃ A)}
· etc.

so A need not be an unmarked target in order for ?{A, ∼A} to be

launched

H


H

a launched question is answered outside the psp

its answer is introduced as a new premise

this is awkward from a logical point of view

so better

redefine A as a set of couples 〈?{A, ∼A}, B〉 with B ∈ {A, ∼A}

B is determined but unknown to the problem solver

that A is the target justifies launching 〈?{A, ∼A}, A〉 just as much as it

justifies launching 〈?{A, ∼A}, ∼A〉 (etc.)

H


instruction:

New If A is the target of the unmarked target line i, pp(A, B) or

pp(A, ∼B), and 〈?{B, ∼B}, C〉 ∈ A (where C ∈ {B, ∼B}), then one may add:

k C i; New

obvious from New:

which member of A was launched in view of which target

H

Γ = {(q ∧ r) ⊃ p, ∼s ∨ q, s, . . .} A = {〈?{q ⊃ r, ∼(q ⊃ r)}, q ⊃ r〉, . . .}.

1 ?{p, ∼p} Main R19

2 [p] p Target R19

3 (q ∧ r) ⊃ p Prem

4 [q ∧ r] p 3; ⊃E R19

5 [q, r] p 4; C∧E S9 R10

6 ?{q, ∼q}, ?{r, ∼r} 2, 5; DP I10

7 [r] r Target I10

8 q ⊃ r 7; New

9 [q] r 8; C⊃E I10

10 [q] p 5, 9; Trans R19

11 ?{q, ∼q} 2, 10; DP R18

12 [q] q Target R18

13 ∼s ∨ q Prem

14 [s] q 13; ∨E S17 R18

15 ?{s, ∼s} 12, 14; DP R17

16 [s] s Target R17

17 s Prem

18 q 14, 17; Trans

19 p 10, 18; Trans


3.6 Variants, extensions and comments

• procedural variants

• extensions to other logics

including adaptive logics to handle inconsistencies, abduction,

inductive generalization, . . . (fifth lecture)

procedure was not intended to be maximally efficient in view of its aim:

(i) explicate actual problem-solving processes

(ii) avoid steps that are useless with respect to the main problem, the

premises and the directly answerable questions

(iii) easily generalizable to other logics (devise prospective dynamics)

procedure is probably not maximally efficient in its kind

· this requires further research

· it does not undermine the main aim of the enterprise:

to delineate sensible reasoning


4 ENTER Adaptive Logics

4.1 The problem

4.2 Characterization of an adaptive Logic

4.3 Annotated dynamic proofs: Reliability

4.4 Semantics

4.5 Annotated dynamic proofs: Minimal Abnormality

4.6 Some further examples

4.7 Some properties

4.8 Combined adaptive logics



4.1 The problem (repetition) H

many reasoning processes in the sciences (and elsewhere) display

an external dynamics

non-monotonic

an internal dynamics

revise conclusions as insights in premises grow

⇒ gain technically sound control on the internal dynamics

H

examples

interpret as consistently as possible a theory that turned out inconsistent

inductive generalization

inductive prediction

compatibility

interpreting a person’s position during an ongoing discussion

finding a (potential or actual) explanation

H


H

no positive test for Γ ⊢ A

⊢ reasoning

adaptive logic internal dynamics

↓ explicate

dynamic proof theory of a.l.

What is an adaptive logic?

What is a dynamic proof theory?



4.2 Characterization of an Adaptive Logic (only the best studied kind) H

· lower limit logic

monotonic and compact logic

· set of abnormalities Ω:

characterized by a (possibly restricted) logical form

· strategy :

Reliability, Minimal Abnormality, . . .

H


idea: for each abnormality (separately):

consider it as false, unless this is impossible in view of the premises

example: {∼p, p ∨ r, ∼q, q ∨ s, p}

strategy required because idea is ambiguous (see below)

upper limit logic ULL:

LLL + axiom warranting that members of Ω are logically false
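For the propositional example this idea can be checked by brute force. A CLuN-model may verify an atom together with its negation (a glut), so, as long as ∼ occurs only on atoms, a model can be encoded as a classical valuation plus a set of glutted atoms. The sketch below (an illustration, not part of the text) enumerates the models of {∼p, p ∨ r, ∼q, q ∨ s, p} and computes the minimal Dab-consequences, writing the abnormality A ∧ ∼A simply as the atom A:

```python
from itertools import combinations, product

ATOMS = ['p', 'q', 'r', 's']

def atom(a): return ('atom', a)
def neg(a): return ('not', a)            # ∼ on atoms only — enough here

def holds(f, val, glut):
    if f[0] == 'atom':
        return val[f[1]]
    if f[0] == 'not':                    # CLuN: gluts allowed, no gaps
        return (not val[f[1]]) or f[1] in glut
    if f[0] == 'or':
        return holds(f[1], val, glut) or holds(f[2], val, glut)

# Γ = {∼p, p ∨ r, ∼q, q ∨ s, p}
GAMMA = [neg('p'), ('or', atom('p'), atom('r')), neg('q'),
         ('or', atom('q'), atom('s')), atom('p')]

def models(premises):
    """All CLuN-models of the premises over ATOMS, as (valuation, gluts)."""
    for bits in product([False, True], repeat=len(ATOMS)):
        val = dict(zip(ATOMS, bits))
        true_atoms = [a for a in ATOMS if val[a]]
        for n in range(len(true_atoms) + 1):
            for g in combinations(true_atoms, n):
                if all(holds(f, val, set(g)) for f in premises):
                    yield val, set(g)

MODELS = list(models(GAMMA))

def dab_consequence(delta):
    """Dab(∆) is an LLL-consequence iff every model verifies a member of ∆."""
    return all(any(a in glut for a in delta) for _, glut in MODELS)

dabs = [set(d) for n in range(1, len(ATOMS) + 1)
        for d in combinations(ATOMS, n) if dab_consequence(d)]
minimal = [d for d in dabs if not any(d2 < d for d2 in dabs)]
print(minimal)   # → [{'p'}]: only p ∧ ∼p is unavoidable
```

Only p ∧ ∼p comes out as a minimal Dab-consequence: r hinges on the unreliable p, while s remains derivable, since q ∧ ∼q can still be considered false.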

H


the characterization provides AL with: H

· a semantics

· a dynamic proof theory

· soundness and completeness proofs

· proofs of many other properties (Strong Reassurance,

Proof Invariance, ULL-consequences for normal premise sets, . . . )

AL interprets the premises as ‘normally as possible’

(no positive test!)

H


Whence the need for a strategy? H

definitions:

Dab-formula Dab(∆): disjunction of finite ∆ ⊂ Ω

Dab(∆) is a minimal Dab-consequence of Γ:

Γ ⊢LLL Dab(∆) and, for all ∆′ ⊂ ∆, Γ ⊬LLL Dab(∆′)

Interpreting Γ as normally as possible

Simple strategy: take A ∈ Ω to be false, unless Γ ⊢LLL A

The Simple strategy is inadequate if

Dab(∆) is a minimal Dab-consequence of Γ and ∆ is not a singleton.

· Reliability strategy: consider all members of ∆ as unreliable.

· Minimal Abnormality strategy (see below)

· . . .

H


Adaptive logics: example 1: ACLuNr

· lower limit logic: CLuN (CL+ + A ∨ ∼A + ¬)

· set of abnormalities: Ω = {∃(A ∧ ∼A) | A ∈ F}
(abnormality = occurrence of (existentially closed) contradiction)

· strategy: Reliability

upper limit logic: CL = CLuN + (A ∧ ∼A) ⊃ B

semantically: the CLuN-models that verify no inconsistency

H


Adaptive logics: example 2: IL

· lower limit logic: CL

· set of abnormalities: Ω = {∃A ∧ ∃∼A | A ∈ F}
(abnormality = the absence of uniformity)

· strategy: Reliability

upper limit logic: UCL = CL + ∃A ⊃ ∀A

semantically: the models in which, for all predicates π of rank r,

v(π) ∈ {∅, D^r}



4.3 Annotated dynamic proofs: Reliability (rules of inference + marking definition) H

a line consists of

· a line number

· a formula

· a justification (line numbers + rule)

· a condition (finite subset of Ω)

for all adaptive logics of the described kind:

A is derivable on the condition ∆ (in the dynamic proof)

iff

A ∨ Dab(∆) is derivable (on the condition ∅) (in the dynamic proof)

iff

Γ ⊢LLL A ∨ Dab(∆)

H

Rules of inference (depend on LLL and Ω, not on the strategy) H

PREM  If A ∈ Γ:
         . . .   . . .
         A       ∅

RU    If A1, . . . , An ⊢LLL B:
         A1      ∆1
         . . .   . . .
         An      ∆n
         B       ∆1 ∪ . . . ∪ ∆n

RC    If A1, . . . , An ⊢LLL B ∨ Dab(Θ):
         A1      ∆1
         . . .   . . .
         An      ∆n
         B       ∆1 ∪ . . . ∪ ∆n ∪ Θ

H


Marking Definition for Reliability H

where Dab(∆1), . . . , Dab(∆n) are the minimal Dab-formulas derived

on the condition ∅ at stage s, Us(Γ) = ∆1 ∪ . . . ∪ ∆n

Definition

where ∆ is the condition of line i, line i is marked (at stage s) iff

∆ ∩ Us(Γ) ≠ ∅

⇒ idea for consequence set applied to stage of proof

Marking Definition for Minimal Abnormality: later
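The marking computation itself is mechanical. A minimal sketch, representing conditions and Dab-formulas as sets of abnormalities (the names A, B, C, D are placeholders):

```python
def unreliable(minimal_dabs):
    """U_s(Γ): union of the minimal Dab-formulas derived on condition ∅."""
    return set().union(*minimal_dabs) if minimal_dabs else set()

def is_marked(condition, minimal_dabs):
    """Reliability: a line is marked at stage s iff ∆ ∩ U_s(Γ) ≠ ∅."""
    return bool(condition & unreliable(minimal_dabs))

# stage at which Dab({A}) and Dab({B, C}) are the minimal Dab-formulas
stage = [{'A'}, {'B', 'C'}]
print(unreliable(stage))          # U_s(Γ) = {A, B, C}
print(is_marked({'B'}, stage))    # → True: the condition meets U_s(Γ)
print(is_marked({'D'}, stage))    # → False: the line stays unmarked
```

Because U_s(Γ) is recomputed at every stage, marks may come and go as further Dab-formulas are derived.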

H


Derivability at a stage vs. final derivability H

idea: A derived in line i and the proof is stable with respect to i

stability concerns a specific consequence and a specific line !

Definition

A is finally derived from Γ on line i of a proof at stage s iff

(i) A is the second element of line i,

(ii) line i is not marked at stage s, and

(iii) any extension of the proof may be further extended in such a way

that line i is unmarked.

Definition

Γ ⊢AL A (A is finally AL-derivable from Γ) iff A is finally derived in a

line of a proof from Γ.

Even at the predicative level, there are criteria for final derivability.

H


LLL invalidates certain rules of ULL

AL invalidates certain applications of rules of ULL

ULL extends LLL by validating some further rules

AL extends LLL by validating some applications of some further rules

H

example

adaptive logic: IL

· lower limit logic: CL

· set of abnormalities: Ω = {∃A ∧ ∃∼A | A ∈ F}

· strategy: Reliability

Γ = {(Pa ∧ ∼Qa) ∧ ∼Ra, ∼Pb ∧ (Qb ∧ Rb), Pc ∧ Rc, Qd ∧ ∼Pe}

H


1 (Pa ∧ ∼Qa) ∧ ∼Ra  PREM  ∅
2 ∼Pb ∧ (Qb ∧ Rb)  PREM  ∅
3 Pc ∧ Rc  PREM  ∅
4 Qd ∧ ∼Pe  PREM  ∅
5 (∀x)(Qx ⊃ Rx)  2; RC  {Qx ⊃ Rx}
6 Rd  4, 5; RU  {Qx ⊃ Rx}
7 (∀x)(∼Px ⊃ Qx)  2; RC  {∼Px ⊃ Qx}
8 Qe  4, 7; RU  {∼Px ⊃ Qx}

number of data of each form immaterial

⇒ same generalizations derivable from Pa and from Pa, Pb

H


1 (Pa ∧ ∼Qa) ∧ ∼Ra  PREM  ∅
2 ∼Pb ∧ (Qb ∧ Rb)  PREM  ∅
3 Pc ∧ Rc  PREM  ∅
4 Qd ∧ ∼Pe  PREM  ∅
· · ·
9 (∀x)(Px ⊃ ∼Rx)  1; RC  {Px ⊃ ∼Rx}  marked at stage 10
10 Dab({Px ⊃ ∼Rx})  1, 3; RU  ∅

H

H


1 (Pa ∧ ∼Qa) ∧ ∼Ra  PREM  ∅
2 ∼Pb ∧ (Qb ∧ Rb)  PREM  ∅
3 Pc ∧ Rc  PREM  ∅
4 Qd ∧ ∼Pe  PREM  ∅
· · ·
11 (∀x)(Px ⊃ ∼Qx)  1; RC  {Px ⊃ ∼Qx}  marked at stage 17
12 ∼Qc  3, 11; RU  {Px ⊃ ∼Qx}  marked at stage 17
13 (∀x)(Rx ⊃ Qx)  2; RC  {Rx ⊃ Qx}  marked at stage 17
14 Qc  3, 13; RU  {Rx ⊃ Qx}  marked at stage 17
15 (∃x)∼(Px ⊃ ∼Qx) ∨ (∃x)∼(Rx ⊃ Qx)  3; RU  ∅
16 (∃x)(Px ⊃ ∼Qx) ∧ (∃x)(Rx ⊃ Qx)  1, 2; RU  ∅
17 Dab({Px ⊃ ∼Qx, Rx ⊃ Qx})  15, 16; RU  ∅

H


1 (Pa ∧ ∼Qa) ∧ ∼Ra  PREM  ∅
2 ∼Pb ∧ (Qb ∧ Rb)  PREM  ∅
3 Pc ∧ Rc  PREM  ∅
4 Qd ∧ ∼Pe  PREM  ∅
· · ·
18 (∀x)(Px ⊃ Sx)  4; RC  {Px ⊃ Sx}  marked at stage 22
19 Sa  1, 18; RU  {Px ⊃ Sx}  marked at stage 22
20 (∃x)∼(Px ⊃ Sx) ∨ (∃x)∼(Px ⊃ ∼Sx)  3; RU  ∅
21 (∃x)(Px ⊃ Sx) ∧ (∃x)(Px ⊃ ∼Sx)  4; RU  ∅
22 Dab({Px ⊃ Sx, Px ⊃ ∼Sx})  20, 21; RU  ∅

H


Some theoretical stuff H

a stage (of a proof) is a sequence of lines

a proof is a chain of (1 or more) stages

a subsequent stage is obtained by adding a line to the stage

the marking definition determines which lines of the stage are marked

(marks may come and go with the stage)

an extension of a proof P is a proof P′ that has P as its initial fragment

Definition (repetition)

A is finally derived from Γ on line i of a proof at stage s iff

(i) A is the second element of line i,

(ii) line i is not marked at stage s, and

(iii) any extension of the proof may be further extended in such a way

that line i is unmarked.

H


H

for some logics (esp. Minimal Abnormality strategy), premise sets and

conclusions, stability (final derivability) is reached only after infinitely

many stages

if a stage has infinitely many lines, the next stage is reached by inserting

a line (variant)

pace Leon Horsten (transfinite proofs)

H

Game theoretic approaches to final derivability

example:

proponent provides proof P in which A is derived in an unmarked line i

A is finally derived (in that line)

iff

any extension (by the opponent) of P into a P′ in which i is marked

can be extended (by the proponent) into a P′′ in which i is unmarked

the proponent has an ‘answer’ to any ‘attack’



4.4 Semantics H

Dab(∆) is a minimal Dab-consequence of Γ:

Γ ⊨LLL Dab(∆) and, for all ∆′ ⊂ ∆, Γ ⊭LLL Dab(∆′)

where M is an LLL-model: Ab(M) = {A ∈ Ω | M ⊨ A}

Reliability

where Dab(∆1), Dab(∆2), . . . are the minimal Dab-consequences of Γ,

U(Γ) = ∆1 ∪ ∆2 ∪ . . .

an LLL-model M of Γ is reliable iff Ab(M) ⊆ U(Γ)

Γ ⊨AL A iff all reliable models of Γ verify A

H


Minimal Abnormality H

an LLL-model M of Γ is minimally abnormal iff there is no LLL-model

M′ of Γ for which Ab(M′) ⊂ Ab(M)

Γ ⊨AL A iff all minimally abnormal models of Γ verify A

H

[Figure: two Euler diagrams of the LLL-models, the ULL-models and the models of Γ — left: abnormal Γ; right: normal Γ]

H


there are no AL-models, but only AL-models of some Γ

all LLL-models are AL-models of some Γ

the AL-semantics selects some LLL-models of Γ as AL-models of Γ



4.5 Annotated dynamic proofs: Minimal Abnormality H

rules (as for Reliability) and marking definition

where Dab(∆1), . . . , Dab(∆n) are the minimal Dab-formulas derived

on the condition ∅ at stage s

Φ∘s(Γ): the set of all sets that contain one member of each ∆i

Φ⋆s(Γ): contains, for any ϕ ∈ Φ∘s(Γ), CnLLL(ϕ) ∩ Ω

Φs(Γ): the φ ∈ Φ⋆s(Γ) that are not proper supersets of a φ′ ∈ Φ⋆s(Γ)

Definition

where ∆ is the condition of line i, line i is marked at stage s iff, where

A is derived on the condition ∆ at line i, (i) there is no ϕ ∈ Φs(Γ) such

that ϕ ∩ ∆ = ∅, or (ii) for some ϕ ∈ Φs(Γ), there is no line at which A

is derived on a condition Θ for which ϕ ∩ Θ = ∅
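The choice-set construction can be computed directly from the minimal Dab-formulas. A sketch that identifies CnLLL(ϕ) ∩ Ω with ϕ itself — exact when the abnormalities are LLL-independent — and implements both clauses of the marking definition:

```python
from itertools import product

def phi_s(minimal_dabs):
    """The ⊆-minimal choice sets of the minimal Dab-formulas
    (identifying Cn_LLL(ϕ) ∩ Ω with ϕ: LLL-independent abnormalities)."""
    choice = {frozenset(c) for c in product(*minimal_dabs)}
    return {p for p in choice if not any(q < p for q in choice)}

def ma_marked(condition, conditions_for_A, minimal_dabs):
    """Minimal Abnormality marking for a line deriving A on `condition`;
    conditions_for_A: all conditions on which A is derived at the stage."""
    Phi = phi_s(minimal_dabs)
    if not any(not (p & condition) for p in Phi):        # clause (i)
        return True
    return any(all(p & t for t in conditions_for_A)      # clause (ii)
               for p in Phi)

# one minimal Dab-formula Dab({a, b}); A is derived on {a} and on {b}
assert phi_s([{'a', 'b'}]) == {frozenset({'a'}), frozenset({'b'})}
assert not ma_marked({'a'}, [{'a'}, {'b'}], [{'a', 'b'}])   # unmarked
assert ma_marked({'a'}, [{'a'}], [{'a', 'b'}])              # marked
```

The middle assertion shows the characteristic Minimal Abnormality effect: a line may stay unmarked because A is also derived elsewhere on a condition that avoids the other disjunct.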

H


example: Γ = {∼p, ∼q, p ∨ q, p ∨ r, q ∨ s}

Γ ⊢ACLuNm r ∨ s

Γ ⊬ACLuNr r ∨ s
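The contrast can be verified mechanically: at the relevant stage the only minimal Dab-formula is (p ∧ ∼p) ∨ (q ∧ ∼q), and r ∨ s is derivable both on {p ∧ ∼p} and on {q ∧ ∼q}. A sketch applying both marking definitions to that stage:

```python
from itertools import product

dabs = [{'p∧∼p', 'q∧∼q'}]        # minimal Dab-formulas at the stage
conds = [{'p∧∼p'}, {'q∧∼q'}]     # conditions on which r ∨ s is derived

# Reliability: both abnormalities are unreliable, so both lines are marked
U = set().union(*dabs)
assert all(c & U for c in conds)            # hence Γ ⊬_ACLuNr r ∨ s

# Minimal Abnormality: the minimal choice sets are {p∧∼p} and {q∧∼q};
# every ϕ leaves some line for r ∨ s whose condition it does not intersect
choice = {frozenset(c) for c in product(*dabs)}
Phi = {p for p in choice if not any(q < p for q in choice)}
unmarked = all(any(not (p & c) for c in conds) for p in Phi)
assert unmarked                             # hence Γ ⊢_ACLuNm r ∨ s
print('Reliability marks both lines; Minimal Abnormality marks neither')
```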



4.6 Some further examples H

corrective

• ACLuNr and ACLuNm (negation gluts)

• other paraconsistent logics as LLL, including ANA (J)

• negation gaps

• gluts/gaps for all logical symbols

• ambiguity adaptive logics (G)

• adaptive zero logic

• corrective deontic logics (J, I)

• (prioritized ial)

• . . .

H

ampliative (+ ampliative and corrective) H

• compatibility (characterization)

• compatibility with inconsistent premises (J, Da, L)

• diagnosis

• prioritized adaptive logics (L, E, Da, J)

• inductive generalization (D, Ln)

• abduction

• inference to the best explanation (J)

• analogies, metaphors

• erotetic inference

• discussions

• . . .

H

incorporation

• flat Rescher–Manor consequence relations (+ extensions)

• partial structures and pragmatic truth (J)

• prioritized Rescher–Manor consequence relations (L, Ti)

• circumscription, defaults, negation as failure, . . .

• dynamic characterization of R→

• . . .

H

applications

• scientific discovery and creativity

• scientific explanation

• diagnosis (E, Da, L, J)

• positions defended / agreed upon in discussions

• changing positions in discussions

• belief revision (K)

• inconsistent arithmetic (Ti)

• evocation of questions from inconsistent premises (J, K)

• inductive statistical explanation (E)

• tentatively eliminating abnormalities

• . . .



4.7 Some properties H

Derivability Adjustment Theorem:

A ∈ CnULL(Γ) iff A ∨ Dab(∆) ∈ CnLLL(Γ) for some ∆.

Strong Reassurance

CnLLL(Γ) ⊆ CnAL(Γ) ⊆ CnULL(Γ)

. . .


4.8 Combined adaptive logics H

· ‘union’: Ω1 ∪ Ω2

· sequential combination

. . .

H


example of a set of adaptive logics to combine: ATi H

· lower limit logic: T

· set of abnormalities: Ωi = {♦iA ∧ ∼A | A ∈ W}
(abnormality is falsehood of an expectancy)

· strategy: Reliability

upper limit logic: Triv = T + ♦A ⊃ A

many possible variants: e.g. Ωi = {♦i∀A ∧ ∼∀A | A ∈ F}

H


the combination H

we want . . . CnAT3(CnAT2(CnAT1(Γ))) (1)

Proofs: (skipping a couple of details)

· apply rules of AT1, AT2, . . . in any order

· Marking definition: at any stage, mark for AT1, next for AT2, . . .

up to the highest ♦i that occurs in the proof

Notwithstanding (1), a criterion may warrant final derivability

after finitely many steps.


5 Prospective Dynamics for Adaptive Logics

5.1 The Problem

5.2 Derivability at a Stage

5.3 Implementation for ACLuNr and ACLuNm

5.4 Implementation for ILr and ILm

5.5 Procedural Criterion for Final Derivability

5.6 Phrase the Criterion in Problem Solving Terms

5.7 What If No Criterion Applies



5.1 The problem H

in general: no positive and no negative test for AL-derivability

What is the aim of the procedure?

1. Procedure for deriving a desired consequence from Γ at a stage (and

on some condition).

2. Can the procedure be continued in such a way that it forms a

criterion for deciding that the formula is finally derived at a line in a

proof from Γ?

3. What if no criterion applies?

to keep the discussion within bounds, I shall only consider the Reliability

strategy for adaptive logics in standard format



5.2 Derivability at a Stage H

Aim: to find out whether Γ ⊢AL A

Alternatively: given Γ and AL, to answer the question ?{A, ∼A}

To derive A at a stage on some condition ∆ we need:

− the prospective instructions for the lower limit logic

− some ‘Basic Schema’ instruction for the adaptive logic

H


General Basic Schema:

A ∨ Dab(Θ)   ∆
A            ∆ ∪ Θ

Basic Schema for ACLuNr and ACLuNm:

∃∼A ∨ B   ∆
∃¬A ∨ B   ∆ ∪ {∃(A ∧ ∼A)}

Basic Schema for ILr and ILm (where A ∈ F):

∃A ∨ B   ∆
∀A ∨ B   ∆ ∪ {∃A ∧ ∃∼A}

H

These rules, together with the standard instructions for phase 1 and

phase 2 (here called phase 1.1 and phase 1.2), lead to A being (or not

being) derived on some condition ∆.


5.3 Implementation for ACLuNr and ACLuNm (propositionally)

Structural rules:

Prem If A ∈ Γ, introduce [∅] A on the condition ∅.

Goal Introduce [G] G on the condition ∅.

EFQ If A ∈ Γ, introduce [¬A] G on the condition ∅.

H

¬A is the classical negation of A; ∗A is the classical complement of A

a          a1       a2       b          b1          b2
A ∧ B      A        B        ¬(A ∧ B)   ∗A          ∗B
A ≡ B      A ⊃ B    B ⊃ A    ¬(A ≡ B)   ¬(A ⊃ B)    ¬(B ⊃ A)
¬(A ∨ B)   ∗A       ∗B       A ∨ B      A           B
¬(A ⊃ B)   A        ∗B       A ⊃ B      ∗A          B
¬¬A        A        A

pp(¬A, ∼A) and pp(∼A, ¬A)
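The table above can be read as a classification function. The following sketch is my own illustration, not part of the lectures: formulas are encoded as nested tuples, ('neg', A) stands for classical negation ¬A, and star computes the classical complement ∗A.

```python
# Illustrative encoding (not from the lectures) of the a/b-analysis table.
# Formulas are nested tuples: ('atom', name), ('neg', A), ('and', A, B),
# ('or', A, B), ('imp', A, B), ('equiv', A, B); 'neg' is classical ¬.

def star(f):
    """Classical complement: *A = ¬A, except *(¬A) = A."""
    return f[1] if f[0] == 'neg' else ('neg', f)

def analyze(f):
    """Return ('a', a1, a2) or ('b', b1, b2) as in the table, else None."""
    op = f[0]
    if op == 'and':
        return ('a', f[1], f[2])
    if op == 'equiv':
        return ('a', ('imp', f[1], f[2]), ('imp', f[2], f[1]))
    if op == 'or':
        return ('b', f[1], f[2])
    if op == 'imp':
        return ('b', star(f[1]), f[2])
    if op == 'neg':
        g = f[1]
        if g[0] == 'and':
            return ('b', star(g[1]), star(g[2]))
        if g[0] == 'equiv':
            return ('b', ('neg', ('imp', g[1], g[2])),
                         ('neg', ('imp', g[2], g[1])))
        if g[0] == 'or':
            return ('a', star(g[1]), star(g[2]))
        if g[0] == 'imp':
            return ('a', g[1], star(g[2]))
        if g[0] == 'neg':
            return ('a', g[1], g[1])
    return None  # atoms and the paraconsistent ∼ are left to the "plus" rules
```

For instance, analyze(('neg', ('imp', p, q))) yields the a-formula components p and ∗q, matching the ¬(A ⊃ B) row.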


Formula analysing rules:

  [∆] a Θ  ⇒  [∆] a1 Θ
  [∆] a Θ  ⇒  [∆] a2 Θ
  [∆] b Θ  ⇒  [∆ ∪ {∗b2}] b1 Θ
  [∆] b Θ  ⇒  [∆ ∪ {∗b1}] b2 Θ

Plus:

  ∼E    [∆] ∼A Θ   ⇒  [∆] ¬A Θ ∪ {A ∧ ∼A}
  ¬∼E   [∆] ¬∼A Θ  ⇒  [∆] A Θ


Condition analysing rules:

  [∆ ∪ {a}] A Θ  ⇒  [∆ ∪ {a1, a2}] A Θ
  [∆ ∪ {b}] A Θ  ⇒  [∆ ∪ {b1}] A Θ
  [∆ ∪ {b}] A Θ  ⇒  [∆ ∪ {b2}] A Θ

Plus:

  C∼E    [∆ ∪ {∼B}] A Θ   ⇒  [∆ ∪ {¬B}] A Θ
  C¬∼E   [∆ ∪ {¬∼B}] A Θ  ⇒  [∆ ∪ {B}] A Θ ∪ {B ∧ ∼B}


Further rules:

  Trans   [∆ ∪ {B}] A Θ  and  [∆′] B Θ′         ⇒  [∆ ∪ ∆′] A Θ ∪ Θ′
  EM      [∆ ∪ {B}] A Θ  and  [∆′ ∪ {¬B}] A Θ′  ⇒  [∆ ∪ ∆′] A Θ ∪ Θ′
  EM0     [∆ ∪ {¬A}] A Θ                        ⇒  [∆] A Θ
  IC      [∆] Dab(Λ ∪ Λ′) Θ ∪ Λ′                ⇒  [∆] Dab(Λ ∪ Λ′) Θ
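For illustration, the bookkeeping these rules perform is simple set manipulation. The encoding below is my own, not the lectures' notation: a prospective line [∆] A Θ becomes a triple (delta, formula, theta), with conditions as frozensets and formulas as nested tuples where ('neg', A) is classical ¬; only Trans, EM and EM0 are sketched.

```python
# Toy encoding (my own illustration): a prospective line [∆] A Θ is a
# triple (delta, formula, theta), with delta the prospective condition
# and theta the adaptive condition, both frozensets.

def trans(line1, line2, b):
    """Trans: from [∆ ∪ {B}] A Θ and [∆′] B Θ′ derive [∆ ∪ ∆′] A Θ ∪ Θ′."""
    (d1, a, t1), (d2, b2, t2) = line1, line2
    assert b in d1 and b2 == b, "line2 must establish a member of line1's condition"
    return ((d1 - {b}) | d2, a, t1 | t2)

def em(line1, line2, b):
    """EM: from [∆ ∪ {B}] A Θ and [∆′ ∪ {¬B}] A Θ′ derive [∆ ∪ ∆′] A Θ ∪ Θ′."""
    (d1, a1, t1), (d2, a2, t2) = line1, line2
    assert b in d1 and ('neg', b) in d2 and a1 == a2
    return ((d1 - {b}) | (d2 - {('neg', b)}), a1, t1 | t2)

def em0(line):
    """EM0: from [∆ ∪ {¬A}] A Θ derive [∆] A Θ."""
    d, a, t = line
    return (d - {('neg', a)}, a, t)
```

Note how Trans discharges B from the first line's prospective condition while accumulating both adaptive conditions, exactly as in the rule.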


5.4 Implementation for ILr and ILm

replace ¬ by the standard negation ∼

− in the table for a-formulas and b-formulas

− in the standard formula analysing rules and condition analysing rules

(without the “plus”)

restore pp as for CL

add CAR and FAR for the quantifiers and for identity

add pp(∀A, ∃A)


Add to the formula analysing rules:

  ∃E′   [∆] ∃A ∨ B Θ  ⇒  [∆] ∀A ∨ B Θ ∪ {∃A ∧ ∃∼A}

Add to the condition analysing rules:

  ∃E′   [∆ ∪ {∀B}] A Θ  ⇒  [∆ ∪ {∃B}] A Θ ∪ {∃B ∧ ∃∼B}

Prem, Goal, [EFQ], Trans, EM, EM0, IC


5.5 Procedural Criterion for Final Derivability

In this section, keep reading ∼ as classical negation.

We obtained A on the condition ∆.

A is finally derivable iff ∆ ∩ U(Γ) = ∅

How can we find out whether ∆ ∩ U(Γ) = ∅?

For all Θ, if Dab(∆) is derivable on a condition Θ with ∆ ∩ Θ = ∅,

i.e. Γ `LLL Dab(∆ ∪ Θ) and ∆ ∩ Θ = ∅,

then Dab(Θ) must be derivable on the condition ∅.
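When the minimal Dab-consequences of Γ are available, the check ∆ ∩ U(Γ) = ∅ is elementary set arithmetic. A minimal sketch of my own, assuming the minimal Dab-consequences are given as sets of abnormalities (in general they are not computable, which is precisely why a procedural criterion is needed):

```python
# Reliability check, assuming the minimal Dab-consequences of Γ are
# given as an explicit list of sets of abnormalities.

def unreliable(minimal_dab_consequences):
    """U(Γ): the abnormalities occurring in some minimal Dab-consequence."""
    u = set()
    for dab in minimal_dab_consequences:
        u |= set(dab)
    return u

def finally_derivable(delta, minimal_dab_consequences):
    """A derived on the condition ∆ is finally derivable iff ∆ ∩ U(Γ) = ∅."""
    return set(delta).isdisjoint(unreliable(minimal_dab_consequences))
```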

In other words:

After obtaining A on the condition ∆,

we introduce the A-Goal [Dab(∆)]Dab(∆)

Next, if Dab(∆) is derived on the condition Θ (with ∆ ∩ Θ = ∅),

we introduce the X-Goal [Dab(Θ)]Dab(Θ),

which should be reached on the condition ∅.

Phase 1: try to derive A on a condition ∆

no success: Γ 0AL A

success: try to show that all members of ∆ are reliable, viz. move to

Phase 2: try to derive Dab(∆) on a condition Θ (with ∆ ∩ Θ = ∅)

no success: no member of ∆ is unreliable and hence Γ `AL A

success: move to

Phase 3: try to derive Dab(Θ) on the condition ∅

no success: some member of ∆ is unreliable

go back to phase 1 and try to derive A on a new condition ∆′

success: Dab(Θ) is a minimal Dab-consequence of Γ

go back to phase 2 and try to derive Dab(∆) on a new condition Θ′

(with ∆ ∩ Θ′ = ∅)
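The three phases can be summarized as a control loop. The sketch below is my own and purely schematic: search_A, search_dab and search_dab_free are hypothetical stand-ins for the prospective searches, which need not terminate, so the loop yields a criterion only insofar as these searches halt.

```python
# Schematic control flow for the procedural criterion. The search
# functions are hypothetical oracles for the prospective procedure:
#   search_A(tried): a condition ∆ on which A is derived (avoiding
#       conditions in `tried`), or None
#   search_dab(delta, tried): a condition Θ with ∆ ∩ Θ = ∅ on which
#       Dab(∆) is derived (avoiding `tried`), or None
#   search_dab_free(theta): True iff Dab(Θ) is derivable on the condition ∅

def procedural_criterion(search_A, search_dab, search_dab_free):
    tried_deltas = set()
    while True:
        delta = search_A(tried_deltas)               # Phase 1
        if delta is None:
            return False                             # Γ does not AL-derive A
        tried_thetas = set()
        while True:
            theta = search_dab(delta, tried_thetas)  # Phase 2
            if theta is None:
                return True                          # no member of ∆ unreliable
            if search_dab_free(theta):               # Phase 3
                tried_thetas.add(theta)              # minimal Dab-consequence:
                                                     # retry Phase 2 with a new Θ′
            else:
                tried_deltas.add(delta)              # some member of ∆ unreliable:
                break                                # back to Phase 1 with a new ∆′
```

When search_dab finds no Θ disjoint from ∆, the inner loop returns True, matching "no member of ∆ is unreliable and hence Γ ⊢AL A".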


5.6 Phrase the Criterion in Problem Solving Terms

(PM)

This leads to problems that contain questions of which only one direct

answer is sensible.


5.7 What If No Criterion Applies

Given the presupposition that abnormalities are false until and unless

proven otherwise, the derivability of A on a condition ∆ of which no

member is shown to be unreliable is a good reason to consider A as

derivable.

The more so as the block analysis shows that, as the proof proceeds,

one may obtain more insight into the premises (and cannot lose insight

into the premises).

If complete insight into the premises is reached, the proof becomes stable.

(Derivability at this stage = final derivability.)

Complete insight into the premises may be reached with respect to a

specific A.


6 Extensions, open problems, and the bright side of life

6.1 Premise Sets Originally Considered as Irrelevant

6.2 Questions Evoked by Steps in the Psp

6.3 Tests

6.4 Narrowing Down Suspicion: Conjectures

6.5 Wild Guesses (PM – contextual)

6.6 More On Combined Contextual Psps

6.7 Changing the Logic

6.8 The Bright Side of Life


6.1 Premise Sets Originally Considered as Irrelevant

A researcher often tries to solve a problem from a given theory and/or a

given data set and/or a given set of methodological dos and don'ts

(here often a given logic!), and later finds out that (s)he has to broaden

this background.

One tries to answer ?Pa, ∼Pa from a theory T . This fails, but the

prospective dynamics leads to the following situation:

T warrants that ?Pa, ∼Pa is answered by Pa if Qa is the case.

To find out whether Qa, one needs to rely on a theory T ′.

example: solving a physiological problem may require chemistry, etc.

Which theory has to be invoked will be revealed by the derived problems.

Result: premise set is extended.

(= simple, but provoked by the prospective dynamics)


6.2 Questions Evoked by Steps in the Psp

Consider the following fragment of an example from lecture 4:

1      (Pa ∧ ∼Qa) ∧ ∼Ra                     PREM        ∅
2      ∼Pb ∧ (Qb ∧ Rb)                      PREM        ∅
3      Pc ∧ Rc                              PREM        ∅
4      Qd ∧ ∼Pe                             PREM        ∅
· · ·
11L17  (∀x)(Px ⊃ ∼Qx)                       1; RC       Px ⊃ ∼Qx
12L17  ∼Qc                                  3, 11; RU   Px ⊃ ∼Qx
13L17  (∀x)(Rx ⊃ Qx)                        2; RC       Rx ⊃ Qx
14L17  Qc                                   3, 13; RU   Rx ⊃ Qx
15     (∃x)∼(Px ⊃ ∼Qx) ∨ (∃x)∼(Rx ⊃ Qx)    3; RU       ∅
16     (∃x)(Px ⊃ ∼Qx) ∧ (∃x)(Rx ⊃ Qx)      1, 2; RU    ∅
17     !(Px ⊃ ∼Qx) ∨ !(Rx ⊃ Qx)            15, 16; RU  ∅

line 17 evokes the question ?!(Px ⊃ ∼Qx), !(Rx ⊃ Qx)

As abnormalities are presumed to be false unless and until proven

otherwise, it is natural to consider first and foremost questions evoked

by Dab-formulas.

Abnormalities are seen as ‘abnormal’, ‘problematic’, . . .

Some other disjunctions also evoke sensible questions.

The fact that ∀A is derived on the condition ∆ evokes the question

?∀A, Dab(∆).

This implies ?∀A, ∼∀A, which implies questions about instances of A,

as well as ?Dab(∆), ∼Dab(∆) (which has only one sensible target).


6.3 Tests

Often a question can be turned into a test, i.e. can be answered by

observation or experiment.

Thus the question ?!(Px ⊃ ∼Qx), !(Rx ⊃ Qx) erotetically implies

?Qc, ∼Qc.

In view of the premises, this is an extremely sensible question: any

answer will at once lead to one of the disjuncts of

!(Px ⊃ ∼Qx)∨!(Rx ⊃ Qx)

and hence ‘free the other disjunct from suspicion’.

If the answer is, for example, Qc,

then (∃x)(Px ⊃ ∼Qx) ∧ (∃x)∼(Px ⊃ ∼Qx) is derivable and hence lines

13 and 14 are unmarked when the outcome of the test is added as a

new premise.


1      (Pa ∧ ∼Qa) ∧ ∼Ra                     PREM        ∅
2      ∼Pb ∧ (Qb ∧ Rb)                      PREM        ∅
3      Pc ∧ Rc                              PREM        ∅
4      Qd ∧ ∼Pe                             PREM        ∅
· · ·
11L17  (∀x)(Px ⊃ ∼Qx)                       1; RC       Px ⊃ ∼Qx
12L17  ∼Qc                                  3, 11; RU   Px ⊃ ∼Qx
13     (∀x)(Rx ⊃ Qx)                        2; RC       Rx ⊃ Qx
14     Qc                                   3, 13; RU   Rx ⊃ Qx
15     (∃x)∼(Px ⊃ ∼Qx) ∨ (∃x)∼(Rx ⊃ Qx)    3; RU       ∅
16     (∃x)(Px ⊃ ∼Qx) ∧ (∃x)(Rx ⊃ Qx)      1, 2; RU    ∅
17     !(Px ⊃ ∼Qx) ∨ !(Rx ⊃ Qx)            15, 16; RU  ∅
18     Qc                                   New         ∅
19     (∃x)(Px ⊃ ∼Qx) ∧ (∃x)∼(Px ⊃ ∼Qx)    1, 2; RU    ∅

The formula of 19 is the only relevant minimal Dab-formula in the

proof.

Psps evoke tests that lead to the rejection of some generalizations and

the derivability of others.


6.4 Narrowing Down Suspicion: Conjectures

A researcher may have reasons (personal constraints, . . . —see lecture

1) to deny certain abnormalities. For example ∼!(Rx ⊃ Qx).

∼(∃A ∧ ∃∼A) is CL-equivalent to ∀A ∨ ∀∼A.

Abnormalities should be denied in a defeasible way.

Defeasible conjectures are better prioritized: ♦A1, ♦♦A1, . . .

These are handled by a well-studied adaptive logic that first avoids

(whenever possible) ∼A ∧ ♦A, next avoids (whenever possible)

∼A ∧ ♦♦A, etc. (see 4.8)

Tests and conjectures lead to results that are only apparently the same

(as it should be!)


6.6 More On Combined Contextual Psps

It is one thing to answer a why-question from a theory T (Hintikka),

and another thing to find out (relying on other knowledge) whether a

potential explanation is indeed a true explanation.

Step 1: find a potential explanation by the prospective dynamics relying

on T .

Step 2: find out whether this explanation is true.


6.7 Changing the Logic

Example:

If trying to explain Pa from T fails,

no suggested extension of the data offers a way out,

and no other available theory offers a way out,

one may move to IL to extend T by some generalization that, together

with the data, explains Pa.

= move from CL to IL


6.8 The Bright Side of Life

The dynamics of the programme suggests that a pragmatic approach

(solve what is possible and hope for more) is justified.

Further extensions are apparently within reach.

Also outside scientific problem solving.

Lots of applications still have to be worked out. Similarly for some

technical stuff.
