
Mathematical machines and thinking — elementary problems and definitions of artificial intelligence

Przemysław Klęsk ([email protected])

Department of Methods of Artificial Intelligence and Applied Mathematics

Contents

1 Examples of problems
      Searching game trees
      Searching graphs
      Optimization problems
      Strategy problems
      Pattern recognition problems
      Data mining problems
      Control, regulation problems
      Artificial life problems

2 Can machines think? Turing’s views

3 Minsky’s remarks on machines and intelligence

4 Contemporary research areas in AI


Examples of problems: Searching game trees

Games

Commonly, two-person games are considered, such as chess, checkers, Go, . . . , where players have conflicting interests and where the rules of the game are clearly defined.

Problem of game tree search

Given a game position (in particular, an initial position), the task is to derive numeric scores for all possible moves of the player whose turn it is to play. A score should represent the exact or probable payoff for the player if he chooses the move, usually assuming best counter-play by the opponent.
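
As a sketch of how such scores can be derived, the classical minimax procedure can be applied down to a fixed depth. Below is a minimal Python sketch; the functions moves, apply_move, is_terminal and evaluate are hypothetical placeholders that a concrete game implementation would have to supply.

    # A minimal minimax sketch with a depth limit; `moves`, `apply_move`,
    # `is_terminal` and `evaluate` are placeholders for a concrete game.
    def minimax(position, depth, maximizing):
        # Return a score for `position`, assuming best counter-play.
        if depth == 0 or is_terminal(position):
            return evaluate(position)  # exact or heuristic payoff
        scores = [minimax(apply_move(position, m), depth - 1, not maximizing)
                  for m in moves(position)]
        return max(scores) if maximizing else min(scores)

    def score_moves(position, depth):
        # Score every move of the player to move, as stated in the task above.
        return {m: minimax(apply_move(position, m), depth - 1, False)
                for m in moves(position)}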




Examples of problems: Searching graphs

Puzzles, graphs, mazes (searching)

Sudoku (puzzle −→ solution):

    1*5 *37 **8        195 237 648
    *7* *** ***        674 859 312
    **8 1** ***        328 164 759
    **3 *7* **1        853 976 421
    74* *1* *63  −→    749 512 863
    2** *4* 9**        216 348 975
    *** **5 1**        962 785 134
    *** *** *8*        537 491 286
    4** 62* 5*7        481 623 597

Minimal sudoku (4 × 4): a solved grid and exemplary puzzles with only four clues each:

    solved:      puzzles:
    12 34        ** **     ** **     ** **
    34 12        ** 12     ** 12     ** 12
    23 41        ** **     ** **     ** *1
    41 23        4* *3  ,  41 **  ,  4* **  , . . .
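
Puzzles of this kind can be attacked by a straightforward depth-first (backtracking) search over cell assignments. A minimal Python sketch, assuming the 9 × 9 grid is given as a list of 81 characters with '*' marking empty cells:

    def solve(grid, pos=0):
        # Depth-first search: try digits for the first empty cell, backtrack on dead ends.
        if pos == 81:
            return grid                          # all cells filled consistently
        if grid[pos] != "*":
            return solve(grid, pos + 1)          # cell given in the puzzle
        row, col = divmod(pos, 9)
        box = 27 * (row // 3) + 3 * (col // 3)   # index of the top-left cell of the 3x3 box
        used = {grid[9 * row + c] for c in range(9)}                          # row
        used |= {grid[9 * r + col] for r in range(9)}                         # column
        used |= {grid[box + 9 * r + c] for r in range(3) for c in range(3)}   # box
        for digit in "123456789":
            if digit not in used:
                grid[pos] = digit
                if solve(grid, pos + 1):
                    return grid
        grid[pos] = "*"                          # undo and backtrack
        return None

Called e.g. as solve(list(puzzle)) for an 81-character puzzle string like the one above.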



n-queens problem

On an n × n board the goal is to set up n queens in such a way that they do not attack each other.

Example solution for n = 8: (board figure omitted; the sketch below prints one such solution).
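
A minimal backtracking sketch in Python: queens are placed row by row, and the sets cols, diag1, diag2 record attacked columns and diagonals (an illustration, not the only possible formulation).

    def queens(n, row=0, cols=frozenset(), diag1=frozenset(), diag2=frozenset(), placed=()):
        # Yield solutions as tuples: placed[row] is the column of the queen in that row.
        if row == n:
            yield placed
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                      # square attacked by an earlier queen
            yield from queens(n, row + 1, cols | {col}, diag1 | {row - col},
                              diag2 | {row + col}, placed + (col,))

    print(next(queens(8)))                    # prints one solution for n = 8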



Sliding puzzle (the (n² − 1)-puzzle)

Mazes, movement of players (agents) in computer games



Problem of graph search

Given an initial node in a graph (or in a tree of states), the task is to find a path (if one exists) to the goal node. Additionally, if specified, the path should be the shortest.
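
For paths measured by the number of edges, breadth-first search already solves the "shortest" variant. A minimal Python sketch, assuming the graph is given as an adjacency dictionary:

    from collections import deque

    def bfs_shortest_path(graph, start, goal):
        # Breadth-first search; returns a shortest path (fewest edges) or None.
        parents = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:                  # walk back through parents
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for neighbour in graph.get(node, ()):
                if neighbour not in parents:  # not visited yet
                    parents[neighbour] = node
                    queue.append(neighbour)
        return None

For weighted graphs one would switch to Dijkstra's algorithm or A*; the scheme above is the simplest instance of the stated problem.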



Examples of problems: Optimization problems

Discrete optimization problems

Discrete knapsack problem

Given a set of items A = {(v1, c1), (v2, c2), . . . , (vn, cn)}, each described by two quantities, a value vi and a capacity (or cost) ci, the task is to find a subset A* of A such that

    Σ_{(vi, ci) ∈ A*} vi −→ max    and    Σ_{(vi, ci) ∈ A*} ci ≤ C,

where C is the maximum capacity of the knapsack (the constraint).
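
For integer costs the problem admits the classical dynamic-programming solution. A minimal Python sketch (an illustration, not a reference implementation):

    def knapsack(items, C):
        # items: list of (value, cost) pairs with integer costs; C: capacity.
        # best[c] = maximum total value achievable with total cost <= c.
        best = [0] * (C + 1)
        for value, cost in items:
            for c in range(C, cost - 1, -1):  # downwards, so each item is used at most once
                best[c] = max(best[c], best[c - cost] + value)
        return best[C]

    # Example: knapsack([(60, 1), (100, 2), (120, 3)], 5) == 220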



Traveling Salesman Problem (TSP)

On a map a set of n cities is given. Starting from a fixed origin city, one should find the shortest path going through all the cities (visiting each at most once) and returning to the origin.
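
Since the number of closed tours grows as (n − 1)!, exhaustive search is feasible only for very small n, but it states the problem precisely. A minimal Python sketch, assuming a distance matrix dist:

    from itertools import permutations

    def tsp_brute_force(dist):
        # dist: n x n matrix of pairwise distances; returns (length, tour).
        n = len(dist)
        best_length, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):
            tour = (0,) + perm + (0,)         # start and end at the origin city 0
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_length:
                best_length, best_tour = length, tour
        return best_length, best_tour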



Jeep problem

A jeep in a desert has n containers of fuel at its disposal. Each container holds 1 unit of fuel. The fuel consumption is 1 : 1, i.e. 1 unit of fuel per 1 unit of distance. The goal of the jeep is to maximize the distance Dn it can travel into the desert, obeying the following rules. The jeep can fill up its tank with at most 1 unit of fuel and must not carry any additional fuel with it. The jeep can depart from the base and leave some fuel along the way, then go back to the base using the fuel remaining in its tank. At the base, the jeep can fill up and depart again. When the jeep reaches some fuel (left earlier), it can use it to fill up its tank.
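
The classical solution of this one-way ("exploration") variant places fuel caches so that Dn = 1 + 1/3 + 1/5 + . . . + 1/(2n − 1); since this series diverges, any distance can be reached given enough containers. A minimal sketch of the formula:

    def jeep_distance(n):
        # Maximum penetration distance with n containers of fuel
        # in the classical exploration variant: sum of 1/(2k - 1).
        return sum(1.0 / (2 * k - 1) for k in range(1, n + 1))

    # jeep_distance(1) == 1.0; jeep_distance(2) == 1 + 1/3 ≈ 1.33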



Examples of problems: Strategy problems

Prisoner’s dilemma

The police have arrested two suspects. Each remains in an isolated room. The police do not have sufficient evidence, but try to convince each of the suspects to testify and betray the co-suspect in exchange for a light sentence. Each suspect is confronted with the following table of penalties (sentences) in the game:

                     A stays quiet                   A betrays
    B stays quiet    A and B sentenced to 1 year     A free to go; B sentenced to 5 years
    B betrays        A sentenced to 5 years;         A and B sentenced to 4 years
                     B free to go



Iterated prisoner’s dilemma

What strategy should one use when a series of single prisoner’s dilemma games (e.g. n games) is to be played, in order to minimize the total sentence? After each game both players are told its result.

Can the number of games be known in advance?

Note that after n − 1 games are played through, the last, n-th game reduces to an ordinary prisoner’s dilemma. By induction the same happens with games n − 1, n − 2, . . . . Unfortunately, this argument and the use of the dominant strategy do not lead to a minimization of the total penalty.
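
The effect is easy to reproduce in simulation. A minimal Python sketch using the sentence table above and two illustrative strategies (the strategy names are this sketch’s own):

    SENTENCE = {("quiet", "quiet"): (1, 1), ("quiet", "betray"): (5, 0),
                ("betray", "quiet"): (0, 5), ("betray", "betray"): (4, 4)}

    def tit_for_tat(opponent_last):
        return opponent_last or "quiet"       # stay quiet first, then mirror

    def always_betray(opponent_last):
        return "betray"                       # the dominant one-shot strategy

    def play(strategy_a, strategy_b, n):
        # Play n rounds; both players learn the previous result. Lower totals are better.
        last_a = last_b = None
        total_a = total_b = 0
        for _ in range(n):
            move_a, move_b = strategy_a(last_b), strategy_b(last_a)
            years_a, years_b = SENTENCE[(move_a, move_b)]
            total_a += years_a
            total_b += years_b
            last_a, last_b = move_a, move_b
        return total_a, total_b

    # play(tit_for_tat, always_betray, 10) -> (41, 36): betrayal "wins" here,
    # although two tit-for-tat players would get only (10, 10).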



Examples of problems: Pattern recognition problems

Pattern recognition

Problem (in general)

We are given a set of observations (examples from the past), where each observation is described by a certain number of variables. One variable is distinguished as the decision variable and has a finite set of values {1, 2, . . . , K} (classes). The goal is to build a classifier, i.e. to find a function which assigns observations to classes with as few mistakes as possible. The classifier should approximate the training data well but, more importantly, it should generalize well, i.e. return correct answers for new observations unseen within the training data.
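
One of the simplest classifiers illustrating this definition is the nearest-neighbour rule: a new observation receives the class of the closest training example. A minimal Python sketch (illustrative only; it fits the training data perfectly, while its generalization depends entirely on the data):

    import math

    def nearest_neighbour(train, x):
        # train: list of (features, cls) pairs; x: a feature tuple.
        closest = min(train, key=lambda pair: math.dist(pair[0], x))
        return closest[1]                     # class of the nearest example

    train = [((0.0, 0.0), 1), ((1.0, 0.0), 1), ((5.0, 5.0), 2)]
    print(nearest_neighbour(train, (4.0, 4.5)))   # -> 2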



Examples

anti-spam filter (regular mail vs. spam mail),

automatic diagnosis (diseased / healthy, with risk of cancer / without risk, etc.),

creditworthiness assessment (credible client / non-credible client),

optical character recognition (OCR),

object detection / recognition / tracking (faces, vehicles, road signs, military objects, etc.) in images or video sequences.



Temporal pattern recognition

Examples

handwriting recognition (whole sequences, rather than individual characters),

speech recognition,

gesture recognition,

musical score following,

document authorship recognition,

DNA modeling.



Examples of problems: Data mining problems

Data mining

Examples

rule induction in shopping data,

rule induction in the behaviour of social media users (e.g. next-click prediction),

finding articles based on user preferences,

sport / market events prediction.



Examples of problems: Control, regulation problems


Examples

Inverted pendulum,

Rule-based house temperature controller,

Automatic crane controller for ship unloading,

Automatic medication feeder,

Image stabilizer for digital video camera,

. . .



Examples of problems: Artificial life problems

Artificial life

Examples

Cellular automata, [a]

Conway’s “Game of Life”,

Simulation of worlds with agents (living creatures) having certain defined attributes: senses, hunger, movement, aggression, etc. [b]

[a] On YouTube an interesting lecture by Stephen Wolfram can be found.
[b] Master thesis by M. Suchorzewski, 2005, in WI’s library.



Conway’s “Game of Life”

1 If a full cell has 0 or 1 full neighbours, then it dies (loneliness).

2 If a full cell has 4 or more full neighbours, then it dies (overcrowding).

3 If a full cell has 2 or 3 full neighbours, then it remains full.

4 If an empty cell has exactly 3 full neighbours, then it becomes full.
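
These four rules translate directly into one synchronous update step. A minimal Python sketch representing the board as a set of coordinates of full cells:

    def neighbours(cell):
        x, y = cell
        return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)}

    def step(full):
        # Count full neighbours of every candidate cell, then apply rules 1-4.
        counts = {}
        for cell in full:
            for n in neighbours(cell):
                counts[n] = counts.get(n, 0) + 1
        return {cell for cell, k in counts.items()
                if k == 3 or (k == 2 and cell in full)}

    # A horizontal "blinker" becomes vertical: {(1, -1), (1, 0), (1, 1)}.
    print(step({(0, 0), (1, 0), (2, 0)}))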



Can machines think? Turing’s views

Can machines think?

1 No, if thinking is defined as an activity related only to human beings. Then any such behaviour by machines can only be called similar to thinking.

2 No, if we assume that in the nature of thinking itself there is something mysterious or mystical.

3 Yes, if we assume that this problem ought to be solved by means of experiment and observation, by comparing the machine’s behaviour to those human activities for which the term thinking is typically applied.



Paper: “Computing machinery and intelligence” (A.M. Turing, 1950)

Turing proposes to consider the problem: “Can machines think?”.

This requires defining machine and to think. The definitions should be good enough to encapsulate the common understanding of these words. Difficulties: imprecise or ambiguous definitions, or statistical ones (if built by surveys [1]).

Turing replaces the original problem with a less ambiguous one: the imitation game.

[1] Danger: the answer to the posed problem would then also be statistical.



Imitation game

A man A and a woman B are in a room separate from an interrogator C.

C poses questions to and receives responses from the players, labelled X and Y, and tries to decide whether X = A and Y = B, or rather X = B and Y = A.

The goal of A is to mislead C, so that C identifies him wrongly.

Questions are put via a terminal, which excludes the possibility of identification based on voice, smell, etc.



The interrogator might ask: “Will X please tell me the length of his or her hair?”. Suppose X is actually A; then A’s answer might be: “My hair is shingled, and the longest strands are about nine inches long.”

The goal of B is to help the interrogator. Probably the best strategy for her is simply to tell the truth. She might add “I am the woman, don’t listen to him!”, but obviously A can say the same.



Imitation game — Turing’s test

What happens if A is replaced by a machine in the game? Will the interrogator be able to make the correct identification as frequently as in the case of human players?

Let these questions replace the original problem: “Can machines think?”



Imitation game, exemplary conversation

Q: Please write me a sonnet on the subject of the Forth Bridge?

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (After a pause of 30 seconds) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.



Imitation game, critique by Turing himself

Plus: Strong separation between body and intellect. Artificial skin (if it existed) would not make a machine dressed in it more human.

Minus: The odds are weighted too heavily against the machine. Think of the opposite game, in which a human tries to pretend to be a machine and is immediately given away by slowness and inaccuracy in arithmetic.

Minus: May not machines carry out something which ought to be described as thinking but which is very different from what a man does? (A very strong objection.)

Plus: Nevertheless, if a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. [2]

[2] Turing predicted that within 50 years computers would have a memory of order ≈ 10^9 bits and would be able to mislead about 30% of interrogators.


Contrary views to Turing’s

The Theological Objection

Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

In the scientific sense no one should be bothered by this objection! In theological terms the following remarks can be made.

The argument would be more convincing if animals were classed with men, for there is a greater difference between the typical animate and the inanimate than there is between man and the other animals.

Any orthodox view becomes clearer if we consider how it might appear to a member of some other religious community. How do Christians regard the Moslem view that women have no souls? Why did Christians accept the Copernican theory at last?

The objection implies a serious restriction of the omnipotence of the Almighty. There are certain things that He cannot do, such as making one equal to two; but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? All these turn out to be dogmatic speculations . . .



The “Heads in the Sand” Objection

The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.

Also scientifically ridiculous.

Connected to the theological objection.

We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position.

This objection is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others do, and are more inclined to base their belief in the superiority of Man on this power.



The Mathematical Objection

Based on certain results from mathematical logic, there exist limits to the possibilities of discrete-state machines. One such result is Gödel’s theorem (1931): in any consistent logical system powerful enough to express arithmetic, one can construct statements which cannot be assigned a true or false value (cannot be proved or disproved within the system). [a]

[a] E.g.: the statement “What I am saying now is false.”

Questions which cannot be answered by one machine may be satisfactorily answered by another (in another formal system).

Although limits of all machines have been proved, it is often claimed (without proof) that no such limits apply to humans.

Any time a Gödel-like question is posed to a machine, the given answer must be wrong. This gives us an illusory feeling of superiority. Yet people make mistakes in answering many far more trivial questions.

Those who hold to the mathematical argument would mostly be willing to accept the imitation game as a basis for discussion. Those who believe in the two previous objections would probably not be interested in any criteria.



The Argument from Consciousness

Prof. Jefferson (1949): “(. . . ) Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain, that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

According to the most extreme form of this view, the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice.

Likewise, according to this view, the only way to know that a man thinks is to be that particular man. This is in fact the solipsist point of view. It may be the most logical view to hold, but it makes communication of ideas difficult. A is liable to believe “A thinks but B does not” whilst B believes “B thinks but A does not”; instead of arguing over this point, it is usual to have the polite convention that everyone thinks.

Prof. Jefferson would probably be willing to accept the imitation game as a test, rather than the extreme argument above.



Arguments from Various Disabilities

The form: “I grant you that you can make machines do all the things you have mentioned, but you will never be able to make one do X.”. Numerous features X are suggested: be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

No support is usually offered for these statements; they come from false induction. A man has seen thousands of machines in his lifetime, and from what he sees he draws a number of general conclusions: machines are ugly, each is designed for a very limited purpose, and when required for a different purpose they are useless.

Many of these limitations are associated with the very small storage capacity (memory) of most machines.

Others are a disguised version of the objection from consciousness.

The impossibility of making mistakes is clearly false. A machine playing the imitation game must make mistakes (planned and random) in order to be misidentified.



Lady Lovelace’s Objection

Lady Lovelace (1842): “(. . . ) The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform (. . . )”. An additional meaning of this objection is that a designer of an intelligent system must be capable of predicting all the consequences of such a system: the machine cannot surprise us.

The assertion that machines can only do what they are designed to do is clearly right. But it is no reason for drawing false conclusions from it.

A human can create, compose, and learn because the biological program he is equipped with has functions like adaptation and the ability to change itself (the program), e.g. as a result of observational interaction with the environment.

It is clearly false that a designer is able to predict all the consequences of a programme, even the most remote ones (e.g. after billions of operations), by means of the device under his skull. Examples: artificial life, Conway’s Game of Life, chaos-theory programmes, chess programmes surprising the grand masters who designed them.



The Argument from Extrasensory Perception

If one acknowledges the (statistically confirmed) existence of telepathy, then one may consider the following scenario: let us play the imitation game with, as players, a machine and a human having strong telepathic skills. The interrogator could then ask e.g. “What colour is the card I am holding?”, and the human would answer correctly more frequently than the machine.

According to Turing this is a strong argument. Telepathy, in general, produces difficulties for many scientific approaches.

One solution is to strengthen the imitation game with a restriction making the room “telepathy-proof” (in a similar sense to sound-proof rooms). This is compliant with Turing’s postulate of a strong body-mind separation in the experiment.



Turing’s chess test

Version one

A human plays a chess game against an unknown opponent and has to decide whether it is a man or a machine.

Version two

A human looks at a finished chess game played by two opponents and has to identify each of them as human or machine.

Garry Kasparov passes Turing’s chess test in version two with a success ratio of over 80%.



Minsky’s remarks on machines and intelligence

Minsky’s remarks

The paper “Steps Toward Artificial Intelligence” (Minsky, 1961).

Minsky agrees with Turing’s views.

There exists no unified and generally accepted theory of intelligence.

Five main areas can be named within AI: search, pattern recognition, learning, planning and induction.



On search problems

If for a given problem we know a way to check the correctness of a candidate solution, then we are always able to browse through multiple candidate solutions.

From a certain point of view all search problems may seem trivial. E.g. think of the chess game tree. It is certainly finite! Each terminal node (leaf) is either a win for white or black, or a draw. By propagating these values upwards (the minimax procedure), the initial node is also assigned one of the three values. In this sense chess is as uninteresting as tic-tac-toe.



Usually it is not difficult to program an exhaustive search procedure, but for every complex problem it is too inefficient to be practically applied. What good comes from the fact that we have a programme which will not finish its computation within our lifetime, or even our civilization’s lifetime?

Samuel (1959) estimates: checkers has approx. 10^40 states, chess approx. 10^120 states. Let us generously assign 1 µs for each tree node to be analyzed by a computer, and estimate the number of centuries needed to analyze the whole game tree for checkers:

    10^40 / (10^6 · 60 · 60 · 24 · 365.25 · 100)  >  10^40 / (10^6 · 10^2 · 10^2 · 10^2 · 10^3 · 10^2)  =  10^40 / 10^17  =  10^23,

where the left denominator (the number of µs in 1 century) is bounded above by 10^17.
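
The same bound can be checked directly (a two-line computation, not anything from the original paper):

    microseconds_per_century = 10**6 * 60 * 60 * 24 * 365.25 * 100
    print(10**40 / microseconds_per_century)   # ~3.2e24 centuries, indeed above 10^23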



Therefore, technological improvements of computers do not lead to the solution of all problems.

What is needed more are wise algorithms, directed at searching the more promising states first and discarding the less promising ones.

Every technique (or heuristic) which can potentially reduce the search is valuable.



We should believe that sooner or later we shall be able to create complex programmes, equipped with combinations of heuristics, recurrences, image processing techniques, etc. One should not try to see true intelligence in them. It is rather a matter of aesthetics than of science.

Every machine capable of ideal, 100% introspection (self-awareness) must conclude that it is only a machine.

The introduction of a body/mind duality on the grounds of psychology, sociology, etc. is actually implied only by the fact that our currently known mechanical model of the brain is not complete.

At the low, mechanical (or digital-like) level all we have is simple rules: “if . . . then . . . ”; it is hard to be excited by this. Similarly in mathematics: as soon as the proof of a theorem becomes understood, the content of the theorem seems trivial.


Contemporary research areas in AI

Graph and game tree search algorithms

Artificial neural networks

Genetic and evolutionary algorithms

Fuzzy logic and control

Expert systems

Data mining

Ant-colony algorithms

Reinforcement learning

Artificial life

Statistical Learning Theory (SLT)
