
Theory of Knowledge

A Background Reading

Compiled by: June Jaydip


Agnosticism

Agnosticism (Greek: α- a-, without + γνώσις gnōsis, knowledge; after Gnosticism) is the philosophical view that the truth value of certain claims — particularly metaphysical claims regarding theology, afterlife or the existence of God, gods, deities, or even ultimate reality — is unknown or, depending on the form of agnosticism, inherently impossible to prove or disprove.

Demographic research services normally list agnostics in the same category as atheists and non-religious people[1], using 'agnostic' in the sense of 'noncommittal'[2]. However, this can be misleading given the existence of agnostic theists, who identify themselves as both agnostics in the original sense and followers of a particular religion.

Philosophers and thinkers who have written about agnosticism include Thomas Henry Huxley, Robert G. Ingersoll, and Bertrand Russell. Religious scholars who have written about agnosticism include Peter Kreeft, Blaise Pascal, and Joseph Ratzinger, who was later elected Pope Benedict XVI.

Etymology

"Agnostic" was introduced by Thomas Henry Huxley in 1869 to describe his philosophy which rejects Gnosticism, by which he meant not simply the early 1st millennium religious group, but all claims to spiritual or mystical knowledge.[2]

Early Christian church leaders used the Greek word gnosis (knowledge) to describe "spiritual knowledge." Agnosticism is not to be confused with religious views opposing the doctrine of gnosis and Gnosticism—these are religious concepts that are not generally related to agnosticism. Huxley used the term in a broad sense.

In recent years, use of the word to mean "not knowable" is apparent in scientific literature in psychology and neuroscience,[3] and with a meaning close to "independent", in technical and marketing literature, e.g. "platform agnostic" or "hardware agnostic".

Qualifying agnosticism

Enlightenment philosopher David Hume contended that meaningful statements about the universe are always qualified by some degree of doubt.[4] The fallibility of human beings means that they cannot obtain absolute certainty except in trivial cases where a statement is true by definition (as in "all bachelors are unmarried" or "all triangles have three angles"). Any rational statement that asserts a factual claim about the universe and begins "I believe that..." is simply shorthand for "Based on my knowledge, understanding, and interpretation of the prevailing evidence, I tentatively believe that...". For instance, when one says, "I believe that Lee Harvey Oswald shot John F. Kennedy," one is not asserting an absolute truth but a tentative belief based on an interpretation of the assembled evidence. Even someone who sets an alarm clock for the following day, believing that the sun will rise the next morning, holds that belief tentatively, tempered by a small but finite degree of doubt (the sun might be destroyed, the earth might be shattered by collision with a rogue asteroid, or the person might die before the alarm goes off).

Many mainstream believers in the West embrace an agnostic stance. As noted below, for instance, Roman Catholic dogma about the nature of God contains many strictures of agnosticism. An agnostic who believes in God despairs of ever fully comprehending what it is in which he believes, but some believing agnostics assert that this very absurdity strengthens their belief rather than weakening it.[citation needed]

The Catholic Church sees merit in examining what it calls Partial Agnosticism, specifically those systems that "do not aim at constructing a complete philosophy of the Unknowable, but at excluding special kinds of truth, notably religious, from the domain of knowledge."[5] However, the Church is historically opposed to a full denial of the ability of human reason to know God. The Council of the Vatican, relying on biblical scripture, declares that "God, the beginning and end of all, can, by the natural light of human reason, be known with certainty from the works of creation" (Const. De Fide, II, De Rev.).[6]

Types of agnosticism

Agnosticism can be subdivided into several subcategories. Recently suggested variations include:

• Strong agnosticism (also called "hard agnosticism," "closed agnosticism," "strict agnosticism," or "absolute agnosticism") refers to the view that the question of the existence or nonexistence of God or gods and the nature of ultimate reality is unknowable by reason of our natural inability to verify any experience with anything but another subjective experience. A strong agnostic would say, "I don't know whether God exists or not, and neither do you."

• Weak agnosticism (also called soft agnosticism, open agnosticism, empirical agnosticism, or temporal agnosticism)—the view that the existence or nonexistence of any deity is currently unknown but is not necessarily unknowable, and therefore one will withhold judgment until, or unless, more evidence becomes available. A weak agnostic would say, "I don't know whether God exists or not, but maybe you do."

• Apathetic agnosticism (also called Pragmatic agnosticism)—the view that there is no proof of either the existence or nonexistence of any deity, but since any deity that may exist appears unconcerned for the universe or the welfare of its inhabitants, the question is largely academic anyway.[citation needed]

• Agnostic theism (also called religious agnosticism)—the view of those who do not claim to know of the existence of any deity, but still believe in such an existence. (See Knowledge vs. Beliefs)

• Agnostic atheism—the view of those who do not know of the existence or nonexistence of a deity, and do not believe in any.[7]

• Ignosticism—the view that a coherent definition of God must be put forward before the question of the existence of God can be meaningfully discussed. If the chosen definition isn't coherent, the ignostic holds the noncognitivist view that the existence of God is meaningless or empirically untestable. A.J. Ayer, Theodore Drange, and other philosophers see both atheism and agnosticism as incompatible with ignosticism on the grounds that atheism and agnosticism accept "God exists" as a meaningful proposition which can be argued for or against.

Famous agnostic thinkers

Among the most famous agnostics (in the original sense) have been Thomas Henry Huxley, Robert G. Ingersoll and Bertrand Russell.

Thomas Henry Huxley

Agnostic views are as old as philosophical skepticism, but the terms agnostic and agnosticism were created by Huxley to sum up his thoughts on contemporary developments of metaphysics about the "unconditioned" (Hamilton) and the "unknowable" (Herbert Spencer). It is important, therefore, to discover Huxley's own views on the matter. Though Huxley began to use the term "agnostic" in 1869, his opinions had taken shape some time before that date. In a letter of September 23, 1860, to Charles Kingsley, Huxley discussed his views extensively:

I neither affirm nor deny the immortality of man. I see no reason for believing it, but, on the other hand, I have no means of disproving it. I have no a priori objections to the doctrine. No man who has to deal daily and hourly with nature can trouble himself about a priori difficulties. Give me such evidence as would justify me in believing in anything else, and I will believe that.


Why should I not? It is not half so wonderful as the conservation of force or the indestructibility of matter... It is no use to talk to me of analogies and probabilities. I know what I mean when I say I believe in the law of the inverse squares, and I will not rest my life and my hopes upon weaker convictions... That my personality is the surest thing I know may be true. But the attempt to conceive what it is leads me into mere verbal subtleties. I have champed up all that chaff about the ego and the non-ego, noumena and phenomena, and all the rest of it, too often not to know that in attempting even to think of these questions, the human intellect flounders at once out of its depth.

And again, to the same correspondent, May 6, 1863:

I have never had the least sympathy with the a priori reasons against orthodoxy, and I have by nature and disposition the greatest possible antipathy to all the atheistic and infidel school. Nevertheless I know that I am, in spite of myself, exactly what the Christian would call, and, so far as I can see, is justified in calling, atheist and infidel. I cannot see one shadow or tittle of evidence that the great unknown underlying the phenomenon of the universe stands to us in the relation of a Father [who] loves us and cares for us as Christianity asserts. So with regard to the other great Christian dogmas, immortality of soul and future state of rewards and punishments, what possible objection can I—who am compelled perforce to believe in the immortality of what we call Matter and Force, and in a very unmistakable present state of rewards and punishments for our deeds—have to these doctrines? Give me a scintilla of evidence, and I am ready to jump at them.

Of the origin of the name agnostic to describe this attitude, Huxley gave the following account:[8]

When I reached intellectual maturity and began to ask myself whether I was an atheist, a theist, or a pantheist; a materialist or an idealist; Christian or a freethinker; I found that the more I learned and reflected, the less ready was the answer; until, at last, I came to the conclusion that I had neither art nor part with any of these denominations, except the last. The one thing in which most of these good people were agreed was the one thing in which I differed from them. They were quite sure they had attained a certain "gnosis,"–had, more or less successfully, solved the problem of existence; while I was quite sure I had not, and had a pretty strong conviction that the problem was insoluble. So I took thought, and invented what I conceived to be the appropriate title of "agnostic." It came into my head as suggestively antithetic to the "gnostic" of Church history, who professed to know so much about the very things of which I was ignorant. To my great satisfaction the term took.

Huxley's agnosticism is believed to be a natural consequence of the intellectual and philosophical conditions of the 1860s, when clerical intolerance was trying to suppress scientific discoveries which appeared to clash with a literal reading of the Book of Genesis and other established Jewish and Christian doctrines. Agnosticism should not, however, be confused with natural theology, deism, pantheism, or other science-positive forms of theism.

By way of clarification, Huxley states, "In matters of the intellect, follow your reason as far as it will take you, without regard to any other consideration. And negatively: In matters of the intellect, do not pretend that conclusions are certain which are not demonstrated or demonstrable" (Huxley, Agnosticism, 1889). While A. W. Momerie has noted that this is nothing but a definition of honesty, Huxley's usual definition goes beyond mere honesty to insist that these metaphysical issues are fundamentally unknowable.


Robert G. Ingersoll

Robert G. Ingersoll, an Illinois lawyer and politician who evolved into a well-known and sought-after orator in 19th century America, has been referred to as the "Great Agnostic."

In an 1896 lecture titled Why I Am An Agnostic, Ingersoll related why he was an agnostic:

Is there a supernatural power—an arbitrary mind—an enthroned God—a supreme will that sways the tides and currents of the world—to which all causes bow? I do not deny. I do not know—but I do not believe. I believe that the natural is supreme—that from the infinite chain no link can be lost or broken—that there is no supernatural power that can answer prayer—no power that worship can persuade or change—no power that cares for man. I believe that with infinite arms Nature embraces the all—that there is no interference—no chance—that behind every event are the necessary and countless causes, and that beyond every event will be and must be the necessary and countless effects. Is there a God? I do not know. Is man immortal? I do not know. One thing I do know, and that is, that neither hope, nor fear, belief, nor denial, can change the fact. It is as it is, and it will be as it must be.

In the conclusion of the speech he simply sums up the agnostic position as:

We can be as honest as we are ignorant. If we are, when asked what is beyond the horizon of the known, we must say that we do not know.

Bertrand Russell

Bertrand Russell's pamphlet, Why I Am Not a Christian, based on a speech delivered in 1927 and later included in a book of the same title, is considered a classic statement of agnosticism. The essay briefly lays out Russell’s objections to some of the arguments for the existence of God before discussing his moral objections to Christian teachings. He then calls upon his readers to "stand on their own two feet and look fair and square at the world," with a "fearless attitude and a free intelligence."

In 1939, Russell gave a lecture on The existence and nature of God, in which he characterized himself as an agnostic. He said:

The existence and nature of God is a subject of which I can discuss only half. If one arrives at a negative conclusion concerning the first part of the question, the second part of the question does not arise; and my position, as you may have gathered, is a negative one on this matter.[9]

However, later in the same lecture, discussing modern non-anthropomorphic concepts of God, Russell states:

That sort of God is, I think, not one that can actually be disproved, as I think the omnipotent and benevolent creator can.[10]


In Russell's 1947 pamphlet, Am I An Atheist Or An Agnostic? (subtitled A Plea For Tolerance In The Face Of New Dogmas), he ruminates on the problem of what to call himself:

As a philosopher, if I were speaking to a purely philosophic audience I should say that I ought to describe myself as an Agnostic, because I do not think that there is a conclusive argument by which one can prove that there is not a God. On the other hand, if I am to convey the right impression to the ordinary man in the street I think I ought to say that I am an Atheist, because when I say that I cannot prove that there is not a God, I ought to add equally that I cannot prove that there are not the Homeric gods.

In his 1953 essay, What Is An Agnostic? Russell states:

An agnostic thinks it impossible to know the truth in matters such as God and the future life with which Christianity and other religions are concerned. Or, if not impossible, at least impossible at the present time.

However, later in the essay, Russell says:

I think that if I heard a voice from the sky predicting all that was going to happen to me during the next twenty-four hours, including events that would have seemed highly improbable, and if all these events then proceeded to happen, I might perhaps be convinced at least of the existence of some superhuman intelligence.

Religious scholars

Religious scholars, whether Jewish, Muslim or Christian, affirm the possibility of knowledge, even of metaphysical realities such as God and the soul,[11] because human intelligence ("intus", within, and "legere", to read) has the power to reach the essence and existence of things, since it has a non-material, spiritual element. They affirm that “not being able to see or hold some specific thing does not necessarily negate its existence,” as in the case of gravity, entropy, mental telepathy, or reason and thought.[12]

According to these scholars, agnosticism is impossible in actual practice, since one either lives as if God did not exist (etsi Deus non daretur), or lives as if God did exist (etsi Deus daretur).[13][14][15] These scholars believe that each day in a person’s life is an unavoidable step towards death, and thus not to decide for or against God, the all-encompassing foundation, purpose, and meaning of life, is to decide in favor of atheism.[16][13] Even if there were truly no evidence for God, Christian philosopher Blaise Pascal offered to agnostics what is known as Pascal’s Wager: the infinite expected value of acknowledging God is always greater than the expected value of not acknowledging his existence, and thus it is a safer “bet” to choose God.[16]

These religious scholars argue that God has placed in his creation much evidence of his existence,[12] and continues to personally speak to humans.[17] Peter Kreeft and Ronald Tacelli write about a strong, cumulative case with their 20 rational arguments for God’s existence.[18] And, these scholars state, when agnostics demand from God that he proves his existence through laboratory testing, they are asking God, a superior being, to become man’s servant.[19]

According to Joseph Ratzinger, later elected Pope Benedict XVI, agnosticism, more specifically strong agnosticism, is a self-limitation of reason that contradicts itself when it acclaims the power of science to know the truth.[20][17] When reason imposes limits on itself in matters of religion and ethics, this leads to dangerous pathologies of religion and pathologies of science, such as the destruction of humans and ecological disasters.[20][17][21] "Agnosticism," said Ratzinger, "is always the fruit of a refusal of that knowledge which is in fact offered to man... The knowledge of God has always existed."[20] Agnosticism, stated Ratzinger, is a choice of comfort, pride, dominion, and utility over truth, and is opposed by the following attitudes: the keenest self-criticism, humble listening to the whole of existence, the persistent patience and self-correction of the scientific method, and a readiness to be purified by the truth.[17]

Belief

Belief is the psychological state in which an individual holds a proposition or premise to be true.[1]

Belief, knowledge and epistemology

The relationship between belief and knowledge is subtle. Those who believe a claim typically say that they know it. For instance, those who believe that the Sun is a god will report that they know that the Sun is a god. However, the terms belief and knowledge are used differently by philosophers.

Epistemology is the philosophical study of knowledge and belief. A primary problem for epistemology is exactly what is needed in order for us to have knowledge. In a notion derived from Plato's dialogue Theaetetus, philosophy has traditionally defined knowledge as justified true belief. The relationship between belief and knowledge is that a belief is knowledge if the belief is true, and if the believer has a justification (reasonable and necessarily plausible assertions/evidence/guidance) for believing it is true.
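Written compactly, the traditional definition says that a subject S knows a proposition p exactly when three conditions hold together; the symbols below are our own shorthand for illustration, not notation from the source:

\[
K_S(p) \iff p \;\wedge\; B_S(p) \;\wedge\; J_S(p),
\]

where \(p\) stands for "p is true", \(B_S(p)\) for "S believes p", and \(J_S(p)\) for "S is justified in believing p". Dropping any one conjunct breaks the definition, as the examples below illustrate.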

A false belief is not considered to be knowledge, even if it is sincere. A sincere believer in the flat earth theory does not know that the Earth is flat. Similarly, a truth that nobody believes is not knowledge, because in order to be knowledge, there must be some person who knows it.

Later epistemologists have questioned the "justified true belief" definition, and some philosophers have questioned whether "belief" is a useful notion at all.

Belief as a psychological theory

Mainstream psychology and related disciplines have traditionally treated belief as if it were the simplest form of mental representation and therefore one of the building blocks of conscious thought. Philosophers have tended to be more rigorous in their analysis and much of the work examining the viability of the belief concept stems from philosophical analysis.

The concept of belief presumes a subject (the believer) and an object of belief (the proposition). So like other propositional attitudes, belief implies the existence of mental states and intentionality, both of which are hotly debated topics in the philosophy of mind and whose foundations and relation to brain states are still controversial.

Beliefs are sometimes divided into core beliefs (those which you may be actively thinking about) and dispositional beliefs (those which you may ascribe to but have never previously thought about). For example, if asked 'do you believe tigers wear pink pajamas?' a person might answer that they do not, despite the fact they may never have thought about this situation before.[2]

That a belief is a mental state has been seen, by some, as contentious. While some philosophers have argued that beliefs are represented in the mind as sentence-like constructs, others have gone as far as arguing that there is no consistent or coherent mental representation underlying our common use of the belief concept, and that the concept is therefore obsolete and should be rejected.

This has important implications for understanding the neuropsychology and neuroscience of belief. If the concept of belief is incoherent or ultimately indefensible then any attempt to find the underlying neural processes which support it will fail. If the concept of belief does turn out to be useful, then this goal should (in principle) be achievable.


Philosopher Lynne Rudder Baker has outlined four main contemporary approaches to belief in her book Saving Belief:

• Our common-sense understanding of belief is correct - Sometimes called the ‘mental sentence theory’, in this conception, beliefs exist as coherent entities and the way we talk about them in everyday life is a valid basis for scientific endeavour. Jerry Fodor is one of the principal defenders of this point of view.

• Our common-sense understanding of belief may not be entirely correct, but it is close enough to make some useful predictions - This view argues that we will eventually reject the idea of belief as we use it now, but that there may be a correlation between what we take to be a belief when someone says 'I believe that snow is white' and how a future theory of psychology will explain this behaviour. Most notably, philosopher Stephen Stich has argued for this particular understanding of belief.

• Our common-sense understanding of belief is entirely wrong and will be completely superseded by a radically different theory which will have no use for the concept of belief as we know it - Known as eliminativism, this view, (most notably proposed by Paul and Patricia Churchland), argues that the concept of belief is like obsolete theories of times past such as the four humours theory of medicine, or the phlogiston theory of combustion. In these cases science hasn’t provided us with a more detailed account of these theories, but completely rejected them as valid scientific concepts to be replaced by entirely different accounts. The Churchlands argue that our common-sense concept of belief is similar, in that as we discover more about neuroscience and the brain, the inevitable conclusion will be to reject the belief hypothesis in its entirety.

• Our common-sense understanding of belief is entirely wrong, however treating people, animals and even computers as if they had beliefs, is often a successful strategy - The major proponents of this view, Daniel Dennett and Lynne Rudder Baker, are both eliminativists in that they believe that beliefs are not a scientifically valid concept, but they don’t go as far as rejecting the concept of belief as a predictive device. Dennett gives the example of playing a computer at chess. While few people would agree that the computer held beliefs, treating the computer as if it did (e.g. that the computer believes that taking the opposition’s queen will give it a considerable advantage) is likely to be a successful and predictive strategy. In this understanding of belief, named by Dennett the intentional stance, belief based explanations of mind and behaviour are at a different level of explanation and are not reducible to those based on fundamental neuroscience although both may be explanatory at their own level.

Is belief voluntary?

Most philosophers hold the view that belief formation is primarily spontaneous and involuntary. Many people only accept information supporting narrow and specific beliefs, rather than undertaking the more challenging broad study of beliefs across cultures. Few are able to live without drawing many automatic conclusions, as the human mind seems to prefer certainty, even if it is most likely in error. This occurs most often when people are forced to make "for or against" choices in a polarized world of so-called binary (either/or) choices.

Belief is most often mandatory for group affiliation or "official" membership. In many cases, people bolster beliefs in which they are emotionally involved by attempting to act to resolve contradictions they experience directly. This is done through creative rationalizations that reduce experiential dissonance. Human imagination serves as the catalyst for the creation, modification, and perpetuation of belief.

Hope for a world different from or better than the present world allows many people to hold information contradictory to their direct experience as valid. This phenomenon is known as dualism. People often believe merely what they wish to be true, no matter how much it stands in direct opposition to experiential life. Belief, as a component of the human mind, is mere speculation when its assumptions cannot be verified or logically reconciled to the external world.


It is technically impossible for all variations of world belief, across over 6.6 billion minds, to be simultaneously true. The observation of mutually exclusive variations of belief in various minds clearly demonstrates the phenomenon of diverging human imagination and personal modification. From an individual or group viewpoint, the preferred belief is strongly imagined to be the one and only unique "truth," fostering an agenda with perceived rewards and very specific group requirements.

Delusional beliefs

Delusions are defined as beliefs in psychiatric diagnostic criteria (for example in the Diagnostic and Statistical Manual of Mental Disorders). Psychiatrist and historian G. E. Berrios has challenged the view that delusions are genuine beliefs and instead labels them as "empty speech acts", where affected persons are motivated to express false or bizarre belief statements due to an underlying psychological disturbance. However, the majority of mental health professionals and researchers treat delusions as if they were genuine beliefs.

In Lewis Carroll's Through the Looking-Glass, the White Queen says, "Why, sometimes I've believed as many as six impossible things before breakfast." This is often quoted in mockery of the common ability of people to entertain beliefs contrary to fact.

Limiting beliefs

The term limiting belief is used for a belief that inhibits exploration of a wider cognitive space than would otherwise be the case. Examples of limiting beliefs are seen both in animals and people. These may be strongly held beliefs, or held unconsciously, and are often tied in with self-image or perceptions about the world. Everyday examples of limiting beliefs:

• That one has specific capabilities, roles, or traits which cannot be escaped or changed.
• That one cannot succeed, so there is no point committing to trying.
• That a particular opinion is right; therefore, there is no point considering other viewpoints.
• That a particular action or result is the only way to resolve a problem.

Certainty

Certainty can be defined as either (a) perfect knowledge that has total security from error, or (b) the mental state of being without doubt.[citation needed] Objectively defined, certainty is total continuity and validity of all foundational inquiry, to the highest degree of precision.[citation needed] Something is certain only if no skepticism can occur.[citation needed] Philosophy (at least historically) seeks this state.[citation needed] It is widely held that certainty is a failed historical enterprise.[1]

Emotion

Strictly speaking, certainty is not a property of statements, but a property of people. 'Certainty' is an emotional state, like anger, jealousy, or embarrassment. When someone says "B is certain" they really mean "I am certain that B". The former is often used in everyday language, as it has a rhetorical advantage. It is also sometimes used to convey that a large number of people are certain about B. However, the fact that certainty is an emotional state is not always heeded in the literature. In truth, certainty is an emotional state attained by many people every day. In this sense, certainty is linked to 'faith' as a similar state of consciousness or emotion.


History

Socrates- ancient Greece

Socrates, often thought to be the first true philosopher, had a higher criterion for knowledge than others before him. He took the skeptical problems that he encountered in his philosophy very seriously. As a result, he claimed to know nothing. Socrates often said that his wisdom was limited to an awareness of his own ignorance.

Al-Ghazali- Islamic theologian

Al-Ghazali was a professor of philosophy in the 11th century. His book titled The Incoherence of the Philosophers marks a major turn in Islamic epistemology, as Ghazali effectively discovered a philosophical skepticism that would not be commonly seen in the West until René Descartes, George Berkeley and David Hume. He described the necessity of proving the validity of reason independently from reason itself. He attempted this and failed. The doubt that he introduced to his foundation of knowledge could not be reconciled using philosophy. Taking this very seriously, he resigned from his post at the university and suffered serious psychosomatic illness. It was not until he became a religious Sufi that he found a solution to his philosophical problems, one based on Islamic religion; this encounter with skepticism led Ghazali to embrace a form of theological occasionalism, the belief that all causal events and interactions are not the product of material conjunctions but rather the immediate and present will of God.

Descartes- 17th Century

Descartes' Meditations on First Philosophy is a book in which Descartes first discards all belief in things which are not absolutely certain, and then tries to establish what can be known for sure. Although the phrase "Cogito, ergo sum" is often attributed to the Meditations on First Philosophy, it is actually put forward in his Discourse on the Method. However, due to the implications of inferring the conclusion within the predicate, he changed the argument to "I think, I exist"; this then becomes his first certainty.

Ludwig Wittgenstein- 20th Century

On Certainty is a book by Ludwig Wittgenstein. The main theme of the work is that context plays a role in epistemology. Wittgenstein asserts an anti-foundationalist message throughout the work: every claim can be doubted, but certainty is possible within a framework. "The function [propositions] serve in language is to serve as a kind of framework within which empirical propositions can make sense".[2]

Degrees of Certainty

Rudolf Carnap viewed certainty as a matter of degree ("degrees of certainty") which could be objectively measured, with degree one being certainty. Bayesian analysis derives degrees of certainty which are interpreted as a measure of subjective psychological belief.
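On the Bayesian reading, a degree of certainty is a probability between zero and one that is revised as evidence arrives, via Bayes' rule. The following sketch illustrates that reading; the prior of 0.5 and the likelihoods are invented numbers chosen for the example, not figures from the source.

```python
# Degrees of certainty as subjective probability, updated by Bayes' rule:
#   P(H | E) = P(E | H) * P(H) / P(E)
# All numbers below are invented for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior degree of certainty in hypothesis H after evidence E."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

certainty = 0.5  # start undecided: degree of certainty one half
for _ in range(3):  # three independent pieces of evidence of equal strength
    certainty = bayes_update(certainty, 0.8, 0.2)
    print(round(certainty, 4))
# Prints 0.8, 0.9412, 0.9846: certainty climbs toward, but never reaches,
# Carnap's degree one.
```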

Foundational crisis of mathematics

The foundational crisis of mathematics was the early 20th century's term for the search for proper foundations of mathematics.

After several schools of the philosophy of mathematics ran into difficulties one after the other in the 20th century, the assumption that mathematics had any foundation that could be stated within mathematics itself began to be heavily challenged.


One attempt after another to provide unassailable foundations for mathematics was found to suffer from various paradoxes (such as Russell's paradox) and to be inconsistent: an undesirable situation in which every mathematical statement that can be formulated in a proposed system (such as 2 + 2 = 5) can also be proved in the system.

Various schools of thought on the right approach to the foundations of mathematics were fiercely opposing each other. The leading school was that of the formalist approach, of which David Hilbert was the foremost proponent, culminating in what is known as Hilbert's program, which sought to ground mathematics on a small basis of a formal system proved sound by metamathematical finitistic means. The main opponent was the intuitionist school, led by L. E. J. Brouwer, which resolutely discarded formalism as a meaningless game with symbols[citation needed]. The fight was acrimonious. In 1928 Hilbert succeeded in having Brouwer, whom he considered a threat to mathematics, removed from the editorial board of Mathematische Annalen, the leading mathematical journal of the time.

Gödel's incompleteness theorems, proved in 1931, showed that essential aspects of Hilbert's program could not be attained. In his first result Gödel showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as is necessary to axiomatize the elementary theory of arithmetic – a statement that can be shown to be true, but that does not follow from the rules of the system. It thus became clear that the notion of mathematical truth cannot be reduced to a purely formal system as envisaged in Hilbert's program. In a second result Gödel showed that such a system is not powerful enough to prove its own consistency, let alone that a simpler system could do the job. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a weaker system than the system whose consistency it was supposed to prove). Meanwhile, the intuitionistic school had failed to attract adherents among working mathematicians, and floundered due to the difficulties of doing mathematics under the constraint of constructivism.
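Stated in modern notation (our paraphrase, not wording from the source): for any consistent, recursively axiomatizable theory \(T\) that interprets elementary arithmetic,

\[
\text{(G1)}\quad \exists\, G_T \ \text{such that} \ T \nvdash G_T \ \text{and} \ T \nvdash \lnot G_T,
\qquad
\text{(G2)}\quad T \nvdash \operatorname{Con}(T),
\]

where \(\operatorname{Con}(T)\) is the arithmetized statement that \(T\) is consistent. (G2) is the result that blocked Hilbert's hope of a finitistic consistency proof.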

In a sense, the crisis has not been resolved, but faded away: most mathematicians either do not work from axiomatic systems, or if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the various logical paradoxes never played a role anyway, and in those branches in which they do (such as logic and category theory), they may be avoided.

Quotes

“ There is no such thing as absolute certainty, but there is assurance sufficient for the purposes of human life. — John Stuart Mill ”

“ Doubt is not a pleasant condition, but certainty is absurd. — Voltaire ”

“ In this world nothing can be said to be certain, except death and taxes. — Benjamin Franklin ”


Determinism

Determinism is the philosophical proposition that every event, including human cognition and behaviour, decision and action, is causally determined by an unbroken chain of prior occurrences.[1] With numerous historical debates, many varieties and philosophical positions on the subject of determinism exist from traditions throughout the world.

Philosophy of determinism

It is a popular misconception that determinism necessarily entails that humanity or individual humans have no influence on the future and its events (a position known as fatalism); however, determinists believe that the level to which human beings have influence over their future is itself dependent on present and past. Causal determinism is associated with, and relies upon, the ideas of materialism and causality. Some of the main philosophers who have dealt with this issue are Steven M. Cahn, Omar Khayyám, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche and, more recently, John Searle, Ted Honderich, and Daniel Dennett.

Mecca Chiesa notes that the probabilistic or selectionistic determinism of B.F. Skinner comprised a wholly separate conception of determinism that was not mechanistic at all.[2] A mechanistic determinism would assume that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not.[3][4]

The nature of determinism

The exact meaning of the term determinism has historically been subject to several interpretations. Some, called Incompatibilists, view determinism and free will as mutually exclusive. The belief that free will is an illusion is known as Hard Determinism. Others, labeled Compatibilists, (or Soft Determinists) believe that the two ideas can be coherently reconciled. Incompatibilists who accept free will but reject determinism are called Libertarians — not to be confused with the political sense. Most of this disagreement is due to the fact that the definition of free will, like that of determinism, varies. Some feel it refers to the metaphysical truth of independent agency, whereas others simply define it as the feeling of agency that humans experience when they act.

Ted Honderich, in his book How Free Are You? - The Determinism Problem gives the following summary of the theory of determinism:

In its central part, determinism is the theory that our choices and decisions and what gives rise to them are effects. What the theory comes to therefore depends on what effects are taken to be... [I]t is effects that seem fundamental to the subject of determinism and how it affects our lives.[5]

Varieties of determinism

Causal (or nomological) determinism is the thesis that future events are necessitated by past and present events combined with the laws of nature. Such determinism is sometimes illustrated by the thought experiment of Laplace's demon. Imagine an entity that knows all facts about the past and the present, and knows all natural laws that govern the universe. Such an entity might, under certain circumstances, be able to use this knowledge to foresee the future, down to the smallest detail.[6] Pierre-Simon Laplace's determinist dogma (as described by Stephen Hawking) is generally referred to as "scientific determinism" and is predicated on the supposition that all events have a cause and effect, and that the precise combination of events at a particular time engenders a particular outcome.[2] This causal determinism has a direct relationship with predictability. (Perfect) predictability implies strict determinism, but lack of predictability does not necessarily imply lack of determinism. Limitations on predictability could alternatively be caused by factors such as a lack of information or excessive complexity. An example of this can be found by looking at a bomb dropping from the air. Through mathematics, we can predict the time the bomb will take to reach the ground, and we also know what will happen once the bomb explodes. Any small errors in prediction might arise from our not measuring some factors, such as puffs of wind or variations in air temperature along the bomb's path.
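As a worked instance of the kind of prediction meant here (the drop height is an invented figure, and air resistance is ignored), elementary kinematics gives the fall time of the bomb directly:

\[
h = \tfrac{1}{2} g t^{2} \quad\Longrightarrow\quad t = \sqrt{\frac{2h}{g}},
\]

so a bomb released from rest at \(h = 2000\,\text{m}\), with \(g \approx 9.81\,\text{m/s}^2\), reaches the ground after \(t \approx \sqrt{2 \times 2000 / 9.81} \approx 20.2\,\text{s}\). Unmeasured factors such as wind then enter only as small corrections to this deterministic estimate.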

Logical determinism is the notion that all propositions, whether about the past, present or future, are either true or false. The problem of free will, in this context, is the problem of how choices can be free, given that what one does in the future is already determined as true or false in the present.

Additionally, there is environmental determinism, also known as climatic or geographical determinism which holds the view that the physical environment, rather than social conditions, determines culture. Those who believe this view say that humans are strictly defined by stimulus-response (environment-behavior) and cannot deviate. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated. [7]

Biological determinism is the idea that all behavior, belief, and desire are fixed by our genetic endowment. There are other theses on determinism, including cultural determinism and the narrower concept of psychological determinism. Combinations and syntheses of determinist theses, e.g. bio-environmental determinism, are even more common. Addiction specialist Dr. Drew Pinsky relates addiction to biological determinism:

"Absolutely. It's a complex disorder, but it clearly has a genetic basis. In fact, in the definition of the disease, we consider genetics absolutely a crucial piece of the definition. So the definition as stated in a consensus conference that was published in the early '90s, it's a genetic disorder with a biological basis. The hallmark is the progressive use in the face of adverse consequence, and then finally denial."

Theological determinism is the thesis that there is a God who determines all that humans will do, either by knowing their actions in advance, via some form of omniscience[8] or by decreeing their actions in advance.[9] The problem of free will, in this context, is the problem of how our actions can be free, if there is a being who has determined them for us ahead of time.

Determinism with regard to ethics

Some hold that, were determinism true, it would negate human morals and ethics. Counter to this argument, some would say that determinism is simply the sum of empirical scientific findings, making it devoid of subjectivism. Morals and ethics do not hold the universal permanence that physical rules do (like magnetic polarity), but their very existence may also mean they were an inevitable product themselves: possibly, through an extended period of social development, a confluence of events formed to generate the very idea of morals and ethics in our minds. In other words, all events that actually occur are unavoidable, proven by the fact that these events do, in fact, occur. The "chicken or the egg" debate manifests again here.

Determinism in Eastern tradition

The idea that the entire universe is a deterministic system has been articulated in both Eastern and non-Eastern religion, philosophy, and literature. Determinism has been expressed in the Buddhist doctrine of Dependent Origination, which states that every phenomenon is conditioned by, and depends on, the phenomena that it is not. A common teaching story, called Indra's Net, illustrates this point using a metaphor. A vast auditorium is decorated with mirrors and/or prisms hanging on strings of different lengths from an immense number of points on the ceiling. One flash of light is sufficient to light the entire display, since light bounces and bends from hanging bauble to hanging bauble. Each bauble lights each and every other bauble. So, too, each of us is "lit" by each and every other entity in the Universe. In Buddhism, this teaching is used to demonstrate that to ascribe special value to any one thing is to ignore the interdependence of all things. Volitions of all sentient creatures determine the seeming reality in which we perceive ourselves as living, rather than a mechanical universe determining the volitions which humans imagine themselves to be forming.

In the story of the Indra's Net, the light that streams back and forth throughout the display is the analogy of karma. (Note that in popular Western usage, the word "karma" often refers to the concept of past good or bad actions resulting in like consequences.) In the Eastern context "Karma" refers to an action, or, more specifically, to an intentional action, and the Buddhist theory holds that every karma (every intentional action) will bear karmic fruit (produce an effect somewhere down the line). Volitional acts drive the universe. The consequences of this view often confound our ordinary expectations.

A shifting flow of probabilities for futures lies at the heart of theories associated with the Yi Jing (or I Ching, the Book of Changes). Probabilities take the center of the stage away from things and people. A kind of "divine" volition sets the fundamental rules for the working out of probabilities in the universe, and human volitions are always a factor in the ways that humans can deal with the real world situations one encounters. If one's situation in life is surfing on a tsunami, one still has some range of choices even in that situation. One person might give up, and another person might choose to struggle and perhaps to survive. The Yi Jing mentality is much closer to the mentality of quantum physics than to that of classical physics, and also finds parallelism in voluntarist or Existentialist ideas of taking one's life as one's project.

The followers of the philosopher Mozi made some early discoveries in optics and other areas of physics, ideas that were consonant with deterministic ideas[citation needed].

Determinism in Western tradition

In the West, the Ancient Greek atomists Leucippus and Democritus were the first to anticipate determinism when they theorized that all processes in the world were due to the mechanical interplay of atoms, but this theory did not gain much support at the time. Determinism in the West is often associated with Newtonian physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The "billiard ball" hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace's demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.

Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events, e.g.: If an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one's measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one's observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been so enormously successful that it has no competition. But it fails spectacularly as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, "uncertainty" was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves.


Minds and bodies

Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.

A number of positions can be delineated:

1. Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free will, interactionist dualism).[10][11]
2. Immaterial souls exist, but are part of a deterministic framework.
3. Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism).
4. Immaterial souls do not exist — the mind-body problem has some other solution.
5. Immaterial souls are all that exist (idealism).

Modern perspectives on determinism

Determinism and a first cause

Since the early twentieth century when astronomer Edwin Hubble first hypothesized that redshift shows the universe is expanding, prevailing scientific opinion has been that the current state of the universe is the result of a process described by the Big Bang. Many theists and deists claim that it therefore has a finite age, pointing out that something cannot come from nothing. The big bang does not describe from where the compressed universe came; instead it leaves the question open. Different astrophysicists hold different views about precisely how the universe originated (Cosmogony). The philosophical argument here would be that the big bang triggered every single action, and possibly mental thought, through the system of cause and effect.

Determinism and generative processes

In emergentist or generative philosophy of cognitive science and evolutionary psychology, free will does not exist.[12][13] However, an illusion of free will is experienced due to the generation of seemingly infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of behaviour emerging from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist.[12][13]

As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards' face values) is hidden from either player and no random events (such as dice rolling) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still produce an extremely large number of unpredictable games. By analogy, emergentists or generativists suggest that the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate infinite and unpredictable behaviour. Yet, if all these events were accounted for, and there were a known way to evaluate them, the seemingly unpredictable behaviour would become predictable.[12][13]

Dynamical-evolutionary psychology, cellular automata and the generative sciences, model emergent processes of social behaviour on this philosophy, showing the experience of free will as essentially a gift of ignorance or as a product of incomplete information.[12][13]
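A one-dimensional cellular automaton makes the point concrete. The sketch below runs Wolfram's Rule 30, a standard example of this kind of model (the grid width and step count are arbitrary choices of ours): the update rule is a single deterministic line, yet the pattern it unfolds resists casual prediction.

```python
# Rule 30: a tiny deterministic rule whose behaviour looks unpredictable.
# Each new cell is left XOR (centre OR right); width/steps are arbitrary.

WIDTH, STEPS = 31, 15

def step(cells):
    """Apply Wolfram's Rule 30 to one row of cells, with wraparound."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
# Rerunning from the same initial row reproduces the output exactly
# (determinism), yet no glance at the rule reveals what row k will look
# like: the "free" look of the pattern is a product of incomplete information.
```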


Determinism in mathematical models

Many mathematical models are deterministic. This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the use of a stochastic model even when the underlying system is accurately modeled in the abstract by deterministic equations.[14]
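A standard concrete example of such sensitive dependence is the logistic map \(x \mapsto r\,x\,(1 - x)\). In the sketch below (the parameter \(r = 4.0\) and the initial values are illustrative choices of ours, not anything from the source), two trajectories that start one part in a billion apart soon disagree completely:

```python
# The logistic map: fully deterministic, yet sensitively dependent on
# initial conditions at r = 4.0 (the chaotic regime).

r = 4.0
x, y = 0.2, 0.2 + 1e-9  # two almost identical starting points

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} |x-y|={abs(x - y):.2e}")
# The gap roughly doubles each step; by around step 40 the two trajectories
# are uncorrelated, so at any finite measurement precision this deterministic
# model predicts no better than a stochastic one.
```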

Arguments against determinism

Libertarianism is the belief that we have free will, and that free will is incompatible with determinism. Compatibilism, by contrast, holds that free will and determinism can be reconciled.

The negation of determinism is sometimes called indeterminism.

Determinism, quantum mechanics and classical physics

Since the beginning of the 20th century, quantum mechanics has revealed previously concealed aspects of events. Newtonian physics, taken in isolation rather than as an approximation to quantum mechanics, depicts a universe in which objects move in perfectly determinative ways. At human-scale levels of interaction, Newtonian mechanics gives predictions that in many areas check out to the accuracy of measurement. Poorly designed and fabricated guns and ammunition scatter their shots rather widely around the center of a target, while better guns produce tighter patterns. Absolute knowledge of the forces accelerating a bullet should produce absolutely reliable predictions of its path, or so it was thought. However, knowledge is never absolute in practice, and the equations of Newtonian mechanics can exhibit sensitive dependence on initial conditions, meaning small errors in knowledge of initial conditions can result in arbitrarily large deviations from predicted behavior.

At atomic scales the paths of objects can only be predicted in a probabilistic way. The paths may not be exactly specified in a full quantum description of the particles; "path" is a classical concept which quantum particles do not exactly possess. The probability arises from the measurement of the perceived path of the particle. In some cases, a quantum particle may trace an exact path, and the probability of finding the particle in that path is one. The quantum development is at least as predictable as the classical motion, but it describes wave functions that cannot be easily expressed in ordinary language. In double-slit experiments, photons are fired one at a time through a double-slit apparatus at a distant screen. They do not arrive at a single point, nor do they arrive in a scattered pattern analogous to bullets fired by a fixed gun at a distant target. Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions can be calculated reliably. In that sense the behavior of light in this apparatus is deterministic, but there is no way to predict where in the resulting interference pattern an individual photon will make its contribution (see Heisenberg Uncertainty Principle).

Some have argued that, in addition to the conditions humans can observe and the laws we can deduce, there are hidden factors or "hidden variables" that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically determinative way. In actuality, they proceed in an absolutely deterministic way. Although matters are still subject to some measure of dispute, quantum mechanics makes statistical predictions which would be violated if some local hidden variables existed. There have been a number of experiments to verify those predictions, and so far they do not appear to be violated, though many physicists believe better experiments are needed to conclusively settle the question. (See Bell test experiments.) It is possible, however, to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics.

On the macro scale it can matter very much whether a bullet arrives at a certain point at a certain time, as snipers are well aware; there are analogous quantum events that have macro- as well as quantum-level consequences. It is easy to contrive situations in which the arrival of an electron at a screen at a certain point and time would trigger one event and its arrival at another point would trigger an entirely different event. (See Schrödinger's cat.)

Even before the laws of quantum mechanics were fully developed, the phenomenon of radioactivity posed a challenge to determinism. A gram of uranium-238, a commonly occurring radioactive substance, contains some 2.5 × 10^21 atoms. By all tests known to science these atoms are identical and indistinguishable. Yet about 12,600 times a second one of the atoms in that gram will decay, giving off an alpha particle. This decay does not depend on external stimulus, and no extant theory of physics predicts, from realistically obtainable knowledge, when any given atom will decay. The uranium found on earth is thought to have been synthesized during a supernova explosion that occurred roughly 5 billion years ago. For determinism to hold, every uranium atom must contain some internal "clock" that specifies the exact time it will decay.[citation needed] And somehow the laws of physics must specify exactly how those clocks were set as each uranium atom was formed during the supernova collapse.
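
These figures can be checked roughly from the half-life of uranium-238 (about 4.47 billion years); a minimal sketch of the arithmetic:

# Rough check of the decay-rate figure quoted above (a sketch, not a measurement).
import math

AVOGADRO = 6.022e23           # atoms per mole
MOLAR_MASS_U238 = 238.0       # grams per mole
HALF_LIFE_YEARS = 4.468e9     # half-life of uranium-238
SECONDS_PER_YEAR = 3.156e7

atoms_per_gram = AVOGADRO / MOLAR_MASS_U238                           # ~2.5 × 10^21
decay_constant = math.log(2) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)   # per second
activity = decay_constant * atoms_per_gram                            # decays per second

print(f"atoms per gram: {atoms_per_gram:.2e}")   # ~2.53e21
print(f"decays per second: {activity:.0f}")      # ~12,400, close to the quoted figure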

Exposure to alpha radiation can cause cancer. For this to happen, at some point a specific alpha particle must alter some chemical reaction in a cell in a way that results in a mutation. Since molecules are in constant thermal motion, the exact timing of the radioactive decay that produced the fatal alpha particle matters. If probabilistically determined events do have an impact on the macro events -- such as when a person who could have been historically important dies in youth of a cancer caused by a random mutation -- then the course of history is not determined from the dawn of time.

The time-dependent Schrödinger equation gives the first time derivative of the quantum state. That is, it explicitly and uniquely predicts the development of the wave function with time.
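
In standard notation (the equation itself is not written out in this text), it reads

    iħ ∂Ψ(x, t)/∂t = Ĥ Ψ(x, t),

where Ĥ is the Hamiltonian operator: given the wave function Ψ at one instant, the equation fixes its development at all later times.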

So quantum mechanics is deterministic, provided that one accepts the wave function itself as reality (rather than as probability of classical coordinates). Since we have no practical way of knowing the exact magnitudes, and especially the phases, in a full quantum mechanical description of the causes of an observable event, this turns out to be philosophically similar to the "hidden variable" doctrine.

According to some, quantum mechanics is more strongly ordered than classical mechanics, because while classical mechanics is chaotic, quantum mechanics is not. For example, the classical problem of three bodies under a force such as gravity is not integrable, while the quantum mechanical three-body problem is tractable and integrable, using the Faddeev equations. That is, the quantum mechanical problem can always be solved to a given accuracy with a large enough computer of predetermined precision, while the classical problem may require arbitrarily high precision, depending on the details of the motion. This does not mean that quantum mechanics describes the world as more deterministic, unless one already considers the wave function to be the true reality. Even so, this does not get rid of the probabilities, because we can't do anything without using classical descriptions; rather, it assigns the probabilities to the classical approximation instead of to the quantum reality.

Asserting that quantum mechanics is deterministic by treating the wave function itself as reality implies a single wave function for the entire universe, starting at the big bang. Such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that could have evolved from the big bang. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. (See cosmic inflation and primordial fluctuations.) If so, the "wave function of everything" would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all. (See large-scale structure of the cosmos.)

First cause

Intrinsic to the debate concerning determinism is the issue of first cause. Deism, a philosophy articulated in the seventeenth century, holds that the universe has been deterministic since creation, but ascribes the creation to a metaphysical God or first cause outside of the chain of determinism. God may have begun the process, Deism argues, but God has not influenced its evolution. This perspective illustrates a puzzle underlying any conception of determinism:

Assume: All events have causes, and their causes are all prior events. There is no cycle of events such that an event (possibly indirectly) causes itself.

The picture this gives us is that event A_N is preceded by A_(N-1), which is preceded by A_(N-2), and so forth.

Under these assumptions, two possibilities seem clear, and both of them question the validity of the original assumptions:

(1) There is an event A_0 prior to which there was no other event that could serve as its cause.
(2) There is no event A_0 prior to which there was no other event; in that case we are presented with an infinite series of causally related events, which is itself an event, and yet there is no cause for this infinite series of events.

Under this analysis the original assumption must have something wrong with it. It can be fixed by admitting one exception: a creation event (either the creation of the original event or events, or the creation of the infinite series of events) that is itself not a caused event in the sense of the word "caused" used in the formulation of the original assumption. Some agency, which many systems of thought call God, creates space, time, and the entities found in the universe by means of some process that is analogous to causation but is not causation as we know it. This solution to the original difficulty has led people to question whether there is any reason for there being only one divine quasi-causal act, and whether there have not been a number of events that have occurred outside the ordinary sequence of events, events that may be called miracles.

Another possibility is that the "last event" loops back to the "first event", causing an infinite loop. If you were to call the Big Bang the first event, you would see the end of the universe as the "last event". In theory, the end of the universe would then be the cause of the beginning of the universe, leaving an infinite loop of time with no real beginning or end. This theory eliminates the need for a first cause, but it does not explain why there should be a loop in time.

Immanuel Kant carried this idea of Leibniz forward in his doctrine of transcendental relations, and this had profound effects on later philosophical attempts to sort these issues out. His most influential immediate successor, a strong critic whose ideas were nonetheless strongly influenced by Kant, was Edmund Husserl, the developer of the school of philosophy called phenomenology. But the central concern of that school was to elucidate not physics but the grounding of the information that physicists and others regard as empirical. In an indirect way, this train of investigation appears to have contributed much to the philosophy of science called logical positivism, and particularly to the thought of members of the Vienna Circle, all of whom have had much to say, at least indirectly, about ideas of determinism.

Approximation

An approximation (represented by the symbol ≈) is an inexact representation of something that is still close enough to be useful. Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.

Approximations may be used because incomplete information prevents the use of exact representations. Many problems in physics are either too complex to solve analytically or impossible to solve exactly. Even when an exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly.

For instance, physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical behaviours—e.g. gravity—are much easier to calculate for a sphere than for less regular shapes.

The problem of two or more planets orbiting a sun has no exact general solution. Ignoring the planets' gravitational pull on each other and assuming that the sun does not move often achieves a good approximation. The use of perturbations to correct for the errors can yield more accurate solutions, and simulating the motions of the planets and the star yields more accurate solutions still.
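
The sketch below, a deliberately crude Python integrator with the second planet held fixed for simplicity (all values illustrative, none taken from the text), compares a planet's one-year orbit computed while ignoring a second planet's pull with the same orbit including that pull; the small deviation is what perturbation methods then correct for.

# Units: AU, years, solar masses, with G*M_sun = 4*pi**2 in these units.
import math

GM_SUN = 4 * math.pi ** 2
M_JUPITER = 9.5e-4        # second planet's mass in solar masses (Jupiter-like)
DT = 1e-4                 # time step in years
STEPS = int(1.0 / DT)     # integrate for one year

def accel(x, y, jx=None, jy=None):
    # Acceleration from the (fixed) sun, plus the second planet if given.
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM_SUN * x / r3, -GM_SUN * y / r3
    if jx is not None:
        dx, dy = x - jx, y - jy
        d3 = (dx * dx + dy * dy) ** 1.5
        ax -= GM_SUN * M_JUPITER * dx / d3
        ay -= GM_SUN * M_JUPITER * dy / d3
    return ax, ay

# An Earth-like planet on a circular orbit at 1 AU; the perturber sits at (5.2, 0).
x1, y1, vx1, vy1 = 1.0, 0.0, 0.0, 2 * math.pi   # two-body approximation
x2, y2, vx2, vy2 = 1.0, 0.0, 0.0, 2 * math.pi   # with the perturber's pull

for _ in range(STEPS):
    ax, ay = accel(x1, y1)
    vx1 += ax * DT; vy1 += ay * DT
    x1 += vx1 * DT; y1 += vy1 * DT
    ax, ay = accel(x2, y2, 5.2, 0.0)
    vx2 += ax * DT; vy2 += ay * DT
    x2 += vx2 * DT; y2 += vy2 * DT

print(f"deviation after one year: {math.hypot(x1 - x2, y1 - y2):.2e} AU")  # small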

The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.

Science

The scientific method is carried out with a constant interaction between scientific laws (theory) and empirical measurements, which are constantly compared to one another.

Approximation also refers to the use of a simpler process or model to make predictions easier. The most common versions of philosophy of science accept that empirical measurements are always approximations: they do not perfectly represent what is being measured. The history of science indicates that the scientific laws commonly held to be true at any time in history are only approximations to some deeper set of laws. For example, attempting to resolve a model using outdated physical laws alone incorporates an inherent source of error, which should be corrected by approximating the quantum effects not present in those laws.

Each time a newer set of laws is proposed, it is required that, in the limiting situations in which the older laws were tested against experiment, the newer laws be nearly identical to the older ones, to within the measurement uncertainties of the older observations. This is the correspondence principle.
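
For instance, relativistic kinetic energy reduces to the Newtonian ½mv² at speeds far below the speed of light; a minimal sketch (example values only):

# The correspondence principle in miniature: the newer law (relativity)
# agrees with the older one (Newton) in the regime where the older one
# was tested.
C = 299_792_458.0   # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v * v

def ke_relativistic(m, v):
    gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
    return (gamma - 1.0) * m * C * C

for v in (3.0e5, 3.0e6, 3.0e7):   # 0.1%, 1%, and 10% of the speed of light
    n, r = ke_newton(1.0, v), ke_relativistic(1.0, v)
    print(f"v = {v:.0e} m/s  Newton = {n:.6e} J  relativistic = {r:.6e} J")
# At the lowest speed the two agree to about seven digits; the discrepancy
# grows with v, and the older law survives as the low-speed limit of the newer.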

Mathematics

Approximation usually occurs when an exact form or an exact numerical value is unknown. However, some known form may exist that represents the real form so closely that no significant deviation can be found. Approximation is also used when a number is irrational, such as π, which is often shortened to 3.14, or √7, often shortened to 2.65. Numerical approximations sometimes result from using a small number of significant digits. Approximation theory is a branch of mathematics, a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers. The symbol "≈" means "approximately equal to"; the tilde (~) and the Libra sign (♎) are common alternatives.
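
A few lines of Python make the point (illustrative only):

import math

print(math.pi)                  # 3.141592653589793 (itself a finite approximation)
print(round(math.pi, 2))        # 3.14, the common shorthand mentioned above
print(math.sqrt(7))             # 2.6457513110645907
print(round(math.sqrt(7), 2))   # 2.65
print(abs(math.pi - 3.14))      # error from keeping few digits: about 0.0016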

Fallibilism

Fallibilism is the philosophical doctrine that all claims of knowledge could, in principle, be mistaken. Some fallibilists go further, arguing that absolute certainty about knowledge is impossible. As a formal doctrine, it is most strongly associated with Charles Sanders Peirce, John Dewey, and other pragmatists, who use it in their attacks on foundationalism. However, it is arguably already present in the views of some ancient philosophers, including Xenophanes, Socrates, and Plato. Another proponent of fallibilism is Karl Popper, who builds his theory of knowledge, critical rationalism, on fallibilistic presuppositions. Fallibilism has also been employed by Willard Van Orman Quine to attack, among other things, the distinction between analytic and synthetic statements.

Unlike scepticism, fallibilism does not imply the need to abandon our knowledge - we needn't have logically conclusive justifications for what we know. Rather, it is an admission that, because empirical knowledge can be revised by further observation, any of the things we take as knowledge might possibly turn out to be false. Some fallibilists make an exception for things that are axiomatically true (such as mathematical and logical knowledge). Others remain fallibilists about these as well, on the basis that, even if these axiomatic systems are in a sense infallible, we are still capable of error when working with these systems. The critical rationalist Hans Albert argues that it is impossible to prove any truth with certainty, even in logic and mathematics. This argument is called the Münchhausen Trilemma.

Moral fallibilism

Moral fallibilism is a specific subset of the broader epistemological fallibilism outlined above. In the debate between moral subjectivism and moral objectivism, moral fallibilism holds out a third plausible stance: that objectively true moral standards exist, but that they cannot be reliably or conclusively determined by humans. This avoids the problems associated with the flexibility of subjectivism by retaining the idea that morality is not a matter of mere opinion, whilst accounting for the conflict between differing objective moralities. Notable proponents of such views are Isaiah Berlin (value pluralism) and Bernard Williams (perspectivism).

Epistemology

According to Plato's Theaetetus, knowledge is a subset of that which is both true and believed.

Epistemology (from Greek επιστήμη - episteme, "knowledge" + λόγος, "logos") or theory of knowledge is a branch of philosophy concerned with the nature and scope (limitations) of knowledge.[1] The term was introduced into English by the Scottish philosopher James Frederick Ferrier (1808-1864).[2]

Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, as well as skepticism about different knowledge claims. In other words, epistemology primarily addresses the following questions: "What is knowledge?", "How is knowledge acquired?", "What do people know?", "How do we know what we know?"

Knowledge

The primary question that epistemology addresses is "What is knowledge?" This question is several millennia old.

Distinguishing knowing that from knowing how

In this article, and in epistemology in general, the kind of knowledge usually discussed is propositional knowledge, also known as "knowledge-that", as opposed to "knowledge-how". For example: in mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers. Many (but not all) philosophers thus think there is an important distinction between "knowing that" and "knowing how", with epistemology primarily interested in the former. This distinction is recognized linguistically in many languages, though not in modern English except as dialect (see the verbs "ken" and "wit" in the Shorter Oxford Dictionary).[3]

In Personal Knowledge, Michael Polanyi articulates a case for the epistemological relevance of both forms of knowledge; using the example of the act of balance involved in riding a bicycle, he suggests that the theoretical knowledge of the physics involved in maintaining a state of balance cannot substitute for the practical knowledge of how to ride, and that it is important to understand how both are established and grounded.

In recent times, some epistemologists (see Sosa, Greco, Kvanvig, Zagzebski) have argued that we should not think of knowledge this way: epistemology should evaluate people's properties (i.e., intellectual virtues) instead of propositions' properties. This is, in short, because higher forms of cognitive success (i.e., understanding) involve non-"veritic" features which cannot be evaluated from a justified-true-belief view of knowledge.

Belief

Often, statements of "belief" mean that the speaker predicts something that will prove to be useful or successful in some sense — perhaps the speaker might "believe in" his or her favorite football team. This is not the kind of belief usually addressed within epistemology. The kind that is dealt with is when "to believe something" simply means any cognitive content held as true. For example, to believe that the sky is blue is to think that the proposition "The sky is blue" is true.

Knowledge implies belief. The statement "I know P, but I don't believe that P is true" is contradictory. To know P is, among other things, to believe that P is true, or to believe in P. (See the article on Moore's paradox.)

Truth

If someone believes something, he or she thinks that it is true but may be mistaken. This is not the case with knowledge. For example, a man thinks that a particular bridge is safe enough to support him, and he attempts to cross it; unfortunately, the bridge collapses under his weight. It could be said that the man believed that the bridge was safe, but that his belief was mistaken. It would not be accurate to say that he knew that the bridge was safe, because plainly it was not. By contrast, if the bridge actually supported his weight then he might be justified in subsequently holding that he knew the bridge had been safe enough for his passage, at least at that particular time. For something to count as knowledge, it must actually be true.

The Aristotelian definition of truth states:

"To say of something which is that it is not, or to say of something which is not that it is, is false. However, to say of something which is that it is, or of something which is not that it is not, is true."

Justification

Plato

In Plato's dialogue Theaetetus, Socrates considers a number of theories as to what knowledge is, the last being that knowledge is true belief that has been "given an account of" — meaning explained or defined in some way. According to the theory that knowledge is justified true belief, in order to know that a given proposition is true, one must not only believe the relevant true proposition, but one must also have a good reason for doing so. One implication of this would be that no one would gain knowledge just by believing something that happened to be true. For example, an ill person with no medical training, but a generally optimistic attitude, might believe that he/she will recover from his/her illness quickly. Nevertheless, even if this belief turned out to be true, the patient would not have known that he/she would get well since his/her belief lacked justification. The definition of knowledge as justified true belief was widely accepted until the 1960s. At this time, a paper written by the American philosopher Edmund Gettier provoked widespread discussion. See theories of justification for other views on the idea.

The Gettier problem

In 1963 Edmund Gettier called into question the theory of knowledge that had been dominant among philosophers for thousands of years.[4] In a few pages, Gettier argued that there are situations in which one's belief may be justified and true, yet fail to count as knowledge. That is, Gettier contended that while justified belief in a proposition is necessary for that proposition to be known, it is not sufficient: a true proposition can be believed by an individual and still fail to fall within the "knowledge" category.

According to Gettier, there are certain circumstances in which one does not have knowledge, even when all of the above conditions are met. Gettier proposed two thought experiments, which have come to be known as "Gettier cases", as counterexamples to the classical account of knowledge. One of the cases involves two men, Smith and Jones, who are awaiting the results of their applications for the same job. Each man has ten coins in his pocket. Smith has excellent reasons to believe that Jones will get the job and, furthermore, knows that Jones has ten coins in his pocket (he recently counted them). From this Smith infers, "the man who will get the job has ten coins in his pocket." However, Smith is unaware that he has ten coins in his own pocket. Furthermore, Smith, not Jones, is going to get the job. While Smith has strong evidence to believe that Jones will get the job, he is wrong. Smith has a justified true belief that a man with ten coins in his pocket will get the job; however, according to Gettier, Smith does not know that a man with ten coins in his pocket will get the job, because Smith's belief is "...true by virtue of the number of coins in Smith's pocket, while Smith does not know how many coins are in Smith's pocket, and bases his belief...on a count of the coins in Jones's pocket, whom he falsely believes to be the man who will get the job." (see [4] p.122.) These cases fail to be knowledge because the subject's belief is justified, but only happens to be true in virtue of luck.

Responses to Gettier

The responses to Gettier have been varied. Usually, they have involved substantive attempts to provide a definition of knowledge different from the classical one, either by recasting knowledge as justified true belief with some additional fourth condition, or as something else altogether.

Infallibilism, indefeasibility

In one response to Gettier, the American philosopher Richard Kirkham has argued that the only definition of knowledge that could ever be immune to all counterexamples is the infallibilist one.[citation needed] To qualify as an item of knowledge, so the theory goes, a belief must not only be true and justified, the justification of the belief must necessitate its truth. In other words, the justification for the belief must be infallible. (See Fallibilism, below, for more information.)

Yet another possible candidate for the fourth condition of knowledge is indefeasibility. Defeasibility theory maintains that there should be no overriding or defeating truths for the reasons that justify one's belief. For example, suppose that person S believes they saw Tom Grabit steal a book from the library and uses this to justify the claim that Tom Grabit stole a book from the library. A possible defeater or overriding proposition for such a claim could be a true proposition like, "Tom Grabit's identical twin Sam is currently in the same town as Tom." So long as no defeaters of one's justification exist, a subject would be epistemically justified.

The Indian philosopher B. K. Matilal has drawn on the Navya-Nyaya fallibilism tradition to respond to the Gettier problem. Nyaya theory distinguishes between knowing p and knowing that one knows p; these are different events, with different causal conditions. The second level is a sort of implicit inference that usually follows immediately upon the episode of knowing p (knowledge simpliciter). The Gettier case is analyzed by reference to a view of Gangesha (13th century), who takes any true belief to be knowledge; thus a true belief acquired through a wrong route may just be regarded as knowledge simpliciter on this view. The question of justification arises only at the second level, when one considers the knowledgehood of the acquired belief. Initially there is a lack of uncertainty, so it becomes a true belief. But at the very next moment, when the hearer is about to embark upon the venture of knowing whether he knows p, doubts may arise. "If, in some Gettier-like cases, I am wrong in my inference about the knowledgehood of the given occurrent belief (for the evidence may be pseudo-evidence), then I am mistaken about the truth of my belief -- and this is in accord with Nyaya fallibilism: not all knowledge-claims can be sustained."[5]

Reliabilism

Reliabilism is a theory advanced by philosophers such as Alvin Goldman according to which a belief is justified (or otherwise supported in such a way as to count towards knowledge) only if it is produced by processes that typically yield a sufficiently high ratio of true to false beliefs. In other words, this theory states that a true belief counts as knowledge only if it is produced by a reliable belief-forming process.

Reliabilism has been challenged by Gettier cases. Another argument that challenges reliabilism, like the Gettier cases (although it was not presented in the same short article as the Gettier cases), is the case of Henry and the barn façades. In the thought experiment, a man, Henry, is driving along and sees a number of buildings that resemble barns. Based on his perception of one of these, he concludes that he has just seen barns. While he has seen one, and the perception he based his belief on was of a real barn, all the other barn-like buildings he saw were façades. Theoretically, Henry doesn't know that he has seen a barn, despite both his belief that he has seen one being true and his belief being formed on the basis of a reliable process (i.e. his vision), since he only acquired his true belief by accident.[citation needed]

Other responses

The American philosopher Robert Nozick has offered the following definition of knowledge:

S knows that P if and only if:

• P;
• S believes that P;
• if P were false, S would not believe that P;
• if P were true, S would believe that P. [6]

Nozick believed that the third subjunctive condition served to address cases of the sort described by Gettier. Nozick further claims this condition addresses a case of the sort described by D. M. Armstrong[7]: A father believes his son innocent of committing a particular crime, both because of faith in his son and (now) because he has seen presented in the courtroom a conclusive demonstration of his son's innocence. His belief via the method of the courtroom satisfies the four subjunctive conditions, but his faith-based belief does not. If his son were guilty, he would still believe him innocent, on the basis of faith in his son; this would violate the third subjunctive condition.

The British philosopher Simon Blackburn has criticized this formulation by suggesting that we do not want to accept as knowledge beliefs which, while they "track the truth" (as Nozick's account requires), are not held for appropriate reasons. He says that "we do not want to award the title of knowing something to someone who is only meeting the conditions through a defect, flaw, or failure, compared with someone else who is not meeting the conditions."[citation needed]

Timothy Williamson has advanced a theory of knowledge according to which knowledge is not justified true belief plus some extra condition(s). In his book Knowledge and its Limits, Williamson argues that the concept of knowledge cannot be analyzed into a set of other concepts; instead, it is sui generis. Thus, though knowledge requires justification, truth, and belief, the word "knowledge" cannot, according to Williamson's theory, be accurately regarded as simply shorthand for "justified true belief".

Externalism and internalism

Part of the debate over the nature of knowledge is a debate between epistemological externalists on the one hand, and epistemological internalists on the other. Externalists think that factors deemed "external", meaning outside of the psychological states of those who gain knowledge, can be conditions of knowledge. For example, an externalist response to the Gettier problem is to say that, in order for a justified, true belief to count as knowledge, it must be caused, in the right sort of way, by relevant facts. Such causation, to the extent that it is "outside" the mind, would count as an external, knowledge-yielding condition. Internalists, contrariwise, claim that all knowledge-yielding conditions are within the psychological states of those who gain knowledge.

René Descartes, a prominent philosopher and supporter of internalism, wrote that, since the only method by which we perceive the external world is through our senses, and since the senses are not infallible, we should not consider our concept of knowledge to be infallible. The only way to find anything that could be described as "infallibly true", he advocates, would be to pretend that an omnipotent, deceitful being is tampering with one's perception of the universe, and that the logical thing to do is to question anything that involves the senses.

"Cogito ergo sum" (I think, therefore I am) is commonly associated with Descartes' theory, because he postulated that the only thing that he could not logically bring himself to doubt is his own existence: "I do not exist" is a contradiction in terms; the act of saying that one does not exist assumes that someone must be making the statement in the first place. Though Descartes could doubt his senses, his body, and the world around him, he could not deny his own existence, because he was able to doubt and must exist in order to do so. Even if some "evil genius" were deceiving him, he would have to exist in order to be deceived.

However, from this Descartes did not go so far as to define what he was. This was pointed out by the materialist philosopher Pierre Gassendi (1592-1655), who accused Descartes of saying that he was "not this and not that" while never saying what exactly was existing. One could argue that this is not an edifying question, because it does not matter what exactly exists; it only matters that it does indeed exist.

Acquiring knowledge

The second question that will be dealt with is the question of how knowledge is acquired. This area of epistemology covers what is called "the regress problem"; epistemic distinctions, such as that between experience and apriority as means of creating knowledge, and between synthesis and analysis as means of proof; and debates such as the one between empiricists and rationalists.

The regress problem

Suppose we make a point of asking for a justification for every belief. Any given justification will itself depend on another belief for its justification, so one can also reasonably ask for this to be justified, and so forth. This appears to lead to an infinite regress, with each belief justified by some further belief. The apparent impossibility of completing an infinite chain of reasoning is thought by some to support skepticism. The skeptic will argue that since no one can complete such a chain, ultimately no beliefs are justified and, therefore, no one knows anything. "The only thing I know for sure is that I do not know for sure."

Response to the regress problem

Many epistemologists studying justification have attempted to argue for various types of chains of reasoning that can escape the regress problem.

Infinitism

Some philosophers, notably Peter Klein in his "Human Knowledge and the Infinite Regress of Reasons", have argued that it's not impossible for an infinite justificatory series to exist. This position is known as "infinitism". Infinitists typically take the infinite series to be merely potential, in the sense that an individual may have indefinitely many reasons available to him, without having consciously thought through all of these reasons. The individual need only have the ability to bring forth the relevant reasons when the need arises. This position is motivated in part by the desire to avoid what is seen as the arbitrariness and circularity of its chief competitors, foundationalism and coherentism.

Foundationalism

Foundationalists respond to the regress problem by claiming that some beliefs that support other beliefs do not themselves require justification by other beliefs. Sometimes, these beliefs, labeled "foundational", are characterized as beliefs that one is directly aware of the truth of, or as beliefs that are self-justifying, or as beliefs that are infallible. According to one particularly permissive form of foundationalism, a belief may count as foundational, in the sense that it may be presumed true until defeating evidence appears, as long as the belief seems to its believer to be true.[citation needed] Others have argued that a belief is justified if it is based on perception or certain a priori considerations.

The chief criticism of foundationalism is that it allegedly leads to the arbitrary or unjustified acceptance of certain beliefs.[8]

Coherentism

Another response to the regress problem is coherentism, which is the rejection of the assumption that the regress proceeds according to a pattern of linear justification. To avoid the charge of circularity, coherentists hold that an individual belief is justified circularly by the way it fits together (coheres) with the rest of the belief system of which it is a part. This theory has the advantage of avoiding the infinite regress without claiming special, possibly arbitrary status for some particular class of beliefs. Yet, since a system can be coherent while also being wrong, coherentists face the difficulty in ensuring that the whole system corresponds to reality.

Foundherentism

There is also a position known as "foundherentism", conceived by Susan Haack as a unification of foundationalism and coherentism. One component of this theory is the so-called "analogy of the crossword puzzle": whereas, say, infinitists regard the regress of reasons as "shaped" like a single line, Haack argues that it is more like a crossword puzzle, with multiple lines mutually supporting each other.[9]

A priori and a posteriori knowledge

The nature of this distinction has been disputed by various philosophers; however, the terms may be roughly defined as follows:

• A priori knowledge is knowledge that is known independently of experience (that is, it is non-empirical).
• A posteriori knowledge is knowledge that is known by experience (that is, it is empirical).

Analytic/synthetic distinction

Some propositions are such that we appear to be justified in believing them just so far as we understand their meaning. For example, consider, "My father's brother is my uncle." We seem to be justified in believing it to be true by virtue of our knowledge of what its terms mean. Philosophers call such propositions "analytic". Synthetic propositions, on the other hand, have distinct subjects and predicates. An example of a synthetic proposition would be, "My father's brother has black hair." Kant held that all mathematical propositions are synthetic (though knowable a priori).

The American philosopher W. V. O. Quine, in his "Two Dogmas of Empiricism", famously challenged the distinction, arguing that the two have a blurry boundary.

Specific theories of knowledge acquisition

Empiricism

In philosophy, empiricism is generally a theory of knowledge emphasizing the role of experience, especially experience based on perceptual observations by the five senses. Certain forms treat all knowledge as empirical,[citation needed] while some regard disciplines such as mathematics and logic as exceptions.[citation needed]

Rationalism

Rationalists believe that knowledge is primarily (at least in some areas) acquired by a priori processes or is innate—e.g., in the form of concepts not derived from experience. The relevant theoretical processes often go by the name "intuition".[citation needed] The relevant theoretical concepts may purportedly be part of the structure of the human mind (as in Kant's theory of transcendental idealism), or they may be said to exist independently of the mind (as in Plato's theory of Forms).

The extent to which this innate human knowledge is emphasized over experience as a means to acquire knowledge varies from rationalist to rationalist. Some hold that knowledge of any kind can only be gained a priori,[citation needed] while others claim that some knowledge can also be gained a posteriori.[citation needed] Consequently, the borderline between rationalist epistemologies and others can be vague.

Constructivism

Constructivism is a view in philosophy according to which all knowledge is "constructed", in as much as it is contingent on convention, human perception, and social experience.[citation needed] Constructivism proposes new definitions for knowledge and truth that form a new paradigm, based on inter-subjectivity instead of classical objectivity, and on viability instead of truth. Piagetian constructivism, however, believes in objectivity, as constructs can be validated through experimentation. The constructivist point of view is pragmatic; as Vico said: "the truth is to have made it".

It originated in sociology under the term "social constructionism" and has been given the name "constructivism" when referring to philosophical epistemology, though "constructionism" and "constructivism" are often used interchangeably.[citation needed] Constructivism has also emerged in the field of international relations, where the writings of Alexander Wendt are most popular. Describing international reality as marked by "anarchy", he says, "anarchy is what states make of it."

Objectivism

Objectivism holds that knowledge of reality is attained by perceptual observation and a volitional process of reason based on those observations. Ayn Rand explains this process in her theory of concept-formation.

"According to Objectivism, concepts “represent classifications of observed existents according to their relationships to other observed existents.” To form a concept, one mentally isolates a group of concretes (of distinct perceptual units), on the basis of observed similarities which distinguish them from all other known concretes (similarity is “the relationship between two or more existents which possess the same characteristic(s), but in different measure or degree”); then, by a process of omitting the particular measurements of these concretes, one integrates them into a single new mental unit: the concept, which subsumes all concretes of this kind (a potentially unlimited number). The integration is completed and retained by the selection of a perceptual symbol (a word) to designate it. “A concept is a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted.”"[10]

What do people know?

The last question that will be dealt with is the question of what people know. At the heart of this area of study is skepticism, with many approaches trying to disprove some particular form of it.

Skepticism

Skepticism is related to the question of whether certain knowledge is possible. Skeptics argue that the belief in something does not necessarily justify an assertion of knowledge of it. In this, skeptics oppose foundationalism, which states that there have to be some basic beliefs that are justified without reference to others. The skeptical response to this can take several approaches. First, claiming that "basic beliefs" must exist amounts to the logical fallacy of argument from ignorance combined with the slippery slope. While a foundationalist would use the Münchhausen Trilemma as a justification for demanding the validity of basic beliefs, a skeptic would see no problem with admitting the result.

Developments from skepticism

Fallibilism

For most of philosophical history, "knowledge" was taken to mean belief that was true and justified to an absolute certainty.[citation needed] Early in the 20th century, however, the notion that belief had to be justified as such to count as knowledge lost favour. Fallibilism is the view that knowing something does not entail certainty regarding it.

Practical applications

Far from being purely academic, the study of epistemology is useful for a great many applications. It is particularly commonly employed in issues of law where proof of guilt or innocence may be required, or when it must be determined whether a person knew a particular fact before taking a specific action (e.g., whether an action was premeditated).

Other common applications of epistemology include:

• Mathematics and science
• History and archaeology
• Medicine (diagnosis of disease)
• Product testing (How can we know that the product will not fail?)
• Intelligence (information) gathering
• Religion and Apologetics
• Cognitive Science
• Artificial Intelligence
• Psychology
• Linguistics
• Literature
• Philosophy
• Knowledge Management
• Testimony
• Sociology
• Cultural Anthropology (Do different cultures have different systems of knowledge?)
• Austrian School of economics (in the form of praxeology, the discovery of economic laws valid for all human action)

Nihilism

Nihilism (from the Latin nihil, nothing) is a philosophical position that argues that existence is without objective meaning, purpose, or intrinsic value. Nihilists generally assert that objective morality does not exist, and that no action is logically preferable to any other in terms of moral value. Nihilists who argue that there is no objective morality may claim that existence has no intrinsic higher meaning or goal. They may also claim that there is no reasonable proof or argument for the existence of a higher ruler or creator, or posit that even if a higher ruler or creator exists, humanity has no moral obligation to worship them.

The term nihilism is sometimes used synonymously with anomie to denote a general mood of despair at the pointlessness of existence.[1] Movements such as Dada, Futurism,[2] and deconstructionism,[3] among others, have been identified by commentators as "nihilistic" at various times in various contexts. Often this means or is meant to imply that the beliefs of the accuser are more substantial or truthful, whereas the beliefs of the accused are nihilistic, and thereby comparatively amount to nothing (or are simply claimed to be destructively amoralistic).

Nihilism is also a characteristic that has been ascribed to time periods: for example, Jean Baudrillard and others have called postmodernity a nihilistic epoch,[4] and some Christian theologians and figures of religious authority have asserted that postmodernity[5] and many aspects of modernity[3] represent the rejection of God, and therefore are nihilistic.

History

Though the term nihilism was first popularized by the novelist Ivan Turgenev, it was first introduced into philosophical discourse by Friedrich Heinrich Jacobi (1743-1819), who used it to characterize rationalism, and in particular Immanuel Kant's "critical" philosophy. Jacobi's aim was a reductio ad absurdum: all rationalism (philosophy as criticism) reduces to nihilism, and thus should be avoided and replaced with a return to some type of faith and revelation. A related concept is fideism.

Nihilism is often associated with the philosopher Friedrich Nietzsche, whose perspectivist view ("in so far as the word 'knowledge' has any meaning, the world is knowable; but it is interpretable otherwise, it has no meaning behind it, but countless meanings" The Will to Power, trans. Walter Kaufmann) accorded with certain aspects of one reading of the position; the modern definition does not apply to him.[6] Still, Nietzsche could be accurately categorized as a nihilist in the descriptive sense that he noted the "death of God" and the atrophy of traditional absolutist morality in his time. However, he never advocated nihilism as a practical mode of living and was typically quite critical of what he described as the more dangerous nihilism, the rejection of the material world in favor of a nonexistent "heaven".[7][6] His later work displays a preoccupation with nihilism.

Nietzsche characterized nihilism as emptying the world, and especially human existence, of meaning, purpose, comprehensible truth, or essential value. He hints that nihilism can become a false belief when it leads individuals to discard any hope of meaning in the world and thus to invent some compensatory alternate measure of significance. Nietzsche used the phrase "Christians and other nihilists", which is consistent with Christianity in general as Nietzsche describes nihilism; as nihilism is now commonly construed, however, Christian philosophy is its opposite. Another prominent philosopher who has written on the subject is Martin Heidegger, who argued that "[the term] nihilism has a very specific meaning. What remains unquestioned and forgotten in metaphysics is being; and hence, it is nihilistic."[8]

Nietzsche

In most contexts, Nietzsche defined the term as any philosophy that results in an apathy toward life and a poisoning of the human soul—and opposed it vehemently. Nietzsche's deep concern with nihilism was part of his intense reaction to Schopenhauer's doctrine of the denial of the will. Nietzsche describes it as "the will to nothingness" or, more specifically:

A nihilist is a man who judges of the world as it is that it ought NOT to be, and of the world as it ought to be that it does not exist. According to this view, our existence (action, suffering, willing, feeling) has no meaning: the pathos of 'in vain' is the nihilists' pathos — at the same time, as pathos, an inconsistency on the part of the nihilists.

– Friedrich Nietzsche, The Will to Power, section 585, translated by Walter Kaufmann

Stanley Rosen identifies Nietzsche's equation of nihilism with "the situation which obtains when 'everything is permitted.'"[9] Nietzsche asserts that this nihilism is a result of valuing "higher", "divine" or "meta-physical" things (such as God), that do not in turn value "base", "human" or "earthly" things. But a person who rejects God and the divine may still retain the belief that all "base", "earthly", or "human" ideas are still valueless because they were considered so in the previous belief system (such as a Christian who becomes a communist and believes fully in the party structure and leader). In this interpretation, any form of idealism, after being rejected by the idealist, leads to nihilism. Moreover, this is the source of "inconsistency on the part of the nihilists". The nihilist continues to believe that only "higher" values and truths are worthy of being called such, but rejects the idea that they exist. Because of this rejection, all ideas described as true or valuable are rejected by the nihilist as impossible because they do not meet the previously established standards.

In this sense, it is the philosophical equivalent of the Russian political movement of the same name: the leap beyond skepticism, the desire to destroy meaning, knowledge, and value. To Nietzsche, it was irrational because the human soul thrives on value. Nihilism, then, was in a sense like suicide and mass murder all at once. He considered faith in the categories of reason, seeking either to overcome or to ignore nature, to be the cause of such nihilism. "We have measured the value of the world according to categories that refer to a purely fictitious world".[10] He saw this philosophy as present in Christianity (which he described as "slave morality"), Buddhism, morality, asceticism, and any excessively skeptical philosophy.

As the first philosopher to study nihilism extensively, however, Nietzsche was also quite influenced by its ideas. Nietzsche's complex relationship with nihilism can be seen in his statement that "I praise, I do not reproach, [nihilism's] arrival. I believe it is one of the greatest crises, a moment of the deepest self-reflection of humanity. Whether man recovers from it, whether he becomes master of this crisis, is a question of his strength!"[11] While this may appear to imply his allegiance to the nihilist viewpoint, it would be more accurate to say that Nietzsche saw the coming of nihilism as valuable in the long term (as well as ironically acknowledging that nihilism, since it exists in the actual world, has more gravity than categories that refer to a purely fictitious world). According to Nietzsche, it is only once nihilism is overcome that a culture can have a true foundation upon which to thrive. He wished to hasten its coming only so that he could also hasten its ultimate departure.[6]

Nietzsche's philosophy also shares with nihilism a rejection of any perfect source of absolute, universal and transcendent values.[7] Still, he did not consider all values of equal worth. Recognizing the chaos of nihilism, he advocated a philosophy that willfully transcends it. Furthermore, his positive attitude towards truth as a vehicle of faith and belief distinguishes him from the extreme pessimism that nihilism is often associated with.[6]

'To the clean are all things clean' — thus say the people. I, however, say unto you: To the swine all things become swinish! Therefore preach the visionaries and bowed-heads (whose hearts are also bowed down): 'The world itself is a filthy monster.' For these are all unclean spirits; especially those, however, who have no peace or rest, unless they see the world FROM THE BACKSIDE — the backworldsmen! TO THOSE do I say it to the face, although it sound unpleasantly: the world resembleth man, in that it hath a backside, — SO MUCH is true! There is in the world much filth: SO MUCH is true! But the world itself is not therefore a filthy monster!

– Friedrich Nietzsche, Thus Spoke Zarathustra

A major cause of Nietzsche's continued association with nihilism is his famous proclamation that "God is dead." This is Nietzsche's way of saying that the idea of God is no longer capable of acting as a source of any moral code or teleology. God is dead, then, in the sense that his existence is now irrelevant to the bulk of humanity. "And we," writes Nietzsche in The Gay Science, "have killed him." Alternatively, some have interpreted Nietzsche's comment to be a statement of faith that the world has no rational order. Nietzsche also believed that, even though he thought Christian morality was nihilistic, without God humanity is left with no epistemological or moral base from which to derive absolute beliefs. Thus, even though nihilism has been a threat in the past, through Christianity, Platonism, and various political movements that aim toward a distant utopian future (any philosophy that devalues the world around us by privileging some other or future world necessarily devalues human life), Nietzsche tells us it is also a threat for humanity's future. This warning can also be taken as a polemic against 19th and 20th century scientism.

He advocated a remedy for nihilism's destructive effects and a hope for humanity's future in the form of the Übermensch (English: overman or superman), a position especially apparent in his works Thus Spoke Zarathustra and The Antichrist. The Übermensch is an exercise of action and life: one must give value to one's existence by behaving as if that very existence were a work of art. Nietzsche believed that the Übermensch "exercise" would be a necessity for human survival in the post-religious era. Another part of Nietzsche's remedy for nihilism is a revaluation of morals: he hoped that we would be able to discard the old morality of equality and servitude and adopt a new code, turning Judeo-Christian morality on its head. Excess, carelessness, callousness, and sin, then, are not the damning acts of a person with no regard for their salvation, nor that which plunges a society toward decadence and decline, but the signifiers of a soul already withering and the signs that a society is in decline. The only true sin to Nietzsche is that which goes against a human nature aimed at the expression and venting of one's power over oneself. Virtue, likewise, is not to act according to what has been commanded, but to contribute to all that betters a human soul.

He attempts to reintroduce what he calls a master morality, which values personal excellence over forced compassion and creative acts of will over the herd instinct, a moral outlook he attributes to the ancient Greeks. The Christian moral ideals developed in opposition to this master morality, he says, as the reversal of the value system of the elite social class due to the oppressed class' resentment of their Roman masters. Nietzsche, however, did not believe that humans should adopt master morality as the be-all-end-all code of behavior; he believed that the revaluation of morals would correct the inconsistencies in both master and slave morality. He held simply that master morality was preferable to slave morality, although this is debatable: Walter Kaufmann, for one, disagrees that Nietzsche actually preferred master morality to slave morality. Nietzsche certainly gives slave morality a much harder time, but this is partly because he believes that slave morality is modern society's more imminent danger. The Antichrist had been meant as the first book in a planned four-book series, "Revaluation of All Values", which might have made his views more explicit, but Nietzsche was afflicted by a mental collapse that rendered him unable to write the later three books.

Postmodernism

Postmodern and poststructuralist thought deny the very grounds on which Western cultures have based their 'truths': absolute knowledge and meaning, a 'decentralization' of authorship, the accumulation of positive knowledge, historical progress, and the ideals of humanism and the Enlightenment. Jacques Derrida, whose deconstruction is perhaps most commonly labeled nihilistic, did not himself make the nihilistic move that others have claimed. Derridean deconstructionists argue that this approach rather frees texts, individuals, or organisations from a restrictive truth, and that deconstruction opens up the possibility of other ways of being.[12] Gayatri Chakravorty Spivak, for example, uses deconstruction to create an ethics of opening up Western scholarship to the voice of the subaltern and to philosophies outside the canon of Western texts.[13] Derrida himself built a philosophy based upon a 'responsibility to the other'.[14] Deconstruction can thus be seen not as a denial of truth, but as a denial of our ability to know truth: it makes an epistemological claim, where nihilism's claim is ontological.

Lyotard argues that, rather than relying on an objective truth or method to prove their claims, philosophers legitimize their truths by reference to a story about the world which is inseparable from the age and system the stories belong to, referred to by Lyotard as meta-narratives. He then goes on to define the postmodern condition as one characterized by a rejection both of these meta-narratives and of the process of legitimation by meta-narratives. "In lieu of meta-narratives we have created new language-games in order to legitimize our claims which rely on changing relationships and mutable truths, none of which is privileged over the other to speak to ultimate truth." This concept of the instability of truth and meaning leads in the direction of nihilism, though Lyotard stops short of embracing the latter.

Postmodern theorist Jean Baudrillard wrote briefly of nihilism from the postmodern viewpoint in Simulacra and Simulation, dealing mainly with interpretations of the real world versus the simulations of which the real world is composed. The use of meaning was an important subject in Baudrillard's discussion of nihilism:

The apocalypse is finished, today it is the precession of the neutral, of forms of the neutral and of indifference…all that remains, is the fascination for desertlike and indifferent forms, for the very operation of the system that annihilates us. Now, fascination (in contrast to seduction, which was attached to appearances, and to dialectical reason, which was attached to meaning) is a nihilistic passion par excellence, it is the passion proper to the mode of disappearance. We are fascinated by all forms of disappearance, of our disappearance. Melancholic and fascinated, such is our general situation in an era of involuntary transparency.

– Jean Baudrillard, Simulacra and Simulation, "On Nihilism", trans. 1995


Cultural manifestations

In art

In art, movements such as surrealism and cubism have been criticised for being nihilistic, while others, like Dada and Situationism, openly embraced it. The Situationist International (1957-1972) is a good example of a political, social, and artistic movement deeply rooted in nihilistic views and ideas. The Situationists abolished the earlier concepts of both Dada and surrealism by attempting to deconstruct the whole notion of 'art' as a separate form, subject or entity. More generally, modern art has been criticised as nihilistic for not being representational, as in the Nazi party's Degenerate Art exhibit. In some Stalinist regimes, modern art was likewise seen as degenerate, and official rules for "aesthetic realism" were established to halt its public and artistic influence.

Literature and music also deal with nihilism thematically, especially contemporary literature and music, wherein the uncertainty following modernism's demise is explored in detail. The character Rorschach, from Alan Moore's graphic novel Watchmen, is a borderline nihilist who says, "We are born to scrawl our own designs upon this morally blank world", observing that existence "has no pattern, save what we imagine after staring at it for too long"; yet Rorschach abides by a moral absolutism, as reflected in his journal.

Dada

The term Dada was first used during World War I, an event that precipitated the movement, which lasted from approximately 1916 to 1923. The Dada movement began in the old town of Zürich, Switzerland, known as the "Niederdorf" or "Niederdörfli". The Dadaists claimed that Dada was not an art movement but an anti-art movement, sometimes using found objects in a manner similar to found poetry and labeling them art, thus undermining ideas of what art is and what it can be. The "anti-art" drive is thought to have stemmed from a post-war emptiness that lacked passion or meaning in life. Dadaists sometimes paid attention to aesthetic guidelines only in order to avoid them, attempting to render their works devoid of meaning and aesthetic value. This tendency toward the devaluation of art has led many to claim that Dada was an essentially nihilist movement, a destruction without creation: war and destruction had washed away people's attachment to creation and the aesthetic.

In film

Perhaps the most commonly referenced portrayal of nihilism in contemporary film is the 1999 adaptation of the novel Fight Club, in which the unnamed narrator's disillusionment with the search for meaning in a consumerist, emasculated society lets the antagonist, Tyler Durden, win him over to a philosophy of antipathy, self-mutilation, and outright animosity towards life. Durden's nihilism is blurred, however, by the existentialist flavor of his rebellion against society. His credo that "it is only after we have lost everything that we are free to do anything" reflects a Sartrean insistence on the infinite responsibility of free will, while his desire for common men to rise up and overthrow the shallow values of society is reminiscent of Nietzsche's discussion of master-slave morality.

In the 1993 movie In the Line of Fire, John Malkovich's character, a would-be presidential assassin, espouses an outlook on life that could be seen as nihilistic during a telephone conversation with a Secret Service agent played by Clint Eastwood: he describes life and death as random, meaningless, and lacking any intrinsic justice, and gives his motive for the assassination attempt as being "to punctuate the dreariness". A more fatalist treatment of nihilism can be seen in the later I ♥ Huckabees, which draws on nihilism, among other theories, to develop the film's take on life in general. A similar use of nihilism as a study in futility and meaninglessness can be seen in Jim Jarmusch's 2005 film Broken Flowers.

The 1998 movie The Big Lebowski, written and directed by Joel and Ethan Coen, uses several nihilist characters as comic narrative devices without treating nihilism as a serious thematic concern. Three black-clad men with German accents confront the protagonist, "The Dude" (Lebowski), declaring: "We are Nihilists, Lebowski. We believe in nothing. Yeah, nothing." Later, upon being told that a man sleeping on a chair floating in a pool, a bottle of Jack Daniel's beside him, is a nihilist, The Dude responds, "Oh, that must be exhausting." The 2008 Batman film The Dark Knight portrays the Joker as a moral nihilist, although this philosophy is greatly diluted by his schizophrenic and destructive nature.

In music

Late 1970s punk rock and early 1980s UK 82-style hardcore punk songs often express themes of nihilism. The UK band the Sex Pistols penned a song entitled "No Future", which became a slogan for unemployed and disaffected youth during the late 1970s.[15] A 2007 article in The Guardian noted that "...in the summer of 1977, ...punk's nihilistic swagger was the most thrilling thing in England".[16]

Probability

Probability is the likelihood or chance that something is the case or will happen. Probability theory is used extensively in areas such as statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.

Interpretations

The word probability does not have a consistent direct definition. In fact, there are two broad categories of probability interpretations:

1. Frequentists talk about probabilities only when dealing with well defined random experiments. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes.[1]

2. Bayesians, however, assign probabilities to any statement whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence.
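
A minimal Python sketch of the frequentist reading, assuming only the standard library: simulate repeated flips of a fair coin and watch the relative frequency of heads settle toward 0.5 in the long run. A Bayesian would instead start from a prior degree of belief and update it as flips are observed.

import random

random.seed(42)  # fixed seed so the illustration is reproducible

def relative_frequency(trials):
    """Relative frequency of heads in `trials` simulated fair-coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))   # tends toward 0.5 as n grows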

Prehistory and Etymology

Probability has an interesting etymology: its meaning today is almost the opposite of that of the word from which it originated. Before the seventeenth century, legal evidence in Europe was given greater weight if the person testifying had "probity"; "empirical evidence" was barely a concept. Probity was a measure of authority, so evidence came from authority: a noble person had probity. Yet today, probability is the very measure of the weight of empirical evidence in science, arrived at from inductive or statistical inference.[2][3]

History

Further information: Statistics

The scientific study of probability is a modern development. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems arose only much later.

According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."[4]


Aside from some elementary considerations made by Girolamo Cardano in the 16th century, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability for a history of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve y = φ(x), x being any error and y its probability, and laid down three properties of this curve:

1. it is symmetric as to the y-axis;
2. the x-axis is an asymptote, the probability of the error being 0;
3. the area enclosed is 1, it being certain that an error exists.

He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

The method of least squares is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error

φ(x) = c·e^(−h²x²),

h being a constant depending on precision of observation, and c a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.

On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).

Mathematical treatment

In mathematics, the probability of an event A is represented by a real number in the range from 0 to 1 and written as P(A), p(A) or Pr(A). An impossible event has a probability of 0, and a certain event has a probability of 1. However, the converses are not always true: probability-0 events are not always impossible, nor are probability-1 events certain. The rather subtle distinction between "certain" and "probability 1" is treated at greater length in the article on "almost surely".

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring); its probability is given by P(not A) = 1 − P(A). As an example, the chance of not rolling a six on a six-sided die is 1 − (chance of rolling a six) = 1 − 1/6 = 5/6. See Complementary event for a more complete treatment.

If both the events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A ∩ B). If two events A and B are independent, then the joint probability is

P(A and B) = P(A ∩ B) = P(A) P(B);

for example, if two coins are flipped, the chance of both being heads is 1/2 × 1/2 = 1/4.
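
As a quick check of the independence rule, a minimal Python sketch (standard library only, names illustrative) that enumerates the four equally likely outcomes of two fair coin flips:

from itertools import product

# All equally likely outcomes of two fair coin flips: HH, HT, TH, TT
outcomes = list(product(["H", "T"], repeat=2))
both_heads = [o for o in outcomes if o == ("H", "H")]

# 1 favourable outcome out of 4, i.e. 0.25 = (1/2) * (1/2)
print(len(both_heads) / len(outcomes))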

If either event A or event B or both events occur on a single performance of an experiment, this is called the union of the events A and B, denoted as P(A ∪ B). If two events are mutually exclusive, then the probability of either occurring is

P(A or B) = P(A ∪ B) = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is

P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.

If the events are not mutually exclusive, then

P(A or B) = P(A) + P(B) − P(A and B).

For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is 13/52 + 12/52 − 3/52 = 22/52 = 11/26, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.
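
The inclusion-exclusion rule in the card example can be verified by brute force. A minimal Python sketch, assuming nothing beyond the standard library (the rank and suit labels are illustrative):

from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))                      # 52 (rank, suit) pairs

hearts = {c for c in deck if c[1] == "hearts"}          # 13 cards
faces = {c for c in deck if c[0] in ("J", "Q", "K")}    # 12 cards, 3 of them hearts

# Counting the union directly counts the 3 heart face cards only once
print(Fraction(len(hearts | faces), len(deck)))         # 11/26, i.e. 22/52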

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B". It is defined by

P(A|B) = P(A ∩ B) / P(B).

If P(B) = 0 then P(A|B) is undefined.
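
A small Python sketch of the definition, enumerating two fair dice; the events chosen here are illustrative: A is "the total is 8" and B is "the first die shows a 5".

from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))    # 36 equally likely (die1, die2) pairs
A = {r for r in rolls if sum(r) == 8}           # total is 8
B = {r for r in rolls if r[0] == 5}             # first die shows a 5

# P(A|B) = P(A and B) / P(B); with equally likely outcomes this reduces
# to counting: |A intersect B| / |B|
print(Fraction(len(A & B), len(B)))             # 1/6, since only (5, 3) qualifies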

Summary of probabilities

Event        Probability
A            P(A) ∈ [0, 1]
not A        P(not A) = 1 − P(A)
A or B       P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
A and B      P(A ∩ B) = P(A|B) P(B)
A given B    P(A|B) = P(A ∩ B) / P(B)

Theory

Like other theories, the theory of probability is a representation of probabilistic concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are then interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see probability space), sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster-Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as they are usually understood.

Applications

Two major applications of probability theory in everyday life are in risk assessment and in trade on commodity markets. Governments typically apply probabilistic methods in environmental regulation where it is called "pathway analysis", often measuring well-being using methods that are stochastic in nature, and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole. It is not correct to say that statistics are involved in the modelling itself, as typically the assessments of risk are one-time and thus require more fundamental probability models, e.g. "the probability of another 9/11". A law of small numbers tends to apply to all such choices and perception of the effect of such choices, which makes probability measures a political matter.

A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which has ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more or less likely sends prices up or down and signals that opinion to other traders. Accordingly, the probabilities are neither assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.

It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, it may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a democracy.

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, utilize reliability theory in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's warranty.


Relation to randomness

In a deterministic universe, based on Newtonian concepts, there is no probability if all conditions are known. In the case of a roulette wheel, if the force of the hand and the period of that force are known, then the number on which the ball will stop would be a certainty. Of course, this also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analysing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, about 6.022 × 10^23) that only a statistical description of its properties is feasible.

A revolutionary discovery of 20th-century physics was the random character of all physical processes that occur at microscopic scales and are governed by the laws of quantum mechanics. The wave function itself evolves deterministically as long as no observation is made; but, according to the prevailing Copenhagen interpretation, the randomness caused by the wave function collapsing when an observation is made is fundamental. This means that probability theory is required to describe nature. Others never came to terms with the loss of determinism: Albert Einstein famously remarked in a letter to Max Born, "Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt" ("I am convinced that God does not play dice"). Although alternative viewpoints exist, such as that of quantum decoherence being the cause of an apparent random collapse, at present there is a firm consensus among physicists that probability theory is necessary to describe quantum phenomena.

Uncertainty

Uncertainty is a term used in subtly different ways in a number of fields, including philosophy, statistics, economics, finance, insurance, psychology, sociology, engineering, and information science. It applies to predictions of future events, to physical measurements already made, or to the unknown.

Concepts

In his seminal work Risk, Uncertainty, and Profit,[1] University of Chicago economist Frank Knight (1921) established the important distinction between risk and uncertainty:

"Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all."

Although the terms are used in various ways among the general public, many specialists in decision theory, statistics and other quantitative fields have defined uncertainty and risk more specifically. Doug Hubbard defines uncertainty and risk as:[2]

1. Uncertainty: the lack of certainty; a state of having limited knowledge, where it is impossible to exactly describe the existing state or future outcome, or where more than one outcome is possible.

2. Measurement of uncertainty: a set of possible states or outcomes with probabilities assigned to each possible state or outcome; this also includes the application of a probability density function to continuous variables.

3. Risk: a state of uncertainty where some possible outcomes have an undesired effect or significant loss.

4. Measurement of risk: a set of measured uncertainties where some possible outcomes are losses, together with the magnitudes of those losses; this also includes loss functions over continuous variables.

There are also other taxonomies of uncertainty and decision-making that take a broader view of uncertainty and of how it should be approached from an ethics perspective.[3]

For example, if you do not know whether it will rain tomorrow, then you are in a state of uncertainty. If you apply probabilities to the possible outcomes using weather forecasts, or even just a calibrated probability assessment, you have quantified the uncertainty. Suppose you quantify your uncertainty as a 90% chance of sunshine. If you are planning a major, costly outdoor event for tomorrow, then you carry risk, since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if this is a business event and you would lose $100,000 if it rains, then you have quantified the risk (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.

Some may represent the risk in this example as the "expected opportunity loss" (EOL), the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful only if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add on its other operating costs and profit. Since many people are willing to buy insurance for many reasons, the EOL alone is clearly not the perceived value of avoiding the risk.
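
A minimal Python sketch of the arithmetic above; the 30% loading used to turn the EOL into an illustrative premium is a made-up figure, not an actuarial rule.

p_rain = 0.10
loss_if_rain = 100_000

# Expected opportunity loss: chance of the loss times the amount of the loss
eol = p_rain * loss_if_rain
print(f"expected opportunity loss: ${eol:,.0f}")    # $10,000

# An insurer would treat the EOL as a floor, then add operating costs
# and profit; the 30% loading here is purely hypothetical
premium = eol * 1.30
print(f"illustrative premium: ${premium:,.0f}")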

Quantitative uses of the terms uncertainty and risk are fairly consistent across fields such as probability theory, actuarial science, and information theory. Some fields also create new terms without substantially changing the definitions of uncertainty or risk; for example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the terms, usage varies widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, as with expectations, threats, etc.

Vagueness or ambiguity are sometimes described as "second-order uncertainty", where there is uncertainty even about the definitions of the uncertain states or outcomes. The difference here is that this uncertainty concerns human definitions and concepts, not an objective fact of nature. It has been argued that ambiguity, however, is always avoidable, while uncertainty (of the "first-order" kind) is not necessarily so.[4]


Uncertainty may be purely a consequence of a lack of knowledge of obtainable facts. That is, you may be uncertain about whether a new rocket design will work, but this uncertainty can be removed with further analysis and experimentation. At the subatomic level, however, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg uncertainty principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not be mere ignorance of potentially obtainable facts: there may be no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature, or whether there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.

Measurements

In metrology, physics, and engineering, the uncertainty or margin of error of a measurement is stated by giving a range of values which are likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:

• measured value ± uncertainty
• measured value(uncertainty)

The latter "concise notation" is used for example by IUPAC in stating the atomic mass of elements. There, the uncertainty applies only to the least significant figure of x. For instance, 1.00794(7) stands for 1.00794 ± 0.00007.

Often, the uncertainty of a measurement is found by repeating the measurement enough times to get a good estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements.

When the uncertainty represents the standard error of the measurement, then about 68.2% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.8% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.
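
A minimal Python sketch of the procedure just described: take repeated readings, use the sample standard deviation as the uncertainty of a single value, and divide by the square root of the number of measurements for the uncertainty of the mean. The readings below are made-up numbers chosen to resemble the atomic-mass example.

import math
import statistics

readings = [1.00791, 1.00798, 1.00795, 1.00790, 1.00796]   # hypothetical repeats

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)            # uncertainty of any single reading
sem = stdev / math.sqrt(len(readings))        # standard error of the mean

print(f"single reading: ± {stdev:.5f}")
print(f"mean: {mean:.5f} ± {sem:.5f} (one sigma, ~68% coverage)")
print(f"mean: {mean:.5f} ± {2 * sem:.5f} (two sigma, ~95% coverage)")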

In this context, uncertainty depends on both the accuracy and the precision of the measurement instrument: the lower an instrument's accuracy and precision, the larger the measurement uncertainty. Note that precision is often determined as the standard deviation of repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, which makes it evident that the uncertainty does not depend on instrumental precision alone.

Applications

• Investing in financial markets such as the stock market.
• Uncertainty is used in engineering notation when talking about significant figures, or the possible error involved in measuring things such as distance.
• Uncertainty is designed into games, most notably in gambling, where chance is central to play.
• In scientific modelling, in which the prediction of future events should be understood to have a range of expected values.


• In physics in certain situations, uncertainty has been elevated into a principle, the uncertainty principle.

• In weather forecasting it is now commonplace to include data on the degree of uncertainty in a weather forecast.

• Uncertainty is often an important factor in economics. According to economist Frank Knight, it is different from risk, where there is a specific probability assigned to each outcome (as when flipping a fair coin). Uncertainty involves a situation that has unknown probabilities, while the estimated probabilities of possible outcomes need not add to unity.

• In risk assessment and risk management.[5]
• In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error. In daily life, measurement uncertainty is often implicit ("He is 6 feet tall", give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary. The expected measurement uncertainty of many measuring instruments (scales, oscilloscopes, force gauges, rulers, thermometers, etc.) is often stated in the manufacturer's specification.

The most commonly used procedure for calculating measurement uncertainty is described in the Guide to the Expression of Uncertainty in Measurement (often referred to as "the GUM") published by ISO. Derived works include, for example, the National Institute of Standards and Technology (NIST) publication NIST Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/CITAC publication "Uncertainty in measurements" (available at the Eurachem homepage). The uncertainty of the result of a measurement generally consists of several components. The components are regarded as random variables, and may be grouped into two categories according to the method used to estimate their numerical values:

• Type A: those which are evaluated by statistical methods;
• Type B: those which are evaluated by other means, e.g. by assigning a probability distribution.

By propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation.
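
A minimal Python sketch of this propagation step for a linear measurement model, where each input contributes (sensitivity × standard uncertainty)² to the combined variance. The component values and sensitivities below are hypothetical.

import math

# (standard uncertainty, sensitivity coefficient) per input component,
# whether evaluated by Type A (statistics) or Type B (other means)
components = [
    (0.03, 1.0),   # e.g. repeatability of repeated observations (Type A)
    (0.05, 1.0),   # e.g. calibration certificate of a reference (Type B)
    (0.02, 2.0),   # e.g. temperature effect with sensitivity 2.0 (Type B)
]

# Propagate the variances and take the square root of the result
combined_variance = sum((u * c) ** 2 for u, c in components)
combined_uncertainty = math.sqrt(combined_variance)
print(f"combined standard uncertainty = {combined_uncertainty:.4f}")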

• Uncertainty has been a common theme in art, both as a thematic device (see, for example, the indecision of Hamlet), and as a quandary for the artist (such as Martin Creed's difficulty with deciding what artworks to make).
