Blog Archive

Tuesday, May 16, 2023

05-15-2023-1837 - epistemology variety concepts, justification, absolute relative terms, infinite regress, conceptual containment, reality truth existence, sensibilities, sphere shift, incorporeality, ontological status, subjectivism, perception dependency, subjectivism difficult to distinguish between knowledge, opinion, and subjective knowledge, personal deduction, deductive, logic, reason, induction, unity of knowledge, principle, consilience, concordance, convergence, unobservable, impalpable, instrumentalism, primary secondary qualities, phenomena, noumena, functional fixedness, materialism, monism, solipsism, imprecise language, fuzzy concept, closed concept, deontological, ordinary language, methodological differences, concretization, casery, exemplification, utilization, individually necessary and jointly sufficient, analysis of knowledge, theory of knowledge, justified true belief, knowledge, methodism, particularism, abstract, intuition, concrete, specific cases, resemblance, certainty, decolonization of knowledge, hegemony, western knowledge systems, decolonial scholarship, sociohistorical circumstances knowledge, background, coherency, rectitude, correction, linguistic ambiguity, foundationalism, basic beliefs conception, basic beliefs, axioms, system, epistemological, categories of beliefs, justified, doxastic, virtue, object, subject, pattern, chaos, visual motif, methodological naturalism, primary–secondary quality distinction, practical knowledge in the form of skills, individually necessary and jointly sufficient, eristic, indeterminacy, etc. (draft)

https://en.wikipedia.org/wiki/Category:Concepts_in_epistemology

The distinction between absolute and relative terms was introduced by Peter Unger in his 1971 paper A Defense of Skepticism and differentiates between terms that, in their most literal sense, don't admit of degrees (absolute terms) and those that do (relative terms).[1] According to his account, the term "flat", for example, is an absolute term because a surface is either perfectly (or absolutely) flat or isn't flat at all. The terms "bumpy" or "curved", on the other hand, are relative terms because there is no such thing as "absolute bumpiness" or "absolute curvedness" (although in analytic geometry curvedness is quantified). A bumpy surface can always be made bumpier. A truly flat surface, however, can never be made flatter. Colloquially, he acknowledges, we do say things like "surface A is flatter than surface B", but this is just a shorter way of saying "surface A is closer to being flat than surface B". This paraphrasing, however, doesn't work for relative terms. Another important aspect of absolute terms, one that motivated this choice of terminology, is that they can always be modified by the term "absolutely". For example, it is quite natural to say "this surface is absolutely flat", but it would be very strange and barely even meaningful to say "this surface is absolutely bumpy". 

https://en.wikipedia.org/wiki/Absolute_and_relative_terms


A priori ("from the earlier") and a posteriori ("from the later") are Latin phrases used in philosophy to distinguish types of knowledge, justification, or argument by their reliance on empirical evidence or experience. A priori knowledge is independent from current experience (e.g., as part of a new study). Examples include mathematics,[i] tautologies, and deduction from pure reason.[ii] A posteriori knowledge depends on empirical evidence. Examples include most fields of science and aspects of personal knowledge.

The terms originate from the analytic methods found in Organon, a collection of works by Aristotle. Prior analytics (a priori) is about deductive logic, which comes from definitions and first principles. Posterior analytics (a posteriori) is about inductive logic, which comes from observational evidence.

Both terms appear in Euclid's Elements and were popularized by Immanuel Kant's Critique of Pure Reason, an influential work in the history of philosophy.[1] Both terms are primarily used as modifiers to the noun "knowledge" (i.e. "a priori knowledge"). A priori can be used to modify other nouns such as "truth". Philosophers may use apriority, apriorist, and aprioricity as nouns referring to the quality of being a priori.[2]

https://en.wikipedia.org/wiki/A_priori_and_a_posteriori

Justification (also called epistemic justification) is the property of belief that qualifies it as knowledge rather than mere opinion. Epistemology is the study of the reasons why someone holds a rationally admissible belief (although the term is also sometimes applied to other propositional attitudes such as doubt).[1] Epistemologists are concerned with various epistemic features of belief, which include the ideas of warrant (a proper justification for holding a belief), knowledge, rationality, and probability, among others.

Debates surrounding epistemic justification often involve the structure of justification, including whether there are foundational justified beliefs or whether mere coherence is sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others.

Justification and knowledge

"Justification" involves the reasons why someone holds a belief that one should hold based on one's current evidence.[1] Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold.

Many philosophers from Plato onward have treated "justified true belief" as constituting knowledge. It is particularly associated with a theory discussed in his dialogues Meno and Theaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end of Theaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of the Gettier problem.[1]

The subject of justification has played a major role in debates about the value of knowledge as "justified true belief". Some contemporary epistemologists, such as Jonathan Kvanvig, assert that justification is not necessary for getting to the truth and avoiding errors. Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismisses the necessity of justification on the grounds that justification is not connected to the truth.

Conceptions of justification

William P. Alston identifies two conceptions of justification.[2]: 15–16  One conception is "deontological" justification, which holds that justification evaluates a person's obligation and responsibility to have only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds to epistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entail that the belief is at least likely to be true. The truth-conducive conception of justification corresponds to epistemic externalism.

Theories of justification

There are several different views as to what constitutes justification, mostly focusing on the question "How sure do we need to be that our beliefs correspond to the actual world?" Different theories of justification require different conditions before a belief can be considered justified. Theories of justification generally draw on other aspects of epistemology, such as the definition of knowledge.

Notable theories of justification include:

  • Foundationalism – Basic beliefs justify other, non-basic beliefs.
  • Epistemic coherentism – Beliefs are justified if they cohere with other beliefs a person holds; each belief is justified insofar as it coheres with the overall system of beliefs.
  • Infinitism – Beliefs are justified by infinite chains of reasons.
  • Foundherentism – Both fallible foundations and coherence are components of justification—proposed by Susan Haack.
  • Internalism and externalism – The believer must be able to justify a belief through internal knowledge (internalism), or outside sources of knowledge can be used to justify a belief (externalism).
  • Reformed epistemology – Beliefs are warranted by proper cognitive function—proposed by Alvin Plantinga.
  • Evidentialism – Beliefs depend solely on the evidence for them.
  • Reliabilism – A belief is justified if it is the result of a reliable process.
  • Infallibilism – Knowledge is incompatible with the possibility of being wrong.
  • Fallibilism – Claims can be accepted even though they cannot be conclusively proven or justified.
  • Non-justificationism – Knowledge is produced by attacking claims and refuting them instead of justifying them.
  • Skepticism – Knowledge is impossible or undecidable.

Criticism of theories of justification

Robert Fogelin claims to detect a suspicious resemblance between the theories of justification and Agrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes of Pyrrhonian skepticism.[3]

William P. Alston criticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition."[2]: 22 

https://en.wikipedia.org/wiki/Justification_(epistemology)

The owl of Athena, a symbol of knowledge in the Western world

Knowledge is a form of awareness or familiarity. It is often understood as awareness of facts or as practical skills, and may also mean familiarity with objects or situations. Knowledge of facts, also called propositional knowledge, is often defined as true belief that is distinct from opinion or guesswork by virtue of justification. While there is wide agreement among philosophers that propositional knowledge is a form of true belief, many controversies in philosophy focus on justification: whether it is needed at all, how to understand it, and whether something else besides it is needed. These controversies intensified due to a series of thought experiments by Edmund Gettier and have provoked various alternative definitions. Some of them deny that justification is necessary and suggest alternative criteria while others accept that justification is an essential aspect and formulate additional requirements.

Knowledge can be produced in many different ways. The most important source of empirical knowledge is perception, which is the usage of the senses. Many theorists also include introspection as a source of knowledge, not of external physical objects, but of one's own mental states. Other sources often discussed include memory, rational intuition, inference, and testimony. According to foundationalism, some of these sources are basic in the sense that they can justify beliefs without depending on other mental states. This claim is rejected by coherentists, who contend that a sufficient degree of coherence among all the mental states of the believer is necessary for knowledge. According to infinitism, an infinite chain of beliefs is needed.

Many different aspects of knowledge are investigated, and it plays a role in various disciplines. It is the primary subject of the field of epistemology, which studies what someone knows, how they come to know it, and what it means to know something. The problem of the value of knowledge concerns the question of why knowledge is more valuable than mere true belief. Philosophical skepticism is the thesis that humans lack any form of knowledge or that knowledge is impossible. Formal epistemology studies, among other things, the rules governing how knowledge and related states behave and in what relations they stand to each other. Science tries to acquire knowledge using the scientific method, which is based on repeatable experimentation, observation, and measurement. Many religions hold that humans should seek knowledge and that God or the divine is the source of knowledge.

Definitions

Numerous definitions of knowledge have been suggested.[1][2][3] Most definitions of knowledge in analytic philosophy recognize three fundamental types. "Knowledge-that", also called propositional knowledge, can be expressed using that-clauses as in "I know that Dave is at home".[4][5][6] "Knowledge-how" (know-how) expresses practical competence, as in "she knows how to swim". Finally, "knowledge by acquaintance" refers to a familiarity with the known object based on previous direct experience.[5][7][8] Analytical philosophers usually aim to identify the essential features of propositional knowledge in their definitions.[9] There is wide, though not universal, agreement among philosophers that knowledge involves a cognitive success or an epistemic contact with reality, like making a discovery, and that propositional knowledge is a form of true belief.[10][11]

Despite the agreement about these general characteristics of knowledge, many deep disagreements remain regarding its exact definition. These disagreements relate to the goals and methods within epistemology and other fields, or to differences concerning the standards of knowledge that people intend to uphold, for example, what degree of certainty is required. One approach is to focus on knowledge's most salient features in order to give a practically useful definition.[12][5] Another is to try to provide a theoretically precise definition by listing the conditions that are individually necessary and jointly sufficient. The term "analysis of knowledge" (or equivalently, "conception of knowledge" or "theory of knowledge") is often used for this approach.[1][13][14] It can be understood in analogy to how chemists analyze a sample by seeking a list of all the chemical elements composing it.[1][15][16] An example of this approach is characterizing knowledge as justified true belief (JTB), which is seen by many philosophers as the standard definition.[4][17] Others seek a common core among diverse forms of knowledge, for example, that they all involve some kind of awareness or that they all belong to a special type of successful performance.[11][18][19][20]

Methodological differences concern whether researchers base their inquiry on abstract and general intuitions or on concrete and specific cases, referred to as methodism and particularism, respectively.[21][22][23] Another source of disagreement is the role of ordinary language in one's inquiry: the weight given to how the term "knowledge" is used in everyday discourse.[6][14] According to Ludwig Wittgenstein, for example, there is no clear-cut definition of knowledge since it is just a cluster of concepts related through family resemblance.[24][25] Different conceptions of the standards of knowledge are also responsible for various disagreements. Some epistemologists, like René Descartes, hold that knowledge demands very high requirements, like certainty, and is therefore quite rare. Others see knowledge as a rather common phenomenon, prevalent in many everyday situations, without excessively high standards.[1][5][26]

In analytic philosophy, knowledge is usually understood as a mental state possessed by an individual person, but the term is sometimes used to refer to a characteristic of a group of people as group knowledge, social knowledge, or collective knowledge.[27][28] In a slightly different sense, it can also mean knowledge stored in documents, as in "knowledge housed in the library"[29][30] or the knowledge base of an expert system.[31][32] The English word knowledge is a broad term. It includes various meanings that some other languages distinguish using several words. For example, Latin uses the words cognitio and scientia for "knowledge" while Spanish uses the words conocer and saber for "to know".[11] In ancient Greek, four important terms for knowledge were used: epistēmē (unchanging theoretical knowledge), technē (expert technical knowledge), mētis (strategic knowledge), and gnōsis (personal intellectual knowledge).[20] Knowledge is often contrasted with ignorance, which is associated with a lack of understanding, education, and true beliefs.[33][34][35] Epistemology, also referred to as the theory of knowledge, is the philosophical discipline studying knowledge. It investigates topics like the nature of knowledge and justification, how knowledge arises, and what value it has. Further issues include the different types of knowledge and the extent to which the beliefs of most people amount to knowledge as well as the limits of what can be known.[36][37][11]

Justified true belief

Knowledge is often defined as justified true belief.

Many philosophers define knowledge as justified true belief (JTB). This definition characterizes knowledge through three essential features: as (1) a belief that is (2) true and (3) justified.[4][17] In the dialogue Theaetetus by the ancient Greek philosopher Plato, Socrates pondered the distinction between knowledge and true belief but rejected the JTB definition of knowledge.[38][39] The most widely accepted feature is truth: one can believe something false but one cannot know something false.[5][6] A few ordinary language philosophers have raised doubts that knowledge is a form of belief based on everyday expressions like "I do not believe that; I know it".[4][5][40] Most theorists reject this distinction and explain such expressions through ambiguities of natural language.[4][5]
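
The three conditions can be restated compactly. In the following sketch, K_s(p) reads "s knows that p", B_s(p) "s believes that p", and J_s(p) "s is justified in believing that p"; the operator notation is an illustrative convention, not from the source:

    \[ K_s(p) \iff p \,\land\, B_s(p) \,\land\, J_s(p) \]

On this reading, each conjunct is individually necessary, and the three together are claimed to be jointly sufficient.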

The main controversy surrounding the JTB definition concerns its third feature: justification.[1][4][41] This component is often included because of the impression that some true beliefs are not forms of knowledge. Specifically, this covers cases of superstition, lucky guesses, or erroneous reasoning. The corresponding beliefs may even be true, but it seems there is more to knowledge than just being right about something.[4][5][14] The JTB definition solves this problem by identifying proper justification as the additional component needed, which is absent in the above-mentioned cases. Many philosophers have understood justification internalistically (internalism): a belief is justified if it is supported by another mental state of the person, such as a perceptual experience, a memory, or a second belief. This mental state has to constitute sufficiently strong evidence or a sufficiently strong reason for the believed proposition. Some modern versions modify the JTB definition by using an externalist conception of justification instead. This means that justification depends not just on factors internal to the subject but also on external factors. According to reliabilist theories of justification, justified beliefs are produced by a reliable process. According to causal theories of justification, justification requires that the believed fact causes the belief. This is the case, for example, when a bird sits in a tree and a person forms a belief about this fact because they see the bird.[1][4][5]

Gettier problem and alternatives

The Gettier problem is motivated by the idea that some justified true beliefs do not amount to knowledge.

The JTB definition came under severe criticism in the 20th century, when Edmund Gettier gave a series of counterexamples.[42] They purport to present concrete cases of justified true beliefs that fail to constitute knowledge. The reason for their failure is usually a form of epistemic luck: the justification is not relevant to the truth.[4][5][41] In a well-known example, there is a country road with many barn facades and only one real barn. The person driving is not aware of this, stops in front of the real barn by a lucky coincidence, and forms the belief that he is in front of a barn. It has been argued that this justified true belief does not constitute knowledge since the person would not have been able to tell the difference without the fortuitous accident.[43][44][45] So even though the belief is justified, it is a lucky coincidence that it is also true.[1]

The responses to these counterexamples have been diverse. According to some, they show that the JTB definition of knowledge is deeply flawed and that a radical reconceptualization of knowledge is necessary, often by denying justification a role.[1] This can happen, for example, by replacing justification with reliability or by understanding knowledge as the manifestation of cognitive virtues. Another approach is to define it in regard to the cognitive role it plays. For example, one role of knowledge is to provide reasons for thinking something or for doing something.[11] Various theorists are diametrically opposed to the radical reconceptualization and either deny that Gettier cases pose problems or they try to solve them by making smaller modifications to how justification is defined. Such approaches result in a minimal modification of the JTB definition.[1]

Between these two extremes, some philosophers have suggested various moderate departures. They agree that the JTB definition includes some correct claims: justified true belief is a necessary condition of knowledge. However, they disagree that it is a sufficient condition. They hold instead that an additional criterion, some feature X, is necessary for knowledge. For this reason, they are often referred to as JTB+X definitions of knowledge.[1][46] A closely related approach speaks not of justification but of warrant and defines warrant as justification together with whatever else is necessary to arrive at knowledge.[4][47]

Many candidates for the fourth feature have been suggested. In this regard, knowledge may be defined as justified true belief that does not depend on any false beliefs, for which no defeaters[a] are present, or which the person would not hold if it were false.[14][45] Such definitions succeed in avoiding many of the original Gettier cases. However, they are often undermined by newly conceived counterexamples.[49] To avoid all possible cases, it may be necessary to find a criterion that excludes all forms of epistemic luck. It has been argued that such a criterion would set the required standards of knowledge very high: the belief has to be infallible to succeed in all cases.[5][50] This would mean that very few of our beliefs amount to knowledge, if any.[5][51][52] For example, Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth.[53] There is still very little consensus in the academic discourse as to which of the proposed modifications or reconceptualizations is correct.[1][54][11]
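
Schematically, these JTB+X proposals keep the three conditions from the sketch above and add a fourth condition X, for instance that no defeaters are present:

    \[ K_s(p) \iff p \,\land\, B_s(p) \,\land\, J_s(p) \,\land\, X_s(p) \]

The open question is then which X rules out epistemic luck without setting the standards of knowledge impossibly high.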

Types

A common distinction among types of knowledge is between propositional knowledge, or knowledge-that, and non-propositional knowledge in the form of practical skills or acquaintance.[5][55][56] The distinctions between these major types are usually drawn based on the linguistic formulations used to express them.[1]

Propositional knowledge

Propositional knowledge, also referred to as descriptive knowledge, is a form of theoretical knowledge about facts, like knowing that "2 + 2 = 4". It is the paradigmatic type of knowledge in analytic philosophy.[4][5][6] Propositional knowledge is propositional in the sense that it involves a relation to a proposition. Since propositions are often expressed through that-clauses, it is also referred to as knowledge-that, as in "Akari knows that kangaroos hop".[5][6][8] In this case, Akari stands in the relation of knowing to the proposition "kangaroos hop". Closely related types of knowledge are know-wh, for example, knowing who is coming to dinner and knowing why they are coming.[5] These expressions are normally understood as types of propositional knowledge since they usually can be paraphrased using a that-clause.[5][6] It is usually held that the capacity for propositional knowledge is exclusive to relatively sophisticated creatures, such as humans. This is based on the claim that advanced intellectual capacities are required to believe a proposition that expresses what the world is like.[57]

Propositional knowledge is either occurrent or dispositional. Occurrent knowledge is knowledge that is active, for example, because a person is currently thinking about it. Dispositional knowledge, on the other hand, is stored in the back of a person's mind without being involved in cognitive processes at the moment. In this regard, it refers to the mere ability to access the relevant information. For example, a person may know for most of their life that cats have whiskers. This knowledge is dispositional most of the time. It becomes occurrent when the person actively thinks about the whiskers of cats. A similar classification is often discussed in relation to beliefs as the difference between occurrent and dispositional beliefs.[6][58][59]

Non-propositional knowledge

Knowing how to ride a bicycle is one form of non-propositional knowledge.

Non-propositional knowledge is knowledge in which no essential relation to a proposition is involved. The two most well-known forms are knowledge-how (know-how or procedural knowledge) and knowledge by acquaintance.[5][6][7] The term "know-how" refers to some form of practical ability or skill. It can be defined as having the corresponding competence.[5][57] Examples include knowing how to ride a bicycle or knowing how to swim. Some of the abilities responsible for know-how may also involve certain forms of knowledge-that, as in knowing how to prove a mathematical theorem, but this is not generally the case.[11] Some forms of practical knowledge do not require a highly developed mind, in contrast to propositional knowledge. In this regard, practical knowledge is more common in the animal kingdom. For example, an ant knows how to walk even though it presumably lacks a mind sufficiently developed to stand in a relation to the corresponding proposition by representing it.[57] Knowledge-how is closely related to tacit knowledge. Tacit knowledge is knowledge that cannot be fully articulated or explained, in contrast to explicit knowledge.[60][61]

Knowledge by acquaintance is familiarity with an individual that results from direct experiential contact with this individual.[5][6][8] This individual can be a person or a regular object. On the linguistic level, it does not require a that-clause and can be expressed using a direct object. So when someone claims that they know Wladimir Klitschko personally, they are expressing that they had a certain kind of contact with him and not that they know a certain fact about him. This is usually understood to mean that it constitutes a relation to a concrete individual and not to a proposition. Knowledge by acquaintance plays a central role in Bertrand Russell's epistemology. He contrasts it with knowledge by description, which is a form of propositional knowledge not based on direct perceptual experience.[62][63][5] However, there is some controversy about whether it is possible to acquire knowledge by acquaintance in its pure non-propositional form. In this regard, some theorists, like Peter D. Klein, have suggested that it can be understood as one type of propositional knowledge that is only expressed in a grammatically different way.[4]

Other distinctions

A priori and a posteriori

The distinction between a priori and a posteriori knowledge came to prominence in Immanuel Kant's philosophy and is often discussed in the academic literature. To which category a knowledge attitude belongs depends on the role of experience in its formation and justification.[6][64][65] To know something a posteriori means to know it on the basis of experience.[66][67] For example, to know that it is currently raining or that the baby is crying belongs to a posteriori knowledge since it is based on some form of experience, like visual or auditory experience.[64] A priori knowledge, however, is possible without any experience to justify or support the known proposition.[65][68] Mathematical knowledge, such as that 2 + 2 = 4, is a paradigmatic case of a priori knowledge since no empirical investigation is necessary to confirm this fact.[67][68] The distinction between a posteriori and a priori knowledge is usually equated with the distinction between empirical and non-empirical knowledge.[67] This distinction pertains primarily to knowledge but it can also be applied to propositions or arguments. For example, an a priori proposition is a proposition that can be known independently of experience.[64]

The relevant experience in question is primarily identified with sensory experience. However, some non-sensory experiences, like memory and introspection, are often included as well. But certain conscious phenomena are excluded in this context. For example, the conscious phenomenon of a rational insight into the solution of a mathematical problem does not make the resulting knowledge a posteriori.[64][65] It is sometimes argued that, in a trivial sense, some form of experience is required even for a priori knowledge, namely the experience needed to learn the language in which the claim is expressed. For a priori knowledge, this is the only form of experience required. For this reason, knowing that "all bachelors are unmarried" is considered a form of a priori knowledge since, given an understanding of the terms "bachelor" and "unmarried", no further experience is necessary to know that it is true.[64][65]

One difficulty for a priori knowledge is to explain how it is possible. It is usually seen as unproblematic that one can come to know things through experience but it is not clear how knowledge is possible without experience. One of the earliest solutions to this problem is due to Plato, who argues that, in the context of geometry, the soul already possesses the knowledge and just needs to recollect or remember it to access it again.[68][69] A similar explanation is given by René Descartes, who holds that a priori knowledge exists as innate knowledge present in the mind of each human.[68] A different approach is to posit a special mental faculty responsible for this type of knowledge, often referred to as rational insight or rational intuition.[64]

Self-knowledge

In philosophy, "self-knowledge" usually refers to a person's knowledge of their own sensations, thoughts, beliefs, and other mental states. Many philosophers hold that it is a special type of knowledge since it is more direct than knowledge of the external world, which is mediated through the senses. Traditionally, it was often claimed that self-knowledge is indubitable. For example, when someone is in pain, they cannot be wrong about this fact. However, various contemporary theorists reject this position. A closely related issue is to explain how self-knowledge works. Some philosophers, like Russell, understand it as a form of knowledge by acquaintance while others, like John Locke, claim that there is an inner sense that works in analogy to how the external five senses work. According to a different perspective, self-knowledge is indirect in the sense that a person has to interpret their internal and external behavior in order to learn about their mental states, similar to how one can learn about the mental states of other people by interpreting their external behavior.[70][71][72]

In a slightly different sense, the term self-knowledge can also refer to the knowledge of the self as a persisting entity that has certain personality traits, preferences, physical attributes, relationships, goals, and social identities. This meaning is of particular interest to psychology and refers to a person's awareness of their own characteristics.[73][74][75] Self-knowledge is closely related to self-concept, the difference being that the self-concept also includes unrealistic aspects of how a person sees themselves. In this regard, self-knowledge is often measured by comparing a person's self-assessment of their character traits with how other people assess this person's traits.[74]

Situated knowledge

Situated knowledge is knowledge specific to a particular situation.[76][77] It is closely related to practical or tacit knowledge, which is learned and applied in specific circumstances. This especially concerns certain forms of acquiring knowledge, such as trial and error or learning from experience.[78] In this regard, situated knowledge usually lacks a more explicit structure and is not articulated in terms of universal ideas.[77] The term is often used in feminism and postmodernism to argue that many forms of knowledge are not absolute but depend on the concrete historical, cultural, and linguistic context.[76][77] Understood in this way, it is frequently used to argue against absolute or universal knowledge claims stated in the scientific discourse. Donna Haraway is a prominent defender of this position.[78][79] One of her arguments is based on the idea that perception is embodied and is not a universal "gaze from nowhere".[78]

Higher and lower knowledge

Many forms of eastern spirituality and religion distinguish between higher and lower knowledge. They are also referred to as para vidya and apara vidya in Hinduism or the two truths doctrine in Buddhism. Lower knowledge is based on the senses and the intellect. In this regard, all forms of empirical and objective knowledge belong to this category.[80][81] Most of the knowledge needed in one's everyday functioning is lower knowledge. It is about mundane or conventional things that are in tune with common sense. It includes the body of knowledge belonging to the empirical sciences.[80][82][83]

Higher knowledge, on the other hand, is understood as knowledge of God, the absolute, the true self, or the ultimate reality. It belongs neither to the external world of physical objects nor to the internal world of the experience of emotions and concepts. Many spiritual teachings emphasize the increased importance, or sometimes even exclusive importance, of higher knowledge in comparison to lower knowledge. This is usually based on the idea that achieving higher knowledge is one of the central steps on the spiritual path. In this regard, higher knowledge is seen as what frees the individual from ignorance, helps them realize God, or liberates them from the cycle of rebirth.[81][82] This is often combined with the view that lower knowledge is in some way based on a delusion: it belongs to the realm of mere appearances or Maya, while higher knowledge manages to view the reality underlying these appearances.[83] In the Buddhist tradition, the attainment of higher knowledge or ultimate truth is often associated with seeing the world from the perspective of sunyata, i.e. as a form of emptiness lacking inherent existence or intrinsic nature.[80][84][85]

Sources of knowledge

Perception using one of the five senses is an important source of knowledge.

Sources of knowledge are the ways in which people come to know things. According to Andrea Kern, they can be understood as rational capacities that are exercised when a person acquires new knowledge.[86] Various sources of knowledge are discussed in the academic literature, often in terms of the mental faculties responsible. They include perception, introspection, memory, inference, and testimony. However, not everyone agrees that all of them actually lead to knowledge. Usually, perception or observation, i.e. using one of the senses, is identified as the most important source of empirical knowledge.[5][6][87] So knowing that the baby is sleeping constitutes observational knowledge if it was caused by a perception of the snoring baby. But this would not be the case if one learned about this fact through a telephone conversation with one's spouse. Direct realists explain observational knowledge by holding that perception constitutes a direct contact with the perceived object. Indirect realists, on the other hand, contend that this contact happens indirectly: people can only directly perceive sense data, which are then interpreted as representing external objects. This distinction affects whether the knowledge of external objects is direct or indirect and may thus have an impact on how certain perceptual knowledge is.[11]

Introspection is often seen in analogy to perception as a source of knowledge, not of external physical objects, but of internal mental states. Traditionally, various theorists have ascribed a special epistemic status to introspection by claiming that it is infallible or that there is no introspective difference between appearance and reality. However, this claim has been contested in the contemporary discourse. Critics argue that it may be possible, for example, to mistake an unpleasant itch for a pain or to confuse the experience of a slight ellipse for the experience of a circle.[11] Perceptual and introspective knowledge often act as a form of fundamental or basic knowledge. According to some empiricists, perceptual knowledge is the only source of basic knowledge and provides the foundation for all other knowledge.[5][6]

Memory is usually identified as another source of knowledge. It differs from perception and introspection in that it is not as independent or fundamental as they are since it depends on other previous experiences.[11][88] The faculty of memory retains knowledge acquired in the past and makes it accessible in the present, as when remembering a past event or a friend's phone number.[89][90] It is generally considered a reliable source of knowledge, but it can be deceptive at times nonetheless, either because the original experience was unreliable or because the memory degraded and does not accurately represent the original experience anymore.[11]

Knowledge based on perception, introspection, or memory may also give rise to inferential knowledge, which comes about when reasoning is applied to draw inferences from another known fact.[5][6][11] In this regard, the perceptual knowledge of a Czech stamp on a postcard may give rise to the inferential knowledge that one's friend is visiting the Czech Republic. According to rationalists, some forms of knowledge are completely independent of observation and introspection. They are needed to explain how certain a priori beliefs, like the mathematical belief that 2 + 2 = 4, constitute knowledge. Some theorists, like Robert Audi, hold that the faculty of pure reason or rational intuition is responsible in these cases since there seem to be no sensory perceptions that could justify such general and abstract knowledge.[88][91] However, difficulties in providing a clear account of pure reason or rational intuition have led various empirically minded epistemologists to doubt that they constitute independent sources of knowledge.[5][6][11] A closely related approach is to hold that this type of knowledge is innate. According to Plato's theory of recollection, for example, it is accessed through a special form of remembering.[5][6]

Testimony is an important source of knowledge for many everyday purposes. The testimony given at a trial is one special case.

Testimony is often included as an additional source of knowledge. Unlike the other sources, it is not tied to one specific cognitive faculty. Instead, it is based on the idea that one person can come to know a fact because another person talks about this fact. Testimony can happen in numerous ways, like regular speech, a letter, the newspaper, or an online blog. The problem of testimony consists in clarifying under what circumstances and why it constitutes a source of knowledge. A popular response is that it depends on the reliability of the person pronouncing the testimony: only testimony from reliable sources can lead to knowledge.[11][92][93]

Structure of knowledge

The structure of knowledge is the way in which the mental states of a person need to be related to each other for knowledge to arise.[94][95] Most theorists hold that, among other things, an agent has to have good reasons for holding a belief if this belief is to amount to knowledge. So when challenged, the agent may justify their belief by referring to their reason for holding it. In many cases, this reason is itself a belief that may likewise be challenged. So when the agent believes that Ford cars are cheaper than BMWs because they believe they heard this from a reliable source, they may be challenged to justify why they believe that their source is reliable. Whatever support they present may also be challenged.[4][11][14] This threatens to lead to an infinite regress since the epistemic status at each step depends on the epistemic status of the previous step.[96][97] Theories of the structure of knowledge offer responses to this problem.[4][11][14]

Foundationalism, coherentism, and infinitism are theories of the structure of knowledge. The black arrows symbolize how one belief supports another belief.

The three most common theories are foundationalism, coherentism, and infinitism. Foundationalists and coherentists deny the existence of this infinite regress, in contrast to infinitists.[4][11][14] According to foundationalists, some basic reasons have their epistemic status independent of other reasons and thereby constitute the endpoint of the regress. Against this view, it has been argued that the concept of "basic reason" is contradictory: there should be a reason for why some reasons are basic and others are non-basic, in which case the basic reasons would depend on another reason after all and would therefore not be basic. An additional problem consists in finding plausible candidates for basic reasons.[4][11][14]

Coherentists and infinitists avoid these problems by denying the distinction between basic and non-basic reasons. Coherentists argue that there is only a finite number of reasons, which mutually support each other and thereby ensure each other's epistemic status.[4][11] Their critics contend that this constitutes the fallacy of circular reasoning.[98][99] For example, if belief b1 supports belief b2 and belief b2 supports belief b1, the agent has a reason for accepting one belief if they already have the other. However, their mutual support alone is not a good reason for newly accepting both beliefs at once. A closely related issue is that there can be various distinct sets of coherent beliefs and coherentists face the problem of explaining why someone should accept one coherent set rather than another.[4][11] For infinitists, in contrast to foundationalists and coherentists, there is an infinite number of reasons. This position faces the problem of explaining how human knowledge is possible at all, as it seems that the human mind is limited and cannot possess an infinite number of reasons.[4] In their traditional forms, both foundationalists and coherentists face the Gettier problem, i.e. that having a reason or justification for a true belief is not sufficient for knowledge in cases where cognitive luck is responsible for the success.[4][100]
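
The contrast between a regress that terminates and one that circles back can be pictured with a small toy model. The sketch below is illustrative only: the function name, the belief labels, and the rule that an unsupported belief counts as "basic" are assumptions made for the example, not any epistemologist's actual theory.

    # Toy model of the structure of justification. Each belief lists the
    # beliefs that support it; a belief with no supporters plays the role
    # of a foundationalist "basic" belief.
    def classify(support: dict[str, list[str]], belief: str, seen=frozenset()) -> str:
        if belief in seen:
            return "circular"      # mutual support, as coherentists allow
        supporters = support.get(belief, [])
        if not supporters:
            return "basic"         # the regress terminates here
        results = {classify(support, s, seen | {belief}) for s in supporters}
        return "circular" if "circular" in results else "grounded"

    # The Ford/BMW example from the text: the belief rests on the source's
    # reliability, which the toy simply treats as basic.
    support = {
        "ford_cheaper_than_bmw": ["source_is_reliable"],
        "source_is_reliable": [],
    }
    print(classify(support, "ford_cheaper_than_bmw"))    # -> grounded

    # Two beliefs supporting each other, like b1 and b2 above:
    print(classify({"b1": ["b2"], "b2": ["b1"]}, "b1"))  # -> circular

An infinite, non-repeating chain of reasons, as infinitism requires, cannot be stored in a finite structure like this at all, which mirrors the objection that a limited mind cannot possess an infinite number of reasons.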

Value of knowledge

Los portadores de la antorcha (The Torch-Bearers) – sculpture by Anna Hyatt Huntington symbolizing the transmission of knowledge from one generation to the next (Ciudad Universitaria, Madrid, Spain)

Knowledge may be valuable either because it is useful or because it is good in itself. Knowledge can be useful by helping a person achieve their goals. An example of this form of instrumental value is knowledge of a disease, which can be beneficial because it enables a person to identify and treat the disease. Or to reach an important job interview, knowledge of where and when it takes place helps the person arrive there on time.[101][102][103] However, this does not imply that knowledge is always useful. In this regard, many true beliefs about trivial matters have neither positive nor negative value. This concerns, for example, knowing how many grains of sand are on a specific beach or memorizing phone numbers one never intends to call. In a few cases, knowledge may even have a negative value. For example, if a person's life depends on gathering the courage to jump over a ravine, then having a true belief about the involved dangers may hinder the person from doing so.[102]

Besides having instrumental value, knowledge may also have intrinsic value. This means that certain forms of knowledge are good in themselves even if they do not provide any practical benefits. According to Duncan Pritchard, this applies to certain forms of knowledge associated with wisdom.[102][101] The value of knowledge is relevant to the field of education, specifically to the issue of choosing which knowledge should be passed on to the student.[101]

A more specific issue in epistemology concerns the question of whether or why knowledge is more valuable than mere true belief.[104][103] There is wide agreement that knowledge is good in some sense, but the thesis that knowledge is better than true belief is controversial. An early discussion of this problem is found in Plato's Meno in relation to the claim that both knowledge and true belief can successfully guide action and, therefore, apparently have the same value. For example, it seems that mere true belief is as effective as knowledge when trying to find the way to Larissa.[103][104][105] According to Plato, knowledge is better because it is more stable.[103] A different approach is to hold that knowledge gets its additional value from justification. However, if the value in question is understood primarily as an instrumental value, it is not clear in what sense knowledge is better than mere true belief since they are usually equally useful.[104]

The problem of the value of knowledge is often discussed in relation to reliabilism and virtue epistemology.[104][103][106] Reliabilism can be defined as the thesis that knowledge is reliably-formed true belief. On this view, it is difficult to explain how a reliable belief-forming process adds additional value.[104] According to an analogy by Linda Zagzebski, a cup of coffee made by a reliable coffee machine has the same value as an equally good cup of coffee made by an unreliable coffee machine.[107] This difficulty in solving the value problem is sometimes used as an argument against reliabilism. Virtue epistemologists see knowledge as the manifestation of cognitive virtues and can thus argue that knowledge has additional value due to its association with virtue. However, not everyone agrees that knowledge actually has additional value over true belief. A similar view is defended by Jonathan Kvanvig, who argues that the main epistemic value resides not in knowledge but in understanding, which implies grasping how one's beliefs cohere with each other.[104][103][108]

Philosophical skepticism

Pyrrho of Elis was one of the first philosophical skeptics.

Philosophical skepticism in its strongest form, also referred to as global skepticism, is the thesis that humans lack any form of knowledge or that knowledge is impossible. Very few philosophers have explicitly defended this position. However, it has been influential nonetheless, usually in a negative sense: many researchers see it as a serious challenge to any epistemological theory and often try to show how their preferred theory overcomes it.[4][5][11] For example, it is commonly accepted that perceptual experience constitutes a source of knowledge. However, according to the dream argument, this is not the case since dreaming provides unreliable information and since the agent could be dreaming without knowing it. Because of this inability to discriminate between dream and perception, it is argued that there is no perceptual knowledge.[4][5][6] A similar often cited thought experiment assumes that the agent is actually a brain in a vat that is just fed electrical stimuli. Such a brain would have the false impression of having a body and interacting with the external world. The basic argument is the same: since the agent is unable to tell the difference, they do not know that they have a body responsible for reliable perceptions.[11]

One issue revealed through these thought experiments is the problem of underdetermination: that the evidence available is not sufficient to make a rational decision between competing theories. If two contrary hypotheses explain the appearances equally well, then the agent is not justified in believing one of those hypotheses rather than the other. Based on this premise, the general skeptic has to argue that this is true for all our knowledge, that there is always an alternative and very different explanation. Another skeptical argument is based on the idea that human cognition is fallible and therefore lacks absolute certainty. More specific arguments target particular theories of knowledge, such as foundationalism or coherentism, and try to show that their concept of knowledge is deeply flawed.[4][11] An important argument against global skepticism is that it seems to contradict itself: the claim that there is no knowledge appears to constitute a knowledge-claim itself.[6] Other responses come from common sense philosophy and reject global skepticism based on the fact that it contradicts many plausible ordinary beliefs. Common sense is then treated as more reliable than the abstract reasoning cited in favor of skepticism.[11][109]

Certain less radical forms of skepticism deny that knowledge exists within a specific area or discipline, sometimes referred to as local or selective skepticism.[5][6][11] It is often motivated by the idea that certain phenomena do not accurately represent their subject matter. They may thus lead to false impressions concerning its nature. External world skeptics hold that one can only know about one's own sensory impressions and experiences but not about the external world. This is based on the idea that beliefs about the external world are mediated through the senses. The senses are faulty at times and may thus show things that are not really there. This problem is avoided on the level of sensory impressions, which are given to the experiencer directly without an intermediary. In this sense, the person may be wrong about seeing a red Ferrari in the street (it might have been a Maserati or a mere light reflection) but they cannot be wrong about having a sensory impression of the color red.[5][6][11]

The inverse path is taken by some materialists, who accept the existence of the external physical world but deny the existence of the internal realm of mind and consciousness based on the difficulty of explaining how the two realms can exist together.[14] Other forms of local skepticism accept scientific knowledge but deny the possibility of moral knowledge, for example, because there is no reliable way to empirically measure whether a moral claim is true or false.[5]

The issue of the definition and standards of knowledge is central to the question of whether skepticism in its different forms is true. If very high standards are used, for example, that knowledge implies infallibility, then skepticism becomes more plausible. In this case, the skeptic only has to show that no belief is absolutely certain; that while the actual belief is true, it could have been false. However, the more these standards are weakened to how the term is used in everyday language, the less plausible skepticism becomes.[6][110][11] For example, such a position is defended in the pragmatist epistemology, which sees all beliefs and theories as fallible hypotheses and holds that they may need to be revised as new evidence is acquired.[111][112][113]

In various disciplines

Formal epistemology

Formal epistemology studies knowledge using formal tools, such as mathematics and logic.[114] An important issue in this field concerns the epistemic principles of knowledge. These are rules governing how knowledge and related states behave and in what relations they stand to each other. The transparency principle, also referred to as the luminosity of knowledge, is an often discussed principle. It states that knowing something implies the second-order knowledge that one knows it. This principle implies that if Heike knows that today is Monday, then she also knows that she knows that today is Monday.[11][115][116] Other commonly discussed principles are the conjunction principle, the closure principle, and the evidence transfer principle. For example, the conjunction principle states that having two justified beliefs in two separate propositions implies that the agent is also justified in believing the conjunction of these two propositions. In this regard, if Bob knows that dogs are animals and he also knows that cats are animals, then he knows the conjunction of these two propositions, i.e. he knows that dogs and cats are animals.[4]
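
Written with a knowledge operator K and a justification operator J, the principles described above take the following form; the notation is a standard sketch, and the closure principle, which the text names without spelling out, is given in its usual textbook formulation:

    \[
    \begin{aligned}
    &\text{Transparency:} && K p \rightarrow K K p \\
    &\text{Conjunction:} && (J p \land J q) \rightarrow J (p \land q) \\
    &\text{Closure:} && \big( K p \land K (p \rightarrow q) \big) \rightarrow K q
    \end{aligned}
    \]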

Science

The scientific approach is usually regarded as an exemplary process for how to gain knowledge about empirical facts.[117][118] Scientific knowledge includes mundane knowledge about easily observable facts, for example, chemical knowledge that certain reactants become hot when mixed together. But it also encompasses knowledge of less tangible issues, like claims about the behavior of genes, neutrinos, and black holes.[119]

A key aspect of most forms of science is that they seek natural laws that explain empirical observations.[117][118] Scientific knowledge is discovered and tested using the scientific method. This method aims to arrive at reliable knowledge by formulating the problem in a clear manner and by ensuring that the evidence used to support or refute a specific theory is public, reliable, and replicable. This way, other researchers can repeat the experiments and observations in the initial study to confirm or disconfirm it.[120] The scientific method is often analyzed as a series of steps. According to some formulations, it begins with regular observation and data collection. Based on these insights, the scientists then try to find a hypothesis that explains the observations. The hypothesis is then tested using a controlled experiment to compare whether predictions based on the hypothesis match the actual results. As a last step, the results are interpreted and a conclusion is reached whether and to what degree the findings confirm or disconfirm the hypothesis.[121][122][123]
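
The predict-and-compare step at the heart of this loop can be sketched in a few lines of code. Everything concrete here, the free-fall hypothesis, the measurements, and the 0.5 m tolerance, is invented for illustration and not taken from the text:

    # Compare predictions derived from a hypothesis with experimental results.
    def predict(t: float) -> float:
        return 4.9 * t * t  # hypothesis: distance fallen (m) after t seconds

    observations = [(1.0, 4.8), (2.0, 19.7), (3.0, 44.3)]  # (seconds, measured meters)

    errors = [abs(predict(t) - d) for t, d in observations]
    confirmed = all(e < 0.5 for e in errors)  # crude acceptance threshold
    print("hypothesis confirmed" if confirmed else "hypothesis disconfirmed")

A real study would replace the crude threshold with statistical tests, but the control flow, deriving predictions, comparing them with observations, and drawing a conclusion, is the same.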

The progress of scientific knowledge is traditionally seen as a gradual and continuous process in which the existing body of knowledge is increased in each step. However, this view has been challenged by various philosophers of science, such as Thomas Kuhn, who holds that between phases of incremental progress, there are so-called scientific revolutions in which a paradigm shift occurs. According to this view, various fundamental assumptions are changed due to the paradigm shift, resulting in a radically new perspective on the body of scientific knowledge that is incommensurable with the previous outlook.[124][125]

Religion

Knowledge plays a central role in many religions. Knowledge claims about the existence of God or religious doctrines about how one should live one's life are found in almost every culture.[126] However, such knowledge claims are often controversial and are commonly rejected by religious skeptics and atheists.[127] The epistemology of religion is the field of inquiry that investigates whether belief in God and in other religious doctrines is rational and amounts to knowledge.[126][128] One important view in this field is evidentialism. It states that belief in religious doctrines is justified if it is supported by sufficient evidence. Suggested examples of evidence for religious doctrines include religious experiences such as direct contact with the divine or inner testimony, as when hearing God's voice.[126][128][129] However, evidentialists often deny that belief in religious doctrines amounts to knowledge, based on the claim that there is not sufficient evidence.[126][128] A famous saying in this regard is due to Bertrand Russell. When asked how he would justify his lack of belief in God when facing his judgment after death, he replied "Not enough evidence, God! Not enough evidence."[126]

However, religious teachings about the existence and nature of God are not always understood as knowledge claims by their defenders and some explicitly state that the proper attitude towards such doctrines is not knowledge but faith. This is often combined with the assumption that these doctrines are true but cannot be fully understood by reason or verified through rational inquiry. For this reason, it is claimed that one should accept them even though they do not amount to knowledge.[127] Such a view is reflected in a famous saying by Immanuel Kant where he claims that he "had to deny knowledge in order to make room for faith."[130]

Distinct religions often differ from each other concerning the doctrines they proclaim as well as their understanding of the role of knowledge in religious practice.[131][132] In both the Jewish and the Christian tradition, knowledge plays a role in the fall of man, in which Adam and Eve were expelled from the Garden of Eden. The fall came about because they ignored God's command and ate from the tree of knowledge, which gave them the knowledge of good and evil. This is understood as a rebellion against God, since this knowledge belongs to God and it is not for humans to decide what is right or wrong.[133][134][135] In Christian literature, knowledge is seen as one of the seven gifts of the Holy Spirit.[136] In Islam, "the Knowing" (al-ʿAlīm) is one of the 99 names reflecting distinct attributes of God. The Qur'an asserts that knowledge comes from God, and the acquisition of knowledge is encouraged in the teachings of Muhammad.[137][138]

Oil painting of Saraswati, the goddess of knowledge and the arts in Hinduism.

In Buddhism, knowledge that leads to liberation is called vijjā. It contrasts with avijjā or ignorance, which is understood as the root of all suffering. This is often explained in relation to the claim that humans suffer because they crave things that are impermanent. The ignorance of the impermanent nature of things is seen as the factor responsible for this craving.[139][140][141] The central goal of Buddhist practice is to stop suffering. This aim is to be achieved by understanding and practicing the teaching known as the Four Noble Truths and thereby overcoming ignorance.[140][141] Knowledge plays a key role in the classical path of Hinduism known as jñāna yoga or "path of knowledge". Its aim is to achieve oneness with the divine by fostering an understanding of the self and its relation to Brahman or ultimate reality.[142][143]

Anthropology

The anthropology of knowledge is a multi-disciplinary field of inquiry.[144][145] It studies how knowledge is acquired, stored, retrieved, and communicated.[146] Special interest is given to how knowledge is reproduced and undergoes changes in relation to social and cultural circumstances.[144] In this context, the term knowledge is used in a very broad sense, roughly equivalent to terms like understanding and culture.[144][147] This means that the forms and reproduction of understanding are studied irrespective of their truth value. In epistemology, on the other hand, knowledge is usually restricted to forms of true belief. The main focus in anthropology is on empirical observations of how people ascribe truth values to meaning contents, like when affirming an assertion, even if these contents are false.[144] But it also includes practical components: knowledge is what is employed when interpreting and acting on the world and involves diverse phenomena, such as feelings, embodied skills, information, and concepts. It is used to understand and anticipate events in order to prepare and react accordingly.[147]

The reproduction of knowledge and its changes often happen through some form of communication.[144][146] This includes face-to-face discussions and online communications as well as seminars and rituals. Institutions, such as university departments and scientific journals in the academic context, play an important role in this process.[144] A tradition or traditional knowledge may be defined as knowledge that has been reproduced within a society or geographic region over several generations. However, societies also respond to various external influences, such as other societies, whose understanding is often interpreted and incorporated in a modified form.[144][147][148]

Individuals belonging to the same social group usually understand things and organize knowledge in similar ways to one another. In this regard, social identities play a significant role: individuals who associate themselves with similar identities, such as age-based, professional, religious, and ethnic identities, tend to embody similar forms of knowledge. Such identities concern both how individuals see themselves, for example, in terms of the ideals they pursue, and how other people see them, for example, in the expectations they have toward them.[144][149]

Sociology

The sociology of knowledge is closely related to the anthropology of knowledge.[150][147][151] It is the subfield of sociology that investigates how thought and society are related to each other. In it, the term "knowledge" is understood in a very wide sense that encompasses many types of mental products. It includes philosophical and political ideas as well as religious and ideological doctrines and can also be applied to folklore, law, and technology. In this regard, knowledge is sometimes treated as a synonym for culture. The sociology of knowledge studies the sociohistorical circumstances in which knowledge arises and the existential conditions on which it depends. The investigated conditions can include physical, demographic, economic, and sociocultural factors. The sociology of knowledge differs from most forms of epistemology since its main focus is on how ideas originate and what consequences they have rather than on whether these ideas are coherent or correct. An example of a theory in this field is due to Karl Marx, who claimed that the dominant ideology in a society is a product of and changes with the underlying socioeconomic conditions.[152][153][154] A related perspective is found in certain forms of decolonial scholarship that claim that colonial powers are responsible for the hegemony of western knowledge systems and seek a decolonization of knowledge to undermine this hegemony.[155][156]

References

Notes


  1. A defeater of a belief is evidence that this belief is false.[48]

Citations


  • Ichikawa, Jonathan Jenkins; Steup, Matthias (2018). "The Analysis of Knowledge". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 2 May 2022. Retrieved 24 May 2022.

  • Bolisani, Ettore; Bratianu, Constantin (2018). "The Elusive Definition of Knowledge". Emergent Knowledge Strategies: Strategic Thinking in Knowledge Management. Springer International Publishing. pp. 1–22. doi:10.1007/978-3-319-60657-6_1. ISBN 978-3-319-60657-6. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • "knowledge". Oxford dictionary (American English). Archived from the original on 14 July 2010.

  • Klein, Peter D. (1998). "Knowledge, concept of". In Craig, Edward (ed.). Routledge Encyclopedia of Philosophy. London; New York: Routledge. doi:10.4324/9780415249126-P031-1. ISBN 978-0-415-25069-6. OCLC 38096851. Archived from the original on 13 June 2022. Retrieved 13 June 2022.

  • Hetherington, Stephen. "Knowledge". Internet Encyclopedia of Philosophy. Archived from the original on 2 June 2022. Retrieved 18 May 2022.

  • Stroll, Avrum. "epistemology". www.britannica.com. Archived from the original on 10 July 2019. Retrieved 20 May 2022.

  • Stanley, Jason; Williamson, Timothy (2001). "Knowing How". Journal of Philosophy. 98 (8): 411–444. doi:10.2307/2678403. JSTOR 2678403. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Zagzebski 1999, p. 92.

  • Zagzebski 1999, pp. 92, 96–97.

  • Zagzebski 1999, p. 109.

  • Steup, Matthias; Neta, Ram (2020). "Epistemology". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 21 July 2020. Retrieved 22 May 2022.

  • Zagzebski 1999, p. 99.

  • Hannon, Michael (2021). "Knowledge, concept of". Routledge Encyclopedia of Philosophy. London; New York: Routledge. doi:10.4324/9780415249126-P031-2. ISBN 978-0-415-07310-3. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Lehrer, Keith (2015). "1. The Analysis of Knowledge". Theory of Knowledge. Routledge. ISBN 978-1-135-19609-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Zagzebski 1999, p. 96.

  • Gupta, Anil (2021). "Definitions: 1.1 Real and nominal definitions". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 1 May 2022. Retrieved 28 May 2022.

  • Zagzebski 1999, pp. 99–100.

  • Silva, Paul Jr. (September 2019). "Beliefless knowing". Pacific Philosophical Quarterly. 100 (3): 723–746. doi:10.1111/papq.12273. S2CID 240808741.

  • Crumley, Jack S. (2016). "What do you know? Look at the artifacts". Introducing Philosophy: Knowledge and Reality. Peterborough, Ontario, Canada: Broadview Press. pp. 51–52. ISBN 978-1-55481-129-8. OCLC 950057343. Archived from the original on 13 June 2022. Retrieved 13 June 2022.

  • Allen, Barry (2005). "Knowledge". In Horowitz, Maryanne Cline (ed.). New Dictionary of the History of Ideas. Vol. 3. New York: Charles Scribner's Sons. pp. 1199–1204. ISBN 978-0-684-31377-1. OCLC 55800981. Archived from the original on 22 August 2017.

  • Pritchard, Duncan (2013). "3 Defining knowledge". What is this thing called Knowledge?. Routledge. ISBN 978-1-134-57367-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • McCain, Kevin. "Problem of the Criterion". Internet Encyclopedia of Philosophy. Archived from the original on 2 June 2022. Retrieved 28 May 2022.

  • Fumerton, Richard (2008). "The Problem of the Criterion". The Oxford Handbook of Skepticism. pp. 34–52. doi:10.1093/oxfordhb/9780195183214.003.0003. ISBN 978-0-19-518321-4. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • García-Arnaldos, María Dolores (23 October 2020). "An Introduction to the Theory of Knowledge, written by O'Brien, D." History of Philosophy & Logical Analysis. 23 (2): 508. doi:10.30965/26664275-20210003. ISSN 2666-4275. S2CID 228985437.

  • O'Brien, Dan (2016). "Part I Chapter 7: Family resemblance". An Introduction to the Theory of Knowledge. John Wiley & Sons. ISBN 978-1-5095-1240-9.

  • Black, Tim (1 April 2002). "Relevant alternatives and the shifting standards of knowledge". Southwest Philosophy Review. 18 (1): 23–32. doi:10.5840/swphilreview20021813. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Klausen, Søren Harnow (March 2015). "Group knowledge: a real-world approach". Synthese. 192 (3): 813–839. doi:10.1007/s11229-014-0589-9. S2CID 207246817.

  • Lackey, Jennifer (2021). The Epistemology of Groups. Oxford University Press. ISBN 978-0-19-965660-8.

  • "knowledge". The American Heritage Dictionary. HarperCollins. Retrieved 25 October 2022.

  • Magee, Bryan; Popper, Karl R. (1971). "Conversation with Karl Popper". In Magee, Bryan (ed.). Modern British philosophy. New York: St. Martin's Press. pp. 74–75. ISBN 978-0-19-283047-0. OCLC 314039. Popper: Putting our ideas into words, or better, writing them down, makes an important difference. ... It is what I call 'knowledge in the objective sense'. Scientific knowledge belongs to it. It is this knowledge which is stored in our libraries rather than our heads. Magee: And you regard the knowledge stored in our libraries as more important than the knowledge stored in our heads. Popper: Much more important, from every point of view

  • "knowledge base". The American Heritage Dictionary. HarperCollins. Retrieved 25 October 2022.

  • Walton, Douglas N. (January 2005). "Pragmatic and idealized models of knowledge and ignorance". American Philosophical Quarterly. 42 (1): 59–69 [59, 64]. JSTOR 20010182. It is a pervasive assumption in recent analytical philosophy that knowledge can be defined as a modality representing a rational agent's true and consistent beliefs. Such views are based on rationality assumptions. One is that knowledge can only consist of true propositions. This way of speaking is sharply at odds with the way we speak about knowledge, for example, in computing, where a so-called knowledge base can be a database, that is, a set of data that has been collected and is thought to consist of true propositions, even though, realistically speaking, many of them might later be shown to be false or untenable. ... The pragmatic account of knowledge starts with a knowledge system, meaning a working system with an agent having a database. ... The notion of a search can be a social one, in many instances. A group of agents can be engaged in the search, and some of them can know things that others do not know.

  • Crotty, Kevin (2022). Ignorance, Irony, and Knowledge in Plato. Rowman & Littlefield. p. 16. ISBN 978-1-6669-2712-2.

  • Peels, Rik; Blaauw, Martijn (2016). The Epistemic Dimensions of Ignorance. Cambridge University Press. p. 25. ISBN 978-1-107-17560-0.

  • Publishers, HarperCollins. "The American Heritage Dictionary entry: ignorance". www.ahdictionary.com. Retrieved 7 March 2023.

  • Truncellito, David A. "Epistemology". Internet Encyclopedia of Philosophy. Retrieved 8 March 2023.

  • Moser, Paul K. (2005). The Oxford Handbook of Epistemology. Oxford University Press. p. 3. ISBN 978-0-19-020818-9.

  • Parikh, Rohit; Renero, Adriana (2017). "Justified True Belief: Plato, Gettier, and Turing". Philosophical Explorations of the Legacy of Alan Turing: Turing 100. Springer International Publishing. pp. 93–102. doi:10.1007/978-3-319-53280-6_4. ISBN 978-3-319-53280-6. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Chappell, Sophie-Grace (2019). "Plato on Knowledge in the Theaetetus". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 10 July 2022. Retrieved 12 June 2022.

  • Zagzebski 1999, p. 93.

  • Zagzebski 1999, p. 100.

  • Hetherington, Stephen. "Gettier Problems". Internet Encyclopedia of Philosophy. Archived from the original on 19 February 2009. Retrieved 28 May 2022.

  • Rodríguez, Ángel García (2018). "Fake barns and our epistemological theorizing". Crítica: Revista Hispanoamericana de Filosofía. 50 (148): 29–54. doi:10.22201/iifs.18704905e.2018.02. ISSN 0011-1503. JSTOR 26767766. S2CID 171635198.

  • Goldman, Alvin I. (18 November 1976). "Discrimination and Perceptual Knowledge". The Journal of Philosophy. 73 (20): 771–791. doi:10.2307/2025679. JSTOR 2025679.

  • Sudduth, Michael. "Defeaters in Epistemology: 2b Defeasibility Analyses and Propositional Defeaters". Internet Encyclopedia of Philosophy. Archived from the original on 2 June 2022. Retrieved 17 May 2022.

  • Durán, Juan M.; Formanek, Nico (1 December 2018). "Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism". Minds and Machines. 28 (4): 645–666. doi:10.1007/s11023-018-9481-6. ISSN 1572-8641. S2CID 53102940.

  • Comesaña, Juan (2005). "Justified vs. Warranted Perceptual Belief: Resisting Disjunctivism". Philosophy and Phenomenological Research. 71 (2): 367–383. doi:10.1111/j.1933-1592.2005.tb00453.x. ISSN 0031-8205. JSTOR 40040862. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • McCain, Kevin; Stapleford, Scott; Steup, Matthias (2021). Epistemic Dilemmas: New Arguments, New Angles. Routledge. p. 111. ISBN 978-1-000-46851-9.

  • Zagzebski 1999, p. 101.

  • Kraft, Tim (2012). "Scepticism, Infallibilism, Fallibilism". Discipline Filosofiche. 22 (2): 49–70. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Zagzebski 1999, pp. 103–104.

  • Sidelle, Alan (2001). "An Argument That Internalism Requires Infallibility". Philosophy and Phenomenological Research. 63 (1): 163–179. doi:10.1111/j.1933-1592.2001.tb00096.x. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Kirkham, Richard L. (October 1984). "Does the Gettier Problem Rest on a Mistake?". Mind. New Series. 93 (372): 501–513. doi:10.1093/mind/XCIII.372.501. JSTOR 2254258.

  • Zagzebski 1999, pp. 93–94, 104–105.

  • Barnett, Ronald (1990). The Idea of Higher Education. McGraw-Hill Education (UK). p. 40. ISBN 978-0-335-09420-2.

  • Lilley, Simon; Lightfoot, Geoffrey; Amaral, Paulo (2004). Representing Organization: Knowledge, Management, and the Information Age. Oxford University Press. pp. 162–163. ISBN 978-0-19-877541-6.

  • Pritchard, Duncan (2013). "1 Some preliminaries". What is this thing called Knowledge?. Routledge. ISBN 978-1-134-57367-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Bartlett, Gary (2018). "Occurrent States". Canadian Journal of Philosophy. 48 (1): 1–17. doi:10.1080/00455091.2017.1323531. S2CID 220316213. Archived from the original on 4 May 2021. Retrieved 3 April 2021.

  • Schwitzgebel, Eric (2021). "Belief: 2.1 Occurrent Versus Dispositional Belief". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 15 November 2019. Retrieved 8 June 2022.

  • Gascoigne, Neil; Thornton, Tim (2014). Tacit Knowledge. Routledge. pp. 8, 37, 81, 108. ISBN 978-1-317-54726-6.

  • Hill, Sonya D., ed. (2009). "Knowledge–Based View Of The Firm". Encyclopedia of Management. Gale. ISBN 978-1-4144-0691-6.

  • Hasan, Ali; Fumerton, Richard (2020). "Knowledge by Acquaintance vs. Description: 1. The Distinction". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 31 May 2022. Retrieved 28 May 2022.

  • DePoe, John M. "Knowledge by Acquaintance and Knowledge by Description". Internet Encyclopedia of Philosophy. Archived from the original on 2 June 2022. Retrieved 28 May 2022.

  • Baehr, Jason S. "A Priori and A Posteriori". Internet Encyclopedia of Philosophy. Retrieved 17 September 2022.

  • Russell, Bruce (2020). "A Priori Justification and Knowledge". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 18 September 2022.

  • "a posteriori knowledge". www.britannica.com. Retrieved 17 September 2022.

  • Moser, Paul K. "A posteriori". Routledge Encyclopedia of Philosophy. Routledge. Retrieved 18 September 2022.

  • "a priori knowledge". www.britannica.com. Retrieved 17 September 2022.

  • Woolf, Raphael (1 January 2013). "Plato and the Norms of Thought". Mind. 122 (485): 171–216. doi:10.1093/mind/fzt012. ISSN 0026-4423.

  • Gertler, Brie (2021). "Self-Knowledge". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 22 October 2022.

  • Gertler, Brie (2010). "1. Introduction". Self-Knowledge. Routledge. p. 1. ISBN 978-1-136-85811-6.

  • McGeer, V. (2001). "Self-knowledge: Philosophical Aspects". International Encyclopedia of the Social & Behavioral Sciences: 13837–13841. doi:10.1016/B0-08-043076-7/01073-1. ISBN 978-0-08-043076-8.

  • Gertler, Brie. "Self-Knowledge > Knowledge of the Self". Stanford Encyclopedia of Philosophy. Retrieved 22 October 2022.

  • Morin, Alain; Racy, Famira (2021). "15. Dynamic self-processes - Self-knowledge". The Handbook of Personality Dynamics and Processes. Academic Press. pp. 373–374. ISBN 978-0-12-813995-0.

  • Kernis, Michael H. (2013). Self-Esteem Issues and Answers: A Sourcebook of Current Perspectives. Psychology Press. p. 209. ISBN 978-1-134-95270-0.

  • "situated knowledge". APA Dictionary of Psychology. Washington, DC: American Psychological Association. n.d. Retrieved 18 September 2022.

  • Hunter, Lynette (2009). "Situated Knowledge". Mapping Landscapes for Performance as Research: Scholarly Acts and Creative Cartographies. Palgrave Macmillan UK. pp. 151–153. doi:10.1057/9780230244481_23. ISBN 978-0-230-24448-1.

  • Haraway, Donna (1988). "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective". Feminist Studies. 14 (3): 575–599. doi:10.2307/3178066. JSTOR 3178066. S2CID 39794636.

  • Thompson, C. M. (2001). "Situated Knowledge: Feminist and Science and Technology Studies Perspectives". International Encyclopedia of the Social & Behavioral Sciences. Pergamon. pp. 14129–14133. ISBN 978-0-08-043076-8.

  • Thakchoe, Sonam (2022). "The Theory of Two Truths in India". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 16 May 2022. Retrieved 6 June 2022.

  • "Para and Apara Vidya". The Hindu. 18 September 2018. Archived from the original on 31 May 2022. Retrieved 6 June 2022.

  • Mishra, T. K. (2021). The Power of Ethics: Some lessons from the Bhagavad-Gita. K.K. Publications. p. 52. ISBN 978-81-7844-127-6. Archived from the original on 30 July 2022. Retrieved 21 June 2022.

  • Ghose, Aurobindo (1998). "Political Writings and Speeches. 1890–1908: The Glory of God in Man". Bande Mataram II. ISBN 978-81-7058-416-2.

  • Paul Williams (2008). Mahayana Buddhism: The Doctrinal Foundations. Routledge. pp. 68–69. ISBN 978-1-134-25056-1. Archived from the original on 21 November 2016. Retrieved 12 June 2022.

  • Christopher W. Gowans (2014). Buddhist Moral Philosophy: An Introduction. Routledge. pp. 69–70. ISBN 978-1-317-65934-1. Archived from the original on 12 November 2021. Retrieved 12 June 2022.

  • Kern, Andrea (2017). Sources of Knowledge: On the Concept of a Rational Capacity for Knowledge. Harvard University Press. pp. 8–10, 133. ISBN 978-0-674-41611-6.

  • O’Brien, Daniel. "The Epistemology of Perception". Internet Encyclopedia of Philosophy. Retrieved 25 October 2022.

  • Audi, Robert (2002). "The Sources of Knowledge". The Oxford Handbook of Epistemology. Oxford University Press. pp. 71–94. ISBN 978-0-19-513005-8. Archived from the original on 12 June 2022. Retrieved 12 June 2022.

  • Gardiner, J. M. (29 September 2001). "Episodic memory and autonoetic consciousness: a first-person approach". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 356 (1413): 1351–1361. doi:10.1098/rstb.2001.0955. ISSN 0962-8436. PMC 1088519. PMID 11571027.

  • Michaelian, Kourken; Sutton, John (2017). "Memory: 3. Episodicity". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 5 October 2021. Retrieved 2 October 2021.

  • Markie, Peter; Folescu, M. (2021). "Rationalism vs. Empiricism". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 9 August 2019. Retrieved 8 June 2022.

  • Leonard, Nick (2021). "Epistemological Problems of Testimony". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 10 July 2022. Retrieved 8 June 2022.

  • Green, Christopher R. "Epistemology of Testimony". Internet Encyclopedia of Philosophy. Archived from the original on 7 March 2022. Retrieved 8 June 2022.

  • Hasan, Ali; Fumerton, Richard (2018). "Foundationalist Theories of Epistemic Justification". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 5 August 2019. Retrieved 6 June 2022.

  • Fumerton, Richard (2022). Foundationalism. doi:10.1017/9781009028868. ISBN 978-1-009-02886-8. Archived from the original on 12 June 2022. Retrieved 12 June 2022.

  • Cameron, Ross (2018). "Infinite Regress Arguments". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 2 January 2020. Retrieved 12 June 2022.

  • Clark, Romane (1988). "Vicious Infinite Regress Arguments". Philosophical Perspectives. 2: 369–380. doi:10.2307/2214081. JSTOR 2214081. Archived from the original on 26 March 2022. Retrieved 12 June 2022.

  • Murphy, Peter. "Coherentism in Epistemology". Internet Encyclopedia of Philosophy. Archived from the original on 12 June 2022. Retrieved 8 June 2022.

  • Lammenranta, Markus. "Epistemic Circularity". Internet Encyclopedia of Philosophy. Archived from the original on 27 January 2021. Retrieved 12 June 2022.

  • Schafer, Karl (September 2014). "Knowledge and Two Forms of Non-Accidental Truth". Philosophy and Phenomenological Research. 89 (2): 375. doi:10.1111/phpr.12062. What the Gettier cases show is that this condition is insufficient to capture all the ways in which accidental truth is incompatible with knowledge.

  • Degenhardt, M. A. B. (2019). Education and the Value of Knowledge. Routledge. pp. 1–6. ISBN 978-1-000-62799-2.

  • Pritchard, Duncan (2013). "2 The value of knowledge". What is this thing called Knowledge?. Routledge. ISBN 978-1-134-57367-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Olsson, Erik J (December 2011). "The Value of Knowledge: The Value of Knowledge". Philosophy Compass. 6 (12): 874–883. doi:10.1111/j.1747-9991.2011.00425.x. S2CID 143034920.

  • Pritchard, Duncan; Turri, John; Carter, J. Adam (2022). "The Value of Knowledge". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 19 September 2022.

  • Plato (2002). Five Dialogues. Indianapolis, IN: Hackett Pub. Co. pp. 89–90, 97b–98a. ISBN 978-0-87220-633-5.

  • Pritchard, Duncan (April 2007). "Recent Work on Epistemic Value". American Philosophical Quarterly. 44 (2): 85–110. JSTOR 20464361.

  • Turri, John; Alfano, Mark; Greco, John (2021). "Virtue Epistemology: 6. Epistemic Value". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 20 September 2022.

  • Kvanvig, Jonathan L. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge University Press. p. 200. ISBN 978-1-139-44228-2. Archived from the original on 23 July 2020. Retrieved 16 July 2020.

  • Lycan, William G. (2019). "2. Moore against the New Skeptics". On Evidence in Philosophy. Oxford University Press. pp. 21–36. ISBN 978-0-19-256526-6.

  • Zagzebski 1999, pp. 97–98.

  • McDermid, Douglas. "Pragmatism: 2b. Anti-Cartesianism". Internet Encyclopedia of Philosophy.

  • Misak, Cheryl (2002). Truth, Politics, Morality: Pragmatism and Deliberation. Routledge. p. 53. ISBN 978-1-134-82618-6.

  • Hamner, M. Gail (2003). American Pragmatism: A Religious Genealogy. Oxford University Press. p. 87. ISBN 978-0-19-515547-1.

  • Weisberg, Jonathan (2021). "Formal Epistemology". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Archived from the original on 14 March 2015. Retrieved 5 June 2022.

  • Das, Nilanjan; Salow, Bernhard (2018). "Transparency and the KK Principle". Noûs. 52 (1): 3–23. doi:10.1111/nous.12158. Archived from the original on 12 June 2022. Retrieved 12 June 2022.

  • Dokic, Jérôme; Égré, Paul (2009). "Margin for Error and the Transparency of Knowledge". Synthese. 166 (1): 1–20. doi:10.1007/s11229-007-9245-y. S2CID 14221986. Archived from the original on 12 June 2022. Retrieved 12 June 2022.

  • Pritchard, Duncan (2013). "11 Scientific knowledge". What is this thing called Knowledge?. Routledge. pp. 115–118. ISBN 978-1-134-57367-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Moser, Paul K. (2005). "13. Scientific Knowledge". The Oxford Handbook of Epistemology. Oxford University Press. p. 385. ISBN 978-0-19-020818-9.

  • Moser, Paul K. (2005). "13. Scientific Knowledge". The Oxford Handbook of Epistemology. Oxford University Press. p. 386. ISBN 978-0-19-020818-9.

  • Moser, Paul K. (2005). "13. Scientific Knowledge". The Oxford Handbook of Epistemology. Oxford University Press. p. 390. ISBN 978-0-19-020818-9.
    Hatfield, Gary (1996). "Scientific method". In Craig, Edward (ed.). Routledge Encyclopedia of Philosophy. Routledge.
    "scientific method". www.britannica.com. Retrieved 17 July 2022.
    Hepburn, Brian; Andersen, Hanne (2021). "Scientific Method". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 23 July 2022.

  • "scientific method". www.britannica.com. Retrieved 17 July 2022.

  • Hatfield, Gary (1996). "Scientific method". In Craig, Edward (ed.). Routledge Encyclopedia of Philosophy. Routledge.

  • Hepburn, Brian; Andersen, Hanne (2021). "Scientific Method". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 23 July 2022.

  • Pritchard, Duncan (2013). "11 Scientific knowledge". What is this thing called Knowledge?. Routledge. pp. 123–125. ISBN 978-1-134-57367-7. Archived from the original on 2 June 2022. Retrieved 12 June 2022.

  • Niiniluoto, Ilkka (2019). "Scientific Progress". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 8 March 2023.

  • Clark, Kelly James. "Religious Epistemology". Internet Encyclopedia of Philosophy. Retrieved 21 September 2022.

  • Penelhum, Terence (1971). "1. Faith, Scepticism and Philosophy". Problems of Religious Knowledge. Macmillan. ISBN 978-0-333-10633-4.

  • Forrest, Peter (2021). "The Epistemology of Religion". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 21 September 2022.

  • Dougherty, Trent (12 June 2014). "Faith, Trust, and Testimony". Religious Faith and Intellectual Virtue: 97–123. doi:10.1093/acprof:oso/9780199672158.003.0005. ISBN 978-0-19-967215-8.

  • Stevenson, Leslie (March 2003). "Opinion, Belief or Faith, and Knowledge". Kantian Review. 7: 72–101. doi:10.1017/S1369415400001746. ISSN 2044-2394. S2CID 143965507.

  • Paden, William E. (2009). "Comparative religion". The Routledge Companion to the Study of Religion. pp. 239–256. doi:10.4324/9780203868768-19. ISBN 978-0-203-86876-8.

  • Bouquet, Alan Coates (1962). Comparative Religion: A Short Outline. CUP Archive. p. 1.

  • Carson, Thomas; Cerrito, Joann, eds. (2003). New Catholic encyclopedia Vol. 14: Thi–Zwi (2nd ed.). Detroit: Thomson/Gale. p. 164. ISBN 978-0-7876-4018-7.

  • Delahunty, Andrew; Dignen, Sheila (2012). "Tree of knowledge". Oxford Dictionary of Reference and Allusion. OUP Oxford. p. 365. ISBN 978-0-19-956746-1.

  • Blayney, Benjamin, ed. (1769). "Genesis". The King James Bible. Oxford University Press.

  • Vost, Kevin (2016). Seven Gifts of the Holy Spirit. Sophia Institute Press. pp. 75–76. ISBN 978-1-62282-412-0.

  • Campo, Juan Eduardo (2009). Encyclopedia of Islam. Infobase Publishing. p. 515. ISBN 978-1-4381-2696-8.

  • Swartley, Keith E. (2005). Encountering the World of Islam. InterVarsity Press. p. 63. ISBN 978-0-8308-5644-2.

  • Burton, David (2002). "Knowledge and Liberation: Philosophical Ruminations on a Buddhist Conundrum". Philosophy East and West. 52 (3): 326–345. doi:10.1353/pew.2002.0011. ISSN 0031-8221. JSTOR 1400322. S2CID 145257341.

  • Chaudhary, Angraj (2017). "Avijjā)". In Sarao, K.T.S.; Long, Jeffery D. (eds.). Buddhism and Jainism. Dordrecht, The Netherlands: Springer Nature. pp. 202–203. ISBN 978-94-024-0851-5.

  • Chaudhary, Angraj (2017). "Wisdom (Buddhism)". In Sarao, K.T.S.; Long, Jeffery D. (eds.). Buddhism and Jainism. Dordrecht, The Netherlands: Springer Nature. pp. 1373–1374. ISBN 978-94-024-0851-5.

  • Jones, Constance; Ryan, James D. (2006). "jnana". Encyclopedia of Hinduism. Infobase Publishing. ISBN 978-0-8160-7564-5.

  • Jones, Constance; Ryan, James D. (2006). "Bhagavad Gita". Encyclopedia of Hinduism. Infobase Publishing. ISBN 978-0-8160-7564-5.

  • Allwood, Carl Martin (2013). "Anthropology of Knowledge". The Encyclopedia of Cross-Cultural Psychology. John Wiley & Sons, Inc. pp. 69–72. doi:10.1002/9781118339893.wbeccp025. ISBN 978-1-118-33989-3.

  • Boyer, Dominic (2007). "1. Of Dialectical Germans and Dialectical Ethnographers: Notes from an Engagement with Philosophy". In Harris, Mark (ed.). Ways of Knowing: Anthropological Approaches to Crafting Experience and Knowledge. Berghahn Books. ISBN 978-1-84545-364-0.

  • Cohen, Emma (2010). "Anthropology of knowledge". The Journal of the Royal Anthropological Institute. 16: S193–S202. doi:10.1111/j.1467-9655.2010.01617.x. hdl:11858/00-001M-0000-0012-9B72-7. JSTOR 40606072.

  • Barth, Fredrik (February 2002). "An Anthropology of Knowledge". Current Anthropology. 43 (1): 1–18. doi:10.1086/324131. hdl:1956/4191. ISSN 0011-3204.

  • Kuruk, Paul (2020). Traditional Knowledge, Genetic Resources, Customary Law and Intellectual Property: A Global Primer. Edward Elgar Publishing. p. 25. ISBN 978-1-78536-848-6.

  • Hansen, Judith Friedman (1982). "From Background to Foreground: Toward an Anthropology of Learning". Anthropology & Education Quarterly. 13 (2): 193. doi:10.1525/aeq.1982.13.2.05x1833m. ISSN 0161-7761. JSTOR 3216630.

  • Boyer, Dominic (2005). Spirit and System: Media, Intellectuals, and the Dialectic in Modern German Culture. University of Chicago Press. p. 34. ISBN 978-0-226-06891-6.

  • Aspen, Harald (2001). Amhara Traditions of Knowledge: Spirit Mediums and Their Clients. Otto Harrassowitz Verlag. p. 5. ISBN 978-3-447-04410-3.

  • Coser, Lewis A. (1968). "Knowledge, Sociology of". International Encyclopedia of the Social Sciences. Gale. ISBN 978-0-02-928751-4.

  • Tufari, P. (2003). "Knowledge, Sociology of". New Catholic Encyclopedia. Thomson/Gale. ISBN 978-0-7876-4008-8.

  • Scheler, Max; Stikkers, Kenneth W. (2012). Problems of a Sociology of Knowledge (Routledge Revivals). Routledge. p. 23. ISBN 978-0-415-62334-6.

  • Lee, Jerry Won (2017). The Politics of Translingualism: After Englishes. Routledge. p. 67. ISBN 978-1-315-31051-0.

  • Dreyer, Jaco S. (21 April 2017). "Practical theology and the call for the decolonisation of higher education in South Africa: Reflections and proposals". HTS Teologiese Studies / Theological Studies. 73 (4): 1–7. doi:10.4102/hts.v73i4.4805.

    Bibliography

  • Zagzebski, Linda (1999). "What Is Knowledge?". In Greco, John; Sosa, Ernest (eds.). The Blackwell Guide to Epistemology. Blackwell.

    External links

     https://en.wikipedia.org/wiki/Knowledge

    Abstraction in its main sense is a conceptual process wherein general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods.

    "An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category.[1]

    Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only those aspects which are relevant for a particular purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, excluding but not eliminating the other phenomenal and cognitive characteristics of that particular ball.[1] In a type–token distinction, a type (e.g., a 'ball') is more abstract than its tokens (e.g., 'that leather soccer ball').
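
    A rough programming analogy, not from the source: the type–token distinction parallels the class–instance distinction in object-oriented code, where the class abstracts away the details that distinguish one token from another.

        # Illustrative analogy only: a class as a 'type', instances as 'tokens'.
        class Ball:                                   # the abstract type 'ball'
            def __init__(self, material, sport):
                self.material = material              # token-specific details that
                self.sport = sport                    # abstraction filters out

        soccer_ball = Ball("leather", "soccer")       # one concrete token
        basketball = Ball("rubber", "basketball")     # a different token
        print(type(soccer_ball) is type(basketball))  # True: same type, two tokens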

    Abstraction in its secondary use is a material process,[2] discussed in the themes below.

    https://en.wikipedia.org/wiki/Abstraction

    The Peacock Room, designed in the Anglo-Japanese style by James Abbott McNeill Whistler and Edward Godwin, one of the most famous and comprehensive examples of Aesthetic interior design

    Aestheticism (also the Aesthetic movement) was an art movement in the late 19th century which privileged the aesthetic value of literature, music and the arts over their socio-political functions.[1][2] According to Aestheticism, art should be produced to be beautiful, rather than to serve a moral, allegorical, or other didactic purpose, a sentiment exemplified by the slogan "art for art's sake." Aestheticism originated in 1860s England with a radical group of artists and designers, including William Morris and Dante Gabriel Rossetti. It flourished in the 1870s and 1880s, gaining prominence and the support of notable writers such as Walter Pater and Oscar Wilde.

    Aestheticism challenged the values of mainstream Victorian culture, as many Victorians believed that literature and art fulfilled important ethical roles.[3] Writing in The Guardian, Fiona MacCarthy states that "the aesthetic movement stood in stark and sometimes shocking contrast to the crass materialism of Britain in the 19th century."[4]

    Aestheticism was named by the critic Walter Hamilton in The Aesthetic Movement in England in 1882.[5] By the 1890s, decadence, a term with origins in common with aestheticism, was in use across Europe.[3] 

    https://en.wikipedia.org/wiki/Aestheticism

    The argument from illusion is an argument for the existence of sense-data. It is posed as a criticism of direct realism.  

    https://en.wikipedia.org/wiki/Argument_from_illusion

    The anthropic principle, also known as the "observation selection effect",[1] is the hypothesis, first proposed in 1957 by Robert Dicke, that the range of possible observations that we could make about the universe is limited by the fact that observations could only happen in a universe capable of developing intelligent life in the first place.[2] Proponents of the anthropic principle argue that it explains why this universe has the age and the fundamental physical constants necessary to accommodate conscious life, since if either had been different, we would not have been around to make observations. Anthropic reasoning is often used to deal with the notion that the universe seems to be finely tuned for the existence of life.[3]

    There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts them at thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. The weak anthropic principle (WAP), as defined by Brandon Carter, states that the universe's ostensible fine tuning is the result of selection bias (specifically survivorship bias). Most such arguments draw upon some notion of the multiverse for there to be a statistical population of universes to select from. However, a single vast universe is sufficient for most forms of the WAP that do not specifically deal with fine tuning. Carter distinguished the WAP from the strong anthropic principle (SAP), which considers the universe in some sense compelled to eventually have conscious and sapient life emerge within it.[4][5] A form of the latter known as the participatory anthropic principle, articulated by John Archibald Wheeler, suggests on the basis of quantum mechanics that the universe, as a condition of its existence, must be observed, so implying one or more observers. Stronger yet is the final anthropic principle (FAP), proposed by John D. Barrow and Frank Tipler, which views the universe's structure as expressible by bits of information in such a way that information processing is inevitable and eternal.[4] 
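
    A toy numerical illustration of such an observation selection effect (entirely invented: the uniform distribution and the 0.9 threshold are arbitrary assumptions) shows how sampling only observer-permitting cases skews what gets observed.

        # Toy observation selection effect: if observers only arise where some
        # "constant" exceeds a threshold, every observation comes from that
        # tail, however rare such cases are overall. All numbers are arbitrary.
        import random

        universes = [random.random() for _ in range(100_000)]    # one constant each
        observer_permitting = [c for c in universes if c > 0.9]  # assumed threshold
        print(len(observer_permitting) / len(universes))  # ~0.1 of all cases ...
        print(min(observer_permitting))                   # ... yet every observed c > 0.9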

    https://en.wikipedia.org/wiki/Anthropic_principle

    The analytic–synthetic distinction is a semantic distinction, used primarily in philosophy to distinguish between propositions (in particular, statements that are affirmative subject–predicate judgments) that are of two types: analytic propositions and synthetic propositions. Analytic propositions are true or not true solely by virtue of their meaning, whereas synthetic propositions' truth, if any, derives from how their meaning relates to the world.[1]

    While the distinction was first proposed by Immanuel Kant, it was revised considerably over time, and different philosophers have used the terms in very different ways. Furthermore, some philosophers (starting with W.V.O. Quine) have questioned whether there is even a clear distinction to be made between propositions which are analytically true and propositions which are synthetically true.[2] Debates regarding the nature and usefulness of the distinction continue to this day in contemporary philosophy of language.[2] 

    Kant

    Conceptual containment

    The philosopher Immanuel Kant uses the terms "analytic" and "synthetic" to divide propositions into two types. Kant introduces the analytic–synthetic distinction in the Introduction to his Critique of Pure Reason (1781/1998, A6–7/B10–11). There, he restricts his attention to statements that are affirmative subject–predicate judgments and defines "analytic proposition" and "synthetic proposition" as follows:

    • analytic proposition: a proposition whose predicate concept is contained in its subject concept
    • synthetic proposition: a proposition whose predicate concept is not contained in its subject concept but is related to it

    Examples of analytic propositions, on Kant's definition, include:

    • "All bachelors are unmarried."
    • "All triangles have three sides."

    Kant's own example is:

    • "All bodies are extended," that is, occupy space. (A7/B11)

    Each of these statements is an affirmative subject–predicate judgment, and, in each, the predicate concept is contained within the subject concept. The concept "bachelor" contains the concept "unmarried"; the concept "unmarried" is part of the definition of the concept "bachelor". Likewise, for "triangle" and "has three sides", and so on.

    Examples of synthetic propositions, on Kant's definition, include:

    • "All bachelors are alone."
    • "All creatures with hearts have kidneys."

    Kant's own example is:

    • "All bodies are heavy," that is, they experience a gravitational force. (A7/B11)

    As with the previous examples classified as analytic propositions, each of these new statements is an affirmative subject–predicate judgment. However, in none of these cases does the subject concept contain the predicate concept. The concept "bachelor" does not contain the concept "alone"; "alone" is not a part of the definition of "bachelor". The same is true for "creatures with hearts" and "have kidneys"; even if every creature with a heart also has kidneys, the concept "creature with a heart" does not contain the concept "has kidneys". So the philosophical issue is: What kind of statement is "Language is used to transmit meaning"?
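
    Kant's containment test can be mimicked with a toy model (an illustration, not Kant's own machinery) in which a concept's definition is a set of component concepts and a subject–predicate proposition counts as analytic exactly when the predicate belongs to the subject's definition.

        # Toy model of conceptual containment. The definitions are illustrative.
        DEFINITIONS = {
            "bachelor": {"man", "unmarried"},
            "triangle": {"figure", "three-sided"},
            "body": {"substance", "extended"},
        }

        def is_analytic(subject, predicate):
            # Analytic: the predicate concept is contained in the subject concept.
            return predicate in DEFINITIONS.get(subject, set())

        print(is_analytic("bachelor", "unmarried"))  # True  -> analytic
        print(is_analytic("bachelor", "alone"))      # False -> synthetic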

    Kant's version and the a priori / a posteriori distinction

    In the Introduction to the Critique of Pure Reason, Kant contrasts his distinction between analytic and synthetic propositions with another distinction, the distinction between a priori and a posteriori propositions. He defines these terms as follows:

    • a priori proposition: a proposition whose justification does not rely upon experience. Moreover, the proposition can be validated by experience, but is not grounded in experience. Therefore, it is logically necessary.
    • a posteriori proposition: a proposition whose justification does rely upon experience. The proposition is validated by, and grounded in, experience. Therefore, it is logically contingent.

    Examples of a priori propositions include:

    • "All bachelors are unmarried."
    • "7 + 5 = 12."

    The justification of these propositions does not depend upon experience: one need not consult experience to determine whether all bachelors are unmarried, nor whether 7 + 5 = 12. (Of course, as Kant would grant, experience is required to understand the concepts "bachelor", "unmarried", "7", "+" and so forth. However, the a priori–a posteriori distinction as employed here by Kant refers not to the origins of the concepts but to the justification of the propositions. Once we have the concepts, experience is no longer necessary.)

    Examples of a posteriori propositions include:

    • "All bachelors are unhappy."
    • "Tables exist."

    Both of these propositions are a posteriori: any justification of them would require one's experience.

    The analytic/synthetic distinction and the a priori–a posteriori distinction together yield four types of propositions:

    • analytic a priori
    • synthetic a priori
    • analytic a posteriori
    • synthetic a posteriori

    Kant posits the third type as obviously self-contradictory. Ruling it out, he discusses only the remaining three types as components of his epistemological framework, each becoming, for brevity's sake, "analytic", "synthetic a priori", and "empirical" or "a posteriori" propositions, respectively. This triad accounts for all possible propositions. Examples of analytic and a posteriori statements have already been given; for synthetic a priori propositions, Kant gives examples from mathematics and physics.
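
    As a compact recap (a tabulation of the examples used in this section, with Kant's verdicts):

        # The four combinations, with the examples given in this section.
        KANT_CLASSIFICATION = {
            ("analytic", "a priori"): "All bachelors are unmarried.",
            ("synthetic", "a priori"): "7 + 5 = 12 (mathematics, on Kant's view)",
            ("analytic", "a posteriori"): None,  # ruled out as self-contradictory
            ("synthetic", "a posteriori"): "Tables exist.",
        }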

    The ease of knowing analytic propositions

    Part of Kant's argument in the Introduction to the Critique of Pure Reason involves arguing that there is no problem figuring out how knowledge of analytic propositions is possible. To know an analytic proposition, Kant argued, one need not consult experience. Instead, one needs merely to take the subject and "extract from it, in accordance with the principle of contradiction, the required predicate" (A7/B12). In analytic propositions, the predicate concept is contained in the subject concept. Thus, to know an analytic proposition is true, one need merely examine the concept of the subject. If one finds the predicate contained in the subject, the judgment is true.

    Thus, for example, one need not consult experience to determine whether "All bachelors are unmarried" is true. One need merely examine the subject concept ("bachelors") and see if the predicate concept "unmarried" is contained in it. And in fact, it is: "unmarried" is part of the definition of "bachelor" and so is contained within it. Thus the proposition "All bachelors are unmarried" can be known to be true without consulting experience.

    It follows from this, Kant argued, first: All analytic propositions are a priori; there are no a posteriori analytic propositions. It follows, second: There is no problem understanding how we can know analytic propositions; we can know them because we only need to consult our concepts in order to determine that they are true.

    The possibility of metaphysics

    After ruling out the possibility of analytic a posteriori propositions, and explaining how we can obtain knowledge of analytic a priori propositions, Kant also explains how we can obtain knowledge of synthetic a posteriori propositions. That leaves only the question of how knowledge of synthetic a priori propositions is possible. This question is exceedingly important, Kant maintains, because all scientific knowledge (for him Newtonian physics and mathematics) is made up of synthetic a priori propositions. If it is impossible to determine which synthetic a priori propositions are true, he argues, then metaphysics as a discipline is impossible. The remainder of the Critique of Pure Reason is devoted to examining whether and how knowledge of synthetic a priori propositions is possible.[3]

    https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_distinction

    The analogy of the sun (or simile of the sun or metaphor of the sun) is found in the sixth book of The Republic (507b–509c), written by the Greek philosopher Plato as a dialogue between his brother Glaucon and Socrates, and narrated by the latter. Upon being urged by Glaucon to define goodness, a cautious Socrates professes himself incapable of doing so.[1]: 169  Instead he draws an analogy and offers to talk about "the child of goodness"[1]: 169  (Greek: "ἔκγονός τε τοῦ ἀγαθοῦ"). Socrates reveals this "child of goodness" to be the sun, proposing that just as the sun illuminates, bestowing the ability to see and be seen by the eye,[1]: 169  with its light, so the idea of goodness illumines the intelligible with truth. While the analogy sets forth both epistemological and ontological theories, it is debated whether these are most authentic to the teaching of Socrates or its later interpretations by Plato.  

    https://en.wikipedia.org/wiki/Analogy_of_the_sun

    The analogy of the divided line (Greek: γραμμὴ δίχα τετμημένη, translit. grammē dicha tetmēmenē) is presented by the Greek philosopher Plato in the Republic (509d–511e). It is written as a dialogue between Glaucon and Socrates, in which the latter further elaborates upon the immediately preceding analogy of the sun at the former's request. Socrates asks Glaucon to not only envision this unequally bisected line but to imagine further bisecting each of the two segments. Socrates explains that the four resulting segments represent four separate 'affections' (παθήματα) of the psyche. The lower two sections are said to represent the visible while the higher two are said to represent the intelligible. These affections are described in succession as corresponding to increasing levels of reality and truth from conjecture (εἰκασία) to belief (πίστις) to thought (διάνοια) and finally to understanding (νόησις). Furthermore, this analogy not only elaborates a theory of the psyche but also presents metaphysical and epistemological views. 
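
    One mathematical feature of the construction is worth making explicit (a classical observation about the passage, not stated in the excerpt above): if the line is cut in some ratio and each segment is then cut in the same ratio, the two middle sub-segments, belief and thought, come out exactly equal.

        # Divided line: cut a line in ratio r, then cut each part in the same
        # ratio. The two middle sub-segments (belief and thought) are equal.
        def divide(length, r):
            shorter = length / (1 + r)
            return shorter, length - shorter          # (shorter, longer) parts

        r = 2.0                                       # any positive ratio works
        visible, intelligible = divide(1.0, r)        # the unequally bisected line
        eikasia, pistis = divide(visible, r)          # conjecture, belief
        dianoia, noesis = divide(intelligible, r)     # thought, understanding
        print(abs(pistis - dianoia) < 1e-12)          # True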

    https://en.wikipedia.org/wiki/Analogy_of_the_divided_line

    Ambiguity is the type of meaning in which a phrase, statement or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The ambi- part of the term reflects an idea of "two," as in "two meanings.")

    The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with information that is vague, it is difficult to form any interpretation at the desired level of specificity.

    Linguistic forms

    Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.

    Ambiguity in human language is argued to reflect principles of efficient communication.[2][3] Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system which is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system.

    Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.

    Structural analysis of an ambiguous Spanish sentence:
    Pepe vio a Pablo enfurecido ("Pepe saw Pablo infuriated")
    Interpretation 1: Pepe, who was angry, saw Pablo.
    Interpretation 2: Pepe saw that Pablo was angry.

    Lexical ambiguity

    The lexical ambiguity of a word or phrase pertains to its having more than one meaning in the language to which the word belongs.[4] "Meaning" here refers to whatever should be captured by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).

    The context in which an ambiguous word is used often makes it evident which of the meanings is intended. If, for instance, someone says "I buried $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word.

    Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation.
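
    A minimal sketch of such a method (illustrative only: the senses and gloss words below are invented, and real systems such as the Lesk algorithm use full dictionary glosses) scores each candidate sense by its word overlap with the surrounding context.

        # Toy word-sense disambiguation by gloss-context overlap (Lesk-style).
        SENSES = {
            "bank": {
                "financial institution": {"money", "deposit", "account", "loan"},
                "edge of a river": {"river", "water", "mud", "shore"},
            },
        }

        def disambiguate(word, context_words):
            # Pick the sense whose gloss shares the most words with the context.
            context = set(context_words)
            senses = SENSES[word]
            return max(senses, key=lambda s: len(senses[s] & context))

        print(disambiguate("bank", ["deposit", "the", "money"]))  # financial institution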

    The use of multi-defined words requires the author or speaker to clarify their context, and sometimes to elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear, concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from their candidate of choice. Ambiguity is a powerful tool of political science.

    More problematic are words whose senses express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being unlocked" or "impossible to lock").

    Semantic and syntactic ambiguity

    Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either

    1. to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or
    2. to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw").[5]

    Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your driver's license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation, can resolve a syntactic ambiguity.[5] For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar.
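
    The entrance-fee sentence can be disambiguated the way programming languages do it, with explicit grouping. The two bracketings below (an illustration of the point, not from the source) correspond to the two readings and can yield different answers.

        # Two readings of "you will need an entrance fee of $10 or your voucher
        # and your driver's license", as explicitly grouped boolean expressions.
        ten_dollars, voucher, has_license = True, False, False

        reading_1 = ten_dollars or (voucher and has_license)  # $10 OR (voucher AND license)
        reading_2 = has_license and (ten_dollars or voucher)  # license AND ($10 OR voucher)
        print(reading_1, reading_2)  # True False: the readings disagree here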

    Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?"

    Spoken language can contain many more types of ambiguities, which are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen.

    https://en.wikipedia.org/wiki/Ambiguity

    Always-already is a philosophical term regarding the perception of phenomena by the mind of an observer. The features of a phenomenon that seem to precede any perception of it are said to be "always already" present. 

    https://en.wikipedia.org/wiki/Always_already

    The Allegory of the Cave, or Plato's Cave, is an allegory presented by the Greek philosopher Plato in his work Republic (514a–520a) to compare "the effect of education (παιδεία) and the lack of it on our nature". It is written as a dialogue between Plato's brother Glaucon and his mentor Socrates, narrated by the latter. The allegory is presented after the analogy of the sun (508b–509c) and the analogy of the divided line (509d–511e).

    In the allegory "The Cave", Plato describes a group of people who have lived chained to the wall of a cave all their lives, facing a blank wall. The people watch shadows projected on the wall from objects passing in front of a fire behind them and give names to these shadows. The shadows are the prisoners' reality, but are not accurate representations of the real world. The shadows represent the fragment of reality that we can normally perceive through our senses, while the objects under the sun represent the true forms of objects that we can only perceive through reason. Three higher levels exist: the natural sciences; mathematics, geometry, and deductive logic; and the theory of forms.

    Socrates explains how the philosopher is like a prisoner who is freed from the cave and comes to understand that the shadows on the wall are actually not the direct source of the images seen. A philosopher aims to understand and perceive the higher levels of reality. However, the other inmates of the cave do not even desire to leave their prison, for they know no better life.[1]

    Socrates remarks that this allegory can be paired with previous writings, namely the analogy of the sun and the analogy of the divided line.

    Summary

    Allegory of the cave. From top to bottom:
    • The sun ("the Good")
    • Natural things (ideas)
    • Reflections of natural things (mathematical objects)
    • Fire (doctrine)
    • Artificial objects (creatures and objects)
    • Shadows of artificial objects, allegory (image, analogy of the sun and of the divided line)

    Imprisonment in the cave

    Plato begins by having Socrates ask Glaucon to imagine a cave where people have been imprisoned from childhood, but not from birth. These prisoners are chained so that their legs and necks are fixed, forcing them to gaze at the wall in front of them and not to look around at the cave, each other, or themselves (514a–b).[2] Behind the prisoners is a fire, and between the fire and the prisoners is a raised walkway with a low wall, behind which people walk carrying objects or puppets "of men and other living things" (514b).[2]

    The people walk behind the wall so their bodies do not cast shadows for the prisoners to see, but the objects they carry do ("just as puppet showmen have screens in front of them at which they work their puppets") (514a).[2] The prisoners cannot see any of what is happening behind them; they are only able to see the shadows cast upon the cave wall in front of them. The sounds of the people talking echo off the walls; the prisoners believe these sounds come from the shadows (514c).[2]

    Socrates suggests that the shadows are reality for the prisoners because they have never seen anything else; they do not realize that what they see are shadows of objects in front of a fire, much less that these objects are inspired by real things outside the cave which they do not see (514b–515a).[2]

    Departure from the cave

    Socrates then supposes that the prisoners are released.[3]: 199  A freed prisoner would look around and see the fire. The light would hurt his eyes and make it difficult for him to see the objects casting the shadows. If he were told that what he is seeing is real instead of the other version of reality he sees on the wall, he would not believe it. In his pain, Socrates continues, the freed prisoner would turn away and run back to what he is accustomed to (that is, the shadows of the carried objects). The light "... would hurt his eyes, and he would escape by turning away to the things which he was able to look at, and these he would believe to be clearer than what was being shown to him."[2]

    Socrates continues: "Suppose... that someone should drag him... by force, up the rough ascent, the steep way up, and never stop until he could drag him out into the light of the sun."[2] The prisoner would be angry and in pain, and this would only worsen when the radiant light of the sun overwhelms his eyes and blinds him.[2]

    "Slowly, his eyes adjust to the light of the sun. First he can see only shadows. Gradually he can see the reflections of people and things in water and then later see the people and things themselves. Eventually, he is able to look at the stars and moon at night until finally he can look upon the sun itself (516a)."[2] Only after he can look straight at the sun "is he able to reason about it" and what it is (516b).[2] (See also Plato's analogy of the sun, which occurs near the end of The Republic, Book VI.)[4][5]

    Return to the cave

    Socrates continues, saying that the freed prisoner would think that the world outside the cave was superior to the world he had experienced in it, and would attempt to share this with the prisoners remaining in the cave, trying to bring them along on the journey he had just endured; "he would bless himself for the change, and pity [the other prisoners]" and would want to bring his fellow cave dwellers out of the cave and into the sunlight (516c).[2]

    The returning prisoner, whose eyes have become accustomed to the sunlight, would be blind when he re-entered the cave, just as he was when he was first exposed to the sun (516e).[2] The prisoners who remained, according to the dialogue, would infer from the returning man's blindness that the journey out of the cave had harmed him and that they should not undertake a similar journey. Socrates concludes that the prisoners, if they were able, would therefore reach out and kill anyone who attempted to drag them out of the cave (517a).[2]

    Themes in the allegory appearing elsewhere in Plato's work

    The allegory is related to Plato's theory of Forms, according to which the "Forms" (or "Ideas"), and not the material world known to us through sensation, possess the highest and most fundamental kind of reality. Knowledge of the Forms constitutes real knowledge or what Socrates considers "the Good".[6] Socrates informs Glaucon that the most excellent people must follow the highest of all studies, which is to behold the Good. Those who have ascended to this highest level, however, must not remain there but must return to the cave and dwell with the prisoners, sharing in their labors and honors.

    Plato's Phaedo contains similar imagery to that of the allegory of the cave; a philosopher recognizes that before philosophy, his soul was "a veritable prisoner fast bound within his body... and that instead of investigating reality of itself and in itself is compelled to peer through the bars of a prison."[7]

    Scholarly discussion

    Scholars debate the possible interpretations of the allegory of the cave, either looking at it from an epistemological standpoint—one based on the study of how Plato believes we come to know things—or through a political (politeia) lens.[8] Much of the scholarship on the allegory falls between these two perspectives, with some completely independent of either. The epistemological view and the political view, fathered by Richard Lewis Nettleship and A. S. Ferguson, respectively, tend to be discussed most frequently.[8]

    Nettleship interprets the allegory of the cave as representative of our innate intellectual incapacity, in order to contrast our lesser understanding with that of the philosopher, as well as an allegory about people who are unable or unwilling to seek truth and wisdom.[9][8] Ferguson, on the other hand, bases his interpretation of the allegory on the claim that the cave is an allegory of human nature and that it symbolizes the opposition between the philosopher and the corruption of the prevailing political condition.[1]

    Cleavages have emerged within these respective camps of thought, however. Much of the modern scholarly debate surrounding the allegory has emerged from Martin Heidegger's exploration of the allegory, and philosophy as a whole, through the lens of human freedom in his book The Essence of Human Freedom: An Introduction to Philosophy and The Essence of Truth: On Plato's Cave Allegory and Theaetetus.[10] In response, Hannah Arendt, an advocate of the political interpretation of the allegory, suggests that through the allegory, Plato "wanted to apply his own theory of ideas to politics".[11] Conversely, Heidegger argues that the essence of truth is a way of being and not an object.[12] Arendt criticised Heidegger's interpretation of the allegory, writing that "Heidegger ... is off base in using the cave simile to interpret and 'criticize' Plato's theory of ideas".[11]

    Various scholars also debate the possibility of a connection between the allegory of the cave and Plato's analogy of the divided line and analogy of the sun. The divided line is a theory presented in Plato's Republic, displayed through a dialogue between Socrates and Glaucon in which they explore the possibility of a visible and an intelligible world: the visible world consists of items such as shadows and reflections (displayed as AB), elevating to the physical items themselves (displayed as BC), while the intelligible world consists of mathematical reasoning (displayed by CD) and philosophical understanding (displayed by DE).[3]

    Many see this as an explanation of the way the prisoner in the allegory of the cave goes through the journey: first in the visible world, with shadows such as those on the wall,[3] then with the realization of the physical, through the understanding of concepts such as the tree being separate from its shadow. The journey enters the intelligible world as the prisoner looks at the sun.[13]

    The divided line – (AC) is generally taken as representing the visible world and (CE) as representing the intelligible world[14]

    The Analogy of the Sun refers to the moment in Book VI in which Socrates, after being urged by Glaucon to define goodness, proposes instead an analogy through a "child of goodness". Socrates reveals this "child of goodness" to be the sun, proposing that just as the sun illuminates, bestowing on the eye the ability to see and be seen[15]: 169  with its light, so the idea of goodness illumines the intelligible with truth; this has led some scholars to believe that the sun and the intelligible world form a connection within the realm of the allegory of the cave.

    Influence

    The themes and imagery of Plato's cave have appeared throughout Western thought and culture. Some examples include:

    • Francis Bacon used the term "Idols of the Cave" to refer to errors of reason arising from the idiosyncratic biases and preoccupations of individuals.
    • Thomas Browne in his 1658 discourse Urn Burial stated: "A Dialogue between two Infants in the womb concerning the state of this world, might handsomely illustrate our ignorance of the next, whereof methinks we yet discourse in Platoes denne, and are but Embryon Philosophers".
    • Evolutionary biologist Jeremy Griffith's book A Species In Denial includes the chapter "Deciphering Plato's Cave Allegory".[16]
    • The films The Conformist, The Matrix, Cube, Dark City, The Truman Show, Us and City of Ember model Plato's allegory of the cave, as does the TV series 1899.[17]
    • The 2013 movie After the Dark has a segment where Mr. Zimit likens James' life to the Allegory of the Cave.
    • The Cave by José Saramago culminates in the discovery of Plato's Cave underneath the center, "an immense complex fusing the functions of an office tower, a shopping mall and a condominium".[18]
    • Emma Donoghue acknowledges the influence of Plato's allegory of the cave on her novel Room.[19]
    • Ray Bradbury's novel Fahrenheit 451 explores the themes of reality and perception also explored in Plato's allegory of the cave and Bradbury references Plato's work in the novel.[20][21]
    • José Carlos Somoza's novel The Athenian Murders is presented as a murder mystery but features many references to Plato's philosophy including the allegory of the cave.[22]
    • Novelist James Reich argues Nicholas Ray's film Rebel Without a Cause, starring James Dean, Natalie Wood, and Sal Mineo as John "Plato" Crawford is influenced by and enacts aspects of the allegory of the cave.[23]
    • In an episode of the television show Legion, titled "Chapter 16", the narrator uses Plato's Cave to explain "the most alarming delusion of all", narcissism.
    • H. G. Wells' short novel The Country of the Blind presents a similar "return to the cave" situation: a man accidentally discovers a village of blind people and tries to explain how he can "see", only to be ridiculed.
    • Daniel F. Galouye's post-apocalyptic novel Dark Universe describes the Survivors, who live underground in total darkness, using echolocation to navigate. Another race of people evolve, who are able to see using infrared.
    • C. S. Lewis' novels The Silver Chair and The Last Battle both reference the ideas and imagery of the Cave. In the former in Chapter 12, the Witch dismisses the idea of a greater reality outside the bounds of her Underworld. In The Last Battle most of the characters learn that the Narnia which they have known is but a "shadow" of the true Narnia. Lord Digory says in Chapter 15, "It's all in Plato, all in Plato".
    • In season 1, episode 2 of the 2015 Catalan television series Merlí, titled "Plato", a high school philosophy teacher demonstrates the allegory using household objects for a non-verbal, agoraphobic student, and makes a promise to him that "I'll get you out of the cave".
    • In the 2016 season 1, episode 1 of The Path, titled "What the Fire Throws", a cult leader uses the allegory in a sermon to inspire the members to follow him "up out of the world of shadows ... into the light".

    See also

    References


  • Ferguson, A. S. (1922). "Plato's Simile of Light. Part II. The Allegory of the Cave (Continued)". The Classical Quarterly. 16 (1): 15–28. doi:10.1017/S0009838800001956. JSTOR 636164. S2CID 170982104.

  • Plato. Rouse, W.H.D. (ed.). The Republic Book VII. Penguin Group Inc. pp. 365–401.

  • Plato, The Republic, Book 6, translated by Benjamin Jowett, online Archived 18 April 2009 at the Wayback Machine

  • Jowett, B. (ed.) (1941). Plato's The Republic. New York: The Modern Library. OCLC 964319.

  • Malcolm, John (1962-01-01). "The Line and the Cave". Phronesis. 7 (1): 38–45. doi:10.1163/156852862x00025. ISSN 0031-8868.

  • Watt, Stephen (1997), "Introduction: The Theory of Forms (Books 5–7)", Plato: Republic, London: Wordsworth Editions, pp. xiv–xvi, ISBN 978-1-85326-483-2

  • Elliott, R. K. (1967). "Socrates and Plato's Cave". Kant-Studien. 58 (2): 138. doi:10.1515/kant.1967.58.1-4.137. S2CID 170201374.

  • Hall, Dale (January 1980). "Interpreting Plato's Cave as an Allegory of the Human Condition". Apeiron. 14 (2): 74–86. doi:10.1515/APEIRON.1980.14.2.74. JSTOR 40913453. S2CID 170372013. ProQuest 1300369376.

  • Nettleship, Richard Lewis (1955). "Chapter 4 - The four stages of intelligence". Lectures On The Republic Of Plato (2nd ed.). London: Macmillan & Co.

  • McNeill, William (5 January 2003). "The Essence of Human Freedom: An Introduction to Philosophy and The Essence of Truth: On Plato's Cave Allegory and Theaetetus". Notre Dame Philosophical Reviews.

  • Abensour, Miguel (2007). "Against the Sovereignty of Philosophy over Politics: Arendt's Reading of Plato's Cave Allegory". Social Research: An International Quarterly. 74 (4): 955–982. doi:10.1353/sor.2007.0064. JSTOR 40972036. S2CID 152872480. Gale A174238908 Project MUSE 527590 ProQuest 209671578.

  • Powell, Sally (1 January 2011). "Discovering the unhidden: Heidegger's Interpretation of Plato's Allegory of the Cave and its Implications for Psychotherapy". Existential Analysis. 22 (1): 39–50. Gale A288874147.

  • Raven, J. E. (1953). "Sun, Divided Line, and Cave". The Classical Quarterly. 3 (1/2): 22–32. doi:10.1017/S0009838800002573. JSTOR 637158. S2CID 170803513.

  • "divided line," The Cambridge Dictionary of Philosophy, 2nd edition, Cambridge University Press, 1999, ISBN 0-521-63722-8, p. 239.

  • Pojman, Louis & Vaughn, L. (2011). Classics of Philosophy. New York: Oxford University Press, Inc.

  • Griffith, Jeremy (2003). A Species in Denial. Sydney: WTM Publishing & Communications. p. 83. ISBN 978-1-74129-000-4. Archived from the original on 2013-10-29. Retrieved 2013-04-01.

  • The Matrix and Philosophy: Welcome to the Desert of the Real by William Irwin. Open Court Publishing, 2002. ISBN 0-8126-9501-1. "Written for those fans of the film who are already philosophers."

  • Keates, Jonathan (24 November 2002). "Shadows on the Wall". The New York Times. Retrieved 24 November 2002.

  • "Q & A with Emma Donoghue – Spoiler-friendly Discussion of Room (showing 1–50 of 55)". www.goodreads.com. Retrieved 2016-01-30.

  • "Parallels between Ray Bradbury's Fahrenheit 69 and Plato's 'Allegory of the Cave'". Archived from the original on 2019-06-06.

  • Bradbury, Ray (1953). Fahrenheit 451. The Random House Publishing Group. p. 151. ISBN 978-0-758-77616-7.

  • Somoza, Jose Carlos (2003). The Athenian Murders. ABACUS. ISBN 978-0349116181.

    Further reading


    External links

     https://en.wikipedia.org/wiki/Allegory_of_the_cave

    In philosophy and psychology, an alief is an automatic or habitual belief-like attitude, particularly one that is in tension with a person's explicit beliefs.[1]

    For example, a person standing on a transparent balcony may believe that they are safe, but alieve that they are in danger. A person watching a sad movie may believe that the characters are completely fictional, but their aliefs may lead them to cry nonetheless. A person who is hesitant to eat fudge that has been formed into the shape of feces, or who is reluctant to drink from a sterilized bedpan, may believe that the substances are safe to eat and drink, but may alieve that they are not.

    The term alief was introduced by Tamar Gendler, a professor of philosophy and cognitive science at Yale University, in a pair of influential articles published in 2008.[2] Since the publication of these original articles, the notion of alief has been utilized by Gendler and others — including Paul Bloom[3] and Daniel Dennett[4] — to explain a range of psychological phenomena in addition to those listed above, including the pleasure of stories,[3] the persistence of positive illusions,[4] certain religious beliefs,[5] and certain psychiatric disturbances, such as phobias and obsessive–compulsive disorder.[4] 

    https://en.wikipedia.org/wiki/Alief_(mental_state)

    The basic premise of the concept of mundane reason is that the standard assumptions about reality that people typically make as they go about day to day, including the very fact that they experience their reality as perfectly natural, are actually the result of social, cultural, and historical processes that make a particular perception of the world readily available. It is the reasoning about the world, self, and others which presupposes the world and its relationship to the observer; according to Steven Shapin (Shapin 1994:31), it is a set of presuppositions about the subject, the object, and the nature of their relations.[1]

    https://en.wikipedia.org/wiki/Mundane_reason

    Moral rationalism, also called ethical rationalism, is a view in meta-ethics (specifically the epistemology of ethics) according to which moral principles are knowable a priori, by reason alone.[1] Some prominent figures in the history of philosophy who have defended moral rationalism are Plato and Immanuel Kant. Perhaps the most prominent figure in the history of philosophy who has rejected moral rationalism is David Hume. Recent philosophers who have defended moral rationalism include Richard Hare, Christine Korsgaard, Alan Gewirth, and Michael Smith.

    Moral rationalism is similar to the rationalist version of ethical intuitionism; however, they are distinct views. Moral rationalism is neutral on whether basic moral beliefs are known via inference or not. A moral rationalist who believes that some moral beliefs are justified non-inferentially is a rationalist ethical intuitionist. So, rationalist ethical intuitionism implies moral rationalism, but the reverse does not hold. 

    https://en.wikipedia.org/wiki/Moral_rationalism

    In semantics, semiotics, philosophy of language, metaphysics, and metasemantics, meaning "is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify".[1]

    The types of meanings vary according to the types of the thing that is being represented. Namely:

    • There are the things, which might have meaning;
    • There are things that are also signs of other things, and so, are always meaningful (i.e., natural signs of the physical world and ideas within the mind);
    • There are things that are necessarily meaningful such as words and nonverbal symbols.


    https://en.wikipedia.org/wiki/Meaning_(philosophy)

    Metafiction is a form of fiction that emphasises its own narrative structure in a way that continually reminds the audience that they are reading or viewing a fictional work. Metafiction is self-conscious about language, literary form, and story-telling, and works of metafiction directly or indirectly draw attention to their status as artifacts.[1] Metafiction is frequently used as a form of parody or a tool to undermine literary conventions and explore the relationship between literature and reality, life, and art.[2]

    Although metafiction is most commonly associated with postmodern literature that developed in the mid-20th century, its use can be traced back to much earlier works of fiction, such as The Canterbury Tales (Geoffrey Chaucer, 1387), Don Quixote (Miguel de Cervantes, 1605), "Chymical Wedding of Christian Rosenkreutz" (Johann Valentin Andreae, 1617), The Life and Opinions of Tristram Shandy, Gentleman (Laurence Sterne, 1759), Sartor Resartus (Thomas Carlyle, 1833–34),[3] and Vanity Fair (William Makepeace Thackeray, 1847).

    Metafiction became particularly prominent in the 1960s, with works such as Lost in the Funhouse by John Barth, Pale Fire by Vladimir Nabokov, "The Babysitter" and "The Magic Poker" by Robert Coover, Slaughterhouse-Five by Kurt Vonnegut,[4] The French Lieutenant's Woman by John Fowles, The Crying of Lot 49 by Thomas Pynchon, and Willie Master's Lonesome Wife by William H. Gass.

    Since the 1980s, contemporary Latino literature has produced an abundance of self-reflexive, metafictional works, including novels and short stories by Junot Díaz (The Brief Wondrous Life of Oscar Wao),[5] Sandra Cisneros (Caramelo),[6] Salvador Plascencia (The People of Paper),[7] Carmen Maria Machado (Her Body and Other Parties),[8] Rita Indiana (Tentacle),[9] and Valeria Luiselli (Lost Children Archive).[7] 

    https://en.wikipedia.org/wiki/Metafiction

    Mimesis (/mɪˈmiːsɪs, mə-, maɪ-, -əs/;[1] Ancient Greek: μίμησις, mīmēsis) is a term used in literary criticism and philosophy that carries a wide range of meanings, including imitatio, imitation, nonsensuous similarity, receptivity, representation, mimicry, the act of expression, the act of resembling, and the presentation of the self.[2]

    The original Ancient Greek term mīmēsis (μίμησις) derives from mīmeisthai (μιμεῖσθαι, 'to imitate'), itself coming from mimos (μῖμος, 'imitator, actor'). In ancient Greece, mīmēsis was an idea that governed the creation of works of art, in particular, with correspondence to the physical world understood as a model for beauty, truth, and the good. Plato contrasted mimesis, or imitation, with diegesis, or narrative. After Plato, the meaning of mimesis eventually shifted toward a specifically literary function in ancient Greek society.[3]

    One of the best-known modern studies of mimesis—understood in literature as a form of realism—is Erich Auerbach's Mimesis: The Representation of Reality in Western Literature, which opens with a comparison between the way the world is represented in Homer's Odyssey and the way it appears in the Bible.[4]

    In addition to Plato and Auerbach, mimesis has been theorised by thinkers as diverse as Aristotle,[5] Philip Sidney, Samuel Taylor Coleridge, Adam Smith, Gabriel Tarde, Sigmund Freud, Walter Benjamin,[6] Theodor Adorno,[7] Paul Ricœur, Luce Irigaray, Jacques Derrida, René Girard, Nikolas Kompridis, Philippe Lacoue-Labarthe, Michael Taussig,[8] Merlin Donald, Homi Bhabha, Roberto Calasso, and Nidesh Lawtoo. During the nineteenth century, the racial politics of imitation towards African Americans influenced the term mimesis and its evolution.[9] 

    https://en.wikipedia.org/wiki/Mimesis

    The burden of proof (Latin: onus probandi, shortened from Onus probandi incumbit ei qui dicit, non ei qui negat) is the obligation on a party in a dispute to provide sufficient warrant for its position.  

    https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)

    In contemporary philosophy, a brute fact is a fact that cannot be explained in terms of a deeper, more "fundamental" fact.[1] There are two main ways to explain something: say what "brought it about", or describe it at a more "fundamental" level.[citation needed] For example, a cat displayed on a computer screen can be explained, more "fundamentally", in terms of certain voltages in bits of metal in the screen, which in turn can be explained, more "fundamentally", in terms of certain subatomic particles moving in a certain manner. If one were to keep explaining the world in this way and reach a point at which no "deeper" explanations can be given, then one would have found some facts which are brute or inexplicable, in the sense that we cannot give them an ontological explanation. As it might be put, there may exist some things that just are.

    To reject the existence of brute facts is to think that everything can be explained ("Everything can be explained" is sometimes called the principle of sufficient reason). 

    https://en.wikipedia.org/wiki/Brute_fact

    The principle of sufficient reason states that everything must have a reason or a cause. The principle was articulated and made prominent by Gottfried Wilhelm Leibniz, with many antecedents, and was further used and developed by Arthur Schopenhauer and Sir William Hamilton, 9th Baronet.  

    https://en.wikipedia.org/wiki/Principle_of_sufficient_reason

    In metaphysics, ontology is the philosophical study of being, as well as related concepts such as existence, becoming, and reality.

    Ontology addresses questions like how entities are grouped into categories and which of these entities exist on the most fundamental level. Ontologists often try to determine what the categories or highest kinds are and how they form a system of categories that encompasses the classification of all entities. Commonly proposed categories include substances, properties, relations, states of affairs, and events. These categories are characterized by fundamental ontological concepts, including particularity and universality, abstractness and concreteness, or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level. Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities.[1]

    When used as a countable noun, the words ontology and ontologies refer not to the science of being but to theories within the science of being. Ontological theories can be divided into various types according to their theoretical commitments. Monocategorical ontologies hold that there is only one basic category, while polycategorical ontologies reject this view. Hierarchical ontologies assert that some entities exist on a more fundamental level and that other entities depend on them. Flat ontologies, on the other hand, deny such a privileged status to any entity.

    Etymology

    The compound word ontology ('study of being') combines

    onto- (Greek: ὄν, on;[note 1] GEN. ὄντος, ontos, 'being' or 'that which is') and
    -logia (-λογία, 'logical discourse').[2][3]

    While the etymology is Greek, the oldest extant record of the word itself is the Neo-Latin form ontologia, which appeared

    in 1606 in the Ogdoas Scholastica by Jacob Lorhard (Lorhardus), and
    in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).

    The first occurrence in English of ontology, as recorded by the Oxford English Dictionary,[4] came in 1664 through Archelogia philosophica nova... by Gideon Harvey.[5] The word was first used in its Latin form by philosophers, based on the Latin roots, which in turn derive from the Greek. 

    https://en.wikipedia.org/wiki/Ontology

    A binary opposition (also binary system) is a pair of related terms or concepts that are opposite in meaning. Binary opposition is the system of language and/or thought by which two theoretical opposites are strictly defined and set off against one another.[1] It is the contrast between two mutually exclusive terms, such as on and off, up and down, left and right.[2] Binary opposition is an important concept of structuralism, which sees such distinctions as fundamental to all language and thought.[2] In structuralism, a binary opposition is seen as a fundamental organizer of human philosophy, culture, and language.

    Binary opposition originated in Saussurean structuralist theory.[3] According to Ferdinand de Saussure, the binary opposition is the means by which the units of language have value or meaning; each unit is defined in reciprocal determination with another term, as in binary code. It is not a contradictory relation but a structural, complementary one.[3] Saussure demonstrated that a sign's meaning is derived from its context (syntagmatic dimension) and the group (paradigm) to which it belongs.[4] An example of this is that one cannot conceive of "good" without understanding "evil".[5]

    Typically, one of the two opposites assumes a role of dominance over the other. The categorization of binary oppositions is "often value-laden and ethnocentric", with an illusory order and superficial meaning.[6] Furthermore, Pieter Fourie discovered that binary oppositions have a deeper or second level of binaries that help to reinforce meaning. As an example, the concepts hero and villain involve secondary binaries: good/bad, handsome/ugly, liked/disliked, and so on.[7] 

    https://en.wikipedia.org/wiki/Binary_opposition

    In sociology, anthropology, archaeology, history, philosophy, and linguistics, structuralism is a general theory of culture and methodology that implies that elements of human culture must be understood by way of their relationship to a broader system.[1] It works to uncover the structures that underlie all the things that humans do, think, perceive, and feel.

    Alternatively, as summarized by philosopher Simon Blackburn, structuralism is:[2]

    [T]he belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure.

    Structuralism in Europe developed in the early 20th century, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague,[3] Moscow,[3] and Copenhagen schools of linguistics. As an intellectual movement, structuralism became the heir to existentialism.[4] After World War II, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields. French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism.[2]

    The structuralist mode of reasoning has since been applied in a range of fields, including anthropology, sociology, psychology, literary criticism, economics, and architecture. Along with Lévi-Strauss, the most prominent thinkers associated with structuralism include linguist Roman Jakobson and psychoanalyst Jacques Lacan.

    By the late 1960s, many of structuralism's basic tenets came under attack from a new wave of predominantly French intellectuals/philosophers such as historian Michel Foucault, Jacques Derrida, Marxist philosopher Louis Althusser, and literary critic Roland Barthes.[3] Though elements of their work necessarily relate to structuralism and are informed by it, these theorists eventually came to be referred to as post-structuralists. Many proponents of structuralism, such as Lacan, continue to influence continental philosophy and many of the fundamental assumptions of some of structuralism's post-structuralist critics are a continuation of structuralist thinking.[5] 

    https://en.wikipedia.org/wiki/Structuralism

    A Basic Limiting Principle (B.L.P.) is a general principle that limits our explanations metaphysically or epistemologically, and which normally goes unquestioned or even unnoticed in our everyday or scientific thinking. The term was introduced by the philosopher C. D. Broad in his 1949 paper "The Relevance of Psychical Research to Philosophy":

    "There are certain limiting principles which we unhesitatingly take for granted as the framework within which all our practical activities and our scientific theories are confined. Some of these seem to be self-evident. Others are so overwhelmingly supported by all the empirical facts which fall within the range of ordinary experience and the scientific elaborations of it (including under this heading orthodox psychology) that it hardly enters our heads to question them. Let us call these Basic Limiting Principles."[1]

    Broad offers nine examples of B.L.P.s, including the principle that there can be no backward causation, that there can be no action at a distance, and that one cannot perceive physical events or material things directly, unmediated by sensations.

    Notes


    1. Broad (1949)

    References


     https://en.wikipedia.org/wiki/Basic_limiting_principle

    Doxastic logic is a type of logic concerned with reasoning about beliefs.

    The term doxastic derives from the Ancient Greek δόξα (doxa, "opinion, belief"), from which the English term doxa ("popular opinion or belief") is also borrowed. Typically, a doxastic logic uses the notation Bx to mean "it is believed that x is the case", and the set 𝔹 = {b1, ..., bn} denotes a set of beliefs. In doxastic logic, belief is treated as a modal operator.

    There is complete parallelism between a person who believes propositions and a formal system that derives propositions. Using doxastic logic, one can express the epistemic counterpart of Gödel's incompleteness theorem of metalogic, as well as Löb's theorem, and other metalogical results in terms of belief.[1] 
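
    As a minimal sketch of "belief as a modal operator" (assuming the standard possible-worlds reading; the class and world names below are illustrative, not from the source):

```python
# B(phi) holds at world w iff phi holds at every world the agent
# considers possible from w: the usual Kripke semantics for belief.

from typing import Callable, Dict, List

World = str
Prop = Callable[[World], bool]

class KripkeModel:
    def __init__(self, accessible: Dict[World, List[World]]):
        self.accessible = accessible          # worlds the agent cannot rule out

    def believes(self, phi: Prop, w: World) -> bool:
        return all(phi(v) for v in self.accessible[w])

# In "w0" the agent cannot rule out "w1", where it does not rain.
model = KripkeModel({"w0": ["w0", "w1"], "w1": ["w1"]})
raining: Prop = lambda w: w == "w0"
print(model.believes(raining, "w0"))          # False: a non-rain world remains possible
print(model.believes(lambda w: True, "w0"))   # True: tautologies are always believed
```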

    https://en.wikipedia.org/wiki/Doxastic_logic

    Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each one corresponding to a different intellectual tendency, and each one corresponding to a different branch of academia. To say that Windelband supported that last dichotomy, however, is a misunderstanding of his own thought. For him, any branch of science and any discipline can be handled by both methods, as they offer two integrating points of view.[1]

    • Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general.
    • Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena.

    https://en.wikipedia.org/wiki/Nomothetic_and_idiographic

    A closed concept is a concept where all the necessary and sufficient conditions required to include something within the concept can be listed. For example, the concept of a triangle is closed because a three-sided polygon, and only a three-sided polygon, is a triangle. All the conditions required to call something a triangle can be, and are, listed.

    Its opposite is an "open concept".[1] 

    https://en.wikipedia.org/wiki/Closed_concept

    In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement: "If P then Q", Q is necessary for P, because the truth of Q is guaranteed by the truth of P (equivalently, it is impossible to have P without Q).[1] Similarly, P is sufficient for Q, because P being true always implies that Q is true, but P not being true does not always imply that Q is not true.[2]

    In general, a necessary condition is one that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition.[3] The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false.[4][5][6]

    In ordinary English (also natural language) "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being a male is a necessary condition for being a brother, but it is not sufficient—while being a male sibling is a necessary and sufficient condition for being a brother. Any conditional statement consists of at least one sufficient condition and at least one necessary condition. 
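
    The brother example can be checked mechanically. The sketch below (my own encoding, not from the source) tests sufficiency as "the implication holds in every truth assignment"; this also covers necessity, since "Q is necessary for P" amounts to "P is sufficient for Q".

```python
# Brute-force check of "P is sufficient for Q": P -> Q in every assignment.
from itertools import product

def sufficient(p, q, n_vars):
    return all(not p(*v) or q(*v) for v in product([False, True], repeat=n_vars))

# Encode the article's example over two Boolean traits (male, sibling):
is_brother      = lambda male, sibling: male and sibling
is_male_sibling = lambda male, sibling: male and sibling
is_male         = lambda male, sibling: male

print(sufficient(is_male_sibling, is_brother, 2))  # True: male sibling suffices
print(sufficient(is_brother, is_male, 2))          # True: so being male is necessary
print(sufficient(is_male, is_brother, 2))          # False: being male alone is not sufficient
```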

    https://en.wikipedia.org/wiki/Necessity_and_sufficiency

    Continuum fallacy

    The continuum fallacy (also known as the fallacy of the beard,[9][10] line-drawing fallacy, or decision-point fallacy[11]) is an informal fallacy related to the sorites paradox. Both fallacies cause one to erroneously reject a vague claim simply because it is not as precise as one would like it to be. Vagueness alone does not necessarily imply invalidity. The fallacy is the argument that two states or conditions cannot be considered distinct (or do not exist at all) because between them there exists a continuum of states.

    Strictly, the sorites paradox refers to situations where there are many discrete states (classically between 1 and 1,000,000 grains of sand, hence 1,000,000 possible states), while the continuum fallacy refers to situations where there is (or appears to be) a continuum of states, such as temperature. Whether any continua exist in the physical world is the classic question of atomism, and while both Newtonian physics and quantum physics model the world as continuous, there are some proposals in quantum gravity, such as loop quantum gravity, that suggest that notions of continuous length do not apply at the Planck length, and thus what appear to be continua may simply be as-yet undistinguishable discrete states.

    For the purpose of the continuum fallacy, one assumes that there is in fact a continuum, though this is generally a minor distinction: in general, any argument against the sorites paradox can also be used against the continuum fallacy. One argument against the fallacy is based on a simple counterexample: there do exist bald people and people who are not bald. Another argument is that for each degree of change in states, the degree of the condition changes slightly, and these slight changes build up to shift the state from one category to another. For example, perhaps the addition of a grain of rice causes the total group of rice to be "slightly more" of a heap, and enough slight changes will certify the group's heap status – see fuzzy logic.

    Proposed resolutions

    Denying the existence of heaps

    One may object to the first premise by denying that 1,000,000 grains of sand makes a heap. But 1,000,000 is just an arbitrarily large number, and the argument will apply with any such number. So the response must deny outright that there are such things as heaps. Peter Unger defends this solution.[12]

    Setting a fixed boundary

    A common first response to the paradox is to term any set of grains that has more than a certain number of grains in it a heap. If one were to define the "fixed boundary" at 10,000 grains, then one would claim that for fewer than 10,000 it is not a heap, while for 10,000 or more it is a heap.[13]
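
    A fixed boundary is trivial to state in code; the 10,000-grain cutoff below is the article's arbitrary example.

```python
# The fixed-boundary proposal: a sharp, arbitrary cutoff.
def is_heap(grains: int, boundary: int = 10_000) -> bool:
    return grains >= boundary

print(is_heap(9_999))   # False
print(is_heap(10_000))  # True: a single grain flips the verdict,
                        # which is exactly the objection raised below
```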

    Collins argues that such solutions are unsatisfactory as there seems little significance to the difference between 9,999 grains and 10,000 grains. The boundary, wherever it may be set, remains arbitrary, and so its precision is misleading. It is objectionable on both philosophical and linguistic grounds: the former on account of its arbitrariness and the latter on the ground that it is simply not how natural language is used.[14]

    Unknowable boundaries (or epistemicism)

    Timothy Williamson[15][16][17] and Roy Sorensen[18] claim that there are fixed boundaries but that they are necessarily unknowable.

    Supervaluationism

    Supervaluationism is a method for dealing with irreferential singular terms and vagueness. It allows one to retain the usual tautological laws even when dealing with undefined truth values.[19][20][21][22] As an example of a proposition about an irreferential singular term, consider the sentence "Pegasus likes licorice". Since the name "Pegasus" fails to refer, no truth value can be assigned to the sentence; there is nothing in the myth that would justify any such assignment. However, there are some statements about "Pegasus" which have definite truth values nevertheless, such as "Pegasus likes licorice or Pegasus doesn't like licorice". This sentence is an instance of the tautology "p ∨ ¬p", i.e. the valid schema "A or not-A". According to supervaluationism, it should be true regardless of whether or not its components have a truth value.

    By admitting sentences without defined truth values, supervaluationism avoids adjacent cases such that n grains of sand is a heap of sand, but n-1 grains is not; for example, "1,000 grains of sand is a heap" may be considered a border case having no defined truth value. Nevertheless, supervaluationism is able to handle a sentence like "1,000 grains of sand is a heap, or 1,000 grains of sand is not a heap" as a tautology, i.e. to assign it the value true.[citation needed]

    Mathematical explanation

    Let v be a classical valuation defined on every atomic sentence of the language L, and let At(x) be the number of distinct atomic sentences in x. Then for every sentence x, at most 2^At(x) distinct classical valuations can exist. A supervaluation V is a function from sentences to truth values such that a sentence x is super-true (i.e. V(x) = True) if and only if v(x) = True for every classical valuation v; likewise for super-false. Otherwise, V(x) is undefined, i.e. exactly when there are two classical valuations v and v' such that v(x) = True and v'(x) = False.

    For example, let p be the formal translation of "Pegasus likes licorice". Then there are exactly two classical valuations v and v' on p, viz. v(p) = True and v'(p) = False. So p is neither super-true nor super-false. However, the tautology p ∨ ¬p is evaluated to True by every classical valuation; it is hence super-true. Similarly, the formalization of the above heap proposition is neither super-true nor super-false, but "1,000 grains of sand is a heap, or 1,000 grains of sand is not a heap" is super-true.
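
    The definition above can be computed directly for small sentences. Here is a minimal sketch, using my own encoding of sentences as functions from atom assignments to truth values:

```python
# A sentence is super-true iff every classical valuation makes it true,
# super-false iff every one makes it false, and undefined otherwise.
from itertools import product

def supervaluate(sentence, atoms):
    outcomes = {sentence(dict(zip(atoms, vals)))
                for vals in product([False, True], repeat=len(atoms))}
    if outcomes == {True}:
        return "super-true"
    if outcomes == {False}:
        return "super-false"
    return "undefined"

# p = "Pegasus likes licorice": the bare atom gets no supervalue...
print(supervaluate(lambda m: m["p"], ["p"]))                # undefined
# ...but the tautology "p or not-p" is super-true:
print(supervaluate(lambda m: m["p"] or not m["p"], ["p"]))  # super-true
```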

    Truth gaps, gluts, and multi-valued logics

    Another method is to use a multi-valued logic. In this context, the problem is with the principle of bivalence: the sand is either a heap or is not a heap, without any shades of gray. Instead of two logical states, heap and not-heap, a three value system can be used, for example heap, indeterminate and not-heap. A response to this proposed solution is that three valued systems do not truly resolve the paradox as there is still a dividing line between heap and indeterminate and also between indeterminate and not-heap. The third truth-value can be understood either as a truth-value gap or as a truth-value glut.[23]

    Alternatively, fuzzy logic offers a continuous spectrum of logical states represented in the unit interval of real numbers [0,1]—it is a many-valued logic with infinitely-many truth-values, and thus the sand transitions gradually from "definitely heap" to "definitely not heap", with shades in the intermediate region. Fuzzy hedges are used to divide the continuum into regions corresponding to classes like definitely heap, mostly heap, partly heap, slightly heap, and not heap.[24][25] Though the problem remains of where these borders occur; e.g. at what number of grains sand starts being 'definitely' a heap.
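
    A rough sketch of the fuzzy approach follows; the ramp shape and all thresholds are illustrative assumptions of mine, not values from the source.

```python
# Map grain counts to a degree of heapness in [0, 1], then label regions
# of the continuum with hedges.
def heap_degree(grains: int, lo: int = 100, hi: int = 10_000) -> float:
    """0 below lo, 1 above hi, linear ramp in between."""
    if grains <= lo:
        return 0.0
    if grains >= hi:
        return 1.0
    return (grains - lo) / (hi - lo)

def hedge(d: float) -> str:
    if d == 0.0:  return "not a heap"
    if d < 0.25:  return "slightly a heap"
    if d < 0.50:  return "partly a heap"
    if d < 0.75:  return "mostly a heap"
    if d < 1.0:   return "almost definitely a heap"
    return "definitely a heap"

for n in (50, 500, 5_000, 50_000):
    print(n, round(heap_degree(n), 3), hedge(heap_degree(n)))
```

    Note that the cutoffs inside heap_degree and hedge are themselves sharp, which mirrors the border problem the text raises.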

    Hysteresis

    Another method, introduced by Raffman,[26] is to use hysteresis, that is, knowledge of what the collection of sand started as. Equivalent amounts of sand may be termed heaps or not based on how they got there. If a large heap (indisputably described as a heap) is diminished slowly, it preserves its "heap status" to a point, even as the actual amount of sand is reduced to a smaller number of grains. For example, 500 grains is a pile and 1,000 grains is a heap. There will be an overlap for these states. So if one is reducing it from a heap to a pile, it is a heap going down until 750. At that point, one would stop calling it a heap and start calling it a pile. But if one replaces one grain, it would not instantly turn back into a heap. When going up it would remain a pile until 900 grains. The numbers picked are arbitrary; the point is, that the same amount can be either a heap or a pile depending on what it was before the change. A common use of hysteresis would be the thermostat for air conditioning: the AC is set at 77 °F and it then cools the air to just below 77 °F, but does not activate again instantly when the air warms to 77.001 °F—it waits until almost 78 °F, to prevent instant change of state over and over again.[27]
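
    Using the passage's own arbitrary numbers (750 grains on the way down, 900 on the way up), the hysteresis idea can be sketched as a small state machine; the class name is illustrative.

```python
class SandCollection:
    """Label depends on history: flips to "pile" only below 750 grains,
    and back to "heap" only at 900 or more."""
    def __init__(self, grains: int):
        self.grains = grains
        self.label = "heap" if grains >= 900 else "pile"

    def change(self, delta: int) -> str:
        self.grains += delta
        if self.label == "heap" and self.grains < 750:
            self.label = "pile"
        elif self.label == "pile" and self.grains >= 900:
            self.label = "heap"
        return self.label

s = SandCollection(1_000)   # indisputably a heap
print(s.change(-200))       # 800 grains: still "heap" (has not fallen below 750)
print(s.change(-100))       # 700 grains: now "pile"
print(s.change(+100))       # 800 grains: still "pile" (has not reached 900)
print(s.change(+150))       # 950 grains: "heap" again
```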

    Group consensus

    One can establish the meaning of the word "heap" by appealing to consensus. Williamson, in his epistemic solution to the paradox, assumes that the meaning of vague terms must be determined by group usage.[28] The consensus method typically claims that a collection of grains is as much a "heap" as the proportion of people in a group who believe it to be so. In other words, the probability that any collection is considered a heap is the expected value of the distribution of the group's opinion.

    A group may decide that:

    • One grain of sand on its own is not a heap.
    • A large collection of grains of sand is a heap.

    Between the two extremes, individual members of the group may disagree with each other over whether any particular collection can be labelled a "heap". The collection can then not be definitively claimed to be a "heap" or "not a heap". This can be considered an appeal to descriptive linguistics rather than prescriptive linguistics, as it resolves the issue of definition based on how the population uses natural language. Indeed, if a precise prescriptive definition of "heap" is available then the group consensus will always be unanimous and the paradox does not occur.
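
    On this reading, heap-hood is just the group average of yes/no judgments; a one-function sketch (illustrative):

```python
def heap_probability(judgments: list[bool]) -> float:
    """Expected value of the group's yes/no opinions about one collection."""
    return sum(judgments) / len(judgments)

print(heap_probability([True] * 5))                         # 1.0: unanimous, no vagueness
print(heap_probability([True, True, True, False, False]))   # 0.6: a borderline case
```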

    Resolutions in utility theory


    "X more or equally red than Y"
    modelled as quasitransitive relation
    ≈ : indistinguishable, > : clearly more red
    Y
    X
    f10 e20 d30 c40 b50 a60
    f10 > > > >
    e20 > > >
    d30
    > >
    c40

    >
    b50


    a60



    In the economics field of utility theory, the sorites paradox arises when a person's preference patterns are investigated. As an example by Robert Duncan Luce, it is easy to find a person, say Peggy, who prefers 3 grams (that is, 1 cube) of sugar in her coffee to 15 grams (5 cubes); however, she will usually be indifferent between 3.00 and 3.03 grams, as well as between 3.03 and 3.06 grams, and so on, as well as finally between 14.97 and 15.00 grams.[29]

    Two measures were taken by economists to avoid the sorites paradox in such a setting.

    • Comparative, rather than positive, forms of properties are used. The above example deliberately does not make a statement like "Peggy likes a cup of coffee with 3 grams of sugar", or "Peggy does not like a cup of coffee with 15 grams of sugar". Instead, it states "Peggy likes a cup of coffee with 3 grams of sugar more than one with 15 grams of sugar".[33]
    • Economists distinguish preference ("Peggy likes ... more than ...") from indifference ("Peggy likes ... as much as ..."), and do not consider the latter relation to be transitive.[35] In the above example, abbreviating "a cup of coffee with x grams of sugar" by "cx", and "Peggy is indifferent between cx and cy" as "cx ≈ cy", the facts c3.00 ≈ c3.03 and c3.03 ≈ c3.06 and ... and c14.97 ≈ c15.00 do not imply c3.00 ≈ c15.00.

    Several kinds of relations were introduced to describe preference and indifference without running into the sorites paradox. Luce defined semi-orders and investigated their mathematical properties;[29] Amartya Sen performed a similar task for quasitransitive relations.[36] Abbreviating "Peggy likes cx more than cy" as "cx > cy", and abbreviating "cx > cy or cx ≈ cy" by "cx ≥ cy", it is reasonable that the relation ">" is a semi-order while ≥ is quasitransitive. Conversely, from a given semi-order > the indifference relation ≈ can be reconstructed by defining cx ≈ cy if neither cx > cy nor cy > cx. Similarly, from a given quasitransitive relation ≥ the indifference relation ≈ can be reconstructed by defining cx ≈ cy if both cx ≥ cy and cy ≥ cx. These reconstructed ≈ relations are usually not transitive.
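
    A sketch of the just-noticeable-difference reading of Luce's semi-orders follows; the threshold EPS and the direction of Peggy's taste are my assumptions for the demo, not claims from the cited papers.

```python
EPS = 0.05  # just-noticeable difference in grams of sugar

def prefers(x: float, y: float) -> bool:
    """cx > cy: Peggy prefers x grams to y grams, modelled here as
    'less sugar, by a noticeable margin'."""
    return y - x > EPS

def indifferent(x: float, y: float) -> bool:
    """cx ≈ cy, reconstructed: neither cx > cy nor cy > cx."""
    return not prefers(x, y) and not prefers(y, x)

print(indifferent(3.00, 3.03))   # True
print(indifferent(3.03, 3.06))   # True
print(indifferent(3.00, 3.06))   # False: the reconstructed ≈ is not transitive
print(prefers(3.00, 15.00))      # True: a clearly noticeable difference
```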

    The table above shows how the above color example can be modelled as a quasitransitive relation ≥. Color differences are overdone for readability. A color X is said to be more or equally red than a color Y if the table cell in row X and column Y is not empty. In that case, if it holds a "≈", then X and Y look indistinguishably equal, and if it holds a ">", then X looks clearly more red than Y. The relation ≥ is the disjoint union of the symmetric relation ≈ and the transitive relation >. Using the transitivity of >, the knowledge of both f10 > d30 and d30 > b50 allows one to infer that f10 > b50. However, since ≥ is not transitive, a "paradoxical" inference like "d30 ≥ e20 and e20 ≥ f10, hence d30 ≥ f10" is no longer possible. For the same reason, an inference like "d30 ≈ e20 and e20 ≈ f10, hence d30 ≈ f10" is no longer valid either. Similarly, to resolve the original heap variation of the paradox with this approach, the relation "X grains are more a heap than Y grains" could be considered quasitransitive rather than transitive.

    Geometrical shape definition

    The concept of a heap can be defined as a geometric shape in space, such as a pyramid, composed of randomly arranged sand particles. The existence of a heap can be said to cease when there are no longer sufficient sand particles to maintain the defined geometric shape. This philosophical resolution, influenced by mathematical principles, was proposed by Alex Dragomir, a Romanian philosopher, in 2023.[citation needed]

    See also

    References


  • "Sorites". Omnilexica. Archived from the original on 2018-09-20. Retrieved 2014-03-14.

  • Barker, C. (2009). "Vagueness". In Allan, Keith (ed.). Concise Encyclopedia of Semantics. Elsevier. p. 1037. ISBN 978-0-08-095968-9.

  • Sorensen, Roy A. (2009). "sorites arguments". In Jaegwon Kim; Sosa, Ernest; Rosenkrantz, Gary S. (eds.). A Companion to Metaphysics. John Wiley & Sons. p. 565. ISBN 978-1-4051-5298-3.

  • Bergmann, Merrie (2008). An Introduction to Many-Valued and Fuzzy Logic: Semantics, Algebras, and Derivation Systems. New York, NY: Cambridge University Press. p. 3. ISBN 978-0-521-88128-9.

  • (Barnes 1982), (Burnyeat 1982), (Williamson 1994)

  • Dolev, Y. (2004). "Why Induction Is No Cure For Baldness". Philosophical Investigations. 27 (4): 328–344. doi:10.1111/j.1467-9205.2004.t01-1-00230.x.

  • Read, Stephen (1995). Thinking About Logic, p.174. Oxford. ISBN 019289238X.

  • Russell, Bertrand (June 1923). "Vagueness". The Australasian Journal of Psychology and Philosophy. 1 (2): 84–92. doi:10.1080/00048402308540623. ISSN 1832-8660. Retrieved November 18, 2009. Shalizi's 1995 etext is archived at archive.org and at WebCite.

  • David Roberts: Reasoning: Other Fallacies Archived 2008-09-15 at the Wayback Machine

  • Thouless, Robert H. (1953), Straight and Crooked Thinking (PDF) (Revised ed.), London: Pan Books, p. 61

  • "Chapter Summary".

  • Unger, Peter (1979). "There Are No Ordinary Things". Synthese. 41 (2): 117–154. doi:10.1007/bf00869568. JSTOR 20115446. S2CID 46956605.

  • Collins 2018, p. 32.

  • Collins 2018, p. 35.

  • Williamson, Timothy (1992). "Inexact Knowledge". Mind. 101 (402): 218–242. doi:10.1093/mind/101.402.217. JSTOR 2254332.

  • Williamson, Timothy (1992). "Vagueness and Ignorance". Supplementary Proceedings of the Aristotelian Society. Aristotelian Society. 66: 145–162. doi:10.1093/aristoteliansupp/66.1.145. JSTOR 4106976.

  • Williamson, Timothy (1994). Vagueness. London: Routledge .

  • Sorensen, Roy (1988). Blindspots. Clarendon Press. ISBN 9780198249818.

  • Fine, Kit (Apr–May 1975). "Vagueness, Truth and Logic" (PDF). Synthese. 30 (3/4): 265–300. doi:10.1007/BF00485047. JSTOR 20115033. S2CID 17544558. Archived from the original (PDF) on 2015-06-08.

  • van Fraassen, Bas C. (1966). "Singular Terms, Truth-Value Gaps, and Free Logic" (PDF). Journal of Philosophy. 63 (17): 481–495. doi:10.2307/2024549. JSTOR 2024549.

  • Kamp, Hans (1975). Keenan, E. (ed.). Two Theories about Adjectives. Cambridge University Press. pp. 123–155.

  • Dummett, Michael (1975). "Wang's Paradox" (PDF). Synthese. 30 (3/4): 301–324. doi:10.1007/BF00485048. JSTOR 20115034. S2CID 46956702. Archived from the original (PDF) on 2016-04-22.

  • "Truth Values". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2018.

  • Zadeh, L.A. (June 1965). "Fuzzy sets". Information and Control. San Diego. 8 (3): 338–353. doi:10.1016/S0019-9958(65)90241-X. ISSN 0019-9958. Wikidata Q25938993.

  • Goguen, J. A. (1969). "The Logic of Inexact Concepts". Synthese. 19 (3–4): 325–378. doi:10.1007/BF00485654. JSTOR 20114646. S2CID 46965639.

  • Raffman, Diana (2014). Unruly Words: A Study of Vague Language. OUP. pp. 136ff. doi:10.1093/acprof:oso/9780199915101.001.0001. ISBN 9780199915101.

  • Raffman, D. (2005). "How to understand contextualism about vagueness: reply to Stanley". Analysis. 65 (287): 244–248. doi:10.1111/j.1467-8284.2005.00558.x. JSTOR 3329033.

  • Collins 2018, p. 33.

  • Robert Duncan Luce (Apr 1956). "Semiorders and a Theory of Utility Discrimination" (PDF). Econometrica. 24 (2): 178–191. doi:10.2307/1905751. JSTOR 1905751. Here: p.179

  • Wallace E. Armstrong (Mar 1948). "Uncertainty and the Utility Function". Economic Journal. 58 (229): 1–10. doi:10.2307/2226342. JSTOR 2226342.

  • Peter C. Fishburn (May 1970). "Intransitive Individual Indifference and Transitive Majorities". Econometrica. 38 (3): 482–489. doi:10.2307/1909554. JSTOR 1909554.

  • Alan D. Miller; Shiran Rachmilevitch (Feb 2014). Arrow's Theorem Without Transitivity (PDF) (Working paper). University of Haifa. p. 11.

  • The comparative form was found in all economics publications investigated so far.[30][31][32] Apparently it is entailed by the object of investigations in utility theory.

  • Wallace E. Armstrong (Sep 1939). "The Determinateness of the Utility Function". Economic Journal. 49 (195): 453–467. doi:10.2307/2224802. JSTOR 2224802.

  • According to Armstrong (1948), indifference was considered transitive in preference theory;[30]: 2  the latter was challenged in 1939 for this very reason,[34]: 463  and succeeded by utility theory.

    1. Sen, Amartya (1969). "Quasi-transitivity, rational choice and collective decisions". The Review of Economic Studies. 36 (3): 381–393. doi:10.2307/2296434. JSTOR 2296434. Zbl 0181.47302.

    Bibliography

    External links

     

    https://en.wikipedia.org/wiki/Sorites_paradox#Continuum_fallacy


    Causality (also called causation, or cause and effect) is influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is partly responsible for the effect, and the effect is partly dependent on the cause. In general, a process has many causes,[1] which are also said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Some writers have held that causality is metaphysically prior to notions of time and space.[2][3][4]

    Causality is an abstraction that indicates how the world progresses.[5] As such a basic concept, it is more apt as an explanation of other concepts of progression than as something to be explained by others more basic. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it.[6][7] Accordingly, causality is implicit in the logic and structure of ordinary language,[8] as well as explicit in the language of scientific causal notation.

    In English studies of Aristotelian philosophy, the word "cause" is used as a specialized technical term, the translation of Aristotle's term αἰτία, by which Aristotle meant "explanation" or "answer to a 'why' question". Aristotle categorized the four types of answers as material, formal, efficient, and final "causes". In this case, the "cause" is the explanans for the explanandum, and failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one.

    David Hume, as part of his opposition to rationalism, argued that pure reason alone cannot prove the reality of efficient causality; instead, he appealed to custom and mental habit, observing that all human knowledge derives solely from experience.

    The topic of causality remains a staple in contemporary philosophy.

    https://en.wikipedia.org/wiki/Causality

    Categorization is the ability and activity of recognizing shared features or similarities between the elements of the experience of the world (such as objects, events, or ideas), organizing and classifying experience by associating them with a more abstract group (that is, a category, class, or type)[1][2] on the basis of their traits, features, similarities or other criteria that are universal to the group. Categorization is considered one of the most fundamental cognitive abilities, and as such it is studied particularly by psychology and cognitive linguistics.

    Categorization is sometimes considered synonymous with classification (cf., Classification synonyms). Categorization and classification allow humans to organize things, objects, and ideas that exist around them and simplify their understanding of the world.[3] Categorization is something that humans and other organisms do: "doing the right thing with the right kind of thing." The activity of categorizing things can be nonverbal or verbal. For humans, both concrete objects and abstract ideas are recognized, differentiated, and understood through categorization. Objects are usually categorized for some adaptive or pragmatic purposes.

    Categorization is grounded in the features that distinguish the category's members from nonmembers. Categorization is important in learning, prediction, inference, decision making, language, and many forms of organisms' interaction with their environments.

    Overview

    Categories are distinct collections of concrete or abstract instances (category members) that are considered equivalent by the cognitive system. Using category knowledge requires one to access mental representations that define the core features of category members (cognitive psychologists refer to these category-specific mental representations as concepts).[4][5]

    Categorization theorists often model the categorization of objects using taxonomies with three hierarchical levels of abstraction.[6] For example, a plant could be identified at a high level of abstraction by simply labeling it a flower, a medium level of abstraction by specifying that the flower is a rose, or a low level of abstraction by further specifying this particular rose as a dog rose. Categories in a taxonomy are related to one another via class inclusion, with the highest level of abstraction being the most inclusive and the lowest level of abstraction being the least inclusive.[6] The three levels of abstraction are as follows (a small sketch of such a taxonomy follows the list):

    • Superordinate level, Genus (e.g., Flower) - The highest and most inclusive level of abstraction. Exhibits the highest degree of generality and the lowest degree of within-category similarity.[7]
    • Basic Level, Species (e.g., Rose) - The middle level of abstraction. Rosch and colleagues (1976) suggest the basic level to be the most cognitively efficient.[6] Basic-level categories exhibit high within-category similarities and high between-category dissimilarities. Furthermore, the basic level is the most inclusive level at which category exemplars share a generalized identifiable shape.[6] Adults most often use basic-level object names, and children learn basic object names first.[6]
    • Subordinate level (e.g., Dog Rose) - The lowest level of abstraction. Exhibits the highest degree of specificity and within-category similarity.[7]
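
    A minimal sketch of such a three-level taxonomy in code (the Python representation, the helper function, and the extra "damask rose" and "tulip" entries are illustrative assumptions, not from the source): categories are related by class inclusion, with higher levels including everything below them.

```python
# A three-level taxonomy as nested dicts, related by class inclusion.
taxonomy = {
    "flower": {                      # superordinate level: most inclusive
        "rose": {                    # basic level: most cognitively efficient
            "dog rose": {},          # subordinate level: least inclusive
            "damask rose": {},
        },
        "tulip": {},
    },
}

def includes(tree, category, member):
    """Return True if `member` falls under `category` via class inclusion."""
    if category in tree:
        def contains(sub):           # search the subtree rooted at `category`
            return member in sub or any(contains(v) for v in sub.values())
        return contains(tree[category])
    return any(includes(sub, category, member) for sub in tree.values())

print(includes(taxonomy, "flower", "dog rose"))  # True: the superordinate level includes the subordinate
print(includes(taxonomy, "rose", "tulip"))       # False: tulips are not roses
```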

    https://en.wikipedia.org/wiki/Categorization


    In philosophy, objectivity is the concept of truth independent from individual subjectivity (bias caused by one's perception, emotions, or imagination). A proposition is considered to have objective truth when its truth conditions are met without bias caused by the mind of a sentient being. Scientific objectivity refers to the ability to judge without partiality or external influence. Objectivity in the moral framework calls for moral codes to be assessed based on the well-being of the people in the society that follow it.[1] Moral objectivity also calls for moral codes to be compared to one another through a set of universal facts and not through subjectivity.[1]

    Objectivity of knowledge

    Plato considered geometry a condition of idealism concerned with universal truth. In Republic, Socrates opposes the sophist Thrasymachus's relativistic account of justice, and argues that justice is mathematical in its conceptual structure, and that ethics was therefore a precise and objective enterprise with impartial standards for truth and correctness, like geometry.[2] The rigorous mathematical treatment Plato gave to moral concepts set the tone for the western tradition of moral objectivism that came after him. His contrast between objectivity and opinion became the basis for philosophies intent on resolving the questions of reality, truth, and existence. He saw opinions as belonging to the shifting sphere of sensibilities, as opposed to a fixed, eternal and knowable incorporeality. Where Plato distinguished between how we know things and their ontological status, subjectivism such as George Berkeley's depends on perception.[3] In Platonic terms, a criticism of subjectivism is that it is difficult to distinguish between knowledge, opinions, and subjective knowledge.[4]

    Platonic idealism is a form of metaphysical objectivism, holding that the ideas exist independently from the individual. Berkeley's empirical idealism, on the other hand, holds that things only exist as they are perceived. Both approaches boast an attempt at objectivity. Plato's definition of objectivity can be found in his epistemology, which is based on mathematics, and his metaphysics, where knowledge of the ontological status of objects and ideas is resistant to change.[3]

    In opposition to philosopher René Descartes' method of personal deduction, natural philosopher Isaac Newton applied the relatively objective scientific method to look for evidence before forming a hypothesis.[5] Partially in response to Kant's rationalism, logician Gottlob Frege applied objectivity to his epistemological and metaphysical philosophies. If reality exists independently of consciousness, then it would logically include a plurality of indescribable forms. Objectivity requires a definition of truth formed by propositions with truth value. An attempt at forming an objective construct incorporates ontological commitments to the reality of objects.[6]

    The importance of perception in evaluating and understanding objective reality is debated in the observer effect of quantum mechanics. Direct or naïve realists rely on perception as key in observing objective reality, while instrumentalists hold that observations are useful in predicting objective reality. The concepts that encompass these ideas are important in the philosophy of science. Philosophies of mind explore whether objectivity relies on perceptual constancy.[7]

    Objectivity in ethics

    Ethical subjectivism

    The term "ethical subjectivism" covers two distinct theories in ethics. According to cognitive versions of ethical subjectivism, the truth of moral statements depends upon people's values, attitudes, feelings, or beliefs. Some forms of cognitivist ethical subjectivism can be counted as forms of realism, others are forms of anti-realism.[8] David Hume is a foundational figure for cognitive ethical subjectivism. On a standard interpretation of his theory, a trait of character counts as a moral virtue when it evokes a sentiment of approbation in a sympathetic, informed, and rational human observer.[9] Similarly, Roderick Firth's ideal observer theory held that right acts are those that an impartial, rational observer would approve of.[10] William James, another ethical subjectivist, held that an end is good (to or for a person) just in the case it is desired by that person (see also ethical egoism). According to non-cognitive versions of ethical subjectivism, such as emotivism, prescriptivism, and expressivism, ethical statements cannot be true or false, at all: rather, they are expressions of personal feelings or commands.[11] For example, on A. J. Ayer's emotivism, the statement, "Murder is wrong" is equivalent in meaning to the emotive, "Murder, Boo!"[12]

    Ethical objectivism

    According to the ethical objectivist, the truth or falsehood of typical moral judgments does not depend upon the beliefs or feelings of any person or group of persons. This view holds that moral propositions are analogous to propositions about chemistry, biology, or history, insofar as they are true despite what anyone believes, hopes, wishes, or feels. When they fail to describe this mind-independent moral reality, they are false—no matter what anyone believes, hopes, wishes, or feels.

    There are many versions of ethical objectivism, including various religious views of morality, Platonistic intuitionism, Kantianism, utilitarianism, and certain forms of ethical egoism and contractualism. Note that Platonists define ethical objectivism in an even narrower way, so that it requires the existence of intrinsic value. Consequently, they reject the idea that contractualists or egoists could be ethical objectivists. Objectivism, in turn, places primacy on the origin of the frame of reference—and, as such, considers any arbitrary frame of reference ultimately a form of ethical subjectivism by a transitive property, even when the frame incidentally coincides with reality and can be used for measurements.

    Moral objectivism and relativism

    Moral objectivism is the view that what is right or wrong does not depend on what anyone thinks is right or wrong.[1] Moral objectivism depends on how the moral code affects the well-being of the people of the society. Moral objectivism allows for moral codes to be compared to each other through a set of universal facts rather than through the mores of a society. Nicholas Rescher defines mores as customs within every society (e.g., what women can wear) and states that moral codes cannot be compared to one's personal moral compass.[1] An example is the categorical imperative of Immanuel Kant, which says: "Act only according to that maxim [i.e., rule] whereby you can at the same time will that it become a universal law." John Stuart Mill was a consequentialist thinker and therefore proposed utilitarianism, which asserts that in any situation the right thing to do is whatever is likely to produce the most happiness overall. Moral relativism is the view that an actor's moral codes are locally derived from their culture.[13] The rules within moral codes are equal to each other and are only deemed "right" or "wrong" within their specific moral codes.[13] Relativism is opposed to universalism because there is not a single moral code for every agent to follow.[13] Relativism differs from nihilism because it validates every moral code that exists, whereas nihilism does not.[13] When it comes to relativism, the Russian philosopher and writer Fyodor Dostoevsky coined the phrase "If God doesn't exist, everything is permissible". That phrase was his view of the consequences for rejecting theism as a basis of ethics. American anthropologist Ruth Benedict argued that there is no single objective morality and that moral codes necessarily vary by culture.[14]

    Objectivity in history

    History as a discipline has wrestled with notions of objectivity from its very beginning. While its object of study is commonly thought to be the past, the only things historians have to work with are different versions of stories based on individual perceptions of reality and memory.

    Several streams of historiography developed ways to address this dilemma: historians like Leopold von Ranke (19th century) have advocated for the use of extensive evidence (especially archived physical paper documents) to recover the bygone past, claiming that, as opposed to people's memories, objects remain stable in what they say about the era they witnessed, and therefore represent a better insight into objective reality.[15] In the 20th century, the Annales School emphasized the importance of shifting focus away from the perspectives of influential men (usually politicians around whose actions narratives of the past were shaped) and putting it on the voices of ordinary people.[16] Postcolonial streams of history challenge the colonial-postcolonial dichotomy and critique Eurocentric academic practices, such as the demand for historians from colonized regions to anchor their local narratives to events happening in the territories of their colonizers to earn credibility.[17] All the streams explained above try to uncover whose voice is more or less truth-bearing and how historians can stitch together versions of it to best explain what "actually happened."

    The anthropologist Michel-Rolph Trouillot developed the concepts of historicity 1 and 2 to explain the difference between the materiality of socio-historical processes (H1) and the narratives that are told about the materiality of socio-historical processes (H2).[18] This distinction hints that H1 would be understood as the factual reality that elapses and is captured with the concept of "objective truth", and that H2 is the collection of subjectivities that humanity has stitched together to grasp the past. Debates about positivism, relativism, and postmodernism are relevant to evaluating these concepts' importance and the distinction between them.

    Ethical considerations

    In his book "Silencing the past", Trouillot wrote about the power dynamics at play in history-making, outlining four possible moments in which historical silences can be created: (1) making of sources (who gets to know how to write, or to have possessions that are later examined as historical evidence), (2) making of archives (what documents are deemed important to save and which are not, how to classify materials, and how to order them within physical or digital archives), (3) making of narratives (which accounts of history are consulted, which voices are given credibility), and (4) the making of history (the retrospective construction of what The Past is).[19]

    Because history (official, public, familial, personal) informs current perceptions and how we make sense of the present, whose voice gets to be included in it, and how, has direct consequences in material socio-historical processes. Thinking of current historical narratives as impartial depictions of the totality of events unfolded in the past, by labeling them as "objective", risks sealing historical understanding off from revision. Acknowledging that history is never objective and always incomplete offers a meaningful opportunity to support social justice efforts. Under said notion, voices that have been silenced are placed on an equal footing with the grand and popular narratives of the world, appreciated for their unique insight into reality through their subjective lens.


    References


  • Rescher, Nicholas (January 2008). "Moral Objectivity". Social Philosophy and Policy. 25 (1): 393–409. doi:10.1017/S0265052508080151. S2CID 233358084.

  • Plato, "The Republic", 337B, Harper Collins Publishers, 1968

  • E. Douka Kabîtoglou (1991). "Shelley and Berkeley: The Platonic Connection" (PDF): 20–35.

  • Mary Margaret Mackenzie (1985). "Plato's moral theory". Journal of Medical Ethics. 11 (2): 88–91. doi:10.1136/jme.11.2.88. PMC 1375153. PMID 4009640.

  • Suzuki, Fumitaka (March 2012). "The Cogito Proposition of Descartes and Characteristics of His Ego Theory" (PDF). Bulletin of Aichi University of Education. 61: 73–80.

  • Clinton Tolley. "Kant on the Generality of Logic" (PDF). University of California, San Diego.

  • Tyler Burge, Origins of Objectivity, Oxford University Press, 2010.

  • Thomas Pölzler (2018). "How to Measure Moral Realism". Review of Philosophy and Psychology. 9 (3): 647–670. doi:10.1007/s13164-018-0401-8. PMC 6132410. PMID 30220945.

  • Rayner, Sam (2005). "Hume's Moral Philosophy". Macalester Journal of Philosophy. 14 (1): 6–21.

  • "A Substantive Revision to Firth's Ideal Observer Theory" (PDF). Stance. Ball State University. 3: 55–61. April 2010.

  • Marchetti, Sarin (21 December 2010). "William James on Truth and Invention in Morality". European Journal of Pragmatism and American Philosophy. II (2). doi:10.4000/ejpap.910.

  • "24.231 Ethics – Handout 3 Ayer's Emotivism" (PDF). Massachusetts Institute of Technology.

  • Wreen, Michael (July 2018). "What Is Moral Relativism?". Philosophy. 93 (3): 337–354. doi:10.1017/S0031819117000614. S2CID 171526831. ProQuest 2056736032.

  • "Moral Relativism and Objectivism". University of California, Santa Cruz. Retrieved 20 February 2019.

  • Leopold von Ranke, “Author’s Preface,” in History of the Reformation in Germany, trans. Sarah Austin, vii-xi. London: George Rutledge and Sons, 1905.

  • Andrea, A. (1991). Mentalities in history. The Historian 53(3), 605-608.

  • Chakrabarty, D. (1992). "Postcoloniality and the artifice of history: Who speaks for 'Indian' pasts?". Representations (37): 1–26. doi:10.2307/2928652.

  • Trouillot, Michel-Rolph (1995). Silencing the Past: Power and the Production of History. Boston: Beacon Press.


    Further reading

    • Bachelard, Gaston. La formation de l'esprit scientifique: contribution à une psychanalyse de la connaissance. Paris: Vrin, 2004. ISBN 2-7116-1150-7.
    • Castillejo, David. The Formation of Modern Objectivity. Madrid: Ediciones de Arte y Bibliofilia, 1982.
    • Gaukroger, Stephen. (2012). Objectivity. Oxford University Press.
    • Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1996, 3rd ed. ISBN 0-226-45808-3.
    • Megill, Allan. Rethinking Objectivity. London: Duke UP, 1994.
    • Nagel, Ernest. The Structure of Science. New York: Brace and World, 1961.
    • Nagel, Thomas. The View from Nowhere. Oxford: Oxford UP, 1986
    • Nozick, Robert. Invariances: the structure of the objective world. Cambridge: Harvard UP, 2001.
    • Popper, Karl. R. Objective Knowledge: An Evolutionary Approach. Oxford University Press, 1972. ISBN 0-19-875024-2.
    • Rescher, Nicholas. Objectivity: the obligations of impersonal reason. Notre Dame: Notre Dame Press, 1977.
    • Rorty, Richard. Objectivity, Relativism, and Truth. Cambridge: Cambridge University Press, 1991
    • Rousset, Bernard. La théorie kantienne de l'objectivité, Paris: Vrin, 1967.
    • Scheffler, Israel. Science and Subjectivity. Hackett, 1982.
    • Kessler. Voices of Wisdom: A Multicultural Philosophy Reader.


     https://en.wikipedia.org/wiki/Objectivity_(philosophy)

    An object is a philosophical term often used in contrast to the term subject. A subject is an observer and an object is a thing observed. For modern philosophers like Descartes, consciousness is a state of cognition that includes the subject—which can never be doubted as only it can be the one who doubts—and some object(s) that may be considered as not having real or full existence or value independent of the subject who observes it. Metaphysical frameworks also differ in whether they consider objects existing independently of their properties and, if so, in what way.[1]

    The pragmatist Charles S. Peirce defines the broad notion of an object as anything that we can think or talk about.[2] In a general sense it is any entity: the pyramids, gods,[3] Socrates,[3] Alpha Centauri, the number seven, a disbelief in predestination or the fear of cats. In a strict sense it refers to any definite being.

    A related notion is objecthood. Objecthood is the state of being an object. One approach to defining it is in terms of objects' properties and relations. Descriptions of all bodies, minds, and persons must be in terms of their properties and relations. The philosophical question of the nature of objecthood concerns how objects are related to their properties and relations. For example, it seems that the only way to describe an apple is by describing its properties and how it is related to other things. Its properties may include its redness, its size, and its composition, while its relations may include "on the table", "in the room" and "being bigger than other apples".

    The notion of an object must address two problems: the problem of change and the problem of substance. Two leading theories about objecthood are substance theory, wherein substances (objects) are distinct from their properties, and bundle theory, wherein objects are no more than bundles of their properties.
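
    A minimal sketch of the contrast between the two theories (the class names and the apple example are illustrative assumptions, not from the source): under substance theory an object survives a change of properties because the substance persists, while under bundle theory a new bundle of properties simply is a different object.

```python
from dataclasses import dataclass, field

@dataclass
class SubstanceObject:
    substance_id: int                       # the bearer that "stands under" change
    properties: set = field(default_factory=set)

BundleObject = frozenset                    # bundle theory: an object is only its properties

apple_s = SubstanceObject(1, {"red", "round"})
apple_b = BundleObject({"red", "round"})

apple_s.properties = {"brown", "round"}     # the substance survives the change of properties
print(apple_s.substance_id)                 # still object 1

print(BundleObject({"brown", "round"}) == apple_b)  # False: a changed bundle is a different object
```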

    Etymology

    In English the word object is derived from the Latin objectus (p.p. of obicere) with the meaning "to throw, or put before or against", from ob- and jacere, "to throw".[4] As such it is a root for several important words used to derive meaning, such as objectify (to materialize), objective (a future reference), and objectivism (a philosophical doctrine that knowledge is based on objective reality).

    Terms and usage

    Broadly construed, the word object names a maximally general category, whose members are eligible for being referred to, quantified over and thought of. Terms similar to the broad notion of object include thing, being, entity, item, existent, term, unit, and individual.[3]

    In ordinary language, one is inclined to call only a material object "object".[3] In certain contexts, it may be socially inappropriate to apply the word object to animate beings, especially to human beings, while the words entity and being are more acceptable.

    Some authors use object in contrast to property; that is to say, an object is an entity that is not a property. Objects differ from properties in that objects cannot be referred to by predicates. Such usage may exclude abstract objects from counting as objects. Terms similar to such usage of object include substance, individual, and particular.[3]

    The word object can be used in contrast to subject as well. There are two definitions. The first definition holds that an object is an entity that fails to experience and that is not conscious. The second definition holds that an object is an entity experienced. The second definition differs from the first one in that the second definition allows for a subject to be an object at the same time.[3]

    Change

    An attribute of an object is called a property if it can be experienced (e.g. its color, size, weight, smell, taste, and location). Objects manifest themselves through their properties. These manifestations seem to change in a regular and unified way, suggesting that something underlies the properties. The change problem asks what that underlying thing is. According to substance theory, the answer is a substance, that which stands under the change.

    Problem of substance

    Because substances are only experienced through their properties, a substance itself is never directly experienced. The problem of substance asks on what basis one can conclude the existence of a substance that cannot be seen or scientifically verified. According to David Hume's bundle theory, the answer is none; thus an object is merely its properties.

    In the Mūlamadhyamakakārikā Nagarjuna seizes the dichotomy between objects as collections of properties or as separate from those properties to demonstrate that both assertions fall apart under analysis. By uncovering this paradox he then provides a solution (pratītyasamutpāda – "dependent origination") that lies at the very root of Buddhist praxis. Although Pratītyasamutpāda is normally limited to caused objects, Nagarjuna extends his argument to objects in general by differentiating two distinct ideas – dependent designation and dependent origination. He proposes that all objects are dependent upon designation, and therefore any discussion regarding the nature of objects can only be made in light of the context. The validity of objects can only be established within those conventions that assert them.[5]

    Facts

    Bertrand Russell updated the classical terminology with one more term, the fact;[6] "Everything that there is in the world I call a fact." Facts (objects) are opposed to beliefs, which are "subjective" and may be errors on the part of the subject, the knower who is their source and who is certain of himself and little else. All doubt implies the possibility of error and therefore admits the distinction between subjectivity and objectivity. The knower is limited in the ability to tell fact from belief, false from true objects, and engages in reality testing, an activity that will result in more or less certainty regarding the reality of the object. According to Russell,[7] "we need a description of the fact which would make a given belief true" where "Truth is a property of beliefs." Knowledge is "true beliefs".[8]

    Applications

    Value theory

    Value theory concerns the value of objects. When it concerns economic value, it generally deals with physical objects. However, when concerning philosophic or ethic value, an object may be both a physical object and an abstract object (e.g. an action).

    Physics

    Limiting discussions of objecthood to the realm of physical objects may simplify them. However, defining physical objects in terms of fundamental particles (e.g. quarks) leaves open the question of the nature of a fundamental particle, and thus of what categories of being can be used to explain physical objects.

    Semantics

    Symbols represent objects; how they do so, the map–territory relation, is the basic problem of semantics.[9]


    References

     

  • Goswick, Dana (27 July 2016). "Ordinary Objects". oxfordbibliographies. doi:10.1093/obo/9780195396577-0312. ISBN 978-0-19-539657-7. Retrieved 20 April 2020.

  • Peirce, Charles S. "Object". University of Helsinki. Archived from the original on 2009-02-14. Retrieved 2009-03-19.

  • Rettler, Bradley and Andrew M. Bailey. "Object". Stanford Encyclopedia of Philosophy. Retrieved 29 January 2021.

  • Klein, Ernest (1969) A comprehensive etymological dictionary of the English language, Vol II, Elsevier publishing company, Amsterdam, pp. 1066–1067

  • MMK 24:18

  • Russell 1948, p. 143.

  • Russell 1948, pp. 148–149.

  • Russell 1948, p. 154.

    • Dąmbska, Izydora (2016). "Symbols". Poznan Studies in the Philosophy of the Sciences & the Humanities. 105: 201–209 – via Humanities Source.

    Sources

    • Russell, Bertrand (1948). Human Knowledge: Its Scope and Limits.

    External links

    • Bradley Rettler & Andrew M. Bailey (2017). Object. Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
    • Gideon Rosen (2022). Abstract objects. Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.

     

    https://en.wikipedia.org/wiki/Object_(philosophy)

    A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.

    Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962).

    Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.[1]

    As one commentator summarizes:

    Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton’s Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest.[2]

     https://en.wikipedia.org/wiki/Paradigm_shift

    The Composition of Causes was a set of philosophical laws advanced by John Stuart Mill in his watershed essay A System of Logic. These laws outlined Mill's view of the epistemological components of emergentism, a school of philosophical thought that posited a decidedly opportunistic approach to the classic dilemma of causation nullification.

    Mill was determined to prove that the intrinsic properties of all things relied on three primary tenets, which he called the Composition of Causes. These were:

    1. The Cause of Inherent Efficiency, a methodological understanding of deterministic forces engaged in the perpetual axes of the soul, as it pertained to its own self-awareness.

    2. The so-called Sixth Cause, a conceptual notion embodied by the system of inter-related segments of social and elemental vitra. This was a hotly debated matter in early 17th-century philosophical circles, especially in the halls of the Reichtaven in Meins, where the spirit of Geudl still lingered.

    3. The Cause of Multitude, an evolutionary step taken from Hemmlich's Plurality of a Dysfunctional Enterprise, detailing the necessary linkage between both sets of perception-based self-awareness.

    Furthermore, the Composition of Causes elevated Mill's standing in ontological circles; he was lauded by his contemporaries for applying a conceptual vision to an often-argued discipline.


     https://en.wikipedia.org/wiki/Criteria_of_truth

    A pattern is a regularity in the world, in human-made design,[1] or in abstract ideas. As such, the elements of a pattern repeat in a predictable manner. A geometric pattern is a kind of pattern formed of geometric shapes and typically repeated like a wallpaper design.

    Any of the senses may directly observe patterns. Conversely, abstract patterns in science, mathematics, or language may be observable only by analysis. Direct observation in practice means seeing visual patterns, which are widespread in nature and in art. Visual patterns in nature are often chaotic, rarely exactly repeating, and often involve fractals. Natural patterns include spirals, meanders, waves, foams, tilings, cracks, and those created by symmetries of rotation and reflection. Patterns have an underlying mathematical structure;[2]: 6  indeed, mathematics can be seen as the search for regularities, and the output of any function is a mathematical pattern. Similarly in the sciences, theories explain and predict regularities in the world.

    In art and architecture, decorations or visual motifs may be combined and repeated to form patterns designed to have a chosen effect on the viewer. In computer science, a software design pattern is a known solution to a class of problems in programming. In fashion, the pattern is a template used to create any number of similar garments.

    In many areas of the decorative arts, from ceramics and textiles to wallpaper, "pattern" is used for an ornamental design that is manufactured, perhaps for many different shapes of object.

    Nature

    Nature provides examples of many kinds of pattern, including symmetries, trees and other structures with a fractal dimension, spirals, meanders, waves, foams, tilings, cracks and stripes.[3]

    Symmetry

    Symmetry is widespread in living things. Animals that move usually have bilateral or mirror symmetry as this favours movement.[2]: 48–49  Plants often have radial or rotational symmetry, as do many flowers, as well as animals which are largely static as adults, such as sea anemones. Fivefold symmetry is found in the echinoderms, including starfish, sea urchins, and sea lilies.[2]: 64–65 

    Among non-living things, snowflakes have striking sixfold symmetry: each flake is unique, its structure recording the varying conditions during its crystallisation similarly on each of its six arms.[2]: 52  Crystals have a highly specific set of possible crystal symmetries; they can be cubic or octahedral, but cannot have fivefold symmetry (unlike quasicrystals).[2]: 82–84 
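
    A small sketch of what sixfold rotational symmetry amounts to computationally (the hexagonal point set below is an assumed example, not from the source): the figure maps onto itself under a rotation of 60 degrees.

```python
# Sixfold rotational symmetry, as in a snowflake: the point set maps onto
# itself under a rotation of 60 degrees about the centre.
import cmath

points = {cmath.rect(1, k * cmath.pi / 3) for k in range(6)}  # hexagon vertices
rot = cmath.exp(1j * cmath.pi / 3)                            # rotate by 60 degrees
rotated = {p * rot for p in points}

# Compare with a tolerance, since floating-point rotation is inexact.
print(all(any(abs(p - q) < 1e-9 for q in points) for p in rotated))  # True
```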

    Spirals

    Spiral patterns are found in the body plans of animals including molluscs such as the nautilus, and in the phyllotaxis of many plants, both of leaves spiralling around stems, and in the multiple spirals found in flowerheads such as the sunflower and fruit structures like the pineapple.[4]

    Chaos, turbulence, meanders and complexity

    Vortex street turbulence

    Chaos theory predicts that while the laws of physics are deterministic, there are events and patterns in nature that never exactly repeat because extremely small differences in starting conditions can lead to widely differing outcomes.[5] The patterns in nature tend to be static due to dissipation during the emergence process, but when there is interplay between injection of energy and dissipation, a complex dynamic can arise.[6] Many natural patterns are shaped by this complexity, including vortex streets,[7] other effects of turbulent flow such as meanders in rivers,[8] and nonlinear interactions of the system.[9]

    Waves, dunes

    Dune ripples and boards form a symmetrical pattern.

    Waves are disturbances that carry energy as they move. Mechanical waves propagate through a medium (air or water), making it oscillate as they pass by.[10] Wind waves are surface waves that create the chaotic patterns of the sea. As they pass over sand, such waves create patterns of ripples; similarly, as the wind passes over sand, it creates patterns of dunes.[11]

    Bubbles, foam

    Foams obey Plateau's laws, which require films to be smooth and continuous, and to have a constant average curvature. Foam and bubble patterns occur widely in nature, for example in radiolarians, sponge spicules, and the skeletons of silicoflagellates and sea urchins.[12][13]

    Cracks

    Shrinkage Cracks

    Cracks form in materials to relieve stress: with 120 degree joints in elastic materials, but at 90 degrees in inelastic materials. Thus the pattern of cracks indicates whether the material is elastic or not. Cracking patterns are widespread in nature, for example in rocks, mud, tree bark and the glazes of old paintings and ceramics.[14]

    Spots, stripes

    Alan Turing,[15] and later the mathematical biologist James D. Murray[16] and other scientists, described a mechanism that spontaneously creates spotted or striped patterns, for example in the skin of mammals or the plumage of birds: a reaction–diffusion system involving two counter-acting chemical mechanisms, one that activates and one that inhibits a development, such as of dark pigment in the skin.[17] These spatiotemporal patterns slowly drift, the animals' appearance changing imperceptibly as Turing predicted.
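
    A minimal simulation sketch of such a reaction–diffusion mechanism (this uses the Gray–Scott model as a stand-in for Turing's general activator–inhibitor scheme; the grid size and parameter values are illustrative assumptions, not taken from the text):

```python
# Gray–Scott reaction–diffusion: U is consumed by the autocatalytic V
# (one mechanism feeds, the other depletes), and both species diffuse.
import numpy as np

n = 128
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065   # diffusion rates, feed and kill rates
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50    # seed a small square perturbation
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# U now holds a spotted/striped field, viewable e.g. with matplotlib's imshow.
```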

    Skins of a South African giraffe (Giraffa camelopardalis giraffa) and Burchell's zebra (Equus quagga burchelli)

    Art and architecture

    Elaborate ceramic tiles at Topkapi Palace

    Tilings

    In visual art, pattern consists in regularity which in some way "organizes surfaces or structures in a consistent, regular manner." At its simplest, a pattern in art may be a geometric or other repeating shape in a painting, drawing, tapestry, ceramic tiling or carpet, but a pattern need not necessarily repeat exactly as long as it provides some form or organizing "skeleton" in the artwork.[18] In mathematics, a tessellation is the tiling of a plane using one or more geometric shapes (which mathematicians call tiles), with no overlaps and no gaps.[19]
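
    As a small worked example of the tiling condition (an illustrative computation, not from the source): copies of a regular polygon can tile the plane by themselves only if the polygon's interior angle divides 360 degrees evenly, so that whole copies meet around each vertex.

```python
# Which regular n-gons tile the plane by themselves? k copies meeting at a
# vertex require k * interior_angle == 360 degrees exactly.
for n in range(3, 13):
    interior = 180.0 * (n - 2) / n          # interior angle of a regular n-gon
    tiles = (360.0 / interior).is_integer() # does it divide 360 evenly?
    print(f"{n}-gon: interior {interior:.1f} deg, tiles plane: {tiles}")
# Only the triangle (60), square (90) and hexagon (120) pass.
```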

    Zentangles

    Zentangle, a blend of meditative Zen practice with the purposeful drawing of repetitive patterns or artistic tangles, is a concept and process trademarked by Rick Roberts and Maria Thomas.[20] The process uses patterns such as cross-hatching, dots, curves and other mark making on small pieces of paper or tiles, which can then be put together to form mosaic clusters, or shaded or coloured in; like the doodle, it can be used as a therapeutic device to help relieve stress and anxiety in children and adults.[21][22] Zentangles comprising relevant or irrelevant shapes can be drawn within the outline of an animal, human or object to provide texture and interest.[1]

    In architecture

    Patterns in architecture: the Virupaksha temple at Hampi has a fractal-like structure where the parts resemble the whole.

    In architecture, motifs are repeated in various ways to form patterns. Most simply, structures such as windows can be repeated horizontally and vertically (see leading picture). Architects can use and repeat decorative and structural elements such as columns, pediments, and lintels.[23] Repetitions need not be identical; for example, temples in South India have a roughly pyramidal form, where elements of the pattern repeat in a fractal-like way at different sizes.[24]

    Patterns in Architecture: the columns of Zeus's temple in Athens

    See also: pattern book.

    Science and mathematics

    Fractal model of a fern illustrating self-similarity

    Mathematics is sometimes called the "Science of Pattern", in the sense of rules that can be applied wherever needed.[25] For example, any sequence of numbers that may be modeled by a mathematical function can be considered a pattern. Mathematics can be taught as a collection of patterns.[26]
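
    A toy illustration of a number sequence counting as a pattern because a function models it (the sequence and function here are assumed examples):

```python
# A sequence counts as a pattern when some function generates it:
# here 1, 4, 9, 16, 25 is modeled by f(n) = n^2.
seq = [1, 4, 9, 16, 25]
f = lambda n: n * n
print(all(f(i + 1) == x for i, x in enumerate(seq)))  # True: the rule fits every term
```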

    Fractals

    Some mathematical rule-patterns can be visualised, and among these are those that explain patterns in nature including the mathematics of symmetry, waves, meanders, and fractals. Fractals are mathematical patterns that are scale invariant. This means that the shape of the pattern does not depend on how closely you look at it. Self-similarity is found in fractals. Examples of natural fractals are coast lines and tree shapes, which repeat their shape regardless of the magnification at which you view them. While self-similar patterns can appear indefinitely complex, the rules needed to describe or produce their formation can be simple (e.g. Lindenmayer systems describing tree shapes).[27]
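
    A minimal sketch of one such simple rule system, a Lindenmayer system (the axiom and rewriting rule below are a classic textbook choice, given here as an assumed example): a short rule, applied repeatedly, yields a self-similar, tree-like description.

```python
# A Lindenmayer (L-) system: rewrite every symbol in parallel each iteration;
# symbols without a rule are kept unchanged.
rules = {"F": "F[+F]F[-F]F"}   # a classic plant-like rewriting rule
axiom = "F"

def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(lsystem(axiom, rules, 2))
# Interpreted as turtle graphics (F = draw forward, +/- = turn, [] = branch),
# the resulting string describes a branching, self-similar plant shape.
```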

    In pattern theory, devised by Ulf Grenander, mathematicians attempt to describe the world in terms of patterns. The goal is to lay out the world in a more computationally friendly manner.[28]

    In the broadest sense, any regularity that can be explained by a scientific theory is a pattern. As in mathematics, science can be taught as a set of patterns.[29]

    Computer science

    In computer science, a software design pattern, in the sense of a template, is a general solution to a problem in programming. A design pattern provides a reusable architectural outline that may speed the development of many computer programs.[30]
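
    As one concrete illustration, here is a minimal sketch of a classic design pattern, Observer (the class, method and event names are illustrative, not from a specific library): a reusable outline in which many listeners react to one event source.

```python
# Observer: one event source, many listeners; a reusable solution template
# rather than code tied to a single program.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

subject = Subject()
subject.attach(lambda e: print(f"logger saw: {e}"))
subject.attach(lambda e: print(f"ui updated for: {e}"))
subject.notify("file_saved")   # both observers react to the one event
```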

    Fashion

    In fashion, the pattern is a template, a technical two-dimensional tool used to create any number of identical garments. It can be considered as a means of translating from the drawing to the real garment.[31]

    See also

    References


  • Garai, Achraf (3 March 2022). "What are design patterns?". achrafgarai.com. Retrieved 1 January 2023.

  • Stewart, Ian (2001). What shape is a snowflake?. London: Weidenfeld & Nicolson. ISBN 0-297-60723-5. OCLC 50272461.

  • Stevens, Peter. Patterns in Nature, 1974. Page 3.

  • Kappraff, Jay (2004). "Growth in Plants: A Study in Number" (PDF). Forma. 19: 335–354.

  • Crutchfield, James P; Farmer, J Doyne; Packard, Norman H; Shaw, Robert S (December 1986). "Chaos". Scientific American. 254 (12): 46–57. Bibcode:1986SciAm.255f..46C. doi:10.1038/scientificamerican1286-46.

  • Clerc, Marcel G.; González-Cortés, Gregorio; Odent, Vincent; Wilson, Mario (29 June 2016). "Optical textures: characterizing spatiotemporal chaos". Optics Express. 24 (14): 15478–85. arXiv:1601.00844. Bibcode:2016OExpr..2415478C. doi:10.1364/OE.24.015478. PMID 27410822. S2CID 34610459.

  • von Kármán, Theodore. Aerodynamics. McGraw-Hill (1963): ISBN 978-0070676022. Dover (1994): ISBN 978-0486434858.

  • Lewalle, Jacques (2006). "Flow Separation and Secondary Flow: Section 9.1" (PDF). Lecture Notes in Incompressible Fluid Dynamics: Phenomenology, Concepts and Analytical Tools. Syracuse, NY: Syracuse University. Archived from the original (PDF) on 2011-09-29.

  • Scroggie, A.J; Firth, W.J; McDonald, G.S; Tlidi, M; Lefever, R; Lugiato, L.A (August 1994). "Pattern formation in a passive Kerr cavity" (PDF). Chaos, Solitons & Fractals. 4 (8–9): 1323–1354. Bibcode:1994CSF.....4.1323S. doi:10.1016/0960-0779(94)90084-1.

  • French, A.P. Vibrations and Waves. Nelson Thornes, 1971.

  • Tolman, H.L. (2008), "Practical wind wave modeling", in Mahmood, M.F. (ed.), CBMS Conference Proceedings on Water Waves: Theory and Experiment (PDF), Howard University, USA, 13–18 May 2008: World Scientific Publ.

  • Philip Ball. Shapes, 2009. pp. 68, 96–101.

  • Frederick J. Almgren, Jr. and Jean E. Taylor, The geometry of soap films and soap bubbles, Scientific American, vol. 235, pp. 82–93, July 1976.

  • Stevens, Peter. 1974. Page 207.

  • Turing, A. M. (1952). "The Chemical Basis of Morphogenesis". Philosophical Transactions of the Royal Society B. 237 (641): 37–72. Bibcode:1952RSPTB.237...37T. doi:10.1098/rstb.1952.0012.

  • Murray, James D. (9 March 2013). Mathematical Biology. Springer Science & Business Media. pp. 436–450. ISBN 978-3-662-08539-4.

  • Ball, Philip. Shapes. 2009. Pages 159–167.

  • Jirousek, Charlotte (1995). "Art, Design, and Visual Thinking". Pattern. Cornell University. Retrieved 12 December 2012.

  • Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. New York: W. H. Freeman. ISBN 9780716711933.

  • "Zentangle". Zentangle. Retrieved 2023-02-03.

  • Hsu, M.F. (July 2021). "Effects of Zentangle art workplace health promotion activities on rural healthcare workers". Public Health. 196: 217–222. doi:10.1016/j.puhe.2021.05.033. PMID 34274696. S2CID 236092775.

  • Chung, S.K. (September 2022). "The effects of Zentangles on affective well-being among adults". American Journal of Occupational Therapy. 1 (76). doi:10.5014/ajot.2022.049113. PMID 35943847. S2CID 251444115.

  • Adams, Laurie (2001). A History of Western Art. McGraw Hill. p. 99.

  • Jackson, William Joseph (2004). Heaven's Fractal Net: Retrieving Lost Visions in the Humanities. Indiana University Press. p. 2.

  • Resnik, Michael D. (November 1981). "Mathematics as a Science of Patterns: Ontology and Reference". Noûs. 15 (4): 529–550. doi:10.2307/2214851. JSTOR 2214851.

  • Bayne, Richard E (2012). "MATH 012 Patterns in Mathematics - spring 2012". Archived from the original on 7 February 2013. Retrieved 16 January 2013.

  • Mandelbrot, Benoit B. (1983). The fractal geometry of nature. Macmillan. ISBN 978-0-7167-1186-5.

  • Grenander, Ulf; Miller, Michael (2007). Pattern Theory: From Representation to Inference. Oxford University Press.

  • "Causal Patterns in Science". Harvard Graduate School of Education. 2008. Retrieved 16 January 2013.

  • Gamma et al., 1994.

    Bibliography


    In art and architecture

    • Alexander, C. A Pattern Language: Towns, Buildings, Construction. Oxford, 1977.
    • de Baeck, P. Patterns. Booqs, 2009.
    • Garcia, M. The Patterns of Architecture. Wiley, 2009.
    • Kiely, O. Pattern. Conran Octopus, 2010.
    • Pritchard, S. V&A Pattern: The Fifties. V&A Publishing, 2009.

    In science and mathematics

    • Adam, J. A. Mathematics in Nature: Modeling Patterns in the Natural World. Princeton, 2006.
    • Resnik, M. D. Mathematics as a Science of Patterns. Oxford, 1999.

    In computing

    • Gamma, E., Helm, R., Johnson, R., Vlissides, J. Design Patterns. Addison-Wesley, 1994.
    • Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2007.

     https://en.wikipedia.org/wiki/Pattern

    The Peripatetic axiom is: "Nothing is in the intellect that was not first in the senses" (Latin: "Nihil est in intellectu quod non sit prius in sensu"). It is found in Thomas Aquinas's De veritate, q. 2 a. 3 arg. 19.[1]

    Aquinas adopted this principle from the Peripatetic school of Greek philosophy, established by Aristotle. Aquinas argued that the existence of God could be proved by reasoning from sense data.[2] He used a variation on the Aristotelian notion of the "active intellect" ("intellectus agens")[3] which he interpreted as the ability to abstract universal meanings from particular empirical data.[4]

    https://en.wikipedia.org/wiki/Peripatetic_axiom

    Perspicacity (also called perspicaciousness) is a penetrating discernment (from the Latin perspicācitās, meaning throughsightedness, discrimination)—a clarity of vision or intellect which provides a deep understanding and insight.[1] It takes the concept of wisdom deeper in the sense that it denotes a keenness of sense and intelligence applied to insight. It has been described as a deeper level of internalization.[2] Another definition refers to it as the "ability to recognize subtle differences between similar objects or ideas".[3]

    The artist René Magritte illustrated the quality in his 1936 painting Perspicacity. The picture shows an artist at work who studies his subject intently: it is an egg. But the painting which he is creating is not of an egg; it is an adult bird in flight.[4]

    Perspicacity is also used to indicate practical wisdom in the areas of politics and finance.[5][6] Being perspicacious about other people, rather than harboring illusions about them, is a sign of good mental health.[7] The quality is needed in psychotherapists who engage in person-to-person dialogue and counselling of the mentally ill.[8]

    Perspicacity is different from acuity, which also describes a keen insight, since it does not include physical abilities such as sight or hearing.[3]

    In an article dated October 7, 1966, the journal Science discussed NASA scientist-astronaut program recruitment efforts:

    To quote an Academy brochure, the quality most needed by a scientist-astronaut is "perspicacity." He must, the brochure says, be able to quickly pick out, from among the thousands of things he sees, those that are significant, and to synthesize observations and develop and test working hypotheses.[9]

    Concept

    In 17th-century Europe, René Descartes devised systematic rules for clear thinking in his work Regulæ ad directionem ingenii (Rules for the direction of natural intelligence). In Descartes' scheme, intelligence consisted of two faculties: perspicacity, which provided an understanding or intuition of distinct detail; and sagacity, which enabled reasoning about the details in order to make deductions. Rule 9 was De Perspicacitate Intuitionis (On the Perspicacity of Intuition).[10] He summarised the rule as

    Oportet ingenii aciem ad res minimas et maxime faciles totam convertere, atque in illis diutius immorari, donec assuescamus veritatem distincte et perspicue intueri.

    We should totally focus the vision of the natural intelligence on the smallest and easiest things, and we should dwell on them for a long time, so long, until we have become accustomed to intuiting the truth distinctly and perspicuously.

    In his study of the elements of wisdom, the modern psychometrician Robert Sternberg identified perspicacity as one of its six components or dimensions; the other five being reasoning, sagacity, learning, judgement and the expeditious use of information.[11] In his analysis, the perspicacious individual is someone who

    ...has intuition; can offer solutions that are on the side of right and truth; is able to see through things — read between the lines; has the ability to understand and interpret his or her environment.

    — Robert J. Sternberg, Wisdom: its nature, origins, and development


    References


  • Em Olivia Bevis (1989), Curriculum Building in Nursing, p. 134, ISBN 978-0-7637-0941-9

  • Lyles, Dick (2010). Pearls of Perspicacity: Proven Wisdom to Help You Find Career Satisfaction and Success. Bloomington, IN: iUniverse, Inc. pp. xiv. ISBN 9781450244794.

  • Strutzel, Dan (2018-10-09). Vocabulary Power for Business: 500 Words You Need to Transform Your Career and Your Life: 500 Words You Need to Transform Your Career and Your Life. Gildan Media LLC aka G&D Media. ISBN 9781722521158.

  • Frederick Grinnell (2009), Everyday Practice of Science, Oxford University Press, p. 84, ISBN 978-0-19-506457-5

  • Baumeister, Andrea T.; Horton, John (1996). Literature and the Political Imagination. London: Routledge. pp. 13. ISBN 0415129141.

  • Sheng, Andrew (2009). From Asian to Global Financial Crisis: An Asian Regulator's View of Unfettered Finance in the 1990s and 2000s. Cambridge: Cambridge University Press. pp. 395. ISBN 9780521118644.

  • Joiner, Thomas E.; Kistner, Janet A.; Stellrecht, Nadia E.; Merrill, Katherine A. (May 2006), "On Seeing Clearly and Thriving: Interpersonal Perspicacity as Adaptive (Not Depressive) Realism (Or Where Three Theories Meet)", Journal of Social and Clinical Psychology, 25 (5): 542–564, doi:10.1521/jscp.2006.25.5.542, ISSN 0736-7236

  • Blaine Fowers (2005), Virtue and psychology, American Psychological Association, pp. 107–128, ISBN 978-1-59147-251-3

  • Carter LJ (7 October 1966), "Scientist-Astronauts: Only the "Perspicacious" Need Apply", Science, 154 (3745): 133–135, Bibcode:1966Sci...154..133C, doi:10.1126/science.154.3745.133, PMID 17740099

  • René Descartes, edited and translated by George Heffernan (1998), "Regula IX De Perspicacitate Intuitionis", Regulæ ad directionem ingenii, Rodopi, p. 122, ISBN 978-90-420-0138-1


     https://en.wikipedia.org/wiki/Perspicacity

    In philosophy, a point of view is a specific attitude or manner through which a person thinks about something.[1][2] This figurative usage of the expression dates back to 1760.[3] In this meaning, the usage is synonymous with one of the meanings of the term perspective[4][5] (also epistemic perspective).[6]

    The concept of the "point of view" is highly multifunctional and ambiguous. Many things may be judged from certain personal, traditional or moral points of view (as in "beauty is in the eye of the beholder"). Our knowledge about reality is often relative to a certain point of view.[4]

    Vázquez Campos and Manuel Liz Gutierrez suggested analysing the concept of "point of view" using two approaches: one based on the concept of "propositional attitudes", the other on the concepts of "location" and "access".[7]

    https://en.wikipedia.org/wiki/Point_of_view_(philosophy)

    In philosophy, practical reason is the use of reason to decide how to act. It contrasts with theoretical reason, often called speculative reason, the use of reason to decide what to believe. For example, agents use practical reason to decide whether to build a telescope, but theoretical reason to decide which of two theories of light and optics is the best.

    https://en.wikipedia.org/wiki/Practical_reason

    The preface paradox, or the paradox of the preface,[1] was introduced by David Makinson in 1965. Similar to the lottery paradox, it presents an argument according to which it can be rational to accept mutually incompatible beliefs. While the preface paradox nullifies a claim contrary to one's belief, it is the opposite of Moore's paradox, which asserts a claim contrary to one's belief.
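
    A rough numeric illustration of the paradox's structure (the probabilities and book size are assumed for illustration): an author can rationally believe each individual claim in a book while also rationally believing, as prefaces often concede, that at least one claim is mistaken.

```python
# If each of 300 claims is 99% likely true, the whole book is probably
# not error-free, even though every single claim is individually credible.
p_claim, n = 0.99, 300
p_all_true = p_claim ** n
print(round(p_all_true, 3))  # ~0.049: belief in each claim, disbelief in their conjunction
```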

    https://en.wikipedia.org/wiki/Preface_paradox

    In epistemology, a presupposition relates to a belief system, or Weltanschauung, that is required for the argument to make sense. A variety of Christian apologetics, called presuppositional apologetics, argues that the existence or non-existence of God is the basic presupposition of all human thought, and that all people arrive at a worldview which is ultimately determined by the theology they presuppose. Evidence and arguments are only developed after the fact in an attempt to justify the theological assumptions already made. According to this view, it is impossible to demonstrate the existence of God unless one presupposes that God exists; modern science, on this stance, relies on methodological naturalism and is thus incapable of discovering the supernatural. It thereby fashions a Procrustean bed which rejects any observation that would disprove the naturalistic assumption. Apologists argue that the resulting worldview is inconsistent with itself and therefore irrational (for example, via the argument from morality or via the transcendental argument for the existence of God).

    https://en.wikipedia.org/wiki/Presupposition_(philosophy)

    The primary–secondary quality distinction is a conceptual distinction in epistemology and metaphysics, concerning the nature of reality. It is most explicitly articulated by John Locke in his Essay concerning Human Understanding, but earlier thinkers such as Galileo and Descartes made similar distinctions.

    Primary qualities are thought to be properties of objects that are independent of any observer, such as solidity, extension, motion, number and figure. These characteristics convey facts. They exist in the thing itself, can be determined with certainty, and do not rely on subjective judgments. For example, if an object is spherical, no one can reasonably argue that it is triangular. Primary qualities, as mentioned earlier, exist outside of the observer. They inhere in an object in such a way that if the object were changed, e.g. divided (if the object is divisible; a sphere is not, since dividing a sphere would result in two non-spheres), the primary qualities would remain. When dividing a divisible object, "solidity, extension, figure, and mobility"[1] would not be altered, because the primary qualities are built into the object itself. Another key component of primary qualities is that they create ideas in our minds through experience; they represent the actual object. Because of this, primary qualities such as size, weight, solidity, motion, and so forth can all be measured in some form.[2] Using an apple as an example, the shape and size can actually be measured and produce the idea in our minds of what the object is. A clear distinction to make is that qualities do not exist in the mind; rather, they produce ideas in our minds and exist within the objects. In the case of primary qualities, they exist inside the actual body/substance and create an idea in our mind that resembles the object.

    Secondary qualities are thought to be properties that produce sensations in observers, such as color, taste, smell, and sound. They can be described as the effect things have on certain people. Secondary qualities use the power of reflection in order to be perceived by our minds. These qualities "would ordinarily be said to be only a power in rather than a quality of the object".[3] They are sensible qualities that produce different ideas in our mind from the actual object. Returning to the aforementioned apple, something such as the redness of the apple does not produce an image of the object itself, but rather the idea of red. Secondary qualities are used to classify similar ideas produced by an object. That is why, when we see something "red", it is "red" only in our minds, because it produces the same idea as another "red" object. So, going back to the color of the apple, it produces an idea of red, which we classify and identify with other red ideas. Again, secondary qualities do not exist inside the mind; they are simply the powers that allow us to sense a certain object and thus 'reflect' and classify similar ideas.[4]

    According to the theory, primary qualities are measurable aspects of physical reality; secondary qualities are subjective. 

    https://en.wikipedia.org/wiki/Primary%E2%80%93secondary_quality_distinction

    A propositional attitude is a mental state held by an agent or organism toward a proposition.

    In philosophy, propositional attitudes can be considered to be neurally realized, causally efficacious, content-bearing internal states.[1]

    Linguistically, propositional attitudes are denoted by a verb (e.g. "believed") governing an embedded "that" clause, for example, 'Sally believed that she had won'.

    Propositional attitudes are often assumed to be the fundamental units of thought and their contents, being propositions, are true or false from the perspective of the person. An agent can have different propositional attitudes toward the same proposition (e.g., "S believes that her ice-cream is cold," and "S fears that her ice-cream is cold").

    Propositional attitudes have directions of fit: some are meant to reflect the world, others to influence it.

    One topic of central concern is the relation between the modalities of assertion and belief, perhaps with intention thrown in for good measure. For example, we frequently find ourselves faced with the question of whether or not a person's assertions conform to his or her beliefs. Discrepancies here can occur for many reasons, but when the departure of assertion from belief is intentional, we usually call that a lie.

    Other comparisons of multiple modalities that frequently arise are the relationships between belief and knowledge and the discrepancies that occur among observations, expectations, and intentions. Deviations of observations from expectations are commonly perceived as surprises, phenomena that call for explanations to reduce the shock of amazement. 

    https://en.wikipedia.org/wiki/Propositional_attitude

    A proof is sufficient evidence or a sufficient argument for the truth of a proposition.[1][2][3][4]

    The concept applies in a variety of disciplines,[5] with both the nature of the evidence or justification and the criteria for sufficiency being area-dependent. In the area of oral and written communication such as conversation, dialog, rhetoric, etc., a proof is a persuasive perlocutionary speech act, which demonstrates the truth of a proposition.[6] In any area of mathematics defined by its assumptions or axioms, a proof is an argument establishing a theorem of that area via accepted rules of inference starting from those axioms and from other previously established theorems.[7] The subject of logic, in particular proof theory, formalizes and studies the notion of formal proof.[8] In some areas of epistemology and theology, the notion of justification plays approximately the role of proof,[9] while in jurisprudence the corresponding term is evidence,[10] with "burden of proof" as a concept common to both philosophy and law.

    In most disciplines, evidence is required to prove something. Evidence is drawn from the experience of the world around us, with science obtaining its evidence from nature,[11] law obtaining its evidence from witnesses and forensic investigation,[12] and so on. A notable exception is mathematics, whose proofs are drawn from a mathematical world begun with axioms and further developed and enriched by theorems proved earlier.

    Exactly what evidence is sufficient to prove something is also strongly area-dependent, usually with no absolute threshold of sufficiency at which evidence becomes proof.[13][14] In law, the same evidence that may convince one jury may not persuade another. Formal proof provides the main exception, where the criteria for proofhood are ironclad and it is impermissible to defend any step in the reasoning as "obvious" (beyond the necessary ability of the prover and the audience to correctly identify any symbol used in the proof);[15] for a well-formed formula to qualify as part of a formal proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence.[16]
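
    As a minimal illustration of this criterion (a sketch, not drawn from the source), consider a three-line formal proof in a Hilbert-style propositional system, in which every line is either a premise or follows from earlier lines by a rule of inference:

    \begin{align*}
    1.\ & P \to Q && \text{(premise)} \\
    2.\ & P && \text{(premise)} \\
    3.\ & Q && \text{(from lines 1 and 2 by modus ponens)}
    \end{align*}

    Line 3 qualifies as part of the formal proof precisely because it results from applying a rule of the deductive apparatus (here, modus ponens) to previous well-formed formulae in the sequence.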

    Proofs have been presented since antiquity. Aristotle used the observation that patterns of nature never display the machine-like uniformity of determinism as proof that chance is an inherent part of nature.[17] On the other hand, Thomas Aquinas used the observation of the existence of rich patterns in nature as proof that nature is not ruled by chance.[18]

    Proofs need not be verbal. Before Copernicus, people took the apparent motion of the Sun across the sky as proof that the Sun went round the Earth.[19] Suitably incriminating evidence left at the scene of a crime may serve as proof of the identity of the perpetrator. Conversely, a verbal entity need not assert a proposition to constitute a proof of that proposition. For example, a signature constitutes direct proof of authorship; less directly, handwriting analysis may be submitted as proof of authorship of a document.[20] Privileged information in a document can serve as proof that the document's author had access to that information; such access might in turn establish the location of the author at a certain time, which might then provide the author with an alibi.

    Proof vs evidence

    18th-century Scottish philosopher David Hume built on Aristotle's separation of belief from knowledge,[21] recognizing that one can be said to "know" something only if one has firsthand experience with it, which is proof in a strict sense, while one can infer that something is true and therefore "believe" it without knowing, via evidence or supposition. This speaks to one way of separating proof from evidence:

    If one cannot find their chocolate bar, and sees chocolate on their napping roommate's face, this evidence can cause one to believe their roommate ate the chocolate bar. But they do not know their roommate ate it. It may turn out that the roommate put the candy away when straightening up, but was thus inspired to go eat their own chocolate. Only if one directly experiences proof of the roommate eating it, perhaps by walking in on them doing so, does one know the roommate did it.

    In an absolute sense, one can be argued not to "know" anything, except for the existence of one's own thoughts, as 17th-century philosopher John Locke pointed out.[22] Even earlier, Descartes had addressed this when saying cogito, ergo sum (I think, therefore I am). While Descartes was attempting to "prove" logically that the world exists, his legacy in doing so is to have shown that one cannot have such proof, because all of one's perceptions could be false (such as under the evil demon or simulated reality hypotheses). But one at least has proof of one's own thoughts existing, and strong evidence that the world exists, enough to be considered "proof" by practical standards, though always indirect and impossible to objectively confirm.

    See also

    References


  • Proof and other dilemmas: mathematics and philosophy by Bonnie Gold, Roger A. Simons 2008 ISBN 0883855674 pages 12–20

  • Philosophical Papers, Volume 2 by Imre Lakatos, John Worrall, Gregory Currie 1980 ISBN 0521280303 pages 60–63

  • Evidence, proof, and facts: a book of sources by Peter Murphy 2003 ISBN 0199261954 pages 1–2

  • Logic in Theology – And Other Essays by Isaac Taylor 2010 ISBN 1445530139 pages 5–15

  • Compare 1 Thessalonians 5:21: "Prove all things [...]."

  • John Langshaw Austin: How to Do Things With Words. Cambridge (Mass.) 1962 – Paperback: Harvard University Press, 2nd edition, 2005, ISBN 0-674-41152-8.

  • Cupillari, Antonella. The Nuts and Bolts of Proofs. Academic Press, 2001. Page 3.

  • Alfred Tarski, Introduction to Logic and to the Methodology of the Deductive Sciences (ed. Jan Tarski). 4th Edition. Oxford Logic Guides, No. 24. New York and Oxford: Oxford University Press, 1994, xxiv + 229 pp. ISBN 0-19-504472-X

  • "Foundationalist Theories of Epistemic Justification". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2018.

  • "Definition of proof | Dictionary.com". www.dictionary.com.

  • Reference Manual on Scientific Evidence, 2nd Ed. (2000), p. 71. Accessed May 13, 2007.

  • John Henry Wigmore, A Treatise on the System of Evidence in Trials at Common Law, 2nd ed., Little, Brown, and Co., Boston, 1915

  • Simon, Rita James & Mahan, Linda. (1971). "Quantifying Burdens of Proof—A View from the Bench, the Jury, and the Classroom". Law and Society Review. 5 (3): 319–330. doi:10.2307/3052837. JSTOR 3052837.

  • Katie Evans; David Osthus; Ryan G. Spurrier. "Distributions of Interest for Quantifying Reasonable Doubt and Their Applications" (PDF). Archived from the original (PDF) on 2013-03-17. Retrieved 2007-01-14.

  • A. S. Troelstra, H. Schwichtenberg (1996). Basic Proof Theory. In series Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, ISBN 0-521-77911-1.

  • Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1971

  • Aristotle's Physics: a Guided Study, Joe Sachs, 1995 ISBN 0813521920 p. 70

  • The treatise on the divine nature: Summa theologiae I, 1–13, by Saint Thomas Aquinas, Brian J. Shanley, 2006 ISBN 0872208052 p. 198

  • Thomas S. Kuhn, The Copernican Revolution, pp. 5–20

  • Trial tactics by Stephen A. Saltzburg, 2007 ISBN 159031767X page 47

  • David Hume

    https://en.wikipedia.org/wiki/Proof_(truth)


    In the field of epistemology, the problem of the criterion is an issue regarding the starting point of knowledge. This is a separate and more fundamental issue than the regress argument found in discussions on justification of knowledge.[1]

    In Western philosophy the earliest surviving documentation of the problem of the criterion is in the works of the Pyrrhonist philosopher Sextus Empiricus.[1] In Outlines of Pyrrhonism Sextus Empiricus demonstrated that no criterion of truth had been established, contrary to the position of dogmatists such as the Stoics and their doctrine of katalepsis.[2] In this Sextus was repeating or building upon earlier Pyrrhonist arguments about the problem of the criterion, as Pyrrho, the founder of Pyrrhonism, had declared that "neither our sense-perceptions nor our doxai (views, theories, beliefs) tell us the truth or lie".[3]

    American philosopher Roderick Chisholm in his Theory of Knowledge details the problem of the criterion with two sets of questions:

    1. What do we know? or What is the extent of our knowledge?
    2. How do we know? or What is the criterion for deciding whether we have knowledge in any particular case?

    An answer to either set of questions will allow us to devise a means of answering the other. Answering the former question set first is called particularism, whereas answering the latter set first is called methodism. A third solution is skepticism, which proclaims that since one cannot have an answer to the first set of questions without first answering the second set, and one cannot hope to answer the second set of questions without first knowing the answers to the first set, we are, therefore, unable to answer either. This has the result of us being unable to justify any of our beliefs.[citation needed]

    Particularist theories organize things already known and attempt to use these particulars of knowledge to find a method of how we know, thus answering the second question set. Methodist theories propose an answer to question set two and proceed to use this to establish what we, in fact, know. Classical empiricism embraces the methodist approach.[citation needed]

    See also

    References


    https://en.wikipedia.org/wiki/Problem_of_the_criterion

     

    The problem of other minds is a philosophical problem traditionally stated as the following epistemological question: Given that I can only observe the behavior of others, how can I know that others have minds?[1] The problem is that knowledge of other minds is always indirect. The problem of other minds does not negatively impact social interactions because people have a "theory of mind", the ability to spontaneously infer the mental states of others, supported by innate mirror neurons,[2] a theory of mind mechanism,[3] or a tacit theory.[4] There is also growing evidence that behavior results from cognition, which in turn requires consciousness and a brain.

    It is a problem of the philosophical idea known as solipsism: the notion that for any person only one's own mind is known to exist. The problem of other minds maintains that no matter how sophisticated someone's behavior is, that behavior does not reasonably guarantee that the same kind of thought is occurring as in oneself.[5] However, many philosophers now disregard the problem as outdated: behavior is recognized to arise from a number of processes within the brain, which quells much of the debate on this problem.

    Phenomenology studies the subjective experience of human life resulting from consciousness.

    See also

    References


  • Hyslop, Alec (14 January 2014). Zalta, Edward N.; Nodelman, Uri (eds.). "Other minds". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University. ISSN 1095-5054. Retrieved May 26, 2015.

  • Colle, Livia; Becchio, Cristina; Bara, Bruno (2008). "The Non-Problem of the Other Minds: A Neurodevelopmental Perspective on Shared Intentionality". Human Development. 51 (5/6): 336–348. doi:10.1159/000170896. JSTOR 26764876. S2CID 143370747. Retrieved 29 April 2021.

  • Leslie, Alan; Friedman, Ori; German, Tim (2004). "Core mechanisms in 'theory of mind'". Trends in Cognitive Sciences. 8 (12): 528–533. doi:10.1016/j.tics.2004.10.001. PMID 15556021. S2CID 17591514.

  • Gopnik, Alison; Wellman, Henry (2012). "Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory". Psychological Bulletin. 138 (6): 1085–1108. doi:10.1037/a0028044. PMC 3422420. PMID 22582739.

  • Thornton, Stephen. "Solipsism and the Problem of Other Minds". Internet Encyclopedia of Philosophy. ISSN 2161-0002. Retrieved 2021-06-02.

    Further reading

    External links


     

    https://en.wikipedia.org/wiki/Problem_of_other_minds


    First formulated by David Hume, the problem of induction questions our reasons for believing that the future will resemble the past, or more broadly it questions predictions about unobserved things based on previous observations. The inference from the observed to the unobserved is known as inductive inference, and Hume, while acknowledging that everyone does and must make such inferences, argued that there is no non-circular way to justify them, thereby undermining one of the Enlightenment pillars of rationality.[1]

    While David Hume is credited with raising the issue in Western analytic philosophy in the 18th century, the Pyrrhonist school of Hellenistic philosophy and the Cārvāka school of ancient Indian philosophy had expressed skepticism about inductive justification long prior to that.

    The traditional inductivist view is that all claimed empirical laws, either in everyday life or through the scientific method, can be justified through some form of reasoning. The problem is that many philosophers have tried to find such a justification, but their proposals have not been accepted by others. Identifying the inductivist view as the scientific view, C. D. Broad once said that induction is "the glory of science and the scandal of philosophy".[2] In contrast, Karl Popper's critical rationalism claimed that inductive justifications are never used in science and proposed instead that science is based on the procedure of conjecturing hypotheses, deductively calculating consequences, and then empirically attempting to falsify them. 
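
    As a rough illustration of Popper's schema (a sketch under assumed data; the raven example and function names are hypothetical, not from the source), a universal conjecture can be refuted by a single counterexample but is never verified by finitely many confirming observations:

    # Popper-style testing: try to falsify a universal conjecture.
    def falsified(conjecture, observations):
        """Return the first observation refuting the conjecture, or None."""
        for obs in observations:
            if not conjecture(obs):
                return obs
        return None  # survived so far; this is corroboration, not proof

    # Hypothetical conjecture: "all ravens are black".
    ravens = [
        {"species": "raven", "color": "black"},
        {"species": "raven", "color": "white"},  # one counterexample suffices
    ]
    print(falsified(lambda r: r["color"] == "black", ravens))
    # -> {'species': 'raven', 'color': 'white'}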

    https://en.wikipedia.org/wiki/Problem_of_induction

    The private language argument holds that a language understandable by only a single individual is incoherent. It was introduced by Ludwig Wittgenstein in his later work, especially in the Philosophical Investigations.[1] The argument was central to philosophical discussion in the second half of the 20th century.  

    https://en.wikipedia.org/wiki/Private_language_argument

    A principle is a proposition or value that is a guide for behavior or evaluation. In law, it is a rule that has to be, or usually is to be, followed. It can be desirably followed, or it can be an inevitable consequence of something, such as the laws observed in nature or the way that a system is constructed. The principles of such a system are understood by its users as the essential characteristics of the system, or as reflecting the system's designed purpose; the effective operation or use of the system would be impossible if any one of the principles were ignored.[2] A system may be explicitly based on and implemented from a document of principles, as was done in IBM's 360/370 Principles of Operation.

    Examples of principles are entropy in a number of fields, least action in physics, doctrines or assumptions forming normative rules of conduct in descriptive, comprehensive, and fundamental law, separation of church and state in statecraft, the central dogma of molecular biology, fairness in ethics, etc.

    In common English, it is a substantive and collective term referring to rule governance, the absence of which, being "unprincipled", is considered a character defect. It may also be used to declare that a reality has diverged from some ideal or norm as when something is said to be true only "in principle" but not in fact.

    As law

    As moral law

    Socrates preferred to face execution rather than betray his moral principles.[3]

    A principle represents values that orient and rule the conduct of persons in a particular society. To "act on principle" is to act in accordance with one's moral ideals.[4] Principles are absorbed in childhood through a process of socialization. There is a presumption of individual liberty, which principles restrain. Exemplary principles include First, do no harm, the golden rule and the doctrine of the mean.

    As a juridic law

    It represents a set of values that inspire the written norms organizing the life of a society submitted to the powers of an authority, generally the State. The law establishes a legal obligation in a coercive way; it therefore acts as a principle conditioning action, one that limits the liberty of individuals. See, for examples, the territorial principle, homestead principle, and precautionary principle.

    As scientific law

    Archimedes' principle, relating buoyancy to the weight of displaced water, is an early example of a law in science. Another early one, developed by Malthus, is the population principle, now called the Malthusian principle.[5] Freud also wrote on principles, especially the reality principle necessary to keep the id and pleasure principle in check. Biologists use the principle of priority and the principle of binomial nomenclature for precision in naming species. There are many principles observed in physics, notably in cosmology, which observes the mediocrity principle, the anthropic principle, the principle of relativity and the cosmological principle. Other well-known principles include the uncertainty principle in quantum mechanics and the pigeonhole principle and superposition principle in mathematics.

    As axiom or logical fundament

    Principle of sufficient reason

    The principle states that every event has a rational explanation.[6] The principle has a variety of expressions, all of which are perhaps best summarized by the following:

    For every entity x, if x exists, then there is a sufficient explanation for why x exists.
    For every event e, if e occurs, then there is a sufficient explanation for why e occurs.
    For every proposition p, if p is true, then there is a sufficient explanation for why p is true.
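
    Schematically (one conventional first-order rendering, not given in the source; the predicate names are illustrative), these three expressions can be written as:

    \forall x\,(\mathrm{Exists}(x) \to \exists y\,\mathrm{Explains}(y, x))
    \forall e\,(\mathrm{Occurs}(e) \to \exists y\,\mathrm{Explains}(y, e))
    \forall p\,(\mathrm{True}(p) \to \exists q\,\mathrm{Explains}(q, p))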

    On this view, every true sentence involves a direct relation between its subject and its predicate: to say that "the Earth is round" is to assert a direct relation between the subject and the predicate.

    Principle of non-contradiction


    According to Aristotle, “It is impossible for the same thing to belong and not to belong at the same time to the same thing and in the same respect.”[7] For example, it is not possible that in exactly the same moment and place, it rains and does not rain.[8]

    Principle of excluded middle

    The principle of the excluded third, or "principium tertii exclusi", is a principle of traditional logic formulated canonically by Leibniz as: either A is B or A is not B. It is read the following way: either P is true, or its denial ¬P is.[9] It is also known as "tertium non datur" ("a third (possibility) is not given"). Classically it is considered to be one of the most important fundamental principles or laws of thought (along with the principles of identity, non-contradiction and sufficient reason).
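
    In modern propositional notation (a standard rendering rather than the source's own formulation), the last two principles read:

    \neg(P \land \neg P) \quad \text{(non-contradiction)}
    P \lor \neg P \quad \text{(excluded middle)}

    Classical logic validates both; intuitionistic logic, by contrast, retains non-contradiction but does not accept excluded middle as a general law.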

    See also

    References


  • Jacoby, Jeff. "Lady Justice's blindfold." Boston.com. 10 May 2009. 25 October 2017.

  • Alpa, Guido (1994) General Principles of Law, Annual Survey of International & Comparative Law, Vol. 1: Is. 1, Article 2. from Golden Gate University School of Law

  • "The Ethics of Socrates." Archived 2018-05-01 at the Wayback Machine Philosophy. 25 October 2017.

  • "Full Transcript: Jeff Flake’s Speech on the Senate Floor." New York Times. 24 October 2017. 25 October 2017.

  • Elwell, Frank W. "T. Robert Malthus's Principle ...." Rogers State University. 2013. 25 October 2017.

  • "Principle of Sufficient Reason." Archived 2018-06-11 at the Wayback Machine Stanford Encyclopedia of Philosophy. 7 September 2016. 25 October 2017.

  • "Aristotle on Non-contradiction." Archived 2018-06-11 at the Wayback Machine Stanford Encyclopedia of Philosophy. 12 June 2015. 25 October 2017.

  • "Great Philosophers." Oregon State University. 2002. 25 October 2017.

    External links

     https://en.wikipedia.org/wiki/Principle

    Doubt is a mental state in which the mind remains suspended between two or more contradictory propositions, unable to be certain of any of them.[1][better source needed] Doubt on an emotional level is indecision between belief and disbelief. It may involve uncertainty, distrust or lack of conviction on certain facts, actions, motives, or decisions. Doubt can result in delaying or rejecting relevant action out of concern for mistakes or missed opportunities.  

    https://en.wikipedia.org/wiki/Doubt

    Distinction, the fundamental philosophical abstraction, involves the recognition of difference.[1]

    In classical philosophy, there were various ways in which things could be distinguished. The merely logical or virtual distinction, such as the difference between concavity and convexity, involves the mental apprehension of two definitions that cannot be realized outside the mind, as any concave line would be a convex line considered from another perspective. A real distinction involves a level of ontological separation, as when squirrels are distinguished from llamas (for no squirrel is a llama, and no llama is a squirrel).[2] A real distinction is thus different from a merely conceptual one, in that in a real distinction one of the terms can be realized in reality without the other being realized.

    Later developments include Duns Scotus's formal distinction, which developed in part out of the recognition in previous authors that there needs to be an intermediary between logical and real distinctions.[3]

    Some distinctions relevant to the history of Western philosophy include:

    Distinctions in contemporary thought

    Analytic–synthetic distinction

    While there are anticipations of this distinction prior to Kant in the British Empiricists (and even earlier in Scholastic thought), it was Kant who introduced the terminology. The distinction concerns the relation of a subject to its predicate: analytic claims are those in which the subject contains the predicate, as in "All bodies are extended." Synthetic claims bring two concepts together, as in "All events are caused." The distinction was famously called into question by W.V.O. Quine, in his paper "Two Dogmas of Empiricism."

    A priori and a posteriori

    The origins of the distinction are less clear, and it concerns the origins of knowledge. A posteriori knowledge arises from, or is caused by, experience. A priori knowledge may come temporally after experience, but its certainty is not derivable from the experience itself. Saul Kripke notably argued that there are necessary a posteriori truths as well as contingent a priori ones, showing that this distinction does not line up neatly with the analytic–synthetic distinction.

    Notable distinctions in historical authors

    Aristotle

    Aristotle makes the distinction between actuality and potentiality.[4] Actuality is the realization of a way a thing could be, while potency refers simply to the way a thing could be. There are two levels of each: matter itself can become anything, and becomes something actual through causes, yielding a thing which then has the ability to be in a certain way, an ability that can in turn be realized. The matter of an ax is potentially an ax and is then actually made into one; the ax is thereby able to cut, and it reaches a further actuality in actually cutting.

    Aquinas

    The major distinction Aquinas makes is that of essence and existence. It is a distinction already in Avicenna, but Aquinas maps the distinction onto the actuality/potentiality distinction of Aristotle, such that the essence of a thing is in potency to the existence of a thing, which is that thing's actuality.[5]

    Kant

    In Kant, the distinction between appearance and thing-in-itself is foundational to his entire philosophical project.[6] The distinction separates the way a thing appears to us, on the one hand, from the way the thing really is in itself, on the other.

    See also

    References


  • Sokolowski, Robert (1998-01-01). "The Method of Philosophy: Making Distinctions". Review of Metaphysics. 51 (3): 515–532.

  • Copleston, Frederick (2003-06-12). History of Philosophy Volume 2: Medieval Philosophy. A&C Black. ISBN 9780826468963.

  • Wengert, R. G.; Institute, The Hegeler (1965-11-01). "The Development of the Doctrine of the Formal Distinction in the Lectura Prima of John Duns Scotus". Monist. 49 (4): 571–587. doi:10.5840/monist196549435.

  • Cohen, S. Marc (2016-01-01). "Aristotle's Metaphysics". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). Metaphysics Research Lab, Stanford University.

  • "Aquinas: Metaphysics | Internet Encyclopedia of Philosophy". www.iep.utm.edu. Retrieved 2017-04-05.

  • Marshall, Colin (2013). "Kant's One Self and the Appearance/Thing-in-itself Distinction". Kant-Studien. 104 (4). ISSN 0022-8877.

     https://en.wikipedia.org/wiki/Distinction_(philosophy)

    Direct experience or immediate experience generally denotes experience gained through immediate sense perception. Many philosophical systems hold that knowledge or skills gained through direct experience cannot be fully put into words.

    See also

    References


     https://en.wikipedia.org/wiki/Direct_experience

    Declarative knowledge is an awareness of facts that can be expressed using declarative sentences, like knowing that Princess Diana died in 1997. It is also called theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that. It is not restricted to one specific use or purpose and can be stored in books or on computers.

    Epistemology is the main discipline studying declarative knowledge. Among other things, it investigates the essential components of declarative knowledge. According to a traditionally influential view, it has three components: it is a belief that is true and justified. As a belief, it is a subjective commitment to the accuracy of the believed claim while truth is an objective aspect. To be justified, a belief has to be rational by being based on good reasons. This means that mere guesses do not amount to knowledge even if they are true. In contemporary epistemology, various additional or alternative components have been suggested, for example, that no defeating evidence is present, that the belief was caused by a reliable cognitive process, and that the belief is infallible.

    Different types of declarative knowledge can be distinguished based on the source of knowledge, the type of claim that is known, and how certain the knowledge is. A central distinction is between a posteriori knowledge, which arises from experience, and a priori knowledge, which is grounded in pure rational reflection. Other classifications include domain-specific knowledge and general knowledge, knowledge of facts, concepts, and principles as well as explicit and implicit knowledge.

    Declarative knowledge is often contrasted with practical knowledge and knowledge by acquaintance. Practical knowledge consists of skills, like knowing how to ride a horse. It is a form of non-intellectual knowledge since it does not need to involve true beliefs. Knowledge by acquaintance is a familiarity with something based on first-hand experience, like knowing the taste of chocolate. This familiarity can be present even if the person does not possess any factual information about the object. Some theorists also contrast declarative knowledge with conditional knowledge, prescriptive knowledge, structural knowledge, case knowledge, and strategic knowledge.

    Declarative knowledge is required for various activities, such as labeling phenomena as well as describing and explaining them. It can guide the processes of problem-solving and decision-making. In many cases, its value is based on its usefulness in achieving one's goals. However, its usefulness is not always obvious and not all instances of declarative knowledge are valuable. A lot of knowledge taught at school is declarative knowledge. It can be learned through rote memorization of individual facts but in many cases, it is advantageous to foster a deeper understanding that integrates the new information into wider structures and connects it to pre-existing knowledge. Sources of declarative knowledge are perception, introspection, memory, reasoning, and testimony.

    Definition and semantic field

    Declarative knowledge is an awareness or understanding of facts. It can be expressed through spoken and written language using declarative sentences and can thus be acquired through verbal communication.[1] Examples of declarative knowledge are knowing "that Princess Diana died in 1997" or "that Goethe was 83 when he finished writing Faust".[2] Declarative knowledge involves mental representations in the form of concepts, ideas, theories, and general rules. Through these representations, the person stands in a relationship to a particular aspect of reality by depicting what it is like. Declarative knowledge tends to be context-independent: it is not tied to any specific use and may be employed for many different tasks.[3][4][5] It includes a wide range of phenomena and encompasses both specific knowledge of individual facts, for example, that the atomic mass of gold is 196.97 u, and general laws, for example, that the color of the leaves of some trees changes in autumn.[6] Due to its verbal nature, declarative knowledge can be stored in media like books and hard disks. It may also be processed using computers and plays a key role in various forms of artificial intelligence, for example, in the knowledge base of expert systems.[7]
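
    As a minimal sketch of how declarative facts can be stored and processed mechanically (an illustration with an assumed fact-and-rule format, not the API of any real expert-system shell), facts can be kept as propositions and new facts derived by naive forward chaining:

    # Declarative knowledge: facts stored as (relation, subject, value) triples.
    facts = {
        ("atomic_mass", "gold", "196.97 u"),
        ("died_in", "Princess Diana", "1997"),
    }

    # Each rule pairs a test on a fact with a derivation of a new fact.
    rules = [
        # If someone died in some year, classify them as a historical figure.
        (lambda f: f[0] == "died_in",
         lambda f: ("historical_figure", f[1], "")),
    ]

    def forward_chain(facts, rules):
        """Apply rules to known facts until no new facts are derived."""
        derived, changed = set(facts), True
        while changed:
            changed = False
            for test, derive in rules:
                for fact in list(derived):
                    if test(fact) and derive(fact) not in derived:
                        derived.add(derive(fact))
                        changed = True
        return derived

    print(forward_chain(facts, rules))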

    Terms like theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that are used as synonyms of declarative knowledge and express its different aspects. Theoretical knowledge is knowledge of what is the case, in the past, present, or future, independent of a practical outlook concerning how to achieve a specific goal. Descriptive knowledge is knowledge that involves descriptions of actual or speculative objects, events, or concepts. Propositional knowledge asserts that a certain proposition or claim about the world is true. This is often expressed using a that-clause, as in "knowing that kangaroos hop" or "knowing that 2 + 2 = 4". For this reason, it is also referred to as knowledge-that.[8] Declarative knowledge contrasts with non-declarative knowledge, which does not concern the explicit comprehension of factual information regarding the world. In this regard, practical knowledge in the form of skills and knowledge by acquaintance as a type of experiential familiarity are not forms of declarative knowledge.[9][10][11] The main discipline investigating declarative knowledge is called epistemology. It tries to determine the nature of this knowledge, how it arises, what value it has, and what its limits are.[12][13][14]

    Components

    A central issue in epistemology is to determine the components or essential features of declarative knowledge. This field of inquiry is called the analysis of knowledge. It aims to provide the conditions that are individually necessary and jointly sufficient for a state to amount to declarative knowledge. In this regard, it is similar to how a chemist breaks down a sample by identifying all the chemical elements composing it.[15][16][17]

    The main components traditionally associated with knowledge are belief, truth, and justification.

    A traditionally influential view states that declarative knowledge has three essential features: it is (1) a belief that is (2) true and (3) justified.[18][19][20] This position is referred to as the justified-true-belief conception of knowledge and is often seen as the standard view.[21][22] This view faced significant criticism following a series of counterexamples given by Edmund Gettier in the latter half of the 20th century. In response, various alternative theories of the components of declarative knowledge have been suggested. Some see justified true belief as a necessary condition that is not sufficient by itself and discuss additional components that are needed. Another response is to deny that justification is needed and seek a different component to replace it.[23][24][25] Some theorists, like Timothy Williamson, reject the idea that declarative knowledge can be deconstructed into various constituent parts. They argue instead that it is a fundamental and unanalyzable epistemological state.[26]
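
    Schematically (a common textbook rendering of the justified-true-belief analysis; the predicate letters are illustrative), the view claims that a subject s knows a proposition p if and only if three conditions jointly hold:

    K(s, p) \iff B(s, p) \land T(p) \land J(s, p)

    Each conjunct is claimed to be individually necessary, and the three together jointly sufficient; Gettier's counterexamples are directed at the right-to-left direction of this biconditional.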

    Belief

    One commonly accepted component of knowledge is belief. In this sense, whoever knows that whales are animals automatically also believes that whales are animals. A belief is a mental state that affirms that something is the case. As an attitude toward a proposition, it belongs to the subjective side of knowledge. Some theorists, like Luis Villoro, distinguish between weak and strong beliefs. Having a weak belief implies that the person merely presumes that something is the case. They guess that the claim is probably correct while acknowledging at the same time that they might very well be mistaken about it. This contrasts with strong belief, which implies a substantial commitment to the believed claim. It involves certainty in the form of being sure about it. For declarative knowledge, this stronger sense of belief is relevant.[27]

    A few epistemologists, like Katalin Farkas, claim that, at least in some cases, knowledge is not a form of belief but a different type of mental state. One argument for this position is based on statements like "I don't believe it, I know it", which may be used to express that the person is very certain and has good reason to affirm this claim. However, this argument is not generally accepted since knowing something does not imply that the person disbelieves the claim. A different explanation is to hold that this statement is a linguistic tool to emphasize that the person is well-informed. In this regard, it only denies that a weak belief exists without rejecting that a stronger form of belief is involved.[28]

    Truth

    Beliefs are either true or false depending on whether they accurately represent reality. Truth is usually seen as one of the essential components of knowledge. This means that it is impossible to know a claim that is false. For example, it is possible to believe that Hillary Clinton won the 2016 US Presidential election but nobody can know it because this event did not occur. That a proposition is true does not imply that it is common knowledge, that an irrefutable proof exists, or that someone is thinking about it. Instead, it only means that it presents things as they are. For example, when flipping a coin, it may be true that it will land heads even if it is not possible to predict this with certainty. Truth is an objective factor of knowledge that goes beyond the psychological sphere of belief since it usually depends on what the world outside the person's mind is like.[29][30][31]

    Some epistemologists hold that there are at least some forms of knowledge that do not require truth. For example, Joseph Thomas Tolliver argues that certain mental states amount to knowledge only because of the causes and effects they have even though they do not represent anything and are therefore neither true nor false.[31][32] A different outlook is found in the field of the anthropology of knowledge, which studies how knowledge is acquired, stored, retrieved, and communicated. In this discipline, knowledge is often understood in a very wide sense that is roughly equivalent to understanding and culture. In this regard, the main interest is usually about how people ascribe truth values to meaning-contents, like when affirming an assertion, independent of whether this assertion is true or false.[33][34][35] Despite these positions, it is widely accepted in epistemology that truth is an essential component of declarative knowledge.[29]

    Justification

    In epistemology, justification means that a proposition is supported by evidence or that a person has good reasons for believing it. This implies some form of appraisal in relation to an evaluative standard of rationality.[36][37] For example, a person who just checked their bank account and saw that their balance is 500 dollars has a good reason to believe that they have 500 dollars in their bank account.[38] However, justification by itself does not imply that a belief is true. For example, if someone reads the time from their clock they may form a justified belief about the current time even if the clock stopped a while ago and shows a false time now.[39] If a person has a justified belief then they are often able to articulate what this belief is and to provide arguments stating the reasons supporting it. However, this ability to articulate one's reasons is not an essential requirement of justification.[37]

    Justification is usually included as a component of knowledge to exclude lucky guesses. For example, a compulsive gambler flipping a coin may be certain that it will land heads this time without a good reason for this belief. In this case, the belief does not amount to knowledge even if it turns out that it was true. This observation can be easily explained by including justification as an essential component: the gambler's belief does not amount to knowledge because it lacks justification. In this regard, mere true opinion is not enough to establish knowledge. A central issue in epistemology concerns the standards of justification, i.e., what conditions have to be fulfilled for a belief to be justified. Internalists understand justification as a purely subjective component, akin to belief. They claim that a belief is justified if it stands in the right relation to other mental states of the believer. For example, perceptual experiences can justify beliefs about the perceived object. This contrasts with externalists, who claim that justification involves objective factors that are external to the person's mind. Such factors can include causal relations with the object of the belief or that reliable cognitive processes are responsible for the formation of the belief.[40][41]

    Foundationalism, coherentism, and infinitism are theories about how justification arises.

    A closely related issue concerns the question of how the different mental states have to be related to each other to be justified. For example, one belief may be supported by another belief. However, it is questionable whether this is sufficient for justification if the second belief is itself not justified. For example, a person may believe that Ford cars are cheaper than BMWs because they heard this from a friend. However, this belief may not be justified if there is no good reason to think that the friend is a reliable source of information. This can lead to an infinite regress since whatever reason is provided for the friend's reliability may itself lack justification. Three popular responses to this problem are foundationalism, coherentism, and infinitism. According to foundationalists, some reasons are foundational and do not depend on other reasons for their justification. Coherentists also reject the idea that an infinite chain of reasons is needed and argue that different beliefs can mutually support each other without one being more basic than the others. Infinitists, on the other hand, accept the idea that an infinite chain of reasons is required.[42]

    Many debates concerning the nature of declarative knowledge focus on the role of justification, specifically whether it is needed at all and what else might be needed to complement it. Influential in this regard was a series of thought experiments by Edmund Gettier. They present concrete cases of justified true beliefs that fail to amount to knowledge. The reason for their failure is a type of epistemic luck. This means that the justification is not relevant to whether the belief is true. In one thought experiment, Smith and Jones apply for a job and before officially declaring the result, the company president tells Smith that Jones will get the job. Smith saw that Jones has 10 coins in his pocket so he comes to form the justified belief that the successful candidate has 10 coins in his pocket. In the end, it turns out that Smith gets the job after all. By lucky coincidence, Smith also has 10 coins in his pocket. Gettier claims that, because of this coincidence, Smith's belief that the successful candidate has 10 coins in his pocket does not amount to knowledge even though it is justified and true because the justification is not relevant to the truth.[43][44]

    Others

    The thought experiments by Edmund Gettier influenced many epistemologists to seek additional components of declarative knowledge.

    In response to Gettier's thought experiments, various further components of declarative knowledge have been suggested. Some of them are intended as additional elements besides belief, truth, and justification while others are understood as replacements for justification.[45][46][47]

    According to defeasibility theory, an additional condition besides having evidence in favor of the belief is that no defeating evidence is present. Defeating evidence of a belief is evidence that undermines the justification of the belief. For example, if a person looks outside the window and sees a rainbow then this impression justifies their belief that there is a rainbow. However, if the person just ate a psychedelic drug then this is defeating evidence since it undermines the reliability of their experiences. Defeasibility theorists claim that, in this case, the belief does not amount to knowledge because defeating evidence is present. As an additional component of knowledge, they require that the person has no defeating evidence of the belief.[48][49][50] Some theorists demand the stronger requirement that there is no true proposition that would defeat the belief, independent of whether the person is aware of this proposition or not.[51] A closely related theory holds that beliefs can only amount to knowledge if they are not inferred from a falsehood.[52]

    A different theory is based on the idea that knowledge states should be responsive to what the world is like. One suggested component in this regard is that the belief is safe or sensitive. This means that the person has the belief because it is true but that they would not hold the belief if it was false. In this regard, the person's belief tracks the state of the world.[53]

    Some theories do not try to provide additional requirements but instead propose replacing justification with alternative components. For example, according to some forms of reliabilism, a true belief amounts to knowledge if it was formed through a reliable cognitive process. A cognitive process is reliable if it produces mostly true beliefs in actual situations and would also do so in counterfactual situations.[47][54][55] Examples of reliable processes are perception and reasoning.[56] A consequence of reliabilism is that knowledge is not restricted to humans since reliable belief-formation processes may also be present in other animals, like dogs, apes, or rats, even if they do not possess justification for their beliefs.[47] Virtue epistemology is a closely related approach that understands knowledge as the manifestation of epistemic virtues. It agrees with regular forms of reliabilism that knowledge is not a matter of luck but puts additional emphasis on the evaluative aspect of knowledge and the underlying skills responsible for it.[57][58][59]

    According to causal theories of knowledge, a necessary element of knowing a fact is that this fact somehow caused the knowledge of it. This is the case, for example, if a belief about the color of a house is based on a perceptual experience, which causally connects the house to the belief. This causal connection does not have to be direct and can be mediated through different steps like activating memories and drawing inferences.[60][47]

    In many cases, the goal of suggesting additional components is to avoid cases of epistemic luck. In this regard, some theorists have argued that the additional component would have to ensure that the belief is true. This approach is reflected in the idea that knowledge implies a form of certainty. But it sets the standards of knowledge very high and may require that a belief has to be infallible to amount to knowledge. This means that the justification ensures that the belief is true. For example, Richard Kirkham argues that the justification required for knowledge must be based on self-evident premises that deductively entail the held belief. Such a position leads to a form of skepticism about knowledge since the great majority of regular beliefs do not live up to these requirements. It would imply that people know very little and that most who claim to know a certain fact are mistaken. However, a more common view among epistemologists is that knowledge does not require infallibility and that many knowledge claims in everyday life are true.[61]

    Types

    Declarative knowledge arises in different forms. It is possible to distinguish between them based on the type of content of what is known. For example, empirical knowledge is knowledge of observable facts while conceptual knowledge is an understanding of general categorizations and theories as well as the relations between them.[62][63][64] Other examples are ethical, religious, scientific, mathematical, and logical knowledge as well as self-knowledge. A different distinction focuses on the mode of how something is known. On a causal level, different sources of knowledge correspond to different types of declarative knowledge. Examples are knowledge through perception, introspection, memory, reasoning, and testimony.[62][65][66] On a logical level, forms of knowledge can be distinguished based on how a knowledge claim is supported by its premises. This classification corresponds to the different forms of logical reasoning, such as deductive and inductive reasoning.[62][67][68] A closely related categorization focuses on the strength of the source of the justification and distinguishes between probabilistic and apodictic knowledge while the distinction between a priori and a posteriori knowledge focuses on the type of the source. These different classifications overlap with each other at various points. For example, a priori knowledge is closely connected to apodictic, conceptual, deductive, and logical knowledge. A posteriori knowledge, on the other hand, is associated with probabilistic, empirical, inductive, and scientific knowledge. Self-knowledge may be identified with introspective knowledge.[62][69]

    The distinction between a priori and a posteriori knowledge is determined by the role of experience and matches the distinction between empirical and non-empirical knowledge. A posteriori knowledge is knowledge from experience. This means that experience, like regular perception, is responsible for its formation and justification. Knowing that the door of one's house is green is one example of a posteriori knowledge since some form of sensory observation is required. For a priori knowledge, on the other hand, no experience is required. It is based on pure rational reflection and can neither be verified nor falsified through experience. Examples are knowing that 7 + 5 = 12 or that whatever is red everywhere is not blue everywhere.[70] In this context, experience means primarily sensory observation but can also include related processes, like introspection and memory. However, it does not include all conscious phenomena. For example, having a rational insight into the solution of a mathematical problem does not mean that the resulting knowledge is a posteriori. And knowing that 7 + 5 = 12 is a priori knowledge even though some form of consciousness is involved in learning what symbols like "7" and "+" mean and in becoming aware of the associated concepts.[71][72][69]

    One classification distinguishes between knowledge of facts, concepts, and principles. Knowledge of facts pertains to the association of concrete information, for example, that the red color on a traffic light means stop or that Christopher Columbus sailed in 1492 from Spain to America. Knowledge of concepts applies to more abstract and general ideas that group together many individual phenomena. For example, knowledge of the concept of jogging implies knowing how it differs from walking and running as well as being able to apply this concept to concrete cases. Knowledge of principles is an awareness of general patterns of cause and effect, including rules of thumb. It is a form of understanding how things work and being aware of the explanation of why something happened the way it did. Examples are that if there is lightning then there will be thunder or if a person robs a bank then they may go to jail.[73][74] Similar classifications distinguish between declarative knowledge of persons, events, principles, maxims, and norms.[75][76][77]

    Declarative knowledge is traditionally identified with explicit knowledge and contrasted with tacit or implicit knowledge. Explicit knowledge is knowledge of which the person is aware and which can be articulated. Implicit knowledge, on the other hand, is a form of embodied knowledge that the person cannot articulate. The traditional association of declarative knowledge with explicit knowledge is not always accepted in the contemporary literature and some theorists have argued that there are forms of implicit declarative knowledge. A putative example is a person who has learned a concept and is now able to correctly classify objects according to this concept even though they are not able to provide a verbal rationale for their decision.[78][79][80]

    A further distinction is between domain-specific and general knowledge. Domain-specific knowledge applies to a narrow subject or a particular task but is useless outside this focus. General knowledge, on the other hand, concerns wide topics or has general applications. For example, declarative knowledge of the rules of grammar belongs to general knowledge while having memorized the lines of the poem The Raven is domain-specific knowledge. This distinction is based on a continuum of cases that are more or less general without a clear-cut line between the types.[6][81] According to Paul Kurtz, there are six types of descriptive knowledge: knowledge of available means, of consequences, of particular facts, of general causal laws, of established values, and of fundamental needs.[82] Another classification distinguishes between structural knowledge and perceptual knowledge.[83]

    Contrast with other forms of knowledge

    Knowing how to play the guitar is one form of non-declarative knowledge.

    Declarative knowledge is often contrasted with other types of knowledge. A common classification in epistemology distinguishes it from practical knowledge and knowledge by acquaintance. All of them can be expressed with the verb "to know" but their differences are reflected in the grammatical structures used to articulate them. Declarative knowledge is usually expressed with a that-clause, as in "Ann knows that koalas sleep most of the time". For practical knowledge, a how-clause is used instead, for example, "Dave knows how to read the time on a clock". Knowledge by acquaintance can be articulated using a direct object without a preposition, as in "Emily knows Obama personally".[84]

    Practical knowledge consists of skills. Knowing how to ride a horse or how to play the guitar are forms of practical knowledge. The terms "procedural knowledge" and "knowledge-how" are often used as synonyms.[11][85][86] It differs from declarative knowledge in various aspects. It is usually imprecise and cannot be proven by deducing it from premises. It is non-propositional and, for the most part, cannot be taught in abstract without concrete exercise. In this regard, it is a form of non-intellectual knowledge.[87][10] It is tied to a specific goal and its value lies not in being true, but rather in how effective it is to accomplish its goal.[88] Practical knowledge can be present without any beliefs and may even involve false beliefs. For example, a ball player may know how to catch a ball despite falsely believing that their eyes continuously track the ball while, in truth, their eyes perform a series of abrupt movements that anticipate the ball's trajectory rather than following it.[89] Another difference is that declarative knowledge is commonly only ascribed to animals with highly developed minds, like humans. Practical knowledge, on the other hand, is more prevalent in the animal kingdom. For example, ants know how to walk through the kitchen despite presumably lacking the mental capacity for the declarative knowledge that they are walking through the kitchen.[90]

    Familiarity with the flavor of chocolate is one example of knowledge by acquaintance, which belongs to non-declarative knowledge.

    Declarative knowledge is also distinguished from knowledge by acquaintance, which is also known as objectual knowledge, and knowledge-of. Knowledge by acquaintance is a form of familiarity or direct awareness that a person has with another person, a thing, or a place. For example, a person who has tasted the flavor of chocolate knows chocolate in this sense, just like a person who visited Lake Taupō knows Lake Taupō. Knowledge by acquaintance does not imply that the person can provide factual information about the object. It is a form of non-inferential knowledge that depends on first-hand experience. For example, a person who has never left their home country may acquire a lot of declarative knowledge about other countries by reading books without any knowledge by acquaintance.[86][91][92] Knowledge by acquaintance plays a central role in the epistemology of Bertrand Russell. He holds that it is more basic than other forms of knowledge since to understand a proposition, one has to be acquainted with its constituents. According to Russell, knowledge by acquaintance covers a wide range of phenomena, such as thoughts, feelings, desires, memory, introspection, and sense data. It can happen in relation to particular things and universals. Knowledge of physical objects, on the other hand, belongs to declarative knowledge, which he calls knowledge by description. It also has a central role to play since it extends the realm of knowledge to things that lie beyond the personal sphere of experience.[93]

    Some theorists, like Anita Woolfolk et al., distinguish declarative knowledge and procedural knowledge from conditional knowledge. According to this view, conditional knowledge is about knowing when and why to use declarative and procedural knowledge. For many issues, like solving math problems and learning a foreign language, it is not sufficient to know facts and general procedures if the person does not know in which situations to use them. To master a language, for example, it is not enough to acquire declarative knowledge of different verb forms if one lacks conditional knowledge of when it is appropriate to use them. Some theorists understand conditional knowledge as one type of declarative knowledge and not as a distinct category.[94]

    A further distinction contrasts declarative or descriptive knowledge with prescriptive knowledge. Descriptive knowledge represents what the world is like. It describes and classifies which phenomena exist and in what relations they stand to each other. It is interested in what is true independently of what people want. Prescriptive knowledge is not about what things actually are like but about what they should be like. This concerns specifically the question of what purposes people should follow and how they should act. It guides action by showing what people should do to fulfill their needs and desires. In this regard, it has a more subjective component since it depends on what people want. Some theorists equate prescriptive knowledge with procedural knowledge, but others distinguish them based on the claim that prescriptive knowledge is about what should be done while procedural knowledge is about how to do it.[95] Other classifications contrast declarative knowledge with structural knowledge, meta knowledge, heuristic knowledge, control knowledge, case knowledge, and strategic knowledge.[96][76][77]

    Some theorists argue that one type of knowledge is more fundamental than others. For example, Robert E. Haskell claims that declarative knowledge is the basic form of knowledge since it constitutes a general framework of understanding and thereby is a precondition for acquiring other forms of knowledge.[97] However, this position is not generally accepted and philosophers like Gilbert Ryle defend the opposing thesis that declarative knowledge presupposes procedural knowledge.[98][99]

    Value

    Declarative knowledge plays a central role in human understanding of the world. It underlies activities such as labeling phenomena, describing them, explaining them, and communicating with others about them.[100] The value of declarative knowledge depends in part on its usefulness in helping people achieve their objectives. For example, to treat a disease, knowledge of its symptoms and possible cures is beneficial. Or, if a person has applied for a new job, then knowing where and when the interview takes place is important.[101][102][103] Due to its context-independence, declarative knowledge can be used for a great variety of tasks, and because of its compact nature, it can be easily stored and retrieved.[4][3] Declarative knowledge can be useful for procedural knowledge, for example, by knowing the list of steps needed to execute a skill. It also has a key role in understanding and solving problems and can guide the process of decision-making.[104][105][106] A related issue in the field of epistemology concerns the question of whether declarative knowledge is more valuable than true belief, since, for most purposes, true belief seems to be as useful as knowledge for achieving one's goals.[103][107][108]

    Declarative knowledge is primarily desired in cases where it is immediately useful.[97] But not all forms of knowledge are useful. For example, indiscriminately memorizing phone numbers found in a foreign phone book is unlikely to result in useful declarative knowledge.[102] However, it is often difficult to assess the value of knowledge if one does not foresee a situation in which it would be useful. In this regard, it can happen that the value of apparently useless knowledge is only discovered much later. For example, Maxwell's equations linking magnetism to electricity were considered useless at the time of their discovery, until experimental scientists discovered how to detect electromagnetic waves.[97] Occasionally, knowledge may have a negative value, for example, when it hinders someone from doing what is needed because their knowledge of the associated dangers paralyzes them.[102]

    Learning

    A lot of knowledge taught at school is declarative knowledge.

    The value of knowledge is especially relevant in the field of education, where it helps determine which parts of the vast body of knowledge should become part of the curriculum to be passed on to students.[101] Many types of learning at school involve the acquisition of declarative knowledge.[100] One form of declarative knowledge learning is so-called rote learning. It is a memorization technique in which the claim to be learned is repeated again and again until it is fully memorized. Other forms of declarative knowledge learning focus more on developing an understanding of the subject. This means that the learner should not only be able to repeat the claim but also to explain, describe, and summarize it. For declarative knowledge to be useful, it is often advantageous if it is embedded in a meaningful structure. For example, learning about new concepts and ideas involves developing an understanding of how they are related to each other and to what is already known.[104]

    According to Ellen Gagné, learning declarative knowledge happens in four steps. In the first step, the learner comes into contact with the material to be learned and apprehends it. Next, they translate this information into propositions. Following that, the learner's memory triggers and activates related propositions. As the last step, new connections are established and inferences are drawn.[104] A similar process is described by John V. Dempsey, who emphasizes that the new information must be organized, subdivided, and linked to existing knowledge. He distinguishes between learning that involves recalling information in contrast to learning that only requires being able to recognize certain patterns.[109] A related theory is defended by Anthony J. Rhem, who holds that the process of learning declarative knowledge involves organizing new information into groups and drawing relations between these groups as well as connecting the new information to pre-existing knowledge.[110]

    Some theorists, like Robert Gagné and Leslie Briggs, distinguish between different types of declarative knowledge learning based on the cognitive processes involved: learning of labels and names, of facts and lists, and of organized discourse. Learning labels and names requires forming a mental connection between two elements. Examples include memorizing foreign vocabulary and learning the capital city of each state. Learning facts involves relationships between concepts, for example, that "Ann Richards was the governor of Texas in 1991". This process is usually easier if the person is not dealing with isolated facts but possesses a network of information into which the new fact is integrated. The case for learning lists is similar since it involves the association of many items. Learning organized discourse encompasses not discrete facts or items but a wider comprehension of the meaning present in an extensive body of information.[104][109][110]

    Sources

    Various sources of declarative knowledge are discussed in epistemology. They include perception, introspection, memory, reasoning, and testimony.[65][66][62] Perception is usually understood as the main source of empirical knowledge. It is based on the senses, like seeing that it is raining when looking out the window.[111][112][113] Introspection is similar to perception but provides knowledge of the internal sphere and not of external objects.[114] An example is directing one's attention to a pain in one's toe to assess whether it has intensified.[115] Memory differs from perception and introspection in that it does not produce new knowledge but merely stores and retrieves pre-existing knowledge. As such, it depends on other sources.[66][116][117] It is similar to reasoning in this regard, which starts from a known fact and arrives at new knowledge by drawing inferences from it. Empiricists hold that this is the only way reason can arrive at knowledge, while rationalists contend that certain claims can be known by pure reason independent of additional sources.[113][118][119] Testimony is different from the other sources since it does not have its own cognitive faculty. Rather, it is grounded in the notion that people can acquire knowledge through communication with others, for example, by speaking to someone or by reading a newspaper.[120][121][122] Some religious philosophers include religious experiences (through the so-called sensus divinitatis) as a source of knowledge of the divine. However, such claims are controversial.[66][123]

    References

    Citations


  • Colman 2009a, declarative knowledge
    Woolfolk, Hughes & Walkup 2008, p. 307
    Strube & Wender 1993, p. 354
    Tokuhama-Espinosa 2011, p. 255
    Holyoak & Morrison 2005, p. 371

  • Colman 2009a, declarative knowledge.

  • Morrison 2005, p. 371.

  • Reif 2008.

  • Zagzebski 1999, p. 93.

  • Woolfolk & Margetts 2012, p. 251.

  • HarperCollins staff
    Magee & Popper 1971, pp. 74–75, Conversation with Karl Popper
    Walton 2005, pp. 59, 64
    Leondes 2001, p. 804
    Kent & Williams 1993, p. 295

  • Sadegh-Zadeh 2011, pp. 450–451, 470, 475
    Burstein & Holsapple 2008, pp. 44–45
    Hetherington 2023, 1b. Knowledge-That
    Burgin 2016, p. 48

  • Colman 2009b, non-declarative knowledge.

  • Pavese 2022, introduction.

  • Klauer et al. 2016, pp. 105–106.

  • Truncellito.

  • Moser 2005, p. 3.

  • Steup & Neta 2020, 2.3 Knowing Facts.

  • Ichikawa & Steup 2018, introduction.

  • Zagzebski 1999, p. 96.

  • Gupta 2021.

  • Klein 1998, Knowledge, concept of.

  • Zagzebski 1999, pp. 99–100.

  • Seel 2011, p. 1001.

  • Hetherington 2016, p. 219.

  • Carter, Gordon & Jarvis 2017, p. 114.

  • Ichikawa & Steup 2018, 3. The Gettier Problem.

  • Kornblith 2008, pp. 5–6, 1 Knowledge Needs No Justification.

  • Hetherington 2022, introduction.

  • Ichikawa & Steup 2018, 11. Knowledge First.

  • Ichikawa & Steup 2018, 1.2 The Belief Condition
    Villoro 1998, pp. 144, 148–149
    Zagzebski 1999, p. 93
    Black 1971, pp. 152–158
    Farkas 2015, pp. 185–200
    Kleinman 2013, p. 258

  • Hacker 2013, p. 211
    Ichikawa & Steup 2018, 1.2 The Belief Condition
    Black 1971, pp. 152–158
    Farkas 2015, pp. 185–200

  • Ichikawa & Steup 2018, 1.1 The Truth Condition.

  • Villoro 1998, pp. 199–200.

  • Tolliver 1989, pp. 29–51.

  • Villoro 1998, pp. 206–210.

  • Cohen 2010, pp. S193–S202.

  • Barth 2002, pp. 1–18.

  • Allwood 2013, pp. 69–72, Anthropology of Knowledge.

  • Watson, Introduction.

  • Goldman 1992, pp. 105–106.

  • Evans & Smith 2013, pp. 32–33.

  • Pritchard 2023, p. 38.

  • Ichikawa & Steup 2018, 1.3 The Justification Condition.

  • Poston, introduction.

  • Klein 1998, Knowledge, concept of
    Steup & Neta 2020
    Lehrer 2015, 1. The Analysis of Knowledge
    Cameron 2018

  • Hetherington 2022, 3. Gettier’s Original Challenge.

  • Ichikawa & Steup 2018, 3. The Gettier Problem.

  • Borges, Almeida & Klein 2017, p. 180.

  • Broadbent 2016, p. 128.

  • Ichikawa & Steup 2018, 6. Doing Without Justification?.

  • Craig 1996, Knowledge, defeasibility theory of.

  • Lee 2017, pp. 6–9.

  • Sudduth, Introduction; 1. The Concept of Defeasibility.

  • Sudduth, 2b. Defeasibility Analyses and Propositional Defeaters.

  • Ichikawa & Steup 2018, 4. No False Lemmas.

  • Ichikawa & Steup 2018, 5. Modal Conditions.

  • Bernecker & Pritchard 2011, p. 266.

  • Becker 2013, p. 12.

  • Crumley 2009, p. 117.

  • Turri, Alfano & Greco 2021, 5. Knowledge.

  • Baehr, introduction; 2. Virtue Reliabilism.

  • Battaly 2018, p. 772.

  • Schelling 2013, pp. 55–56.

  • Kirkham 1984, pp. 503, 512–513
    Hetherington 2023, 6. Standards for Knowing
    Zagzebski 1999, pp. 97–98
    Christensen 2003, p. 29

  • Campbell, O'Rourke & Silverstein 2010, p. 10.

  • Cassirer 2021, p. 208.

  • Freitas & Jameson 2012, p. 189.

  • Steup & Neta 2020, 5. Sources of Knowledge and Justification.

  • Blaauw 2020, p. 49.

  • Flick 2013, p. 123.

  • Bronkhorst et al. 2020, pp. 1673–1676.

  • Moser.

  • Moser
    Hamilton 2003, p. 23
    Barber & Stainton 2010, p. 11
    Baehr, 1. An Initial Characterization

  • Baehr, 1. An Initial Characterization.

  • Russell 2020.

  • Price & Nelson 2013, p. 4.

  • Foshay & Silber 2009, pp. 14–15.

  • Chiu & Hong 2013, p. 102.

  • Jankowski & Marshall 2016, p. 70.

  • Scott, Gallacher & Parry 2017, p. 97.

  • Bengson & Moffett 2012, p. 328.

  • Kikoski & Kikoski 2004, pp. 62, 65–66.

  • Reber & Allen 2022, p. 281.

  • Finlay 2020, p. 13.

  • Fischer 2019, p. 66.

  • Yamamoto 2016, p. 61.

  • Bishop et al. 2020, p. 74
    Lilley, Lightfoot & Amaral 2004, pp. 162–163
    Pavese 2022, introduction
    Klauer et al. 2016, pp. 105–106
    Hetherington 2023, 1. Kinds of Knowledge

  • Gaskins 2005, p. 51.

  • Peels 2023, p. 28.

  • Klauer et al. 2016, pp. 105–106.

  • Merriënboer 1997, p. 32.

  • Pavese 2022, 6.1 Knowledge-how and Belief.

  • Pritchard 2023, 1 Some preliminaries.

  • Heydorn & Jesudason 2013, p. 10.

  • Foxall 2017, p. 75.

  • Hasan & Fumerton 2020, introduction
    Haymes & Özdalga 2016, pp. 26–28
    Miah 2006, pp. 19–20
    Alter & Nagasawa 2015, pp. 93–94

  • Woolfolk, Hughes & Walkup 2008, p. 307
    Dunlap 2004, pp. 144–145
    Earley & Ang 2003, p. 109
    Woolfolk & Margetts 2012, p. 251

  • Maedche, Brocke & Hevner 2017, p. 403
    Goldberg 2006, pp. 121–122
    Lalanda, McCann & Diaconescu 2013, pp. 187–188
    Chen & Terken 2022, p. 49

  • Nguyen, Nguyen & Tran 2022, pp. 33–34.

  • Haskell 2001, pp. 101–103.

  • Stillings et al. 1995, p. 370.

  • Cornelis, Smets & Bendegem 2013, p. 37.

  • Murphy & Alexander 2005, pp. 38–39.

  • Degenhardt 2019, pp. 1–6.

  • Pritchard 2023, 2 The value of knowledge.

  • Olsson 2011, pp. 874–883.

  • Smith & Ragan 2004, pp. 152–154.

  • Soled 1995, p. 49.

  • Leung 2019, p. 210.

  • Pritchard, Turri & Carter 2022, introduction.

  • Plato 2002, pp. 89–90; 97b–98a.

  • Dempsey 1993, pp. 80–81.

  • Rhem 2005, pp. 42–43.

  • Hetherington 2023, 3b. Observational Knowledge.

  • O’Brien.

  • Martinich & Stroll, Rationalism and empiricism.

  • Steup & Neta 2020, 5.2 Introspection.

  • Hohwy 2013, p. 245.

  • Audi 2002, pp. 71–94, The Sources of Knowledge.

  • Gardiner 2001, pp. 1351–1361.

  • Steup & Neta 2020, 5.4 Reason.

  • Audi 2005, p. 315.

  • Steup & Neta 2020, 5.5 Testimony.

  • Leonard 2021, introduction.

  • Green, introduction.


     https://en.wikipedia.org/wiki/Declarative_knowledge

    In philosophy of mind, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as instances of subjective, conscious experience. The term qualia derives from the Latin neuter plural form (qualia) of the Latin adjective quālis (Latin pronunciation: [ˈkʷaːlɪs]) meaning "of what sort" or "of what kind" in a specific instance, such as "what it is like to taste a specific apple — this particular apple now".

    Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky. As qualitative characteristics of sensation, qualia stand in contrast to propositional attitudes,[1] where the focus is on beliefs about experience rather than what it is directly like to be experiencing.

    Philosopher and cognitive scientist Daniel Dennett suggested that qualia was "an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us".[2]

    Much of the debate over the importance of qualia hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of qualia under various definitions remain controversial. While some philosophers of mind, like Daniel Dennett, argue that qualia do not exist and are incompatible with neuroscience and naturalism,[3] some neuroscientists and neurologists, like Gerald Edelman, Antonio Damasio, Vilayanur Ramachandran, Giulio Tononi, Christof Koch, and Rodolfo Llinás, state that qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.[4]

    Definitions

    Many definitions of qualia have been proposed. One of the simpler, broader definitions is: "The 'what it is like' character of mental states. The way it feels to have mental states such as pain, seeing red, smelling a rose, etc."[5]

    C.S. Peirce introduced the term quale in philosophy in 1866.[6][7] C.I. Lewis (1929)[7] was the first to use the term "qualia" in its generally agreed upon modern sense.

    There are recognizable qualitative characters of the given, which may be repeated in different experiences, and are thus a sort of universals; I call these "qualia." But although such qualia are universals, in the sense of being recognized from one to another experience, they must be distinguished from the properties of objects. Confusion of these two is characteristic of many historical conceptions, as well as of current essence-theories. The quale is directly intuited, given, and is not the subject of any possible error because it is purely subjective.[7]

    Frank Jackson later defined qualia as "...certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes".[8]: 273 

    Daniel Dennett identifies four properties that are commonly ascribed to qualia.[2] According to these, qualia are:

    1. ineffable – they cannot be communicated, or apprehended by any means other than direct experience.
    2. intrinsic – they are non-relational properties, which do not change depending on the experience's relation to other things.
    3. private – all interpersonal comparisons of qualia are systematically impossible.
    4. directly or immediately apprehensible by consciousness – to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

    If qualia of this sort exist, then a normally sighted person who sees red would be unable to describe the experience of this perception in such a way that a listener who has never experienced color would be able to know everything there is to know about that experience. Though it is possible to make an analogy, such as "red looks hot", or to provide a description of the conditions under which the experience occurs, such as "it's the color you see when light of 700-nm wavelength is directed at you", supporters of this definition of qualia contend that such descriptions cannot provide a complete description of the experience.

    Another way of defining qualia is as "raw feels". A raw feel is a perception in and of itself, considered entirely in isolation from any effect it might have on behavior and behavioral disposition. In contrast, a cooked feel is that perception seen in terms of its effects. For example, the perception of the taste of wine is an ineffable, raw feel, while the behavioral reaction one has to the warmth or bitterness caused by that taste of wine would be a cooked feel. Cooked feels are not qualia.

    Saul Kripke argues that one key consequence of the claim that such things as raw feels can be meaningfully discussed – that qualia exist – is that it leads to the logical possibility of two entities exhibiting identical behavior in all ways despite one of them entirely lacking qualia.[9] While few claim that such an entity, called a philosophical zombie, actually exists, the possibility is raised as a refutation of physicalism.[10]

    Arguably, the idea of hedonistic utilitarianism, where the ethical value of things is determined from the amount of subjective pleasure or pain they cause, is dependent on the existence of qualia.[11]

    Arguments for the existence of qualia

    Since by definition one cannot fully convey qualia verbally, one also cannot demonstrate them directly in an argument; so a more nuanced approach is needed. Arguments for qualia generally come in the form of thought experiments designed to lead one to the conclusion that qualia exist.[12]

    "What's it like to be?" argument

    Thomas Nagel's paper "What is it like to be a bat?",[13] although it does not use the word "qualia", is often cited in debates about qualia. Nagel argues that consciousness has an essentially subjective character, a what-it-is-like aspect. He states that "an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism."[13] Nagel suggests that this subjective aspect may never be sufficiently accounted for by the objective methods of reductionistic science. He claims that "if we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue about how this could be done."[14]: 450  Furthermore, "it seems unlikely that any physical theory of mind can be contemplated until more thought has been given to the general problem of subjective and objective."[14]: 450 

    Inverted spectrum argument


    The inverted spectrum thought experiment, originally developed by John Locke,[15] invites us to imagine that we wake up one morning and find that for some unknown reason all the colors in the world have been inverted, i.e. swapped to the hue on the opposite side of a color wheel. Furthermore, we discover that no physical changes have occurred in our brains or bodies that would explain this phenomenon. Supporters of the existence of qualia argue that since we can imagine this happening without contradiction, it follows that we are imagining a change in a property that determines the way things look to us, but that has no physical basis.[16][17] In more detail:

    1. Metaphysical identity holds of necessity: if two things are identical, they are necessarily identical.
    2. If something is possibly false, it is not necessary.
    3. It is conceivable that qualia could have a different relationship to physical brain-states.
    4. If it is conceivable, then it is possible.
    5. Since it is possible for qualia to have a different relationship with physical brain-states, they cannot be identical to brain states (by 1).
    6. Therefore, qualia are non-physical.

    The argument thus claims that if we find the inverted spectrum plausible, we must admit that qualia exist (and are non-physical). Some philosophers find it absurd that armchair theorizing can prove something to exist, and the detailed argument does involve a lot of assumptions about conceivability and possibility, which are open to criticism. Perhaps it is not possible for a given brain state to produce anything other than a given quale in our universe, and that is all that matters.
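
    For readers who want the argument's skeleton, the premises can be regimented in standard modal notation. The following is a sketch in our own symbols, not the source's: write q for a quale, b for its candidate brain state, and \Box/\Diamond for necessity and possibility.

    \begin{align*}
    &(1)\;\; q = b \rightarrow \Box(q = b)           && \text{identity holds of necessity}\\
    &(2)\;\; \Diamond\neg P \rightarrow \neg\Box P   && \text{what is possibly false is not necessary}\\
    &(3\text{--}4)\;\; \Diamond(q \neq b)            && \text{conceivable, hence possible}\\
    &(5)\;\; \neg\Box(q = b)                         && \text{from (2) and (3--4)}\\
    &(6)\;\; q \neq b                                && \text{from (1) and (5), by contraposition}
    \end{align*}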

    The idea that an inverted spectrum would be undetectable in practice is also open to criticism on more scientific grounds (see main article).[16][17] There is an actual experiment – albeit somewhat obscure – that parallels the inverted spectrum argument. George M. Stratton, professor of psychology at the University of California, Berkeley, performed an experiment in which he wore special prism glasses that caused the external world to appear upside down.[18] After a few days of continually wearing the glasses, he adapted and the external world appeared upright to him. When he removed the glasses, his perception of the external world again returned to the "normal" perceptual state. If this argument provides indicia that qualia exist, it does not necessarily follow that they must be non-physical, because that distinction should be considered a separate epistemological issue.

    Zombie argument

    An argument holds that it is not inconceivable for a set of people to have qualia, while physical duplicates of that set, called "philosophical zombies", do not. These "zombies" would demonstrate outward behavior, including utterances, exactly the same as normal humans (who are assumed to have subjective phenomenology), without subjective phenomenology. For there to be a valid distinction between "normal humans" and philosophical zombies there must be no specific part or parts of the brain that directly give rise to qualia: The zombie/normal-human distinction can only be valid if subjective consciousness is causally separate from the physical brain.

    Are zombies possible? They're not just possible, they're actual. We're all zombies: Nobody is conscious. — D.C. Dennett (1992)[19]

    Explanatory gap argument

    Joseph Levine's paper Conceivability, Identity, and the Explanatory Gap takes up where the criticisms of conceivability arguments (such as the inverted spectrum argument and the zombie argument) leave off. Levine agrees that conceivability is a flawed means of establishing metaphysical realities, but points out that even if we come to the metaphysical conclusion that qualia are physical, there is still an explanatory problem.

    While I think this materialist response is right in the end, it does not suffice to put the mind-body problem to rest. Even if conceivability considerations do not establish that the mind is in fact distinct from the body, or that mental properties are metaphysically irreducible to physical properties, still they do demonstrate that we lack an explanation of the mental in terms of the physical.

    However, such an epistemological or explanatory problem might indicate an underlying metaphysical issue – the non-physicality of qualia, even if not proven by conceivability arguments, is far from ruled out.

    In the end, we are right back where we started. The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.[20]

    Knowledge argument

    F.C. Jackson offers what he calls the "knowledge argument" for qualia.[8] One example runs as follows:

    Mary the color scientist knows all the physical facts about color, including every physical fact about the experience of color in other people, from the behavior a particular color is likely to elicit to the specific sequence of neurological firings that register that a color has been seen. However, she has been confined from birth to a room that is black and white, and is only allowed to observe the outside world through a black and white monitor. When she is allowed to leave the room, it must be admitted that she learns something about the color red the first time she sees it – specifically, she learns what it is like to see that color.

    This thought experiment has two purposes. First, it is intended to show that qualia exist. If we accept the thought experiment, we believe that Mary gains something after she leaves the room – that she acquires knowledge of a particular thing that she did not possess before. That knowledge, Jackson argues, is knowledge of the quale that corresponds to the experience of seeing red, and it must thus be conceded that qualia are real properties, since there is a difference between a person who has access to a particular quale and one who does not.

    The second purpose of this argument is to refute the physicalist account of the mind. Specifically, the knowledge argument is an attack on the physicalist claim about the completeness of physical truths. The challenge posed to physicalism by the knowledge argument runs as follows:

    1. Before her release, Mary was in possession of all the physical information about color experiences of other people.
    2. After her release, Mary learns something about the color experiences of other people.
       Therefore,
    3. Before her release, Mary was not in possession of all the information about other people's color experiences, even though she was in possession of all the physical information.
       Therefore,
    4. There are truths about other people's color experience that are not physical.
       Therefore,
    5. Physicalism is false.
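
    The deductive skeleton can be made explicit in the same spirit (again our notation, not Jackson's): write P for the set of physical truths about color experience, and K_pre and K_post for the sets of truths Mary knows before and after her release.

    \begin{align*}
    &(1)\;\; P \subseteq K_{\mathrm{pre}} && \text{she knew every physical truth}\\
    &(2)\;\; \exists t\,(t \in K_{\mathrm{post}} \wedge t \notin K_{\mathrm{pre}}) && \text{she learns a new truth}\\
    &(3)\;\; t \notin K_{\mathrm{pre}} \text{ and } P \subseteq K_{\mathrm{pre}} \text{ jointly give } t \notin P && \text{the new truth is not physical}\\
    &(4\text{--}5)\;\; \text{so not every truth is physical, and physicalism is false.}
    \end{align*}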

    At first Jackson argued that qualia are epiphenomenal: not causally efficacious with respect to the physical world. Jackson does not give a positive justification for this claim – rather, he seems to assert it simply because it defends qualia against the classic problems of dualism. The natural assumption is that qualia must be causally efficacious in the physical world, but critics ask how one could argue for their existence if they did not affect our brains. If qualia are non-physical properties (which they must be in order to constitute an argument against physicalism), it is unclear how they could have a causal effect on the physical world. By redefining qualia as epiphenomenal, Jackson attempts to protect them from the demand of playing a causal role.

    Later, however, Jackson rejected epiphenomenalism. This, he argues, is because when Mary first sees red, she says "wow", so it must be Mary's qualia that cause her to say "wow". This contradicts epiphenomenalism. Since the Mary's room thought experiment seems to generate this contradiction, Jackson concludes that there must be something wrong with it. This line of response is often referred to as the "there must be a reply" reply.

    Analytical philosophers who are critics of qualia

    Daniel Dennett


    In Consciousness Explained (1991)[19] and "Quining Qualia" (1988),[21] Daniel Dennett argues against qualia by claiming that the above definition – the four commonly ascribed properties – breaks down if one tries to apply it in practice. In a series of thought experiments, which he calls "intuition pumps", he brings qualia into the world of neurosurgery, clinical psychology, and psychological experimentation. He argues that, once the concept of qualia is so imported, we can either make no use of it, or the questions introduced by it are unanswerable precisely because of the special properties defined for qualia.

    In Dennett's updated version of the inverted spectrum thought experiment, "alternative neurosurgery", you again awake to find that your qualia have been inverted – grass appears red, the sky appears orange, etc. According to the original account, you should be immediately aware that something has gone horribly wrong. Dennett argues, however, that it is impossible to know whether the diabolical neurosurgeons have indeed inverted your qualia (by tampering with your optic nerve, say), or have simply inverted your connection to memories of past qualia. Since both operations would produce the same result, you would have no means on your own to tell which operation has actually been conducted, and you are thus in the odd position of not knowing whether there has been a change in your "immediately apprehensible" qualia.

    Dennett argues that for qualia to be taken seriously as a component of experience – for them to make sense as a discrete concept – it must be possible to show that

    1. it is possible to know that a change in qualia has occurred, as opposed to a change in something else; or that
    2. there is a difference between having a change in qualia and not having one.

    Dennett attempts to show that we cannot satisfy (1) either through introspection or through observation, and that the very definition of qualia undermines the chances of satisfying (2).

    Supporters of qualia could point out that in order for you to notice a change in qualia, you must compare your current qualia with your memories of past qualia. Arguably, such a comparison would involve immediate apprehension of your current qualia and your memories of past qualia, but not the past qualia themselves. Furthermore, modern functional brain imaging has increasingly suggested that the memory of an experience is processed in similar ways and in similar zones of the brain as those originally involved in the original perception.[22] This may mean that there would be asymmetry in outcomes between altering the mechanism of perception of qualia and altering their memories. If the diabolical neurosurgery altered the immediate perception of qualia, you might not even notice the inversion directly, since the brain zones which re-process the memories would themselves invert the qualia remembered. On the other hand, alteration of the qualia memories themselves would be processed without inversion, and thus you would perceive them as an inversion. Thus, you might know immediately if memory of your qualia had been altered, but might not know if immediate qualia were inverted or whether the diabolical neurosurgeons had done a sham procedure.

    Dennett responds to the "Mary the color scientist" thought experiment by arguing that Mary would not, in fact, learn something new if she stepped out of her black and white room to see the color red. Dennett asserts that if she already truly knew "everything about color", that knowledge would include a deep understanding of why and how human neurology causes us to sense the "quale" of color. Mary would therefore already know exactly what to expect of seeing red, before ever leaving the room. Dennett argues that the misleading aspect of the story is that Mary is supposed to not merely be knowledgeable about color but to actually know all the physical facts about it, which would be a knowledge so deep that it exceeds what can be imagined, and twists our intuitions.

    If Mary really does know everything physical there is to know about the experience of color, then this effectively grants her almost omniscient powers of knowledge. Using this, she will be able to deduce her own reaction, and figure out exactly what the experience of seeing red will feel like.

    Dennett finds that many people find it difficult to see this, so he uses the case of RoboMary to further illustrate what it would be like for Mary to possess such a vast knowledge of the physical workings of the human brain and color vision. RoboMary is an intelligent robot who has, instead of the ordinary color camera-eyes, a software lock such that she is only able to perceive black and white and shades in-between.

    RoboMary can examine the computer brain of similar non-color-locked robots when they look at a red tomato, and see exactly how they react and what kinds of impulses occur. RoboMary can also construct a simulation of her own brain, unlock the simulation's color-lock and, with reference to the other robots, simulate exactly how this simulation of herself reacts to seeing a red tomato. RoboMary naturally has control over all of her internal states except for the color-lock. With the knowledge of her simulation's internal states upon seeing a red tomato, RoboMary can put her own internal states directly into the states they would be in upon seeing a red tomato. In this way, without ever seeing a red tomato through her cameras, she will know exactly what it is like to see a red tomato.

    Dennett uses this example as an attempt to show us that Mary's all-encompassing physical knowledge makes her own internal states as transparent as those of a robot or computer, and it is almost straightforward for her to figure out exactly how it feels to see red.

    Perhaps Mary's failure to learn exactly what seeing red feels like is simply a failure of language, or a failure of our ability to describe experiences. An alien race with a different method of communication or description might be perfectly able to teach their version of Mary exactly how seeing the color red would feel. Perhaps it is simply a uniquely human failing to communicate first-person experiences from a third-person perspective. Dennett suggests that the description might even be possible using English. He uses a simpler version of the Mary thought experiment to show how this might work. What if Mary were in a room without triangles and was prevented from seeing or making any triangles? An English-language description of just a few words would be sufficient for her to imagine what it is like to see a triangle – she can simply and directly visualize a triangle in her mind. Similarly, Dennett proposes, it is perfectly logically possible that the quale of what it is like to see red could eventually be described in an English-language description of millions or billions of words.

    In "Are we explaining consciousness yet?" (2001),[23] Dennett approves of an account of qualia defined as the deep, rich collection of individual neural responses that are too fine-grained for language to capture. For instance, a person might have an alarming reaction to yellow because of a yellow car that hit her previously, and someone else might have a nostalgic reaction to a comfort food. These effects are too individual-specific to be captured by English words. "If one dubs this inevitable residue qualia, then qualia are guaranteed to exist, but they are just more of the same, dispositional properties that have not yet been entered in the catalog [...]."[23]

    Paul Churchland

    According to Paul Churchland, Mary might be considered to be like a feral child. Feral children have suffered extreme isolation during childhood. Technically when Mary leaves the room, she would not have the ability to see or know what the color red is. A brain has to learn and develop how to see colors. Patterns need to form in the V4 section of the visual cortex. These patterns are formed from exposure to wavelengths of light. This exposure is needed during the early stages of brain development. In Mary's case, the identifications and categorizations of color will only be in respect to representations of black and white.[24]

    Gary Drescher

    In his book Good and Real (2006),[25] Gary Drescher compares qualia with "gensyms" (generated symbols) in Common Lisp. These are objects that Lisp treats as having no properties or components and which can only be identified as equal or not equal to other objects. Drescher explains, "we have no introspective access to whatever internal properties make the red gensym recognizably distinct from the green [...] even though we know the sensation when we experience it."[25] Under this interpretation of qualia, Drescher responds to the Mary thought experiment by noting that "knowing about red-related cognitive structures and the dispositions they engender – even if that knowledge were implausibly detailed and exhaustive – would not necessarily give someone who lacks prior color-experience the slightest clue whether the card now being shown is of the color called red." This does not, however, imply that our experience of red is non-mechanical; "on the contrary, gensyms are a routine feature of computer-programming languages".[14]: 82 
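
    Drescher's analogy can be made concrete with a minimal sketch – here in Python rather than Common Lisp, and purely as our own illustration, since Python's bare object() instances behave much like gensyms: they expose no inspectable content and support only identity comparison.

    # Minimal Python analogue of Lisp gensyms (an illustration of
    # Drescher's analogy, not code from his book). Each object() is a
    # featureless token: nothing about it can be introspected, yet it is
    # reliably recognizable as the same token or a different one.
    red_quale = object()
    green_quale = object()

    def same_sensation(a, b):
        # Identity is the only question such tokens can answer.
        return a is b

    print(same_sensation(red_quale, red_quale))    # True
    print(same_sensation(red_quale, green_quale))  # False
    # Nothing about red_quale explains why it differs from green_quale;
    # we can only recognize it when we encounter it again.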

    David Lewis

    D.K. Lewis introduced a hypothesis about types of knowledge and their transmission in qualia cases. Lewis agrees that Mary cannot learn what red looks like through her monochrome physicalist studies. But he proposes that this does not matter. Learning transmits information, but experiencing qualia does not transmit information; instead it communicates abilities. When Mary sees red, she does not get any new information. She gains new abilities – now she can remember what red looks like, imagine what other red things might look like and recognize further instances of redness.

    Lewis states that Jackson's thought experiment uses the phenomenal information hypothesis – that is, that the new knowledge that Mary gains upon seeing red is phenomenal information. Lewis then proposes a different ability hypothesis that differentiates between two types of knowledge: knowledge "that" (information) and knowledge "how" (abilities). Normally the two are entangled; ordinary learning is also an experience of the subject concerned, and people both learn information (for instance, that Freud was a psychologist) and gain ability (to recognize images of Freud). However, in the thought experiment, Mary can only use ordinary learning to gain know-that knowledge. She is prevented from using experience to gain the know-how knowledge that would allow her to remember, imagine and recognize the color red.

    We have the intuition that Mary has been deprived of some vital data to do with the experience of redness. It is also uncontroversial that some things cannot be learned inside the room; for example, we do not expect Mary to learn how to ski within the room. Lewis has articulated that information and ability are potentially different things. In this way, physicalism is still compatible with the conclusion that Mary gains new knowledge. It is also useful for considering other instances of qualia; "being a bat" is an ability, so it is know-how knowledge.[26]

    Marvin Minsky


    Artificial intelligence researcher Marvin Minsky thinks the problems posed by qualia are essentially issues of complexity, or rather of mistaking complexity for simplicity.

    Now, a philosophical dualist might then complain: "You've described how hurting affects your mind – but you still can't express how hurting feels." This, I maintain, is a huge mistake – that attempt to reify "feeling" as an independent entity, with an essence that's indescribable. As I see it, feelings are not strange alien things. It is precisely those cognitive changes themselves that constitute what "hurting" is – and this also includes all those clumsy attempts to represent and summarize those changes. The big mistake comes from looking for some single, simple, "essence" of hurting, rather than recognizing that this is the word we use for complex rearrangement of our disposition of resources.[27]

    Michael Tye


    Michael Tye believes there are no qualia, no "veils of perception" between us and the referents of our thought. He describes our experience of an object in the world as "transparent". By this he means that no matter what private understandings and/or misunderstandings we may have of some public entity, it is still there before us in reality. The idea that qualia intervene between ourselves and their origins he regards as "a massive error"; as he says, "it is just not credible that visual experiences are systematically misleading in this way";[14]: 46  "the only objects of which you are aware are the external ones making up the scene before your eyes";[14]: 47  there are "no such things as the qualities of experiences" for "they are qualities of external surfaces (and volumes and films) if they are qualities of anything."[14]: 48  He believes we can take our experiences at face value since there is no fear of losing contact with the realness of public objects.

    In Tye's thought there is no question of qualia without information being contained within them; it is always "an awareness that", always "representational". He characterizes the perception of children as a misperception of referents that are undoubtedly as present for them as they are for grown-ups. As he puts it, they may not know that "the house is dilapidated", but there is no doubt about their seeing the house. After-images are dismissed as presenting no problem for the Transparency theory because, as he puts it, after-images being illusory, there is nothing that one sees.

    Tye proposes that phenomenal experience has five basic elements, for which he has coined the acronym PANIC – Poised, Abstract, Nonconceptual, Intentional Content.[14]: 63  It is "Poised" in the sense that the phenomenal experience is always presented to the understanding, whether or not the agent is able to apply a concept to it. Tye adds that the experience is "maplike" in that, in most cases, it reaches through to the distribution of shapes, edges, volumes, etc. in the world – you may not be reading the "map" but, as with an actual map, there is a reliable match with what it is mapping. It is "Abstract" because it is still an open question in a particular case whether you are in touch with a concrete object (someone may feel a pain in a "left leg" when that leg has actually been amputated). It is "Nonconceptual" because a phenomenon can exist although one does not have the concept by which to recognize it. Nevertheless, it is "Intentional" in the sense that it represents something, again whether or not the particular observer is taking advantage of that fact; this is why Tye calls his theory "representationalism". This last makes it plain that Tye believes that he has retained a direct contact with what produces the phenomena and is therefore not hampered by any trace of a "veil of perception".[28]

    Roger Scruton

    Roger Scruton, while sceptical that neurobiology can tell us much about consciousness, believes that the concept of qualia is incoherent and that Wittgenstein's private language argument effectively disproves it. Scruton writes,

    The belief that these essentially private features of mental states exist, and that they form the introspectible essence of whatever possesses them, is grounded in a confusion, one that Wittgenstein tried to sweep away in his arguments against the possibility of a private language. When you judge that I am in pain, it is on the basis of my circumstances and behavior, and you could be wrong. When I ascribe a pain to myself, I don’t use any such evidence. I don’t find out that I am in pain by observation, nor can I be wrong. But that is not because there is some other fact about my pain, accessible only to me, which I consult in order to establish what I am feeling. For if there were this inner private quality, I could misperceive it; I could get it wrong, and I would have to find out whether I am in pain. To describe my inner state, I would also have to invent a language, intelligible only to me – and that, Wittgenstein plausibly argues, is impossible. The conclusion to draw is that I ascribe pain to myself not on the basis of some inner quale but on no basis at all.

    In his book On Human Nature, Scruton poses a potential line of criticism to this, which is that while Wittgenstein's private language argument does disprove the concept of reference to qualia, or the idea that we can talk even to ourselves of their nature, it does not disprove their existence altogether. Scruton believes that this is a valid criticism, and this is why he stops short of actually saying that qualia do not exist, and instead merely suggests that we should abandon them as a concept. However, he quotes Wittgenstein in response: "Whereof one cannot speak, thereof one must be silent."[29]

    Analytical philosophers who are proponents of qualia

    David Chalmers


    David Chalmers formulated the hard problem of consciousness, which raised the issue of qualia to a new level of importance and acceptance in the field.[30] In Chalmers (1995)[31] he also argued for what he called "the principle of organizational invariance": If a system such as one of appropriately configured computer hardware reproduces the functional organization of the brain, it will also reproduce the qualia associated with the brain.

    E. J. Lowe

    E. J. Lowe denies that indirect realism (in which we have access only to sensory features internal to the brain) necessarily implies a Cartesian dualism. He agrees with Bertrand Russell that our "retinal images" – that is, the distributions of stimulation across our retinas – are connected to "patterns of neural activity in the cortex".[32] He defends a version of the causal theory of perception in which a causal path can be traced between the external object and the perception of it. He is careful to deny that we do any inferring from the sensory field; he believes this allows us to ground access to knowledge on that causal connection. In a later work he moves closer to the non-epistemic argument in that he postulates "a wholly non-conceptual component of perceptual experience",[32] but he refrains from analyzing the relation between the perceptual and the "non-conceptual". More recently he has drawn attention to the problems that hallucination raises for the direct realist and to their disinclination to enter the discussion on the topic.[33]

    J. B. Maund

    John Barry Maund, an Australian philosopher of perception, argues that qualia can be described on two levels, a fact that he refers to as "dual coding".[34]

    If asked what we see on a television screen there are two varieties of answer that we might give. Consider the example of a "Movitype" screen, often used for advertisements and announcements in public places. A Movitype screen consists of a matrix – or "raster" as the neuroscientists prefer to call it (from the Latin rastrum, a "rake"; think of the lines on a TV screen as "raked" across) – that is made up of an array of tiny light-sources. A computer can excite these lights so as to give the impression of letters passing from right to left, or even to show moving pictures. In describing what we see on such a screen, we could adopt the everyday public language and say "I saw some sentences, followed by a picture of a 7-Up can." Although that is a perfectly adequate way of describing the sight, nevertheless, there is a scientific way of describing it which bears no relation whatsoever to this commonsense description. One could ask an electronics engineer to provide us with a computer print-out, staged across the seconds that we were watching, of the point-states of the raster of lights. This would no doubt be a long and complex document, with the state of each tiny light-source given its place in the sequence. Although such a list would give a comprehensive and point-by-point-detailed description of the state of the screen, nowhere would it contain mention of "English sentences" or "a 7-Up can".

    This illustrates that there are two ways to describe such a screen, (1) the "commonsense" one which refers to publicly recognizable objects, and (2) an accurate point-by-point account of the actual state of the field that makes no mention of what any passer-by would or would not make of it. This second description would be non-epistemic from the common sense point of view, since no objects are mentioned in the print-out, but perfectly acceptable from the engineer's point of view. Note that, if one carries this analysis across to human sensing and perceiving, this rules out Dennett's claim that all qualiaphiles must regard qualia as "ineffable", for at this second level they are in principle quite "effable" – indeed, it is not ruled out that some neurophysiologist of the future might be able to describe the neural detail of qualia at this level.
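
    Maund's two levels can be illustrated with a toy sketch (our own example, not Maund's): the very same pixel raster admits both an engineer's exhaustive point-by-point description and a commonsense report of what a viewer would say they saw.

    # Toy illustration of Maund's "dual coding" (our example, not his).
    # The same screen state can be described at two levels.
    raster = [
        [1, 0, 1, 0, 1],
        [1, 0, 1, 0, 1],
        [1, 1, 1, 0, 1],
        [1, 0, 1, 0, 1],
        [1, 0, 1, 0, 1],
    ]

    def engineer_description(grid):
        # Level 2: an exhaustive, non-epistemic list of point states,
        # mentioning no letters or objects at all.
        return [(row, col, state)
                for row, line in enumerate(grid)
                for col, state in enumerate(line)]

    def commonsense_description():
        # Level 1: what a passer-by would report seeing.
        return "the letters 'H I'"

    print(len(engineer_description(raster)), "point states")  # 25 point states
    print(commonsense_description())
    # Both descriptions are of one and the same screen, yet the
    # point-by-point account nowhere mentions what the viewer saw.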

    Maund extended his argument with reference to color.[35] Color he sees as a dispositional property, not an objective one. Colors are "virtual properties": it is as if things possessed them. Although the naïve view attributes them to objects, they are intrinsic, non-relational, inner experiences. This allows for the facts of difference between one person and another, and also leaves aside the claim that external objects are colored.

    Moreland Perkins

    In his book Sensing the World,[36] Moreland Perkins argues that qualia need not be identified with their objective sources: a smell, for instance, bears no direct resemblance to the molecular shape that gives rise to it, nor is a toothache actually in the tooth. Like Hobbes he views the process of sensing as complete in itself; as he puts it, it is not like "kicking a football" where an external object is required – it is more like "kicking a kick". This explanation evades the Homunculus Objection, as adhered to, for example, by Gilbert Ryle. Ryle was unable to entertain this possibility, protesting that "in effect it explained the having of sensations as the not having of sensations."[37] However, A.J. Ayer called this objection "very weak" as it betrayed an inability to detach the notion of eyes, or indeed any sensory organ, from the neural sensory experience.[38]

    Howard Robinson and William Robinson

    Howard Robinson specialized in philosophy of mind. He argued against explanations of sensory experience that reduce it to physical origins. He never regarded the theory of sense-data as refuted, but set out to refute persuasive objections to it. The version of the sense-data theory he defends takes what is before consciousness in perception to be qualia: mental presentations that are causally linked to external entities but are not themselves physical. He is therefore a dualist: one who takes both matter and mind to have real and metaphysically distinct natures. One of his articles takes the physicalist to task for ignoring the fact that sensory experience can be entirely free of representational character. He cites phosphenes as a stubborn example (phosphenes are flashes of light that result either from sudden pressure in the brain – as induced, for example, by intense coughing – or through direct physical pressure on the retina), and points out that it is counter-intuitive to argue that these are not visual experiences on a par with open-eye seeing.

    William Robinson (no relation) takes a similar view in his book, Understanding Phenomenal Consciousness.[39] He is unusual as a dualist in calling for research programs that investigate the relation of qualia to the brain. The problem is so stubborn, he says, that too many philosophers would prefer "to explain it away", but he would rather have it explained and does not see why the effort should not be made. However, he does not expect a straightforward scientific reduction of phenomenal experience to neural architecture; he regards this as a forlorn hope. The "Qualitative Event Realism" that Robinson espouses sees phenomenal consciousness as non-material events that are caused by brain events but not identical with them.

    He refuses to set aside the vividness – and commonness – of mental images, both visual and aural. In this he opposes Daniel Dennett, who has difficulty crediting such experience in others. He is similar to Moreland Perkins in keeping his investigation wide enough to apply to all the senses.

    Edmond Wright

    Edmond Wright is a philosopher who considers the inter-subjective aspect of perception.[40] From John Locke onwards it had been typical to frame perception problems in terms of a single subject S looking at a single entity E with a property p. Suppose, however, that we begin instead with the facts of the differences in sensory registration from person to person, coupled with the differences in the criteria we have learned for distinguishing what we together call "the same" things. A problem then arises of how two persons align their differences on these two levels so that they can still achieve a practical overlap on parts of the real around them – and, in particular, update each other about them.

    Wright was struck by the hearing difference between himself and his son: he discovered that his son could hear sounds up to nearly 20 kHz while his own range only reached about 14 kHz. This implies that a difference in qualia could emerge in human action (for example, the son could warn the father of a high-pitched escape of a dangerous gas kept under pressure, the sound-waves of which would be producing no qualia evidence at all for the father). The relevance for language thus becomes critical, for an informative statement can best be understood as an updating of a perception – and this may involve a radical re-selection from the qualia fields viewed as non-epistemic, perhaps even of the presumed singularity of "the" referent, a fortiori if that "referent" is the self. He distinguishes his view from that of Antti Revonsuo, who too readily makes his "virtual space" "egocentric".

    Wright emphasizes what he asserts is a core feature of communication: in order for an updating to be set up at all, both speaker and hearer have to behave as if they have identified "the same singular thing", a pretence which, he notes, partakes of the structure of a joke or a story.[40] Wright says that this systematic ambiguity about "what" is perceived seems to opponents of qualia to be a sign of fallacy in the argument (as ambiguity is in pure logic), whereas, on the contrary, it is a sign of something those speaking to each other have to learn to take advantage of. He argues that an important feature of human communication is the degree and character of the mutual faith maintained by the participants in the dialogue, something that has priority over such virtues of language as "sincerity", "truth", and "objectivity". Indeed, he considers that to prioritize those virtues over faith is to move into superstition.

    Erwin Schrödinger

    Erwin Schrödinger, a theoretical physicist and one of the leading pioneers of quantum mechanics, also published in the areas of colorimetry and color perception. In several of his philosophical writings, he defends the notion that qualia are not physical.

    The sensation of colour cannot be accounted for by the physicist's objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.[41]: 154 

    He goes on to remark that subjective experiences do not form a one-to-one correspondence with stimuli. For example, light of wavelengths in the neighborhood of 590 nm produces the sensation of yellow, whereas exactly the same sensation is produced by mixing red light, of wavelength 760 nm, with green light, of wavelength 535 nm. From this he concludes that the sensations produced stand in no "numerical connection with these physical, objective characteristics of the waves".
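
    Schrödinger's example is the phenomenon now called metamerism, and its logic can be shown with a toy calculation. In the Python sketch below (the sensitivity numbers are invented for illustration and are not real colorimetric data), two physically different spectra drive identical receptor responses:

        SENSITIVITY = {       # wavelength (nm) -> toy (L-cone, M-cone) response per unit power
            535: (0.2, 0.8),  # "green"
            590: (0.5, 0.5),  # "yellow"
            760: (0.8, 0.2),  # "red"
        }

        def cone_response(spectrum):
            """Sum toy cone responses over a spectrum given as {wavelength: power}."""
            L = sum(p * SENSITIVITY[w][0] for w, p in spectrum.items())
            M = sum(p * SENSITIVITY[w][1] for w, p in spectrum.items())
            return (L, M)

        pure_yellow = {590: 1.0}
        red_green_mix = {760: 0.5, 535: 0.5}

        print(cone_response(pure_yellow))    # (0.5, 0.5)
        print(cone_response(red_green_mix))  # (0.5, 0.5): same response, different physics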

    Schrödinger concludes by proposing how we might arrive at the mistaken belief that a satisfactory theoretical account of qualitative experience has been, or might ever be, achieved:

    Scientific theories serve to facilitate the survey of our observations and experimental findings. Every scientist knows how difficult it is to remember a moderately extended group of facts, before at least some primitive theoretical picture about them has been shaped. It is therefore small wonder, and by no means to be blamed on the authors of original papers or of text-books, that after a reasonably coherent theory has been formed, they do not describe the bare facts they have found or wish to convey to the reader, but clothe them in the terminology of that theory or theories. This procedure, while very useful for our remembering the fact in a well-ordered pattern, tends to obliterate the distinction between the actual observations and the theory arisen from them. And since the former always are of some sensual quality, theories are easily thought to account for sensual qualities; which, of course, they never do.[41]: 163–164 

    Neuroscientists who state that qualia exist

    Gerald Edelman

    Neuroscientist and Nobel laureate in Physiology or Medicine Gerald Edelman, in his book Bright Air, Brilliant Fire, argues:[42]

    One alternative that definitely does not seem feasible is to ignore completely the reality of qualia, formulating a theory of consciousness that aims by its descriptions alone to convey to a hypothetical “qualia-free” observer what it is to feel warmth, see green, and so on. In other words, this is an attempt to propose a theory based on a kind of God's-eye view of consciousness. But no scientific theory of whatever kind can be presented without already assuming that observers have sensation as well as perception. To assume otherwise is to indulge the errors of theories that attempt syntactical formulations mapped onto objectivist interpretations – theories that ignore embodiment as a source of meaning (see the Postscript). There is no qualia-free scientific observer.

    Antonio Damasio

    Neurologist Antonio Damasio, in his book The Feeling Of What Happens, states:[43]

    Qualia are the simple sensory qualities to be found in the blueness of the sky or the tone of sound produced by a cello, and the fundamental components of the images in the movie metaphor are thus made of qualia.

    Damasio also argues that consciousness is subjective and is different from behavior:[43]: 307–309 

    The resistance found in some scientific quarters to the use of subjective observations is a revisitation of an old argument between behaviorists, who believed that only behaviors, not mental experiences, could be studied objectively, and cognitivists, who believed that studying only behavior did not do justice to human complexity. The mind and its consciousness are first and foremost private phenomena, much as they offer many public signs of their existence to the interested observer. The conscious mind and its constituent properties are real entities, not illusions, and they must be investigated as the personal, private, subjective experiences that they are. The idea that subjective experiences are not scientifically accessible is nonsense.

    Subjective entities require, as do objective ones, that enough observers undertake rigorous observations according to the same experimental design; and they require that those observations be checked for consistency across observers and that they yield some form of measurement. Moreover, knowledge gathered from subjective observations, e.g., introspective insights, can inspire objective experiments, and, no less importantly, subjective experiences can be explained in terms of the available scientific knowledge. The idea that the nature of subjective experiences can be grasped effectively by the study of their behavioral correlates is wrong. Although both mind and behavior are biological phenomena, mind is mind and behavior is behavior.

    Mind and behavior can be correlated, and the correlation will become closer as science progresses, but in their respective specifications, mind and behavior are different. This is why, in all likelihood, I will never know your thoughts unless you tell me, and you will never know mine until I tell you.

    Damasio also addresses qualia in his book Self Comes to Mind.[44]

    Rodolfo Llinás

    Neurologist Rodolfo Llinás states in his book I of the Vortex that from a strictly neurological perspective, qualia exist and are important to the organism's survival. He argues that qualia were important for the evolution of the nervous system of organisms, including simple organisms such as insects:[45]: 201–221 

    There are today two similar beliefs concerning the nature of qualia. The first is that qualia represent an epiphenomenon that is not necessary for the acquisition of consciousness. Second and somewhat related is the belief that while being the basis for consciousness, qualia appeared only in the highest life forms, suggesting that qualia represent a recently evolved central function that is present in only the more advanced brains. This view relegates the more lowly animals, for example ants, to a realm characterized by the absence of subjective experiences of any kind. It implies that these animals are wired with sets of automatic, reflexively organized circuits that provide for survival by maintaining a successful, albeit purely reactive interaction with the ongoing external world. Although primitive creatures such as ants and cockroaches may be wildly successful, for all practical purposes they are biological automatons.

    […] To me, these views lack a proper evolutionary perspective, which is perhaps why qualia are given so little overall emphasis in the study of brain function. We clearly understand that the functional architecture of the brain is a product of the slow tumblings of evolution and that brain function implements what natural selection has found to be the most beneficial in terms of species survivability. What is not often understood is how deeply related qualia truly are to the evolutionary, functional structure of the brain. […]

    One cannot operate without qualia; they are properties of mind of monumental importance.

    Llinás argues that qualia are ancient, necessary for an organism's survival, and a product of neuronal oscillation. He cites evidence from anesthesia of the brain and subsequent stimulation of the limbs to demonstrate that qualia can be "turned off" by changing only the variable of neuronal oscillation (local brain electrical activity) while all other connections remain intact, and he argues for an oscillatory-electrical origin of qualia, or at least of important aspects of them.[45]: 202–207 

    Vilayanur Ramachandran

    Vilayanur S. Ramachandran and William Hirstein[46] proposed three laws of qualia (with a fourth later added): "functional criteria that need to be fulfilled in order for certain neural events to be associated with qualia" by philosophers of mind (a toy rendering of the criteria follows the list):

    1. Qualia are irrevocable and indubitable. You don't say 'maybe it is red but I can visualize it as green if I want to'. An explicit neural representation of red is created that invariably and automatically 'reports' this to higher brain centres.
    2. Once the representation is created, what can be done with it is open-ended. You have the luxury of choice, e.g., if you have the percept of an apple you can use it to tempt Adam, to keep the doctor away, bake a pie, or just to eat. Even though the representation at the input level is immutable and automatic, the output is potentially infinite. This isn't true for, say, a spinal reflex arc where the output is also inevitable and automatic. Indeed, a paraplegic can even have an erection and ejaculate without an orgasm.
    3. Short-term memory. The input invariably creates a representation that persists in short-term memory – long enough to allow time for choice of output. Without this component, again, you get just a reflex arc.
    4. Attention. Qualia and attention are closely linked. You need attention to fulfill criterion number two; to choose. A study of circuits involved in attention, therefore, will shed much light on the riddle of qualia.[47]
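
    Read as functional criteria, the four laws lend themselves to a checklist. The Python sketch below is one hedged way of expressing them; the field names are assumptions introduced here, not terminology from Ramachandran and Hirstein:

        from dataclasses import dataclass

        @dataclass
        class NeuralEvent:
            irrevocable: bool        # law 1: representation is automatic, not revisable at will
            open_ended_output: bool  # law 2: many possible uses of the representation
            buffered: bool           # law 3: persists in short-term memory
            attended: bool           # law 4: available to attention

        def meets_qualia_criteria(e):
            """All four functional criteria must hold jointly."""
            return e.irrevocable and e.open_ended_output and e.buffered and e.attended

        # A spinal reflex arc is automatic but fails laws 2-4:
        reflex = NeuralEvent(irrevocable=True, open_ended_output=False,
                             buffered=False, attended=False)
        print(meets_qualia_criteria(reflex))  # False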

    These authors approach qualia from an empirical perspective and not as a logical or philosophical problem. They wonder how qualia evolved. They consider a skeptical point of view in which, since the objective scientific description of the world is complete without qualia, it is nonsense to ask the question of why they evolved or what qualia are for. But they rule out such an option.

    Based on the parsimony principle of Occam's razor, one could accept epiphenomenalism and deny qualia since they are not necessary for a description of the functioning of the brain. However, they argue that Occam's razor is not useful for scientific discovery.[46] For example, the discovery of relativity in physics was not the product of accepting Occam's razor but rather of rejecting it and asking the question of whether a deeper generalization, not required by the currently available data, was true and would allow for unexpected predictions. Most scientific discoveries arise, these authors argue, from ontologically promiscuous conjectures that do not come from current data.

    The authors then point out that skepticism might be justified in the philosophical field, but that science is the wrong place for such skepticism. Such skeptical questions might include asking whether "your red" might be "my green", or whether we can be logically certain that we are not dreaming. Science, these authors assert, deals with what is probably true, beyond reasonable doubt, not with what can be known with complete and absolute certainty. The authors add that most neuroscientists and even most psychologists dispute the very existence of the "problem" of qualia.[46]

    Roger Orpwood

    Roger Orpwood, an engineer who studies neural mechanisms, proposed a neurobiological model of how qualia, and ultimately consciousness, arise. Advances in cognitive and computational neuroscience invite study of the mind and qualia from a scientific perspective. Orpwood does not deny the existence of qualia, nor does he take a position on their physical or non-physical status. He suggests that qualia are created through the neurobiological mechanism of re-entrant feedback in cortical systems.[48][49]

    Orpwood first addresses the issue of information. One unsolved aspect of qualia is the nature of the information involved in creating the experience. Orpwood does not take a position on the metaphysics of the information underlying the experience of qualia, nor does he state what information ultimately is, but he does distinguish two types: the information structure and the information message. Information structures are the physical vehicles and structural, biological patterns that encode information. The encoded content is the information message: what that information is about. A neural mechanism or network receives input information structures, completes its designated instructional task (the firing of the neuron or network), and outputs a modified information structure to downstream regions. The information message is the purpose and meaning of the information structure, and it exists causally as a result of that particular information structure. Modifying the information structure changes the meaning of the information message, but the message itself cannot be directly altered.
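
    One hedged way to picture the distinction, in a toy Python sketch under assumptions introduced here rather than Orpwood's own formalism, is to make the message a read-only property derived from the structure:

        class Signal:
            def __init__(self, structure):
                self.structure = structure    # the physical vehicle, e.g. a firing pattern

            @property
            def message(self):                # derived from the structure, never stored
                return "high contrast" if sum(self.structure) > 2 else "low contrast"

        s = Signal([1, 1, 0])
        print(s.message)         # "low contrast"
        s.structure = [1, 1, 1]  # only the structure can be changed directly...
        print(s.message)         # "high contrast" ...the message follows from it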

    Local cortical networks can receive feedback from their own output information structures. This form of local feedback continuously cycles part of the network's output structures back as its next input information structure. Since the output structure must represent the information message derived from the input structure, each fed-back cycle re-presents the output structure the network just generated. Because the network can recognize only the input information structure, not the information message, it is unaware that it is representing its own previous outputs; the neural mechanisms are merely completing their instructional tasks and outputting whatever information structures they can process. Orpwood proposes that such local networks can settle into an attractor state that consistently outputs exactly the same information structure it receives as input. Instead of only representing the information message derived from the input structure, the network now represents its own output and thereby its own information message. As the input structures are fed back, the network identifies each previous information structure as a previous representation of the information message. As Orpwood writes:

    Once an attractor state has been established, the output [of a network] is a representation of its own identity to the network.[49]: 4 
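
    The attractor idea itself is easy to sketch. Below is a minimal Python illustration under toy assumptions (a "network" is just a function from structures to structures): output is cycled back as input until it reproduces itself, the fixed point standing in for the self-representing state Orpwood describes:

        def run_to_attractor(network, structure, max_cycles=100):
            """Cycle the output back as input until the structure stops changing."""
            for _ in range(max_cycles):
                next_structure = network(structure)
                if next_structure == structure:  # fixed point: the state reproduces itself
                    return structure
                structure = next_structure
            return None                          # no attractor reached within the budget

        # Toy network: threshold a tuple of activations to 0/1.
        toy_network = lambda s: tuple(1 if x >= 0.5 else 0 for x in s)
        print(run_to_attractor(toy_network, (0.2, 0.9, 0.6)))  # (0, 1, 1), then stable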

    Orpwood explains the neurobiological manifestation of qualia as representations of the network's own output structures, by which the network represents its own information message. This is particular to networks of pyramidal neurons. Although computational neuroscience still has much to investigate regarding pyramidal neurons, their complex circuitry is unusual, and the complexity of a species's pyramidal-neuron networks is directly related to that species's functional capabilities.[50] When human pyramidal networks are compared with those of other primates, and with those of species showing less intricate behavioral and social interactions, the complexity of these networks declines sharply. Pyramidal networks are also at their most complex in frontal brain regions, regions associated with the conscious assessment and modification of one's immediate environment, capacities often referred to as executive functions.

    Sensory input is needed to gain information from the environment, and perception of that input is necessary in order to navigate and modify one's interactions with the environment. This suggests that the frontal regions containing the more complex pyramidal networks support an increased perceptive capacity. On the assumptions that perception is necessary for conscious thought and that qualia arise from consciousness of some perception, qualia may be specific to the functional capacity of pyramidal networks. For this reason Orpwood believes that the mechanisms of re-entrant feedback may not only create qualia but also be the foundation of consciousness.

    Other issues

    Indeterminacy

    A criticism similar to Nietzsche's criticism of Kant's "thing in itself" applies also to qualia: Qualia are unobservable in others and unquantifiable in us. We cannot possibly be sure, when discussing individual qualia, that we are even discussing the same phenomena. Thus, any discussion of them is of indeterminate value, as descriptions of qualia are necessarily of indeterminate accuracy.

    Qualia, like "things in themselves", have no publicly demonstrable properties. This, along with the impossibility of being sure that we are communicating about the same qualia, makes them of indeterminate value and definition in any philosophy in which proof relies upon precise definition.

    On the other hand, qualia could be considered akin to Kantian phenomena, since they are held to be seemings of appearances. Antti Revonsuo, however, considers that within neurophysiological inquiry a definition of qualia at the level of the phenomenal field may become possible, just as a television picture can be defined at the level of its liquid-crystal pixels.

    Causal efficacy

    Whether qualia or consciousness can play any causal role in the physical world remains an open question. Epiphenomenalism acknowledges the existence of qualia while denying them any causal power. The position has been criticized by a number of philosophers,[a] if only because our own consciousness seems to be causally active.[54] In order to avoid epiphenomenalism, one who believes that qualia are nonphysical would need to embrace something like interactionist dualism, or perhaps emergentism, the claim that there are as yet unknown causal relations between the mental and the physical. This in turn would imply that qualia can be detected by an external observer through their causal powers.

    Epistemological issues

    Examples of qualia might include "the pain of a headache, the taste of wine, or the redness of an evening sky". But such examples prejudge a central debate about qualia. Suppose someone wants to know the nature of the liquid-crystal pixels on a television screen, the tiny elements that make up the picture. It would not suffice as an answer to say that they are the "redness of an evening sky" as it appears on the screen; this would ignore their real character. Relying on a list like the one above assumes that we must tie sensations both to the notion of given objects in the world (the "head", the "wine", "an evening sky") and to the properties with which we characterize the experiences themselves ("redness", for example).

    Nor is it satisfactory to print a little red square as a specimen, for, since each person has a slightly different registration of the light-rays,[55] doing so misleadingly suggests that we all have the same response. Imagine seeing "a red square" on twenty screens at once in a television shop, each slightly different: something of vital importance would be overlooked if a single example were taken as defining them all.

    Yet it remains disputed whether identification with the external object should still be the core of a correct approach to sensation, for many state the definition thus precisely because they regard the link with external reality as crucial. If sensations are instead defined as "raw feels", the reliability of knowledge is threatened: if one sees them as neurophysiological happenings in the brain, it is difficult to understand how they could have any connection to entities, whether in the body or in the external world.

    John McDowell, for example, declared that to countenance qualia as a "bare presence" prevents us from ever gaining certain ground for our knowledge.[56] The issue is epistemological: allowing qualia to be fields in which only virtual constructs are before the mind would appear to block access to knowledge, since it puts the entities about which we require knowledge behind a "veil of perception", an occult field of "appearance" that leaves us ignorant of the reality presumed to lie beyond it. McDowell is convinced that such uncertainty propels us into the dangerous regions of relativism and solipsism. These considerations constitute an ethical argument against treating qualia as something going on in the brain, and such implications are probably largely responsible for the fact that in the 20th century it was regarded as not only freakish but dangerously misguided to uphold the notion of sensations as going on inside the head. The argument was usually strengthened with mockery at the very idea of "redness" being in the brain:[57] "How can there be red neurons in the brain?"

    Viewing sensations as "raw feels" implies that, to carry on the metaphor, they have not yet been "cooked", that is, unified into "things" and "persons". That unification is something the mind performs after responding to the blank input, a response driven by motivation: initially by pain and pleasure, and subsequently, once memories have been implanted, by desire and fear. Such a "raw-feel" state has been called "non-epistemic". In support of this view, proponents of the non-epistemic theory cite a range of empirical facts, for example:

    • There are brain-damaged persons, known as "agnosics" (literally "not-knowing"), who still have vivid visual sensations but are quite unable to identify any entity before them, including parts of their own body.
    • There is the similar predicament of persons, formerly blind, who are given sight for the first time; consider, too, what a newborn baby experiences.

    The German physicist Hermann von Helmholtz proposed an experiment to demonstrate the non-epistemic nature of qualia: stand with your back to a familiar landscape, bend down, and look at the landscape between your legs; in the upside-down view you will find it difficult to recognize what you found familiar before.[58]

    These examples suggest the reality of a "bare presence", that is, knowledgeless sensation that is no more than evidence. Supporters of the non-epistemic theory thus regard sensations as data only in the sense that they are "given" (Latin datum, "given") and fundamentally involuntary, which, they argue, is a reason for not regarding them as basically mental. In the 20th century they were called "sense-data" by the proponents of qualia, but this led to the confusion that they carried with them reliable proofs of objective causal origins. For instance, one supporter of qualia was happy to speak of the redness and bulginess of a cricket ball as a typical "sense-datum",[59] though not all proponents were happy to define qualia by their relation to external entities.[60] The modern argument centers on how we learn, driven by motivation, to interpret sensory evidence in terms of "things", "persons", and "selves" through a continuing process of feedback.

    The definition of qualia inevitably brings with it philosophical and neurophysiological presuppositions. The question of what qualia can be raises profound issues in the philosophy of mind, since some materialists want to deny their existence altogether; if they are accepted, on the other hand, they cannot easily be accounted for, as they raise the difficult problem of consciousness. There are committed dualists such as Richard L. Amoroso and John Hagelin who believe that the mental and the material are two distinct aspects of physical reality, like the distinction between the classical and quantum regimes.[61] In contrast, there are direct realists for whom qualia are unscientific because there appears to be no way of making them fit within the modern scientific picture; and there are committed proselytizers for a final truth who reject them as forcing knowledge out of reach.

    See also

    • Binding problem – Unanswered question in the study of consciousness
    • Blockhead (thought experiment) – Thought experiment about a computer programmed with all possible sentences in a natural language, so as to pass the Turing test despite lacking intelligence
    • Chinese room – Thought experiment on artificial intelligence by John Searle
    • Explanatory gap – Inability to describe conscious experiences in solely physical or structural terms
    • Feeling – Conscious subjective experience of emotion
    • Form constant – Recurringly observed geometric pattern
    • Further facts – Philosophy idea
    • Hard problem of consciousness – Philosophical concept, first stated by David Chalmers in 1995
    • Ideasthesia – Phenomenon in which concepts evoke sensory experiences
    • Knowledge argument – Thought experiment
    • Mind–body problem – Open question in philosophy of how abstract minds interact with physical bodies
    • Open individualism – Philosophical concept
    • Philosophical zombie – Thought experiment in philosophy
    • Process philosophy – Philosophical approach
    • Self-awareness – Capacity for introspection and individuation as a subject
    • Self-reference – Sentence, idea or formula that refers to itself
    • Subjectivity – Philosophical concept, related to consciousness, agency, personhood, reality, and truth
    • Synesthesia – Neurological condition involving the crossing of senses
    • Vertiginous question – Philosophical question: “of all the subjects of experience out there, why is this one—the one corresponding to the human being referred to as me—the one whose experiences are live?”

    Notes


    a. Epiphenomenalism has few friends. It has been deemed "thoughtless and incoherent" (Taylor 1927),[51] "unintelligible" (Benecke 1901),[52] and "truly incredible" (McLaughlin 1994).[53]

    References


  • Kriegel, Uriah (2014). Current Controversies In Philosophy of Mind. New York: Taylor & Francis. p. 201. ISBN 978-0415530866.

  • Dennett, Daniel (1985-11-21). Quining Qualia. Oxford University Press. Retrieved 2020-05-19.

  • Dennett, D. (2002). "Quining qualia". In Chalmers, D. (ed.). Philosophy of mind. Classical and contemporary readings. Oxford University Press. pp. 226–246.
    • Dennett, D. (2015). "Why and how does consciousness seem the way it seems?". In Metzinger, T.; Windt, J. (eds.). Open mind. Mind Group. pp. 387–398.

  • Damasio, A. (1999). The feeling of what happens. Harcourt Brace.
    • Edelman, G.; Gally, J.; Baars, B. (2011). "Biology of consciousness". Frontiers In Psychology. 2 (4): 1–6.
    • Edelman, G. (1992). Bright air, brilliant fire. BasicBooks.
    • Edelman, G. (2003). "Naturalizing consciousness: A theoretical framework". Proceedings of the National Academy of Sciences. 100 (9): 5520–5524.
    • Koch, C. (2019). The feeling of life itself. The MIT Press.
    • Llinás, R. (2003). I of the Vortex. MIT Press. pp. 202–207.
    • Oizumi, M.; Albantakis, L.; Tononi, G. (2014). "From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0". PLOS Computational Biology. 10. e1003588
    • Overgaard, M.; Mogensen, J.; Kirkeby-Hinrup, A., eds. (2021). Beyond neural correlates of consciousness. Routledge Taylor & Francis.
    • Ramachandran, V.; Hirstein, W. (1997). "Three laws of qualia. What neurology tells us about the biological functions of consciousness, qualia and the self". Journal of consciousness studies. 4 (5–6): 429–458.
    • Tononi, G.; Boly, M.; Massimini, M.; Koch, C. (2016). "Integrated information theory: From consciousness to its physical substrate". Nature Reviews Neuroscience. 17: 450–461.

  • Eliasmith, Chris (2004-05-11). "qualia". Philosophy. Dictionary of Philosophy of Mind. Canada: University of Waterloo. Retrieved 2010-12-03.

  • Peirce, C.S. Writings Chronological Edition. Vol. 1. pp. 477–478.

  • Lewis, C.I. (1929). Mind and the World-Order: Outline of a theory of knowledge. New York: Charles Scribner's Sons. p. 121.

  • Jackson, Frank (1982). "Epiphenomenal Qualia" (PDF). The Philosophical Quarterly. 32 (127): 127–136. doi:10.2307/2960077. JSTOR 2960077. Retrieved August 7, 2019.

  • Kripke, Saul "Identity and Necessity" (1971)

  • Kripke, Saul A. (1972), "Naming and Necessity", Semantics of Natural Language, Dordrecht: Springer Netherlands, pp. 253–355, ISBN 978-90-277-0310-1, retrieved 2023-04-26

  • Levy, Neil (1 January 2014). "The value of consciousness". Journal of Consciousness Studies. 21 (1–2): 127–138. PMC 4001209. PMID 24791144.

  • "Qualia: The Knowledge Argument". § 1. History of the underlying ideas (discussing Jackson 1982; Feigl 1958; Broad 1925). Stanford Encyclopedia of Philosophy. Stanford University. 2021.

  • Nagel, Thomas (October 1974). "What is it like to be a bat?". The Philosophical Review. 83 (4): 435–450. doi:10.2307/2183914. JSTOR 2183914.

  • Tye, Michael (2000). Consciousness, Color, and Content. Cambridge, MA: MIT Press.

  • Locke, John (1975) [1689]. An Essay Concerning Human Understanding. Oxford: Oxford University Press. II, xxxii, 15.

  • "Inverted qualia". Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 2010-12-03 – via Plato.stanford.edu.

  • Hardin, C.L. (1987). "Qualia and materialism: Closing the explanatory gap". Philosophy and Phenomenological Research. 48 (2): 281–298. doi:10.2307/2107629. JSTOR 2107629.

  • Stratton, George M. (1896). "Some preliminary experiments on vision" (PDF). Psychological Review.

  • Dennett, D.C. (20 October 1992). Consciousness Explained (1st ed.). Back Bay Books. ISBN 978-0316180665.

  • "Joseph Levine, Conceivability, Identity, and the Explanatory Gap". Cognet.mit.edu. 2000-09-26. Archived from the original on 2010-08-31. Retrieved 2010-12-03.

  • Dennett, D.C. (1988). "Quining qualia". In Marcel, A.J.; Bisiach, E. (eds.). Consciousness in Contemporary Science. Clarendon Press / Oxford University Press. pp. 42–77.

  • Ungerleider, L.G. (3 November 1995). "Functional brain imaging studies of cortical mechanisms for memory". Science. 270 (5237): 769–775. Bibcode:1995Sci...270..769U. doi:10.1126/science.270.5237.769. PMID 7481764. S2CID 37665998.

  • Dennett, D.C. (April 2001). "Are we explaining consciousness yet?". Cognition. 79 (1–2): 221–237. doi:10.1016/s0010-0277(00)00130-x. PMID 11164029. S2CID 2235514.

  • Churchland, Paul (2004). "Knowing qualia: A reply to Jackson (with postscript 1997)". In Ludlow, Peter; Nagasawa, Yujin; Stoljar, Daniel (eds.). There's Something about Mary. Cambridge, MA: MIT Press. pp. 163–178.

  • Drescher, G. (2006). Good and Real. Cambridge, MA: MIT Press. pp. 81–82. ISBN 978-0262042338 – via Google Books.

  • Lewis, D.K. (2004). "What experience teaches". In Ludlow, Peter; Nagasawa, Yujin; Stoljar, Daniel (eds.). There's Something about Mary. Cambridge, MA: MIT Press. pp. 77–103.

  • "Edge interview with Marvin Minsky". Edge.org. Edge Foundation, Inc. 1998-02-26. Retrieved 2010-12-03.

  • Tye, M. (1991). The Imagery Debate. Cambridge, MA: MIT Press.
    • Tye, M. (1995). Ten Problems of Consciousness: A representational theory of the phenomenal mind. Cambridge, MA: MIT Press.

  • Scruton, Roger (2017). On Human Nature.

  • Kind, Amy. "Qualia". Internet Encyclopedia of Philosophy. Retrieved 6 November 2022.

  • Chalmers, D. (1995). "Absent qualia, fading qualia, dancing qualia". In Metzinger, Thomas (ed.). Conscious Experience. Imprint Academic; a free copy is available as "qualia on consc.net".

  • Lowe, E. J. (1996). Subjects of Experience. Cambridge: Cambridge University Press. p. 101.

  • Lowe, E. J. (2008). "Illusions and hallucinations as evidence for sense-data". In Wright, Edmond (ed.). The Case for Qualia. Cambridge, MA: MIT Press. pp. 59–72. ISBN 978-0262731881 – via Google Books.

  • Maund, J.B. (September 1975). "The representative theory of perception". Canadian Journal of Philosophy. 5 (1): 41–55. doi:10.1080/00455091.1975.10716096. S2CID 146937154.

  • Maund, J.B. (1995). Colours: Their nature and representation. Cambridge: Cambridge University Press. ISBN 978-0521472739 – via Google Books;
    • Maund, J.B. (2003). Perception. Chesham: Acumen Pub. Ltd.

  • Perkins, Moreland (1983). Sensing the World. Indianapolis, IN: Hackett Pub. Co. ISBN 978-0915145751 – via Archive.org.

  • Ryle, G. (1949). The Concept of Mind. London: Hutchinson. p. 215.

  • Ayer, A.J. (1957). The Problem of Knowledge. Harmondsworth: Penguin Books. p. 107.

  • Robinson, William (2004). Understanding Phenomenal Consciousness. Cambridge: Cambridge University Press.

  • Wright, Edmond, ed. (2008). The Case for Qualia (publisher's abstract). Cambridge, MA: MIT Press.

  • Schrödinger, Erwin (2001) [1958]. What is Life? : The physical aspects of the living cell (reprint ed.). Cambridge: Cambridge University Press. ISBN 978-0521427081.

  • Edelman, G. (1992). Bright air, brilliant fire. On the matter of mind. Basic Books. p. 115.

  • Damasio, A. (1999). The feeling of what happens. Heineman.

  • Damasio, A. (2010). Self comes to mind. Constructing the conscious brain. Knopf Doubleday Publishing.

  • Llinás, R. (2002). I of the Vortex. From neurons to self. The MIT Press.

  • Ramachandran, V.S.; Hirstein, W. (1 May 1997). "Three laws of qualia: What neurology tells us about the biological functions of consciousness". Journal of Consciousness Studies. 4 (5–6): 429–457; also available as "Three laws of qualia" (PDF) from imprint.co.uk.

  • Ramachandran, V.S.; Hirstein, W. (1 December 2001). "Synaesthesia – a window into perception, thought, and language". Journal of Consciousness Studies. 8 (12): 3–34.

  • Orpwood, Roger D. (December 2007). "Neurobiological mechanisms underlying qualia". Journal of Integrative Neuroscience. 06 (4): 523–540. doi:10.1142/s0219635207001696. PMID 18181267.
    • Orpwood, Roger D. (June 2010). "Perceptual qualia and local network behavior in the cerebral cortex". Journal of Integrative Neuroscience. 09 (2): 123–152. doi:10.1142/s021963521000241x. PMID 20589951.

  • Orpwood, Roger D. (2013). "Qualia could arise from information processing in local cortical networks". Frontiers in Psychology. 4: 121. doi:10.3389/fpsyg.2013.00121. PMC 3596736. PMID 23504586.

  • Elston, G.N. (1 November 2003). "Cortex, cognition, and the cell: New insights into the pyramidal neuron and prefrontal function". Cerebral Cortex. 13 (11): 1124–1138. doi:10.1093/cercor/bhg093. PMID 14576205.

  • Taylor, A. (1927). Plato: The man and his work. New York: MacVeagh. p. 198.

  • Benecke, E.C. (1900–1901). "On the aspect theory of the relation of mind to body". Aristotelian Society Proceedings. n.s. 1: 18–44. doi:10.1093/aristotelian/1.1.18.

  • McLaughlin, B. (1994). Guttenplan, S. (ed.). Epiphenomenalism, a Companion to the Philosophy of Mind. Oxford: Blackwell. pp. 277–288.

  • Georgiev, Danko D. (2017). Quantum Information and Consciousness: A gentle introduction. Boca Raton, FL: CRC Press. p. 362. doi:10.1201/9780203732519. ISBN 978-1138104488. OCLC 1003273264. Zbl 1390.81001.

  • Hardin, C.L. (1988). Color for Philosophers. Indianapolis, IN: Hackett Pub. Co. ISBN 0872200396 – via Google Books.

  • McDowell, John (1994). Mind and World. Cambridge MA: Harvard University Press. p. 42.

  • Roberson, Gwendolyn E.; Wallace, Mark T.; Schirillo, James A. (October 2001). "The sensorimotor contingency of multisensory localization correlates with the conscious percept of spatial unity". Behavioral and Brain Sciences. 24 (5): 1001–1002. doi:10.1017/S0140525X0154011X.

  • Warren, Richard M.; Warren, Roslyn P., eds. (1968). Helmholtz on Perception: Its physiology and development. New York: John Wiley & Sons. p. 178.

  • Price, Hubert H. (1932). Perception. London: Methuen. p. 32.

  • Sellars, Roy Wood (1922). Evolutionary Naturalism. Chicago & London: Open Court Pub. Co.

  • Amoroso, Richard L. (2010). Complementarity of Mind & Body: Realizing the dream of Descartes, Einstein, & Eccles. New York: Nova Science Publishers.


     https://en.wikipedia.org/wiki/Qualia

    Reason is the capacity of consciously applying logic by drawing conclusions from new or existing information, with the aim of seeking the truth.[1][2] It is closely associated with such characteristically human activities as philosophy, science, language, mathematics, and art, and is normally considered to be a distinguishing ability possessed by humans.[3] Reason is sometimes referred to as rationality.[4]

    Reasoning is associated with the acts of thinking and cognition, and involves the use of one's intellect. The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments.[5] Reasoning may be subdivided into forms of logical reasoning, such as deductive reasoning, inductive reasoning, and abductive reasoning. Aristotle drew a distinction between logical discursive reasoning (reason proper), and intuitive reasoning,[6] in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks.

    Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or ideas regarding notions of good or evil. Reasoning, as a part of executive decision making, is also closely identified with the ability to self-consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination.[7]

    In contrast to the use of "reason" as an abstract noun, a reason is a consideration given which either explains or justifies events, phenomena, or behavior.[8] Reasons justify decisions, reasons support explanations of natural phenomena; reasons can be given to explain the actions (conduct) of individuals.

    Using reason, or reasoning, can also be described more plainly as providing good, or the best, reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason—that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does."[9]

    Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason. 

    https://en.wikipedia.org/wiki/Reason

    Reality is the sum or aggregate of all that is real or existent within a system, as opposed to that which is only imaginary, nonexistent or nonactual. The term is also used to refer to the ontological status of things, indicating their existence.[1] In physical terms, reality is the totality of a system, known and unknown.[2]

    Philosophical questions about the nature of reality or existence or being are considered under the rubric of ontology, which is a major branch of metaphysics in the Western philosophical tradition. Ontological questions also feature in diverse branches of philosophy, including the philosophy of science, of religion, of mathematics, and philosophical logic. These include questions about whether only physical objects are real (i.e., physicalism), whether reality is fundamentally immaterial (e.g. idealism), whether hypothetical unobservable entities posited by scientific theories exist, whether a 'God' exists, whether numbers and other abstract objects exist, and whether possible worlds exist. Epistemology is concerned with what can be known or inferred as likely and how, whereby in the modern world emphasis is put on reason, empirical evidence and science as sources and methods to determine or investigate reality. 

    https://en.wikipedia.org/wiki/Reality

    Rational reconstruction is a philosophical term with several distinct meanings. It is found in the work of Jürgen Habermas and Imre Lakatos.  

    https://en.wikipedia.org/wiki/Rational_reconstruction

    Rational ignorance is refraining from acquiring knowledge when the supposed cost of educating oneself on an issue exceeds the expected potential benefit that the knowledge would provide.

    Ignorance about an issue is said to be "rational" when the cost of educating oneself about the issue sufficiently to make an informed decision can outweigh any potential benefit one could reasonably expect to gain from that decision, and so it would be irrational to waste time doing so. This has consequences for the quality of decisions made by large numbers of people, such as in general elections, where the probability of any one vote changing the outcome is very small.
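
    The trade-off is just an expected-value comparison. Here is a back-of-envelope sketch in Python, with every number invented purely for illustration:

        p_decisive = 1e-7         # assumed chance that one vote changes the outcome
        value_if_decisive = 1e4   # assumed personal value of getting the better outcome
        study_cost = 50.0         # assumed cost, in time and effort, of becoming informed

        expected_benefit = p_decisive * value_if_decisive
        print(expected_benefit)               # ~0.001
        print(expected_benefit < study_cost)  # True: staying ignorant is "rational" here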

    The term is most often found in economics, particularly public choice theory, but also used in other disciplines which study rationality and choice, including philosophy (epistemology) and game theory.

    The term was coined by Anthony Downs in An Economic Theory of Democracy.[1] 

    https://en.wikipedia.org/wiki/Rational_ignorance

    In philosophy and rhetoric, eristic (from Eris, the ancient Greek goddess of chaos, strife, and discord) refers to an argument that aims to successfully dispute another's argument, rather than searching for truth. According to T.H. Irwin, "It is characteristic of the eristic to think of some arguments as a way of defeating the other side, by showing that an opponent must assent to the negation of what he initially took himself to believe."[1] Eristic is arguing for the sake of conflict, as opposed to resolving conflict.[2] 

    https://en.wikipedia.org/wiki/Eristic

    Eschatological verification describes a process whereby a proposition can be verified after death. A proposition such as "there is an afterlife" is verifiable if true but not falsifiable if false (if it's false, the individual will not know it's false, because they have no state of being). The term is most commonly used in relation to God and the afterlife, although there may be other propositions - such as moral propositions - which may also be verified after death.

    John Hick has expressed the premise as an allegory of a quest to a Celestial City. In this parable, a theist and an atheist are both walking down the same road. The theist believes there is a destination, the atheist believes there is not. If they reach the destination, the theist will have been proven right; but if there is no destination on an endless road, this can never be verified. This is an attempt to explain how a theist expects some form of life or existence after death and an atheist does not. They both have separate belief systems and live life accordingly, but logically one is right and the other is not. If the theist is right, he will be proven so when he arrives in the afterlife. But if the atheist is right, they will simply both be dead and nothing will be verified.

    This acts as a response to verificationism. Under Hick's analogy, claims about the afterlife are verifiable in principle because the truth becomes clear after death. To that extent it is wrong to claim that religious language cannot be verified, because it can be, after death.

    https://en.wikipedia.org/wiki/Eschatological_verification

    The evil demon, also known as Deus deceptor,[1] malicious demon,[2] and evil genius,[1][3] is an epistemological concept that features prominently in Cartesian philosophy.[1] In the first of his 1641 Meditations on First Philosophy, Descartes imagines that a malevolent God[1] or an evil demon, of "utmost power and cunning has employed all his energies in order to deceive me." This malevolent God or evil demon is imagined to present a complete illusion of an external world, so that Descartes can say, "I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgement. I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things."  

    https://en.wikipedia.org/wiki/Evil_demon

    In the case of uncertainty, expectation is what is considered the most likely to happen. An expectation, which is a belief that is centered on the future, may or may not be realistic. A less advantageous result gives rise to the emotion of disappointment. If something happens that is not at all expected, it is a surprise. An expectation about the behavior or performance of another person, expressed to that person, may have the nature of a strong request, or an order; this kind of expectation is called a social norm. The degree to which something is expected to be true can be expressed using fuzzy logic. Anticipation is the emotion corresponding to expectation. 

    https://en.wikipedia.org/wiki/Expectation_(epistemic)

    Experiential knowledge is knowledge gained through experience, as opposed to a priori (before experience) knowledge; it can also be contrasted both with propositional (textbook) knowledge and with practical knowledge.[1]

    Experiential knowledge is cognate to Michael Polanyi's personal knowledge, as well as to Bertrand Russell's contrast of Knowledge by Acquaintance and by Description.[2] 

    https://en.wikipedia.org/wiki/Experiential_knowledge

    Suspended judgment is a cognitive process and a rational state of mind in which one withholds judgments, particularly on the drawing of moral or ethical conclusions. The opposite of suspension of judgment is premature judgement, usually shortened to prejudice. While prejudgment involves drawing a conclusion or making a judgment before having the information relevant to such a judgment, suspension of judgment involves waiting for all the facts before making a decision.  

    https://en.wikipedia.org/wiki/Suspension_of_judgment

    In epistemology, the regress argument is the argument that any proposition requires a justification. However, any justification itself requires support. This means that any proposition whatsoever can be endlessly (infinitely) questioned, resulting in infinite regress. It is a problem in epistemology and in any general situation where a statement has to be justified.[1][2][3]

    The argument is also known as diallelus[4] (Latin) or diallelon, from Greek di' allelon "through or by means of one another" and as the epistemic regress problem. It is an element of the Münchhausen trilemma.[5]

    Structure

    Assuming that knowledge is justified true belief, then (the regress is also rendered in a code sketch after the list):

    1. Suppose that P is some piece of knowledge. Then P is a justified true belief.
    2. The only thing that can justify P is another statement – let's call it P1; so P1 justifies P.
    3. But if P1 is to be a satisfactory justification for P, then we must know that P1 is true.
    4. But for P1 to be known, it must also be a justified true belief.
    5. That justification will be another statement – let's call it P2; so P2 justifies P1.
    6. But if P2 is to be a satisfactory justification for P1, then we must know that P2 is true.
    7. But for P2 to count as knowledge, it must itself be a justified true belief.
    8. That justification will in turn be another statement – let's call it P3; so P3 justifies P2.
    9. and so on, ad infinitum.
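
    The structure of the regress can be mimicked directly in code. In this minimal, purely illustrative Python sketch, a justifier that demands a fresh statement for every statement never terminates:

        def justify(p, depth=0):
            """Each statement P_n is 'justified' only by a new statement P_{n+1}."""
            supporting = f"P{depth + 1}"            # the next statement in the chain
            return justify(supporting, depth + 1)   # ...and so on, ad infinitum

        try:
            justify("P")
        except RecursionError:
            print("no finite chain of justifications ever bottoms out")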

    Responses

    Throughout history many responses to this problem have been generated. The major counter-arguments are:

    • some statements do not need justification,
    • the chain of reasoning loops back on itself,
    • the sequence never finishes,
    • belief cannot be justified as beyond doubt.

    Foundationalism

    Perhaps the chain begins with a belief that is justified, but which is not justified by another belief. Such beliefs are called basic beliefs. In this solution, which is called foundationalism, all beliefs are justified by basic beliefs. Foundationalism seeks to escape the regress argument by claiming that there are some beliefs for which it is improper to ask for a justification. (See also a priori.)

    Foundationalism is the belief that a chain of justification begins with a belief that is justified, but which is not justified by another belief. Thus, a belief is justified if and only if:

    1. it is a basic/foundational belief, or
    2. it is justified by a basic belief, or
    3. it is justified by a chain of beliefs that is ultimately justified by a basic belief or beliefs.

    Foundationalism can be compared to a building. Ordinary individual beliefs occupy the upper stories of the building; basic, or foundational beliefs are down in the basement, in the foundation of the building, holding everything else up. In a similar way, individual beliefs, say about economics or ethics, rest on more basic beliefs, say about the nature of human beings; and those rest on still more basic beliefs, say about the mind; and in the end the entire system rests on a set of basic beliefs which are not justified by other beliefs.
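
    On this picture, checking justification is a recursive descent to the foundation. The following toy Python sketch rests on assumptions introduced here (each belief lists its supporting beliefs; the example beliefs are placeholders, not serious arguments):

        BASIC = {"I am conscious right now"}     # basic beliefs need no further support
        SUPPORTED_BY = {                         # belief -> the beliefs that justify it
            "this economy rests on trust": ["human beings cooperate"],
            "human beings cooperate": ["I am conscious right now"],
        }

        def is_justified(belief, seen=frozenset()):
            """Justified iff basic, or supported by chains that bottom out in BASIC."""
            if belief in BASIC:
                return True
            if belief in seen:                   # a loop never reaches the foundation
                return False
            supports = SUPPORTED_BY.get(belief, [])
            return bool(supports) and all(is_justified(b, seen | {belief})
                                          for b in supports)

        print(is_justified("this economy rests on trust"))  # True: the chain bottoms out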

    Coherentism

    Alternatively, the chain of reasoning may loop around on itself, forming a circle. In this case, the justification of any statement is used, perhaps after a long chain of reasoning, in justifying itself, and the argument is circular. This is a version of coherentism.

    Coherentism is the belief that an idea is justified if and only if it is part of a coherent system of mutually supporting beliefs (i.e., beliefs that support each other). In effect Coherentism denies that justification can only take the form of a chain. Coherentism replaces the chain with a holistic web.

    The most common objection to naïve Coherentism is that it relies on the idea that circular justification is acceptable. In this view, P ultimately supports P, begging the question. Coherentists reply that it is not just P that is supporting P, but P along with the totality of the other statements in the whole system of belief.

    Coherentism accepts any belief that is part of a coherent system of beliefs. The worry is that P can cohere with P1 and P2 without P, P1, or P2 being true. Coherentists might reply that it is very unlikely that the whole system would be both untrue and consistent, and that if some part of the system were untrue, it would almost certainly be inconsistent with some other part of the system.
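
    The contrast with the foundationalist check sketched above can be made concrete. In this toy Python sketch (again with placeholder beliefs and an assumed, much-simplified criterion), a looping web that foundationalism would reject counts as coherent because every member is supported by some other member:

        SUPPORTS = {       # belief -> beliefs it supports; the chain loops back on itself
            "P": ["P1"],
            "P1": ["P2"],
            "P2": ["P"],
        }

        def coheres(system):
            """Toy criterion: every belief receives support from some other member."""
            supported = {b for targets in system.values() for b in targets}
            return all(belief in supported for belief in system)

        print(coheres(SUPPORTS))  # True: mutual support, no foundation required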

    A third objection is that some beliefs arise from experience and not from other beliefs. Suppose one is looking into a totally dark room; the lights turn on momentarily and one sees a white canopy bed in the room. The belief that there is a white canopy bed in this room is based entirely on that experience and not on any other belief. Other possibilities exist, such as that the white canopy bed is an illusion or that one is hallucinating, but the belief remains well justified. Coherentists might respond that what supports the belief that there is a white canopy bed in this room is the belief that one saw the bed, however briefly. This appears to be an immediate qualifier that does not depend on other beliefs, and thus seems to show that Coherentism is false, because beliefs can be justified by things other than beliefs. But others have argued that the experience of seeing the bed is itself dependent on other beliefs, about what a bed, a canopy, and so on actually look like.

    Another objection is that the rule demanding "coherence" in a system of ideas seems to be an unjustified belief.

    Infinitism

    Infinitism argues that the chain can go on forever. Critics argue that this means there is never adequate justification for any statement in the chain.

    Skepticism

    Skeptics reject the three above responses and argue that beliefs cannot be justified as beyond doubt. Note that many skeptics do not deny that things may appear in a certain way. However, such sense impressions cannot, in the skeptical view, be used to find beliefs that cannot be doubted. Also, skeptics do not deny that, for example, many laws of nature give the appearance of working or that doing certain things give the appearance of producing pleasure/pain or even that reason and logic seem to be useful tools. Skepticism is in this view valuable since it encourages continued investigation.[6]

    Synthesized approaches

    Common sense

    The method of common sense espoused by such philosophers as Thomas Reid and G. E. Moore points out that whenever we investigate anything at all, whenever we start thinking about some subject, we have to make assumptions. When one tries to support one's assumptions with reasons, one must make yet more assumptions. Since it is inevitable that we will make some assumptions, why not assume those things that are most obvious: the matters of common sense that no one ever seriously doubts.

    "Common sense" here does not mean old adages like "Chicken soup is good for colds" but statements about the background in which our experiences occur. Examples would be "Human beings typically have two eyes, two ears, two hands, two feet", or "The world has a ground and a sky" or "Plants and animals come in a wide variety of sizes and colors" or "I am conscious and alive right now". These are all the absolutely most obvious sorts of claims that one could possibly make; and, said Reid and Moore, these are the claims that make up common sense.

    This view can be seen as either a version of foundationalism, with common sense statements taking the role of basic statements, or as a version of Coherentism. In this case, commonsense statements are statements that are so crucial to keeping the account coherent that they are all but impossible to deny.

    If the method of common sense is correct, then philosophers may take the principles of common sense for granted. They do not need criteria in order to judge whether a proposition is true or not. They can also take some justifications for granted, according to common sense. They can get around Sextus' problem of the criterion because there is no infinite regress or circle of reasoning, because the principles of common sense ground the entire chain of reasoning.

    Critical philosophy

    Another escape from the diallelus is critical philosophy, which denies that beliefs should ever be justified at all. Rather, the job of philosophers is to subject all beliefs (including beliefs about truth criteria) to criticism, attempting to discredit them rather than justify them. Then, these philosophers say, it is rational to act on those beliefs that have best withstood criticism, whether or not they meet any specific criterion of truth. Karl Popper expanded on this idea to include a comparative measure he called verisimilitude, or truth-likeness: even if one can never justify a particular claim, one can compare the verisimilitude of two competing claims under criticism to judge which is superior to the other.
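
    A hedged sketch of the comparative idea in Python, using a crude proxy (truth content minus falsity content) and wholly invented tallies; Popper's own formal definition is considerably more involved:

        def verisimilitude(true_consequences, false_consequences):
            """Crude proxy: known true consequences minus known false ones."""
            return true_consequences - false_consequences

        theory_a = verisimilitude(true_consequences=90, false_consequences=10)
        theory_b = verisimilitude(true_consequences=40, false_consequences=35)
        print(theory_a > theory_b)  # True: theory A has withstood criticism better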

    Pragmatism

    The pragmatist philosopher William James suggests that, ultimately, everyone settles at some level of explanation based on one's personal preferences that fit the particular individual's psychological needs. People select whatever level of explanation fits their needs, and things other than logic and reason determine those needs. In The Sentiment of Rationality, James compares the philosopher, who insists on a high degree of justification, and the boor, who accepts or rejects ideals without much thought:

    The philosopher's logical tranquillity is thus in essence no other than the boor's. They differ only as to the point at which each refuses to let further considerations upset the absoluteness of the data he assumes.


     https://en.wikipedia.org/wiki/Regress_argument

     

    The simulation hypothesis proposes that all of our existence is a simulated reality, such as a computer simulation.[1][2][3] This simulation could contain conscious minds that may or may not know that they live inside a simulation. This is quite different from the current, technologically achievable concept of virtual reality, which is easily distinguished from the experience of actuality. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.

    The simulation hypothesis, which was popularized in its current form by Nick Bostrom,[4] bears a close resemblance to various other skeptical scenarios from throughout the history of philosophy.

    The suggestion that such a hypothesis is compatible with all human perceptual experiences is thought to have significant epistemological consequences in the form of philosophical skepticism. Versions of the hypothesis have also been featured in science fiction, appearing as a central plot device in many stories and films.[5]

    The hypothesis popularized by Bostrom is heavily disputed: theoretical physicist Sabine Hossenfelder, for example, has called it pseudoscience,[6] and cosmologist George F. R. Ellis stated that "[the hypothesis] is totally impracticable from a technical viewpoint" and that "protagonists seem to have confused science fiction with science. Late-night pub discussion is not a viable theory."[7]

    Origins

    There is a long philosophical and scientific history to the underlying thesis that reality is an illusion. This skeptical hypothesis can be traced back to antiquity; for example, to the "Butterfly Dream" of Zhuangzi,[8] to the Indian philosophy of Maya, or, in Ancient Greek philosophy, to Anaxarchus and Monimus, who likened existing things to a scene-painting and supposed them to resemble the impressions experienced in sleep or madness.[9]

    Aztec philosophical texts theorized that the world was a painting or book written by the Teotl.[10]

    Nietzsche, in his 1886 book Beyond Good and Evil, chastised philosophers for seeking to find the true world behind the deceptive world of appearances.

    It is nothing more than a moral prejudice that truth is worth more than semblance; it is, in fact, the worst proved supposition in the world.... Why might not the world which concerns us⁠—be a fiction?[11]

     https://en.wikipedia.org/wiki/Simulation_hypothesis

     

    Hauntology (a portmanteau of haunting and ontology, also spectral studies, spectralities, or the spectral turn) is a range of ideas referring to the return or persistence of elements from the social or cultural past, as in the manner of a ghost. The term is a neologism first introduced by French philosopher Jacques Derrida in his 1993 book Specters of Marx. It has since been invoked in fields such as visual arts, philosophy, electronic music, anthropology, politics, fiction, and literary criticism.[1]

    While Christine Brooke-Rose had previously punned "dehauntological" (on "deontological") in Amalgamemnon (1984),[2] Derrida initially used "hauntology" for his idea of the atemporal nature of Marxism and its tendency to "haunt Western society from beyond the grave".[3] It describes a situation of temporal and ontological disjunction in which presence, especially socially and culturally, is replaced by a deferred non-origin.[1] The concept is derived from deconstruction, in which any attempt to locate the origin of identity or history must inevitably find itself dependent on an always-already existing set of linguistic conditions.[4] Despite being the central focus of Specters of Marx, the word hauntology appears only three times in the book, and there is little consistency in how other writers define the term.[5]

    In the 2000s, the term was applied to musicians by theorists Simon Reynolds and Mark Fisher, who were said to explore ideas related to temporal disjunction, retrofuturism, cultural memory, and the persistence of the past. Hauntology has been used as a critical lens in various forms of media and theory, including music, aesthetics, political theory, architecture, Africanfuturism, Afrofuturism, Neo-futurism, Metamodernism, anthropology, and psychoanalysis.[1][6]

    https://en.wikipedia.org/wiki/Hauntology


    "Fusion of horizons" (German: Horizontverschmelzung) is a dialectical concept which results from the rejection of two alternatives: objectivism, whereby the objectification of the other is premised on the forgetting of oneself; and absolute knowledge, according to which universal history can be articulated within a single horizon. Therefore, it argues that we exist neither in closed horizons, nor within a horizon that is unique.

    People come from different backgrounds, and it is not possible to remove oneself entirely from one's background, history, culture, gender, language, education, etc. and adopt an entirely different system of attitudes, beliefs, and ways of thinking.[1] When people engage in a conversation or dialogue about different cultures, the speaker interprets texts or stories on the basis of his or her past experience and prejudices. Therefore, "hermeneutic reflection and determination of one's own present life interpretation calls for the unfolding of one's 'effective-historical' consciousness."[2] During the discourse, a fusion of "horizons" takes place between the speaker and listeners.

    https://en.wikipedia.org/wiki/Fusion_of_horizons

    The fact–value distinction is a fundamental epistemological distinction drawn between:[1]

    1. Statements of fact (positive or descriptive statements), based upon reason and physical observation, and which are examined via the empirical method.
    2. Statements of value (normative or prescriptive statements), which encompass ethics and aesthetics, and are studied via axiology.

    This barrier between fact and value, as construed in epistemology, implies it is impossible to derive ethical claims from factual arguments, or to defend the former using the latter.[2]

    The fact–value distinction is closely related to, and derived from, the is–ought problem in moral philosophy, characterized by David Hume.[3] The terms are often used interchangeably, though philosophical discourse concerning the is–ought problem does not usually encompass aesthetics.

    If values do not arise from facts, the question arises whether facts and values have distinct spheres of origin. Evolutionary psychology implicitly challenges the fact–value distinction insofar as it positions cognitive psychology as foundational to human ethics, a single sphere of origin. Among the new atheists, who often lean toward evolutionary psychology, Sam Harris in particular endorses and promotes a science of morality, as outlined in his book The Moral Landscape (2010).

    Religion and science

    In his essay Science as a Vocation (1917) Max Weber draws a distinction between facts and values. He argues that facts can be determined through the methods of a value-free, objective social science, while values are derived through culture and religion, the truth of which cannot be known through science. He writes, "it is one thing to state facts, to determine mathematical or logical relations or the internal structure of cultural values, while it is another thing to answer questions of the value of culture and its individual contents and the question of how one should act in the cultural community and in political associations. These are quite heterogeneous problems."[4] In his 1919 essay Politics as a Vocation, he argues that facts, like actions, do not in themselves contain any intrinsic meaning or power: "any ethic in the world could establish substantially identical commandments applicable to all relationships."[5]

    To MLK Jr., "Science deals mainly with facts; religion deals mainly with values. The two are not rivals. They are complementary."[6][7][8] He stated that science keeps religion from "crippling irrationalism and paralyzing obscurantism", whereas religion prevents science from "falling into ... obsolete materialism and moral nihilism."[9]

    Albert Einstein remarked that

    the realms of religion and science in themselves are clearly marked off from each other, nevertheless there exist between the two strong reciprocal relationships and dependencies. Though religion may be that which determines the goal, it has, nevertheless, learned from science, in the broadest sense, what means will contribute to the attainment of the goals it has set up. But science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion. To this there also belongs the faith in the possibility that the regulations valid for the world of existence are rational, that is, comprehensible to reason. I cannot conceive of a genuine scientist without that profound faith. The situation may be expressed by an image: science without religion is lame, religion without science is blind.[10]

    David Hume's skepticism

    In A Treatise of Human Nature (1739), David Hume discusses the problems in grounding normative statements in positive statements, that is in deriving ought from is. It is generally regarded that Hume considered such derivations untenable, and his 'is–ought' problem is considered a principal question of moral philosophy.[11]

    Hume shared a political viewpoint with early Enlightenment philosophers such as Thomas Hobbes (1588–1679) and John Locke (1632–1704). Specifically, Hume, at least to some extent, argued that religious and national hostilities that divided European society were based on unfounded beliefs. In effect, Hume contended that such hostilities are not found in nature, but are a human creation, depending on a particular time and place, and thus unworthy of mortal conflict.

    Prior to Hume, Aristotelian philosophy maintained that all actions and causes were to be interpreted teleologically. This rendered all facts about human action examinable under a normative framework defined by cardinal virtues and capital vices. "Fact" in this sense was not value-free, and the fact-value distinction was an alien concept. The decline of Aristotelianism in the 16th century set the framework in which those theories of knowledge could be revised.[12]

    Naturalistic fallacy

    The fact–value distinction is closely related to the naturalistic fallacy, a topic debated in ethical and moral philosophy. G. E. Moore believed it essential to all ethical thinking.[13] However, contemporary philosophers like Philippa Foot have called into question the validity of such assumptions. Others, such as Ruth Anna Putnam, argue that even the most "scientific" of disciplines are affected by the "values" of those who research and practice the vocation.[14][15] Nevertheless, the difference between the naturalistic fallacy and the fact–value distinction is derived from the manner in which modern social science has used the fact–value distinction, and not the strict naturalistic fallacy to articulate new fields of study and create academic disciplines.

    Moralistic fallacy

    The fact–value distinction is also closely related to the moralistic fallacy, an invalid inference of factual conclusions from purely evaluative premises. For example, the invalid inference "Because everybody ought to be equal, there are no innate genetic differences between people" is an instance of the moralistic fallacy. Whereas in the naturalistic fallacy one attempts to move from an "is" to an "ought" statement, in the moralistic fallacy one attempts to move from an "ought" to an "is" statement.

    Nietzsche's table of values

    Friedrich Nietzsche (1844–1900) in Thus Spoke Zarathustra said that a table of values hangs above every great people. Nietzsche argues that what is common among different peoples is the act of esteeming, of creating values, even if the values are different from one people to the next. Nietzsche asserts that what made people great was not the content of their beliefs, but the act of valuing. Thus the values a community strives to articulate are not as important as the collective will to act on those values.[16] The willing is more essential than the intrinsic worth of the goal itself, according to Nietzsche.[17] "A thousand goals have there been so far," says Zarathustra, "for there are a thousand peoples. Only the yoke for the thousand necks is still lacking: the one goal is lacking. Humanity still has no goal." Hence, the title of the aphorism, "On The Thousand And One Goals." The idea that one value-system is no more worthy than the next, although it may not be directly ascribed to Nietzsche, has become a common premise in modern social science. Max Weber and Martin Heidegger absorbed it and made it their own. It shaped their philosophical endeavor, as well as their political understanding.

    Criticisms

    Virtually all modern philosophers affirm some sort of fact–value distinction, insofar as they distinguish between science and "valued" disciplines such as ethics, aesthetics, or the fine arts. However, philosophers such as Hilary Putnam argue that the distinction between fact and value is not as absolute as Hume envisioned.[18] Philosophical pragmatists, for instance, believe that true propositions are those that are useful or effective in predicting future (empirical) states of affairs.[19] Far from being value-free, the pragmatists' conception of truth or facts directly relates to an end (namely, empirical predictability) that human beings regard as normatively desirable. Other thinkers, such as N. R. Hanson among others, talk of theory-ladenness, and reject an absolutist fact–value distinction by contending that our senses are imbued with prior conceptualizations, making it impossible to have any observation that is totally value-free, which is how Hume and the later positivists conceived of facts.

    Functionalist counterexamples

    Several counterexamples have been offered by philosophers claiming to show that there are cases when an evaluative statement does indeed logically follow from a factual statement. A. N. Prior argues, from the statement "He is a sea captain," that it logically follows, "He ought to do what a sea captain ought to do."[20] Alasdair MacIntyre argues, from the statement "This watch is grossly inaccurate and irregular in time-keeping and too heavy to carry about comfortably," that the evaluative conclusion validly follows, "This is a bad watch."[21] John Searle argues, from the statement "Jones promised to pay Smith five dollars," that it logically follows that "Jones ought to pay Smith five dollars", such that the act of promising by definition places the promiser under obligation.[22]

    Moral realism

    Philippa Foot adopts a moral realist position, criticizing the idea that when evaluation is superposed on fact there has been a "committal in a new dimension".[23] She introduces, by analogy, the practical implications of using the word "injury". Not just anything counts as an injury. There must be some impairment. When we suppose a man wants the things the injury prevents him from obtaining, haven’t we fallen into the old naturalist fallacy?

    It may seem that the only way to make a necessary connection between 'injury' and the things that are to be avoided, is to say that it is only used in an 'action-guiding sense' when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all.[24]

    Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness.

    Philosophers who have supposed that actual action was required if 'good' were to be used in a sincere evaluation have got into difficulties over weakness of will, and they should surely agree that enough has been done if we can show that any man has reason to aim at virtue and avoid vice. But is this impossibly difficult if we consider the kinds of things that count as virtue and vice? Consider, for instance, the cardinal virtues, prudence, temperance, courage and justice. Obviously any man needs prudence, but does he not also need to resist the temptation of pleasure when there is harm involved? And how could it be argued that he would never need to face what was fearful for the sake of some good? It is not obvious what someone would mean if he said that temperance or courage were not good qualities, and this not because of the 'praising' sense of these words, but because of the things that courage and temperance are.[25]

    Of Weber

    Philosopher Leo Strauss criticizes Weber for attempting to isolate reason completely from opinion. Strauss acknowledges the philosophical trouble of deriving "ought" from "is", but argues that what Weber has done in his framing of this puzzle is in fact deny altogether that the "ought" is within reach of human reason.[26]: 66  Strauss worries that if Weber is right, we are left with a world in which the knowable truth is a truth that cannot be evaluated according to ethical standards. This conflict between ethics and politics would mean that there can be no grounding for any valuation of the good, and without reference to values, facts lose their meaning.[26]: 72 

    References


  • Väyrynen, Pekka (2019). Zalta, Edward N. (ed.). "Thick Ethical Concepts". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 28 October 2019.

  • Prior, A. N. (1960). "The Autonomy of Ethics", Australasian Journal of Philosophy, 38(3): 199–206.

  • Pigden, Charles (2018-12-06), Sinclair, Neil (ed.), "No-Ought-From-Is, the Naturalistic Fallacy, and the Fact/Value Distinction: The History of a Mistake", The Naturalistic Fallacy (1 ed.), Cambridge University Press, pp. 73–95, doi:10.1017/9781316717578.006, ISBN 978-1-316-71757-8, retrieved 2022-12-06

  • Weber, Max (1958). From Max Weber: Essays in Sociology. Gerth, Hans, 1908–1979; Mills, C. Wright (Charles Wright), 1916–1962. New York: Oxford University Press. p. 146. ISBN 0195004620. OCLC 5654107.

  • Weber 1958, p. 357.

  • Knapp, Alex. "Martin Luther King, Jr. On Science And Religion". Forbes. Retrieved 2023-01-16.

  • "Science, Religion, and Dr. Martin Luther King, Jr". The Equation. 2013-08-28. Retrieved 2023-01-16.

  • "Dr. Martin Luther King's Impact on the Field of Science | Office of Equity, Diversity, and Inclusion". www.edi.nih.gov. Retrieved 2023-01-16.

  • "The Scientific Teachings Of Dr. Martin Luther King". HuffPost. 2012-01-16. Retrieved 2023-01-16.

  • Science, Philosophy and Religion, A Symposium, published by the Conference on Science, Philosophy and Religion in Their Relation to the Democratic Way of Life, Inc., New York (1941); later published in Out of My Later Years (1950)

  • Priest, Stephen (2007). The British Empiricists. Routledge. pp. 177–78. ISBN 978-0-415-35723-4.

  • MacIntyre, Alasdair (2007). After Virtue (3rd ed.). Notre Dame: University of Notre Dame Press. pp. 81–84. ISBN 9780268035044.

  • Lewy, Casimir (1965). "G. E. Moore on the Naturalistic Fallacy".

  • Putnam, Ruth Anna. "Perceiving Facts and Values", Philosophy 73, 1998. JSTOR 3752124 This article and her earlier one, "Creating Facts and Values", Philosophy 60, 1985 JSTOR 3750998, examine how scientists may base their choice of investigations on their unexamined subjectivity, which undermines the objectivity of their hypothesis and findings

  • Smart, J. J. C., "Ruth Anna Putnam and the Fact-Value Distinction", Philosophy 74, 1999. JSTOR 3751844

  • Nietzsche, Friedrich. Thus Spoke Zarathustra. Book Two "On the Virtuous": "You who are virtuous still want to be paid! Do you want rewards for virtue, and heaven for earth, and the eternal for your today? And now you are angry with me because I teach that there is no reward and paymaster? And verily, I do not even teach that virtue is its own reward."

  • Nietzsche, Friedrich. Thus Spoke Zarathustra. Book Four "On Old and New Tablets": "To redeem what is past in man and to recreate all 'it was' until the will says, 'Thus I willed it! Thus I shall will it!' – this I called redemption and this alone I taught them to call redemption."

  • Putnam, Hilary. "The Collapse of the Fact/Value Dichotomy and Other Essays, Cambridge, MA : Harvard University Press, 2002" (PDF). Reasonpapers.com. Retrieved 2013-10-03.

  • "Pragmatism (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2008-08-16. Retrieved 2013-10-03.

  • Alasdair MacIntyre, After Virtue (1984), p. 57.

  • Ibid., p. 68.

  • Don MacNiven, Creative Morality, pp. 41–42.

  • Philippa Foot, “Moral Beliefs,” Proceedings of the Aristotelian Society, vol. 59 (1958), pp. 83–104.

  • Foot 1958, p. 96.

  • Foot 1958, p. 97.

  • Strauss, Leo (2008). Natural Right and History. University of Chicago Press. ISBN 978-0226776941. OCLC 551845170.


     https://en.wikipedia.org/wiki/Fact%E2%80%93value_distinction


    Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future.[1][2][3] It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience.[4][5][6][7] Time is often referred to as a fourth dimension, along with three spatial dimensions.[8]

    Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.[7][9] Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.[10][11][12]

    Time in physics is operationally defined as "what a clock reads".[6][13][14]

    The physical nature of time is addressed by general relativity with respect to events in spacetime. Examples of events are the collision of two particles, the explosion of a supernova, or the arrival of a rocket ship. Every event can be assigned four numbers representing its time and position (the event's coordinates). However, the numerical values are different for different observers. In general relativity, the question of what time it is now only has meaning relative to a particular observer. Distance and time are intimately related, and the time required for light to travel a specific distance is the same for all observers, as first publicly demonstrated by Michelson and Morley. General relativity does not address the nature of time for extremely small intervals where quantum mechanics holds. As of 2023, there is no generally accepted theory of quantum general relativity.[15]

    Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition.[16] An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. To describe observations of an event, a location (position in space) and time are typically noted.

    The operational definition of time does not address what the fundamental nature of time is. It does not address why events can happen forward and backward in space, whereas events only happen in the forward progress of time. Investigations into the relationship between space and time led physicists to define the spacetime continuum. General relativity is the primary framework for understanding how spacetime works.[17] Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted and dilated, particularly at the edges of black holes.

    Temporal measurement has occupied scientists and technologists and was a prime motivation in navigation and astronomy. Periodic events and periodic motion have long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases of the moon, and the swing of a pendulum. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day and in human life spans.

    There are many systems for determining what time it is, including the Global Positioning System, other satellite systems, Coordinated Universal Time and mean solar time. In general, the numbers obtained from different time systems differ from one another.

    Measurement

    The flow of sand in an hourglass can be used to measure the passage of time. It also concretely represents the present as being between the past and the future.

    Generally speaking, methods of temporal measurement, or chronometry, take two distinct forms: the calendar, a mathematical tool for organising intervals of time,[18] and the clock, a physical mechanism that counts the passage of time. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event as to hour or date is obtained by counting from a fiducial epoch – a central reference point.
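
    As a concrete illustration of counting from a fiducial epoch, the sketch below (mine, not the source's) uses the Unix epoch of 1970-01-01 00:00:00 UTC as an example reference point and derives a calendar-style and a clock-style count in Python:

      # A sketch (not from the source) of obtaining clock and calendar numbers
      # by counting from a fiducial epoch; the Unix epoch stands in here as an
      # example reference point.
      from datetime import datetime, timezone

      epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)  # the fiducial epoch
      now = datetime.now(timezone.utc)

      elapsed = now - epoch                  # the count since the epoch
      print(elapsed.days)                    # calendar-style count (whole days)
      print(elapsed.total_seconds())         # clock-style count (seconds)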

    History of the calendar

    Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago.[19] Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (either 354 or 384 days). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica, particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months in a year and 20 days in a month, plus five epagomenal days at the end of the year.[20]
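
    The arithmetic behind intercalation can be made explicit. A minimal sketch, assuming the standard approximate values of 29.53 days per synodic month and 365.24 days per tropical year (the month length is not stated in the text):

      # Rough arithmetic behind lunisolar intercalation; 29.53 days per synodic
      # month is a standard approximation assumed here, not a figure from the text.
      synodic_month = 29.53
      solar_year = 365.24

      print(12 * synodic_month)   # ~354.4 days: a 12-month lunar year
      print(13 * synodic_month)   # ~383.9 days: a 13-month lunar year

      shortfall = solar_year - 12 * synodic_month   # ~10.9 days lost per year
      print(synodic_month / shortfall)              # ~2.7: a leap month roughly
                                                    # every three years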

    The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now by far the most commonly used calendar around the world.
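
    A rough check of the "about 11 minutes per year" figure, assuming the standard mean year lengths (365.25 days for the Julian year and about 365.2422 days for the tropical year, which the text does not state):

      # Checking the Julian calendar's drift; the mean-year lengths are standard
      # values assumed here rather than figures from the text.
      julian_year = 365.25       # days, given a leap day every four years
      tropical_year = 365.2422   # days, approximate astronomical year

      drift_days = julian_year - tropical_year
      print(drift_days * 24 * 60)   # ~11.2 minutes of drift per year
      print(1 / drift_days)         # ~128 years for the drift to reach a day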

    During the French Revolution, a new clock and calendar were invented in an attempt to de-Christianize time and create a more rational system in order to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours of a hundred minutes of a hundred seconds, which marked a deviation from the base 12 (duodecimal) system used in many other devices by many cultures. The system was abolished in 1806.[21]
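
    Because the Republican day had 10 × 100 × 100 = 100,000 decimal seconds against the standard 86,400, conversion is simple proportionality. A minimal sketch (the function and its interface are illustrative, not historical):

      # Converting a standard time of day into French Republican decimal time.
      # The 10 x 100 x 100 structure is from the text; the code is a sketch.
      STANDARD_DAY = 24 * 60 * 60    # 86,400 standard seconds
      DECIMAL_DAY = 10 * 100 * 100   # 100,000 decimal seconds

      def to_decimal_time(h, m, s):
          frac = (h * 3600 + m * 60 + s) / STANDARD_DAY  # fraction of day elapsed
          ds = round(frac * DECIMAL_DAY)                 # elapsed decimal seconds
          return ds // 10000, (ds // 100) % 100, ds % 100

      print(to_decimal_time(12, 0, 0))   # noon -> (5, 0, 0)
      print(STANDARD_DAY / DECIMAL_DAY)  # one decimal second = 0.864 s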

    History of other devices

    Horizontal sundial in Taganrog
    An old kitchen clock

    A large variety of devices have been invented to measure time. The study of these devices is called horology.[22]

    An Egyptian device that dates to c. 1500 BC, similar in shape to a bent T-square, measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction.[23]

    A sundial uses a gnomon to cast a shadow on a set of markings calibrated to the hour. The position of the shadow marks the hour in local time. The idea of separating the day into smaller parts is credited to the Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night.[24]

    The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I. They could be used to measure the hours even at night but required manual upkeep to replenish the flow of water. The ancient Greeks and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers, in particular, made improvements on the use of water clocks up to the Middle Ages.[25] In the 11th century, Chinese inventors and engineers invented the first mechanical clocks driven by an escapement mechanism.

    A contemporary quartz watch, 2007

    The hourglass uses the flow of sand to measure the flow of time. Hourglasses were used in navigation; Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522).[26]

    Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Water clocks, and later mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical orrery about 1330.[27][28]

    Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Bürgi.[29]

    The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell. The passage of the hours at sea was marked by bells and denoted the time (see ship's bell). The hours were marked by bells in abbeys as well as at sea.

    Chip-scale atomic clocks, such as this one unveiled in 2004, are expected to greatly improve GPS location.[30]

    Clocks can range from watches to more exotic varieties such as the Clock of the Long Now. They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means such as a pendulum.

    Alarm clocks first appeared in ancient Greece around 250 BC with a water clock that would set off a whistle. This idea was later mechanized by Levi Hutchins and Seth E. Thomas.[29]

    A chronometer is a portable timekeeper that meets certain precision standards. Initially, the term was used to refer to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision first achieved by John Harrison. More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC.

    The most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years,[31] and are used to calibrate other clocks and timekeeping instruments.

    Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the atoms used is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations.[32] Since 1967, the International System of Units has based its unit of time, the second, on the properties of caesium atoms. SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between two electron spin energy levels of the ground state of the 133Cs atom.
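
    Written out in standard SI notation (the symbols, not the numbers, are my addition), the definition fixes the caesium transition frequency and counts its periods:

      \Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz},
      \qquad
      1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}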

    Today, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe.

    In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012,[33] where it was defined as 1/564 of a momentum (1½ minutes),[34] and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter.
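
    The fraction works out from the figures just given:

      1\ \text{atom} \;=\; \frac{1\tfrac{1}{2}\ \text{min}}{564} \;=\; \frac{90\ \text{s}}{564} \;=\; \frac{15}{94}\ \text{s} \;\approx\; 0.16\ \text{s}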

    As of May 2010, the smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10⁻¹⁷ seconds), about 3.7 × 10²⁶ Planck times.[35]

    Units

    The second (s) is the SI base unit of time. A minute (min) is 60 seconds in length, and an hour is 60 minutes or 3,600 seconds in length. A day is usually 24 hours or 86,400 seconds in length; however, the duration of a calendar day can vary due to daylight saving time and leap seconds.
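
    In code, the unit relationships and the caveat about variable day lengths look as follows (a sketch; the one-hour DST shifts and the positive leap second are the usual cases, assumed here):

      # Unit relationships, plus the ways a calendar day departs from 86,400 s.
      MINUTE = 60            # seconds
      HOUR = 60 * MINUTE     # 3,600 seconds
      DAY = 24 * HOUR        # 86,400 seconds

      print(DAY - HOUR)   # 82,800 s: a day when clocks "spring forward"
      print(DAY + HOUR)   # 90,000 s: a day when clocks "fall back"
      print(DAY + 1)      # 86,401 s: a day ending in a positive leap second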

    Definitions and standards

    A time standard is a specification for measuring time: assigning a number or calendar date to an instant (point in time), quantifying the duration of a time interval, and establishing a chronology (ordering of events). In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards such as sidereal time and ephemeris time, for most practical purposes, by newer time standards based wholly or partly on atomic time using the SI second.

    International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earth's rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the "leap second". The Global Positioning System broadcasts a very precise time signal based on UTC time.
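
    A minimal sketch of these relationships (the 37-second offset is the TAI - UTC difference in force since the leap second of January 2017, a value the text does not give):

      # The TAI/UTC/UT1 relationships in miniature; the 37 s offset is an
      # assumption (its current value), not a figure from the text.
      TAI_MINUS_UTC = 37   # whole seconds, accumulated via leap seconds

      def utc_from_tai(tai_seconds):
          # UTC differs from TAI by an integral number of seconds.
          return tai_seconds - TAI_MINUS_UTC

      def leap_second_due(utc_minus_ut1):
          # A leap second keeps |UTC - UT1| within 0.9 s.
          return abs(utc_minus_ut1) > 0.9

      print(utc_from_tai(1_000_000))   # 999963
      print(leap_second_due(0.6))      # False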

    The surface of the Earth is split up into a number of time zones. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC. For example, time zones at sea are based on UTC. In many locations (but not at sea) these offsets vary twice yearly due to daylight saving time transitions.

    Some other time standards are used mainly for scientific work. Terrestrial Time is a theoretical ideal scale realized by TAI. Geocentric Coordinate Time and Barycentric Coordinate Time are scales defined as coordinate times in the context of the general theory of relativity. Barycentric Dynamical Time is an older relativistic scale that is still in use. 

    https://en.wikipedia.org/wiki/Time

    In philosophy, transcendence is the basic ground concept derived from the word's literal meaning (from Latin) of climbing or going beyond, albeit with varying connotations in its different historical and cultural stages. It includes philosophies, systems, and approaches that describe the fundamental structures of being, not as an ontology (theory of being), but as the framework of the emergence and validation of knowledge of being. These definitions are generally grounded in reason and empirical observation and seek to provide a framework for understanding the world that is not reliant on religious beliefs or supernatural forces.[1][2][3] "Transcendental" is a word derived from scholasticism, designating the extra-categorical attributes of beings.[4][5]

    https://en.wikipedia.org/wiki/Transcendence_(philosophy)

    In epistemology, transparency is a property of epistemic states, defined as follows. An epistemic state E is "weakly transparent" to a subject S if and only if, when S is in state E, S can know that S is in state E. An epistemic state E is "strongly transparent" to a subject S if and only if E is weakly transparent to S and, in addition, when S is not in state E, S can know that S is not in state E.
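
    Schematically, writing E(S) for "S is in state E" and K_S φ for "S can know that φ" (notation mine), the two definitions become:

      \text{weakly transparent:}\quad E(S) \;\rightarrow\; K_S\,E(S)

      \text{strongly transparent:}\quad \bigl(E(S) \rightarrow K_S\,E(S)\bigr) \;\land\; \bigl(\lnot E(S) \rightarrow K_S\,\lnot E(S)\bigr)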

    Pain is usually considered to be strongly transparent: when someone is in pain, they know immediately that they are in pain, and if they are not in pain, they will know that they are not. Transparency is important in the study of self-knowledge and meta-knowledge.

    https://en.wikipedia.org/wiki/Transparency_(philosophy)

    The term trope both denotes figurative and metaphorical language and has been used in various technical senses. It derives from the Greek τρόπος (tropos), "a turn, a change",[1] related to the root of the verb τρέπειν (trepein), "to turn, to direct, to alter, to change";[2] this means that the term is used metaphorically to denote, among other things, metaphorical language.

    The term is also used in technical senses that do not always correspond to its linguistic origin; its meaning has to be judged from the context.

    https://en.wikipedia.org/wiki/Trope_(philosophy)

    A consensus theory of truth is the process of taking statements to be true simply because people generally agree upon them.

    Varieties of consensus

    Consensus gentium

    An ancient criterion of truth, the consensus gentium (Latin for agreement of the people), states "that which is universal among men carries the weight of truth" (Ferm, 64). A number of consensus theories of truth are based on variations of this principle. In some criteria the notion of universal consent is taken strictly, while others qualify the terms of consensus in various ways. There are versions of consensus theory in which the specific population weighing in on a given question, the proportion of the population required for consent, and the period of time needed to declare consensus vary from the classical norm.

    Consensus as a regulative ideal

    A descriptive theory is one that tells how things are, while a normative theory tells how things ought to be. Expressed in practical terms, a normative theory, more properly called a policy, tells agents how they ought to act. A policy can be an absolute imperative, telling agents how they ought to act in any case, or it can be a contingent directive, telling agents how they ought to act if they want to achieve a particular goal. A policy is frequently stated in the form of a piece of advice called a heuristic, a maxim, a norm, a rule, a slogan, and so on. Other names for a policy are a recommendation and a regulative principle.

    A regulative ideal can be expressed in the form of a description, but what it describes is an ideal state of affairs, a condition of being that constitutes its aim, end, goal, intention, or objective. It is not the usual case for the actual case to be the ideal case, or else there would hardly be much call for a policy aimed at achieving an ideal.

    Corresponding to the distinction between actual conditions and ideal conditions there is a distinction between actual consensus and ideal consensus. A theory of truth founded on a notion of actual consensus is a very different thing from a theory of truth founded on a notion of ideal consensus. Moreover, an ideal consensus may be ideal in several different ways. The state of consensus may be ideal in its own nature, conceived in the matrix of actual experience by way of intellectual operations like abstraction, extrapolation, and limit formation. Or the conditions under which the consensus is conceived to be possible may be formulated as idealizations of actual conditions. A very common type of ideal consensus theory refers to a community that is an idealization of actual communities in one or more respects.

    Critiques

    It is very difficult to find any philosopher of note who asserts a bare, naive, or pure consensus theory of truth, in other words, a treatment of truth that is based on actual consensus in an actual community without further qualification. One obvious critique is that not everyone agrees with consensus theory, implying that it may not be true by its own criteria. Another problem is defining how we know that consensus has been achieved without falling prey to an infinite regress: even if everyone agrees to a particular proposition, we may not know that it is true until everyone agrees that everyone agrees to it. Bare consensus theories are nevertheless frequent topics of discussion, evidently because they serve as reference points for the discussion of alternative theories.

    If consensus equals truth, then truth can be made by forcing or organizing a consensus, rather than being discovered through experiment or observation, or existing separately from consensus. The principles of mathematics also do not hold under consensus truth, because mathematical propositions build on each other: if the consensus declared 2+2=5, it would render the practice of mathematics, where 2+2=4, impossible.

    Imre Lakatos characterizes it as a "watered down" form of provable truth propounded by some sociologists of knowledge, particularly Thomas Kuhn and Michael Polanyi.[1]

    Philosopher Nigel Warburton argues that the truth-by-consensus process is not a reliable way of discovering truth; that there is general agreement upon something does not make it actually true.
    There are two main reasons for this:[2]

    1. One reason Warburton discusses is that people are prone to wishful thinking. People can believe an assertion and espouse it as truth in the face of overwhelming evidence and facts to the contrary, simply because they wish that things were so.
    2. The other is that people are gullible and easily misled.

    References


  • Imre Lakatos (1978). "Falsification and the Methodology of Scientific Research Programmes" (PDF). Philosophical Papers. Cambridge University Press. p. 8. ISBN 978-0-521-28031-0. Retrieved 1 October 2016.

  • Nigel Warburton (2000). "truth by consensus". Thinking from A to Z. Routledge. pp. 134–135. ISBN 0-415-22281-8.

    Sources

    • Ferm, Vergilius (1962), "Consensus Gentium", p. 64 in Runes (1962).
    • Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK.
    • Habermas, Jürgen (1976), "What Is Universal Pragmatics?", 1st published, "Was heißt Universalpragmatik?", Sprachpragmatik und Philosophie, Karl-Otto Apel (ed.), Suhrkamp Verlag, Frankfurt am Main. Reprinted, pp. 1–68 in Jürgen Habermas, Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, Massachusetts, 1979.
    • Habermas, Jürgen (1979), Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, Massachusetts.
    • Habermas, Jürgen (1990), Moral Consciousness and Communicative Action, Christian Lenhardt and Shierry Weber Nicholsen (trans.), Thomas McCarthy (intro.), MIT Press, Cambridge, Massachusetts.
    • Habermas, Jürgen (2003), Truth and Justification, Barbara Fultner (trans.), MIT Press, Cambridge, Massachusetts.
    • James, William (1907), Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy, Longmans, Green, and Company, New York, New York.
    • James, William (1909), The Meaning of Truth, A Sequel to 'Pragmatism', Longmans, Green, and Company, New York, New York.
    • Kant, Immanuel (1800), Introduction to Logic. Reprinted, Thomas Kingsmill Abbott (trans.), Dennis Sweet (intro.), Barnes and Noble, New York, New York, 2005.
    • Kirkham, Richard L. (1992), Theories of Truth: A Critical Introduction, MIT Press, Cambridge, Massachusetts.
    • Rescher, Nicholas (1995), Pluralism: Against the Demand for Consensus, Oxford University Press, Oxford, UK.
    • Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, New Jersey.

     https://en.wikipedia.org/wiki/Consensus_theory_of_truth

     

    The principle of truth-value links is a concept in metaphysics discussed in debates between philosophical realism and anti-realism. Philosophers who appeal to truth-value links in order to explain how individuals can come to understand parts of the world that are apparently cognitively inaccessible (the past, the feelings of others, etc.) are called truth-value link realists.

    Truth-value link realism


    Proponents of truth-value link realism argue that our understanding of past-tense statements allows us to grasp the truth-conditions of the statements, even if they are evidence-transcendent. They explain this by noting that it is unproblematic for us to conceptualize a present-tense true statement being true in the future. In other words, if "It is raining today" is true today, then "It was raining yesterday" will be true tomorrow. Truth-value link realists argue that this same construction can be applied to past-tense statements. For example, "It was raining yesterday" is true today if and only if "It is raining today" was true yesterday.

    The truth-value link allows us to understand the following. First, suppose that we can understand, in an unproblematic way, truth about a present-tense statement. Assume that it is true, now, when one claims "On 22 May 2006, Student X is writing a paper for her philosophy seminar," and call it statement A. Suppose that, a year later, someone claims, "On 22 May 2006, Student X was writing a paper for her philosophy seminar," and call it statement B. According to Michael Dummett’s explication of the truth-value link, "since the statement A is now true, the statement B, made in one year’s time, is likewise true."[1] It is in understanding the truth-value link that one is able to understand what it is for a statement in the past tense to be true. The truth-value persists in the tense shift – hence the "link."
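
    In schematic form (the notation is mine, not Dummett's): writing True_t(φ) for "φ, as uttered at time t, is true" and Past(P) for the past-tense transform of a present-tense sentence P, the link says, for any later time t′:

      \mathrm{True}_{t}(P) \;\longleftrightarrow\; \mathrm{True}_{t'}\!\bigl(\mathrm{Past}(P)\bigr), \qquad t' > t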

    Criticisms

    Some philosophers, including Michael Dummett and John McDowell, have criticized truth-value link realism. They argue that it is unintelligible to suppose that training in a language can give someone more than what is involved in the training, i.e. access to inaccessible realms like the past and the minds of others. More important, they suggest that the realist appeal to the principle of truth-value links does not actually explain how the inaccessible can be cognized. When the truth-value link realist claims that, if "It is raining today" was true yesterday, then "It was raining yesterday" is true today, he or she is still appealing to an inaccessible realm of the past. In brief, one is attempting to access that which one has already conceded as being inaccessible.

    References


    1. Dummett, Michael, "The Reality of the Past," in Truth and Other Enigmas (Harvard University Press, 1978), 363.

     

    In the philosophy of science, underdetermination or the underdetermination of theory by data (sometimes abbreviated UTD) is the idea that evidence available to us at a given time may be insufficient to determine what beliefs we should hold in response to it.[1] The underdetermination thesis says that all evidence necessarily underdetermines any scientific theory.[2]

    Underdetermination exists when available evidence is insufficient to identify which belief one should hold about that evidence. For example, if all that was known was that exactly $10 was spent on apples and oranges, and that apples cost $1 and oranges $2, then one would know enough to eliminate some possibilities (e.g., 6 oranges could not have been purchased), but one would not have enough evidence to know which specific combination of apples and oranges was purchased. In this example, one would say that belief in what combination was purchased is underdetermined by the available evidence.
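
    The example can be checked by brute force. A minimal sketch enumerating every combination consistent with the evidence (apples at $1, oranges at $2, exactly $10 spent):

      # Brute-force check of the apples-and-oranges example from the text.
      solutions = [(a, o)
                   for a in range(11)       # candidate apple counts
                   for o in range(11)       # candidate orange counts
                   if 1 * a + 2 * o == 10]  # the evidence: $10 in total
      print(solutions)
      # [(0, 5), (2, 4), (4, 3), (6, 2), (8, 1), (10, 0)]
      # The evidence eliminates some options (e.g. six oranges) but leaves
      # six rivals standing: the purchase is underdetermined by the data.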

    In contrast, overdetermination in philosophy of science means that more evidence is available than is necessary to justify a conclusion.

    Origin

    Ancient Greek skeptics argued for equipollence, the view that reasons for and against claims are equally balanced. This captures at least one sense of saying that the claims themselves are underdetermined.

    Underdetermination, again under different labels, arises in the modern period in the work of René Descartes. Among other skeptical arguments, Descartes presents two arguments involving underdetermination. His dream argument points out that experiences perceived while dreaming (for example, falling) do not necessarily contain sufficient information to deduce the true situation (being in bed). He concluded that since one cannot always distinguish dreams from reality, one cannot rule out the possibility that one is dreaming rather than having veridical experiences; thus the conclusion that one is having a veridical experience is underdetermined. His demon argument posits that all of one's experiences and thoughts might be manipulated by a very powerful and deceptive "evil demon". Once again, so long as the perceived reality appears internally consistent to the limits of one's limited ability to tell, the situation is indistinguishable from reality and one cannot logically determine that such a demon does not exist.

    Underdetermination and evidence

    To show that a conclusion is underdetermined, one must show that there is a rival conclusion that is equally well supported by the standards of evidence. A trivial example of underdetermination is the addition of the statement "whenever we look for evidence" (or more generally, any statement which cannot be falsified). For example, the conclusion "objects near earth fall toward it when dropped" might be opposed by "objects near earth fall toward it when dropped but only when one checks to see that they do." Since one may append this to any conclusion, all conclusions are at least trivially underdetermined. If one considers such statements to be illegitimate, e.g. by applying Occam's Razor, then such "tricks" are not considered demonstrations of underdetermination.

    This concept also applies to scientific theories: for example, it is similarly trivial to find situations that a theory does not address. Classical mechanics, for instance, did not distinguish between non-accelerating reference frames. As a result, any conclusion about such a reference frame was underdetermined; it was equally consistent with the theory to say that the solar system is at rest as to say that it moves at any constant velocity in any particular direction. Newton himself stated that these possibilities were indistinguishable. More generally, evidence may not always be sufficient to distinguish between competing theories (or to determine a different theory that will unify both), as is the case with general relativity and quantum mechanics.

    Another example is provided by Johann Wolfgang von Goethe's Theory of Colours: "Newton believed that with the help of his prism experiments, he could prove that sunlight was composed of variously coloured rays of light. Goethe showed that this step from observation to theory is more problematic than Newton wanted to admit. By insisting that the step to theory is not forced upon us by the phenomena, Goethe revealed our own free, creative contribution to theory construction. And Goethe's insight is surprisingly significant, because he correctly claimed that all of the results of Newton's prism experiments fit a theoretical alternative equally well. If this is correct, then by suggesting an alternative to a well-established physical theory, Goethe developed the problem of underdetermination a century before Duhem and Quine's famous argument." (Mueller, 2016)[3] Hermann von Helmholtz says of this, "And I for one do not know how anyone, regardless of what his views about colours are, can deny that the theory in itself is fully consequent, that its assumptions, once granted, explain the facts treated completely and indeed simply".[4]

    Experimental violations of Bell inequalities show that there are some limitations to underdetermination: every theory exhibiting local realism and statistical independence has been ruled out by these tests. Analogous limitations follow from Kochen–Specker experiments.[5][6][7] These tests employ only correlations between the results of measurements and are therefore able to bypass the issue of the theory-ladenness of observation.[7][8]

    Arguments involving underdetermination

    Arguments involving underdetermination attempt to show that there is no reason to believe some conclusion because it is underdetermined by the evidence. Then, if the evidence available at a particular time can be equally well explained by at least one other hypothesis, there is no reason to believe it rather than the equally supported rival, which can be considered observationally equivalent (although many other hypotheses may still be eliminated).

    Because arguments involving underdetermination involve both a claim about what the evidence is and that such evidence underdetermines a conclusion, it is often useful to separate these two claims within the underdetermination argument as follows:

    1. All the available evidence of a certain type underdetermines which of several rival conclusions is correct.
    2. Only evidence of that type is relevant to believing one of these conclusions.
    3. Therefore, there is no evidence for believing one among the rival conclusions.

    The first premise makes the claim that a theory is underdetermined by the evidence. The second makes the claim that a rational decision (i.e., one using the available evidence) must rest on evidence of that type alone, which is insufficient to single out one conclusion.

    Epistemological problem of the indeterminacy of data to theory

    Any phenomenon can be explained by a multiplicity of hypotheses. How, then, can data ever be sufficient to prove a theory? This is the "epistemological problem of the indeterminacy of data to theory".

    The poverty of the stimulus argument and W.V.O. Quine's 1960 'Gavagai' example are perhaps the most commented-upon variants of the epistemological problem of the indeterminacy of data to theory.

    General skeptical arguments

    Some skeptical arguments appeal to the fact that no possible evidence could be incompatible with 'skeptical hypotheses' like the maintenance of a complex illusion by Descartes' evil demon or (in a modern version) the machines that run the Matrix. A skeptic may argue that this undermines any claims to knowledge, or even (by internalist definitions) justification.

    Philosophers have found this argument very powerful. Hume felt it was unanswerable, but observed that it was in practice impossible to accept its conclusions. Influenced by this, Kant held that while the nature of the 'noumenal' world was indeed unknowable, we could aspire to knowledge of the 'phenomenal' world. A similar response has been advocated by modern anti-realists.

    Underdetermined ideas are not thereby shown to be incorrect (taking into account present evidence); rather, we cannot know whether they are correct.

    Philosophy of science

    In the philosophy of science, underdetermination is often presented as a problem for scientific realism, which holds that we have reason to believe in the unobservable entities posited by scientific theories. One such argument proceeds as follows (compare the schema above):

    1. All the available observational evidence for such entities underdetermines the claims of a scientific theory about such entities.
    2. Only the observational evidence is relevant to believing a scientific theory.
    3. Therefore, there is no evidence for believing what scientific theories say about such entities.

    Particular responses to this argument attack both the first and the second premise (1 and 2). Against the first premise, it is argued that the underdetermination must be strong and/or inductive. Against the second, it is argued that there is evidence for a theory's truth besides observations; for example, that simplicity, explanatory power, or some other theoretical virtue is evidence for a theory over its rivals.

    A more general response from the scientific realist is to argue that underdetermination is no special problem for science, because, as indicated earlier in this article, all knowledge that is directly or indirectly supported by evidence suffers from it—for example, conjectures concerning unobserved observables. It is therefore too powerful an argument to have any significance in the philosophy of science, since it does not cast doubt uniquely on conjectured unobservables.

    Notes and references


  • "Underdetermination of Scientific Theory". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2017.

  • Norton, John D. (July 9–12, 2003). "Must Evidence Underdetermine Theory?" (PDF). Retrieved 2023-04-07.

  • Mueller, Olaf L. (2016). "Prismatic Equivalence – A New Case of Underdetermination: Goethe vs. Newton on the Prism Experiments". British Journal for the History of Philosophy. 24 (2): 323–347. doi:10.1080/09608788.2015.1132671. S2CID 170843218.

  • Helmholtz, Hermann von (1853). "Goethes Vorahnungen kommender naturwissenschaftlicher Ideen". Reprinted in Philosophische Vorträge und Aufsätze, ed. H. Hörz and S. Wollgast. Berlin: Akademie-Verlag, 1971.

  • Agazzi, Evandro, ed. (2017). Varieties of Scientific Realism. Cham: Springer International Publishing. p. 312. doi:10.1007/978-3-319-51608-0. ISBN 978-3-319-51607-3.

  • Alai, Mario (December 2019). "The Underdetermination of Theories and Scientific Realism". Axiomathes. 29 (6): 621–637. doi:10.1007/s10516-018-9384-4. hdl:11576/2661165. ISSN 1122-1151. S2CID 126042452.

  • Maudlin, Tim (1996), Cushing, James T.; Fine, Arthur; Goldstein, Sheldon (eds.), "Space-Time in the Quantum World", Bohmian Mechanics and Quantum Theory: An Appraisal, Boston Studies in the Philosophy of Science, Dordrecht: Springer Netherlands, vol. 184, p. 305, doi:10.1007/978-94-015-8715-0_20, ISBN 978-90-481-4698-7, retrieved 2022-04-23

  • Cushing, James T.; McMullin, Ernan, eds. (1989). Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem. Notre Dame, Ind.: University of Notre Dame Press. p. 65. ISBN 0-268-01578-3. OCLC 19352903.

     https://en.wikipedia.org/wiki/Underdetermination

    Intentionality is the power of minds to be about something: to represent or to stand for things, properties and states of affairs.[1] Intentionality is primarily ascribed to mental states, like perceptions, beliefs or desires, which is why it has been regarded as the characteristic mark of the mental by many philosophers. A central issue for theories of intentionality has been the problem of intentional inexistence: to determine the ontological status of the entities which are the objects of intentional states.

    An early theory of intentionality is associated with Anselm of Canterbury's ontological argument for the existence of God, and with his tenets distinguishing between objects that exist in the understanding and objects that exist in reality.[2] The idea fell out of discussion with the end of the medieval scholastic period, but in recent times was resurrected by empirical psychologist Franz Brentano and later adopted by contemporary phenomenological philosopher Edmund Husserl. Today, intentionality is a live concern among philosophers of mind and language.[3] A common dispute is between naturalism, the view that intentional properties are reducible to natural properties as studied by the natural sciences, and the phenomenal intentionality theory, the view that intentionality is grounded in consciousness. 

    https://en.wikipedia.org/wiki/Intentionality

    Intellectual responsibility (also known as epistemic responsibility) is a philosophical concept related to that of epistemic justification.[1] According to Frederick F. Schmitt, "the conception of justified belief as epistemically responsible belief has been endorsed by a number of philosophers, including Roderick Chisholm (1977), Hilary Kornblith (1983), and Lorraine Code (1983)."[2] 

    https://en.wikipedia.org/wiki/Intellectual_responsibility

    Insight is the understanding of a specific cause and effect within a particular context. The term insight can have several related meanings:

    • a piece of information
    • the act or result of understanding the inner nature of things or of seeing intuitively (called noesis in Greek)
    • an introspection
    • the power of acute observation and deduction, discernment, and perception, called intellection or noesis
    • An understanding of cause and effect based on the identification of relationships and behaviors within a model, context, or scenario (see artificial intelligence)

    An insight that manifests itself suddenly, such as understanding how to solve a difficult problem, is sometimes called by the German word Aha-Erlebnis. The term was coined by the German psychologist and theoretical linguist Karl Bühler. It is also known as an epiphany, a eureka moment or (for crossword solvers) the penny-dropping moment (PDM).[1] Sudden sickening realisations often identify a problem rather than solve it, so Uh-oh rather than Aha moments occur in negative insight.[2] A further example of negative insight is chagrin, annoyance at the obviousness of a solution missed up until the point of insight,[3] as in Homer Simpson's catchphrase exclamation, "D'oh!".

    Psychology

    Representation of the Duncker Candle Problem[4]

    In psychology, insight occurs when a solution to a problem presents itself quickly and without warning.[5] It is the sudden discovery of the correct solution following incorrect attempts based on trial and error.[6][7] Solutions via insight have been proven to be more accurate than non-insight solutions.[6]

    Insight was first studied by Gestalt psychology in the early part of the 20th century, during the search for an alternative to associationism and the associationistic view of learning.[8] Some proposed potential mechanisms for insight include: suddenly seeing the problem in a new way, connecting the problem to another relevant problem/solution pair, releasing past experiences that are blocking the solution, or seeing the problem in a larger, coherent context.[8]

    Classic methods

    Solution to the Nine-dot problem.[9][page needed]

    Generally, methodological approaches to the study of insight in the laboratory involve presenting participants with problems and puzzles that cannot be solved in a conventional or logical manner.[8] Problems of insight commonly fall into three types.[8]

    Breaking functional fixedness

    Example of a RAT problem.

    The first type of problem forces participants to use objects in a way they are not accustomed to (thus, breaking their functional fixedness), like the "Duncker candle problem".[8] In the "Duncker candle problem", individuals are given matches and a box of tacks and asked to find a way to attach a candle to the wall to light the room.[4] The solution requires the participants to empty the box of tacks, set the candle inside the box, tack the box to the wall, and light the candle with the matches.

    Spatial ability

    The second type of insight problem requires spatial ability to solve, like the "Nine-dot problem".[8] The famous "Nine-dot problem" requires participants to draw four straight lines through all nine dots without lifting their pencil.[9][page needed]

    Using verbal ability

    The third and final type of problem requires verbal ability to solve, like the Remote Associates Test (RAT).[8] In the RAT, individuals must think of a word that connects three seemingly unrelated words.[10] RAT items are often used in experiments because they can be solved both with and without insight.[11]
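
    As a toy illustration of the structure of a RAT item (not a tool from the cited studies), the sketch below brute-forces a solution against a small, invented compound-word list; a real solver would need a large corpus.

    # Illustrative sketch: brute-force search for a RAT solution.
    compounds = {("cottage", "cheese"), ("swiss", "cheese"), ("cheese", "cake"),
                 ("cream", "cheese"), ("ice", "cream"), ("sour", "cream")}

    def solves(candidate, cues):
        # The candidate must form a known compound (in either order) with every cue.
        return all((cue, candidate) in compounds or (candidate, cue) in compounds
                   for cue in cues)

    cues = ("cottage", "swiss", "cake")            # classic RAT item; answer: "cheese"
    vocabulary = {word for pair in compounds for word in pair}
    print([w for w in vocabulary if solves(w, cues)])   # ['cheese']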

    Specific results

    Versus non-insight problems

    Two clusters of problems, those solvable by insight and those not requiring insight to solve, have been observed.[12] An individual's cognitive flexibility, fluency, and vocabulary ability are predictive of performance on insight problems, but not on non-insight problems.[12] In contrast, fluid intelligence is mildly predictive of performance on non-insight problems, but not on insight problems.[12] More recent research suggests that, rather than a sharp divide between insight and search, the subjective feeling of insight varies, with some solutions experienced with a stronger feeling of Aha than others.[13][14]

    Emotion

    People in a better mood are more likely to solve problems using insight.[15] Research demonstrated that self-reported positive affect of participants uniquely increased insight before and during the solving of a problem,[16] as indicated by differing brain activity patterns.[15] People experiencing anxiety showed the opposite effect, and solved fewer problems by insight.[15] Emotion can also be considered in terms of the insight experience and whether this is a positive Aha or negative Uh-oh moment.[2] Research demonstrates that having insights depends on a good degree of access to one's own emotions and sensations, which can give rise to insights; to the degree that individuals have limited introspective access to these underlying causes, they have only limited control over these processes as well.[17]

    Incubation

    Using a geometric and spatial insight problem, it was found that providing participants with breaks improved their performance as compared to participants who did not receive a break.[18] However, the length of incubation between problems did not matter. Thus, participants' performance on insight problems improved just as much with a short break (4 minutes) as it did with a long break (12 minutes).[18]

    Sleep

    Research has shown sleep to help produce insight.[19] Individuals were initially trained on insight problems. Following training, one group was tested on the insight problems after sleeping for eight hours at night, one group was tested after staying awake all night, and one group was tested after staying awake all day. Those that slept performed twice as well on the insight problems as those who stayed awake.[19]

    In the brain

    Differences in brain activation in the left and right hemisphere seem to be indicative of insight versus non-insight solutions.[20] Using RATs presented to either the left or right visual field, it was shown that participants who solved the problem with insight were more likely to have been shown the RAT in the left visual field, indicating right-hemisphere processing. This provides evidence that the right hemisphere plays a unique role in insight.[20]

    fMRI and EEG scans of participants completing RATs demonstrated unique brain activity corresponding to problems solved by insight.[11] For one, there is high EEG activity in the alpha- and gamma-band about 300 milliseconds before participants indicate a solution to insight problems, but not to non-insight problems.[11] Additionally, problems solved by insight corresponded to increased activity in the temporal lobes and mid-frontal cortex, while more activity in the posterior cortex corresponded to non-insight problems.[11] The data suggest that something different occurs in the brain when solving insight versus non-insight problems, right before the problem is solved. This conclusion is also supported by eye-tracking data showing increased eye-blink duration and frequency when people solve problems via insight. This latter result, paired with a gaze pattern oriented away from sources of visual input (such as looking at a blank wall, or out the window at the sky), suggests different attentional involvement in insight problem solving versus problem solving via analysis.[21]

    Group insight

    It was found that groups typically perform better on insight problems (in the form of rebus puzzles with either helpful or unhelpful clues) than individuals.[22]

    Example of a rebus puzzle. Answer: man overboard.

    Additionally, while incubation improves insight performance for individuals, it improves insight performance for groups even more.[22] Thus, after a 15-minute break, individual performance improved for the rebus puzzles with unhelpful clues, and group performance improved for rebus puzzles with both unhelpful and helpful clues.[22]

    Individual differences

    Personality and gender, as they relate to performance on insight problems, were studied using a variety of insight problems. It was found that participants who ranked lower on emotionality and higher on openness to experience performed better on insight problems. Men outperformed women on insight problems, and women outperformed men on non-insight problems.[23]

    Higher intelligence (higher IQ) has also been found to be associated with better performance on insight problems. However, those of lower intelligence benefit more than those of higher intelligence from being provided with cues and hints for insight problems.[8]

    A recent large-scale study in Australia suggests that insight may not be universally experienced, with almost 20% of respondents reporting that they had not experienced insight.[24]

    Metacognition

    Individuals are poorer at predicting their own metacognition for insight problems than for non-insight problems.[25] Individuals were asked to indicate how "hot" or "cold" to a solution they felt. Generally, they were able to predict this fairly well for non-insight problems, but not for insight problems.[25] This provides evidence for the suddenness involved during insight.

    Naturalistic settings

    Recently, insight was studied in a non-laboratory setting.[26] Accounts of insight that had been reported in the media, such as in interviews, were examined and coded. It was found that insights occurring in the field are typically reported to be associated with a sudden "change in understanding" and with "seeing connections and contradictions" in the problem.[26] It was also found that insight in nature differed from insight in the laboratory: insight in nature was often rather gradual, not sudden, and incubation was not as important.[26] Other studies used online questionnaires to explore insight outside of the laboratory,[27][2] verifying the notion that insight often happens in situations such as in the shower,[24] and echoing the idea that creative ideas occur in situations where divergent thought is more likely, sometimes called the three Bs of creativity: in Bed, on the Bus, or in the Bath.

    Non-Human Animals

    Studies on primate cognition have provided evidence of what may be interpreted as insight in animals. In 1917, Wolfgang Köhler published his book The Mentality of Apes, having studied primates on the island of Tenerife for six years. In one of his experiments, apes were presented with an insight problem that required the use of objects in new and original ways, in order to win a prize (usually, some kind of food). He observed that the animals would continuously fail to get the food, and this process occurred for quite some time; however, rather suddenly, they would purposefully use the object in the way needed to get the food, as if the realization had occurred out of nowhere. He interpreted this behavior as something resembling insight in apes.[28] A more recent study suggested that elephants might also experience insight, showing that a young male elephant was able to identify and move a large cube under food that was out of reach so that he could stand on it to get the reward.[29]

    Theories

    There are a number of theories representing insight; at present, no one theory dominates interpretation.[8]

    Dual-process theory

    According to the dual-process theory, there are two systems used to solve problems.[23] The first involves logical and analytical thought processes based on reason, while the second involves intuitive and automatic processes based on experience.[23] Research has demonstrated that insight probably involves both processes; however, the second process is more influential.[23]

    Three-process theory

    According to the three-process theory, intelligence plays a large role in insight.[30] Specifically, insight involves three different processes (selective encoding, combination, and comparison), which require intelligence to apply to problems.[30] Selective encoding is the process of focusing attention on ideas relevant to a solution, while ignoring features that are irrelevant.[30] Selective combination is the process of combining the information previously deemed relevant.[30] Finally, selective comparison is the use of past experience with problems and solutions that are applicable to the current problem and solution.[30]

    Four-stage model

    According to the four-stage model of insight, there are four stages to problem solving.[31] First, the individual prepares to solve a problem.[31] Second, the individual incubates on the problem, which encompasses trial-and-error, etc.[31] Third, the insight occurs, and the solution is illuminated.[31] Finally, the verification of the solution to the problem is experienced.[31] Since this model was proposed, other similar models have been explored that contain two or three similar stages.[8]

    Psychiatry

    In psychology and psychiatry, insight can mean the ability to recognize one's own mental illness.[32][33] Psychiatric insight is typically measured with the Beck cognitive insight scale (BCIS), named after American psychiatrist Aaron Beck.[34] This form of insight has multiple dimensions, such as recognizing the need for treatment, and recognizing consequences of one's behavior as stemming from an illness.[35] A person with very poor recognition or acknowledgment is referred to as having "poor insight" or "lack of insight". The most extreme form is anosognosia, the total absence of insight into one's own mental illness. Many mental illnesses are associated with varying levels of insight. For example, people with obsessive compulsive disorder and various phobias tend to have relatively good insight that they have a problem and that their thoughts and/or actions are unreasonable, yet are compelled to carry out the thoughts and actions regardless.[36] Patients with schizophrenia and various psychotic conditions tend to have very poor awareness that anything is wrong with them.[37] Psychiatric insight favourably predicts outcomes in cognitive behavioural therapy for psychosis.[38] Today some psychiatrists believe psychiatric medication may contribute to the patient's lack of insight.[39][40][41]

    Spirituality

    The Pali word for "insight" is vipassana, which has been adopted as the name of a kind of Buddhist mindfulness meditation. Recent research indicates that mindfulness meditation does facilitate the solving of insight problems, at a dosage of 20 minutes.[42]

    Marketing

    Pat Conroy[citation needed] points out that an insight is a statement based on a deep understanding of target consumers' attitudes and beliefs, which connects with the consumer at an emotional level and provokes a clear response ("This brand understands me! That is exactly how I feel!", even if they have never articulated it quite like that) which, when leveraged, has the power to change consumer behavior. Insights must effect a change in consumer behavior that benefits the brand, leading to the achievement of the marketing objective.[citation needed]

    Insights can be based on:

    1. Real or perceived weakness to be exploited in competitive product performance or value
    2. Attitudinal or perceived barrier in the minds of consumers, regarding your brand
    3. Untapped or compelling belief or practice

    Insights are most effective when they are/do one of the following:

    1. Unexpected
    2. Create a disequilibrium
    3. Change momentum
    4. Exploited via a benefit or point of difference that your brand can deliver

    In order to be actionable, as the expression of a consumer truth, an insight is to be stated as an articulated sentence, containing:[43]

    1. An observation or a wish, e.g. "I would like to ...."
    2. A motivation explaining the wish, e.g. " because ..."
    3. A barrier preventing the consumer from being satisfied with the fulfillment of his/her motivation, e.g. " but..."

    The gap between the second and the third terms creates a tension that constitutes a potential for a brand. Just as there are concept writers for copy, there are insight writers.
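
    A minimal sketch of the three-part template just described; the field names and the example wish, motivation, and barrier are invented for illustration.

    # Illustrative sketch: assembling an insight statement from its three parts.
    def insight_statement(wish, motivation, barrier):
        return f"I would like to {wish}, because {motivation}, but {barrier}."

    print(insight_statement(
        wish="cook fresh meals every evening",
        motivation="I want my family to eat well",
        barrier="I get home too late to start from scratch",
    ))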

    In the technical terminology of market research, an insight is an understanding of a local market arrived at by drawing on different sources of information (such as quantitative and qualitative research) bearing on consumers.

    References


  • Friedlander, Kathryn J.; Fine, Philip A. (2016). "The Grounded Expertise Components Approach in the Novel Area of Cryptic Crossword Solving". Frontiers in Psychology. 7: 567. doi:10.3389/fpsyg.2016.00567. ISSN 1664-1078. PMC 4853387. PMID 27199805.

  • Hill, Gillian; Kemp, Shelly M. (2016-02-01). "Uh-Oh! What Have We Missed? A Qualitative Investigation into Everyday Insight Experience" (PDF). The Journal of Creative Behavior. 52 (3): 201–211. doi:10.1002/jocb.142. ISSN 2162-6057.

  • Gick, Mary L.; Lockhart, Robert S. "Cognitive and affective components of insight". In Sternberg, Robert J.; Davidson, J.E. (eds.). The nature of insight. MIT Press. pp. 197–228. Retrieved 2017-11-13.

  • Duncker, Karl; Lees, Lynne S. (1 January 1945). "On problem-solving". Psychological Monographs. 58 (5): i–113. doi:10.1037/h0093599.

  • Robinson-Riegler, Bridget; Robinson-Riegler, Gregory (2012). Cognitive Psychology: Applying the Science of the Mind (3rd ed.). Boston: Pearson Allyn & Bacon. ISBN 978-0-205-03364-5.

  • Salvi, Carola; Bricolo, Emanuela; Bowden, Edward; et al. (2016). "Insight solutions are correct more often than analytic solutions". Thinking and Reasoning. 22 (4): 443–60. doi:10.1080/13546783.2016.1141798. PMC 5035115. PMID 27667960.

  • Weiten, W.; McCann, D. (2007). Themes and Variations. Nelson Education ltd: Thomson Wadsworth. ISBN 978-0176472733.

  • Sternberg, Robert J.; Davidson, Janet E., eds. (1996). The nature of insight (Reprint ed.). Cambridge, Massachusetts ; London: The MIT Press. ISBN 978-0-262-69187-1.

  • Loyd, Sam (2007). Cyclopedia of Puzzles. Bronx, NY: Ishi Press International. ISBN 978-0-923891-78-7.

  • Mednick, Sarnoff (1 January 1962). "The associative basis of the creative process". Psychological Review. 69 (3): 220–232. CiteSeerX 10.1.1.170.572. doi:10.1037/h0048850. PMID 14472013.

  • Kounios, John; Beeman, Mark (1 August 2009). "The Aha! Moment: The Cognitive Neuroscience of Insight". Current Directions in Psychological Science. 18 (4): 210–216. CiteSeerX 10.1.1.521.6014. doi:10.1111/j.1467-8721.2009.01638.x. S2CID 16905317.

  • Gilhooly, KJ; Murphy, P (1 August 2005). "Differentiating insight from non-insight problems". Thinking & Reasoning. 11 (3): 279–302. doi:10.1080/13546780442000187. S2CID 144379831.

  • Webb, Margaret E.; Little, Daniel R.; Cropper, Simon J. (2016). "Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types". Frontiers in Psychology. 7: 1424. doi:10.3389/fpsyg.2016.01424. ISSN 1664-1078. PMC 5035735. PMID 27725805.

  • Danek, Amory H.; Fraps, Thomas; von Müller, Albrecht; et al. (2014-12-08). "It's a kind of magic—what self-reports can reveal about the phenomenology of insight problem solving". Frontiers in Psychology. 5: 1408. doi:10.3389/fpsyg.2014.01408. ISSN 1664-1078. PMC 4258999. PMID 25538658.

  • Subramaniam, Karuna; Kounios, John; Parrish, Todd B.; et al. (1 March 2009). "A Brain Mechanism for Facilitation of Insight by Positive Affect". Journal of Cognitive Neuroscience. 21 (3): 415–432. doi:10.1162/jocn.2009.21057. PMID 18578603. S2CID 7133900.

  • Shen, W.; Yuan, Y.; Liu, C.; et al. (2015). "In search of the 'Aha!' experience: Elucidating the emotionality of insight problem-solving". British Journal of Psychology. 107 (2): 281–298. doi:10.1111/bjop.12142. PMID 26184903.

  • Tennen, Howard; Suls, Jerry, eds. (2013). Handbook of Psychology, Volume 5: Personality and Social Psychology. Hoboken, NJ: Wiley. p. 53.

  • Segal, Eliaz (1 March 2004). "Incubation in Insight Problem Solving". Creativity Research Journal. 16 (1): 141–48. doi:10.1207/s15326934crj1601_13. S2CID 145742283.

  • Wagner, Ullrich; Gais, Steffen; Haider, Hilde; et al. (22 January 2004). "Sleep inspires insight". Nature. 427 (6972): 352–355. Bibcode:2004Natur.427..352W. doi:10.1038/nature02223. PMID 14737168. S2CID 4405704.

  • Bowden, Edward M.; Jung-Beeman, Mark (1 September 2003). "Aha! Insight experience correlates with solution activation in the right hemisphere". Psychonomic Bulletin & Review. 10 (3): 730–737. doi:10.3758/BF03196539. PMID 14620371.

  • Salvi, Carola; Bricolo, Emanuela; Franconeri, Steven; Kounios, John; Beeman, Mark (December 2015). "Sudden insight is associated with shutting out visual inputs". Psychonomic Bulletin & Review. 22 (6): 1814–1819. doi:10.3758/s13423-015-0845-0. PMID 26268431.

  • Smith, C. M.; Bushouse, E.; Lord, J. (13 November 2009). "Individual and group performance on insight problems: The effects of experimentally induced fixation". Group Processes & Intergroup Relations. 13 (1): 91–99. doi:10.1177/1368430209340276. S2CID 35914153.

  • Lin, Wei-Lun; Hsu, Kung-Yu; Chen, Hsueh-Chih; et al. (1 January 2011). "The relations of gender and personality traits on different creativities: A dual-process theory account". Psychology of Aesthetics, Creativity, and the Arts. 6 (2): 112–123. doi:10.1037/a0026241. S2CID 55632785.

  • Ovington, Linda A.; Saliba, Anthony J.; Moran, Carmen C.; et al. (2015-11-01). "Do People Really Have Insights in the Shower? The When, Where and Who of the Aha! Moment". The Journal of Creative Behavior. 52: 21–34. doi:10.1002/jocb.126. ISSN 2162-6057.

  • Metcalfe, Janet; Wiebe, David (1987). "Intuition in insight and noninsight problem solving". Memory & Cognition. 15 (3): 238–246. doi:10.3758/BF03197722. PMID 3600264.

  • Klein, G.; Jarosz, A. (17 November 2011). "A Naturalistic Study of Insight". Journal of Cognitive Engineering and Decision Making. 5 (4): 335–351. doi:10.1177/1555343411427013.

  • Jarman, Matthew S. (2014-07-01). "Quantifying the Qualitative: Measuring the Insight Experience". Creativity Research Journal. 26 (3): 276–288. doi:10.1080/10400419.2014.929405. ISSN 1040-0419. S2CID 144300757.

  • Köhler, Wolfgang (1999). The mentality of apes (Repr. ed.). London: Routledge. ISBN 978-0-415-20979-3.

  • Foerder, Preston; Galloway, Marie; Barthel, Tony III; et al. (2011-08-18). "Insightful Problem Solving in an Asian Elephant". PLOS ONE. 6 (8): e23251. Bibcode:2011PLoSO...623251F. doi:10.1371/journal.pone.0023251. ISSN 1932-6203. PMC 3158079. PMID 21876741.

  • Davidson, J. E.; Sternberg, R. J. (1 April 1984). "The Role of Insight in Intellectual Giftedness". Gifted Child Quarterly. 28 (2): 58–64. doi:10.1177/001698628402800203. S2CID 145767981.

  • Hadamard, Jacques (1975). An essay on the psychology of invention in the mathematical field (Unaltered and unabridged reprint of the enlarged (1949) ed.). New York, NY: Dover Publ. ISBN 978-0-486-20107-8.

  • Marková I. S. (2005) Insight in Psychiatry. Cambridge, Cambridge University Press.

  • Pijnenborg, G.H.M.; Spikman, J.M.; Jeronimus, B.F.; Aleman, A. (2012). "Insight in schizophrenia: associations with empathy". European Archives of Psychiatry and Clinical Neuroscience. 263 (4): 299–307. doi:10.1007/s00406-012-0373-0. PMID 23076736. S2CID 25194328.

  • Kao, Yu-Chen; Liu, Yia-Ping (2010). "The Beck Cognitive Insight Scale (BCIS): Translation and validation of the Taiwanese version". BMC Psychiatry. 10: 27. doi:10.1186/1471-244X-10-27. PMC 2873466. PMID 20377914.

  • Ghaemi, S. Nassir (2002). Polypharmacy in Psychiatry. Hoboken: Informa Healthcare. ISBN 978-0-8247-0776-7.

  • Markova, I S; Jaafari, N; Berrios, G E (2009). "Insight and Obsessive-Compulsive Disorder: A conceptual analysis". Psychopathology. 42 (5): 277–282. doi:10.1159/000228836. PMID 19609097. S2CID 36968617.

  • Marková, I. S.; Berrios, G. E.; Hodges, J. H. (2004). "Insight into Memory Function". Neurology, Psychiatry & Brain Research. 11: 115–126.

  • Perivoliotis, Dimitri; Grant, Paul M.; Peters, Emmanuelle R.; et al. (2010). "Cognitive insight predicts favorable outcome in cognitive behavioral therapy for psychosis". Psychosis. 2: 23–33. doi:10.1080/17522430903147520. S2CID 143474848.

  • Murray, Robin M. (2017). "Mistakes I Have Made in My Research Career". Schizophrenia Bulletin. 43 (2): 253–256. doi:10.1093/schbul/sbw165. PMC 5605250. PMID 28003469.

  • Harrow, M; Jobe, TH; Faull, RN (October 2012). "Do all schizophrenia patients need antipsychotic treatment continuously throughout their lifetime? A 20-year longitudinal study". Psychol Med. 42 (10): 2145–55. doi:10.1017/S0033291712000220. PMID 22340278. S2CID 29641445.

  • Moncrieff, J (2015). "Antipsychotic Maintenance Treatment: Time to Rethink?". PLOS Med. 12 (8): e1001861. doi:10.1371/journal.pmed.1001861. PMC 4524699. PMID 26241954.

  • Ren, Jun; Huang, ZhiHui; Luo, Jing; et al. (29 October 2011). "Meditation promotes insightful problem-solving by keeping people in a mindful and alert conscious state". Science China Life Sciences. 54 (10): 961–965. doi:10.1007/s11427-011-4233-3. PMID 22038009.

    1. "Qu'est-ce qu'un insight consommateur ?". insightquest.fr (in French). Retrieved January 14, 2021.

    External links

    • The dictionary definition of insight at Wiktionary

     https://en.wikipedia.org/wiki/Insight

    Illative sense is an epistemological concept coined by John Henry Newman (1801–1890) in his Grammar of Assent. For Newman, it is the unconscious process of the mind by which probabilities converge into certainty.

    https://en.wikipedia.org/wiki/Illative_sense

    Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle (4th century BCE). Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contradistinguished abduction from induction.

    Various fields study how inference is done in practice. Human inference (i.e. how humans draw conclusions) is traditionally studied within the fields of logic, argumentation studies, and cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference. Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. This generalizes deterministic reasoning, with the absence of uncertainty as a special case. Statistical inference uses quantitative or qualitative (categorical) data which may be subject to random variations.

    Definition

    The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.

    This definition is disputable due to its lack of clarity: the Oxford English Dictionary defines induction as "the inference of a general law from particular instances", so the definition given above applies only when the "conclusion" is general.

    Two possible definitions of "inference" are:

    1. A conclusion reached on the basis of evidence and reasoning.
    2. The process of reaching such a conclusion.

    Examples

    Example for definition #1

    Ancient Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We begin with a famous example:

    1. All humans are mortal.
    2. All Greeks are humans.
    3. All Greeks are mortal.

    The reader can check that the premises and conclusion are true, but logic is concerned with inference: does the truth of the conclusion follow from that of the premises?

    The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if some parts are true. But a valid form with true premises will always have a true conclusion.

    For example, consider the form of the following argument:

    1. All meat comes from animals.
    2. All beef is meat.
    3. Therefore, all beef comes from animals.

    If the premises are true, then the conclusion is necessarily true, too.

    Now we turn to an invalid form.

    1. All A are B.
    2. All C are B.
    3. Therefore, all C are A.

    To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.

    1. All apples are fruit. (True)
    2. All bananas are fruit. (True)
    3. Therefore, all bananas are apples. (False)
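
    The invalidity of this form can also be checked mechanically. The sketch below (illustrative only) enumerates every assignment of the predicates A, B, and C over a three-element universe and looks for one that makes both premises true and the conclusion false; finding one proves the form invalid.

    # Illustrative sketch: brute-force countermodel search for a syllogistic form.
    from itertools import product

    universe = range(3)

    def all_are(p, q):
        # "All p are q": every member of the universe in p is also in q.
        return all(q[i] for i in universe if p[i])

    countermodel = None
    for bits in product([False, True], repeat=9):
        A, B, C = bits[0:3], bits[3:6], bits[6:9]
        if all_are(A, B) and all_are(C, B) and not all_are(C, A):
            countermodel = (A, B, C)
            break

    print("invalid" if countermodel else "valid")   # prints "invalid"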

    A valid argument with a false premise may lead to a false conclusion (this and the following examples do not follow the Greek syllogism):

    1. All tall people are French. (False)
    2. John Lennon was tall. (True)
    3. Therefore, John Lennon was French. (False)

    When a valid argument is used to derive a false conclusion from a false premise, the inference is valid because it follows the form of a correct inference.

    A valid argument can also be used to derive a true conclusion from a false premise:

    1. All tall people are musicians. (False)
    2. John Lennon was tall. (True)
    3. Therefore, John Lennon was a musician. (True)

    In this case we have one false premise and one true premise where a true conclusion has been inferred.

    Example for definition #2

    Evidence: It is the early 1950s and you are an American stationed in the Soviet Union. You read in the Moscow newspaper that a soccer team from a small city in Siberia starts winning game after game. The team even defeats the Moscow team. Inference: The small city in Siberia is not a small city anymore. The Soviets are working on their own nuclear or high-value secret weapons program.

    Knowns: The Soviet Union is a command economy: people and material are told where to go and what to do. The small city was remote and historically had never distinguished itself; its soccer season was typically short because of the weather.

    Explanation: In a command economy, people and material are moved where they are needed. Large cities might field good teams due to the greater availability of high quality players; and teams that can practice longer (weather, facilities) can reasonably be expected to be better. In addition, you put your best and brightest in places where they can do the most good—such as on high-value weapons programs. It is an anomaly for a small city to field such a good team. The anomaly (i.e. the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern—that the small city was no longer small. Why would you put a large number of your best and brightest in the middle of nowhere? To hide them, of course.

    Incorrect inference

    An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning.

    Applications

    Inference engines

    AI systems first provided automated logical inference, and this was once an extremely popular research topic, leading to industrial applications in the form of expert systems and later business rule engines. More recent work on automated theorem proving has had a stronger basis in formal logic.

    An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by that system to extend KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
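
    A minimal sketch of such knowledge-base extension by forward chaining, with invented facts and one illustrative rule (deriving mortal(x) from man(x)); a real engine would add pattern matching and relevance filtering.

    # Illustrative sketch: extending a KB by forward chaining until fixpoint.
    facts = {("man", "socrates"), ("man", "plato")}
    rules = [lambda kb: {("mortal", x) for (p, x) in kb if p == "man"}]

    def forward_chain(facts, rules):
        # Apply every rule to the current KB; stop when nothing new is derived.
        while True:
            derived = set().union(*(rule(facts) for rule in rules)) - facts
            if not derived:
                return facts
            facts = facts | derived

    print(sorted(forward_chain(facts, rules)))
    # [('man', 'plato'), ('man', 'socrates'), ('mortal', 'plato'), ('mortal', 'socrates')]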

    Prolog engine

    Prolog (for "Programming in Logic") is a programming language based on a subset of predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining.

    Consider the classic Socrates syllogism ("All men are mortal; Socrates is a man"). We enter into our knowledge base the following piece of code:

    mortal(X) :- man(X).
    man(socrates).
    

    (Here :- can be read as "if". Generally, for a conditional P → Q (if P then Q), in Prolog we would code Q :- P (Q if P).)
    This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:

    ?- mortal(socrates).
    

    (where ?- signifies a query: can mortal(socrates). be deduced from the KB using the rules?) gives the answer "Yes".

    On the other hand, asking the Prolog system the following:

    ?- mortal(plato).
    

    gives the answer "No".

    This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed world assumption). Finally, ?- mortal(X). (is anything mortal?) would result in "Yes" (and in some implementations: "Yes": X=socrates).
    Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
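
    For readers without a Prolog system at hand, here is a minimal backward-chaining sketch in Python, restricted to ground facts and single-antecedent rules. It is only a cartoon of what Prolog does, but it reproduces the answers above, including the closed-world "No" for Plato.

    # Illustrative sketch: backward chaining over ground facts.
    facts = {("man", "socrates")}
    rules = {"mortal": "man"}                  # mortal(X) :- man(X).

    def prove(goal):
        predicate, arg = goal
        if goal in facts:                      # the goal is a known fact
            return True
        if predicate in rules:                 # a rule concludes the goal:
            return prove((rules[predicate], arg))   # try to prove its body
        return False                           # closed world assumption

    print(prove(("mortal", "socrates")))       # True
    print(prove(("mortal", "plato")))          # False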

    Semantic web

    Automated reasoners have recently found a new field of application in the Semantic Web. Because it is based on description logic, knowledge expressed in a variant of OWL can be processed logically, i.e., inferences can be made upon it.
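
    As a hand-rolled illustration (a real Semantic Web stack would use an RDFS/OWL reasoner rather than this), the sketch below applies two RDFS-style entailment rules, transitivity of subClassOf and inheritance of types, to plain triples; the class names are invented.

    # Illustrative sketch: forward-closing two RDFS-style rules over triples.
    triples = {
        ("Dog", "subClassOf", "Mammal"),
        ("Mammal", "subClassOf", "Animal"),
        ("rex", "type", "Dog"),
    }

    changed = True
    while changed:
        changed = False
        for (a, p, b) in list(triples):
            for (c, q, d) in list(triples):
                if b != c:
                    continue
                # Subclass relations are transitive.
                if p == q == "subClassOf" and (a, "subClassOf", d) not in triples:
                    triples.add((a, "subClassOf", d)); changed = True
                # Instances inherit the superclasses of their class.
                if p == "type" and q == "subClassOf" and (a, "type", d) not in triples:
                    triples.add((a, "type", d)); changed = True

    print(("rex", "type", "Animal") in triples)    # True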

    Bayesian statistics and probability logic

    Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find the best explanation. The Bayesian view has a number of desirable features; one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes).

    Bayesians identify probabilities with degrees of beliefs, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely.

    Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem.
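
    A worked sketch of Bayes' theorem with invented priors and likelihoods: each hypothesis's posterior is its prior times the likelihood of the evidence under it, normalized by the total probability of the evidence.

    # Illustrative sketch: Bayes' theorem weighing two rival explanations.
    prior = {"rain": 0.3, "no_rain": 0.7}
    likelihood = {"rain": 0.9, "no_rain": 0.2}     # P(wet lawn | hypothesis)

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

    print(posterior)   # rain ≈ 0.66, no_rain ≈ 0.34: "rain" is the better explanation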

    Non-monotonic logic

    A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is non-monotonic. Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.[1]

    By contrast, everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
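
    A minimal sketch of the contrast, with an invented default rule ("birds fly"): adding a premise retracts a conclusion the default had licensed, something that cannot happen in deductive (monotonic) inference.

    # Illustrative sketch: a defeasible default rule is non-monotonic.
    def flies(premises):
        # Default: conclude "flies" from "bird" unless an exception is known.
        return "bird" in premises and "penguin" not in premises

    print(flies({"bird"}))               # True: we jump to the conclusion
    print(flies({"bird", "penguin"}))    # False: a new premise undermines it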

    References


    1. Fuhrmann, André. Nonmonotonic Logic (PDF). Archived from the original (PDF) on 9 December 2003.

    Further reading

    Abductive inference:

    • O'Rourke, P.; Josephson, J., eds. (1997). Automated abduction: Inference to the best explanation. AAAI Press.
    • Psillos, Stathis (2009). Gabbay, Dov M.; Hartmann, Stephan; Woods, John (eds.). An Explorer upon Untrodden Ground: Peirce on Abduction (PDF). Handbook of the History of Logic. Vol. 10. Elsevier. pp. 117–152.
    • Ray, Oliver (Dec 2005). Hybrid Abductive Inductive Learning (Ph.D.). University of London, Imperial College. CiteSeerX 10.1.1.66.1877.
