Blog Archive

Wednesday, September 15, 2021

09-15-2021-0515 - order theory Hierarchy theory pattern theory complexity zero article

Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary.
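
As a concrete illustration of these definitions (a minimal sketch, not part of the Wikipedia article), the following Python snippet checks whether a binary relation on a finite set is a partial order, i.e. reflexive, antisymmetric, and transitive; divisibility on a small set of integers passes, while strict "less than" fails because it is not reflexive.

    # A minimal check that a binary relation on a finite set is a partial order:
    # reflexive, antisymmetric, and transitive.
    def is_partial_order(elements, related):
        # related(a, b) should return True when a "precedes or equals" b.
        for a in elements:
            if not related(a, a):                                  # reflexivity
                return False
        for a in elements:
            for b in elements:
                if a != b and related(a, b) and related(b, a):     # antisymmetry
                    return False
                for c in elements:
                    if related(a, b) and related(b, c) and not related(a, c):
                        return False                               # transitivity
        return True

    numbers = [1, 2, 3, 4, 6, 12]
    # Divisibility ("a divides b") is a classic partial order on positive integers.
    print(is_partial_order(numbers, lambda a, b: b % a == 0))      # True
    # Strict "less than" is not a partial order, since it is not reflexive.
    print(is_partial_order(numbers, lambda a, b: a < b))           # False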


https://en.wikipedia.org/wiki/Order_theory

The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete (a collection of discrete spacetime points, called the elements of the causal set) and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events.

The program is based on a theorem[1] by David Malament that states that if there is a bijective map between two past and future distinguishing spacetimes that preserves their causal structure, then the map is a conformal isomorphism. The conformal factor that is left undetermined is related to the volume of regions in the spacetime. This volume factor can be recovered by specifying a volume element for each spacetime point. The volume of a spacetime region could then be found by counting the number of points in that region.

The causal sets program was initiated by Rafael Sorkin, who continues to be its main proponent. He has coined the slogan "Order + Number = Geometry" to characterize the above argument. The program provides a theory in which spacetime is fundamentally discrete while retaining local Lorentz invariance.
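
A minimal sketch of the "Order + Number = Geometry" idea (illustrative code only, not from the article): sprinkle points uniformly at random into a square of 1+1-dimensional Minkowski spacetime, relate them by the causal partial order, and estimate the volume of a causal interval by counting the elements it contains.

    import random

    # Sprinkle N elements uniformly into the unit square 0 <= t, x <= 1 of
    # 1+1-dimensional Minkowski spacetime (a sprinkling at density N).
    random.seed(0)
    N = 2000
    points = [(random.random(), random.random()) for _ in range(N)]   # (t, x)

    def precedes(p, q):
        # Causal order: p precedes q when q lies in p's future light cone.
        dt = q[0] - p[0]
        dx = abs(q[1] - p[1])
        return dt > 0 and dt >= dx

    # Causal interval (order interval) between two fixed events a and b.
    a, b = (0.2, 0.5), (0.8, 0.5)
    interval = [p for p in points if precedes(a, p) and precedes(p, b)]

    # "Number ~ volume": the element count, divided by the sprinkling density,
    # estimates the continuum volume (area) of the interval.
    estimated_volume = len(interval) / N
    exact_volume = 0.5 * (b[0] - a[0]) ** 2    # area of the causal diamond
    print(estimated_volume, exact_volume)      # both roughly 0.18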


https://en.wikipedia.org/wiki/Causal_sets


Hierarchy theory is a means of studying ecological systems in which the relationship between all of the components is of great complexity. Hierarchy theory focuses on levels of organization and issues of scale, with a specific focus on the role of the observer in the definition of the system.[1] Complexity in this context does not refer to an intrinsic property of the system but to the possibility of representing the systems in a plurality of non-equivalent ways depending on the pre-analytical choices of the observer. Instead of analyzing the whole structure, hierarchy theory refers to the analysis of hierarchical levels, and the interactions between them.

https://en.wikipedia.org/wiki/Hierarchy_theory


Pattern theory, formulated by Ulf Grenander, is a mathematical formalism to describe knowledge of the world as patterns. It differs from other approaches to artificial intelligence in that it does not begin by prescribing algorithms and machinery to recognize and classify patterns; rather, it prescribes a vocabulary to articulate and recast the pattern concepts in precise language. Broad in its mathematical coverage, Pattern Theory spans algebra and statistics, as well as local topological and global entropic properties.

In addition to the new algebraic vocabulary, its statistical approach is novel in its aim to:

  • Identify the hidden variables of a data set using real world data rather than artificial stimuli, which was previously commonplace.
  • Formulate prior distributions for hidden variables and models for the observed variables that form the vertices of a Gibbs-like graph.
  • Study the randomness and variability of these graphs.
  • Create the basic classes of stochastic models applied by listing the deformations of the patterns.
  • Synthesize (sample) from the models, not just analyze signals with them; a minimal sampling sketch follows this list.
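
A minimal sampling sketch for that last point (a toy Gibbs-like model, far simpler than Grenander's actual formalism): binary variables on a ring with a prior that rewards agreement between neighbors, sampled by single-site Gibbs updates.

    import math, random

    # A tiny Gibbs-like graphical model: binary variables on a ring, with an
    # energy that rewards agreement between neighbors (an Ising-style prior).
    random.seed(1)
    n, coupling = 12, 1.0
    state = [random.choice([-1, 1]) for _ in range(n)]

    def prob_plus(i):
        # P(state[i] = +1 | its two neighbors), from the local Gibbs energy.
        s = state[(i - 1) % n] + state[(i + 1) % n]
        return 1.0 / (1.0 + math.exp(-2.0 * coupling * s))

    # Synthesize (sample) configurations from the model, rather than only
    # analyzing given signals with it.
    for _ in range(200):
        for i in range(n):
            state[i] = 1 if random.random() < prob_plus(i) else -1

    print(state)   # a typical sample shows long runs of agreeing neighbors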

The Brown University Pattern Theory Group was formed in 1972 by Ulf Grenander.[1] Many mathematicians are currently working in this group, noteworthy among them being the Fields Medalist David Mumford. Mumford regards Grenander as his "guru" in Pattern Theory.[citation needed]

https://en.wikipedia.org/wiki/Pattern_theory


Complexity characterises the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions.[1]

The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts. The study of these complex linkages at various scales is the main goal of complex systems theory.

Science as of 2010 takes a number of approaches to characterizing complexity; Zayed et al.[2] reflect many of these. Neil Johnson states that "even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples..." Ultimately Johnson adopts the definition of "complexity science" as "the study of the phenomena which emerge from a collection of interacting objects".[3]

https://en.wikipedia.org/wiki/Complexity


In physics and philosophy, a relational theory (or relationism) is a framework to understand reality or a physical system in such a way that the positions and other properties of objects are only meaningful relative to other objects. In a relational spacetime theory, space does not exist unless there are objects in it; nor does time exist without events. The relational view proposes that space is contained in objects and that an object represents within itself relationships to other objects. Space can be defined through the relations among the objects that it contains considering their variations through time. The alternative spatial theory is an absolute theory in which the space exists independently of any objects that can be immersed in it.[1]

Relational Order Theories

A number of independent lines of research depict the universe, including the social organization of living creatures which is of particular interest to humans, as systems, or networks, of relationships. Basic physics has assumed and characterized distinctive regimes of relationships. For common examples, gases, liquids and solids are characterized as systems of objects which have among them relationships of distinctive types. Gases contain elements which vary continuously in their spatial relationships with one another. In liquids, component elements vary continuously in the angles between them, but are restricted in their spatial dispersion. In solids both angles and distances are circumscribed. These systems of relationships, where relational states are relatively uniform, bounded and distinct from other relational states in their surroundings, are often characterized as phases of matter, as set out in Phase (matter). These examples are only a few of the sorts of relational regimes which can be identified, made notable by their relative simplicity and ubiquity in the universe.

Such Relational systems, or regimes, can be seen as defined by reductions in degrees of freedom among the elements of the system. This diminution in degrees of freedom in relationships among elements is characterized as correlation. In the commonly observed transitions between phases of matter, or phase transitions, the progression of less ordered, or more random, to more ordered, or less random, systems is recognized as the result of correlational processes (e.g. gas to liquid, liquid to solid). In the reverse of this process, transitions from a more-ordered state to a less ordered state, as from ice to liquid water, are accompanied by the disruption of correlations.

Correlational processes have been observed at several levels. For example, atoms are fused in suns, building up aggregations of nucleons, which we recognize as complex and heavy atoms. Atoms, both simple and complex, aggregate into molecules. In life a variety of molecules form extremely complex dynamically ordered living cells. Over evolutionary time multicellular organizations developed as dynamically ordered aggregates of cells. Multicellular organisms have over evolutionary time developed correlated activities forming what we term social groups. Etc.

Thus, as is reviewed below, correlation, i.e. ordering, processes have been tiered through several levels, reaching from  quantum mechanics upward through complex, dynamic, 'non-equilibrium', systems, including living systems.

Quantum mechanics

Lee Smolin[4] proposes a system of "knots and networks" such that "the geometry of space arises out of a … fundamental quantum level which is made up of an interwoven network of … processes".[5] Smolin and a group of like-minded researchers have devoted a number of years to developing a loop quantum gravity basis for physics, which encompasses this relational network viewpoint.

Carlo Rovelli initiated development of a system of views now called relational quantum mechanics. This concept has at its foundation the view that all systems are quantum systems, and that each quantum system is defined by its relationship with other quantum systems with which it interacts.

The physical content of the theory is not to do with objects themselves, but the relations between them. As Rovelli puts it: "Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".[6]

Rovelli has proposed that each interaction between quantum systems involves a ‘measurement’, and such interactions involve reductions in degrees of freedom between the respective systems, to which he applies the term correlation.

Cosmology

The conventional explanations of Big Bang and related cosmologies (see also Timeline of the Big Bang) project an expansion and related ‘cooling’ of the universe. This has entailed a cascade of phase transitions. Initially there were quark-gluon transitions to simple atoms. According to current, consensus cosmology, given gravitational forces, simple atoms aggregated into stars, and stars into galaxies and larger groupings. Within stars, gravitational compression fused simple atoms into increasingly complex atoms, and stellar explosions seeded interstellar gas with these atoms. Over the cosmological expansion process, with continuing star formation and evolution, the cosmic mixmaster produced smaller scale aggregations, many of which, surrounding stars, we call planets. On some planets, interactions between simple and complex atoms could produce differentiated sets of relational states, including gaseous, liquid, and solid (as, on Earth, atmosphere, oceans, and rock or land). In one and probably more of those planet level aggregations, energy flows and chemical interactions could produce dynamic, self-replicating systems which we call life.

Strictly speaking, phase transitions can manifest both correlation and differentiation events: in one direction a diminution of degrees of freedom, and in the opposite direction a disruption of correlations. However, the expanding universe picture presents a framework in which there appears to be a direction of phase transitions toward differentiation and correlation, in the universe as a whole, over time.

This picture of progressive development of order in the observable universe as a whole is at variance with the general framework of the Steady State theory of the universe, now generally abandoned. It also appears to be at variance with an understanding of the Second law of thermodynamics that views the universe as an isolated system which, at some posited equilibrium, would be in a maximally random set of configurations.

Two prominent cosmologists, David Layzer[7] and Eric Chaisson,[8] have provided slightly varying but compatible explanations of how the expansion of the universe allows ordered, or correlated, relational regimes to arise and persist, notwithstanding the second law of thermodynamics.

Layzer speaks in terms of the rate of expansion outrunning the rate of equilibration involved at local scales. Chaisson summarizes the argument as "In an expanding universe actual entropy … increases less than the maximum possible entropy"[9] thus allowing for, or requiring, ordered (negentropic) relationships to arise and persist.

Chaisson depicts the universe as a non-equilibrium process, in which energy flows into and through ordered systems, such as galaxies, stars, and life processes. This provides a cosmological basis for non-equilibrium thermodynamics, treated elsewhere to some extent in this encyclopedia at this time. In terms which unite non-equilibrium thermodynamics language and relational analyses language, patterns of processes arise and are evident as ordered, dynamic relational regimes.

Biology

Basic levels

There seems to be agreement that life is a manifestation of non-equilibrium thermodynamics, both as to individual living creatures and as to aggregates of such creatures, or ecosystems. See e.g. Brooks and Wiley,[10] Smolin,[11] Chaisson, Stuart Kauffman,[12] and Ulanowicz.[13]

This realization has proceeded from, among other sources, a seminal concept of ‘dissipative systems’ offered by Ilya Prigogine. In such systems, energy feeds through a stable, or correlated, set of dynamic processes, both engendering the system and maintaining the stability of the ordered, dynamic relational regime. A familiar example of such a structure is the Red Spot of Jupiter.

In the 1990s, Eric Schneider and J.J. Kay[14] began to develop the concept of life working off differentials, or gradients (e.g. the energy gradient manifested on Earth as a result of sunlight impinging on Earth on the one hand and the temperature of interstellar space on the other). Schneider and Kay identified the contributions of Prigogine and of Erwin Schrödinger's What is Life? as foundations for their conceptual developments.

Schneider and Dorion Sagan have since elaborated on the view of life dynamics and the ecosystem in Into the Cool.[15] In this perspective, energy flows tapped from gradients create dynamically ordered structures, or relational regimes, in pre-life precursor systems and in living systems.

As noted above, Chaisson[16] has provided a conceptual grounding for the existence of the differentials, or gradients, off which, in the view of Kay, Schneider, Sagan and others, life works. Those differentials and gradients arise in the ordered structures (such as suns, chemical systems, and the like) created by correlation processes entailed in the expansion and cooling processes of the universe.

Two investigators, Robert Ulanowicz[13] and Stuart Kauffman,[17] have suggested the relevance of autocatalysis models for life processes. In this construct, a group of elements catalyse reactions in a cyclical, or topologically circular, fashion.
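
As a hedged illustration of this 'topologically circular' catalysis (a toy model, not Ulanowicz's or Kauffman's actual formalism), the sketch below integrates three species on a ring, each catalyzing the production of the next from a shared finite resource.

    # Toy autocatalytic cycle: species A catalyzes production of B, B of C,
    # and C of A, each drawing on a shared finite resource; all species decay.
    steps, dt = 20000, 0.01
    k_cat, k_decay, total_resource = 1.0, 0.3, 1.0
    x = [0.10, 0.20, 0.30]                     # concentrations of A, B, C

    for _ in range(steps):
        free_resource = max(0.0, total_resource - sum(x))
        growth = [k_cat * free_resource * x[(i - 1) % 3] for i in range(3)]
        x = [x[i] + dt * (growth[i] - k_decay * x[i]) for i in range(3)]

    # The closed loop of catalysis sustains all three species at a nonzero
    # steady level (about (total_resource - k_decay / k_cat) / 3 each here),
    # whereas a lone species with no catalyst feeding it would simply decay.
    print([round(v, 3) for v in x])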

Several investigators have used these insights to suggest essential elements of a thermodynamic definition of the life process, which might briefly be summarized as stable, patterned (correlated) processes which intake (and dissipate) energy, and reproduce themselves.[18]

Ulanowicz, a theoretical ecologist, has extended the relational analysis of life processes to ecosystems, using information theory tools. In this approach, an ecosystem is a system of networks of relationships (a common viewpoint at present), which can be quantified and depicted at a basic level in terms of the degrees of order or organization manifested in the systems.

Two prominent investigators, Lynn Margulis and, more fully, Leo Buss,[19] have developed a view of the evolved life structure as exhibiting tiered levels of (dynamic) aggregation of life units. In each level of aggregation, the component elements have mutually beneficial, or complementary, relationships.

In brief summary, the comprehensive Buss approach is cast in terms of replicating precursors which became inclusions in single celled organisms, thence single celled organisms, thence the eukaryotic cell (which is, in Margulis’ now widely adopted analysis, made up of single celled organisms), thence multicellular organisms, composed of eukaryotic cells, and thence social organizations composed of multicellular organisms. This work adds to the ‘tree of life’ metaphor a sort of ‘layer cake of life’ metaphor, taking into account tiered levels of life organization.

Related areas of current interest

Second law of thermodynamics

The development of non-equilibrium thermodynamics and the observations of cosmological generation of ordered systems, identified above, have engendered proposed modifications in the interpretation of the Second Law of Thermodynamics, as compared with the earlier interpretations of the late 19th and the 20th century. For example, Chaisson and Layzer have advanced reconciliations of the concept of entropy with the cosmological creation of order. In another approach, Schneider and D. Sagan, in Into the Cool and other publications, depict the organization of life, and some other phenomena such as Bénard cells, as entropy generating phenomena which facilitate the dissipation, or reduction, of gradients (without in this treatment visibly getting to the prior issue of how gradients have arisen).

The ubiquity of power law and log-normal distribution manifestations in the universe

The development of network theories has yielded observations of widespread, or ubiquitous, appearance of power law and log-normal distributions of events in such networks, and in nature generally. (Mathematicians often distinguish between ‘power laws’ and ‘log-normal’ distributions, but not all discussions do so.) Two observers have provided documentation of these phenomena: Albert-László Barabási[20] and Mark Buchanan.[21]

Buchanan demonstrated that power law distributions occur throughout nature, in events such as earthquake frequencies, the size of cities, the size of stellar and planetary masses, etc. Both Buchanan and Barabasi reported the demonstrations of a variety of investigators that such power law distributions arise in phase transitions.
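
A small illustrative sketch of the distinction mentioned above (not drawn from either author): samples from a power-law (Pareto) distribution and from a log-normal distribution, compared by the fraction of samples exceeding successively doubled thresholds; the power-law tail shrinks by a roughly constant factor per doubling, while the log-normal tail falls off ever faster.

    import math, random

    random.seed(2)
    n = 100000

    # Power law (Pareto, exponent alpha) by inverse-transform sampling.
    alpha, x_min = 2.0, 1.0
    pareto = [x_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

    # Log-normal: the exponential of a standard normal variate.
    lognormal = [math.exp(random.gauss(0.0, 1.0)) for _ in range(n)]

    def tail_fraction(samples, threshold):
        return sum(1 for s in samples if s > threshold) / len(samples)

    for threshold in [1, 2, 4, 8, 16, 32]:
        print(threshold,
              round(tail_fraction(pareto, threshold), 5),
              round(tail_fraction(lognormal, threshold), 5))
    # The Pareto tail shrinks by roughly a factor of 4 each time the
    # threshold doubles; the log-normal tail falls off ever faster.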

In Barabasi's characterization "…if the system is forced to undergo a phase transition … then power laws emerge – nature's unmistakable sign that chaos is departing in favor of order. The theory of phase transitions told us loud and clear that the road from disorder to order is maintained by the powerful forces of self organization and paved with power laws."[22]

Given Barabasi's observation that phase transitions are, in one direction, correlational events yielding ordered relationships, relational theories of order following this logic would consider the ubiquity of power laws to be a reflection of the ubiquity of combinatorial processes of correlation in creating all ordered systems.

Emergence

The relational regime approach includes a straightforward derivation of the concept of emergence.

From the perspective of relational theories of order, emergent phenomena could be said to be relational effects of an aggregated and differentiated system made of many elements, in a field of relationships external to the considered system, when the elements of the considered system, taken separately and independently, would not have such effects.

For example, the stable structure of a rock, which allows very few degrees of freedom for its elements, can be seen to have a variety of external manifestations depending on the relational system in which it may be found. It could impede fluid flow, as a part of a retaining wall. If it were placed in a wind tunnel, it could be said to induce turbulence in the flow of air around it. In contests among rivalrous humans, it has sometimes been a convenient skull cracker. Or it might become, though itself a composite, an element of another solid, having similarly reduced degrees of freedom for its components, as would a pebble in a matrix making up cement.

To shift particulars, embedding carbon filaments in a resin making up a composite material can yield ‘emergent’ effects. (See the composite material article for a useful description of how varying components can, in a composite, yield effects within an external field of use, or relational setting, which the components alone would not yield).

This perspective has been advanced by Peter Corning, among others. In Corning's words, "...the debate about whether or not the whole can be predicted from the properties of the parts misses the point. Wholes produce unique combined effects, but many of these effects may be co-determined by the context and the interactions between the whole and its environment(s)."[23]

That this derivation of the concept of emergence is conceptually straightforward does not imply that the relational system may not itself be complex, or participate as an element in a complex system of relationships – as is illustrated using different terminology in some aspects of the linked emergence and complexity articles.

The term "emergence" has been used in the very different sense of characterizing the tiering of relational systems (groupings made of groupings) which constitutes the apparently progressive development of order in the universe, described by Chaisson, Layzer, and others, and noted in the Cosmology and Life Organization portions of this page. See for an additional example the derived, popularized narrative Epic of Evolution described in this encyclopedia. From his perspective, Corning adverts to this process of building 'wholes' which then in some circumstances participate in complex systems, such as life systems, as follows "...it is the synergistic effects produced by wholes that are the very cause of the evolution of complexity in nature."

The arrow of time

As the article on the Arrow of time makes clear, there have been a variety of approaches to defining time and defining how time may have a direction.

The theories which outline a development of order in the universe, rooted in the asymmetric processes of expansion and cooling, project an ‘arrow of time’. That is, the expanding universe is a sustained process which as it proceeds yields changes of state which do not appear, over the universe as a whole, to be reversible. The changes of state in a given system, and in the universe as a whole, can be earmarked by observable periodicities to yield the concept of time.

Given the challenges confronting humans in determining how the Universe may evolve over billions and trillions of our years, it is difficult to say how long this arrow may be and its eventual end state. At this time some prominent investigators suggest that much if not most of the visible matter of the universe will collapse into black holes which can be depicted as isolated, in a static cosmology.[24]

Economics

At this time there is a visible attempt to re-cast the foundations of the economics discipline in the terms of non-equilibrium dynamics and network effects.

Albert-László Barabási, Igor Matutinovic[25] and others have suggested that economic systems can fruitfully be seen as network phenomena generated by non-equilibrium forces.

As is set out in Thermoeconomics, a group of analysts have adopted the non-equilibrium thermodynamics concepts and mathematical apparatus, discussed above, as a foundational approach to considering and characterizing economic systems. They propose that human economic systems can be modeled as thermodynamic systems. Then, based on this premise, theoretical economic analogs of the first and second laws of thermodynamics are developed.[26] In addition, the thermodynamic quantity exergy, i.e. a measure of the useful work energy of a system, is one measure of value.[citation needed]
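
As a hedged illustration of exergy as a value measure (the standard closed-system formula Ex = (U − U0) + p0(V − V0) − T0(S − S0), with hypothetical numbers that are not drawn from the thermoeconomics literature cited here):

    def exergy(U, V, S, U0, V0, S0, T0, p0):
        # Physical exergy of a closed system relative to an environment
        # ("dead state") at temperature T0 [K] and pressure p0 [Pa];
        # U [J], V [m^3], S [J/K] are internal energy, volume, and entropy.
        return (U - U0) + p0 * (V - V0) - T0 * (S - S0)

    T0, p0 = 298.15, 101325.0            # hypothetical ambient conditions
    # Hypothetical system state, purely for illustration:
    print(exergy(U=5.0e5, V=0.010, S=1.2e3,
                 U0=4.0e5, V0=0.012, S0=1.5e3, T0=T0, p0=p0))
    # -> about 1.9e5 J of work-producing potential
    # A system already at the dead state has zero exergy:
    print(exergy(U=4.0e5, V=0.012, S=1.5e3,
                 U0=4.0e5, V0=0.012, S0=1.5e3, T0=T0, p0=p0))   # 0.0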

Thermoeconomists argue that economic systems always involve matter, energy, entropy, and information.[27] Thermoeconomics thus adapts the theories in non-equilibrium thermodynamics, in which structure formations called dissipative structures form, and information theory, in which information entropy is a central construct, to the modeling of economic activities in which the natural flows of energy and materials function to create and allocate resources. In thermodynamic terminology, human economic activity (as well as the activity of the human life units which make it up) may be described as a dissipative system, which flourishes by consuming free energy in transformations and exchange of resources, goods, and services.

The article on Complexity economics also contains concepts related to this line of thinking.

Another approach is led by researchers belonging to the school of evolutionary and institutional economics (Jason Potts), and ecological economics (Faber et al.).[28]

Separately, some economists have adopted the language of ‘network industries’.[29]

Particular formalisms

Two other entries in this encyclopedia set out particular formalisms involving mathematical modeling of relationships, in one case focusing to a substantial extent on mathematical expressions for relationships (Theory of relations) and in the other recording suggestions of a universal perspective on modeling and reality (Relational theory).



https://en.wikipedia.org/wiki/Relational_theory


Relationalism is any theoretical position that gives importance to the relational nature of things. For relationalism, things exist and function only as relational entities. Relationalism may be contrasted with relationism, which tends to emphasize relations per se.

https://en.wikipedia.org/wiki/Relationalism


Relativism is a family of philosophical views which deny claims to objectivity within a particular domain and assert that facts in that domain are relative to the perspective of an observer or the context in which they are assessed.[1] There are many different forms of relativism, with a great deal of variation in scope and differing degrees of controversy among them.[2] Moral relativism encompasses the differences in moral judgments among people and cultures.[3] Epistemic relativism holds that there are no absolute facts regarding norms of belief, justification, or rationality, and that there are only relative ones.[4] Alethic relativism (also factual relativism) is the doctrine that there are no absolute truths, i.e., that truth is always relative to some particular frame of reference, such as a language or a culture (cultural relativism).[5] Some forms of relativism also bear a resemblance to philosophical skepticism.[6] Descriptive relativism seeks to describe the differences among cultures and people without evaluation, while normative relativism evaluates the morality or truthfulness of views within a given framework.

https://en.wikipedia.org/wiki/Relativism


Epistemology (/ɪˌpɪstɪˈmɒlədʒi/; from Greek ἐπιστήμη, epistēmē 'knowledge', and -logy) is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Epistemology is considered a major subfield of philosophy, along with other major subfields such as ethics, logic, and metaphysics.[1]

Debates in epistemology are generally clustered around four core areas:[2][3][4]

  1. The philosophical analysis of the nature of knowledge and the conditions required for a belief to constitute knowledge, such as truth and justification
  2. Potential sources of knowledge and justified belief, such as perception, reason, memory, and testimony
  3. The structure of a body of knowledge or justified belief, including whether all justified beliefs must be derived from justified foundational beliefs or whether justification requires only a coherent set of beliefs
  4. Philosophical skepticism, which questions the possibility of knowledge, and related problems, such as whether skepticism poses a threat to our ordinary knowledge claims and whether it is possible to refute skeptical arguments

In these debates and others, epistemology aims to answer questions such as "What do we know?", "What does it mean to say that we know something?", "What makes justified beliefs justified?", and "How do we know that we know?".[1][2][5][6][7]

https://en.wikipedia.org/wiki/Epistemology


A priori and a posteriori ('from the earlier' and 'from the later', respectively) are Latin phrases used in philosophy to distinguish types of knowledge, justification, or argument by their reliance on empirical evidence or experience. A priori knowledge is that which is independent from experience. Examples include mathematics,[i] tautologies, and deduction from pure reason.[ii] A posteriori knowledge is that which depends on empirical evidence. Examples include most fields of science and aspects of personal knowledge.

The terms originate from the analytic methods of Aristotle's Organon: prior analytics, covering deductive logic from definitions and first principles, and posterior analytics, covering inductive logic from observational evidence.

Both terms appear in Euclid's Elements but were popularized by Immanuel Kant's Critique of Pure Reason, one of the most influential works in the history of philosophy.[1] Both terms are primarily used as modifiers to the noun "knowledge" (i.e. "a priori knowledge"). A priori can also be used to modify other nouns such as 'truth'. Philosophers also may use apriority, apriorist, and aprioricity as nouns referring to the quality of being a priori.[2]

https://en.wikipedia.org/wiki/A_priori_and_a_posteriori


Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation[1] representing a state of knowledge[2] or as quantification of a personal belief.[3]
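
A small worked example of this 'degree of belief' reading (a standard textbook diagnostic-test calculation, added here purely for illustration): Bayes' theorem, P(H|E) = P(E|H)P(H)/P(E), updates a prior belief into a posterior belief after seeing evidence.

    def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
        # Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E).
        p_evidence = (p_evidence_given_h * prior
                      + p_evidence_given_not_h * (1.0 - prior))
        return p_evidence_given_h * prior / p_evidence

    # Belief that a patient has a condition with a 1% base rate, after a
    # positive result from a test with 95% sensitivity and a 5% false-positive rate.
    print(round(posterior(prior=0.01,
                          p_evidence_given_h=0.95,
                          p_evidence_given_not_h=0.05), 3))
    # -> 0.161: the evidence raises the degree of belief from 1% to about 16%.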

https://en.wikipedia.org/wiki/Bayesian_probability


Truth is the property of being in accord with fact or reality.[1] In everyday language, truth is typically ascribed to things that aim to represent reality or otherwise correspond to it, such as beliefs, propositions, and declarative sentences.[2]

Truth is usually held to be the opposite of falsehood. The concept of truth is discussed and debated in various contexts, including philosophy, art, theology, and science. Most human activities depend upon the concept, where its nature as a concept is assumed rather than being a subject of discussion; these include most of the sciences, law, journalism, and everyday life. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself.[2] Most commonly, truth is viewed as the correspondence of language or thought to a mind-independent world. This is called the correspondence theory of truth.

Various theories and views of truth continue to be debated among scholars, philosophers, and theologians.[2][3] There are many different questions about the nature of truth which are still the subject of contemporary debates, such as: How do we define truth? Is it even possible to give an informative definition of truth? What things are truth-bearers and are therefore capable of being true or false? Are truth and falsehood bivalent, or are there other truth values? What are the criteria of truth that allow us to identify it and to distinguish it from falsehood? What role does truth play in constituting knowledge? And is truth always absolute, or can it be relative to one's perspective?

https://en.wikipedia.org/wiki/Truth


Knowledge is a familiarity, awareness, or understanding of someone or something, such as facts (descriptive knowledge), skills (procedural knowledge), or objects (acquaintance knowledge). By most accounts, knowledge can be acquired in many different ways and from many sources, including but not limited to perception, reason, memory, testimony, scientific inquiry, education, and practice. The philosophical study of knowledge is called epistemology.

The term "knowledge" can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); formal or informal; systematic or particular.[1] The philosopher Plato famously pointed out the need for a distinction between knowledge and true belief in the Theaetetus, leading many to attribute to him a definition of knowledge as "justified true belief".[2][3] The difficulties with this definition raised by the Gettier problem have been the subject of extensive debate in epistemology for more than half a century.[2]

https://en.wikipedia.org/wiki/Knowledge


Reality is the sum or aggregate of all that is real or existent within a system, as opposed to that which is only imaginary. The term is also used to refer to the ontological status of things, indicating their existence.[1] In physical terms, reality is the totality of a system, known and unknown.[2] Philosophical questions about the nature of reality or existence or being are considered under the rubric of ontology, which is a major branch of metaphysics in the Western philosophical tradition. Ontological questions also feature in diverse branches of philosophy, including the philosophy of science, philosophy of religion, philosophy of mathematics, and philosophical logic. These include questions about whether only physical objects are real (i.e., Physicalism), whether reality is fundamentally immaterial (e.g., Idealism), whether hypothetical unobservable entities posited by scientific theories exist, whether God exists, whether numbers and other abstract objects exist, and whether possible worlds exist.

https://en.wikipedia.org/wiki/Reality


Existence is the ability of an entity to interact with physical or mental reality. In philosophy, it refers to the ontological property[1] of being.[2]

https://en.wikipedia.org/wiki/Existence


Essence (Latin: essentia) is a polysemic term, used in philosophy and theology as a designation for the property or set of properties that make an entity or substance what it fundamentally is, and which it has by necessity, and without which it loses its identity. Essence is contrasted with accident: a property that the entity or substance has contingently, without which the substance can still retain its identity.

The concept originates rigorously with Aristotle (although it can also be found in Plato),[1] who used the Greek expression to ti ên einai (τὸ τί ἦν εἶναι,[2] literally meaning "the what it was to be" and corresponding to the scholastic term quiddity) or sometimes the shorter phrase to ti esti (τὸ τί ἐστι,[3] literally meaning "the what it is" and corresponding to the scholastic term haecceity) for the same idea. This phrase presented such difficulties for its Latin translators that they coined the word essentia (English "essence") to represent the whole expression. For Aristotle and his scholastic followers, the notion of essence is closely linked to that of definition (ὁρισμός horismos).[4]

In the history of Western philosophy, essence has often served as a vehicle for doctrines that tend to individuate different forms of existence as well as different identity conditions for objects and properties; in this logical meaning, the concept has given a strong theoretical and common-sense basis to the whole family of logical theories based on the "possible worlds" analogy set up by Leibniz and developed in the intensional logic from Carnap to Kripke, which was later challenged by "extensionalist" philosophers such as Quine.

https://en.wikipedia.org/wiki/Essence


Axiology (from Greek ἀξία, axia: "value, worth"; and -λογία, -logia: "study of") is the philosophical study of value. It includes questions about the nature and classification of values and about what kinds of things have value. It is intimately connected with various other philosophical fields that crucially depend on the notion of value, like ethics, aesthetics or philosophy of religion.[1][2] It is also closely related to value theory and meta-ethics. The term was first used by Paul Lapie, in 1902,[3][4] and Eduard von Hartmann, in 1908.[5][6]

https://en.wikipedia.org/wiki/Axiology


Logic[1] is an interdisciplinary field which studies truth and reasoning. Informal logic seeks to characterize valid arguments informally, for instance by listing varieties of fallacies. Formal logic represents statements and argument patterns symbolically, using formal systems such as first order logic. Within formal logic, mathematical logic studies the mathematical characteristics of logical systems, while philosophical logic applies them to philosophical problems such as the nature of meaning, knowledge, and existence. Systems of formal logic are also applied in other fields including linguistics, cognitive science, and computer science.
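
A minimal illustration of the formal-logic point above (not from the article): represent argument patterns symbolically and check validity by brute force over truth assignments; modus ponens comes out valid, while affirming the consequent does not.

    from itertools import product

    def valid(premises, conclusion, n_vars=2):
        # An argument pattern is valid iff every truth assignment that makes
        # all premises true also makes the conclusion true.
        for values in product([True, False], repeat=n_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False
        return True

    implies = lambda a, b: (not a) or b

    # Modus ponens: from "p implies q" and "p", infer "q".
    print(valid([lambda p, q: implies(p, q), lambda p, q: p],
                lambda p, q: q))    # True
    # Affirming the consequent: from "p implies q" and "q", infer "p".
    print(valid([lambda p, q: implies(p, q), lambda p, q: q],
                lambda p, q: p))    # False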

Logic has been studied since Antiquity, early approaches including Aristotelian logic, Stoic logic, Anviksiki, and the Mohists. Modern formal logic has its roots in the work of late 19th century mathematicians such as Gottlob Frege.

https://en.wikipedia.org/wiki/Logic


Ethics or moral philosophy is a branch[1] of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior".[2] The field of ethics, along with aesthetics, concerns matters of value; these fields comprise the branch of philosophy called axiology.[3]

Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy is related to the fields of moral psychology, descriptive ethics, and value theory.

Three major areas of study within ethics recognized today are:[2]

  1. Meta-ethics, concerning the theoretical meaning and reference of moral propositions, and how their truth values (if any) can be determined;
  2. Normative ethics, concerning the practical means of determining a moral course of action;
  3. Applied ethics, concerning what a person is obligated (or permitted) to do in a specific situation or a particular domain of action.[2]

https://en.wikipedia.org/wiki/Ethics

Philosophy of science is a branch of philosophy concerned with the foundations, methods, and implications of science. The central questions of this study concern what qualifies as science, the reliability of scientific theories, and the ultimate purpose of science. This discipline overlaps with metaphysics, ontology, and epistemology, for example, when it explores the relationship between science and truth. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of science. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than philosophy of science.

There is no consensus among philosophers about many of the central problems concerned with the philosophy of science, including whether science can reveal the truth about unobservable things and whether scientific reasoning can be justified at all. In addition to these general questions about science as a whole, philosophers of science consider problems that apply to particular sciences (such as biology or physics). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself.

While philosophical thought pertaining to science dates back at least to the time of Aristotle, general philosophy of science emerged as a distinct discipline only in the 20th century in the wake of the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Charles Sanders Peirce and Karl Popper moved on from positivism to establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period.[1]

Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W.V. Quine and others. Some thinkers such as Stephen Jay Gould seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, and Paul Feyerabend in particular, argue that there is no such thing as the "scientific method", so all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience.

Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. That is, can chemistry be reduced to physics, or can sociology be reduced to individual psychology? The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, of psychology, and of the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations.

https://en.wikipedia.org/wiki/Philosophy_of_science


Methodology is "'a contextual framework' for research, a coherent and logical scheme based on views, beliefs, and values, that guides the choices researchers [or other users] make".[1][2]

It comprises the theoretical analysis of the body of methods and principles associated with a branch of knowledge such that the methodologies employed from differing disciplines vary depending on their historical development. This creates a continuum of methodologies[3] that stretch across competing understandings of how knowledge and reality are best understood. This situates methodologies within overarching philosophies and approaches.[4]

Methodology may be visualized as a spectrum from a predominantly quantitative approach towards a predominantly qualitative approach.[5] Although a methodology may conventionally sit specifically within one of these approaches, researchers may blend approaches in answering their research objectives and so have methodologies that are multimethod and/or interdisciplinary.[6][7][8]

Overall, a methodology does not set out to provide solutions; it is therefore not the same as a method.[8][9] Instead, a methodology offers a theoretical perspective for understanding which method, set of methods, or best practices can be applied to the research question(s) at hand.

https://en.wikipedia.org/wiki/Methodology


Metaphysics is the branch of philosophy that studies the first principles of being, identity and change, space and time, causality, necessity and possibility.[1][2][3] It includes questions about the nature of consciousness and the relationship between mind and matter. The word "metaphysics" comes from two Greek words that, together, literally mean "after or behind or among [the study of] the natural". It has been suggested that the term might have been coined by a first century CE editor who assembled various small selections of Aristotle’s works into the treatise we now know by the name Metaphysics (μετὰ τὰ φυσικά, meta ta physika, lit. 'after the Physics ', another of Aristotle's works).[4]

Metaphysics studies questions related to what it is for something to exist and what types of existence there are. Metaphysics seeks to answer, in an abstract and fully general manner, the questions:[5]

  1. What is there?
  2. What is it like?

Topics of metaphysical investigation include existence, objects and their properties, space and time, cause and effect, and possibility. Metaphysics is considered one of the four main branches of philosophy, along with epistemology, logic, and ethics.[6]

https://en.wikipedia.org/wiki/Metaphysics


Pragmatism is a philosophical tradition that considers words and thought as tools and instruments for prediction, problem solving, and action, and rejects the idea that the function of thought is to describe, represent, or mirror reality. Pragmatists contend that most philosophical topics—such as the nature of knowledge, language, concepts, meaning, belief, and science—are all best viewed in terms of their practical uses and successes.

Pragmatism began in the United States in the 1870s. Its origins are often attributed to the philosophers Charles Sanders Peirce, William James, and John Dewey. In 1878, Peirce described it in his pragmatic maxim: "Consider the practical effects of the objects of your conception. Then, your conception of those effects is the whole of your conception of the object."[1]

https://en.wikipedia.org/wiki/Pragmatism


Philosophical skepticism (UK spelling: scepticism; from Greek σκέψις skepsis, "inquiry") is a family of philosophical views that question the possibility of knowledge.[1][2] Philosophical skeptics are often classified into two general categories: Those who deny all possibility of knowledge, and those who advocate for the suspension of judgment due to the inadequacy of evidence.[3] This distinction is modeled after the differences between the Academic skeptics and the Pyrrhonian skeptics in ancient Greek philosophy.

https://en.wikipedia.org/wiki/Philosophical_skepticism


The analytic–synthetic distinction is a semantic distinction, used primarily in philosophy to distinguish between propositions (in particular, statements that are affirmative subject–predicate judgments) that are of two types: analytic propositions and synthetic propositions. Analytic propositions are true or not true solely by virtue of their meaning, whereas synthetic propositions' truth, if any, derives from how their meaning relates to the world.[1]

While the distinction was first proposed by Immanuel Kant, it was revised considerably over time, and different philosophers have used the terms in very different ways. Furthermore, some philosophers (starting with W.V.O. Quine) have questioned whether there is even a clear distinction to be made between propositions which are analytically true and propositions which are synthetically true.[2] Debates regarding the nature and usefulness of the distinction continue to this day in contemporary philosophy of language.[2]

https://en.wikipedia.org/wiki/Analytic–synthetic_distinction


The internal–external distinction is a distinction used in philosophy to divide an ontology into two parts: an internal part consisting of a linguistic framework and observations related to that framework, and an external part concerning practical questions about the utility of that framework. This division was introduced by Rudolf Carnap in his work "Empiricism, Semantics, and Ontology".[1] It was subsequently criticized at length by Willard Van Orman Quine in a number of works,[2][3] and was considered for some time to have been discredited. However, recently a number of authors have come to the support of some or another version of Carnap's approach.[4][5][6]

https://en.wikipedia.org/wiki/Internal–external_distinction


Ontology is the branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level. Ontology is sometimes referred to as the science of being and belongs to the major branch of philosophy known as metaphysics.

Ontologists often try to determine what the categories or highest kinds are and how they form a system of categories that provides an encompassing classification of all entities. Commonly proposed categories include substances, properties, relations, states of affairs and events. These categories are characterized by fundamental ontological concepts, like particularity and universality, abstractness and concreteness, or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level. Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities.[1]

When used as a countable noun, the terms "ontology" and "ontologies" refer not to the science of being but to theories within the science of being. Ontological theories can be divided into various types according to their theoretical commitments. Monocategorical ontologies hold that there is only one basic category, which is rejected by polycategorical ontologies. Hierarchical ontologies assert that some entities exist on a more fundamental level and that other entities depend on them. Flat ontologies, on the other hand, deny such a privileged status to any entity.

https://en.wikipedia.org/wiki/Ontology


Ontogeny (also ontogenesis) is the origination and development of an organism (both physical and psychological, e.g., moral development[1]), usually from the time of fertilization of the egg to adult. The term can also be used to refer to the study of the entirety of an organism's lifespan.

Ontogeny is the developmental history of an organism within its own lifetime, as distinct from phylogeny, which refers to the evolutionary history of a species. In practice, writers on evolution often speak of species as "developing" traits or characteristics. This can be misleading. While developmental (i.e., ontogenetic) processes can influence subsequent evolutionary (e.g., phylogenetic) processes[2] (see evolutionary developmental biology and recapitulation theory), individual organisms develop (ontogeny), while species evolve (phylogeny).

Ontogeny, embryology and developmental biology are closely related studies and those terms are sometimes used interchangeably. Aspects of ontogeny are morphogenesis, the development of form; tissue growth; and cellular differentiation. The term ontogeny has also been used in cell biology to describe the development of various cell types within an organism.[3]

Ontogeny is a useful field of study in many disciplines, including developmental biology, developmental psychology, developmental cognitive neuroscience, and developmental psychobiology.

Ontogeny is used in anthropology as "the process through which each of us embodies the history of our own making."[4]

https://en.wikipedia.org/wiki/Ontogeny


In moral philosophy, deontological ethics or deontology (from Greek: δέον, 'obligation, duty' + λόγος, 'study') is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules, rather than based on the consequences of the action.[1] It is sometimes described as duty-, obligation-, or rule-based ethics.[2][3] Deontological ethics is commonly contrasted to consequentialism,[4] virtue ethics, and pragmatic ethics. In this terminology, action is more important than the consequences.

The term deontological was first used to describe the current, specialised definition by C. D. Broad in his 1930 book, Five Types of Ethical Theory.[5] Older usage of the term goes back to Jeremy Bentham, who coined it prior to 1816 as a synonym of dicastic or censorial ethics (i.e., ethics based on judgement).[6][7] The more general sense of the word is retained in French, especially in the term code de déontologie (ethical code), in the context of professional ethics.

Depending on the system of deontological ethics under consideration, a moral obligation may arise from an external or internal source, such as a set of rules inherent to the universe (ethical naturalism), religious law, or a set of personal or cultural values (any of which may be in conflict with personal desires).

https://en.wikipedia.org/wiki/Deontology


In linguistics and philosophy, modality is the phenomenon whereby language is used to discuss possible situations. For instance, a modal expression may convey that something is likely, desirable, or permissible. Quintessential modal expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" or "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals, and generics.

Modality has been intensely studied from a variety of perspectives. Within linguistics, typological studies have traced crosslinguistic variation in the strategies used to mark modality, with a particular focus on its interaction with Tense–aspect–mood marking. Theoretical linguists have sought to analyze both the propositional content and discourse effects of modal expressions using formal tools derived from modal logic. Within philosophy, linguistic modality is often seen as a window into broader metaphysical notions of necessity and possibility.




https://en.wikipedia.org/wiki/Modality_(natural_language)


Philosophy of language is the branch of philosophy that studies language. Its primary concerns include the nature of linguistic meaning, reference, language use, language learning and creation, language understanding, truth, thought and experience (to the extent that both are linguistic), communication, interpretation, and translation.

https://en.wikipedia.org/wiki/Category:Philosophy_of_language


Philosophy (from Greek: φιλοσοφία, philosophia, 'love of wisdom')[1][2] is the study of general and fundamental questions, such as those about existence, reason, knowledge, values, mind, and language.[3][4] Such questions are often posed as problems[5][6] to be studied or resolved. Some sources claim the term was coined by Pythagoras (c. 570 – c. 495 BCE),[7][8] others dispute this story,[9][10] arguing that Pythagoreans merely claimed use of a preexisting term.[11] Philosophical methods include questioning, critical discussion, rational argument, and systematic presentation.[12][13][i]

Historically, philosophy encompassed all bodies of knowledge and a practitioner was known as a philosopher.[14] From the time of Ancient Greek philosopher Aristotle to the 19th century, "natural philosophy" encompassed astronomy, medicine, and physics.[15] For example, Newton's 1687 Mathematical Principles of Natural Philosophy later became classified as a book of physics.

In the 19th century, the growth of modern research universities led academic philosophy and other disciplines to professionalize and specialize.[16][17] Since then, various areas of investigation that were traditionally part of philosophy have become separate academic disciplines, notably the social sciences such as psychology, sociology, linguistics, and economics.

Today, major subfields of academic philosophy include metaphysics, which is concerned with the fundamental nature of existence and reality; epistemology, which studies the nature of knowledge and belief; ethics, which is concerned with moral value; and logic, which studies the rules of inference that allow one to derive conclusions from true premises.[18][19] Other notable subfields include philosophy of science, political philosophy, aesthetics, philosophy of language, and philosophy of mind.

https://en.wikipedia.org/wiki/Philosophy


In linguistics, evidentiality[1][2] is, broadly, the indication of the nature of evidence for a given statement; that is, whether evidence exists for the statement and if so, what kind. An evidential (also verificational or validational) is the particular grammatical element (affix, clitic, or particle) that indicates evidentiality. Languages with only a single evidential have had terms such as mediative, médiatif, médiaphorique, and indirective used instead of evidential.

https://en.wikipedia.org/wiki/Evidentiality


Free choice is a phenomenon in natural language where a disjunction appears to receive a conjunctive interpretation when it interacts with a modal operator. For example, the following English sentences can be interpreted to mean that the addressee can watch a movie and that they can also play video games, depending on their preference.[1]

  1. You can watch a movie or play video games.
  2. You can watch a movie or you can play video games.

Free choice inferences are a major topic of research in formal semantics and philosophical logic because they are not valid in classical systems of modal logic. If they were valid, then the semantics of natural language would validate the Free Choice Principle.

  1. Free Choice Principle: ◇(P ∨ Q) → (◇P ∧ ◇Q)

This principle is not valid in classical modal logic. Moreover, adding this principle to standard modal logics would allow one to conclude ◇Q from ◇P, for any P and Q. This observation is known as the Paradox of Free Choice.[1][2] To resolve this paradox, some researchers have proposed analyses of free choice within nonclassical frameworks such as dynamic semantics, linear logic, alternative semantics, and inquisitive semantics.[1][3][4] Others have proposed ways of deriving free choice inferences as scalar implicatures which arise on the basis of classical lexical entries for disjunction and modality.[1][5][6][7]
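
To make the failure concrete, here is a minimal Python sketch (not from the source; the two-world Kripke model and atom names are invented assumptions) exhibiting a model in which ◇(P ∨ Q) is true while ◇P is false, so the Free Choice Principle is not classically valid:

    # Toy Kripke model: w0 can only access w1, where q (but not p) is true.
    worlds = {"w0", "w1"}
    access = {"w0": {"w1"}, "w1": set()}
    val = {"w0": set(), "w1": {"q"}}

    def possibly(formula, w):
        # Diamond: formula holds at some world accessible from w.
        return any(formula(v) for v in access[w])

    p = lambda w: "p" in val[w]
    q = lambda w: "q" in val[w]
    p_or_q = lambda w: p(w) or q(w)

    antecedent = possibly(p_or_q, "w0")                    # True: w1 verifies q
    consequent = possibly(p, "w0") and possibly(q, "w0")   # False: no accessible p-world
    print(antecedent, consequent)                          # True False -> principle fails here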

Free choice inferences are most widely studied for deontic modals, but also arise with other flavors of modality as well as imperatives, conditionals, and other kinds of operators.[1][8][9][4] Indefinite noun phrases give rise to a similar inference which is also referred to as "free choice", though researchers disagree as to whether it forms a natural class with disjunctive free choice.[9][10]

Formal semantics (natural language)

Central concepts

Compositionality Denotation Entailment Extension Generalized quantifier Intension Logical form Presupposition Proposition Reference Scope Speech act Syntax–semantics interface Truth conditions

Topics

Areas

Anaphora Ambiguity Binding Conditionals Definiteness Disjunction Evidentiality Focus Indexicality Lexical semantics Modality Negation Propositional attitudes Tense–aspect–mood Quantification Vagueness

Phenomena

Antecedent-contained deletion Cataphora Coercion Conservativity Counterfactuals Cumulativity De dicto and de re De se Deontic modality Discourse relations Donkey anaphora Epistemic modality Faultless disagreement Free choice inferences Givenness Crossover effects Hurford disjunction Inalienable possession Intersective modification Logophoricity Mirativity Modal subordination Negative polarity items Opaque contexts Performatives Privative adjectives Quantificational variability effect Responsive predicate Rising declaratives Scalar implicature Sloppy identity Subsective modification Telicity Temperature paradox Veridicality

Formalism

Formal systems

Alternative semantics Categorial grammar Combinatory categorial grammar Discourse representation theory Dynamic semantics Generative grammar Glue semantics Inquisitive semantics Intensional logic Lambda calculus Mereology Montague grammar Segmented discourse representation theory Situation semantics Supervaluationism Type theory TTR

Concepts

Autonomy of syntax Context set Continuation Conversational scoreboard Existential closure Function application Meaning postulate Monads Possible world Quantifier raising Quantization Question under discussion Squiggle operator Type shifter Universal grinder

See also

Cognitive semantics Computational semantics Distributional semantics Formal grammar Inferentialism Linguistics wars Philosophy of language Pragmatics Semantics of logic




https://en.wikipedia.org/wiki/Free_choice_inference


Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. Typically, a deontic logic uses OA to mean it is obligatory that A (or it ought to be (the case) that A), and PA to mean it is permitted (or permissible) that A.

Dyadic deontic logic[edit]

An important problem of deontic logic is that of how to properly represent conditional obligations, e.g. if you smoke (s), then you ought to use an ashtray (a). It is not clear that either of the following representations is adequate:

  1. O(s → a)   (it is obligatory that: if you smoke, then you use an ashtray)
  2. s → O(a)   (if you smoke, then it is obligatory that you use an ashtray)

Under the first representation it is vacuously true that if you commit a forbidden act, then you ought to commit any other act, regardless of whether that second act was obligatory, permitted or forbidden (Von Wright 1956, cited in Aqvist 1994). Under the second representation, we are vulnerable to the gentle murder paradox, where the plausible statements (1) if you murder, you ought to murder gently, (2) you do commit murder, and (3) to murder gently you must murder imply the less plausible statement: you ought to murder. Others argue that must in the phrase to murder gently you must murder is a mistranslation from the ambiguous English word (meaning either implies or ought). Interpreting must as implies does not allow one to conclude you ought to murder but only a repetition of the given you murder. Misinterpreting must as ought results in a perverse axiom, not a perverse logic. With use of negations one can easily check if the ambiguous word was mistranslated by considering which of the following two English statements is equivalent with the statement to murder gently you must murder: is it equivalent to "if you murder gently, it is forbidden not to murder" or to "if you murder gently, it is impossible not to murder"?
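
As a rough illustration of the first representation's vacuity, the following Python sketch (an assumption-laden toy, not the article's own formalism) evaluates O as truth at all deontically ideal worlds; once smoking is absent from every ideal world, O(s → a) comes out true no matter what a is:

    # Monadic O interpreted as "true at every deontically ideal world".
    # The ideal worlds below are invented: smoking is forbidden in all of them.
    ideal_worlds = [
        {"smoke": False, "ashtray": False},
        {"smoke": False, "ashtray": True},
    ]

    def O(phi):
        return all(phi(w) for w in ideal_worlds)

    # First representation: O(smoke -> use ashtray) is true, but only vacuously ...
    print(O(lambda w: (not w["smoke"]) or w["ashtray"]))            # True
    # ... and so is O(smoke -> anything), e.g. an absurd "obligation":
    print(O(lambda w: (not w["smoke"]) or w.get("arson", False)))   # also True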

Some deontic logicians have responded to this problem by developing dyadic deontic logics, which contain binary deontic operators:

O(A | B) means it is obligatory that A, given B
P(A | B) means it is permissible that A, given B.

(The notation is modeled on that used to represent conditional probability.) Dyadic deontic logic escapes some of the problems of standard (unary) deontic logic, but it is subject to some problems of its own.[example needed]
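
One common way to interpret the dyadic operator, sketched here under invented assumptions (the ranked worlds and predicates are illustrative only, not a particular published system), is to require A at all best-ranked B-worlds:

    # O(A | B): A holds at every best-ranked world where B holds.
    # The ranking (lower = more ideal) and the worlds are invented.
    worlds = [
        {"rank": 0, "smoke": False, "ashtray": False},
        {"rank": 1, "smoke": True,  "ashtray": True},
        {"rank": 2, "smoke": True,  "ashtray": False},
    ]

    def O_given(A, B):
        b_worlds = [w for w in worlds if B(w)]
        best = min(w["rank"] for w in b_worlds)
        return all(A(w) for w in b_worlds if w["rank"] == best)

    # "Given that you smoke, you ought to use an ashtray" comes out true,
    # without smoking itself becoming obligatory.
    print(O_given(lambda w: w["ashtray"], lambda w: w["smoke"]))   # True
    print(O_given(lambda w: w["smoke"],   lambda w: True))         # False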


https://en.wikipedia.org/wiki/Deontic_logic


In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied.[1][2] For example, the statement "all cell phones in the room are turned off" will be true when there are no cell phones in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off". For that reason, it is sometimes said that a statement is vacuously true because it does not really say anything.[3]

More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent.[1][2][4][3][5] One example of such a statement is "if London is in France, then the Eiffel Tower is in Bolivia".

Such statements are considered vacuous truths, because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement that is based on the material conditional is true when the antecedent ("London is in France" in the example) is false, regardless of whether the conclusion or consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false, because the material conditional is defined in that way.
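
A short Python sketch of this behaviour (the example data are invented), using a material-conditional helper and all() over an empty collection:

    # Material conditional: a false antecedent makes the conditional true.
    def implies(antecedent, consequent):
        return (not antecedent) or consequent

    print(implies(False, True))    # True
    print(implies(False, False))   # True (vacuously)

    # Universal statements over an empty collection are vacuously true:
    phones_in_room = []            # no cell phones in the room
    print(all(phone == "off" for phone in phones_in_room))   # True
    print(all(phone == "on" for phone in phones_in_room))    # also True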

Examples common to everyday speech include conditional phrases like "when hell freezes over..." and "when pigs can fly...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition.

In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction.[6][1] This notion has relevance in pure mathematics, as well as in any other field that uses classical logic.

Outside of mathematics, statements which can be characterized informally as vacuously true can be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might tell their parent "I ate every vegetable on my plate", when there were no vegetables on the child's plate to begin with. In this case, the parent may be led to believe that the child actually ate some vegetables, even though that is not the case. In addition, a vacuous truth is often used colloquially with absurd statements, either to confidently assert something (e.g. "the dog was red, or I'm a monkey's uncle" to strongly claim that the dog was red), or to express doubt, sarcasm, disbelief, incredulity or indignation (e.g. "yes, and I'm the Queen of England" to disagree with a previously made statement).

https://en.wikipedia.org/wiki/Vacuous_truth


Indefinite article[edit]

An indefinite article is an article that marks an indefinite noun phrase. Indefinite articles are those such as English "some" or "a", which do not refer to a specific identifiable entity. Indefinites are commonly used to introduce a new discourse referent which can be referred back to in subsequent discussion:

  1. A monster ate a cookie. His name is Cookie Monster.

Indefinites can also be used to generalize over entities who have some property in common:

  1. A cookie is a wonderful thing to eat.

Indefinites can also be used to refer to specific entities whose precise identity is unknown or unimportant.

  1. A monster must have broken into my house last night and eaten all my cookies.
  2. A friend of mine told me that happens frequently to people who live on Sesame Street.

Indefinites also have predicative uses:

  1. Leaving my door unlocked was a bad decision.

Indefinite noun phrases are widely studied within linguistics, in particular because of their ability to take exceptional scope.

Proper article[edit]

A proper article indicates that its noun is proper, and refers to a unique entity. It may be the name of a person, the name of a place, the name of a planet, etc. The Māori language has the proper article a, which is used for personal nouns; so, "a Pita" means "Peter". In Māori, when the personal nouns have the definite or indefinite article as an important part of it, both articles are present; for example, the phrase "a Te Rauparaha", which contains both the proper article a and the definite article Te, refers to the person named Te Rauparaha.

The definite article is sometimes also used with proper names, which are already specified by definition (there is just one of them). For example: the Amazon, the Hebrides. In these cases, the definite article may be considered superfluous. Its presence can be accounted for by the assumption that they are shorthand for a longer phrase in which the name is a specifier, i.e. the Amazon River, the Hebridean Islands.[citation needed] Where the nouns in such longer phrases cannot be omitted, the definite article is universally kept: the United States, the People's Republic of China. This distinction can sometimes become a political matter: the former usage the Ukraine stressed the word's Russian meaning of "borderlands"; as Ukraine became a fully independent state following the collapse of the Soviet Union, it requested that formal mentions of its name omit the article. Similar shifts in usage have occurred in the names of Sudan and both Congo (Brazzaville) and Congo (Kinshasa); a move in the other direction occurred with The Gambia. In certain languages, such as French and Italian, definite articles are used with all or most names of countries: la France/le Canada/l'Allemagne, l'Italia/la Spagna/il Brasile.

If a name has a definite article, e.g. the Kremlin, it cannot idiomatically be used without it: we cannot say Boris Yeltsin is in Kremlin.

Some languages also use definite articles with personal names. For example, such use is standard in Portuguese (a Maria, literally: "the Maria"), in Greek (η Μαρία, ο Γιώργος, ο Δούναβης, η Παρασκευή) and in Catalan (la Núria, el/en Oriol). It can also occur colloquially or dialectally in Spanish, German, French, Italian and other languages. In Hungarian, the use of definite articles with personal names is quite widespread, mainly colloquially, although it is considered to be a Germanism.

This usage can appear in American English for particular nicknames. One prominent example occurs in the case of former United States President Donald Trump, who is also sometimes informally called "The Donald" in speech and in print media.[4] Another is President Ronald Reagan's most common nickname, "The Gipper", which is still used today in reference to him.[4]

Partitive article[edit]

A partitive article is a type of article, sometimes viewed as a type of indefinite article, used with a mass noun such as water, to indicate a non-specific quantity of it. Partitive articles are a class of determiner; they are used in French and Italian in addition to definite and indefinite articles. (In Finnish and Estonian, the partitive is indicated by inflection.) The nearest equivalent in English is some, although it is classified as a determiner, and English uses it less than French uses de.

French: Veux-tu du café ?
Do you want (some) coffee?
For more information, see the article on the French partitive article.

Haida has a partitive article (suffixed -gyaa) referring to "part of something or... to one or more objects of a given group or category," e.g., tluugyaa uu hal tlaahlaang "he is making a boat (a member of the category of boats)."[5]

Negative article[edit]

A negative article specifies none of its noun, and can thus be regarded as neither definite nor indefinite. On the other hand, some consider such a word to be a simple determiner rather than an article. In English, this function is fulfilled by no, which can appear before a singular or plural noun:

No man has been on this island.
No dogs are allowed here.
No one is in the room.

In German, the negative article is, among other variations, kein, in opposition to the indefinite article ein.

Ein Hund – a dog
Kein Hund – no dog

The equivalent in Dutch is geen:

een hond – a dog
geen hond – no dog

Zero article[edit]

The zero article is the absence of an article. In languages having a definite article, the lack of an article specifically indicates that the noun is indefinite. Linguists interested in X-bar theory causally link zero articles to nouns lacking a determiner.[6] In English, the zero article rather than the indefinite is used with plurals and mass nouns, although the word "some" can be used as an indefinite plural article.

Visitors end up walking in mud.

Articles often develop by specialization of adjectives or determiners. Their development is often a sign of languages becoming more analytic instead of synthetic, perhaps combined with the loss of inflection as in English, Romance languages, Bulgarian, Macedonian and Torlakian.

Joseph Greenberg in Universals of Human Language[9] describes "the cycle of the definite article": Definite articles (Stage I) evolve from demonstratives, and in turn can become generic articles (Stage II) that may be used in both definite and indefinite contexts, and later merely noun markers (Stage III) that are part of nouns other than proper names and more recent borrowings. Eventually articles may evolve anew from demonstratives.

Definite articles[edit]

Definite articles typically arise from demonstratives meaning that. For example, the definite articles in most Romance languages—e.g., el, il, le, la, lo—derive from the Latin demonstratives ille (masculine), illa (feminine) and illud (neuter).

The English definite article the, written þe in Middle English, derives from an Old English demonstrative, which, according to gender, was written se (masculine), seo (feminine) (þe and þeo in the Northumbrian dialect), or þæt (neuter). The neuter form þæt also gave rise to the modern demonstrative that. The ye occasionally seen in pseudo-archaic usage such as "Ye Olde Englishe Tea Shoppe" is actually a form of þe, where the letter thorn (þ) came to be written as a y.

Multiple demonstratives can give rise to multiple definite articles. Macedonian, for example, in which the articles are suffixed, has столот (stolot), the chair; столов (stolov), this chair; and столон (stolon), that chair. These derive from the Proto-Slavic demonstratives *tъ "this, that", *ovъ "this here" and *onъ "that over there, yonder" respectively. Colognian has contrasting article forms, as in dat Auto and et Auto, the car: the first being specifically selected, focused, or newly introduced, while the latter is not selected, unfocused, already known, general, or generic.

Standard Basque distinguishes between proximal and distal definite articles in the plural (dialectally, a proximal singular and an additional medial grade may also be present). The Basque distal form (with infix -a-, etymologically a suffixed and phonetically reduced form of the distal demonstrative har-/hai-) functions as the default definite article, whereas the proximal form (with infix -o-, derived from the proximal demonstrative hau-/hon-) is marked and indicates some kind of (spatial or otherwise) close relationship between the speaker and the referent (e.g., it may imply that the speaker is included in the referent): etxeak ("the houses") vs. etxeok ("these houses [of ours]"), euskaldunak ("the Basque speakers") vs. euskaldunok ("we, the Basque speakers").

Speakers of Assyrian Neo-Aramaic, a modern Aramaic language that lacks a definite article, may at times use demonstratives aha and aya (feminine) or awa (masculine) – which translate to "this" and "that", respectively – to give the sense of "the".[10]

Indefinite articles[edit]

Indefinite articles typically arise from adjectives meaning one. For example, the indefinite articles in the Romance languages—e.g., un, una, une—derive from the Latin adjective unus. Partitive articles, however, derive from Vulgar Latin de illo, meaning (some) of the.

The English indefinite article an is derived from the same root as one. The -n came to be dropped before consonants, giving rise to the shortened form a. The existence of both forms has led to many cases of juncture loss, for example transforming the original a napron into the modern an apron.

The Persian indefinite article is yek, meaning one.

See also[edit]




Lexical categories and their features
Noun
Abstract / Concrete Adjectival Agent Animacy Bare Collective Countable Initial-stress-derived Mass Noun adjunct Proper Relational Strong / Weak Verbal / Deverbal
Verb
Forms
Attributive Converb Finite / Nonfinite Gerund Gerundive Infinitive Participle Supine Transgressive Verbal noun
Types
Ambitransitive Andative / Venitive Anticausative Autocausative Auxiliary Captative Catenative Compound Copular Defective Denominal Deponent Ditransitive Dynamic Exceptional case-marking Frequentative Germanic strong Germanic weak Impersonal Inchoative Intransitive Labile Lexical Light Modal Negative Performative Phrasal Predicative Preterite-present Pure Reflexive Regular / Irregular Separable Stative Stretched Transitive Unaccusative Unergative
Adjective
Anti-intersective Collateral Common Demonstrative Intersective Nominalized Non-intersective Possessive Postpositive Proper Pure intersective Relative subsective Subsective
Adverb
Conjunctive Flat Genitive Interrogative Locative Prepositional Pronominal Relative
Pronoun
Bound variable Demonstrative Disjunctive Distributive Donkey Dummy Formal / Informal Gender-neutral / Gender-specific Inclusive / Exclusive Indefinite Intensive Interrogative Personal Possessive Reciprocal Reflexive Relative Resumptive Strong / Weak Subject / Object / Prepositional
Adposition
Casally modulated Inflected Stranded
Determiner
Article Demonstrative Interrogative Possessive Quantifier
Particle
Discourse Interrogative Modal Noun Possessive
Other
Classifier Measure word Complementizer Conjunction Copula Coverb Expletive Interjection Ideophone Onomatopoeia Preverb Procedure word Pro-form Pro-verb / Pro-sentence Prop-word Syntax–semantics interface Yes and no

https://en.wikipedia.org/wiki/Article_(grammar)#Indefinite_article


In traditional grammar, a part of speech or part-of-speech  (abbreviated as POS or PoS) is a category of words (or, more generally, of lexical items) that have similar grammatical properties. Words that are assigned to the same part of speech generally display similar syntactic behavior—they play similar roles within the grammatical structure of sentences—and sometimes similar morphology in that they undergo inflection for similar properties.

Commonly listed English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, numeral, article, or determiner. Other Indo-European languages also have essentially all these word classes;[1] one exception to this generalization is that Latin, Sanskrit and most Slavic languages do not have articles. Beyond the Indo-European family, such other European languages as Hungarian and Finnish, both of which belong to the Uralic family, completely lack prepositions or have only very few of them; rather, they have postpositions.

Other terms than part of speech—particularly in modern linguistic classifications, which often make more precise distinctions than the traditional scheme does—include word class, lexical class, and lexical category. Some authors restrict the term lexical category to refer only to a particular type of syntactic category; for them the term excludes those parts of speech that are considered to be functional, such as pronouns. The term form class is also used, although this has various conflicting definitions.[2] Word classes may be classified as open or closed: open classes (typically including nouns, verbs and adjectives) acquire new members constantly, while closed classes (such as pronouns and conjunctions) acquire new members infrequently, if at all.

Almost all languages have the word classes noun and verb, but beyond these two there are significant variations among different languages.[3] For example:

Because of such variation in the number of categories and their identifying properties, analysis of parts of speech must be done for each individual language. Nevertheless, the labels for each category are assigned on the basis of universal criteria.[3]

Western tradition[edit]

A century or two after the work of Yāska, the Greek scholar Plato wrote in his Cratylus dialog, "sentences are, I conceive, a combination of verbs [rhêma] and nouns [ónoma]".[7] Aristotle added another class, "conjunction" [sýndesmos], which included not only the words known today as conjunctions, but also other parts (the interpretations differ; in one interpretation it is pronouns, prepositions, and the article).[8]

By the end of the 2nd century BCE, grammarians had expanded this classification scheme into eight categories, seen in the Art of Grammar, attributed to Dionysius Thrax:[9]

  1. Noun (ónoma): a part of speech inflected for case, signifying a concrete or abstract entity
  2. Verb (rhêma): a part of speech without case inflection, but inflected for tense, person and number, signifying an activity or process performed or undergone
  3. Participle (metokhḗ): a part of speech sharing features of the verb and the noun
  4. Article (árthron): a declinable part of speech, taken to include the definite article, but also the basic relative pronoun
  5. Pronoun (antōnymíā): a part of speech substitutable for a noun and marked for a person
  6. Preposition (próthesis): a part of speech placed before other words in composition and in syntax
  7. Adverb (epírrhēma): a part of speech without inflection, in modification of or in addition to a verb, adjective, clause, sentence, or other adverb
  8. Conjunction (sýndesmos): a part of speech binding together the discourse and filling gaps in its interpretation

It can be seen that these parts of speech are defined by morphological, syntactic and semantic criteria.

The Latin grammarian Priscian (fl. 500 CE) modified the above eightfold system, excluding "article" (since the Latin language, unlike Greek, does not have articles) but adding "interjection".[10][11]

The Latin names for the parts of speech, from which the corresponding modern English terms derive, were nomen, verbum, participium, pronomen, praepositio, adverbium, conjunctio and interjectio. The category nomen included substantives (nomen substantivum, corresponding to what are today called nouns in English), adjectives (nomen adjectivum) and numerals (nomen numerale). This is reflected in the older English terminology noun substantive, noun adjective and noun numeral. Later[12] the adjective became a separate class, as often did the numerals, and the English word noun came to be applied to substantives only.

Works of English grammar generally follow the pattern of the European tradition as described above, except that participles are now usually regarded as forms of verbs rather than as a separate part of speech, and numerals are often conflated with other parts of speech: nouns (cardinal numerals, e.g., "one", and collective numerals, e.g., "dozen"), adjectives (ordinal numerals, e.g., "first", and multiplier numerals, e.g., "single") and adverbs (multiplicative numerals, e.g., "once", and distributive numerals, e.g., "singly"). Eight or nine parts of speech are commonly listed:

  1. noun
  2. verb
  3. adjective
  4. adverb
  5. pronoun
  6. preposition
  7. conjunction
  8. interjection
  9. article or (more recently) determiner

Some modern classifications define further classes in addition to these. For discussion see the sections below.

Functional classification[edit]

Linguists recognize that the above list of eight or nine word classes is drastically simplified.[14] For example, "adverb" is to some extent a catch-all class that includes words with many different functions. Some have even argued that the most basic of category distinctions, that of nouns and verbs, is unfounded,[15] or not applicable to certain languages.[16][17] Modern linguists have proposed many different schemes whereby the words of English or other languages are placed into more specific categories and subcategories based on a more precise understanding of their grammatical functions.

A common set of lexical categories defined by function may include the following (not all of them will necessarily be applicable in a given language):

Within a given category, subgroups of words may be identified based on more precise grammatical properties. For example, verbs may be specified according to the number and type of objects or other complements which they take. This is called subcategorization.

Many modern descriptions of grammar include not only lexical categories or word classes, but also phrasal categories, used to classify phrases, in the sense of groups of words that form units having specific grammatical functions. Phrasal categories may include noun phrases (NP), verb phrases (VP) and so on. Lexical and phrasal categories together are called syntactic categories.

A diagram showing some of the posited English syntactic categories

Open and closed classes[edit]

Word classes may be either open or closed. An open class is one that commonly accepts the addition of new words, while a closed class is one to which new items are very rarely added. Open classes normally contain large numbers of words, while closed classes are much smaller. Typical open classes found in English and many other languages are nouns, verbs (excluding auxiliary verbs, if these are regarded as a separate class), adjectives, adverbs and interjections. Ideophones are often an open class, though less familiar to English speakers,[18][19][a] and are often open to nonce words. Typical closed classes are prepositions (or postpositions), determiners, conjunctions, and pronouns.[21]

The open–closed distinction is related to the distinction between lexical and functional categories, and to that between content words and function words, and some authors consider these identical, but the connection is not strict. Open classes are generally lexical categories in the stricter sense, containing words with greater semantic content,[22] while closed classes are normally functional categories, consisting of words that perform essentially grammatical functions. This is not universal: in many languages verbs and adjectives[23][24][25] are closed classes, usually consisting of few members, and in Japanese the formation of new pronouns from existing nouns is relatively common, though to what extent these form a distinct word class is debated.

Words are added to open classes through such processes as compounding, derivation, coining, and borrowing. When a new word is added through some such process, it can subsequently be used grammatically in sentences in the same ways as other words in its class.[26] A closed class may obtain new items through these same processes, but such changes are much rarer and take much more time. A closed class is normally seen as part of the core language and is not expected to change. In English, for example, new nouns, verbs, etc. are being added to the language constantly (including by the common process of verbing and other types of conversion, where an existing word comes to be used in a different part of speech). However, it is very unusual for a new pronoun, for example, to become accepted in the language, even in cases where there may be felt to be a need for one, as in the case of gender-neutral pronouns.

The open or closed status of word classes varies between languages, even assuming that corresponding word classes exist. Most conspicuously, in many languages verbs and adjectives form closed classes of content words. An extreme example is found in Jingulu, which has only three verbs, while even the modern Indo-European Persian has no more than a few hundred simple verbs, many of which are archaic. (Some twenty Persian verbs are used as light verbs to form compounds; this lack of lexical verbs is shared with other Iranian languages.) Japanese is similar, having few lexical verbs.[27] Basque verbs are also a closed class, with the vast majority of verbal senses instead expressed periphrastically.

In Japanese, verbs and adjectives are closed classes,[28] though these are quite large, with about 700 adjectives,[29][30] and verbs have opened slightly in recent years. Japanese adjectives are closely related to verbs (they can predicate a sentence, for instance). New verbal meanings are nearly always expressed periphrastically by appending suru (する, to do) to a noun, as in undō suru (運動する, to (do) exercise), and new adjectival meanings are nearly always expressed by adjectival nouns, using the suffix -na (〜な)when an adjectival noun modifies a noun phrase, as in hen-na ojisan (変なおじさん, strange man). The closedness of verbs has weakened in recent years, and in a few cases new verbs are created by appending -ru (〜る) to a noun or using it to replace the end of a word. This is mostly in casual speech for borrowed words, with the most well-established example being sabo-ru (サボる, cut class; play hooky), from sabotāju (サボタージュ, sabotage).[31] This recent innovation aside, the huge contribution of Sino-Japanese vocabulary was almost entirely borrowed as nouns (often verbal nouns or adjectival nouns). Other languages where adjectives are closed class include Swahili,[25] Bemba, and Luganda.

By contrast, Japanese pronouns are an open class and nouns become used as pronouns with some frequency; a recent example is jibun (自分, self), now used by some young men as a first-person pronoun. The status of Japanese pronouns as a distinct class is disputed[by whom?], however, with some considering it only a use of nouns, not a distinct class. The case is similar in languages of Southeast Asia, including Thai and Lao, in which, like Japanese, pronouns and terms of address vary significantly based on relative social standing and respect.[32]

Some word classes are universally closed, however, including demonstratives and interrogative words.[32]


See also[edit]

Notes[edit]

  1. ^ Ideophones do not always form a single grammatical word class, and their classification varies between languages, sometimes being split across other word classes. Rather, they are a phonosemantic word class, based on derivation, but may be considered part of the category of "expressives",[18] which thus often form an open class due to the productivity of ideophones. Further, "[i]n the vast majority of cases, however, ideophones perform an adverbial function and are closely linked with verbs."[20]


https://en.wikipedia.org/wiki/Part_of_speech


In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech,[1] based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc.

Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, by a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
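
As a rough illustration of the stochastic idea, here is a minimal most-frequent-tag baseline in Python; the tiny training pairs and tag labels are invented assumptions, not a real tag set and not Brill's actual rule-based algorithm:

    # Most-frequent-tag baseline: tag each word with the tag it received
    # most often in a (tiny, invented) training corpus; default to NN.
    from collections import Counter, defaultdict

    training = [("the", "DT"), ("dog", "NN"), ("barks", "VBZ"),
                ("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ"),
                ("dog", "NN"), ("sleeps", "VBZ")]

    counts = defaultdict(Counter)
    for word, tag in training:
        counts[word][tag] += 1

    def tag_sentence(sentence):
        return [(w, counts[w].most_common(1)[0][0] if w in counts else "NN")
                for w in sentence.lower().split()]

    print(tag_sentence("The dog sleeps"))
    # [('the', 'DT'), ('dog', 'NN'), ('sleeps', 'VBZ')]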

Tag sets[edit]

Schools commonly teach that there are 9 parts of speech in English: noun, verb, article, adjective, preposition, pronoun, adverb, conjunction, and interjection. However, there are clearly many more categories and sub-categories. For nouns, the plural, possessive, and singular forms can be distinguished. In many languages words are also marked for their "case" (role as subject, object, etc.), grammatical gender, and so on; while verbs are marked for tense, aspect, and other things. In some tagging systems, different inflections of the same root word will get different parts of speech, resulting in a large number of tags. For example, NN for singular common nouns, NNS for plural common nouns, NP for singular proper nouns (see the POS tags used in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them as features somewhat independent from part-of-speech.[2]

In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. Work on stochastic methods for tagging Koine Greek (DeRose 1990) has used over 1,000 parts of speech and found that about as many words were ambiguous in that language as in English. A morphosyntactic descriptor in the case of morphologically rich languages is commonly expressed using very short mnemonics, such as Ncmsan for Category=Noun, Type = common, Gender = masculine, Number = singular, Case = accusative, Animate = no.

The most popular "tag set" for POS tagging for American English is probably the Penn tag set, developed in the Penn Treebank project. It is largely similar to the earlier Brown Corpus and LOB Corpus tag sets, though much smaller. In Europe, tag sets from the Eagles Guidelines see wide use and include versions for multiple languages.

POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. Tags usually are designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and much larger cross-language differences. The tag sets for heavily inflected languages such as Greek and Latin can be very large; tagging words in agglutinative languages such as Inuit languages may be virtually impossible. At the other extreme, Petrov et al.[3] have proposed a "universal" tag set, with 12 categories (for example, no subtypes of nouns, verbs, punctuation, etc.; no distinction of "to" as an infinitive marker vs. preposition (hardly a "universal" coincidence), etc.). Whether a very small set of very broad tags or a much larger set of more precise ones is preferable depends on the purpose at hand. Automatic tagging is easier on smaller tag-sets.

https://en.wikipedia.org/wiki/Part-of-speech_tagging




https://en.wikipedia.org/wiki/Category:Informal_fallacies

https://en.wikipedia.org/wiki/Category:Truth
https://en.wikipedia.org/wiki/Category:Communication_of_falsehoods
https://en.wikipedia.org/wiki/Category:Obfuscation
https://en.wikipedia.org/wiki/Category:Theories_of_truth

In semantics and pragmatics, a truth condition is the condition under which a sentence is true. For example, "It is snowing in Nebraska" is true precisely when it is snowing in Nebraska. Truth conditions of a sentence do not necessarily reflect current reality. They are merely the conditions under which the statement would be true.[1]
https://en.wikipedia.org/wiki/Truth_condition

The principle of sufficient reason states that everything must have a reason or a cause. The modern[1] formulation of the principle is usually attributed to early Enlightenment philosopher Gottfried Leibniz,[2] although the idea was conceived of and utilized by various philosophers who preceded him, including Anaximander,[3] Parmenides, Archimedes,[4] Plato and Aristotle,[5] Cicero,[5] Avicenna,[6] Thomas Aquinas, and Spinoza.[7] Notably, the post-Kantian philosopher Arthur Schopenhauer elaborated the principle, and used it as the foundation of his system. Some philosophers have associated the principle of sufficient reason with "ex nihilo nihil fit".[8][9] William Hamilton identified the laws of inference modus ponens with the "law of Sufficient Reason, or of Reason and Consequent" and modus tollens with its contrapositive expression.[10]
https://en.wikipedia.org/wiki/Principle_of_sufficient_reason

In philosophy, objectivity is the concept of truth independent from individual subjectivity (bias caused by one's perception, emotions, or imagination). A proposition is considered to have objective truth when its truth conditions are met without bias caused by a sentient subject. Scientific objectivity refers to the ability to judge without partiality or external influence. Objectivity in the moral framework calls for moral codes to be assessed based on the well-being of the people in the society that follow it.[1] Moral objectivity also calls for moral codes to be compared to one another through a set of universal facts and not through subjectivity.[1]
https://en.wikipedia.org/wiki/Objectivity_(philosophy)

Reason is the capacity of consciously applying logic to seek truth and draw conclusions from new or existing information.[1][2] It is closely associated with such characteristically human activities as philosophy, science, language, mathematics, and art, and is normally considered to be a distinguishing ability possessed by humans.[3] Reason is sometimes referred to as rationality.[4]

Reasoning is associated with the acts of thinking and cognition, and involves using one's intellect. The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments.[5] Reasoning may be subdivided into forms of logical reasoning, such as deductive reasoning, inductive reasoning, and abductive reasoning. Aristotle drew a distinction between logical discursive reasoning (reason proper), and intuitive reasoning,[6] in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks.

Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or ideas regarding notions of good or evil. Reasoning, as a part of executive decision making, is also closely identified with the ability to self-consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination.[7]

In contrast to the use of "reason" as an abstract noun, a reason is a consideration given which either explains or justifies events, phenomena, or behavior.[8] Reasons justify decisions, reasons support explanations of natural phenomena; reasons can be given to explain the actions (conduct) of individuals.

Using reason, or reasoning, can also be described more plainly as providing good, or the best, reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason—that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does."[9]

Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason.

https://en.wikipedia.org/wiki/Reason


In classical logic, propositions are typically unambiguously considered as being true or false. For instance, the proposition one is both equal and not equal to itself is regarded as simply false, being contrary to the Law of Noncontradiction; while the proposition one is equal to one is regarded as simply true, by the Law of Identity. However, some mathematicians, computer scientists, and philosophers have been attracted to the idea that a proposition might be more or less true, rather than wholly true or wholly false. Consider the proposition "My coffee is hot".

In mathematics, this idea can be developed in terms of fuzzy logic. In computer science, it has found application in artificial intelligence. In philosophy, the idea has proved particularly appealing in the case of vagueness. Degrees of truth is an important concept in law.
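
A minimal Python sketch of graded truth using the common min/max/complement fuzzy connectives; the membership function for "hot" is an invented assumption, not a standard scale:

    # Graded truth with the usual fuzzy connectives (min, max, 1 - x).
    def hot(temp_c):
        # invented membership function: 0 at 20 C or below, 1 at 60 C or above
        return max(0.0, min(1.0, (temp_c - 20) / 40))

    def f_and(a, b): return min(a, b)
    def f_or(a, b):  return max(a, b)
    def f_not(a):    return 1.0 - a

    coffee = 45  # degrees Celsius
    print(hot(coffee))                               # 0.625: "hot" to degree 0.625
    print(f_and(hot(coffee), f_not(hot(coffee))))    # 0.375, not simply false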

Degree of truth is an older concept than conditional probability. Instead of determining an objective probability, only a subjective assessment is given.[1] Especially for novices in the field, the chance of confusion is high: they are likely to conflate the concept of probability with the concept of degree of truth.[2] To avoid this misconception, it makes sense to see probability theory as the preferred paradigm for handling uncertainty.

https://en.wikipedia.org/wiki/Degree_of_truth


A fact is an occurrence in the real world.[1] The usual test for a statement of fact is verifiability—that is, whether it can be demonstrated to correspond to experience. Standard reference works are often used to check facts. Scientific facts are verified by repeatable careful observation or measurement by experiments or other means.

For example, "This sentence contains words." accurately describes a linguisticfact, and "The sun is a star" accurately describes an astronomical fact. Further, "Abraham Lincoln was the 16th President of the United States" and "Abraham Lincoln was assassinated" both accurately describe historical facts. Generally speaking, facts are independent of belief and of knowledge.

https://en.wikipedia.org/wiki/Fact


A faultless disagreement is a disagreement in which Party A states that P is true, while Party B states that non-P is true, and neither party is at fault. Disagreements of this kind may arise in areas of evaluative discourse, such as aesthetics, justification of beliefs, moral values, etc. A representative example is that John says Paris is more interesting than Rome, while Bob claims Rome is more interesting than Paris. Furthermore, in the case of a faultless disagreement, it is possible that if any party gives up their claim, there will be no improvement in the position of either of them.[1]

Within the framework of formal logic, it is impossible that both P and not-P are true, and attempts have been made to accommodate faultless disagreements within frameworks of relativism about truth.[2] Max Kölbel and Sven Rosenkranz present arguments to the effect that genuine faultless disagreements are impossible.[1][2]

https://en.wikipedia.org/wiki/Faultless_disagreement


In the study of the human mindintellect refers to and identifies the ability of the mind to reach correct conclusions about what is true and what is false, and about how to solve problems. The term intellect derives from the Ancient Greek philosophy term nous, which translates to the Latin intellectus (from intelligere, “to understand”) and into the French and English languages as intelligence. Discussion of the intellect is in two areas of knowledge, wherein the terms intellect and intelligence are related terms.[1]

  • In philosophy, especially in classical and medieval philosophy the intellect (nous) is an important subject connected to the question: How do humans know things? Especially during late antiquity and the Middle Ages, the intellect was proposed as a concept that could reconcile philosophical and scientific understandings of Nature with monotheistic religious understandings, by making the intellect a link between each human soul and the divine intellect of the cosmos. During the Latin Middle Ages the distinction developed whereby the term intelligence referred to the incorporeal beings that governed the celestial sphere; see: passive intellect and active intellect.[2]
  • In modern psychology and in neuroscience, the terms intelligence and intellect describe mental abilities that allow people to understand; the distinction is that intellect relates to facts, whereas intelligence relates to feelings.[3]
https://en.wikipedia.org/wiki/Intellect


https://en.wikipedia.org/wiki/Deontic_logic
https://en.wikipedia.org/wiki/Free_choice_inference
https://en.wikipedia.org/wiki/Evidentiality
https://en.wikipedia.org/wiki/Ontology
https://en.wikipedia.org/wiki/Internal–external_distinction

https://en.wikipedia.org/wiki/Category:Truth

https://en.wikipedia.org/wiki/Critical_thinking

https://en.wikipedia.org/wiki/Double_truth

https://en.wikipedia.org/wiki/Truth_function

https://en.wikipedia.org/wiki/Immutable_truth

https://en.wikipedia.org/wiki/T-schema

https://en.wikipedia.org/wiki/Sworn_testimony

https://en.wikipedia.org/wiki/Substantial_truth

https://en.wikipedia.org/wiki/Truth_value

https://en.wikipedia.org/wiki/Verisimilitude

https://en.wikipedia.org/wiki/Veridicality

https://en.wikipedia.org/wiki/Veritas


https://en.wikipedia.org/wiki/Category:Reality

https://en.wikipedia.org/wiki/Category:Ethical_principles

https://en.wikipedia.org/wiki/Category:Philosophical_logic

https://en.wikipedia.org/wiki/Category:Meaning_(philosophy_of_language)

https://en.wikipedia.org/wiki/Category:Concepts_in_the_philosophy_of_language



https://en.wikipedia.org/wiki/Bias

https://en.wikipedia.org/wiki/Distortion

https://en.wikipedia.org/wiki/Defence_mechanism

https://en.wikipedia.org/wiki/Internalization

https://en.wikipedia.org/wiki/Enmeshment

https://en.wikipedia.org/wiki/Displacement

https://en.wikipedia.org/wiki/Projection

https://en.wikipedia.org/wiki/Portal:Psychology



Modal logic is a collection of formal systems originally developed and still widely used to represent statements about necessity and possibility. The basic unary (1-place) modal operators are most often interpreted "□" for "Necessarily" and "◇" for "Possibly". 

In a classical modal logic, each can be expressed in terms of the other and negation in a De Morgan duality:

◇P ≡ ¬□¬P
□P ≡ ¬◇¬P

The modal formula □P → ◇P can be read using the above interpretation as "if P is necessary, then it is also possible", which is almost always held to be valid. This interpretation of the modal operators as necessity and possibility is called alethic modal logic. There are modal logics of other modes, such as "□" for "Obligatorily" and "◇" for "Permissibly" in deontic modal logic, where the same formula means "if P is obligatory, then it is permissible", which is also almost always held to be valid.
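
A minimal Python sketch of this Kripke-style semantics (the two-world frame and valuation are invented assumptions) that checks the De Morgan duality and, on this serial frame, the formula □P → ◇P:

    # Two-world serial frame: every world accesses at least one world.
    access = {"w0": {"w0", "w1"}, "w1": {"w1"}}
    val = {"w0": {"p"}, "w1": set()}

    def box(phi, w):     return all(phi(v) for v in access[w])
    def diamond(phi, w): return any(phi(v) for v in access[w])

    p = lambda w: "p" in val[w]

    for w in access:
        # De Morgan duality: Possibly P iff not Necessarily not-P
        assert diamond(p, w) == (not box(lambda v: not p(v), w))
        # On a serial frame, "if P is necessary then P is possible" holds
        assert (not box(p, w)) or diamond(p, w)
    print("duality and box-to-diamond hold on this frame")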

https://en.wikipedia.org/wiki/Modal_logic


Dynamic semantics is a framework in logic and natural language semantics that treats the meaning of a sentence as its potential to update a context. In static semantics, knowing the meaning of a sentence amounts to knowing when it is true; in dynamic semantics, knowing the meaning of a sentence means knowing "the change it brings about in the information state of anyone who accepts the news conveyed by it."[1] In dynamic semantics, sentences are mapped to functions called context change potentials, which take an input context and return an output context. Dynamic semantics was originally developed by Irene Heim and Hans Kamp in 1981 to model anaphora, but has since been applied widely to phenomena including presupposition, plurals, questions, discourse relations, and modality.[2]

The first systems of dynamic semantics were the closely related File Change Semantics and discourse representation theory, developed simultaneously and independently by Irene Heim and Hans Kamp. These systems were intended to capture donkey anaphora, which resists an elegant compositional treatment in classic approaches to semantics such as Montague grammar.[2][3] Donkey anaphora is exemplified by the infamous donkey sentences, first noticed by the medieval logician Walter Burley and brought to modern attention by Peter Geach.[4][5]

Donkey sentence (relative clause): Every farmer who owns a donkey beats it.
Donkey sentence (conditional): If a farmer owns a donkey, he beats it.

To capture the empirically observed truth conditions of such sentences in first order logic, one would need to translate the indefinite noun phrase "a donkey" as a universal quantifier scoping over the variable corresponding to the pronoun "it".

FOL translation of donkey sentence: ∀x ∀y ((farmer(x) ∧ donkey(y) ∧ owns(x, y)) → beats(x, y))

While this translation captures (or approximates) the truth conditions of the natural language sentences, its relationship to the syntactic form of the sentence is puzzling in two ways. First, indefinites in non-donkey contexts normally express existential rather than universal quantification. Second, the syntactic position of the donkey pronoun would not normally allow it to be bound by the indefinite.

To explain these peculiarities, Heim and Kamp proposed that natural language indefinites are special in that they introduce a new discourse referent that remains available outside the syntactic scope of the operator that introduced it. To cash this idea out, they proposed their respective formal systems that capture donkey anaphora because they validate Egli's theorem and its corollary.[6]

Egli's theorem: (∃x φ) ∧ ψ ⟺ ∃x (φ ∧ ψ)
Egli's corollary: ((∃x φ) → ψ) ⟺ ∀x (φ → ψ)
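
To illustrate how an existential can bind beyond its syntactic scope, here is a minimal dynamic-semantics sketch in Python, treating a context as a list of assignments; the one-farmer, two-donkey model and all names are invented assumptions, not Heim's or Kamp's actual systems:

    # A context is a list of variable assignments (dicts). Existentials add
    # a referent, predicates filter, conjunction is sequential update, and
    # implication is a test over all surviving extensions.
    domain = ["farmer1", "donkey1", "donkey2"]
    farmer = {"farmer1"}
    donkey = {"donkey1", "donkey2"}
    owns   = {("farmer1", "donkey1"), ("farmer1", "donkey2")}
    beats  = {("farmer1", "donkey1"), ("farmer1", "donkey2")}

    def exists(var):
        return lambda ctx: [dict(g, **{var: d}) for g in ctx for d in domain]

    def pred1(p, x):
        return lambda ctx: [g for g in ctx if g[x] in p]

    def pred2(p, x, y):
        return lambda ctx: [g for g in ctx if (g[x], g[y]) in p]

    def conj(*updates):
        def run(ctx):
            for u in updates:
                ctx = u(ctx)
            return ctx
        return run

    def implies(antecedent, consequent):
        # Test: an input assignment survives iff every way of updating it
        # with the antecedent can be further updated with the consequent.
        return lambda ctx: [g for g in ctx
                            if all(consequent([k]) for k in antecedent([g]))]

    antecedent = conj(exists("x"), pred1(farmer, "x"),
                      exists("y"), pred1(donkey, "y"),
                      pred2(owns, "x", "y"))
    consequent = pred2(beats, "x", "y")

    print(bool(implies(antecedent, consequent)([{}])))   # True: universal force
    beats.remove(("farmer1", "donkey2"))                  # spare one donkey
    print(bool(implies(antecedent, consequent)([{}])))   # False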

https://en.wikipedia.org/wiki/Dynamic_semantics


In pragmatics, scalar implicature, or quantity implicature,[1] is an implicature that attributes an implicit meaning beyond the explicit or literal meaning of an utterance, and which suggests that the utterer had a reason for not using a more informative or stronger term on the same scale. The choice of the weaker characterization suggests that, as far as the speaker knows, none of the stronger characterizations in the scale holds. This is commonly seen in the use of 'some' to suggest the meaning 'not all', even though 'some' is logically consistent with 'all'.[2] If Bill says 'I have some of my money in cash', this utterance suggests to a hearer (though the sentence uttered does not logically imply it) that Bill does not have all his money in cash.
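
A minimal Python sketch of the neo-Gricean strengthening over the <some, all> scale; the example data mirror Bill's money but are invented assumptions:

    # Literal "some" plus the negation of the stronger, unasserted "all".
    def some(pred, xs):  return any(pred(x) for x in xs)
    def every(pred, xs): return all(pred(x) for x in xs)

    def strengthened_some(pred, xs):
        return some(pred, xs) and not every(pred, xs)

    money = [("savings", "bank"), ("wallet", "cash"), ("jar", "cash")]
    in_cash = lambda item: item[1] == "cash"

    print(some(in_cash, money))               # True: the literal meaning
    print(strengthened_some(in_cash, money))  # True: "some but not all" in cash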

https://en.wikipedia.org/wiki/Scalar_implicature


In linguistics and related fields, pragmatics is the study of how context contributes to meaning. Pragmatics encompasses phenomena including implicature, speech acts, relevance and conversation.[1] Theories of pragmatics go hand-in-hand with theories of semantics, which studies aspects of meaning which are grammatically or lexically encoded.[2][3] The ability to understand another speaker's intended meaning is called pragmatic competence.[4][5][6] Pragmatics emerged as its own subfield in the 1950s after the pioneering work of J.L. Austin and Paul Grice.[7][8]

https://en.wikipedia.org/wiki/Pragmatics


In formal linguistics, discourse representation theory (DRT) is a framework for exploring meaning under a formal semantics approach. One of the main differences between DRT-style approaches and traditional Montagovian approaches is that DRT includes a level of abstract mental representations (discourse representation structures, DRS) within its formalism, which gives it an intrinsic ability to handle meaning across sentence boundaries. DRT was created by Hans Kamp in 1981.[1] A very similar theory was developed independently by Irene Heim in 1982, under the name of File Change Semantics (FCS).[2] Discourse representation theories have been used to implement semantic parsers[3] and natural language understanding systems.[4][5][6]

https://en.wikipedia.org/wiki/Discourse_representation_theory


Formal semantics is the study of grammatical meaning in natural languages using formal tools from logic and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse engineering the semantic components of natural languages' grammars.

Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent.

Formal semantics is an interdisciplinary field, often viewed as a subfield of both linguistics and philosophy, while also incorporating work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning.[1] Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax-semantics interface and crosslinguistic variation.[2][3]

Truth conditions

The fundamental question of formal semantics is what you know when you know how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words knowing what the world would have to be like for the sentence to be true. For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking.[1][4]
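
As a rough illustration (my own encoding, not the article's), a truth-conditional meaning can be modelled as a function from ways the world might be to truth values; a "world" below is just a dictionary recording who smokes.

# Sketch: the meaning of "Nancy smokes" as a function from worlds to truth values.
def nancy_smokes(world):
    return "Nancy" in world["smokers"]

w1 = {"smokers": {"Nancy", "Bill"}}   # a world where Nancy smokes
w2 = {"smokers": {"Bill"}}            # a world where she does not
print(nancy_smokes(w1))  # True
print(nancy_smokes(w2))  # False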

However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions.[5] In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises an issue of whether Nancy drinks.[6] Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context.[7] Pietroski treats meanings as instructions to build concepts.[8]

Compositionality

The Principle of Compositionality is the fundamental assumption in formal semantics. This principle states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the meaning of "Nancy", the denotation of "smokes", and whatever semantic operations combine the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that the words "Nancy" and "smokes" are semantically composed via function application, this analysis would predict that the sentence as a whole is true if Nancy indeed smokes.[9][10][11]
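
The simplified analysis just described can be written out directly: "Nancy" denotes an individual, "smokes" denotes a function from individuals to truth values, and the sentence is interpreted by function application. The encoding below is only a sketch of that toy analysis.

# Sketch of the toy compositional analysis: [[Nancy smokes]] = [[smokes]]([[Nancy]]).
smokers = {"Nancy"}                    # facts of the model

nancy = "Nancy"                        # [[Nancy]]: an individual
smokes = lambda x: x in smokers        # [[smokes]]: individual -> truth value

def apply(fn, arg):                    # the mode of composition: function application
    return fn(arg)

print(apply(smokes, nancy))            # True iff Nancy smokes in the model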

Phenomena

Scope

Scope can be thought of as the semantic order of operations. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.[12][13][14][15]
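
One way to see two scope construals side by side is to compute both orders of quantification over a small model: on the surface-scope reading of "every student read a book", each student read some possibly different book; on the inverse-scope reading, a single book was read by every student. The model below is invented for illustration.

# Sketch: two scope construals of "every student read a book" in a toy model.
students = {"Ann", "Bo"}
books    = {"b1", "b2"}
read     = {("Ann", "b1"), ("Bo", "b2")}    # who read what (invented data)

surface = all(any((s, b) in read for b in books) for s in students)  # every > a
inverse = any(all((s, b) in read for s in students) for b in books)  # a > every

print(surface)  # True: each student read some book or other
print(inverse)  # False: no single book was read by everyone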

Binding

Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major topic of research in the government and binding theory paradigm.

Modality

Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic.[16][17][18]
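
In the possible-worlds tradition that this treatment builds on, "might" amounts to existential and "must" to universal quantification over the worlds compatible with a contextually supplied modal base. The sketch below hard-codes a tiny set of accessible worlds and is meant only to illustrate that quantificational idea, not the full Kratzer semantics.

# Sketch: "might p" and "must p" as quantification over accessible worlds (invented data).
accessible = [
    {"nancy_smoked": True},
    {"nancy_smoked": False},
]

def might(prop):
    return any(prop(w) for w in accessible)   # true in at least one accessible world

def must(prop):
    return all(prop(w) for w in accessible)   # true in every accessible world

nancy_smoked = lambda w: w["nancy_smoked"]
print(might(nancy_smoked))  # True
print(must(nancy_smoked))   # False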



Central concepts

Compositionality Denotation Entailment Extension Generalized quantifier Intension Logical form Presupposition Proposition Reference Scope Speech act Syntax–semantics interface Truth conditions

Topics

Areas

Anaphora Ambiguity Binding Conditionals Definiteness Disjunction Evidentiality Focus Indexicality Lexical semantics Modality Negation Propositional attitudes Tense–aspect–mood Quantification Vagueness

Phenomena

Antecedent-contained deletion Cataphora Coercion Conservativity Counterfactuals Cumulativity De dicto and de re De se Deontic modality Discourse relations Donkey anaphora Epistemic modality Faultless disagreement Free choice inferences Givenness Crossover effects Hurford disjunction Inalienable possession Intersective modification Logophoricity Mirativity Modal subordination Negative polarity items Opaque contexts Performatives Privative adjectives Quantificational variability effect Responsive predicate Rising declaratives Scalar implicature Sloppy identity Subsective modification Telicity Temperature paradox Veridicality

Formalism

Formal systems

Alternative semantics Categorial grammar Combinatory categorial grammar Discourse representation theory Dynamic semantics Generative grammar Glue semantics Inquisitive semantics Intensional logic Lambda calculus Mereology Montague grammar Segmented discourse representation theory Situation semantics Supervaluationism Type theory TTR

Concepts

Autonomy of syntax Context set Continuation Conversational scoreboard Existential closure Function application Meaning postulate Monads Possible world Quantifier raising Quantization Question under discussion Squiggle operator Type shifter Universal grinder

See also

Cognitive semantics Computational semantics Distributional semantics Formal grammar Inferentialism Linguistics wars Philosophy of language Pragmatics Semantics of logic



https://en.wikipedia.org/wiki/Formal_semantics_(natural_language)


In linguistics, the autonomy of syntax is the assumption that syntax is arbitrary and self-contained with respect to meaning, semantics, pragmatics, discourse function, and other factors external to language.[1] The autonomy of syntax is advocated by linguistic formalists, and in particular by generative linguistics, whose approaches have hence been called autonomist linguistics.

The autonomy of syntax is at the center of the debates between formalist and functionalist linguistics,[1][2][3] and since the 1980s research has been conducted on the syntax–semantics interface within functionalist approaches, aimed at finding instances of semantically determined syntactic structures, to disprove the formalist argument of the autonomy of syntax.[4]

The principle of iconicity is contrasted, for some scenarios, with that of the autonomy of syntax. The weaker version of the argument for the autonomy of syntax (or that for the autonomy of grammar) includes only the principle of arbitrariness, while the stronger version includes the claim of self-containedness.[1] The principle of arbitrariness of syntax is actually accepted by most functionalist linguists, and the real dispute between functionalists and generativists is over the claim of self-containedness of grammar or syntax.[5]

https://en.wikipedia.org/wiki/Autonomy_of_syntax


The simplest update systems are intersective ones, which simply lift static systems into the dynamic framework. However, update semantics includes systems more expressive than what can be defined in the static framework. In particular, it allows information sensitive semantic entries, in which the information contributed by updating with some formula can depend on the information already present in the context.[8] This property of update semantics has led to its widespread application to presuppositions, modals, and conditionals.
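
For the intersective case mentioned above, updating a context with a formula simply intersects the context (a set of candidate worlds) with the worlds where the formula holds. The sketch below uses that simple picture; the worlds are invented.

# Sketch: intersective update, context[phi] = {w in context : phi holds in w}.
context = {
    frozenset({"rain"}),              # each candidate world is a set of true atoms
    frozenset({"rain", "wind"}),
    frozenset({"wind"}),
}

def update(ctx, phi):
    return {w for w in ctx if phi(w)}

rains = lambda w: "rain" in w
context = update(context, rains)      # assert "it is raining"
print(len(context))                   # 2 candidate worlds survive the update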

Formal semantics (natural language)


Non-classical logic

Intuitionistic

Intuitionistic logic Constructive analysis Heyting arithmetic Intuitionistic type theory Constructive set theory

Fuzzy

Degree of truth Fuzzy rule Fuzzy set Fuzzy finite element Fuzzy set operations

Substructural

Structural rule Relevance logic Linear logic

Paraconsistent

Dialetheism

Description

Ontology Ontology language

Many-valued

Three-valued Four-valued Łukasiewicz

Digital logic

Three-state logic Tri-state buffer Four-valued Verilog IEEE 1164 VHDL

Others

Dynamic semantics Inquisitive logic Intermediate logic Nonmonotonic logic


https://en.wikipedia.org/wiki/Dynamic_semantics


Ambiguity is a type of meaning in which a phrase, statement or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved according to a rule or process with a finite number of steps. (The ambi- part of the term reflects an idea of "two", as in "two meanings".)

The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with information that is vague, it is difficult to form any interpretation at the desired level of specificity.

https://en.wikipedia.org/wiki/Ambiguity


In logic, Import-Export is a deductive argument form which states that (P → (Q → R)) is equivalent to ((P ∧ Q) → R). In natural language terms, the principle means that the following English sentences are logically equivalent.[1][2][3]

  1. If Mary isn't at home, then if Sally isn't at home, then the house is empty.
  2. If Mary isn't home and Sally isn't home, then the house is empty.

Import-Export holds in classical logic, where the conditional operator → is taken as material implication. However, there are other logics where it does not hold and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication, which are commonly taken to show that natural language conditionals are not material implication.[2][3][4]
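
With the conditional read as material implication, the classical status of Import-Export can be checked mechanically by comparing the two forms on every valuation; the brute-force check below is just such a sanity check.

# Truth-table check that (p -> (q -> r)) and ((p and q) -> r) agree under material implication.
from itertools import product

implies = lambda a, b: (not a) or b

print(all(
    implies(p, implies(q, r)) == implies(p and q, r)
    for p, q, r in product([True, False], repeat=3)
))  # True: the two forms agree on all eight valuations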

This problematic conclusion can be avoided within the framework of dynamic semantics, whose expressive power allows one to define a non-material conditional operator which nonetheless satisfies Import-Export along with the other principles.[3][5] However, other approaches reject Import-Export as a general principle, motivated by cases such as the following, uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In this context, the first sentence is intuitively true but the second is intuitively false.[5][6][7]

  1. If you strike the match and it lights, it will light.
  2. If the match lights, it will light if you strike it.

https://en.wikipedia.org/wiki/Import–export_(logic)

In formal semantics, the scope of a semantic operator is the semantic object to which it applies. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. Scope can be thought of as the semantic order of operations. 

One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.



https://en.wikipedia.org/wiki/Scope_(formal_semantics)


In linguistics and philosophy of language, the conversational scoreboard is a tuple which represents the discourse context at a given point in a conversation. The scoreboard is updated by each speech act performed by one of the interlocutors.[1][2][3][4][5]

Most theories of conversational scorekeeping take one of the scoreboard's elements to be a common ground, which represents the propositional information mutually agreed upon by the interlocutors. When an interlocutor makes a successful assertion, its content is added to the common ground. Once in the common ground, that information can then be presupposed by future utterances. Depending on the particular theory of scorekeeping, additional elements of the scoreboard may include a stack of questions under discussion, a list of discourse referents available for anaphora, among other categories of contextual information.[4][3][5]

The notion of a conversational scoreboard was introduced by David Lewis in his most-cited paper Scorekeeping in a Language Game. In the paper, Lewis draws an analogy between conversation and baseball, where the scoreboard tracks categories of information such as strikes, outs, and runs, thereby defining the current state of the game and thereby determining which future moves are licit.[1][5]


https://en.wikipedia.org/wiki/Conversational_scoreboard


Linguistic determinism is the concept that language and its structures limit and determine human knowledge or thought, as well as thought processes such as categorization, memory, and perception. The term implies that people's native languages will affect their thought process and therefore people will have different thought processes based on their mother tongues.[1]

Linguistic determinism is the strong form of linguistic relativity (popularly known as the Sapir–Whorf hypothesis), which argues that individuals experience the world based on the structure of the language they habitually use.

https://en.wikipedia.org/wiki/Linguistic_determinism


Discourse is a generalization of the notion of a conversation to any form of communication.[1] Discourse is a major topic in social theory, with work spanning fields such as sociology, anthropology, continental philosophy, and discourse analysis. Following pioneering work by Michel Foucault, these fields view discourse as a system of thought, knowledge, or communication that constructs our experience of the world. Since control of discourse amounts to control of how the world is perceived, social theory often studies discourse as a window into power. Within theoretical linguistics, discourse is understood more narrowly as linguistic information exchange and was one of the major motivations for the framework of dynamic semantics, in which expressions' denotations are equated with their ability to update a discourse context.

https://en.wikipedia.org/wiki/Discourse


In semiotics, linguistics, anthropology and philosophy of language, indexicality is the phenomenon of a sign pointing to (or indexing) some object in the context in which it occurs. A sign that signifies indexically is called an index or, in philosophy, an indexical.

The modern concept originates in the semiotic theory of Charles Sanders Peirce, in which indexicality is one of the three fundamental sign modalities by which a sign relates to its referent (the others being iconicity and symbolism).[1] Peirce's concept has been adopted and extended by several twentieth-century academic traditions, including those of linguistic pragmatics,[2]: 55–57 linguistic anthropology,[3] and Anglo-American philosophy of language.[4]

Words and expressions in language often derive some part of their referential meaning from indexicality. For example, I indexically refers to the entity that is speaking; now indexically refers to a time frame including the moment at which the word is spoken; and here indexically refers to a locational frame including the place where the word is spoken. Linguistic expressions that refer indexically are known as deictics, which thus form a particular subclass of indexical signs, though there is some terminological variation among scholarly traditions.

Linguistic signs may also derive nonreferential meaning from indexicality, for example when features of a speaker's register indexically signal their social class. Nonlinguistic signs may also display indexicality: for example, a pointing index finger may index (without referring to) some object in the direction of the line implied by the orientation of the finger, and smoke may index the presence of a fire.

In linguistics and philosophy of language, the study of indexicality tends to focus specifically on deixis, while in semiotics and anthropology equal attention is generally given to nonreferential indexicality, including altogether nonlinguistic indexicality.

https://en.wikipedia.org/wiki/Indexicality


Linguistics is the scientific study of language.[1] It encompasses the analysis of every aspect of language, as well as the methods for studying and modelling them.

The traditional areas of linguistic analysis include phonetics, phonology, morphology, syntax, semantics, and pragmatics.[2] Each of these areas roughly corresponds to phenomena found in human linguistic systems: sounds (and gesture, in the case of signed languages), minimal units (words, morphemes), phrases and sentences, and meaning and use.

Linguistics studies these phenomena in diverse ways and from various perspectives. Theoretical linguistics (including traditional descriptive linguistics) is concerned with building models of these systems, their parts (ontologies), and their combinatorics. Psycholinguistics builds theories of the processing and production of all these phenomena. These phenomena may be studied synchronically or diachronically (through history), in monolinguals or polyglots, in children or adults, as they are acquired or statically, as abstract objects or as embodied cognitive structures, using texts (corpora) or through experimental elicitation, by gathering data mechanically, through fieldwork, or through introspective judgment tasks. Computational linguistics implements theoretical constructs to parse or produce natural language or homologues. Neurolinguistics investigates linguistic phenomena by experiments on actual brain responses involving linguistic stimuli.

Linguistics is related to philosophy of language, stylistics and rhetoric, semiotics, lexicography, and translation.

https://en.wikipedia.org/wiki/Linguistics


Corpus linguistics is the study of a language as that language is expressed in its text corpus (plural corpora), its body of "real world" text. Corpus linguistics proposes that a reliable analysis of a language is more feasible with corpora collected in the field—the natural context ("realia") of that language—with minimal experimental interference.

The text-corpus method uses the body of texts written in any natural language to derive the set of abstract rules which govern that language. Those results can be used to explore the relationships between that subject language and other languages which have undergone a similar analysis. The first such corpora were manually derived from source texts, but now that work is automated.

Corpora have not only been used for linguistics research, they have also been used to compile dictionaries (starting with The American Heritage Dictionary of the English Language in 1969) and grammar guides, such as A Comprehensive Grammar of the English Language, published in 1985.

Experts in the field have differing views about the annotation of a corpus. These views range from John McHardy Sinclair, who advocates minimal annotation so texts speak for themselves,[1] to the Survey of English Usage team (University College, London), who advocate annotation as allowing greater linguistic understanding through rigorous recording.[2]

https://en.wikipedia.org/wiki/Corpus_linguistics


Phonology is a branch of linguistics that studies how languages or dialects systematically organize their sounds (or signs, in sign languages). The term also refers to the sound system of any particular language variety. At one time, the study of phonology only related to the study of the systems of phonemes in spoken languages. Now it may relate to

(a) any linguistic analysis either at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.), or
(b) all levels of language where sound or signs are structured to convey linguistic meaning.[1]

Sign languages have a phonological system equivalent to the system of sounds in spoken languages. The building blocks of signs are specifications for movement, location and handshape.[2]

https://en.wikipedia.org/wiki/Phonology


In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning.[1] Specific topics include scope,[2][3] binding,[2] and lexical semantic properties such as verbal aspect and nominal individuation,[4][5][6][7][8] semantic macroroles,[8] and unaccusativity.[4]

The interface is conceived of very differently in formalist and functionalist approaches. While functionalists tend to look into semantics and pragmatics for explanations of syntactic phenomena, formalists try to limit such explanations within syntax itself.[9] It is sometimes referred to as the morphosyntax–semantics interface or the syntax-lexical semantics interface.[3]

https://en.wikipedia.org/wiki/Syntax–semantics_interface


Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign.[1] Phoneticians—linguists who specialize in phonetics—study the physical properties of speech. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved such as how humans plan and execute movements to produce speech (articulatory phonetics), how different movements affect the properties of the resulting sound (acoustic phonetics), or how humans convert sound waves to linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone—a speech sound in a language—which differs from the phonological unit of phoneme; the phoneme is an abstract categorization of phones.

Phonetics broadly deals with two aspects of human speech: production—the ways humans make sounds—and perception—the way speech is understood. The communicative modality of a language describes the method by which a language produces and perceives languages. Languages with oral-aural modalities such as English produce speech orally (using the mouth) and perceive speech aurally (using the ears). Sign languages, such as Auslan and ASL, have a manual-visual modality, producing speech manually (using the hands) and perceiving speech visually (using the eyes). ASL and some other sign languages have in addition a manual-manual dialect for use in tactile signing by deafblind speakers where signs are produced with the hands and perceived with the hands as well.

Language production consists of several interdependent processes which transform a non-linguistic message into a spoken or signed linguistic signal. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection. During phonological encoding, the mental representation of the words are assigned their phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular goals such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced.

These movements disrupt and modify an airstream which results in a sound wave. The modification is done by the articulators, with different places and manners of articulation producing different acoustic results. For example, the words tack and sack both begin with alveolar sounds in English, but differ in how far the tongue is from the alveolar ridge. This difference has large effects on the air stream and thus the sound that is produced. Similarly, the direction and source of the airstream can affect the sound. The most common airstream mechanism is pulmonic—using the lungs—but the glottis and tongue can also be used to produce airstreams.

Language perception is the process by which a linguistic signal is decoded and understood by a listener. In order to perceive speech the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words. In order to correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories. While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable.

Modern phonetics has three main branches: articulatory phonetics, acoustic phonetics, and auditory phonetics.

Lexical access

According to the lexical access model, two different stages of cognition are employed; thus, this concept is known as the two-stage theory of lexical access. The first stage, lexical selection, provides information about lexical items required to construct the functional level representation. These items are retrieved according to their specific semantic and syntactic properties, but phonological forms are not yet made available at this stage. The second stage, retrieval of wordforms, provides information required for building the positional level representation.[49]


https://en.wikipedia.org/wiki/Phonetics


An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.

Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision,[1] geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning,[2] nondestructive testing, slope stability analysis[3] and many other fields.

https://en.wikipedia.org/wiki/Inverse_problem


In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of sets of which every boy is a member: {X : every boy is an element of X}.

This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing quantifiers.[1][2]
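
The idea that "every boy" denotes a set of sets can be made concrete with finite sets: the quantifier holds of exactly those sets that contain every boy. The domain and names below are invented for illustration.

# Sketch: "every boy" as a generalized quantifier over a finite domain.
boys    = {"Al", "Ben"}
smokers = {"Al", "Ben", "Cara"}
runners = {"Al", "Cara"}

def every_boy(prop_set):
    # holds of exactly those sets that have every boy as a member
    return boys <= prop_set

print(every_boy(smokers))  # True:  "every boy smokes"
print(every_boy(runners))  # False: "every boy runs"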

https://en.wikipedia.org/wiki/Generalized_quantifier



In generative grammar and related approaches, the logical form (LF) of a linguistic expression is the variant of its syntactic structure which undergoes semantic interpretation. It is distinguished from phonetic form, the structure which corresponds to a sentence's pronunciation. These separate representations are postulated in order to explain the ways in which an expression's meaning can be partially independent of its pronunciation, e.g. scope ambiguities.

LF is the cornerstone of the classic generative view of the syntax-semantics interface. However, it is not used in Lexical Functional Grammar and Head-Driven Phrase Structure Grammar, as well as some modern variants of the generative approach.

https://en.wikipedia.org/wiki/Logical_form_(linguistics)


In formal semantics, a type shifter is an interpretation rule which changes an expression's semantic type. For instance, while the English expression "John" might ordinarily denote John himself, a type shifting rule called Lift can raise its denotation to a function which takes a property and returns "true" if John himself has that property. Lift can be seen as mapping an individual onto the principal ultrafilter which it generates.[1][2][3]

  1. Without type shifting: [[John]] = j
  2. Type shifting with Lift: [[John]] = λP. P(j)
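
The Lift rule just illustrated maps an individual onto the function that takes a property P and returns P applied to that individual. A minimal sketch of that mapping, with an invented toy model:

# Sketch: the Lift type shifter, raising an individual to a function on properties.
smokers = {"John"}

john   = "John"                       # basic denotation: the individual himself
smokes = lambda x: x in smokers       # a property: individual -> truth value

def lift(individual):
    # Lift(a) = the function taking a property P and returning P(a)
    return lambda prop: prop(individual)

lifted_john = lift(john)
print(lifted_john(smokes))            # True: John has the property of smoking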

Type shifters were proposed by Barbara Partee and Mats Rooth in 1983 to allow for systematic type ambiguity. Work of this period assumed that syntactic categories corresponded directly with semantic types and researchers thus had to "generalize to the worst case" when particular uses of particular expressions from a given category required an especially high type. Moreover, Partee argued that evidence in fact supported expressions having different types in different contexts. Thus, she and Rooth proposed type shifting as a principled mechanism for generating this ambiguity.[1][2][3]

Type shifters remain a standard tool in formal semantic work, particularly in categorial grammar and related frameworks. Type shifters have also been used to interpret quantifiers in object position and to capture scope ambiguities. In this regard, they serve as an alternative to syntactic operations such as quantifier raising used in mainstream generative approaches to semantics.[4][5] Type shifters have also been used to generate and compose alternative sets without the need to fully adopt an alternative-based semantics.[6][7]

https://en.wikipedia.org/wiki/Type_shifter


Alternative semantics (or Hamblin semantics) is a framework in formal semantics and logic. In alternative semantics, expressions denote alternative sets, understood as sets of objects of the same semantic type. For instance, while the word "Lena" might denote Lena herself in a classical semantics, it would denote the singleton set containing Lena in alternative semantics. The framework was introduced by Charles Leonard Hamblin in 1973 as a way of extending Montague grammar to provide an analysis for questions. In this framework, a question denotes the set of its possible answers. Thus, if p and q are propositions, then {p, q} is the denotation of the question whether p or q is true. Since the 1970s, it has been extended and adapted to analyze phenomena including focus, scope, disjunction, NPIs, presupposition, and implicature.[1][2]
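
A rough way to picture the framework (my own encoding, not Hamblin's formalism): ordinary denotations become singleton alternative sets, and composition applies every function alternative to every argument alternative pointwise. In a fuller fragment the alternatives produced for a question would be propositions; here they are just truth values.

# Sketch: pointwise composition of alternative sets (toy model, invented data).
smokers = {"Lena"}

lena_alts   = {"Lena"}                      # singleton alternative set for "Lena"
who_alts    = {"Lena", "Maria"}             # "who": a set of individual alternatives
smokes_alts = {lambda x: x in smokers}      # singleton set containing the property

def apply_pointwise(fn_alts, arg_alts):
    # apply every function alternative to every argument alternative
    return {f(a) for f in fn_alts for a in arg_alts}

print(apply_pointwise(smokes_alts, lena_alts))   # {True}
print(apply_pointwise(smokes_alts, who_alts))    # {True, False}: one alternative per individual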

https://en.wikipedia.org/wiki/Alternative_semantics

https://en.wikipedia.org/wiki/Squiggle_operator


De dicto and de re are two phrases used to mark a distinction in intensional statements, associated with the intensional operators in many such statements. The distinction is used regularly in metaphysics and in philosophy of language.[1]

The literal translation of the phrase "de dicto" is "about what is said",[2] whereas de re translates as "about the thing".[3] The original meaning of the Latin locutions may help to elucidate the living meaning of the phrases, in the distinctions they mark. The distinction can be understood by examples of intensional contexts of which three are considered here: a context of thought, a context of desire, and a context of modality.

https://en.wikipedia.org/wiki/De_dicto_and_de_re


In propositional logic, modus ponens (/ˈmoʊdəs ˈpoʊnɛnz/; MP), also known as modus ponendo ponens (Latin for "method of putting by placing")[1] or implication elimination or affirming the antecedent,[2] is a deductive argument form and rule of inference.[3] It can be summarized as "P implies Q. P is true. Therefore Q must also be true."

Modus ponens is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms, such as affirming the consequent, denying the antecedent, and evidence of absence. Constructive dilemma is the disjunctive version of modus ponens. Hypothetical syllogism is closely related to modus ponens and is sometimes thought of as "double modus ponens."

The history of modus ponens goes back to antiquity.[4] The first to explicitly describe the argument form modus ponens was Theophrastus.[5] It, along with modus tollens, is one of the standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal.

https://en.wikipedia.org/wiki/Modus_ponens


Propositional calculus is a branch of logic. It is also called propositional logicstatement logicsentential calculussentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.

Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.

https://en.wikipedia.org/wiki/Propositional_calculus


Deductive reasoning, also deductive logic, is the process of reasoning from one or more statements (premises) to reach a logical conclusion.[1]

Deductive reasoning goes in the same direction as that of the conditionals, and links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true.

Deductive reasoning ("top-down logic") contrasts with inductive reasoning ("bottom-up logic"): in deductive reasoning, a conclusion is reached reductively by applying general rules which hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion(s) remains. In deductive reasoning there is no uncertainty.[2] In inductive reasoning, the conclusion is reached by generalizing or extrapolating from specific cases to general rules resulting in a conclusion that has epistemic uncertainty.[2]

Inductive reasoning is not the same as induction used in mathematical proofs – mathematical induction is actually a form of deductive reasoning.

Deductive reasoning differs from abductive reasoning by the direction of the reasoning relative to the conditionals. The idea of "deduction" popularized in Sherlock Holmes stories is technically abduction, rather than deductive reasoning. Deductive reasoning goes in the same direction as that of the conditionals, whereas abductive reasoning goes in the direction contrary to that of the conditionals.

https://en.wikipedia.org/wiki/Deductive_reasoning


Inductive reasoning is a method of reasoning in which a body of observations is synthesized to come up with a general principle.[1] Inductive reasoning is distinct from deductive reasoning. If the premises are correct, the conclusion of a deductive argument is certain; in contrast, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.[2]

https://en.wikipedia.org/wiki/Inductive_reasoning


In propositional logic, modus tollens (/ˈmoʊdəs ˈtɒlɛnz/) (MT), also known as modus tollendo tollens (Latin for "method of removing by taking away")[1] and denying the consequent,[2] is a deductive argument form and a rule of inference. Modus tollens takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument.

The history of the inference rule modus tollens goes back to antiquity.[3] The first to explicitly describe the argument form modus tollens was Theophrastus.[4]

Modus tollens is closely related to modus ponens. There are two similar, but invalid, forms of argument: affirming the consequent and denying the antecedent. See also contraposition and proof by contrapositive.

https://en.wikipedia.org/wiki/Modus_tollens


In the philosophy of logic, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference called modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.
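
Read literally, the description above is already almost code: modus ponens is a partial function from a pair of premises to a conclusion, defined only when the premises have the right syntactic shape. A minimal sketch, with formulas represented as nested tuples (a representation invented here):

# Sketch: modus ponens as a syntactic rule of inference on formulas.
# A conditional "if p then q" is represented as the tuple ("->", p, q).
def modus_ponens(conditional, antecedent):
    # return the conclusion, or None if the premises do not match the rule's form
    if isinstance(conditional, tuple) and len(conditional) == 3 and conditional[0] == "->":
        _, p, q = conditional
        if p == antecedent:
            return q
    return None

print(modus_ponens(("->", "p", "q"), "p"))   # q
print(modus_ponens(("->", "p", "q"), "r"))   # None: the rule does not apply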

Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general designation. But a rule of inference's action is purely syntactic, and does not need to preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive are important; i.e. rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.[1]

Popular rules of inference in propositional logic include modus ponensmodus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers.

https://en.wikipedia.org/wiki/Rule_of_inference


In logic, the logical form of a statement is a precisely-specified semantic version of that statement in a formal system. Informally, the logical form attempts to formalize a possibly ambiguous statement into a statement with a precise, unambiguous logical interpretation with respect to a formal system. In an ideal formal language, the meaning of a logical form can be determined unambiguously from syntax alone. Logical forms are semantic, not syntactic constructs; therefore, there may be more than one string that represents the same logical form in a given language.[1]

The logical form of an argument is called the argument form of the argument.

Importance of argument form

Attention is given to argument and sentence form, because form is what makes an argument valid or cogent. All logical form arguments are either inductive or deductive. Inductive logical forms include inductive generalization, statistical arguments, causal arguments, and arguments from analogy. Common deductive argument forms are hypothetical syllogism, categorical syllogism, argument by definition, and argument based on mathematics. The most reliable forms of logic are modus ponens, modus tollens, and chain arguments because if the premises of the argument are true, then the conclusion necessarily follows.[5] Two invalid argument forms are affirming the consequent and denying the antecedent:

Affirming the consequent
All dogs are animals.
Coco is an animal.
Therefore, Coco is a dog.
Denying the antecedent
All cats are animals.
Missy is not a cat.
Therefore, Missy is not an animal.

A logical argument, seen as an ordered set of sentences, has a logical form that derives from the form of its constituent sentences; the logical form of an argument is sometimes called argument form.[6] Some authors only define logical form with respect to whole arguments, as the schemata or inferential structure of the argument.[7] In argumentation theory or informal logic, an argument form is sometimes seen as a broader notion than the logical form.[8]

It consists of stripping out all spurious grammatical features from the sentence (such as gender, and passive forms), and replacing all the expressions specific to the subject matter of the argument by schematic variables. Thus, for example, the expression "all A's are B's" shows the logical form which is common to the sentences "all men are mortals," "all cats are carnivores," "all Greeks are philosophers," and so on.

https://en.wikipedia.org/wiki/Logical_form


The word schema comes from the Greek word σχῆμα (skhēma), which means shape, or more generally, plan. The plural is σχήματα (skhēmata). In English, both schemas and schemata are used as plural forms.

https://en.wikipedia.org/wiki/Schema


In psychology and cognitive science, a schema (plural schemata or schemas) describes a pattern of thought or behavior that organizes categories of information and the relationships among them.[1][2] It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information.[3] Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information.[4] Schemata can help in understanding the world and the rapidly changing environment.[5] People can organize new perceptions into schemata quickly as most situations do not require complex thought when using schema, since automatic thought is all that is required.[5]

People use schemata to organize current knowledge and provide a framework for future understanding. Examples of schemata include academic rubricssocial schemasstereotypessocial rolesscriptsworldviews, and archetypes. In Piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world.[6]

https://en.wikipedia.org/wiki/Schema_(psychology)


In mathematical logic, an axiom schema (plural: axiom schemata or axiom schemas) generalizes the notion of axiom.

https://en.wikipedia.org/wiki/Axiom_schema


In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems.[1] An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication.[2] A formal proof is a complete rendition of a mathematical proof within a formal system.

Properties

An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms. Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion).

In an axiomatic system, an axiom is called independent if it is not a theorem that can be derived from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system.

An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms (equivalently, every statement is capable of being proven true or false).[3]

https://en.wikipedia.org/wiki/Axiomatic_system


In mathematical logic, independence is the unprovability of a sentence from other sentences.

A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T; this is not the same meaning of "decidability" as in a decision problem.

A theory T is independent if each axiom in T is not provable from the remaining axioms in T. A theory for which there is an independent set of axioms is independently axiomatizable.

https://en.wikipedia.org/wiki/Independence_(mathematical_logic)


In classical logic, intuitionistic logic and similar logical systems, the principle of explosion (Latin: ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'; or ex contradictione [sequitur] quodlibet, 'from contradiction, anything [follows]'), or the principle of Pseudo-Scotus, is the law according to which any statement can be proven from a contradiction.[1] That is, once a contradiction has been asserted, any proposition (including their negations) can be inferred from it; this is known as deductive explosion.[2][3]

The proof of this principle was first given by 12th-century French philosopher William of Soissons.[4] Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity.[5] Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory.

As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument:

  1. We know that "Not all lemons are yellow", as it has been assumed to be true.
  2. We know that "All lemons are yellow", as it has been assumed to be true.
  3. Therefore, the two-part statement "All lemons are yellow OR unicorns exist" must also be true, since the first part "All lemons are yellow" of the two-part statement is true (as this has been assumed).
  4. However, since we know that "Not all lemons are yellow" (as this has been assumed), the first part is false, and hence the second part must be true to ensure the two-part statement to be true, i.e., unicorns exist.
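
The argument spelled out above can also be checked semantically: in classical two-valued logic, (p and not p) -> q is true under every valuation, so a contradiction entails any statement whatsoever. The brute-force check below is only a sanity check of that classical fact.

# Truth-table check that (p and not p) -> q is a classical tautology.
from itertools import product

implies = lambda a, b: (not a) or b

print(all(implies(p and (not p), q)
          for p, q in product([True, False], repeat=2)))  # True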

In a different solution to these problems, a few mathematicians have devised alternate theories of logic called paraconsistent logics, which eliminate the principle of explosion.[5] These allow some contradictory statements to be proven without affecting other proofs.

https://en.wikipedia.org/wiki/Principle_of_explosion


Symbolic logic

Leopold Löwenheim[26] and Thoralf Skolem[27] obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox.

In his doctoral thesis, Kurt Gödel proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic.[28] Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians.

In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time.[a]

Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction.[29] Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types.[30]

The first textbook on symbolic logic for the layman was written by Lewis Carroll, author of Alice in Wonderland, in 1896.[31]

Mathematical logic is the study of logic within mathematics. Major subareas include model theoryproof theoryset theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics.

Since its inception, mathematical logic has both contributed to, and has been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometryarithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt GödelGerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed.

https://en.wikipedia.org/wiki/Mathematical_logic#Symbolic_logic


In mathematical logicNew Foundations (NF) is an axiomatic set theory, conceived by Willard Van Orman Quine as a simplification of the theory of types of Principia Mathematica. Quine first proposed NF in a 1937 article titled "New Foundations for Mathematical Logic"; hence the name. Much of this entry discusses NFU, an important variant of NF due to Jensen (1969) and exposited in Holmes (1998).[1] In 1940 and in a revision of 1951, Quine introduced an extension of NF sometimes called "Mathematical Logic" or "ML", that included proper classes as well as sets.

New Foundations has a universal set, so it is a non-well-founded set theory.[2] That is to say, it is an axiomatic set theory that allows infinite descending chains of membership such as … xn ∈ xn-1 ∈ … ∈ x2 ∈ x1. It avoids Russell's paradox by permitting only stratifiable formulas to be defined using the axiom schema of comprehension. For instance x ∈ y is a stratifiable formula, but x ∈ x is not.

https://en.wikipedia.org/wiki/New_Foundations


In axiomatic set theory and the branches of logicmathematics, and computer science that use it, the axiom of extensionality, or axiom of extension, is one of the axioms of Zermelo–Fraenkel set theory. It says that sets having the same elements are the same set.

https://en.wikipedia.org/wiki/Axiom_of_extensionality


In set theory with ur-elements

An ur-element is a member of a set that is not itself a set. In the Zermelo–Fraenkel axioms, there are no ur-elements, but they are included in some alternative axiomatisations of set theory. Ur-elements can be treated as a different logical type from sets; in this case, x ∈ y makes no sense if y is an ur-element, so the axiom of extensionality simply applies only to sets.

Alternatively, in untyped logic, we can require x ∈ A to be false whenever A is an ur-element. In this case, the usual axiom of extensionality would then imply that every ur-element is equal to the empty set. To avoid this consequence, we can modify the axiom of extensionality to apply only to nonempty sets, so that it reads:

∀A ∀B (∃X (X ∈ A) → [∀Y (Y ∈ A ⇔ Y ∈ B) → A = B])

That is:

Given any set A and any set Bif A is a nonempty set (that is, if there exists a member X of A), then if A and B have precisely the same members, then they are equal.

Yet another alternative in untyped logic is to define A itself to be the only element of A whenever A is an ur-element. While this approach can serve to preserve the axiom of extensionality, the axiom of regularity will need an adjustment instead.

https://en.wikipedia.org/wiki/Axiom_of_extensionality#In_set_theory_with_ur-elements


In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads:

∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅))

The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (an) such that ai+1 is an element of ai for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains.

The axiom was introduced by von Neumann (1925); it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo (1930). Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity; see chapter 3 of Kunen (1980). However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures such as the lexicographical ordering on 

Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent.

In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves.

https://en.wikipedia.org/wiki/Axiom_of_regularity


In mathematics, ∈-induction (epsilon-induction or set-induction) is a variant of transfinite induction.

Considered as an alternative set theory axiom schema, it is called the Axiom (schema) of (set) induction.

It can be used in set theory to prove that all sets satisfy a given property P(x). This is a special case of well-founded induction.

https://en.wikipedia.org/wiki/Epsilon-induction


Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers.

https://en.wikipedia.org/wiki/Transfinite_induction


Induction and recursion[edit]

An important reason that well-founded relations are interesting is that a version of transfinite induction can be used on them: if (X, R) is a well-founded relation, P(x) is some property of elements of X, and we want to show that

P(x) holds for all elements x of X,

it suffices to show that:

If x is an element of X and P(y) is true for all y such that y R x, then P(x) must also be true.

That is,

(∀x ∈ X) [ (∀y ∈ X) (y R x → P(y)) → P(x) ]   implies   (∀x ∈ X) P(x).

Well-founded induction is sometimes called Noetherian induction,[3] after Emmy Noether.

On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let (X, R) be a set-like well-founded relation and F a function that assigns an object F(x, g) to each pair of an element x ∈ X and a function g on the initial segment {y : y R x} of X. Then there is a unique function G such that for every x ∈ X,

G(x) = F(x, G restricted to {y : y R x}).

That is, if we want to construct a function G on X, we may define G(x) using the values of G(y) for y R x.

As an example, consider the well-founded relation (N, S), where N is the set of all natural numbers, and S is the graph of the successor function x ↦ x+1. Then induction on S is the usual mathematical induction, and recursion on S gives primitive recursion. If we consider the order relation (N, <), we obtain complete induction and course-of-values recursion. The statement that (N, <) is well-founded is also known as the well-ordering principle.
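
A minimal Python sketch of recursion on the successor relation just described (illustrative code, not from the cited article): a value at 0 together with a step rule determines a unique function on the natural numbers, which is exactly primitive recursion.

def primitive_rec(base, step):
    """Define G by well-founded recursion on the successor relation:
    G(0) = base and G(n + 1) = step(n, G(n))."""
    def G(n):
        acc = base
        for k in range(n):  # unfold the recursion from 0 up to n
            acc = step(k, acc)
        return acc
    return G

# Example: factorial defined by primitive recursion.
factorial = primitive_rec(1, lambda k, prev: (k + 1) * prev)
print(factorial(5))  # 120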

There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the technique is called structural induction. When the well-founded relation is set membership on the universal class, the technique is known as ∈-induction. See those articles for more details.


https://en.wikipedia.org/wiki/Well-founded_relation#Induction_and_recursion


Mathematical induction is a mathematical proof technique. It is essentially used to prove that a statement P(n) holds for every natural number n = 0, 1, 2, 3, . . . ; that is, the overall statement is a sequence of infinitely many cases P(0), P(1), P(2), P(3), . . . . Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder:

Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).

— Concrete Mathematics, page 3 margins.

A proof by induction consists of two cases. The first, the base case (or basis), proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n.[3] The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N.
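
As the standard textbook illustration of these two steps (an example added here, not part of the excerpt), consider the claim that the first n positive integers sum to n(n+1)/2:

P(n):\quad 1 + 2 + \cdots + n = \tfrac{n(n+1)}{2}
\text{Base case } (n = 1):\quad 1 = \tfrac{1 \cdot 2}{2}
\text{Induction step: assuming } P(k),\quad 1 + \cdots + k + (k+1) = \tfrac{k(k+1)}{2} + (k+1) = \tfrac{(k+1)(k+2)}{2},\ \text{which is } P(k+1)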

The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.[4]

Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values.[5]

https://en.wikipedia.org/wiki/Mathematical_induction


Variants[edit]

In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven. All variants of induction are special cases of transfinite induction; see below.

Induction basis other than 0 or 1[edit]

If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following:

  1. Showing that the statement holds when n = b.
  2. Showing that if the statement holds for an arbitrary number n ≥ b, then the same statement also holds for n + 1.

This can be used, for example, to show that 2^n ≥ n + 5 for n ≥ 3.

In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5. This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n), then proving it with these two rules is equivalent to proving P(n + b) for all natural numbers n with an induction base case 0.[17]

Example: forming dollar amounts by coins[edit]

Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows:

Base case: Showing that S(k) holds for k = 12 is simple: take three 4-dollar coins.

Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too: 

Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true.

Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete.

In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars with any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m.
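
The claim can also be checked mechanically for small amounts; a brute-force Python sketch (illustrative only, not part of the proof):

def representable(limit):
    """Amounts from 0 to limit that can be formed from 4- and 5-dollar coins."""
    ok = {0}
    for amount in range(1, limit + 1):
        if (amount - 4) in ok or (amount - 5) in ok:
            ok.add(amount)
    return ok

print(sorted(a for a in representable(30) if a > 0))
# [4, 5, 8, 9, 10, 12, 13, 14, ...]: 11 is missing, and every amount >= 12 appears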

Induction on more than one counter[edit]

It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an inductive step for n, and in each of those proves a base case and an inductive step for m. See, for example, the proof of commutativity in the article on addition of natural numbers. More complicated arguments involving three or more counters are also possible.

Infinite descent[edit]

The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n.

The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.


https://en.wikipedia.org/wiki/Mathematical_induction#Sum_of_consecutive_natural_numbers


In mathematics, logic, philosophy, and formal systems, a primitive notion is a concept that is not defined in terms of previously defined concepts. It is often motivated informally, usually by an appeal to intuition and everyday experience. In an axiomatic theory, relations between primitive notions are restricted by axioms.[1] Some authors refer to the latter as "defining" primitive notions by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress (per the regress problem).

For example, in contemporary geometry, point, line, and contains are some primitive notions. Instead of attempting to define them,[2] their interplay is ruled (in Hilbert's axiom system) by axioms like "For every two points there exists a line that contains them both".[3]

Examples[edit]

The necessity for primitive notions is illustrated in several axiomatic foundations in mathematics:

https://en.wikipedia.org/wiki/Primitive_notion


A notion in logic and philosophy is a reflection in the mind of real objects and phenomena in their essential features and relations. Notions are usually described in terms of scope (sphere) and content. This is because notions are often created in response to empirical observations (or experiments) of covarying trends among variables.

Notion is the common translation for Begriff as used by Georg Wilhelm Friedrich Hegel in his Science of Logic (1812–16).

https://en.wikipedia.org/wiki/Notion_(philosophy)




Foundations of mathematics is the study of the philosophical and logical[1] and/or algorithmic basis of mathematics, or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics.[2] In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be quite vague. Foundations of mathematics can be conceived as the study of the basic mathematical concepts (set, function, geometrical figure, number, etc.) and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.), also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

The foundations of mathematics as a whole does not aim to contain the foundations of every mathematical topic. Generally, the foundations of a field of study refers to a more-or-less systematic analysis of its most basic or fundamental concepts, its conceptual unity and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence, and clarification of the foundations can come late in the history of a field, and might not be viewed by everyone as its most interesting part.

Mathematics always played a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and giving tools or even a foundation for other sciences (especially physics). Mathematics' many developments towards higher abstractions in the 19th century brought new challenges and paradoxes, urging for a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole.

The systematic search for the foundations of mathematics started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which later had strong links to theoretical computer science. It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, etc.), whose detailed properties and possible variants are still an active research field. Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences.

https://en.wikipedia.org/wiki/Foundations_of_mathematics


Object theory is a theory in mathematical logic concerning objects and the statements that can be made about objects.

In some cases "objects" can be concretely thought of as symbols and strings of symbols, here illustrated by a string of four symbols " ←←↑↓←→←↓" as composed from the 4-symbol alphabet { ←, ↑, →, ↓ } . When they are "known only through the relationships of the system [in which they appear], the system is [said to be] abstract ... what the objects are, in any respect other than how they fit into the structure, is left unspecified." (Kleene 1952:25) A further specification of the objects results in a model or representation of the abstract system, "i.e. a system of objects which satisfy the relationships of the abstract system and have some further status as well" (ibid).

A system, in its general sense, is a collection of objects O = {o_1, o_2, ..., o_n, ...} and (a specification of) the relationship r or relationships r_1, r_2, ..., r_n between the objects.

Example: Given a simple system { { ←, ↑, →, ↓ }, => } with a very simple relationship between the objects, signified by the symbol =>:[1]
→ => ↑, ↑ => ←, ← => ↓, ↓ => →

A model of this system would occur when we assign, for example, the familiar natural numbers { 0, 1, 2, 3 } to the symbols { ←, ↑, →, ↓ }, i.e. in this manner: → = 0, ↑ = 1, ← = 2, ↓ = 3. Here, the symbol => indicates the "successor function" (often written as an apostrophe ' to distinguish it from +) operating on a collection of only 4 objects, thus 0' = 1, 1' = 2, 2' = 3, 3' = 0.

Or, we might specify that => represents 90-degree counter-clockwise rotations of a simple object →.
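
A tiny Python sketch of the abstract system and the two models just described (illustrative code, not from the cited article):

# The abstract system: four objects and the relationship written "=>" above.
nxt = {"→": "↑", "↑": "←", "←": "↓", "↓": "→"}

# Model 1: read the symbols as 0, 1, 2, 3 and "=>" as successor modulo 4.
value = {"→": 0, "↑": 1, "←": 2, "↓": 3}
for s, t in nxt.items():
    assert (value[s] + 1) % 4 == value[t]  # the assignment satisfies the relationship

# Model 2: read "=>" as a 90-degree counter-clockwise rotation of the arrow.
print(nxt["→"])  # ↑ : the right arrow, rotated counter-clockwise, points up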

https://en.wikipedia.org/wiki/Object_theory


In logic, anti-psychologism (also logical objectivism[1] or logical realism[2][3]) is a theory about the nature of logical truth, that it does not depend upon the contents of human ideas but exists independent of human ideas.

https://en.wikipedia.org/wiki/Anti-psychologism


Logical truth is one of the most fundamental concepts in logic. Broadly speaking, a logical truth is a statement which is true regardless of the truth or falsity of its constituent propositions. In other words, a logical truth is a statement which is not only true, but one which is true under all interpretations of its logical components (other than its logical constants). Thus, logical truths such as "if p, then p" can be considered tautologies. Logical truths are thought to be the simplest case of statements which are analytically true (or, in other words, true by definition). All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence.[1]

Logical truths are generally considered to be necessarily true. This is to say that they are such that no situation could arise in which they could fail to be true. The view that logical statements are necessarily true is sometimes treated as equivalent to saying that logical truths are true in all possible worlds. However, the question of whether any statements are necessarily true remains the subject of continued debate.

Treating logical truths, analytic truths, and necessary truths as equivalent, logical truths can be contrasted with facts (which can also be called contingent claims or synthetic claims). Contingent truths are true in this world, but could have turned out otherwise (in other words, they are false in at least one possible world). Logically true propositions such as "If p and q, then p" and "All married people are married" are logical truths because they are true due to their internal structure and not because of any facts of the world (whereas "All married people are happy", even if it were true, could not be true solely in virtue of its logical structure).

Rationalist philosophers have suggested that the existence of logical truths cannot be explained by empiricism, because they hold that it is impossible to account for our knowledge of logical truths on empiricist grounds. Empiricists commonly respond to this objection by arguing that logical truths (which they usually deem to be mere tautologies) are analytic and thus do not purport to describe the world. The latter view was notably defended by the logical positivists in the early 20th century.

https://en.wikipedia.org/wiki/Logical_truth


"Two Dogmas of Empiricism" is a paper by analytic philosopher Willard Van Orman Quine published in 1951. According to University of Sydney professor of philosophy Peter Godfrey-Smith, this "paper [is] sometimes regarded as the most important in all of twentieth-century philosophy".[1] The paper is an attack on two central aspects of the logical positivists' philosophy: the first being the analytic–synthetic distinction between analytic truths and synthetic truths, explained by Quine as truths grounded only in meanings and independent of facts, and truths grounded in facts; the other being reductionism, the theory that each meaningful statement gets its meaning from some logical construction of terms that refer exclusively to immediate experience.

"Two Dogmas" has six sections. The first four focus on analyticity, the last two on reductionism. There, Quine turns the focus to the logical positivists' theory of meaning. He also presents his own holistic theory of meaning.

https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism


The literal translation of the Latin "salva veritate" is "with (or by) unharmed truth", using ablative of manner: "salva" meaning "rescue," "salvation," or "welfare," and "veritate" meaning "reality" or "truth". Thus, Salva veritate (or intersubstitutivity) is the logical condition by which two expressions may be interchanged without altering the truth-value of statements in which the expressions occur. Substitution salva veritate of co-extensional terms can fail in opaque contexts.[1]

See also[edit]

https://en.wikipedia.org/wiki/Salva_veritate


A propositional attitude is a mental state held by an agent toward a proposition.

Linguistically, propositional attitudes are denoted by a verb (e.g. "believed") governing an embedded "that" clause, for example, 'Sally believed that she had won'.

Propositional attitudes are often assumed to be the fundamental units of thought and their contents, being propositions, are true or false from the perspective of the person. An agent can have different propositional attitudes toward the same proposition (e.g., "S believes that her ice-cream is cold," and "S fears that her ice-cream is cold").

Propositional attitudes have directions of fit: some are meant to reflect the world, others to influence it.

One topic of central concern is the relation between the modalities of assertion and belief, perhaps with intention thrown in for good measure. For example, we frequently find ourselves faced with the question of whether or not a person's assertions conform to his or her beliefs. Discrepancies here can occur for many reasons, but when the departure of assertion from belief is intentional, we usually call that a lie.

Other comparisons of multiple modalities that frequently arise are the relationships between belief and knowledge and the discrepancies that occur among observations, expectations, and intentions. Deviations of observations from expectations are commonly perceived as surprises, phenomena that call for explanations to reduce the shock of amazement.

https://en.wikipedia.org/wiki/Propositional_attitude


An opaque context or referentially opaque context is a linguistic context in which it is not always possible to substitute "co-referential" expressions (expressions referring to the same object) without altering the truth of sentences.[1] The expressions involved are usually grammatically singular terms. So, substitution of co-referential expressions into an opaque context does not always preserve truth. For example, "Lois believes x is a hero" is an opaque context because "Lois believes Superman is a hero" is true while "Lois believes Clark Kent is a hero" is false, even though 'Superman' and 'Clark Kent' are co-referential expressions.

Usage[edit]

The term is used in philosophical theories of reference, and is to be contrasted with referentially transparent context. In rough outline:

  • Opacity: "Mary believes that Cicero is a great orator" gives rise to an opaque context; although Cicero was also called 'Tully',[2] we can't simply substitute 'Tully' for 'Cicero' in this context ("Mary believes that Tully is a great orator") and guarantee the same truth value, for Mary might not know that the names 'Tully' and 'Cicero' refer to one and the same thing. Of course, if Mary does believe that Cicero is a great orator, then there is a sense in which Mary believes that Tully is a great orator, even if she does not know that 'Tully' and 'Cicero' corefer. It is the sense forced on us by "direct reference" theories of proper names, i.e. those that maintain that the meaning of a proper name just is its referent.
  • Transparency: "Cicero was a Roman orator" gives rise to a transparent context; there is no problem substituting 'Tully' for 'Cicero' here: "Tully was a Roman orator". Both sentences necessarily express the same thing if 'Cicero' and 'Tully' refer to the same person. Note that this element is missing in the opaque contexts, where a shift in the name can result in a sentence that expresses something different from the original.

Similar usage of the term applies for artificial languages such as programming languages and logics. The Cicero–Tully example above can be easily adapted. Use the notation ⟨t⟩ as a quotation that mentions a term t. Define a predicate which is true of terms six letters long. Then quotation induces an opaque context, or is referentially opaque, because the predicate is true of ⟨Cicero⟩ but false of ⟨Tully⟩, even though 'Cicero' and 'Tully' refer to the same person. Programming languages often have richer semantics than logics' semantics of truth and falsity, and so an operator such as quotation may fail to be referentially transparent for other reasons as well.
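
A small Python sketch of the adapted example (illustrative, not from the excerpt): substituting co-referential names is harmless in an ordinary context, but changes the truth value in a quotation-like context that talks about the names themselves.

# Two co-referential names for one and the same object.
cicero = {"occupation": "Roman orator"}
tully = cicero  # same object, different name

# Transparent context: any claim about the object itself is preserved.
assert tully["occupation"] == cicero["occupation"]

# Opaque (quotation-like) context: a predicate about the name, not the object.
def six_letters(name):
    return len(name) == 6

print(six_letters("Cicero"))  # True
print(six_letters("Tully"))   # False: substitution inside the quotes flips the truth value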

See also[edit]


https://en.wikipedia.org/wiki/Opaque_context


In philosophical logic, the masked-man fallacy (also known as the intensional fallacy or epistemic fallacy)[1] is committed when one makes an illicit use of Leibniz's law in an argument. Leibniz's law states that if A and B are the same object, then A and B are indiscernible (that is, they have all the same properties). By modus tollens, this means that if one object has a certain property, while another object does not have the same property, the two objects cannot be identical. The fallacy is "epistemic" because it posits an immediate identity between a subject's knowledge of an object and the object itself, failing to recognize that Leibniz's Law is not capable of accounting for intensional contexts.

https://en.wikipedia.org/wiki/Masked-man_fallacy


In computer programming, a pure function is a function that has the following properties:[1][2]

  1. The function return values are identical for identical arguments (no variation with local static variables, non-local variables, mutable reference arguments, or input streams).
  2. The function application has no side effects (no mutation of local static variables, non-local variables, mutable reference arguments or input/output streams).

Thus a pure function is a computational analogue of a mathematical function. Some authors, particularly from the imperative language community, use the term "pure" for all functions that just have the above property 2[3][4] (discussed below).
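
A minimal Python sketch of the two properties (illustrative names, not from the cited article): add is pure, while add_and_log returns the same values for the same arguments but has a side effect and therefore fails property 2.

def add(x, y):
    # Pure: the result depends only on the arguments, and nothing else is touched.
    return x + y

log = []

def add_and_log(x, y):
    # Impure: mutates the non-local list `log`, a side effect that repeated
    # calls leave behind even though the return value never varies.
    log.append((x, y))
    return x + y

assert add(2, 3) == add(2, 3)  # identical arguments, identical result
add_and_log(2, 3)
add_and_log(2, 3)
print(len(log))  # 2: the calls changed observable program state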

https://en.wikipedia.org/wiki/Pure_function


Referential transparency and referential opacity are properties of parts of computer programs. An expression is called referentially transparent if it can be replaced with its corresponding value (and vice-versa) without changing the program's behavior.[1] This requires that the expression be pure, that is to say the expression value must be the same for the same inputs and its evaluation must have no side effects. An expression that is not referentially transparent is called referentially opaque.

In mathematics all function applications are referentially transparent, by the definition of what constitutes a mathematical function. However, this is not always the case in programming, where the terms procedure and method are used to avoid misleading connotations. In functional programming only referentially transparent functions are considered. Some programming languages provide means to guarantee referential transparency. Some functional programming languages enforce referential transparency for all functions.

The importance of referential transparency is that it allows the programmer and the compiler to reason about program behavior as a rewrite system. This can help in proving correctness, simplifying an algorithm, assisting in modifying code without breaking it, or optimizing code by means of memoization, common subexpression elimination, lazy evaluation, or parallelization.
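
One of the payoffs listed above is memoization: because a referentially transparent call can be replaced by its previously computed value, results may be cached. A minimal Python sketch (illustrative, assuming the memoized function is pure):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Referentially transparent: fib(n) always denotes the same value, so the
    # cache may substitute a stored result for a repeated call.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, with each fib(k) evaluated only once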

https://en.wikipedia.org/wiki/Referential_transparency


In philosophy, identity, from Latin identitas ("sameness"), is the relation each thing bears only to itself.[1][2] The notion of identity gives rise to many philosophical problems, including the identity of indiscernibles (if x and y share all their properties, are they one and the same thing?), and questions about change and personal identity over time (what has to be the case for a person x at one time and a person y at a later time to be one and the same person?). It is important to distinguish between qualitative identity and numerical identity. For example, consider two children with identical bicycles engaged in a race while their mother is watching. The two children have the same bicycle in one sense (qualitative identity) and the same mother in another sense (numerical identity).[3] This article is mainly concerned with numerical identity, which is the stricter notion.

The philosophical concept of identity is distinct from the better-known notion of identity in use in psychology and the social sciences. The philosophical concept concerns a relation, specifically, a relation that x and y stand in if, and only if they are one and the same thing, or identical to each other (i.e. if, and only if x = y). The sociological notion of identity, by contrast, has to do with a person's self-conception, social presentation, and more generally, the aspects of a person that make them unique, or qualitatively different from others (e.g. cultural identity, gender identity, national identity, online identity, and processes of identity formation). Lately, identity has been conceptualized considering humans’ position within the ecological web of life.[4]

https://en.wikipedia.org/wiki/Identity_(philosophy)


The identity of indiscernibles is an ontological principle that states that there cannot be separate objects or entities that have all their properties in common. That is, entities x and y are identical if every predicate possessed by x is also possessed by y and vice versa. It states that no two distinct things (such as snowflakes) can be exactly alike, but this is intended as a metaphysical principle rather than one of natural science. A related principle is the indiscernibility of identicals, discussed below.
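
In second-order notation, the two principles mentioned here are commonly written as follows (a standard rendering, not taken from the excerpt):

\text{Identity of indiscernibles:}\quad \forall F\,(Fx \leftrightarrow Fy) \;\rightarrow\; x = y
\text{Indiscernibility of identicals (Leibniz's law):}\quad x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)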

A form of the principle is attributed to the German philosopher Gottfried Wilhelm Leibniz. While some think that Leibniz's version of the principle is meant to be only the indiscernibility of identicals, others have interpreted it as the conjunction of the identity of indiscernibles and the indiscernibility of identicals (the converse principle). Because of its association with Leibniz, the indiscernibility of identicals is sometimes known as Leibniz's law. It is considered to be one of his great metaphysical principles, the others being the principle of noncontradiction and the principle of sufficient reason (famously used in his disputes with Newton and Clarke in the Leibniz–Clarke correspondence).

Some philosophers have decided, however, that it is important to exclude certain predicates (or purported predicates) from the principle in order to avoid either triviality or contradiction. An example (detailed below) is the predicate that denotes whether an object is equal to x (often considered a valid predicate). As a consequence, there are a few different versions of the principle in the philosophical literature, of varying logical strength—and some of them are termed "the strong principle" or "the weak principle" by particular authors, in order to distinguish between them.[1]

Willard Van Orman Quine thought that the failure of substitution in intensional contexts (e.g., "Sally believes that p" or "It is necessarily the case that q") shows that modal logic is an impossible project.[2][relevant?] Saul Kripke holds that this failure may be the result of the use of the disquotational principle implicit in these proofs, and not a failure of substitutivity as such.[3][relevant?]

The identity of indiscernibles has been used to motivate notions of noncontextuality within quantum mechanics.

Associated with this principle is also the question as to whether it is a logical principle, or merely an empirical principle.

https://en.wikipedia.org/wiki/Identity_of_indiscernibles


https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_philosophy


In the philosophy of language, the distinction between sense and reference was an innovation of the German philosopher and mathematician Gottlob Frege in 1892 (in his paper "On Sense and Reference"; German: "Über Sinn und Bedeutung"),[1] reflecting the two ways he believed a singular term may have meaning.

The reference (or "referent"; Bedeutung) of a proper name is the object it means or indicates (bedeuten), whereas its sense (Sinn) is what the name expresses. The reference of a sentence is its truth value, whereas its sense is the thought that it expresses.[1] Frege justified the distinction in a number of ways.

  1. Sense is something possessed by a name, whether or not it has a reference. For example, the name "Odysseus" is intelligible, and therefore has a sense, even though there is no individual object (its reference) to which the name corresponds.
  2. The sense of different names is different, even when their reference is the same. Frege argued that if an identity statement such as "Hesperus is the same planet as Phosphorus" is to be informative, the proper names flanking the identity sign must have a different meaning or sense. But clearly, if the statement is true, they must have the same reference.[2] The sense is a 'mode of presentation', which serves to illuminate only a single aspect of the referent.[3]

Much of analytic philosophy is traceable to Frege's philosophy of language.[4] Frege's views on logic (i.e., his idea that some parts of speech are complete by themselves, and are analogous to the arguments of a mathematical function) led to his views on a theory of reference.[4]

https://en.wikipedia.org/wiki/Sense_and_reference


https://en.wikipedia.org/wiki/Opaque_context

https://en.wikipedia.org/wiki/Pragmatics


Monday, August 16, 2021

08-16-2021-0404 - Required to stabilize population turnover prompt; Including Land Future - Earthy - USA - NAC DOM - etc.

Earthy

USA

Stimulus Check Four 

Reoccurring Direct

Required to stabilize population turnover prompt

Including Reduce/Cease losses to non-turnover presence anomaly 

Required to secure lands possessed at location: Continent West (inc. North America, Americas, etc.)

Required to prevent mass emigration from United States of America, North America Continent

State of Dependence on foreign country exist

Compliance with foreign country, provision of funds to foreign country, etc., required to maintain relations.

Mass immigration to NAC with USA permission/asst. has occurred and has been escalated to a hostage scenario due to the pandemic response by the receiving country (USA) at NAC (USA has enhostaged immigrants at continent west/NAC, 2020; resource/fund deprivation).

Hierarchy, order, rank, complexity, sequence, pattern, etc.