Blog Archive

Friday, May 19, 2023

05-19-2023-0231 - Reification_(fallacy), ideal type, Morphological_analysis_(problem-solving), Tripartite_classification_of_authority, Verstehen, Social_action, Normal Type, etc. (draft)

Reification (also known as concretism, hypostatization, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete real event or physical entity.[1][2] In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory". A potential consequence of reification is exemplified by Goodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.

Reification is part of normal usage of natural language (just like metonymy for instance), as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy.[3]

Etymology

From Latin res ("thing") and -fication, a suffix related to facere ("to make").[4] Thus reification can be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.

Theory

Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will".[5]

Reification may derive from an inborn tendency to simplify experience by assuming constancy as much as possible.[6]

Fallacy of misplaced concreteness

According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness.'"[7] Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions.

[...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.[8]

Vicious abstractionism

William James used the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticize Immanuel Kant's and Georg Wilhelm Friedrich Hegel's idealistic philosophies. In The Meaning of Truth, James wrote:

Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ... The viciously privative employment of abstract characters and class names is, I am persuaded, one of the great original sins of the rationalistic mind.[9]

In a chapter on "The Methods and Snares of Psychology" in The Principles of Psychology, James describes a related fallacy, the psychologist's fallacy, thus: "The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the "psychologist's fallacy" par excellence" (volume 1, p. 196). John Dewey followed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".[10]

Use of constructs in science

The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology, utility in economics, and gravitational field in physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena.

The degree to which a construct is useful and accepted as part of the current paradigm in a scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity).[11] Thus, in contrast to Whitehead, many psychologists seem to believe that, if properly understood and empirically corroborated, the "reification fallacy" applied to scientific constructs is not a fallacy at all, but one part of theory creation and evaluation in normal science.

Stephen Jay Gould draws heavily on the idea of the fallacy of reification in his book The Mismeasure of Man. He argues that the error in using intelligence quotient scores to judge people's intelligence is that, just because a quantity called "intelligence" or "intelligence quotient" is defined as a measurable thing, it does not follow that intelligence is real; he thereby denies the validity of the construct "intelligence."[12]

Relation to other fallacies

Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings.[13] Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.

The animistic fallacy involves attributing personal intention to an event or situation.

Reification fallacy should not be confused with other fallacies of ambiguity:

  • Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
  • Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
  • Composition, when one assumes that a whole has a property solely because its various parts have that property
  • Division, when one assumes that various parts have a property solely because the whole has that same property
  • Equivocation, the misleading use of a word with more than one meaning

As a rhetorical device

The rhetorical devices of metaphor and personification express a form of reification, but short of a fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as the pathetic fallacy, "the sea was angry", reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy lies in faulty reasoning, not in the mere illustration or poetry of rhetoric.[2]

Counterexamples

Reification, while usually fallacious, is sometimes considered a valid argument. Thomas Schelling, a game theorist during the Cold War, argued that for many purposes an abstraction shared among disparate people effectively becomes real. Examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial index, national borders, preferred numbers, and many others.[14]

See also

References


  • Reification, Encyclopædia Britannica

  • "Logical Fallacies, Formal and Informal". usabig.com. Archived from the original on 22 November 2011. Retrieved 10 April 2018.

  • Dowden, Bradley. "Fallacy". Internet Encyclopedia of Philosophy. ISSN 2161-0002. Retrieved 26 April 2021. Whether a phrase commits the fallacy depends crucially upon whether the use of the inaccurate phrase is inappropriate in the situation. In a poem, it is appropriate and very common to reify nature, hope, fear, forgetfulness, and so forth, that is, to treat them as if they were objects or beings with intentions. In any scientific claim, it is inappropriate.

  • "reification, n." OED Online. Oxford University Press, September 2016. Web. 24 September 2016.

  • David K. Naugle (2002). Worldview: the history of a concept. Wm. B. Eerdmans Publishing. p. 178. ISBN 978-0-8028-4761-4.

  • David Galin in B. Alan Wallace, editor, Buddhism & Science: Breaking New Ground. Columbia University Press, 2003, p. 132.

  • Whitehead, Alfred North (1997) [1925]. Science and the Modern World. Free Press (Simon & Schuster). p. 52. ISBN 978-0-684-83639-3.

  • Whitehead, Alfred North (1997) [1925]. Science and the Modern World. Free Press (Simon & Schuster). p. 58. ISBN 978-0-684-83639-3.

  • James, William, The Meaning of Truth, A Sequel to 'Pragmatism', (1909/1979), Harvard University Press, pp. 135-136

  • Winther, Rasmus G. (2014). James and Dewey on Abstraction. The Pluralist 9 (2), pp. 9-17 http://philpapers.org/archive/WINJAD.pdf

  • Kaplan, R. M., & Saccuzzo, D. P. (1997). Psychological Testing. Chapter 5. Pacific Grove: Brooks-Cole.

  • Pitkin, Hanna Fenichel (March 1987). "Rethinking reification". Theory and Society. 16 (2): 263–293. doi:10.1007/bf00135697. ISSN 0304-2421. S2CID 189890548.

  • "Pathetic fallacy". Encyclopædia Britannica. Retrieved 9 October 2012. http://www.britannica.com/EBchecked/topic/446415/pathetic-fallacy

  • Schelling, Thomas C. (1980). The Strategy of Conflict. Harvard University Press. ISBN 9780674840317.


    https://en.wikipedia.org/wiki/Reification_(fallacy)

    https://en.wikipedia.org/wiki/Ideal_type


    https://en.wikipedia.org/wiki/Morphological_analysis_(problem-solving)

    https://en.wikipedia.org/wiki/Tripartite_classification_of_authority

    https://en.wikipedia.org/wiki/Verstehen

    https://en.wikipedia.org/wiki/Social_action

    https://en.wikipedia.org/wiki/Balanced_hypergraph



    https://en.wikipedia.org/wiki/Normal_subgroup


    Fully characteristic subgroup

    For an even stronger constraint, a fully characteristic subgroup (also, fully invariant subgroup; cf. invariant subgroup), H, of a group G, is a subgroup that remains invariant under every endomorphism of G; that is,

    ∀φ ∈ End(G): φ[H] ≤ H.

    Every group has itself (the improper subgroup) and the trivial subgroup as two of its fully characteristic subgroups. The commutator subgroup of a group is always a fully characteristic subgroup.[3][4]

    Every endomorphism of G induces an endomorphism of G/H, which yields a map End(G) → End(G/H).

    https://en.wikipedia.org/wiki/Characteristic_subgroup#Fully_characteristic_subgroup
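A minimal sketch of the definition above (my own illustration, not from the excerpt): in the cyclic group Z6 every endomorphism has the form x ↦ kx mod 6, so we can check φ[H] ≤ H for the subgroup H = {0, 2, 4} against all of End(Z6) directly.

```python
# Sketch: Z6 = {0,...,5} under addition mod 6. Every endomorphism of a
# cyclic group is x -> k*x mod n, so End(Z6) has exactly six elements.
# We verify that H = {0, 2, 4} satisfies phi[H] <= H for every phi,
# i.e. H is fully characteristic in Z6.
n = 6
H = {0, 2, 4}
endomorphisms = [lambda x, k=k: (k * x) % n for k in range(n)]
fully_characteristic = all(phi(h) in H for phi in endomorphisms for h in H)
print(fully_characteristic)  # True
```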

     

    In the theory of abelian groups, the torsion subgroup AT of an abelian group A is the subgroup of A consisting of all elements that have finite order (the torsion elements of A[1]). An abelian group A is called a torsion group (or periodic group) if every element of A has finite order and is called torsion-free if every element of A except the identity is of infinite order.

    The proof that AT is closed under the group operation relies on the commutativity of the operation (see examples section).

    If A is abelian, then the torsion subgroup T is a fully characteristic subgroup of A and the factor group A/T is torsion-free. There is a covariant functor from the category of abelian groups to the category of torsion groups that sends every group to its torsion subgroup and every homomorphism to its restriction to the torsion subgroup. There is another covariant functor from the category of abelian groups to the category of torsion-free groups that sends every group to its quotient by its torsion subgroup, and sends every homomorphism to the obvious induced homomorphism (which is easily seen to be well-defined).

    If A is finitely generated and abelian, then it can be written as the direct sum of its torsion subgroup T and a torsion-free subgroup (but this is not true for all infinitely generated abelian groups). In any decomposition of A as a direct sum of a torsion subgroup S and a torsion-free subgroup, S must equal T (but the torsion-free subgroup is not uniquely determined). This is a key step in the classification of finitely generated abelian groups.

    https://en.wikipedia.org/wiki/Torsion_subgroup
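As a concrete sketch (my own illustration, not from the excerpt): in the abelian group Z × Z4, an element (a, b) has finite order exactly when a = 0, so the torsion elements form the subgroup {(0, b) : b ∈ Z4}, and closure under the (commutative) operation can be checked exhaustively.

```python
# Sketch: elements of Z x Z4 as tuples, with componentwise addition
# (second coordinate mod 4). (a, b) has finite order iff a == 0, since
# any a != 0 never returns to 0 under repeated addition.
def add(x, y):
    return (x[0] + y[0], (x[1] + y[1]) % 4)

def has_finite_order(x):
    return x[0] == 0

torsion = [(0, b) for b in range(4)]
# The sum of two torsion elements is again torsion: closure holds.
closed = all(has_finite_order(add(s, t)) for s in torsion for t in torsion)
print(closed)  # True
```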

    In mathematics, particularly in the area of abstract algebra known as group theory, a characteristic subgroup is a subgroup that is mapped to itself by every automorphism of the parent group.[1][2] Because every conjugation map is an inner automorphism, every characteristic subgroup is normal; though the converse is not guaranteed. Examples of characteristic subgroups include the commutator subgroup and the center of a group.

    https://en.wikipedia.org/wiki/Characteristic_subgroup

     In abstract algebra an inner automorphism is an automorphism of a group, ring, or algebra given by the conjugation action of a fixed element, called the conjugating element. They can be realized via simple operations from within the group itself, hence the adjective "inner". These inner automorphisms form a subgroup of the automorphism group, and the quotient of the automorphism group by this subgroup is defined as the outer automorphism group.

    https://en.wikipedia.org/wiki/Inner_automorphism

     

    In mathematics, especially group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = gag⁻¹. This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under the map x ↦ gxg⁻¹ for all elements g in the group.

    Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure.[1][2] For an abelian group, each conjugacy class is a set containing one element (singleton set).

    Functions that are constant for members of the same conjugacy class are called class functions.

    https://en.wikipedia.org/wiki/Conjugacy_class

     


    In mathematics, the general linear group of degree n is the set of n×n invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position.

    To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R (the set of real numbers) is the group of n×n invertible matrices of real numbers, and is denoted by GLn(R) or GL(n, R).

    More generally, the general linear group of degree n over any field F (such as the complex numbers), or a ring R (such as the ring of integers), is the set of n×n invertible matrices with entries from F (or R), again with matrix multiplication as the group operation.[1] Typical notation is GLn(F) or GL(n, F), or simply GL(n) if the field is understood.

    More generally still, the general linear group of a vector space GL(V) is the automorphism group, not necessarily written as matrices.

    The special linear group, written SL(n, F) or SLn(F), is the subgroup of GL(n, F) consisting of matrices with a determinant of 1.

    The group GL(n, F) and its subgroups are often called linear groups or matrix groups (the automorphism group GL(V) is a linear group but not a matrix group). These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials. The modular group may be realised as a quotient of the special linear group SL(2, Z).

    If n ≥ 2, then the group GL(n, F) is not abelian.

    https://en.wikipedia.org/wiki/General_linear_group
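A small sketch of the definition (my own illustration, not from the excerpt): over the two-element field F2 we can enumerate all 2×2 matrices and keep those with nonzero determinant mod 2; this recovers GL(2, F2), which has order (2² − 1)(2² − 2) = 6.

```python
# Sketch: enumerate all 2x2 matrices over F2 = {0, 1} as tuples
# (a, b, c, d) and keep the invertible ones, i.e. det = ad - bc != 0 mod 2.
from itertools import product

invertible = [
    (a, b, c, d)
    for a, b, c, d in product(range(2), repeat=4)
    if (a * d - b * c) % 2 != 0
]
print(len(invertible))  # 6 = order of GL(2, F2)
```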

    Definition

    Let G be a group. Two elements a and b of G are conjugate if there exists an element g in G such that b = gag⁻¹, in which case b is called a conjugate of a and a is called a conjugate of b.

    In the case of the general linear group of invertible matrices, the conjugacy relation is called matrix similarity.

    It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a is

    Cl(a) = { gag⁻¹ : g ∈ G },

    and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.

    Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6", and "6B" would be a different conjugacy class with elements of order 6; the conjugacy class 1A is the conjugacy class of the identity which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.

    https://en.wikipedia.org/wiki/Conjugacy_class
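As a worked sketch (my own illustration, not from the excerpt): the conjugacy classes of the symmetric group S3 can be computed directly, with permutations as tuples of images and conjugation as g∘a∘g⁻¹. The three classes correspond to cycle types: identity, transpositions, and 3-cycles.

```python
# Sketch: compute the conjugacy classes of S3. Permutations are tuples
# (images of 0, 1, 2); composition is (p * q)(i) = p(q(i)).
from itertools import permutations

group = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugacy_class(a):
    return frozenset(compose(compose(g, a), inverse(g)) for g in group)

classes = {conjugacy_class(a) for a in group}
print(sorted(len(c) for c in classes))  # [1, 2, 3]
```

The class sizes 1, 2, 3 are the identity, the two 3-cycles, and the three transpositions; note they all divide |S3| = 6.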


    In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular, nondegenerate or (rarely used) regular), if there exists an n-by-n square matrix B such that

    AB = BA = Iₙ,

    where Iₙ denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication.[1] If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A⁻¹.[2] Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.

    A square matrix that is not invertible is called singular or degenerate. A square matrix with entries in a field is singular if and only if its determinant is zero.[3] Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any bounded region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n (n ≤ m), then A has a left inverse, an n-by-m matrix B such that BA = Iₙ. If A has rank m (m ≤ n), then it has a right inverse, an n-by-m matrix B such that AB = Iₘ.

    While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any ring. However, in the case of the ring being commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for existence of left-inverse or right-inverse are more complicated, since a notion of rank does not exist over rings.

    The set of n × n invertible matrices together with the operation of matrix multiplication (and entries from ring R) form a group, the general linear group of degree n, denoted GLn(R).

    https://en.wikipedia.org/wiki/Invertible_matrix
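A minimal sketch (my own illustration, not from the excerpt): for a 2×2 matrix the inverse is given explicitly by the adjugate formula A⁻¹ = (1/det A)·[[d, −b], [−c, a]], and we can confirm A·A⁻¹ = I₂ with exact rational arithmetic.

```python
# Sketch: invert a 2x2 matrix via the adjugate formula, using Fraction
# for exact arithmetic, then check that A times its inverse is I2.
from fractions import Fraction

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]
I2 = matmul(A, inverse_2x2(A))
print(I2 == [[1, 0], [0, 1]])  # True
```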


    In linear algebra, two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix P such that

    B = P⁻¹AP.

    Similar matrices represent the same linear map under two (possibly) different bases, with P being the change of basis matrix.[1][2]

    A transformation A ↦ P⁻¹AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H.

    https://en.wikipedia.org/wiki/Matrix_similarity
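A small sketch (my own illustration, not from the excerpt): conjugating A by an invertible P gives a similar matrix B = P⁻¹AP, and similarity preserves basis-independent invariants such as the trace and the determinant.

```python
# Sketch: form B = P^-1 A P for 2x2 matrices and check that the trace
# and determinant of A are unchanged, as expected for similar matrices.
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(A):
    (a, b), (c, d) = A
    d0 = Fraction(a * d - b * c)
    return [[d / d0, -b / d0], [-c / d0, a / d0]]

A = [[4, 1], [2, 3]]
P = [[1, 1], [1, 2]]  # invertible: det(P) = 1
B = matmul(matmul(inverse_2x2(P), A), P)

trace = lambda M: M[0][0] + M[1][1]
determinant = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace(B) == trace(A), determinant(B) == determinant(A))  # True True
```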


    In mathematics, the order of a finite group is the number of its elements. If a group is not finite, one says that its order is infinite. The order of an element of a group (also called period length or period) is the order of the subgroup generated by the element. If the group operation is denoted as a multiplication, the order of an element a of a group is thus the smallest positive integer m such that a^m = e, where e denotes the identity element of the group, and a^m denotes the product of m copies of a. If no such m exists, the order of a is infinite.

    The order of a group G is denoted by ord(G) or |G|, and the order of an element a is denoted by ord(a) or |a|, instead of ord(⟨a⟩), where the angle brackets denote the generated group.

    Lagrange's theorem states that for any subgroup H of a finite group G, the order of the subgroup divides the order of the group; that is, |H| is a divisor of |G|. In particular, the order |a| of any element is a divisor of |G|.

    https://en.wikipedia.org/wiki/Order_(group_theory)
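A quick sketch (my own illustration, not from the excerpt): in the additive group Z12 the order of a is the smallest m ≥ 1 with m·a ≡ 0 (mod 12), and every element order divides |Z12| = 12, as Lagrange's theorem requires.

```python
# Sketch: compute element orders in the additive group Z12 and check
# that each order divides the group order 12 (Lagrange's theorem).
def order(a, n=12):
    m, x = 1, a % n
    while x != 0:
        x = (x + a) % n
        m += 1
    return m

orders = {a: order(a) for a in range(12)}
print(orders[1], orders[2], orders[3])  # 12 6 4
print(all(12 % m == 0 for m in orders.values()))  # True
```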

    [Figures: a Cayley graph of the symmetric group S4, and a Cayley table of the symmetric group S3 with elements represented as matrices (odd permutations marked, even permutations and the identity distinguished). Some matrices are not arranged symmetrically about the main diagonal, so the symmetric group is not abelian.]

    In abstract algebra, the symmetric group defined over any set is the group whose elements are all the bijections from the set to itself, and whose group operation is the composition of functions. In particular, the finite symmetric group Sₙ defined over a finite set of n symbols consists of the permutations that can be performed on the n symbols.[1] Since there are n! (n factorial) such permutation operations, the order (number of elements) of the symmetric group Sₙ is n!.

    Although symmetric groups can be defined on infinite sets, this article focuses on the finite symmetric groups: their applications, their elements, their conjugacy classes, a finite presentation, their subgroups, their automorphism groups, and their representation theory. For the remainder of this article, "symmetric group" will mean a symmetric group on a finite set.

    The symmetric group is important to diverse areas of mathematics such as Galois theory, invariant theory, the representation theory of Lie groups, and combinatorics. Cayley's theorem states that every group G is isomorphic to a subgroup of the symmetric group on (the underlying set of) G.

    https://en.wikipedia.org/wiki/Symmetric_group
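A minimal sketch (my own illustration, not from the excerpt): the finite symmetric group on n symbols can be materialized as all permutations of {0, …, n−1}, and its order checked against n!.

```python
# Sketch: build the symmetric group on n symbols as all permutations
# of range(n) and confirm its order is n! for small n.
from itertools import permutations
from math import factorial

for n in range(1, 6):
    sym_group = list(permutations(range(n)))
    assert len(sym_group) == factorial(n)
print(len(list(permutations(range(4)))))  # 24 = 4!
```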

    The manipulations of the Rubik's Cube form the Rubik's Cube group.

    In mathematics, a group is a non-empty set and an operation that combines any two elements of the set to produce a third element of the set, in such a way that the operation is associative, an identity element exists and every element has an inverse. These three axioms hold for number systems and many other mathematical structures. For example, the integers together with the addition operation form a group. The concept of a group and the axioms that define it were elaborated for handling, in a unified way, essential structural properties of very different mathematical entities such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics.[1][2]

    In geometry groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry.

    The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory. 

    https://en.wikipedia.org/wiki/Group_(mathematics)
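The three axioms named above can be checked mechanically for a small example (my own illustration, not from the excerpt): the integers mod 5 under addition form a group, with 0 as identity.

```python
# Sketch: verify associativity, identity, and inverses for Z5 under
# addition mod 5 by exhaustive checking.
n = 5
elems = range(n)
op = lambda a, b: (a + b) % n

associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in elems for b in elems for c in elems)
has_identity = all(op(0, a) == a and op(a, 0) == a for a in elems)
has_inverses = all(any(op(a, b) == 0 for b in elems) for a in elems)
print(associative, has_identity, has_inverses)  # True True True
```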

     


    In mathematics, the automorphism group of an object X is the group consisting of automorphisms of X under composition of morphisms. For example, if X is a finite-dimensional vector space, then the automorphism group of X is the group of invertible linear transformations from X to itself (the general linear group of X). If instead X is a group, then its automorphism group is the group consisting of all group automorphisms of X.

    Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is sometimes called a transformation group.

    Automorphism groups are studied in a general way in the field of category theory.

    https://en.wikipedia.org/wiki/Automorphism_group
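As a concrete sketch (my own illustration, not from the excerpt): for the group Z6, every endomorphism is x ↦ kx mod 6, and it is an automorphism (a bijective homomorphism) exactly when gcd(k, 6) = 1, so Aut(Z6) has order φ(6) = 2.

```python
# Sketch: find the automorphisms of Z6 among the maps x -> k*x mod 6
# by testing which are bijections, and compare with the gcd criterion.
from math import gcd

n = 6
automorphisms = [k for k in range(n)
                 if len({(k * x) % n for x in range(n)}) == n]
print(automorphisms)  # [1, 5]
print(all(gcd(k, n) == 1 for k in automorphisms))  # True
```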

     

    Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Nowadays, category theory is used in almost all areas of mathematics, and in many areas of computer science. In particular, numerous constructions of new mathematical objects from previous ones, that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.

    A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has similar properties as function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.

    The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories C and D: it maps objects of C to objects of D and morphisms of C to morphisms of D in such a way that sources are mapped to sources and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors.

    https://en.wikipedia.org/wiki/Category_theory
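A small informal sketch of the functor laws (my own illustration, not from the excerpt): Python's map on lists behaves like a functor from functions to functions on lists, preserving identity and composition.

```python
# Sketch: the "list functor" sends a function f to map(f, -); the two
# functor laws say it preserves identity and composition.
f = lambda x: x + 1
g = lambda x: 2 * x
xs = [1, 2, 3]

identity_law = list(map(lambda x: x, xs)) == xs
composition_law = (list(map(g, map(f, xs)))
                   == list(map(lambda x: g(f(x)), xs)))
print(identity_law, composition_law)  # True True
```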

     


    In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry.

    In mathematical contexts, duality has numerous meanings.[1] It has been described as "a very pervasive and important concept in (modern) mathematics"[2] and "an important general theme that has manifestations in almost every area of mathematics".[3]

    Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, linear algebra duality corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the duality between distributions and the associated test functions corresponds to the pairing in which one integrates a distribution against a test function, and Poincaré duality corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold.[4]

    From a category theory viewpoint, duality can also be seen as a functor, at least in the realm of vector spaces. This functor assigns to each space its dual space, and the pullback construction assigns to each arrow f: V → W its dual f∗: W∗ → V∗.

    https://en.wikipedia.org/wiki/Duality_(mathematics) 


    In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and is linear in each of its arguments. Matrix multiplication is an example. 

    https://en.wikipedia.org/wiki/Bilinear_map
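The matrix-multiplication example can be checked directly (my own illustration, not from the excerpt): multiplication is linear in each argument separately, e.g. (sA + tB)C = s(AC) + t(BC).

```python
# Sketch: verify linearity of matrix multiplication in its first
# argument on 2x2 integer matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(s, A):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 3]]
s, t = 3, -2

lhs = matmul(add(scale(s, A), scale(t, B)), C)
rhs = add(scale(s, matmul(A, C)), scale(t, matmul(B, C)))
print(lhs == rhs)  # True
```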


     

     



     

     

     

    Triad refers to a group of three people in sociology. It is one of the simplest human groups that can be studied and is mostly looked at by microsociology. The study of triads and dyads was pioneered by German sociologist Georg Simmel at the end of the nineteenth century.

    A triad can be viewed as a group of three people that can create different group interactions. This specific grouping is common yet often overlooked in society, for reasons such as how it compares to other relationships, how it shapes society, and how communication plays a role in different relationship scenarios.[1]

    The concept was developed in the late 1800s to early 1900s and has evolved over time to describe group interactions in the present. Simmel also hypothesized how dyads and triads may differ. A dyad is a group of two people that interact, while a triad adds a third person, creating more communicational interactions.[2] For example, adding an extra person to create a triad can result in different language barriers, personal connections, and an overall impression of the third person.[2]

    Simmel wanted to convey to his audience that a triad is not simply a basic group with positive interactions, but that these interactions can differ from person to person.

    https://en.wikipedia.org/wiki/Triad_(sociology)

     
