https://en.wikipedia.org/wiki/Measurement
In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape.[1] It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock.
https://en.wikipedia.org/wiki/Key_relevance
Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables.[1] Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio.[1][2] This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others.[3] Other classifications include those by Mosteller and Tukey,[4] and by Chrisman.[5]
https://en.wikipedia.org/wiki/Level_of_measurement
The limit of detection (LOD or LoD) is the lowest signal, or the lowest corresponding quantity to be determined (or extracted) from the signal, that can be observed with a sufficient degree of confidence or statistical significance. However, the exact threshold (level of decision) used to decide when a signal significantly emerges above the continuously fluctuating background noise remains arbitrary and is a matter of policy and often of debate among scientists, statisticians and regulators depending on the stakes in different fields.
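As a rough illustration of one common convention for setting such a decision threshold (the mean of blank measurements plus k blank standard deviations, with k often taken as 3), the following Python sketch uses made-up blank readings; the function name, data, and choice of k are illustrative assumptions, not values from any particular standard.

```python
import statistics

def detection_threshold(blank_signals, k=3.0):
    """Decision threshold: blank mean plus k blank standard deviations.

    k is a policy choice (3 is a common convention); signals below the
    threshold are treated as indistinguishable from background noise.
    """
    mean_blank = statistics.mean(blank_signals)
    sd_blank = statistics.stdev(blank_signals)
    return mean_blank + k * sd_blank

# Illustrative blank (background-only) readings in arbitrary signal units
blanks = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.14]
print(f"decision threshold: {detection_threshold(blanks):.3f}")
```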
https://en.wikipedia.org/wiki/Detection_limit
In the branch of experimental psychology focused on sense, sensation, and perception, which is called psychophysics, a just-noticeable difference or JND is the amount something must be changed in order for a difference to be noticeable, detectable at least half the time.[1] This limen is also known as the difference limen, difference threshold, or least perceptible difference.[2]
https://en.wikipedia.org/wiki/Just-noticeable_difference
An environmental error is an error in the observations or calculations of an experiment that is caused by the surrounding environment. Any experiment, wherever in the universe it is performed, has surroundings from which the system under study cannot be isolated. The principal benefit of studying environmental effects is that it allows the influence of the environment on an experiment to be recognised and accounted for; a suitable, well-controlled environment will not only correct the result but can also improve it.
https://en.wikipedia.org/wiki/Environmental_error
In behavioral psychology (or applied behavior analysis), stimulus control is a phenomenon in operant conditioning (also called contingency management) that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus (Sd) or stimulus delta (S-delta). Stimulus-based control of behavior occurs when the presence or absence of an Sd or S-delta controls the performance of a particular behavior. For example, the presence of a stop sign (an Sd) at a traffic intersection alerts the driver to stop driving and increases the probability that "braking" behavior will occur. Such behavior is said to be emitted because the stimulus does not force the behavior to occur: stimulus control is a direct result of historical reinforcement contingencies, as opposed to reflexive behavior, which is said to be elicited through respondent conditioning.
Some theorists believe that all behavior is under some form of stimulus control.[1] For example, in the analysis of B. F. Skinner,[2] verbal behavior is a complicated assortment of behaviors with a variety of controlling stimuli.[3]
https://en.wikipedia.org/wiki/Stimulus_control
Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself as the fading of a non-reinforced conditioned response over time. When operant behavior that has previously been reinforced no longer produces reinforcing consequences, the behavior gradually stops occurring.[1] In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders, such as post-traumatic stress disorder, are believed to reflect, at least in part, a failure to extinguish conditioned fear.[2]
https://en.wikipedia.org/wiki/Extinction_(psychology)
Discrimination is the act of making distinctions between people, based on the groups, classes, or other categories to which they belong or are perceived to belong, in ways that are disadvantageous to them.[1] People may be discriminated against on the basis of race, gender identity, sex, age, religion, disability, or sexual orientation, as well as other categories.[2] Discrimination especially occurs when individuals or groups are unfairly treated in a way which is worse than other people are treated, on the basis of their actual or perceived membership in certain groups or social categories.[2][3] It involves restricting members of one group from opportunities or privileges that are available to members of another group.[4]
Discriminatory traditions, policies, ideas, practices and laws exist in many countries and institutions in all parts of the world, including territories where discrimination is generally looked down upon. In some places, attempts such as quotas have been used to benefit those who are believed to be current or past victims of discrimination. These attempts have often been met with controversy, and have sometimes been called reverse discrimination.
https://en.wikipedia.org/wiki/Discrimination
In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc which can support up to 24 bits per sample.
In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping, and oversampling can mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size.
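For an ideal quantizer driven by a full-scale sine wave, the theoretical signal-to-quantization-noise ratio grows by about 6 dB per added bit (approximately 6.02·N + 1.76 dB for N bits). The short Python sketch below evaluates this idealised formula for 16-bit and 24-bit PCM; real converters, dither, and noise shaping change the picture, so the numbers should be read as theoretical upper bounds.

```python
import math

def quantization_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale
    sine wave: 20*log10(2**N * sqrt(3/2)) ≈ 6.02*N + 1.76 dB."""
    return 20 * math.log10(2 ** bits * math.sqrt(1.5))

for n in (16, 24):
    print(f"{n}-bit PCM: ~{quantization_snr_db(n):.1f} dB SNR")
# 16-bit -> ~98.1 dB, 24-bit -> ~146.3 dB
```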
Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths.[a]
https://en.wikipedia.org/wiki/Audio_bit_depth
Image resolution is the detail an image holds. The term applies to digital images, film images, and other types of images. "Higher resolution" means more image detail.
Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense. Instead of single lines, line pairs are often used, composed of a dark line and an adjacent light line; for example, a resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter (5 LP/mm). Photographic lens and film resolution are most often quoted in line pairs per millimeter.
https://en.wikipedia.org/wiki/Image_resolution
Gestalt psychology, gestaltism, or configurationism is a school of psychology that emerged in the early twentieth century in Austria and Germany as a theory of perception that was a rejection of basic principles of Wilhelm Wundt's and Edward Titchener's elementalist and structuralist psychology.[1][2][3]
https://en.wikipedia.org/wiki/Gestalt_psychology
In psychology and cognitive science, a schema (plural schemata or schemas) describes a pattern of thought or behavior that organizes categories of information and the relationships among them.[1][2] It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information,[3] such as a mental schema or conceptual model. Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information.[4] Schemata can help in understanding the world and the rapidly changing environment.[5] People can organize new perceptions into schemata quickly as most situations do not require complex thought when using schema, since automatic thought is all that is required.[5]
People use schemata to organize current knowledge and provide a framework for future understanding. Examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. In Piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world.[6]
https://en.wikipedia.org/wiki/Schema_(psychology)
The earliest recorded systems of weights and measures originate in the 3rd or 4th millennium BC. Even the very earliest civilizations needed measurement for purposes of agriculture, construction and trade. Early standard units might only have applied to a single community or small region, with every area developing its own standards for lengths, areas, volumes and masses. Often such systems were closely tied to one field of use, so that volume measures used, for example, for dry grains were unrelated to those for liquids, with neither bearing any particular relationship to units of length used for measuring cloth or land. With development of manufacturing technologies, and the growing importance of trade between communities and ultimately across the Earth, standardized weights and measures became critical. Starting in the 18th century, modernized, simplified and uniform systems of weights and measures were developed, with the fundamental units defined by ever more precise methods in the science of metrology. The discovery and application of electricity was one factor motivating the development of standardized internationally applicable units.
https://en.wikipedia.org/wiki/History_of_measurement
The history of science and technology (HST) is a field of history that examines the understanding of the natural world (science) and the ability to manipulate it (technology) at different points in time. This academic discipline also studies the cultural, economic, and political impacts of and contexts for scientific practices.
https://en.wikipedia.org/wiki/History_of_science_and_technology
Instrumentation is a collective term for measuring instruments that are used for indicating, measuring and recording physical quantities. The term has its origins in the art and science of scientific instrument-making.
Instrumentation can refer to devices as simple as direct-reading thermometers, or as complex as multi-sensor components of industrial control systems. Today, instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use (e.g., smoke detectors and thermostats).
https://en.wikipedia.org/wiki/Instrumentation
ISO 10012:2003, Measurement management systems - Requirements for measurement processes and measuring equipment is the ISO standard that specifies generic requirements and provides guidance for the management of measurement processes and metrological confirmation of measuring equipment used to support and demonstrate compliance with metrological requirements. It specifies quality management requirements of a measurement management system that can be used by an organization performing measurements as part of the overall management system, and to ensure metrological requirements are met.
ISO 10012:2003 is not intended to be used as a requisite for demonstrating conformance with ISO 9001, ISO 14001 or any other standard. Interested parties can agree to use ISO 10012:2003 as an input for satisfying measurement management system requirements in certification activities.
Other standards and guides exist for particular elements affecting measurement results, e.g. details of measurement methods, competence of personnel, and interlaboratory comparisons.
ISO 10012:2003 is not intended as a substitute for, or as an addition to, the requirements of ISO/IEC 17025.
https://en.wikipedia.org/wiki/ISO_10012
A primary instrument is a scientific instrument that, by its physical characteristics, is accurate and is not calibrated against anything else. A primary instrument must be able to be exactly duplicated anywhere, at any time, with identical results.
https://en.wikipedia.org/wiki/Primary_instrument
An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature, and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is 10, the order of magnitude can be understood as the number of digits in the base-10 representation of the value. Similarly, if the reference value is a power of 2, the magnitude can be understood in terms of the amount of computer memory needed to store that value, since computers store data in binary format.
Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten).[1] Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
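Under the common convention that takes the floor of the logarithm, the order of magnitude can be computed directly, as in the small Python sketch below; other conventions (such as rounding to the nearest power) give slightly different answers, so this is one illustrative choice rather than a canonical definition.

```python
import math

def order_of_magnitude(x, base=10):
    """Order of magnitude of x relative to the given base:
    the floored base-`base` logarithm of |x|."""
    return math.floor(math.log(abs(x), base))

print(order_of_magnitude(3_200))          # 3  (thousands)
print(order_of_magnitude(0.004))          # -3
print(order_of_magnitude(3_200, base=2))  # 11 (value fits in 12 bits)
```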
https://en.wikipedia.org/wiki/Order_of_magnitude
In statistics, latent variables (from Latin: present participle of lateo, "lie hidden") are variables that can only be inferred indirectly, through a mathematical model, from other observable variables that can be directly measured.[1] Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management and the social sciences.
Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. In this situation, the term hidden variables is commonly used (reflecting the fact that the variables are meaningful, but not observable). Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The terms hypothetical variables or hypothetical constructs may be used in these situations.
The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world.
https://en.wikipedia.org/wiki/Latent_and_observable_variables
Hidden variables may refer to:
- Confounding, in statistics, an extraneous variable in a statistical model that correlates (directly or inversely) with both the dependent variable and the independent variable
- Hidden transformation, in computer science, a way to transform a generic constraint satisfaction problem into a binary one by introducing new hidden variables
- Hidden-variable theories, in physics, the proposition that statistical models of physical systems (such as quantum mechanics) are inherently incomplete, and that the apparent randomness of a system arises not from collapsing wave functions but from unseen or unmeasurable (and thus "hidden") variables
- Local hidden-variable theory, in quantum mechanics, a hidden-variable theory in which distant events are assumed to have no instantaneous (or at least faster-than-light) effect on local events
- Latent variables, in statistics, variables that are inferred from other observed variables
https://en.wikipedia.org/wiki/Hidden_variable
NCSL International (NCSLI) (from the founding name "National Conference of Standards Laboratories") is a global, non-profit organization whose membership is open to any organization with an interest in metrology (the science of measurement) and its application in research, development, education, and commerce.
https://en.wikipedia.org/wiki/NCSL_International
Measurement is a peer-reviewed scientific journal covering all aspects of metrology. It was established in 1983 and is published 18 times per year. It is published by Elsevier on behalf of the International Measurement Confederation and the editor-in-chief is Paolo Carbone (University of Perugia). According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.131.[1]
https://en.wikipedia.org/wiki/Measurement_(journal)
An unusual unit of measurement is a unit of measurement that does not form part of a coherent system of measurement, especially because its exact quantity may not be well known or because it may be an inconvenient multiple or fraction of a base unit.
Many of the unusual units of measurement listed here are colloquial measurements: units devised to compare a measurement to common and familiar objects.
https://en.wikipedia.org/wiki/List_of_unusual_units_of_measurement
In the science of measurement, the least count of a measuring instrument is the smallest value in the measured quantity that can be resolved on the instrument's scale.[1] The least count is related to the precision of an instrument: an instrument that can resolve smaller changes in a value than another instrument has a smaller least count and so is more precise. Any measurement made by the instrument can be considered repeatable to no better than the resolution of the least count. The smaller the least count of an instrument, the greater its precision.
For example, a sundial may only have scale marks representing the hours of daylight; it would have a least count of one hour. A stopwatch used to time a race might resolve down to a hundredth of a second, its least count. The stopwatch is more precise at measuring time intervals than the sundial because it has more "counts" (scale intervals) in each hour of elapsed time. Knowing the least count of an instrument is essential for obtaining accurate readings from instruments such as the vernier caliper and the screw gauge used in various experiments.
Least count uncertainty is one of the sources of experimental error in measurements. A typical vernier caliper has a least count of 0.1 mm and a typical micrometer a least count of 0.01 mm.
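As a hedged illustration, the Python sketch below treats the least count (or half of it, depending on the convention adopted) as the reading uncertainty of a single measurement; the reading and instrument values are illustrative, not taken from any particular experiment.

```python
def reading_uncertainty(least_count, half_count_convention=True):
    """Reading uncertainty implied by an instrument's least count.

    Conventions differ: some take the full least count as the uncertainty,
    others half of it.
    """
    return least_count / 2 if half_count_convention else least_count

length_mm = 25.4      # illustrative reading from a vernier caliper
least_count_mm = 0.1  # least count of the caliper
u = reading_uncertainty(least_count_mm)
print(f"{length_mm} ± {u} mm (relative uncertainty {u / length_mm:.2%})")
```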
https://en.wikipedia.org/wiki/Least_count
In metrology (the science of measurement), a standard (or etalon) is an object, system, or experiment that bears a defined relationship to a unit of measurement of a physical quantity.[1] Standards are the fundamental reference for a system of weights and measures, against which all other measuring devices are compared. Historical standards for length, volume, and mass were defined by many different authorities, which resulted in confusion and inaccuracy of measurements. Modern measurements are defined in relationship to internationally standardized reference objects, which are used under carefully controlled laboratory conditions to define the units of length, mass, electrical potential, and other physical quantities.
https://en.wikipedia.org/wiki/Standard_(metrology)
Timeline of temperature and pressure measurement technology: a history of temperature measurement and pressure measurement technology.
https://en.wikipedia.org/wiki/Timeline_of_temperature_and_pressure_measurement_technology
Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.
Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware: for example, an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.
Some companies have developed a newer technology called "hard virtual instrumentation", in which execution of the software is carried out by the hardware itself, which can help with fast real-time processing.
https://en.wikipedia.org/wiki/Virtual_instrumentation
A unit of measurement is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity.[1] Any other quantity of that kind can be expressed as a multiple of the unit of measurement.[2]
For example, a length is a physical quantity. The metre (symbol m) is a unit of length that represents a definite predetermined length. For instance, when referencing "10 metres" (or 10 m), what is actually meant is 10 times the definite predetermined length called "metre".
The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to the present. A multitude of systems of units used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system.
In trade, weights and measures is often a subject of governmental regulation, to ensure fairness and transparency. The International Bureau of Weights and Measures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI).
Metrology is the science of developing nationally and internationally accepted units of measurement.
In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures historically developed for commercial purposes.[3]
Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life. The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis).
In the social sciences, there are no standard units of measurement and the theory and practice of measurement is studied in psychometrics and the theory of conjoint measurement.
https://en.wikipedia.org/wiki/Unit_of_measurement
Metric fixation refers to a tendency for decision-makers to place excessively large emphases on selected metrics.
In management (and many other social science fields), decision makers typically use metrics to measure how well a person or an organization attains desired goals. For example, a company might use "the number of new customers gained" as a metric to evaluate the success of a marketing campaign. The issue of metric fixation is said to arise when decision makers focus excessively on the metrics, often to the point that they treat "attaining desired values on the metrics" as a core goal (instead of simply an indicator of success). For example, a school may want to improve the number of students who pass a certain test (metric = "number of students who pass"). This assumes that the test truly evaluates students' ability to succeed in the real world (and that there is already a good definition of what "success" means). If the test fails to evaluate students' ability to function in the working world, focusing solely on increasing their scores on this test might cause the school to neglect other learning goals that are also crucial for real-world functioning. As a result, students' development might be impaired.[1][2]
The concept of metric fixation was first mentioned in the 2018 book The Tyranny of Metrics.[3] Since then, it has drawn the attention of some management researchers and data scientists.[2][4]
https://en.wikipedia.org/wiki/Metric_fixation
https://en.wikipedia.org/wiki/Category:Standards_and_measurement_stubs
https://en.wikipedia.org/wiki/Death
https://en.wikipedia.org/wiki/Drug_reference_standard
https://en.wikipedia.org/wiki/Dry_gallon
https://en.wikipedia.org/wiki/Bureau_of_Normalization
https://en.wikipedia.org/wiki/Air_track
https://en.wikipedia.org/wiki/Calibration_gas
https://en.wikipedia.org/wiki/Emission_test_cycle
https://en.wikipedia.org/wiki/Classification_of_Types_of_Construction
https://en.wikipedia.org/wiki/Atom_(time)
https://en.wikipedia.org/wiki/Bite_force_quotient
https://en.wikipedia.org/wiki/Day_of_Six_Billion
Population is the term typically used to refer to the number of people in a single area. Governments conduct a census to quantify the size of a resident population within a given jurisdiction. The term is also applied to animals, microorganisms, and plants, and has specific uses within such fields as ecology and genetics.
https://en.wikipedia.org/wiki/Population
Human overpopulation (or human population overshoot) is the hypothetical state in which human populations can become too large to be sustained by their environment or resources in the long term. The topic is usually discussed in the context of world population, though it may concern individual nations, regions, and cities.
Since 1804, the global human population has increased from 1 billion to 8 billion due to medical advancements and improved agricultural productivity. According to the most recent United Nations' projections, "[t]he global population is expected to reach 9.7 billion in 2050 and 10.4 billion in 2100 [assuming] a decline of fertility for countries where large families are still prevalent."[1] Those concerned by this trend argue that it results in levels of resource consumption and pollution which exceed the environment's carrying capacity, leading to population overshoot.[2] The population overshoot hypothesis is often discussed in relation to other population concerns such as population momentum, biodiversity loss,[3] hunger and malnutrition,[4] resource depletion, and the overall human impact on the environment.[5]
Early discussions of overpopulation in English were spurred by the work of Thomas Malthus. Discussions of overpopulation follow a similar line of inquiry as Malthusianism and its Malthusian catastrophe,[6][7] a hypothetical event where population exceeds agricultural capacity, causing famine or war over resources, resulting in poverty and depopulation. More recent discussion of overpopulation was popularized by Paul Ehrlich in his 1968 book The Population Bomb and subsequent writings.[8][9] Ehrlich described overpopulation as a function of overconsumption,[10] arguing that overpopulation should be defined by a population being unable to sustain itself without depleting non-renewable resources.[11][12][13]
https://en.wikipedia.org/wiki/Human_overpopulation
Population growth is the increase in the number of people in a population or dispersed group. Actual global human population growth amounts to around 83 million annually, or 1.1% per year.[2] The global population has grown from 1 billion in 1800 to 7.9 billion in 2020.[3] The UN projected population to keep growing, and estimates have put the total population at 8.6 billion by mid-2030, 9.8 billion by mid-2050 and 11.2 billion by 2100.[4] However, some academics outside the UN have increasingly developed human population models that account for additional downward pressures on population growth; in such a scenario population would peak before 2100.[5]
https://en.wikipedia.org/wiki/Population_growth
Population projections are attempts to show how the human population statistics might change in the future.[1] These projections are an important input to forecasts of the population's impact on this planet and humanity's future well-being.[2] Models of population growth take trends in human development, and apply projections into the future.[3] These models use trend-based-assumptions about how populations will respond to economic, social and technological forces to understand how they will affect fertility and mortality, and thus population growth.[3]
https://en.wikipedia.org/wiki/Projections_of_population_growth
The Journal of Computational and Nonlinear Dynamics is a quarterly peer-reviewed multidisciplinary scientific journal covering the study of nonlinear dynamics. It was established in 2006 and is published by the American Society of Mechanical Engineers. The editor-in-chief is Balakumar Balachandran (University of Maryland). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.996.[1]
https://en.wikipedia.org/wiki/Journal_of_Computational_and_Nonlinear_Dynamics
Systems science is the field of science surrounding systems theory, cybernetics, and the science of complex systems. As an interdisciplinary science, it is applicable in a variety of areas, such as engineering, biology, medicine and social sciences.
https://en.wikipedia.org/wiki/Category:Systems_science
In simple terms, risk is the possibility of something bad happening.[1] Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value (such as health, well-being, wealth, property or the environment), often focusing on negative, undesirable consequences.[2] Many different definitions have been proposed. The international standard definition of risk for common understanding in different applications is “effect of uncertainty on objectives”.[3]
The understanding of risk, the methods of assessment and management, the descriptions of risk and even the definitions of risk differ in different practice areas (business, economics, environment, finance, information technology, health, insurance, safety, security etc). This article provides links to more detailed articles on these areas. The international standard for risk management, ISO 31000, provides principles and generic guidelines on managing risks faced by organizations.[4]
Definitions of risk
Oxford English Dictionary
The Oxford English Dictionary (OED) cites the earliest use of the word in English (in the spelling risque, from its French original) as 1621, and the spelling risk from 1655. While including several other definitions, the OED 3rd edition defines risk as:
(Exposure to) the possibility of loss, injury, or other adverse or welcome circumstance; a chance or situation involving such a possibility.[5]
The Cambridge Advanced Learner's Dictionary gives a simple summary, defining risk as “the possibility of something bad happening”.[1]
International Organization for Standardization
The International Organization for Standardization (ISO) Guide 73 provides basic vocabulary to develop common understanding on risk management concepts and terms across different applications. ISO Guide 73:2009 defines risk as:
effect of uncertainty on objectives
Note 1: An effect is a deviation from the expected – positive or negative.
Note 2: Objectives can have different aspects (such as financial, health and safety, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process).
Note 3: Risk is often characterized by reference to potential events and consequences or a combination of these.
Note 4: Risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated likelihood of occurrence.
Note 5: Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood.[3]
This definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject matter experts. It was first adopted in 2002. Its complexity reflects the difficulty of satisfying fields that use the term risk in different ways. Some restrict the term to negative impacts (“downside risks”), while others include positive impacts (“upside risks”).
ISO 31000:2018 “Risk management — Guidelines” uses the same definition with a simpler set of notes.[4]
Other
Many other definitions of risk have been influential:
- “Source of harm”. The earliest use of the word “risk” was as a synonym for the much older word “hazard”, meaning a potential source of harm. This definition comes from Blount’s “Glossographia” (1661)[6] and was the main definition in the OED 1st (1914) and 2nd (1989) editions. Modern equivalents refer to “unwanted events” [7] or “something bad that might happen”.[1]
- “Chance of harm”. This definition comes from Johnson’s “Dictionary of the English Language” (1755), and has been widely paraphrased, including “possibility of loss” [5] or “probability of unwanted events”.[7]
- “Uncertainty about loss”. This definition comes from Willett’s “Economic Theory of Risk and Insurance” (1901).[8] This links “risk” to “uncertainty”, which is a broader term than chance or probability.
- “Measurable uncertainty”. This definition comes from Knight’s “Risk, Uncertainty and Profit” (1921).[9] It allows “risk” to be used equally for positive and negative outcomes. In insurance, risk involves situations with unknown outcomes but known probability distributions.[10]
- “Volatility of return”. Equivalence between risk and variance of return was first identified in Markowitz’s “Portfolio Selection” (1952).[11] In finance, volatility of return is often equated to risk.[12]
- “Statistically expected loss”. The expected value of loss was used to define risk by Wald (1939) in what is now known as decision theory.[13] The probability of an event multiplied by its magnitude was proposed as a definition of risk for the planning of the Delta Works in 1953, a flood protection program in the Netherlands.[14] It was adopted by the US Nuclear Regulatory Commission (1975),[15] and remains widely used.[7]
- “Likelihood and severity of events”. The “triplet” definition of risk as “scenarios, probabilities and consequences” was proposed by Kaplan & Garrick (1981).[16] Many definitions refer to the likelihood/probability of events/effects/losses of different severity/consequence, e.g. ISO Guide 73 Note 4.[3]
- “Consequences and associated uncertainty”. This was proposed by Kaplan & Garrick (1981).[16] This definition is preferred in Bayesian analysis, which sees risk as the combination of events and uncertainties about them.[17]
- “Uncertain events affecting objectives”. This definition was adopted by the Association for Project Management (1997).[18][19] With slight rewording it became the definition in ISO Guide 73.[3]
- “Uncertainty of outcome”. This definition was adopted by the UK Cabinet Office (2002)[20] to encourage innovation to improve public services. It allowed “risk” to describe either “positive opportunity or negative threat of actions and events”.
- “Asset, threat and vulnerability”. This definition comes from the Threat Analysis Group (2010) in the context of computer security.[21]
- “Human interaction with uncertainty”. This definition comes from Cline (2015)[22] in the context of adventure education.
Some resolve these differences by arguing that the definition of risk is subjective. For example:
No definition is advanced as the correct one, because there is no one definition that is suitable for all problems. Rather, the choice of definition is a political one, expressing someone’s views regarding the importance of different adverse effects in a particular situation.[23]
The Society for Risk Analysis concludes that “experience has shown that to agree on one unified set of definitions is not realistic”. The solution is “to allow for different perspectives on fundamental concepts and make a distinction between overall qualitative definitions and their associated measurements.”[2]
https://en.wikipedia.org/wiki/Risk
Economic risk
Economics is concerned with the production, distribution and consumption of goods and services. Economic risk arises from uncertainty about economic outcomes. For example, economic risk may be the chance that macroeconomic conditions like exchange rates, government regulation, or political stability will affect an investment or a company’s prospects.[24]
In economics, as in finance, risk is often defined as quantifiable uncertainty about gains and losses.
Environmental risk
Environmental risk arises from environmental hazards or environmental issues.
In the environmental context, risk is defined as “The chance of harmful effects to human health or to ecological systems”.[25]
Environmental risk assessment aims to assess the effects of stressors, often chemicals, on the local environment.[26]
Financial risk
Finance is concerned with money management and acquiring funds.[27] Financial risk arises from uncertainty about financial returns. It includes market risk, credit risk, liquidity risk and operational risk.
In finance, risk is the possibility that the actual return on an investment will be different from its expected return.[28] This includes not only "downside risk" (returns below expectations, including the possibility of losing some or all of the original investment) but also "upside risk" (returns that exceed expectations). Following Knight, risk in finance is often defined as quantifiable uncertainty about gains and losses, in contrast with Knightian uncertainty, which cannot be quantified.
Financial risk modeling determines the aggregate risk in a financial portfolio. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. More recent risk measures include value at risk.
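As a minimal sketch of these two risk measures, the Python snippet below uses a short, invented series of returns: the standard deviation of returns as the modern-portfolio-theory measure, and a crude historical value at risk read off the empirical distribution. Real analyses use far longer return histories and more careful estimators, so the numbers here are illustrative only.

```python
import statistics

# Illustrative monthly portfolio returns (fractions, not percent)
returns = [0.021, -0.013, 0.008, 0.034, -0.027, 0.012, -0.041, 0.019]

# Modern portfolio theory: risk as the standard deviation of returns
volatility = statistics.stdev(returns)

# Historical value at risk at 95%: the loss not exceeded in 95% of periods,
# approximated here (with so few observations) by the worst observation
sorted_returns = sorted(returns)
var_95 = -sorted_returns[int(0.05 * len(returns))]

print(f"volatility ≈ {volatility:.3%} per month, "
      f"95% historical VaR ≈ {var_95:.1%}")
```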
Because investors are generally risk averse, investments with greater inherent risk must promise higher expected returns.[29]
Financial risk management uses financial instruments to manage exposure to risk. It includes the use of a hedge to offset risks by adopting a position in an opposing market or investment.
In financial audit, audit risk refers to the potential that an audit report may fail to detect material misstatement either due to error or fraud.
Health risk
Health risks arise from disease and other biological hazards.
Epidemiology is the study and analysis of the distribution, patterns and determinants of health and disease. It is a cornerstone of public health, and shapes policy decisions by identifying risk factors for disease and targets for preventive healthcare.
In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations.
A health risk assessment (also referred to as a health risk appraisal and health & well-being assessment) is a questionnaire screening tool, used to provide individuals with an evaluation of their health risks and quality of life.
Health, safety, and environment risks
Health, safety, and environment (HSE) are separate practice areas; however, they are often linked. The reason is typically to do with organizational management structures; however, there are strong links among these disciplines. One of the strongest links is that a single risk event may have impacts in all three areas, albeit over differing timescales. For example, the uncontrolled release of radiation or a toxic chemical may have immediate short-term safety consequences, more protracted health impacts, and much longer-term environmental impacts. Events such as Chernobyl, for example, caused immediate deaths, and in the longer term, deaths from cancers, and left a lasting environmental impact leading to birth defects, impacts on wildlife, etc.
Information technology risk
Information technology (IT) is the use of computers to store, retrieve, transmit, and manipulate data. IT risk (or cyber risk) arises from the potential that a threat may exploit a vulnerability to breach security and cause harm. IT risk management applies risk management methods to IT to manage IT risks. Computer security is the protection of IT systems by managing IT risks.
Information security is the practice of protecting information by mitigating information risks. While IT risk is narrowly focused on computer security, information risks extend to other forms of information (paper, microfilm).
Occupational risk
Occupational health and safety is concerned with occupational hazards experienced in the workplace.
The Occupational Health and Safety Assessment Series (OHSAS) standard OHSAS 18001 in 1999 defined risk as the “combination of the likelihood and consequence(s) of a specified hazardous event occurring”. In 2018 this was replaced by ISO 45001 “Occupational health and safety management systems”, which uses the ISO Guide 73 definition.
Project risk
A project is an individual or collaborative undertaking planned to achieve a specific aim. Project risk is defined as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives". Project risk management aims to increase the likelihood and impact of positive events and decrease the likelihood and impact of negative events in the project.[32][33]
Safety risk
Safety is concerned with a variety of hazards that may result in accidents causing harm to people, property and the environment. In the safety field, risk is typically defined as the “likelihood and severity of hazardous events”. Safety risks are controlled using techniques of risk management.
A high reliability organisation (HRO) involves complex operations in environments where catastrophic accidents could occur. Examples include aircraft carriers, air traffic control, aerospace and nuclear power stations. Some HROs manage risk in a highly quantified way. The technique is usually referred to as Probabilistic Risk Assessment (PRA). See WASH-1400 for an example of this approach. The incidence rate can also be reduced by providing better occupational health and safety programmes.[34]
Security risk
Security is freedom from, or resilience against, potential harm caused by others.
A security risk is "any event that could result in the compromise of organizational assets i.e. the unauthorized use, loss, damage, disclosure or modification of organizational assets for the profit, personal interest or political interests of individuals, groups or other entities."[35]
Security risk management involves protection of assets from harm caused by deliberate acts.
https://en.wikipedia.org/wiki/Risk
ISO 31000, the international standard for risk management,[4] describes a risk management process that consists of the following elements:
- Communicating and consulting
- Establishing the scope, context and criteria
- Risk assessment - recognising and characterising risks, and evaluating their significance to support decision-making. This includes risk identification, risk analysis and risk evaluation.
- Risk treatment - selecting and implementing options for addressing risk.
- Monitoring and reviewing
- Recording and reporting
https://en.wikipedia.org/wiki/Risk
Psychology of risk
Risk perception
Intuitive risk assessment
An understanding that future events are uncertain and a particular concern about harmful ones may arise in anyone living in a community, experiencing seasons, hunting animals or growing crops. Most adults therefore have an intuitive understanding of risk. This may not be exclusive to humans.[47]
In ancient times, the dominant belief was in divinely determined fates, and attempts to influence the gods may be seen as early forms of risk management. Early uses of the word ‘risk’ coincided with an erosion of belief in divinely ordained fate.[48]
Risk perception is the subjective judgement that people make about the characteristics and severity of a risk. At its most basic, the perception of risk is an intuitive form of risk analysis.[49]
Heuristics and biases
Intuitive understanding of risk differs in systematic ways from accident statistics. When making judgements about uncertain events, people rely on a few heuristic principles, which convert the task of estimating probabilities to simpler judgements. These heuristics are useful but suffer from systematic biases.[50]
The “availability heuristic” is the process of judging the probability of an event by the ease with which instances come to mind. In general, rare but dramatic causes of death are over-estimated while common unspectacular causes are under-estimated.[51]
An “availability cascade” is a self-reinforcing cycle in which public concern about relatively minor events is amplified by media coverage until the issue becomes politically important.[52]
Despite the difficulty of thinking statistically, people are typically over-confident in their judgements. They over-estimate their understanding of the world and under-estimate the role of chance.[53] Even experts are over-confident in their judgements.[54]
Psychometric paradigm
The “psychometric paradigm” assumes that risk is subjectively defined by individuals, influenced by factors that can be elicited by surveys.[55] People’s perception of the risk from different hazards depends on three groups of factors:
- Dread – the degree to which the hazard is feared or might be fatal, catastrophic, uncontrollable, inequitable, involuntary, increasing or difficult to reduce.
- Unknown - the degree to which the hazard is unknown to those exposed, unobservable, delayed, novel or unknown to science.
- Number of people exposed.
Hazards with high perceived risk are in general seen as less acceptable and more in need of reduction.[56]
Cultural theory of risk
Cultural Theory views risk perception as a collective phenomenon by which different cultures select some risks for attention and ignore others, with the aim of maintaining their particular way of life.[57] Hence risk perception varies according to the preoccupations of the culture. The theory distinguishes variations known as “group” (the degree of binding to social groups) and “grid” (the degree of social regulation), leading to four world-views:[58]
- Hierarchists (high group/high grid), who tend to approve of technology provided its risks are evaluated as acceptable by experts.
- Egalitarians (high group/low grid), who tend to object to technology because it perpetuates inequalities that harm society and the environment.
- Individualists (low group/low grid), who tend to approve of technology and see risks as opportunities.
- Fatalists (low group/high grid), who do not knowingly take risks but tend to accept risks that are imposed on them
Cultural Theory helps explain why it can be difficult for people with different world-views to agree about whether a hazard is acceptable, and why risk assessments may be more persuasive for some people (e.g. hierarchists) than others. However, there is little quantitative evidence that shows cultural biases are strongly predictive of risk perception.[59]
Risk and emotion
The importance of emotion in risk
While risk assessment is often described as a logical, cognitive process, emotion also has a significant role in determining how people react to risks and make decisions about them.[60] Some argue that intuitive emotional reactions are the predominant method by which humans evaluate risk. A purely statistical approach to disasters lacks emotion and thus fails to convey the true meaning of disasters and fails to motivate proper action to prevent them.[61] This is consistent with psychometric research showing the importance of “dread” (an emotion) alongside more logical factors such as the number of people exposed.
The field of behavioural economics studies human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". Recognizing and respecting the irrational influences on human decision making may improve naive risk assessments that presume rationality but in fact merely fuse many shared biases.
The affect heuristic
The “affect heuristic” proposes that judgements and decision-making about risks are guided, either consciously or unconsciously, by the positive and negative feelings associated with them. [62] This can explain why judgements about risks are often inversely correlated with judgements about benefits. Logically, risk and benefit are distinct entities, but it seems that both are linked to an individual’s feeling about a hazard.[63]
Fear, anxiety and risk
Worry or anxiety is an emotional state that is stimulated by anticipation of a future negative outcome, or by uncertainty about future outcomes. It is therefore an obvious accompaniment to risk, and is initiated by many hazards and linked to increases in perceived risk. It may be a natural incentive for risk reduction. However, worry sometimes triggers behaviour that is irrelevant or even increases objective measurements of risk.[64]
Fear is a more intense emotional response to danger, which increases the perceived risk. Unlike anxiety, it appears to dampen efforts at risk minimisation, possibly because it provokes a feeling of helplessness.[65]
Dread risk
It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[66] threatening to kill a large number of people within a short period of time.[67] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.[68][69]
Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks.[69] Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history.[70] Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people,[71] a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found[72] that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy.[73] Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.[74]
Outrage
Outrage is a strong moral emotion, involving anger over an adverse event coupled with an attribution of blame towards someone perceived to have failed to do what they should have done to prevent it. Outrage is the consequence of an event, involving a strong belief that risk management has been inadequate. Looking forward, it may greatly increase the perceived risk from a hazard.[75]
Human factors
One of the growing areas of focus in risk management is the field of human factors, where behavioural and organizational psychology underpin our understanding of risk-based decision making. This field considers questions such as "how do we make risk-based decisions?" and "why are we irrationally more scared of sharks and terrorists than we are of motor vehicles and medications?"
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion[76][77](preferring the status quo in case one becomes worse off).
Framing[78] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.
For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.
All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias. No group of people assessing risk is immune to "groupthink": the acceptance of obviously wrong answers simply because it is socially painful to disagree, particularly where there are conflicts of interest.
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective[79] while greater left prefrontal activity relates to local or focal processing.[80]
From the Theory of Leaky Modules[81] McElroy and Seta proposed that they could predictably alter the framing effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening.[82] The result was as expected. Rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.
Psychology of risk taking
A growing area of research has been to examine various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking.[83] Thus, positive and negative feedback about past risk taking can affect future risk taking. In an experiment, people who were led to believe they are very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.[84]
Other considerations
Risk and uncertainty
In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty.
... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.[85]
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:[86][12]
- Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
- Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years".
- Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
- Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there is more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.
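Hubbard's two measurements can be written out directly. The sketch below reuses the oil-well figures from the example above; summarising the risk measure as an expected loss is an illustrative choice of ours, not part of Hubbard's definition.

```python
# Measurement of uncertainty: probabilities assigned to a set of possibilities.
outcomes = {"well is productive": 0.60, "well is dry": 0.40}
assert abs(sum(outcomes.values()) - 1.0) < 1e-9  # probabilities must sum to 1

# Measurement of risk: the same possibilities with quantified losses attached
# (only the undesirable outcome carries a loss here).
losses = {"well is productive": 0.0, "well is dry": 12_000_000.0}

expected_loss = sum(p * losses[o] for o, p in outcomes.items())
print(f"expected loss: ${expected_loss:,.0f}")  # 0.40 * $12M = $4,800,000
```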
Mild versus wild risk
Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and analysis must be fundamentally different for the two types of risk.[87] Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and analysis is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and analysis are to be valid and reliable, according to Mandelbrot.
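The contrast Mandelbrot draws can be illustrated by watching the running sample mean of draws from a thin-tailed versus a fat-tailed distribution. The sketch below is a generic illustration rather than Mandelbrot's own analysis; the distributions and parameters are assumptions chosen to make the difference visible.

```python
import random

random.seed(0)
N = 100_000

def running_mean(samples):
    """Return the running average after each draw."""
    total, means = 0.0, []
    for i, x in enumerate(samples, start=1):
        total += x
        means.append(total / i)
    return means

# "Mild" risk: normal draws -- the running mean settles near 1 (law of large numbers).
mild = running_mean(random.gauss(1.0, 1.0) for _ in range(N))

# "Wild" risk: Pareto draws with shape alpha = 1 -- the theoretical mean is infinite,
# so the running mean keeps jumping upward and never stabilises.
wild = running_mean(random.paretovariate(1.0) for _ in range(N))

for k in (1_000, 10_000, 100_000):
    print(f"n={k:>7}:  mild mean = {mild[k-1]:.3f}   wild mean = {wild[k-1]:.3f}")
```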
Risk attitude, appetite and tolerance
The terms risk attitude, appetite, and tolerance are often used similarly to describe an organisation's or individual's attitude towards risk-taking. One's attitude may be described as risk-averse, risk-neutral, or risk-seeking. Risk tolerance looks at acceptable/unacceptable deviations from what is expected.[clarification needed] Risk appetite looks at how much risk one is willing to accept. There can still be deviations that are within a risk appetite. For example, recent research finds that insured individuals are significantly likely to divest from risky asset holdings in response to a decline in health, controlling for variables such as income, age, and out-of-pocket medical expenses.[88]
Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with possibly higher gain. The possibility of getting no return on an investment is also known as the rate of ruin.
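A toy expected-value comparison makes the contrast concrete. All of the numbers below are assumptions chosen for illustration; real lotteries and interest rates vary.

```python
ticket_price = 2.00
jackpot = 1_000_000.00
p_win = 1e-7              # assumed chance of hitting the jackpot

# Lottery: tiny chance of a large gain, high chance of losing the stake entirely.
lottery_expected_gain = p_win * jackpot - ticket_price

# Bank deposit: the same $2 left in an account at an assumed 3% guaranteed annual rate.
bank_expected_gain = ticket_price * 0.03

print(f"lottery expected gain: {lottery_expected_gain:+.2f}")  # -1.90 (near-certain loss)
print(f"bank expected gain:    {bank_expected_gain:+.2f}")     # +0.06 (small but certain)
```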
Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected.[89] By way of example, it has been observed that motorists drove faster when wearing seatbelts and closer to the vehicle in front when the vehicles were fitted with anti-lock brakes.
Risk and autonomy
The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse.[90] "People's autonomy used to be compromised by institution walls, now it's too often our risk management practices", according to John O'Brien.[91] Michael Fischer and Ewan Ferlie (2013) find that contradictions between formal risk controls and the role of subjective factors in human services (such as the role of emotions and ideology) can undermine service values, so producing tensions and even intractable and 'heated' conflict.[92]
Risk society
Anthony Giddens and Ulrich Beck argued that whilst humans have always been subjected to a level of risk – such as natural disasters – these have usually been perceived as produced by non-human forces. Modern societies, however, are exposed to risks, such as pollution, that are the result of the modernization process itself. Giddens defines these two types of risks as external risks and manufactured risks. The term Risk society was coined in the 1980s, and its popularity during the 1990s was a consequence both of its links to trends in thinking about wider modernity and of its links to popular discourse, in particular the growing environmental concerns during the period.[citation needed]
https://en.wikipedia.org/wiki/Risk#Psychology_of_risk
Risk management is the identification, evaluation, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events[1] or to maximize the realization of opportunities.
https://en.wikipedia.org/wiki/Risk_management
Fear, anxiety and risk
Worry or anxiety is an emotional state that is stimulated by anticipation of a future negative outcome, or by uncertainty about future outcomes. It is therefore an obvious accompaniment to risk, and is initiated by many hazards and linked to increases in perceived risk. It may be a natural incentive for risk reduction. However, worry sometimes triggers behaviour that is irrelevant or even increases objective measurements of risk.[64]
Fear is a more intense emotional response to danger, which increases the perceived risk. Unlike anxiety, it appears to dampen efforts at risk minimisation, possibly because it provokes a feeling of helplessness.[65]
Dread risk
It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[66] threatening to kill a large number of people within a short period of time.[67] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.[68][69]
https://en.wikipedia.org/wiki/Risk
https://en.wikipedia.org/wiki/Ambiguity_aversion
https://en.wikipedia.org/wiki/External_risk
https://en.wikipedia.org/wiki/Event_chain_methodology
https://en.wikipedia.org/wiki/Global_catastrophic_risk
https://en.wikipedia.org/wiki/Hazard
https://en.wikipedia.org/wiki/Record_linkage#Entity_resolution
https://en.wikipedia.org/wiki/Inherent_risk
https://en.wikipedia.org/wiki/Inherent_risk_(accounting)
https://en.wikipedia.org/wiki/Legal_risk
https://en.wikipedia.org/wiki/Safety-critical_system
https://en.wikipedia.org/wiki/Liquidity_risk
https://en.wikipedia.org/wiki/Moral_hazard
https://en.wikipedia.org/wiki/Operational_risk
https://en.wikipedia.org/wiki/Probabilistic_risk_assessment
https://en.wikipedia.org/wiki/Reliability_engineering
https://en.wikipedia.org/wiki/Risk_compensation
https://en.wikipedia.org/wiki/Risk-neutral_measure
https://en.wikipedia.org/wiki/Risk_perception
https://en.wikipedia.org/wiki/Sampling_risk
https://en.wikipedia.org/wiki/Systemic_risk
https://en.wikipedia.org/wiki/Systematic_risk
https://en.wikipedia.org/wiki/Uncertainty
https://en.wikipedia.org/wiki/Vulnerability
https://en.wikipedia.org/wiki/Absolute_probability_judgement
https://en.wikipedia.org/wiki/Multiple-criteria_decision_analysis
https://en.wikipedia.org/wiki/Risk_metric
https://en.wikipedia.org/wiki/Risk_register
https://en.wikipedia.org/wiki/Risk_matrix
https://en.wikipedia.org/wiki/Probability_density_function
https://en.wikipedia.org/wiki/Expected_utility_hypothesis
https://en.wikipedia.org/wiki/Value_at_risk
https://en.wikipedia.org/wiki/Expected_value
https://en.wikipedia.org/wiki/Loss_function#Expected_loss
https://en.wikipedia.org/wiki/Decision_rule
https://en.wikipedia.org/wiki/Loss_function
https://en.wikipedia.org/wiki/Risk_neutral
https://en.wikipedia.org/wiki/Volatility_(finance)
https://en.wikipedia.org/wiki/Variance
https://en.wikipedia.org/wiki/Risk#Risk_evaluation_and_risk_criteria
https://en.wikipedia.org/wiki/Availability_cascade
https://en.wikipedia.org/wiki/Affect_heuristic
https://en.wikipedia.org/wiki/Category:Experimental_psychology
https://en.wikipedia.org/wiki/Debriefing
https://en.wikipedia.org/wiki/Missing_letter_effect
https://en.wikipedia.org/wiki/Pseudoscope
https://en.wikipedia.org/wiki/Pseudoword
https://en.wikipedia.org/wiki/Repetition_priming
https://en.wikipedia.org/wiki/Sensory_overload
https://en.wikipedia.org/wiki/Stimulus_onset_asynchrony
https://en.wikipedia.org/wiki/Theory_of_Deadly_Initials
https://en.wikipedia.org/wiki/Tunnel_effect
https://en.wikipedia.org/wiki/Pair_by_association
https://en.wikipedia.org/wiki/Pavlov%27s_typology
https://en.wikipedia.org/wiki/Perceptual_attack_time
https://en.wikipedia.org/wiki/Generality_(psychology)
https://en.wikipedia.org/wiki/Effort_heuristic
https://en.wikipedia.org/wiki/Experimental_analysis_of_behavior
https://en.wikipedia.org/wiki/Experimental_pragmatics
https://en.wikipedia.org/wiki/External_inhibition
https://en.wikipedia.org/wiki/Eyeblink_conditioning
https://en.wikipedia.org/wiki/Neutral_stimulus
https://en.wikipedia.org/wiki/Heuristic_(psychology)
The curse of knowledge is a cognitive bias that occurs when an individual, who is communicating with other individuals, assumes that those individuals have the background and depth of knowledge needed to understand.[1] This bias is also called by some authors the curse of expertise.[2]
For example, in a classroom setting, teachers may have difficulty if they cannot put themselves in the position of the student. A knowledgeable professor might no longer remember the difficulties that a young student encounters when learning a new subject for the first time. This curse of knowledge also explains the danger behind thinking about student learning based on what appears best to faculty members, as opposed to what has been verified with students.[3]
https://en.wikipedia.org/wiki/Curse_of_knowledge
In contract theory and economics, information asymmetry deals with the study of decisions in transactions where one party has more or better information than the other.
Information asymmetry creates an imbalance of power in transactions, which can sometimes cause the transactions to be inefficient, causing market failure in the worst case. Examples of this problem are adverse selection,[1] moral hazard,[2] and monopolies of knowledge.[3]
A common way to visualise information asymmetry is with a scale, with one side being the seller and the other the buyer. When the seller has more or better information, the transaction will more likely occur in the seller's favour ("the balance of power has shifted to the seller"). An example of this could be when a used car is sold: the seller is likely to have a much better understanding of the car's condition, and hence its market value, than the buyer, who can only estimate the market value based on the information provided by the seller and their own assessment of the vehicle.[4] The balance of power can, however, also be in the hands of the buyer. When buying health insurance, the buyer is not always required to provide full details of future health risks. By not providing this information to the insurance company, the buyer will pay the same premium as someone much less likely to require a payout in the future.[5] This contrasts with perfect information, in which all parties have complete knowledge. If the buyer has more information, the power to manipulate the transaction shifts to the buyer's side of the scale.
Information asymmetry extends to non-economic behaviour. Private firms have better information than regulators about the actions that they would take in the absence of regulation, and the effectiveness of a regulation may be undermined.[6] International relations theory has recognized that wars may be caused by asymmetric information[7] and that "Most of the great wars of the modern era resulted from leaders miscalculating their prospects for victory".[8] Jackson and Morelli wrote that there is asymmetric information between national leaders, when there are differences "in what they know [i.e. believe] about each other's armaments, quality of military personnel and tactics, determination, geography, political climate, or even just about the relative probability of different outcomes" or where they have "incomplete information about the motivations of other agents".[9]
Information asymmetries are studied in the context of principal–agent problems, where they are a major cause of misinformation and are present in essentially every communication process.[10] Information asymmetry is in contrast to perfect information, which is a key assumption in neo-classical economics.[11]
In 1996, a Nobel Memorial Prize in Economics was awarded to James A. Mirrlees and William Vickrey for their "fundamental contributions to the economic theory of incentives under asymmetric information".[12] This led the Nobel Committee to acknowledge the importance of information problems in economics.[13] They later awarded another Nobel Prize in 2001 to George Akerlof, Michael Spence, and Joseph E. Stiglitz for their "analyses of markets with asymmetric information".[14]
https://en.wikipedia.org/wiki/Information_asymmetry
One of the most notable impacts of Akerlof's work is its impact on Keynesian theory.[13] Akerlof argues that the Keynesian theory of unemployment being voluntary implies that quits would rise with unemployment. He argues against his critics by drawing upon reasoning based on psychology and sociology rather than pure economics. He supplemented this with an argument that people do not always behave rationally, but rather information asymmetry leads to only "near rationality", which causes people to deviate from optimal behavior regarding employment practices.[19]
https://en.wikipedia.org/wiki/Information_asymmetry
Models
Information asymmetry models assume one party possesses some information that other parties have no access to. Some asymmetric information models can also be used in situations where at least one party can enforce, or effectively retaliate for breaches of, certain parts of an agreement, whereas the other(s) cannot.
Adverse Selection
Akerlof suggested that information asymmetry leads to adverse selection.[4] In adverse selection models, the ignorant party lacks information while negotiating an agreed understanding of, or contract for, the transaction. An example of adverse selection is when people who are high-risk are more likely to buy insurance because the insurance company cannot effectively discriminate against them, usually due to lack of information about the particular individual's risk but also sometimes by force of law or other constraints.
Credence goods fit the adverse selection model of information asymmetry. These are goods whose quality the buyer cannot judge even after the product is consumed, or where the buyer is unaware of the quality needed.[23] An example is a complex medical treatment such as heart surgery.
Moral Hazard
Moral hazard occurs when the ignorant party lacks information about the performance of the agreed-upon transaction or lacks the ability to retaliate for a breach of the agreement. This can result in a situation where a party is more likely to take risks because they are not fully responsible for the consequences of their actions. An example of moral hazard is when people are more likely to behave recklessly after becoming insured, either because the insurer cannot observe this behaviour or cannot effectively retaliate against it, for example, by failing to renew the insurance.[24] Moral hazard is not limited to individuals; firms can also act more recklessly if they know they will be bailed out. For example, banks may allow parties to take out risky loans if they know that the government will bail them out.[25]
Monopolies of Knowledge
In the model of monopolies of knowledge, the ignorant party has no right to access all the critical information needed for decision-making: one party has exclusive control over the information. This type of information asymmetry can be seen in government. An example of monopolies of knowledge is that in some enterprises only high-level management can fully access the corporate information provided by a third party, while lower-level employees are required to make important decisions with only the limited information provided to them.[26]
Solutions
Countermeasures have widely been discussed to reduce information asymmetry. The classic paper on adverse selection is George Akerlof's "The Market for Lemons" from 1970, which brought informational issues to the forefront of economic theory. Exploring signaling and screening, the paper discusses two primary solutions to this problem.[27] A similar concept is moral hazard, which differs from adverse selection at the timing level. While adverse selection affects parties before the interaction, moral hazard affects parties after the interaction. Regulatory instruments such as mandatory information disclosure can also reduce information asymmetry.[28] Warranties can further help mitigate the effect of asymmetric information.[29]
Signalling
Michael Spence originally proposed the idea of signalling.[15] He suggested that in a situation with information asymmetry, it is possible for people to signal their type, thus believably transferring information to the other party and resolving the asymmetry.
This idea was initially studied in the context of matching in the job market. An employer is interested in hiring a new employee who is "skilled in learning". Of course, all prospective employees will claim to be "skilled in learning", but only they know if they really are. This is an information asymmetry.
Spence proposes, for example, that going to college can function as a credible signal of an ability to learn. Assuming that people who are skilled in learning can finish college more easily than people who are unskilled, then by finishing college, the skilled people signal their skills to prospective employers. No matter how much or how little they may have learned in college or what they studied, finishing functions as a signal of their capacity for learning. However, finishing college may merely function as a signal of their ability to pay for college; it may signal the willingness of individuals to adhere to orthodox views, or it may signal a willingness to comply with authority.
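The logic can be reduced to a single separating condition: the signal is informative only when it is worth acquiring for the skilled type but not for the unskilled type. The sketch below is a toy check in the spirit of Spence's model; the wage premium and effort costs are invented parameters, not values from Spence.

```python
# Assumed, illustrative parameters (not Spence's own numbers).
wage_premium   = 20_000   # extra pay an employer offers to degree holders
cost_skilled   = 8_000    # effort cost of finishing college for a skilled learner
cost_unskilled = 35_000   # effort cost for an unskilled learner

# College separates the types only if the skilled find the degree worth acquiring
# while the unskilled do not; otherwise everyone (or no one) signals and the
# degree carries no information.
skilled_signals   = wage_premium > cost_skilled
unskilled_signals = wage_premium > cost_unskilled
print("credible separating signal:", skilled_signals and not unskilled_signals)  # True
```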
Signalling theory can be used in e-commerce research. Information asymmetry in e-commerce comes from information distortion that leads to the buyer's misunderstanding of the seller's true characteristics before the contract. Mavlanova, Benbunan-Fich and Koufaris (2012) noticed that signalling theory explains the relation between signals and qualities, illustrating why some signals are trustworthy and others are not. In e-commerce, signals deliver information about the characteristics of the seller. For instance, high-quality sellers are able to show their identity to buyers by using signs and logos, and then buyers check these signals to evaluate the credibility and validity of a seller's qualities. The study of Mavlanova, Benbunan-Fich and Koufaris (2012) also confirmed that signal usage is different between low-quality and high-quality online sellers. Low-quality sellers are more likely to avoid using expensive, easy-to-verify signals and tend to use fewer signals than high-quality sellers. Thus, signals help reduce information asymmetry.[30]
Screening
Joseph E. Stiglitz pioneered the theory of screening, in which the underinformed party induces the other party to reveal their information. The underinformed party can provide a menu of choices in such a way that the choice made depends on the private information of the other party.
The asymmetry can lie with either the buyer or the seller. For example, sellers with better information than buyers include used-car salespeople, mortgage brokers and loan originators, stockbrokers and real estate agents. Alternatively, situations where the buyer usually has better information than the seller include estate sales as specified in a last will and testament, life insurance, or sales of old art pieces without a prior professional assessment of their value. This situation was first described by Kenneth J. Arrow in an article on health care in 1963.[5]
George Akerlof, in "The Market for Lemons", notes that in such a market the average value of the commodity tends to go down, even for goods of perfectly good quality. This is similar to Gresham's law in monetary economics, which states that bad money drives out good. Because of information asymmetry, unscrupulous sellers can sell "forgeries" (like replica goods such as watches) and defraud the buyer. Meanwhile, buyers usually do not have enough information to distinguish lemons from quality goods. As a result, many people not willing to risk getting ripped off will avoid certain types of purchases or will not spend as much for a given item. Akerlof demonstrates that it is even possible for the market to decay to the point of nonexistence.
An example of adverse selection and information asymmetry causing market failure is the market for health insurance. Policies usually group subscribers together, where people can leave, but no one can join after it is set. As health conditions are realized over time, information involving health costs will arise, and low-risk policyholders will realize the mismatch in the premiums and health conditions. Due to this, healthy policyholders are incentivized to leave and reapply to get a cheaper policy that matches their expected health costs, which causes the premiums to increase. As high-risk policyholders are more dependent on insurance, they are stuck with higher premium costs as the group size reduces, which causes premiums to increase even further. This cycle repeats until the high-risk policy holders also find similar health policies with cheaper premiums, in which the initial group disappears. This concept is known as the death spiral and has been researched as early as 1988.[31]
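The cycle described above can be sketched as a small simulation: the insurer prices the pool at its average expected cost, the healthiest members find the premium a bad deal and leave, and the premium rises for whoever remains. All numbers and the exit rule below are assumptions for illustration only.

```python
# Expected annual health cost of each remaining policyholder (assumed figures).
pool = [500, 800, 1_200, 2_000, 3_500, 6_000, 9_000]
tolerance = 1.3  # assumed: a member stays only while premium <= tolerance * own cost

round_no = 1
while pool:
    premium = sum(pool) / len(pool)          # insurer prices at the pool average
    print(f"round {round_no}: premium = {premium:8.2f}, members = {len(pool)}")
    stayers = [c for c in pool if premium <= tolerance * c]
    if len(stayers) == len(pool):
        break                                # no one else leaves; the pool stabilises
    pool = stayers                           # the healthiest exit, premium rises again
    round_no += 1
```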
Akerlof also suggests different methods with which information asymmetry can be reduced. One of those instruments that can be used to reduce the information asymmetry between market participants is intermediary market institutions called counteracting institutions, for instance, guarantees for goods. By providing a guarantee, the buyer in the transaction can use extra time to obtain the same amount of information about the good as the seller before the buyer takes on the complete risk of the good being a "lemon". Other market mechanisms that help reduce the imbalance in information include brand names, chains and franchising that guarantee the buyer a threshold quality level. These mechanisms also let owners of high-quality products get the full value of the goods. These counteracting institutions then keep the market size from reducing to zero.
Warranty
Warranties are utilised as a method of verifying the credibility of a product and are a guarantee issued by the seller promising to replace or repair the good should the quality not be sufficient. Product warranties are often requested from buying parties or financial lenders and have been used as a form of mediation dating back to the Babylonian era.[32] Warranties can come in the form of insurance and can also come at the expense of the buyer. The implementation of "lemon laws" has eradicated the effect of information asymmetry on customers who have received a faulty item; essentially, such laws allow customers to return a defective product within a certain time period, regardless of circumstances.[33]
Mandatory information disclosure
Both signaling and screening resemble voluntary information disclosure, where the party having more information, in its own best interest, uses various measures to inform the other party. However, voluntary information disclosure is not always feasible. Regulators can thus take active measures to facilitate the spread of information. For example, the Securities and Exchange Commission (SEC) initiated Regulation Fair Disclosure (RFD) so that companies must faithfully disclose material information to investors. The policy has reduced information asymmetry, reflected in lower trading costs.[34]
Incentives and penalties
To reduce moral hazard, firms can implement penalties for bad behaviour and incentives to align objectives.[35] An example of building in an incentive is insurance companies not insuring customers for the total value; this provides an incentive to be less reckless, as the customer will suffer some of the financial liability as well.
Information gathering
Most models in traditional contract theory assume that asymmetric information is exogenously given.[36][37] Yet, some authors have also studied contract-theoretic models in which asymmetric information arises endogenously because agents decide whether or not to gather information. Specifically, Crémer and Khalil (1992) and Crémer, Khalil, and Rochet (1998a) study an agent's incentives to acquire private information after a principal has offered a contract.[38][39] In a laboratory experiment, Hoppe and Schmitz (2013) have provided empirical support for the theory.[40] Several further models have been developed which study variants of this setup. For instance, when the agent has not gathered information at the outset, does it make a difference whether or not he learns the information later on, before production starts?[41] What happens if the information can be gathered already before a contract is offered?[42] What happens if the principal observes the agent's decision to acquire information?[43] Finally, the theory has been applied in several contexts, such as public-private partnerships and vertical integration.[44][45]
https://en.wikipedia.org/wiki/Information_asymmetry
Effect of blogging
The effect of blogging, both as a source of information asymmetry and as a tool to reduce it, has also been well studied. Blogging on financial websites provides bottom-up communication among investors, analysts, journalists, and academics, as financial blogs help prevent people in charge from withholding financial information from their company and the general public.[52] Compared to traditional forms of media such as newspapers and magazines, blogging provides an easy-to-access venue for information. A 2013 study by Gregory Saxton and Ashley Anker concluded that more participation on blogging sites from credible individuals reduces information asymmetry between corporate insiders and outsiders, additionally reducing the risk of insider trading.[53]
Game theory
Game theory can be used to analyse asymmetric information.[54] Many of the foundational ideas in game theory build on the framework of information asymmetry. In simultaneous games, each player has no prior knowledge of an opponent's move. In sequential games, players may observe all or part of the opponent's moves. One example of information asymmetry is when one player can observe the opponent's past moves while the other player cannot. Therefore, the existence and level of information asymmetry in a game determines the dynamics of the game. James Fearon, in his study of explanations for war in a game-theoretic context, notes that war could be a consequence of information asymmetry: two countries may fail to reach a non-violent settlement because they have incentives to distort the amount of military resources they possess.[55]
https://en.wikipedia.org/wiki/Information_asymmetry
However, cases sometimes arise in which certain parties obtain information that is not in the public domain. This can create market return abnormalities, such as an abrupt surge or decline in a security's price.
https://en.wikipedia.org/wiki/Information_asymmetry
https://en.wikipedia.org/wiki/Category:Law_and_economics
Decoupling usually refers to the ending, removal or reverse of coupling.
Decoupling may also refer to:
Economics
- Decoupling (advertising), the purchase of services directly from suppliers rather than via an advertising agency
- Decoupling (utility regulation), the disassociation of a utility's profits from its sales
- Decoupling and re-coupling in economics and organizational studies
- Decoupling (organizational studies), creating and maintaining separation between policy, implementation and/or practice
- Decoupling of wages from productivity, sometimes known as the Great Decoupling
- Eco-economic decoupling, economic growth without increase in environmental costs
Science
- Decoupling (cosmology), transition from close interactions between particles to their effective independence
- Decoupling (meteorology), change in the interaction between atmospheric layers at night
- Decoupling (neuropsychopharmacology), changes in neurochemical binding sites as a consequence of drug tolerance
- Nuclear magnetic resonance decoupling
- Decoupling (probability), reduction of a statistic to an average derived from independent random-variable sequences
- Decoupling for body-focused repetitive behaviors, technique for the reduction of body-focused repetitive behaviors
Engineering
- Decoupling (electronics), prevention of undesired energy transfer between electrical media
- Decoupling capacitor, most common implementation technique
- The amelioration of coupling in computer programming
Other
- Uncoupling of railway carriages
https://en.wikipedia.org/wiki/Decoupling
The resource-based view (RBV) is a managerial framework used to determine the strategic resources a firm can exploit to achieve sustainable competitive advantage.
Barney's 1991 article "Firm Resources and Sustained Competitive Advantage" is widely cited as a pivotal work in the emergence of the resource-based view.[1] However, some scholars[who?] argue that there was evidence for a fragmentary resource-based theory from the 1930s.[citation needed] RBV proposes that firms are heterogeneous because they possess heterogeneous resources, meaning firms can have different strategies because they have different resource mixes.[2]
The RBV focuses managerial attention on the firm's internal resources in an effort to identify those assets, capabilities and competencies with the potential to deliver superior competitive advantages.
https://en.wikipedia.org/wiki/Resource-based_view
Impression management is a conscious or subconscious process in which people attempt to influence the perceptions of other people about a person, object or event by regulating and controlling information in social interaction.[1] It was first conceptualized by Erving Goffman in 1959 in The Presentation of Self in Everyday Life, and then was expanded upon in 1967.
Impression management behaviors include accounts (providing "explanations for a negative event to escape disapproval"), excuses (denying "responsibility for negative outcomes"), and opinion conformity ("speak(ing) or behav(ing) in ways consistent with the target"), along with many others.[2] By utilizing such behaviors, those who partake in impression management are able to control others' perception of them or events pertaining to them. Impression management is possible in nearly any situation, such as in sports (wearing flashy clothes or trying to impress fans with their skills), or on social media (only sharing positive posts). Impression management can be used with either benevolent or malicious intent.
Impression management is usually used synonymously with self-presentation, in which a person tries to influence the perception of their image. The notion of impression management was first applied to face-to-face communication, but then was expanded to apply to computer-mediated communication. The concept of impression management is applicable to academic fields of study such as psychology and sociology as well as practical fields such as corporate communication and media.
https://en.wikipedia.org/wiki/Impression_management
Goffman proposes that performers "can use dramaturgical discipline as a defense to ensure that the 'show' goes on without interruption."[3] Goffman contends that dramaturgical discipline includes:[3]
- coping with dramaturgical contingencies;
- demonstrating intellectual and emotional involvement;
- remembering one's part and not committing unmeant gestures or faux pas;
- not giving away secrets involuntarily;
- covering up inappropriate behavior on the part of teammates on the spur of the moment;
- offering plausible reasons or deep apologies for disruptive events;
- maintaining self-control (for example, speaking briefly and modestly);
- suppressing emotions to private problems; and
- suppressing spontaneous feelings.
https://en.wikipedia.org/wiki/Impression_management
Manipulation and ethics
In business, "managing impressions" normally "involves someone trying to control the image that a significant stakeholder has of them". The ethics of impression management has been hotly debated on whether we should see it as an effective self-revelation or as cynical manipulation.[3] Some people insist that impression management can reveal a truer version of the self by adopting the strategy of being transparent. Because transparency "can be provided so easily and because it produces information of value to the audience, it changes the nature of impression management from being cynically manipulative to being a kind of useful adaptation".
Virtue signalling is used within groups to criticize their own members for valuing outward appearance over substantive action (having a real or permanent, rather than apparent or temporary, existence).
Psychological manipulation is a type of social influence that aims to change the behavior or perception of others through abusive, deceptive, or underhanded tactics.[26] By advancing the interests of the manipulator, often at another's expense, such methods could be considered exploitative, abusive, devious, and deceptive. The process of manipulation involves bringing an unknowing victim under the domination of the manipulator, often using deception, and using the victim to serve their own purposes.
Machiavellianism is a term that some social and personality psychologists use to describe a person's tendency to be unemotional, and therefore able to detach him or herself from conventional morality and hence to deceive and manipulate others.[27] (See also Machiavellianism in the workplace.)
Lying is a destructive force that lets manipulators shape an environment around their own narcissism; a person's mind can be manipulated into believing such antics are true, even though they are purely deceptive and unethical.[28] Theories suggest that manipulation can have a major effect on the dynamic of a relationship. Mistrust can trigger changes in a person's attitude and character, leading them to misbehave; relationships with a positive force allow a greater exchange, whereas relationships built on poor moral values tend toward detachment and disengagement.[29] Dark personalities and manipulation belong to the same entity, and they stand between a person and their attainable goals when that person's perspective is focused only on self-centredness.[30] Such personalities invite a range of erratic behaviours that can corrupt the mind into practicing violent acts, resulting in rage and physical harm.[31]
https://en.wikipedia.org/wiki/Impression_management
The affordances of a certain medium also influence the way a user self-presents.[46] Communication via a professional medium such as e-mail would result in professional self-presentation.[47] The individual would use greetings, correct spelling, grammar and capitalization as well as scholastic language. Personal communication mediums such as text-messaging would result in a casual self-presentation where the user shortens words, includes emojis and selfies and uses less academic language.
https://en.wikipedia.org/wiki/Impression_management
https://en.wikipedia.org/wiki/Category:Sociology_of_technology
Context collapse or "the flattening of multiple audiences into a single context"[1] is a term arising out of the study of human interaction on the internet, especially within social media.[2] Context collapse "generally occurs when a surfeit of different audiences occupy the same space, and a piece of information intended for one audience finds its way to another" with that new audience's reaction being uncharitable and highly negative for failing to understand the original context.[3]
https://en.wikipedia.org/wiki/Context_collapse
Ingratiating is a psychological technique in which an individual attempts to influence another person by becoming more likeable to their target. This term was coined by social psychologist Edward E. Jones, who further defined ingratiating as "a class of strategic behaviors illicitly designed to influence a particular other person concerning the attractiveness of one's personal qualities."[1] Ingratiation research has identified some specific tactics of employing ingratiation:
- Complimentary Other-Enhancement: the act of using compliments or flattery to improve the esteem of another individual.[1]
- Conformity in Opinion, Judgment, and Behavior: altering the expression of one's personal opinions to match the opinion(s) of another individual.[1]
- Self-Presentation or Self-Promotion: explicit presentation of an individual's own characteristics, typically done in a favorable manner.[1]
- Rendering Favors: Performing helpful requests for another individual.[1]
- Modesty: Moderating the estimation of one's own abilities, sometimes seen as self-deprecation.[2]
- Expression of Humour: any event shared by an individual with the target individual that is intended to be amusing.[3]
- Instrumental Dependency: the act of convincing the target individual that the ingratiator is completely dependent upon him/her.[4]
- Name-dropping: the act of referencing one or more other individuals in a conversation with the intent of using the reference(s) to increase perceived attractiveness or credibility.[4]
Research has also identified three distinct types of ingratiation, each defined by their ultimate goal. Regardless of the goal of ingratiation, the tactics of employment remain the same:
- Acquisitive ingratiation: ingratiation with the goal of obtaining some form of resource or reward from a target individual.[1][5]
- Protective Ingratiation: ingratiation used to prevent possible sanctions or other negative consequences elicited from a target individual.[1][5]
- Significance ingratiation: ingratiation designed to cultivate respect and/or approval from a target individual, rather than an explicit reward.[1]
Ingratiation has been confused with another social psychological term, Impression management. Impression management is defined as "the process by which people control the impressions others form of them."[6] While these terms may seem similar, it is important to note that impression management represents a larger construct of which ingratiation is a component. In other words, ingratiation is a method of impression management.[citation needed]
https://en.wikipedia.org/wiki/Ingratiation
In Marxist philosophy, a character mask (German: Charaktermaske) is a prescribed social role which conceals the contradictions of a social relation or order. The term was used by Karl Marx in published writings from the 1840s to the 1860s, and also by Friedrich Engels. It is related to the classical Greek concepts of mimesis (imitative representation using analogies) and prosopopoeia (impersonation or personification), and the Roman concept of persona,[1] but also differs from them.[2] Neo-Marxist and non-Marxist sociologists,[3] philosophers[4] and anthropologists[5] have used character masks to interpret how people relate in societies with a complex division of labour, where people depend on trade to meet many of their needs. Marx's own notion of the character mask was not a fixed idea with a singular definition.
https://en.wikipedia.org/wiki/Character_mask
Reputation management, originally a public relations term, refers to the influencing, controlling, enhancing, or concealing of an individual's or group's reputation. The growth of the internet and social media led to growth of reputation management companies, with search results as a core part of a client's reputation.[1] Online reputation management, sometimes abbreviated as ORM, focuses on the management of product and service search engine results.[2] Ethical grey areas include mug shot removal sites, astroturfing customer review sites, censoring complaints, and using search engine optimization tactics to influence results. In other cases, the ethical lines are clear; some reputation management companies are closely connected to websites that publish unverified and libelous statements about people.[3] Such unethical companies charge thousands of dollars to remove these posts – temporarily – from their websites.[3]
This field of public relations has developed extensively with the growth of the internet and social media and the advent of reputation management companies. The overall outlook of search results has become an integral part of what defines "reputation", and reputation management now exists under two spheres: online and offline reputation management.
Online reputation management focuses on the management of product and service search results within the digital space, which is why it is common to see the same suggested links on the first page of a Google search.[1] A variety of electronic markets and online communities, such as eBay, Amazon and Alibaba, have ORM systems built in; by using effective control nodes, these systems can minimize threats and protect themselves from possible misuse and abuse by malicious nodes in decentralized overlay networks.[4]
Offline reputation management shapes the public perception of an entity outside the digital sphere, using clearly defined controls and measures aimed at a desired result that ideally represents what stakeholders think and feel about that entity.[5] The most popular controls for offline reputation management include social responsibility, media visibility, press releases in print media and sponsorship, amongst related tools.[6]
In the 2010s, marketing a company and promoting their products online have become large components of business strategies. Companies are trying to be more aware of how they are perceived by their audiences both inside and outside their target market. A problem which often arises from this is false advertising.[7] In the past, contribution of internet posts and blogs to a company would have been a foreign concept to most corporations and their consumers. However, with more competitors and more clutter, it is increasingly difficult to get noticed and become popular within the realm of online business or among influencers because of how the algorithms work on social media.
Reputation management is a marketing technique used by companies to restore a lost reputation or to establish a new one.[8]
https://en.wikipedia.org/wiki/Reputation_management
Stigma management is the process of concealing or disclosing aspects of one's identity to minimize social stigma.[1]
When a person receives unfair treatment or alienation due to a social stigma, the effects can be detrimental. Social stigmas are defined as any aspect of an individual's identity that is devalued in a social context.[2] These stigmas can be categorized as visible or invisible, depending on whether the stigma is readily apparent to others. Visible stigmas refer to characteristics such as race, age, gender, physical disabilities, or deformities, whereas invisible stigmas refer to characteristics such as sexual orientation, gender identity, religious affiliation, early pregnancy, certain diseases, or mental illnesses.[3]
When individuals possess invisible stigmas, they must decide whether or not to reveal their association with a devalued group to others.[4] This decision can be an incredibly difficult one, as revealing one's invisible stigma can have both positive[5] and negative[6] consequences depending on several situational factors. In contrast, a visible stigma requires immediate action to diminish communication tension and acknowledge a deviation from the norm. People possessing visible stigmas often use compensatory strategies to reduce potential interpersonal discrimination that they may face.[7]
https://en.wikipedia.org/wiki/Stigma_management
The social environment, social context, sociocultural context or milieu refers to the immediate physical and social setting in which people live or in which something happens or develops. It includes the culture that the individual was educated or lives in, and the people and institutions with whom they interact.[1] The interaction may be in person or through communication media, even anonymous or one-way,[2] and may not imply equality of social status. The social environment is a broader concept than that of social class or social circle.
The physical and social environment is a determining factor in active and healthy aging in place, being a central factor in the study of environmental gerontology.[3]
https://en.wikipedia.org/wiki/Social_environment
Social status is the level of social value a person is considered to possess.[1][2] More specifically, it refers to the relative level of respect, honour, assumed competence, and deference accorded to people, groups, and organizations in a society. Status is based in widely shared beliefs about who members of a society think holds comparatively more or less social value, in other words, who they believe is better in terms of competence or moral traits.[3] Status is determined by the possession of various characteristics culturally believed to indicate superiority or inferiority (e.g., confident manner of speech or race). As such, people use status hierarchies to allocate resources, leadership positions, and other forms of power. In doing so, these shared cultural beliefs make unequal distributions of resources and power appear natural and fair, supporting systems of social stratification.[4] Status hierarchies appear to be universal across human societies, affording valued benefits to those who occupy the higher rungs, such as better health, social approval, resources, influence, and freedom.[2]
https://en.wikipedia.org/wiki/Social_status
Social stratification refers to a society's categorization of its people into groups based on socioeconomic factors like wealth, income, race, education, ethnicity, gender, occupation, social status, or derived power (social and political). As such, stratification is the relative social position of persons within a social group, category, geographic region, or social unit.[1][2][3]
In modern Western societies, social stratification is typically defined in terms of three social classes: the upper class, the middle class, and the lower class; in turn, each class can be subdivided into the upper-stratum, the middle-stratum, and the lower stratum.[4] Moreover, a social stratum can be formed upon the bases of kinship, clan, tribe, or caste, or all four.
The categorization of people by social stratum occurs most clearly in complex state-based, polycentric, or feudal societies, the latter being based upon socio-economic relations among classes of nobility and classes of peasants. Whether social stratification first appeared in hunter-gatherer, tribal, and band societies or whether it began with agriculture and large-scale means of social exchange remains a matter of debate in the social sciences.[5] The structures of social stratification arise from inequalities of status among persons; therefore, the degree of social inequality determines a person's social stratum. Generally, the greater the social complexity of a society, the more social stratification exists, by way of social differentiation.[6]
Stratification can have a number of effects. For example, neighborhood spatial and racial stratification can have an effect on differential access in mortgage credit.[7]
https://en.wikipedia.org/wiki/Social_stratification
In system theory, "differentiation" is the increase of subsystems in a modern society, increasing the complexity of that society. Each subsystem can make different connections with other subsystems, and this leads to more variation within the system in order to respond to variation in the environment.
Differentiation that leads to more variation allows for better responses to the environment, and also for faster evolution (or perhaps sociocultural evolution), which is defined sociologically as a process of selection from variation; the more differentiation (and thus variation) that is available, the better the selection.[1]: 95–96
https://en.wikipedia.org/wiki/Differentiation_(sociology)
Stratificatory differentiation
Stratificatory differentiation or social stratification is a vertical differentiation according to rank or status in a system conceived as a hierarchy. Every rank fulfills a particular and distinct function in the system, for instance from the manufacturing company president, through the plant manager, down to the assembly line worker. In segmentary differentiation inequality is an accidental variance and serves no essential function; in stratified systems, however, inequality is systemic to the functioning of the system. A stratified system is more concerned with the higher ranks (president, manager) than with the lower ranks (assembly worker) with regard to "influential communication." However, the ranks are dependent on each other, and the social system will collapse unless all ranks realize their functions. This type of system tends to require the lower ranks to initiate conflict in order to shift the influential communication to their level.[1]: 97
Code
Code is a way to distinguish elements within a system from those elements not belonging to that system. It is the basic language of a functional system. Examples are truth for the science system, payment for the economic system, legality for the legal system; its purpose is to limit the kinds of permissible communication. According to Luhmann a system will only understand and use its own code, and will not understand nor use the code of another system; there is no way to import the code of one system into another because the systems are closed and can only react to things within their environment.[1]: 100
https://en.wikipedia.org/wiki/Differentiation_(sociology)
Understanding the risk of complexity
In segmentary differentiation, if a segment fails to fulfill its function, this does not affect or threaten the larger system: if an auto plant in Michigan stops production, this does not threaten the overall system or the plants in other locations. However, as complexity increases, so does the risk of system breakdown. If a rank structure in a stratified system fails, it threatens the system; a center-periphery system might be threatened if the control measure, the center or headquarters, failed; and in a functionally differentiated system, because the units are interdependent despite their independence, the failure of one unit will cause a problem for the social system, possibly leading to its breakdown. The growth of complexity increases the ability of a system to deal with its environment, but it also increases the risk of system breakdown. More complex systems do not necessarily exclude less complex systems; in some instances the more complex system may require the existence of the less complex system in order to function.[1]: 98–100
https://en.wikipedia.org/wiki/Differentiation_(sociology)
A paradox is a logically self-contradictory statement or a statement that runs contrary to one's expectation.[1][2] It is a statement that, despite apparently valid reasoning from true premises, leads to a seemingly self-contradictory or a logically unacceptable conclusion.[3][4] A paradox usually involves contradictory-yet-interrelated elements that exist simultaneously and persist over time.[5][6][7] They result in "persistent contradiction between interdependent elements" leading to a lasting "unity of opposites".[8]
https://en.wikipedia.org/wiki/Paradox
Common themes in paradoxes include self-reference, infinite regress, circular definitions, and confusion or equivocation between different levels of abstraction.
https://en.wikipedia.org/wiki/Paradox
Thought-experiments can also yield interesting paradoxes. The grandfather paradox, for example, would arise if a time-traveler were to kill his own grandfather before his mother or father had been conceived, thereby preventing his own birth.[20] This is a specific example of the more general observation of the butterfly effect, or that a time-traveller's interaction with the past—however slight—would entail making changes that would, in turn, change the future in which the time-travel was yet to occur, and would thus change the circumstances of the time-travel itself.
https://en.wikipedia.org/wiki/Paradox
In linguistics, veridicality (from Latin "truthfully said") is a semantic or grammatical assertion of the truth of an utterance.
https://en.wikipedia.org/wiki/Veridicality
A society is a group of individuals involved in persistent social interaction, or a large social group sharing the same spatial or social territory, typically subject to the same political authority and dominant cultural expectations. Societies are characterized by patterns of relationships (social relations) between individuals who share a distinctive culture and institutions; a given society may be described as the sum total of such relationships among its constituent members. In the social sciences, a larger society often exhibits stratification or dominance patterns in subgroups.
https://en.wikipedia.org/wiki/Society
The bourgeoisie (/ˌbʊərʒwɑːˈziː/ BOORZH-wah-ZEE, French: [buʁʒwazi]) is a class of business owners and merchants which emerged in the Late Middle Ages, originally as a "middle class" between peasantry and aristocracy. They are traditionally contrasted with the proletariat by their wealth, political power, and education,[1] as well as their access to and control of cultural and financial capital. They are sometimes divided into a petty (petite), middle (moyenne), large (grande), upper (haute), and ancient (ancienne) bourgeoisie and collectively designated as "the bourgeoisie".
The bourgeoisie in its original sense is intimately linked to the political ideology of Liberalism and its existence within cities, recognized as such by their urban charters (e.g., municipal charters, town privileges, German town law), so there was no bourgeoisie apart from the citizenry of the cities. Rural peasants came under a different legal system.
In Communist philosophy, the bourgeoisie is the social class that came to own the means of production during modern industrialization and whose societal concerns are the value of private property and the preservation of capital to ensure the perpetuation of their economic dominance in society.[2]
https://en.wikipedia.org/wiki/Bourgeoisie
Pars pro toto (Latin for 'a part (taken) for the whole'; /ˌpɑːrz proʊ ˈtoʊtoʊ/;[1] Latin: [ˈpars proː ˈtoːtoː]),[2] is a figure of speech where the name of a portion of an object, place, or concept is used or taken to represent its entirety. It is distinct from a merism, which is a reference to a whole by an enumeration of parts; metonymy, where an object, place, or concept is called by something or some place associated with it; or synecdoche, which can refer both to pars pro toto and its inverse: Totum pro parte (Latin for 'the whole for a part').
In the context of language, pars pro toto means that something is named after a part or subset of it, or after a limited characteristic, which in itself is not necessarily representative of the whole. For example, "glasses" is a pars pro toto name for something that consists of more than literally just two pieces of glass (the frame, nosebridge, temples, etc. as well as the lenses). Pars pro toto usage is especially common in political geography, with examples including "Russia" or "Russians", used to refer to the entire former Russian Empire or former Soviet Union or its people; "Holland" for the Netherlands; and, particularly in languages other than English, using the translation of "England" in that language to refer to Great Britain or the United Kingdom. Among English-speakers, "Britain" is a common pars pro toto shorthand for the United Kingdom. "Schweiz", Switzerland's name in German, comes from its central canton of Schwyz.
The inverse of a pars pro toto is a totum pro parte, in which the whole is used to describe a part.[3] The term synecdoche is used for both.
https://en.wikipedia.org/wiki/Pars_pro_toto
In psychology, affordance is what the environment offers the individual. In design, affordance has a narrower meaning; it refers to possible actions that an actor can readily perceive.
American psychologist James J. Gibson coined the term in his 1966 book, The Senses Considered as Perceptual Systems,[1] and it occurs in many of his earlier essays.[2] His best-known definition is from his 1979 book, The Ecological Approach to Visual Perception:
The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. ... It implies the complementarity of the animal and the environment.[3]
The word is used in a variety of fields: perceptual psychology, cognitive psychology, environmental psychology, criminology, industrial design, human–computer interaction (HCI), interaction design, user-centered design, communication studies, instructional design, science, technology and society (STS), sports science and artificial intelligence.
https://en.wikipedia.org/wiki/Affordance
Self-verification is a social psychological theory that asserts people want to be known and understood by others according to their firmly held beliefs and feelings about themselves, that is self-views (including self-concepts and self-esteem). It is one of the motives that drive self-evaluation, along with self-enhancement and self-assessment.
Because chronic self-concepts and self-esteem play an important role in understanding the world, providing a sense of coherence, and guiding action, people become motivated to maintain them through self-verification. Such strivings provide stability to people's lives, making their experiences more coherent, orderly, and comprehensible than they would be otherwise. Self-verification processes are also adaptive for groups, groups of diverse backgrounds, and the larger society, in that they make people predictable to one another and thus facilitate social interaction.[1] To this end, people engage in a variety of activities that are designed to obtain self-verifying information.
Developed by William Swann (1981), the theory grew out of earlier writings which held that people form self-views so that they can understand and predict the responses of others and know how to act toward them.[2]
https://en.wikipedia.org/wiki/Self-verification_theory
Self-monitoring, a concept introduced in the 1970s by Mark Snyder, describes the extent to which people monitor their self-presentations, expressive behavior, and nonverbal affective displays.[1] Snyder held that human beings generally differ in substantial ways in their abilities and desires to engage in expressive controls (see dramaturgy).[2] Self-monitoring is defined as a personality trait that refers to an ability to regulate behavior to accommodate social situations. People concerned with their expressive self-presentation (see impression management) tend to closely monitor their audience in order to ensure appropriate or desired public appearances.[3] Self-monitors try to understand how individuals and groups will perceive their actions. Some personality types commonly act spontaneously (low self-monitors) and others are more apt to purposely control and consciously adjust their behavior (high self-monitors).[4] Recent studies suggest that a distinction should be made between acquisitive and protective self-monitoring due to their different interactions with metatraits.[5] This differentiates the motive behind self-monitoring behaviours: for the purpose of acquiring appraisal from others (acquisitive) or protecting oneself from social disapproval (protective).
https://en.wikipedia.org/wiki/Self-monitoring
Quantified self refers both to the cultural phenomenon of self-tracking with technology and to a community of users and makers of self-tracking tools who share an interest in "self-knowledge through numbers".[1] Quantified self practices overlap with the practice of lifelogging and other trends that incorporate technology and data acquisition into daily life, often with the goal of improving physical, mental, and emotional performance. The widespread adoption in recent years of wearable fitness and sleep trackers such as the Fitbit or the Apple Watch,[2] combined with the increased presence of Internet of things in healthcare and in exercise equipment, have made self-tracking accessible to a large segment of the population.
Other terms for using self-tracking data to improve daily functioning[3] are auto-analytics, body hacking, self-quantifying, self-surveillance, sousveillance (recording of personal activity), and personal informatics.[4][5][6]
https://en.wikipedia.org/wiki/Quantified_self
An intermediary (also known as a middleman or go-between) is a third party that offers intermediation services between two parties, which involves conveying messages between principals in a dispute, preventing direct contact and potential escalation of the issue. In law, intermediaries can facilitate communication between a vulnerable witness, defendant and court personnel to acquire valuable evidence, whilst in barter, the intermediary is a person or group who stores valuables in trade until they are needed, until parties to the barter or others have space available to take delivery of them and store them, or until other conditions are met.
In diplomacy and international relations, an intermediary may convey messages between principals in a dispute, allowing the avoidance of direct principal-to-principal contact.[1] Where the two parties are geographically distant, the process may be termed shuttle diplomacy. Where parties do not want formal diplomatic relations, an intermediary state may serve as a protecting power facilitating diplomacy without diplomatic recognition.
https://en.wikipedia.org/wiki/Intermediary
In psychology, reactance is an unpleasant motivational reaction to offers, persons, rules, or regulations that threaten or eliminate specific behavioral freedoms. Reactance occurs when an individual feels that an agent is attempting to limit one's choice of response and/or range of alternatives.
Reactance can occur when someone is heavily pressured into accepting a certain view or attitude. Reactance can encourage an individual to adopt or strengthen a view or attitude which is indeed contrary to that which was intended — which is to say, to a response of noncompliance — and can also increase resistance to persuasion. Some individuals might employ reverse psychology in a bid to exploit reactance for their benefit, in an attempt to influence someone to choose the opposite of what is being requested. Reactance can occur when an individual senses that someone is trying to compel them to do something; often the individual will offer resistance and attempt to extricate themselves from the situation.
Some individuals are naturally high in reactance, a personality characteristic called trait reactance.[1]
https://en.wikipedia.org/wiki/Reactance_(psychology)
In psychology, precommitment refers to a strategy or a method of self-control that an agent may use to restrict the number of choices available to them at a future time.[1] The strategy may also involve the imposition of obstacles or additional costs to certain courses of action in advance. As theorized by the social scientist Jon Elster, agents may precommit themselves when they predict that their preferences will change but wish to ensure that their future actions will align with their current preferences.[2]
Precommitment has also been studied as a bargaining strategy in which agents bind themselves to one course of action in order to enhance the credibility of present threats. Some scholars have proposed that collective political agents may also engage in precommitment by adopting constitutions that limit the scope of future legislation.[3] The validity of this application of precommitment theory has been called into question, however.
https://en.wikipedia.org/wiki/Precommitment
In general, an incentive is anything that persuades a person to alter their behaviour in the desired manner.[1] The basic laws of economics and of behaviour emphasise that incentives matter: higher incentives amount to greater levels of effort and therefore higher levels of performance.[2]
https://en.wikipedia.org/wiki/Incentive
Artificial scarcity is scarcity of items despite the technology for production or the sufficient capacity for sharing. The most common causes are monopoly pricing structures, such as those enabled by laws that restrict competition or by high fixed costs in a particular marketplace. The inefficiency associated with artificial scarcity is formally known as a deadweight loss.
https://en.wikipedia.org/wiki/Artificial_scarcity
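The deadweight loss named above can be made concrete with a small, self-contained calculation. The following Python fragment is only an illustrative sketch under assumed numbers (a linear demand curve, a constant marginal cost of 20, and monopoly pricing as the source of the artificial restriction); it is not taken from the article. It compares the competitive quantity with the monopoly quantity and measures the surplus from mutually beneficial trades that no longer occur.

```python
# Illustrative deadweight-loss calculation for a monopoly that restricts output.
# The linear demand curve and all parameter values are assumptions for this sketch.

def linear_demand_price(quantity, a=100.0, b=1.0):
    """Inverse demand: willingness to pay falls linearly with quantity."""
    return a - b * quantity

def monopoly_vs_competition(a=100.0, b=1.0, marginal_cost=20.0):
    # Competitive benchmark: price equals marginal cost.
    q_comp = (a - marginal_cost) / b
    # Monopoly: marginal revenue (a - 2bQ) equals marginal cost.
    q_mono = (a - marginal_cost) / (2 * b)
    p_mono = linear_demand_price(q_mono, a, b)
    # Deadweight loss: the triangle of trades that would have been mutually
    # beneficial but no longer happen because quantity is held below q_comp.
    dwl = 0.5 * (p_mono - marginal_cost) * (q_comp - q_mono)
    return q_comp, q_mono, p_mono, dwl

if __name__ == "__main__":
    q_comp, q_mono, p_mono, dwl = monopoly_vs_competition()
    print(f"competitive quantity: {q_comp:.1f}")
    print(f"monopoly quantity:    {q_mono:.1f} at price {p_mono:.1f}")
    print(f"deadweight loss:      {dwl:.1f}")
```

With these illustrative numbers, output falls from 80 to 40 units and the forgone surplus (the deadweight loss) is 800.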
Asymmetric competition refers to forms of business competition where firms are considered competitors in some markets or contexts but not in others.[1] In such cases a firm may choose to allocate competitive resources and marketing actions among its competitors out of proportion to their market share.[2][3][4][5] Asymmetric competition can be visualized using techniques such as multidimensional scaling and perceptual mapping.
https://en.wikipedia.org/wiki/Asymmetric_competition
Bounded rationality is the idea that rationality is limited when individuals make decisions, and under these limitations, rational individuals will select a decision that is satisfactory rather than optimal.[1]
Limitations include the difficulty of the problem requiring a decision, the cognitive capability of the mind, and the time available to make the decision. Decision-makers, in this view, act as satisficers, seeking a satisfactory solution with whatever information they have at the moment rather than an optimal solution. Therefore, humans do not undertake a full cost-benefit analysis to determine the optimal decision, but rather choose an option that fulfils their adequacy criteria.[2] An example occurs within organisations: when decision-makers must adhere to the operating conditions of their company, bounded rationality can result because the organisation is not able to choose the optimal option.[3]
Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as "rational" entities, as in rational choice theory or Downs' political agency model.[4] The concept of bounded rationality complements "rationality as optimization", which views decision-making as a fully rational process of finding an optimal choice given the information available.[5] Therefore, bounded rationality can be said to address the discrepancy between the assumed perfect rationality of human behaviour (which is utilised by other economics theories such as the Neoclassical approach), and the reality of human cognition.[6] In short, bounded rationality revises notions of "perfect" rationality to account for the fact that perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them. The concept of bounded rationality continues to influence (and be debated in) different disciplines, including political science, economics, psychology, law and cognitive science.[7]
https://en.wikipedia.org/wiki/Bounded_rationality
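The contrast between satisficing and full optimization described above can be sketched in a few lines. This is a toy illustration, not part of bounded rationality theory itself: the option values, the aspiration threshold of 80, and the use of a plain number as the evaluation function are all assumptions chosen for the example.

```python
# A minimal contrast between "rationality as optimization" and satisficing.
# Option scores and the aspiration threshold are made-up illustration values.
import random

def optimize(options, evaluate):
    """Full cost-benefit search: evaluate every option and return the best."""
    return max(options, key=evaluate)

def satisfice(options, evaluate, aspiration):
    """Bounded search: take the first option that is 'good enough'."""
    for option in options:
        if evaluate(option) >= aspiration:
            return option
    # If nothing clears the aspiration level, fall back to the best seen.
    return max(options, key=evaluate)

if __name__ == "__main__":
    random.seed(0)
    options = [random.uniform(0, 100) for _ in range(1000)]
    evaluate = lambda x: x  # here an option's value is just the number itself

    best = optimize(options, evaluate)                            # examines all 1000 options
    good_enough = satisfice(options, evaluate, aspiration=80.0)   # may stop early
    print(f"optimal choice:     {best:.1f}")
    print(f"satisficing choice: {good_enough:.1f}")
```

The satisficer typically stops after examining only a handful of options, trading a slightly lower value for a far smaller search cost.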
Caveat emptor (/ˈɛmptɔːr/; from caveat, "may he/she beware", a subjunctive form of cavēre, "to beware" + ēmptor, "buyer") is Latin for "Let the buyer beware".[1] It has become a proverb in English. Generally, caveat emptor is the contract law principle that controls the sale of real property after the date of closing, but may also apply to sales of other goods. The phrase caveat emptor and its use as a disclaimer of warranties arises from the fact that buyers typically have less information than the seller about the good or service they are purchasing. This quality of the situation is known as 'information asymmetry'. Defects in the good or service may be hidden from the buyer, and only known to the seller.
It is a short form of Caveat emptor, quia ignorare non debuit quod jus alienum emit ("Let a purchaser beware, for he ought not to be ignorant of the nature of the property which he is buying from another party.")[2] I.e. the buyer should assure himself that the product is good and that the seller had the right to sell it, as opposed to receiving stolen property.
A common way that information asymmetry between seller and buyer has been addressed is through a legally binding warranty, such as a guarantee of satisfaction.
https://en.wikipedia.org/wiki/Caveat_emptor
Inequality of bargaining power in law, economics and social sciences refers to a situation where one party to a bargain, contract or agreement has more and better alternatives than the other party. The party with better alternatives has greater power to reject the deal and is therefore more likely to obtain favourable terms and greater negotiating power. Inequality of bargaining power is generally thought to undermine the freedom of contract, to result in a disproportionate level of freedom between parties, and to mark a place at which markets fail.
Where bargaining power is persistently unequal, the concept of inequality of bargaining power serves as a justification for the implication of mandatory terms into contracts by law, or the non-enforcement of a contract by the courts.
https://en.wikipedia.org/wiki/Inequality_of_bargaining_power
In economics, perfect information (sometimes referred to as "no hidden information") is a feature of perfect competition. With perfect information in a market, all consumers and producers have complete and instantaneous knowledge of all market prices, their own utility, and own cost functions.
In game theory, a sequential game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred, including the "initialization event" of the game (e.g. the starting hands of each player in a card game).[1][2][3][4]
Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions, payoffs, strategies and "types". A game with perfect information may or may not have complete information.
Games where some aspect of play is hidden from opponents – such as the cards in poker and bridge – are examples of games with imperfect information.[5][6]
https://en.wikipedia.org/wiki/Perfect_information
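Because a game of perfect information lets every mover see the full history of play, such games can be solved by backward induction: solve the final decisions first and work back up the tree. The sketch below applies this to a two-stage toy game; the tree shape and the payoffs are invented purely for illustration and are not from the article.

```python
# Backward induction on a toy two-stage game of perfect information.
# Each decision node records who moves and the child node for each action;
# leaves are payoff tuples (payoff to player 0, payoff to player 1).
# The tree and payoffs are invented purely for illustration.

GAME = (0, {                        # player 0 moves first
    "left":  (1, {"up":   (3, 1),   # then player 1 moves
                  "down": (0, 0)}),
    "right": (1, {"up":   (2, 2),
                  "down": (1, 3)}),
})

def solve(node):
    """Return the payoff vector reached under optimal play from this node."""
    if not isinstance(node[1], dict):   # a leaf: just a payoff tuple
        return node
    player, children = node
    # With perfect information the mover knows exactly where they are,
    # so they simply pick the subtree whose solved payoff is best for them.
    return max((solve(child) for child in children.values()),
               key=lambda payoffs: payoffs[player])

if __name__ == "__main__":
    print("payoffs under optimal play:", solve(GAME))   # expected: (3, 1)
```

In a game of imperfect information (hidden cards, simultaneous moves) this recursion is not available as-is, because a player cannot condition their choice on moves they have not observed.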
The distinction between real prices and ideal prices is a distinction between actual prices paid for products, services, assets and labour (the net amount of money that actually changes hands), and computed prices which are not actually charged or paid in market trade, although they may facilitate trade.[1] The difference is between actual prices paid, and information about possible, potential or likely prices, or "average" price levels.[2] This distinction should not be confused with the difference between "nominal prices" (current-value) and "real prices" (adjusted for price inflation, and/or tax and/or ancillary charges).[3] It is more similar to, though not identical with, the distinction between "theoretical value" and "market price" in financial economics.[4]
Characteristics
Ideal prices, expressed in money-units, can be "estimated", "theorized" or "imputed" for accounting, trading, marketing or calculation purposes, for example using the law of averages. Often the actual prices of real transactions are combined with assumed prices, for the purpose of a price calculation or estimate. Even if such prices therefore may not directly correspond to transactions involving actually traded products, assets or services, they can nevertheless provide "price signals" which influence economic behavior.
For example, if statisticians publish aggregated price estimates about the economy as a whole, market actors are likely to respond to this price information, even if it is far from exact, if it is based on a very large number of assumptions, and if it is later revised. The release of new GDP data, for instance, often has an immediate effect on stock market activity, insofar as it is interpreted as an indicator of whether and how fast the market – and consequently the incomes generated by it – is growing or declining.
Ideal prices are typically prices that would apply in trade, if certain assumed conditions apply (and they may not). The number of ideal prices used for calculations or signalling in the world vastly exceeds the number of real prices fetched. At any point in time, most economic goods and services in society are being owned or used, but not traded; nevertheless people are constantly extrapolating prices which would apply if they were traded in markets or if they had to be replaced. Such price information is essential to estimate the possible incomes, budgetary implications or expenditures associated with a transaction.
The distinction is currently best known in the profession of auditing.[5] It also has enormous significance for economic theory, and more specifically for econometric measurement and price theory; the main reason is that price data is very often the basis for making economic and policy decisions.
https://en.wikipedia.org/wiki/Real_prices_and_ideal_prices
Law and economics, or economic analysis of law, is the application of microeconomic theory to the analysis of law, which emerged primarily from scholars of the Chicago school of economics. Economic concepts are used to explain the effects of laws, to assess which legal rules are economically efficient, and to predict which legal rules will be promulgated.[1] There are two major branches of law and economics;[2] one based on the application of the methods and theories of neoclassical economics to the positive and normative analysis of the law, and a second branch which focuses on an institutional analysis of law and legal institutions, with a broader focus on economic, political, and social outcomes, and overlapping with analyses of the institutions of politics and governance.
https://en.wikipedia.org/wiki/Law_and_economics
In neoclassical economics, market failure is a situation in which the allocation of goods and services by a free market is not Pareto efficient, often leading to a net loss of economic value.[1] Market failures can be viewed as scenarios where individuals' pursuit of pure self-interest leads to results that are not efficient – that can be improved upon from the societal point of view.[2][3] The first known use of the term by economists was in 1958,[4] but the concept has been traced back to the Victorian philosopher Henry Sidgwick.[5] Market failures are often associated with public goods,[6] time-inconsistent preferences,[7] information asymmetries,[8] non-competitive markets, principal–agent problems, or externalities.[9]
The existence of a market failure is often the reason that self-regulatory organizations, governments or supra-national institutions intervene in a particular market.[10][11] Economists, especially microeconomists, are often concerned with the causes of market failure and possible means of correction.[12] Such analysis plays an important role in many types of public policy decisions and studies.
However, government policy interventions, such as taxes, subsidies, wage and price controls, and regulations, may also lead to an inefficient allocation of resources, sometimes called government failure.[13] Most mainstream economists believe that there are circumstances (like building codes or endangered species) in which it is possible for government or other organizations to improve the inefficient market outcome. Several heterodox schools of thought disagree with this as a matter of ideology.[14]
An ecological market failure exists when human activity in a market economy is exhausting critical non-renewable resources, disrupting fragile ecosystems, or overloading biospheric waste absorption capacities. In none of these cases does the criterion of Pareto efficiency obtain.[15] It is critical to create checks on human activities that cause societal negative externalities.
https://en.wikipedia.org/wiki/Market_failure
Pollution is the introduction of contaminants into the natural environment that cause adverse change.[1] Pollution can take the form of any substance (solid, liquid, or gas) or energy (such as radioactivity, heat, sound, or light). Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants.
Although environmental pollution can be caused by natural events, the word pollution generally implies that the contaminants have an anthropogenic source – that is, a source created by human activities, such as manufacturing, extractive industries, poor waste management, transportation or agriculture. Pollution is often classed as point source (coming from a highly concentrated specific site, such as a factory or mine) or nonpoint source pollution (coming from widespread, distributed sources, such as microplastics or agricultural runoff).
Many sources of pollution were unregulated parts of industrialization during the 19th and 20th centuries until the emergence of environmental regulation and pollution policy in the latter half of the 20th century. Sites where historically polluting industries released persistent pollutants may have legacy pollution long after the source of the pollution is stopped. Major forms of pollution include air pollution, light pollution, litter, noise pollution, plastic pollution, soil contamination, radioactive contamination, thermal pollution, visual pollution, and water pollution.
Pollution has widespread consequences for human and environmental health and systemic impacts on social and economic systems. In 2015, pollution killed nine million people worldwide (one in six deaths).[2][3] Air pollution accounted for 3⁄4 of these early deaths.[4][5] A 2022 literature survey found that levels of anthropogenic chemical pollution have exceeded planetary boundaries and now threaten entire ecosystems around the world.[6][7] Pollutants frequently have outsized impacts on vulnerable populations, such as children and the elderly, and marginalized communities, because polluting industries and toxic waste sites tend to be collocated with populations with less economic and political power.[8] This outsized impact is a core reason for the formation of the environmental justice movement,[9][10] and continues to be a core element of environmental conflicts, particularly in the Global South.
Because of the impacts of these chemicals, local, country and international policy have increasingly sought to regulate pollutants, resulting in increasing air and water quality standards, alongside regulation of specific waste streams. Regional and national policy is typically supervised by environmental agencies or ministries, while international efforts are coordinated by the UN Environmental Program and other treaty bodies. Pollution mitigation is an important part of all of the Sustainable Development Goals.[11]
Definitions and types
Various definitions of pollution exist, which may or may not recognize certain types, such as noise pollution or greenhouse gases. The United States Environmental Protection Agency defines pollution as "Any substances in water, soil, or air that degrade the natural quality of the environment, offend the senses of sight, taste, or smell, or cause a health hazard. The usefulness of the natural resource is usually impaired by the presence of pollutants and contaminants."[12] In contrast, the United Nations considers pollution to be the "presence of substances and heat in environmental media (air, water, land) whose nature, location, or quantity produces undesirable environmental effects."[13]
The major forms of pollution are listed below along with the particular contaminants relevant to each of them:
- Air pollution: the release of chemicals and particulates into the atmosphere. Common gaseous pollutants include carbon monoxide, sulfur dioxide, chlorofluorocarbons (CFCs) and nitrogen oxides produced by industry and motor vehicles. Photochemical ozone and smog are created as nitrogen oxides and hydrocarbons react to sunlight. Particulate matter, or fine dust, is characterized by its micrometre size, from PM10 down to PM2.5.
- Electromagnetic pollution: the overabundance of electromagnetic radiation in its non-ionizing form, such as radio and television transmissions and Wi-Fi. Although there is no demonstrable effect on humans, there can be interference with radio astronomy and effects on the safety systems of aircraft and cars.
- Light pollution: includes light trespass, over-illumination and astronomical interference.
- Littering: the criminal throwing of inappropriate man-made objects, unremoved, onto public and private properties.
- Noise pollution: which encompasses roadway noise, aircraft noise, industrial noise as well as high-intensity sonar.
- Plastic pollution: involves the accumulation of plastic products and microplastics in the environment that adversely affects wildlife, wildlife habitat, or humans.
- Soil contamination occurs when chemicals are released by spill or underground leakage. Among the most significant soil contaminants are hydrocarbons, heavy metals, MTBE,[14] herbicides, pesticides and chlorinated hydrocarbons.
- Radioactive contamination, resulting from 20th century activities in atomic physics, such as nuclear power generation and nuclear weapons research, manufacture and deployment. (See alpha emitters and actinides in the environment.)
- Thermal pollution: a temperature change in natural water bodies caused by human influence, such as the use of water as coolant in a power plant.
- Visual pollution, which can refer to the presence of overhead power lines, motorway billboards, scarred landforms (as from strip mining), open storage of trash, municipal solid waste or space debris.
- Water pollution, caused by the discharge of industrial wastewater from commercial and industrial waste (intentionally or through spills) into surface waters; discharges of untreated sewage and chemical contaminants, such as chlorine, from treated sewage; and releases of waste and contaminants into surface runoff flowing to surface waters (including urban runoff and agricultural runoff, which may contain chemical fertilizers and pesticides, as well as human feces from open defecation).[15][16][17]
https://en.wikipedia.org/wiki/Pollution
Insider trading is the trading of a public company's stock or other securities (such as bonds or stock options) based on material, nonpublic information about the company. In various countries, some kinds of trading based on insider information are illegal. This is because it is seen as unfair to other investors who do not have access to the information, as the investor with insider information could potentially make larger profits than a typical investor could make. The rules governing insider trading are complex and vary significantly from country to country. The extent of enforcement also varies from one country to another. The definition of insider in one jurisdiction can be broad and may cover not only insiders themselves but also any persons related to them, such as brokers, associates, and even family members. A person who becomes aware of non-public information and trades on that basis may be guilty of a crime.
Trading by specific insiders, such as employees, is commonly permitted as long as it does not rely on material information that is not in the public domain. Many jurisdictions require that such trading be reported so that the transactions can be monitored. In the United States and several other jurisdictions, trading conducted by corporate officers, key employees, directors, or significant shareholders must be reported to the regulator or publicly disclosed, usually within a few business days of the trade. In these cases, insiders in the United States are required to file Form 4 with the U.S. Securities and Exchange Commission (SEC) when buying or selling shares of their own companies. The authors of one study claim that illegal insider trading raises the cost of capital for securities issuers, thus decreasing overall economic growth.[1] However, some economists, such as Henry Manne, have argued that insider trading should be allowed and could, in fact, benefit markets.[2]
There has long been "considerable academic debate" among business and legal scholars over whether or not insider trading should be illegal.[3] Several arguments against outlawing insider trading have been identified: for example, although insider trading is illegal, most insider trading is never detected by law enforcement, and thus the illegality of insider trading might give the public the potentially misleading impression that "stock market trading is an unrigged game that anyone can play."[3] Some legal analysis has questioned whether insider trading actually harms anyone in the legal sense, since some have questioned whether insider trading causes anyone to suffer an actual "loss" and whether anyone who suffers a loss is owed an actual legal duty by the insiders in question.[3]
https://en.wikipedia.org/wiki/Insider_trading
In economics, insurance, and risk management, adverse selection is a market situation where buyers and sellers have different information. The result is that participants with key information might participate selectively in trades at the expense of other parties who do not have the same information.
In an ideal world, buyers should pay a price which reflects their willingness to pay and the value to them of the product or service, and sellers should sell at a price which reflects the quality of their goods and services.[1] For example, a poor quality product should be inexpensive and a high quality product should have a high price. However, when one party holds information that the other party does not have, they have the opportunity to damage the other party by maximizing self-utility, concealing relevant information, and perhaps even lying. Taking advantage of undisclosed information in an economic contract or trade of possession is known as adverse selection.
This opportunity has secondary effects: the party without the information can take steps to avoid entering into an unfair (maybe "rigged") contract, perhaps by withdrawing from the interaction, or a seller (buyer) asking a higher (lower) price, thus diminishing the volume of trade in the market. Furthermore, it can deter people from participating in the market, leading to less competition and thus higher profit margins for participants.
Sometimes the buyer may know the value of a good or service better than the seller. For example, a restaurant offering "all you can eat" at a fixed price may attract customers with a larger than average appetite, resulting in a loss for the restaurant.
A standard example is the market for used cars with hidden flaws ("lemons"). George Akerlof in his 1970 paper, "The Market for 'Lemons'", highlights the effect adverse selection has in the used car market, creating an imbalance between the sellers and the buyers that may lead to a market collapse. The paper further describes the effects of adverse selection in insurance as an example of the effect of information asymmetry on markets,[2] a sort of "generalized Gresham's law".[2] Since then, "adverse selection" has been widely used in many domains.
The theory behind market collapse starts with consumers who want to buy goods from an unfamiliar market. Sellers, who do have information about which good is high or poor quality, would aim to sell the poor quality goods at the same price as better goods, leading to a larger profit margin. The high quality sellers now no longer reap the full benefits of having superior goods, because poor quality goods pull the average price down to one which is no longer profitable for the sale of high quality goods. High quality sellers thus leave the market, thus reducing the quality and price of goods even further.[2] This market collapse is then caused by demand not rising in response to a fall in price, and the lower overall quality of market provisions. Sometimes the seller is the uninformed party instead, when consumers with undisclosed attributes purchase goods or contracts that are priced for other demographics.[2]
Adverse selection has been discussed for life insurance since the 1860s,[3] and the phrase has been used since the 1870s.[4]
https://en.wikipedia.org/wiki/Adverse_selection
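The unraveling logic of the "lemons" market described above can be traced with a few lines of arithmetic. The sketch below is not Akerlof's model reproduced, only the feedback loop the excerpt describes, under assumed numbers: car quality is uniform on [0, 100], a seller keeps any car worth more than the going price, and a buyer will pay at most 1.5 times the average quality of the cars still on offer. Each price attracts only the lower-quality sellers, which lowers what buyers will pay, which drives out more sellers.

```python
# Adverse-selection unraveling in a used-car market, as a toy iteration.
# Assumed setup (for illustration only): quality uniform on [0, 100],
# sellers value a car at its quality, buyers at 1.5x its quality, and
# buyers observe only the average quality of the cars actually offered.

def unraveling(initial_price=100.0, rounds=15, buyer_premium=1.5):
    price = initial_price
    for step in range(rounds):
        # Only cars worth no more than the price are offered for sale, so
        # with uniform quality the average car on offer is worth price / 2.
        avg_quality_on_offer = price / 2.0
        # Buyers adjust: they will pay at most their value of that average car.
        new_price = buyer_premium * avg_quality_on_offer
        print(f"round {step:2d}: price {price:7.2f} -> willingness to pay {new_price:7.2f}")
        price = new_price
    return price

if __name__ == "__main__":
    final_price = unraveling()
    print(f"after repeated adjustment the price has fallen to {final_price:.2f};")
    print("in the limit only the worst cars remain and the market collapses.")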
https://en.wikipedia.org/w/index.php?title=Causal_ambiguity&redirect=no
https://en.wikipedia.org/wiki/Charades
https://en.wikipedia.org/wiki/Pantomime
https://en.wikipedia.org/wiki/Theory_of_mind
In social psychology, naïve realism is the human tendency to believe that we see the world around us objectively, and that people who disagree with us must be uninformed, irrational, or biased.
Naïve realism provides a theoretical basis for several other cognitive biases, which are systematic errors when it comes to thinking and making decisions. These include the false consensus effect, actor-observer bias, bias blind spot, and fundamental attribution error, among others.
The term, as it is used in psychology today, was coined by social psychologist Lee Ross and his colleagues in the 1990s.[1][2] It is related to the philosophical concept of naïve realism, which is the idea that our senses allow us to perceive objects directly and without any intervening processes.[3] Social psychologists in the mid-20th century argued against this stance and proposed instead that perception is inherently subjective.[4]
Several prominent social psychologists have studied naïve realism experimentally, including Lee Ross, Andrew Ward, Dale Griffin, Emily Pronin, Thomas Gilovich, Robert Robinson, and Dacher Keltner. In 2010, the Handbook of Social Psychology recognized naïve realism as one of "four hard-won insights about human perception, thinking, motivation and behavior that ... represent important, indeed foundational, contributions of social psychology."[5]
https://en.wikipedia.org/wiki/Na%C3%AFve_realism_(psychology)
In psychology, the false consensus effect, also known as consensus bias, is a pervasive cognitive bias that causes people to “see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances”.[1] In other words, they assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population.
This false consensus is significant because it increases self-esteem (overconfidence effect). It can be derived from a desire to conform and be liked by others in a social environment. This bias is especially prevalent in group settings where one thinks the collective opinion of their own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. The false-consensus effect is not restricted to cases where people believe that their values are shared by the majority, but it still manifests as an overestimate of the extent of their belief.[2]
Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way.[3] There is no single cause for this cognitive bias; the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors. The bias may also result, at least in part, from non-social stimulus-reward associations.[4] Maintenance of this cognitive bias may be related to the tendency to make decisions with relatively little information. When faced with uncertainty and a limited sample from which to make decisions, people often "project" themselves onto the situation. When this personal knowledge is used as input to make generalizations, it often results in the false sense of being part of the majority.[5]
The false consensus effect has been widely observed and supported by empirical evidence. Previous research has suggested that cognitive and perceptual factors (motivated projection, accessibility of information, emotion, etc.) may contribute to the consensus bias, while recent studies have focused on its neural mechanisms. One recent study has shown that consensus bias may improve decisions about other people's preferences.[4] Ross, Greene and House first defined the false consensus effect in 1977, with emphasis on the relative commonness that people perceive in their own responses; however, similar projection phenomena had already attracted attention in psychology. Specifically, concerns about connections between individuals' personal predispositions and their estimates of peers had appeared in the literature for some time. For instance, Katz and Allport in 1931 showed that students' estimates of how frequently others cheated were positively correlated with their own behavior. Later, around 1970, the same phenomenon was found for political beliefs and prisoner's-dilemma situations. In 2017, researchers identified a persistent egocentric bias when participants learned about other people's snack-food preferences.[4] Moreover, recent studies suggest that the false consensus effect can also affect professional decision makers; specifically, it has been shown that even experienced marketing managers project their personal product preferences onto consumers.[6][7]
https://en.wikipedia.org/wiki/False_consensus_effect
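The pattern observed by Katz and Allport, and formalized in later false-consensus studies, can be checked on simple survey data: compare the prevalence estimates made by people who chose an option with the estimates made by people who did not. The sketch below is illustrative only; the responses are fabricated and no real study data are used.

```python
# Checking for a false-consensus pattern: do people who made a choice
# believe that choice is more common than people who chose otherwise do?
# The survey responses below are fabricated purely for illustration.

responses = [
    # (own choice, estimated % of others choosing option "A")
    ("A", 70), ("A", 65), ("A", 80), ("A", 60),
    ("B", 40), ("B", 35), ("B", 50), ("B", 45),
]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

estimates_by_a = [est for choice, est in responses if choice == "A"]
estimates_by_b = [est for choice, est in responses if choice == "B"]
gap = mean(estimates_by_a) - mean(estimates_by_b)

print(f"A-choosers think {mean(estimates_by_a):.0f}% of others choose A")
print(f"B-choosers think {mean(estimates_by_b):.0f}% of others choose A")
print(f"false-consensus gap: {gap:.0f} percentage points")
```

A positive gap means each group overestimates how widely its own choice is shared, which is the signature of the effect.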
See also
- Abilene paradox
- Attribution bias – Systematic errors made when people evaluate their own and others' behaviors
- Confirmation bias – Bias confirming existing attitudes
- "The Engineering of Consent"
- False-uniqueness effect
- Fundamental attribution error – Overattributing the cause of another's behavior to their personality instead of situational factors.
- Groupthink – Psychological phenomenon that occurs within a group of people
- Illusory superiority – Overestimating one's abilities and qualifications; a cognitive bias
- List of cognitive biases – Systematic patterns of deviation from norm or rationality in judgment
- Manufacturing Consent – Non-fiction book by Edward S. Herman and Noam Chomsky
- Omission bias – Tendency to favor inaction over action
- Overconfidence effect – Personal cognitive bias
- Pseudoconsensus
- Psychological projection – Attributing parts of the self to others
- Pluralistic ignorance – Incorrect perception of others' beliefs
- Social comparison bias
- Social projection – Psychological tendency of people to expect others to act or think similarly to themselves
- Value (ethics)
In psychoanalytic theory, a defence mechanism (American English: defense mechanism) is an unconscious psychological operation that functions to protect a person from anxiety-producing thoughts and feelings related to internal conflicts and outer stressors.[1][2][3]
Defence mechanisms (German: Abwehrmechanismen) are unconscious psychological processes employed to defend against feelings of anxiety and unacceptable impulses at the level of consciousness.[4] These processes include: repression, the exclusion of unacceptable desires and ideas from consciousness, though in certain circumstances they may resurface in a disguised or distorted form; identification, the incorporation of some aspects of an object into oneself, employed by the ego and superego to fortify the personality by attracting libido (sexual energy) away from objects and toward themselves;[5] rationalization, the justification of one's behaviour by using apparently logical reasons that are acceptable to the ego, thereby further suppressing awareness of the unconscious motivations;[6] and sublimation, the process of channeling libido into "socially useful" disciplines, such as artistic, cultural, and intellectual pursuits, which indirectly provide gratification for the original drives.[7]
According to this theory, healthy people normally use different defence mechanisms throughout life. A defence mechanism becomes pathological only when its persistent use leads to maladaptive behaviour such that the physical or mental health of the individual is adversely affected. Among the purposes of ego defence mechanisms is to protect the mind/self/ego from anxiety or social sanctions or to provide a refuge from a situation with which one cannot currently cope.[8]
Theories and classifications
In the first definitive book on defence mechanisms, The Ego and the Mechanisms of Defence (1936),[9] Anna Freud enumerated the ten defence mechanisms that appear in the works of her father, Sigmund Freud: repression, regression, reaction formation, isolation, undoing, projection, introjection, turning against one's own person, reversal into the opposite, and sublimation or displacement.[10]
https://en.wikipedia.org/wiki/Defence_mechanism
An empathy gap, sometimes referred to as an empathy bias, is a breakdown or reduction in empathy (the ability to recognize, understand, and share another's thoughts and feelings) where it might otherwise be expected to occur. Empathy gaps may occur due to a failure in the process of empathizing[1] or as a consequence of stable personality characteristics,[2][3][4] and may reflect either a lack of ability or motivation to empathize.
Empathy gaps can be interpersonal (toward others) or intrapersonal (toward the self, e.g. when predicting one's own future preferences). A great deal of social psychological research has focused on intergroup empathy gaps, their underlying psychological and neural mechanisms, and their implications for downstream behavior (e.g. prejudice toward outgroup members).
https://en.wikipedia.org/wiki/Empathy_gap
Stress related to the experience of empathy may cause empathic distress fatigue and occupational burnout,[29] particularly among those in the medical profession. Expressing empathy is an important component of patient-centered care, and can be expressed through behaviors such as concern, attentiveness, sharing emotions, vulnerability, understanding, dialogue, reflection, and authenticity.[30] However, expressing empathy can be cognitively and emotionally demanding for providers.[31] Physicians who lack proper support may experience depression and burnout, particularly in the face of the extended or frequent experiences of personal distress.
https://en.wikipedia.org/wiki/Empathy_gap
Forecasting failures
Within the domain of social psychology, "empathy gaps" typically describe breakdowns in empathy toward others (interpersonal empathy gaps). However, research in behavioral economics has also identified a number of intrapersonal empathy gaps (i.e. toward one's self). For example, "hot-cold empathy gaps" describe a breakdown in empathy for one's future self—specifically, a failure to anticipate how one's future affective states will affect one's preferences.[32] Such failures can negatively impact decision-making, particularly in regards to health outcomes. Hot-cold empathy gaps are related to the psychological concepts of affective forecasting and temporal discounting.
https://en.wikipedia.org/wiki/Empathy_gap
Breakdowns in empathy may reduce helping behavior,[73][74] a phenomenon illustrated by the identifiable victim effect. Specifically, humans are less likely to assist others who are not identifiable on an individual level.[75] A related concept is psychological distance—that is, we are less likely to help those who feel more psychologically distant from us.[76]
Reduced empathy for outgroup members is associated with a reduction in willingness to entertain another's points of view, the likelihood of ignoring a customer's complaints, the likelihood of helping others during a natural disaster, and the chance that one opposes social programs designed to benefit disadvantaged individuals.[77][67]
https://en.wikipedia.org/wiki/Empathy_gap
The minimal group paradigm is a method employed in social psychology.[1][2][3] Although it may be used for a variety of purposes, it is best known as a method for investigating the minimal conditions required for discrimination to occur between groups. Experiments using this approach have revealed that even arbitrary distinctions between groups, such as preferences for certain paintings,[4] or the color of their shirts,[5] can trigger a tendency to favor one's own group at the expense of others, even when it means sacrificing in-group gain.[6][7][8][9]
https://en.wikipedia.org/wiki/Minimal_group_paradigm
Inequity aversion (IA) is the preference for fairness and resistance to incidental inequalities.[1] The social sciences that study inequity aversion include sociology, economics, psychology, anthropology, and ethology. Research on inequity aversion aims to explain behaviors that are driven not purely by self-interest but also by fairness considerations.
In some literature, the term inequality aversion is used in place of inequity aversion.[2][3] Discourse in the social sciences argues that "inequality" pertains to gaps in the distribution of resources, while "inequity" pertains to fundamental and institutional unfairness.[4] Therefore, the choice between inequity and inequality aversion may depend on the specific context.
https://en.wikipedia.org/wiki/Inequity_aversion
Inequity aversion is broadly consistent with observations of behavior in three standard economics experiments:
- Dictator game – The subject chooses how a reward should be split between themself and another subject. If the dictator acted self-interestedly, the split would consist of 0 for the partner and the full amount for the dictator. While the most common choice is indeed to keep everything, many dictators choose to give, with the second most common choice being the 50:50 split.
- Ultimatum game – The dictator game is played, but the recipient is allowed to veto the entire deal, so that both subjects receive nothing. The partner typically vetoes the deal when low offers are made. People consistently prefer getting nothing to receiving a small share of the pie. Rejecting the offer is in effect paying to punish the dictator (called the proposer).
- Trust game – The same result as found in the dictator game shows up when the dictator's initial endowment is provided by their partner, even though this requires the first player to trust that something will be returned (reciprocity). This experiment often yields a 50:50 split of the endowment, and has been used as evidence of the inequity aversion model.
https://en.wikipedia.org/wiki/Inequity_aversion
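One widely used formalization of inequity aversion, the Fehr-Schmidt model (which the excerpt above does not name explicitly), scores an allocation by the player's own payoff minus penalties for being behind ("envy", weight alpha) and for being ahead ("guilt", weight beta). The sketch below applies it to ultimatum-game offers; the parameter values and the size of the pie are illustrative assumptions, not estimates from any experiment.

```python
# Fehr-Schmidt style inequity-aversion utility for a two-player split.
# alpha: distaste for earning less than the other player ("envy").
# beta:  distaste for earning more than the other player ("guilt").
# The parameter values and the stakes are illustrative assumptions.

def inequity_averse_utility(own, other, alpha=0.9, beta=0.25):
    disadvantageous = max(other - own, 0.0)   # how far I am behind
    advantageous = max(own - other, 0.0)      # how far I am ahead
    return own - alpha * disadvantageous - beta * advantageous

def responder_accepts(offer, pie=10.0, alpha=0.9):
    """Ultimatum-game responder: accept only if the offer beats rejecting (0, 0)."""
    utility_accept = inequity_averse_utility(offer, pie - offer, alpha=alpha)
    return utility_accept > 0.0   # rejecting leaves both players with 0

if __name__ == "__main__":
    for offer in [1.0, 2.0, 3.0, 4.0, 5.0]:
        verdict = "accept" if responder_accepts(offer) else "reject"
        print(f"offer {offer:.0f} out of 10 -> {verdict}")
```

With these illustrative parameters the responder rejects offers below roughly a third of the pie, matching the experimental pattern that people prefer receiving nothing to accepting a very unequal split.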
Behavioral economics studies the effects of psychological, cognitive, emotional, cultural and social factors in the decisions of individuals or institutions, and how these decisions deviate from those implied by classical economic theory.[1][2]
Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory.[3][4] The study of behavioral economics includes how market decisions are made and the mechanisms that drive public opinion.
Behavioral economics began as a distinct field of study in the 1970s and '80s, but can be traced back to 18th-century economists, such as Adam Smith, who deliberated how the economic behavior of individuals could be influenced by their desires.[5]
The status of behavioral economics as a subfield of economics is a fairly recent development; the breakthroughs that laid the foundation for it were published through the last three decades of the 20th century.[6][7] Behavioral economics is still growing as a field, being used increasingly in research and in teaching.[8]
History
Early Neoclassical economists included psychological reasoning in much of their writing, though psychology at the time was not a recognized field of study.[9] In The Theory of Moral Sentiments, Adam Smith wrote on concepts later popularized by modern behavioral economic theory, such as loss aversion.[9] Jeremy Bentham, another Neoclassical economist writing in the 1700s, conceptualized utility as a product of psychology.[9] Other Neoclassical economists who incorporated psychological explanations in their works included Francis Edgeworth, Vilfredo Pareto and Irving Fisher.
A rejection and elimination of psychology from economics by the Neoclassical school in the early 1900s brought on a period defined by a reliance on empiricism.[9] There was a lack of confidence in hedonic theories, which saw the pursuit of maximum benefit as an essential aspect of understanding human economic behavior.[6] Hedonic analysis had shown little success in predicting human behavior, leading many to question its viability as a reliable source of prediction.
There was also a fear among economists that the involvement of psychology in shaping economic models was inordinate and a departure from contemporary Neoclassical principles.[10] They feared that an increased emphasis on psychology would undermine the mathematical components of the field. William Peter Hamilton, Wall Street Journal editor from 1907 to 1929, wrote in The Stock Market Barometer: "We have meddled so disastrously with the law of supply and demand that we cannot bring ourselves to the radical step of letting it alone."[11][12]
To boost the ability of economics to predict accurately, economists started looking to tangible phenomena rather than theories based on human psychology.[6] Psychology was seen as unreliable by many of these economists because it was a new field, not yet regarded as sufficiently scientific.[9] Though a number of scholars expressed concern about the positivism within economics, models of study dependent on psychological insights became rare.[9] Economists instead conceptualized humans as purely rational and self-interested decision makers, illustrated in the concept of homo economicus.[12]
The re-emergence of psychology within economics that allowed for the spread of behavioral economics has been associated with the cognitive revolution.[9][7] In the 1960s, cognitive psychology began to shed more light on the brain as an information processing device (in contrast to behaviorist models). Psychologists in this field, such as Ward Edwards,[13] Amos Tversky and Daniel Kahneman, began to compare their cognitive models of decision-making under risk and uncertainty to economic models of rational behavior. These developments spurred economists to reconsider how psychology could be applied to economic models and theories.[9] Concurrently, the expected utility hypothesis and discounted utility models began to gain acceptance. In challenging the accuracy of generic utility, these concepts established a practice foundational in behavioral economics: building on standard models by applying psychological knowledge.[6]
Mathematical psychology reflects a longstanding interest in preference transitivity and the measurement of utility.[14]
https://en.wikipedia.org/wiki/Behavioral_economics
The term Homo economicus, or economic man, is the portrayal of humans as agents who are consistently rational and narrowly self-interested, and who pursue their subjectively defined ends optimally. It is a word play on Homo sapiens, used in some economic theories and in pedagogy.[1]
In game theory, Homo economicus is often modelled through the assumption of perfect rationality. It assumes that agents always act in a way that maximizes utility as a consumer and profit as a producer,[2] and that they are capable of arbitrarily complex deductions towards that end: they will always be capable of thinking through all possible outcomes and choosing the course of action that will result in the best possible outcome.
The rationality implied in Homo economicus does not restrict what sort of preferences are admissible. Only naive applications of the Homo economicus model assume that agents know what is best for their long-term physical and mental health. For example, an agent's utility function could be linked to the perceived utility of other agents (such as one's husband or children), making Homo economicus compatible with other models such as Homo reciprocans, which emphasizes human cooperation.
As a theory of human conduct, it contrasts with the concepts of behavioral economics, which examines cognitive biases and other irrationalities, and with bounded rationality, which assumes that practical elements such as cognitive and time limitations restrict the rationality of agents.
https://en.wikipedia.org/wiki/Homo_economicus
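A minimal way to picture the perfect-rationality assumption is an agent that enumerates every feasible option and picks the one with the highest utility. The sketch below is only an illustration of that assumption; the integer bundles, prices, budget, and the Cobb-Douglas utility form are arbitrary choices for the example, not part of any particular model cited above.

```python
from itertools import product

def utility(x, y):
    """Illustrative Cobb-Douglas utility over two goods."""
    return (x ** 0.5) * (y ** 0.5)

def best_bundle(budget, price_x, price_y):
    """Exhaustively enumerate affordable integer bundles and return the
    utility-maximizing one, as a stylized Homo economicus would."""
    feasible = [
        (x, y)
        for x, y in product(range(budget + 1), repeat=2)
        if price_x * x + price_y * y <= budget
    ]
    return max(feasible, key=lambda bundle: utility(*bundle))

print(best_bundle(budget=12, price_x=2, price_y=1))  # (3, 6) for these numbers
```

Behavioral economics questions exactly this step: real decision makers rarely enumerate all outcomes, and their choices are shaped by the biases and limitations discussed above.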
Homo Oeconomicus is a quarterly peer-reviewed academic journal covering studies in classical and neoclassical economics, public and social choice theory, law and economics, and philosophy of economics. It was established in 1983 as an occasional series concerned with aspects of the Homo economicus concept. The founding editor-in-chief was Manfred J. Holler (University of Hamburg). The current editors-in-chief are Manfred J. Holler (University of Hamburg), John Hudson (University of Bath), Hartmut Kliemt (Frankfurt School of Finance & Management), and Martin Leroch (University of Mainz). Originally published in German, all articles have been in English since 1998.[1] The mixture of German and English articles had deterred some subscribers, and subscriptions picked up after the journal stopped accepting German-language submissions.[2]
https://en.wikipedia.org/wiki/Homo_Oeconomicus
Homo reciprocans, or reciprocating human, is the concept in some economic theories of humans as cooperative actors who are motivated by improving their environment through positive reciprocity (rewarding other individuals) or negative reciprocity (punishing other individuals), even in situations without foreseeable benefit for themselves.
This concept stands in contrast to the idea of homo economicus, the opposing theory that human beings are exclusively motivated by self-interest. The two ideas can be reconciled, however, if we assume that the utility function of homo economicus has parameters that depend on the perceived utility of other agents (such as one's spouse or children).
Russian theorist Peter Kropotkin wrote about the concept of "mutual aid" in the early part of the 20th century.
https://en.wikipedia.org/wiki/Homo_reciprocans
Adam Smith, in The Theory of Moral Sentiments, had claimed that individuals have sympathy for the well-being of others. On the other hand, in The Wealth of Nations, Smith wrote:
It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.[5]
https://en.wikipedia.org/wiki/Homo_economicus
This could be seen as prefiguring one part of Marx's theory of alienation of labor; and also as a pro-worker argument against the division of labor and the restrictions it places upon freedom of occupation. But even so, taken in the context of the work as a whole, Smith clearly intends it in a pro-capitalism, pro-bourgeoisie, way: "removing difficulties", such as reducing the time needed for travel and trade, through "expedients", such as steam-engine ships, here means the typical argument that capitalism brings freedom of entrepreneurship and innovation, which then bring prosperity. Thus, Smith is not unreasonably called "The Father of Capitalism"; early on, he theorized many of today's most widespread and deep-seated pro-capitalism arguments.
https://en.wikipedia.org/wiki/Homo_economicus
The early role of Homo economicus within neoclassical theory has been summarised as serving a general objective of discovering laws and principles that would accelerate growth in the national economy and the welfare of ordinary citizens, laws and principles determined by two governing factors, natural and social.[6] The concept was also found to be the foundation of the neoclassical theory of the firm, which assumed that individual agents would act rationally amongst other rational individuals.[7] Adam Smith explained that the actions of rational, self-interested individuals of this kind promote the general good, understood as the efficient allocation of material wealth. However, social scientists have doubted the actual importance of income and wealth to overall happiness in societies.[8]
https://en.wikipedia.org/wiki/Homo_economicus
Other critics of the Homo economicus model of humanity, such as Bruno Frey, point to the excessive emphasis on extrinsic motivation (rewards and punishments from the social environment) as opposed to intrinsic motivation. For example, it is difficult if not impossible to understand how Homo economicus would be a hero in war or would get inherent pleasure from craftsmanship. Frey and others argue that too much emphasis on rewards and punishments can "crowd out" (discourage) intrinsic motivation: paying a boy for doing household tasks may push him from doing those tasks "to help the family" to doing them simply for the reward.
https://en.wikipedia.org/wiki/Homo_economicus
Defenders of the Homo economicus model see many critics of the dominant school as using a straw man technique. For example, it is common for critics to argue that real people do not have costless access to infinite information and an innate ability to instantly process it[citation needed]. However, in advanced-level theoretical economics, scholars have found ways of addressing these problems, modifying models enough to more realistically depict real-life decision-making. For example, models of individual behavior under bounded rationality and of people suffering from envy can be found in the literature.[24] It is primarily when targeting the limiting assumptions made in constructing undergraduate models that the criticisms listed above are valid. These criticisms are especially valid to the extent that the professor asserts that the simplifying assumptions are true or uses them in a propagandistic way.
https://en.wikipedia.org/wiki/Homo_economicus
Economists[citation needed] tend to disagree with these critiques, arguing that it may be relevant to analyze the consequences of enlightened egoism just as it may be worthwhile to consider altruistic or social behavior. Others[citation needed] argue that we need to understand the consequences of such narrow-minded greed even if only a small percentage of the population embraces such motives. Free riders, for example, would have a major negative impact on the provision of public goods. However, economists' supply and demand predictions might obtain even if only a significant minority of market participants act like Homo economicus. In this view, the assumption of Homo economicus can and should be simply a preliminary step on the road to a more sophisticated model.
https://en.wikipedia.org/wiki/Homo_economicus
The more sophisticated economists are quite conscious of the empirical limitations of the Homo economicus model. In theory, the views of the critics can be combined with the Homo economicus model to attain a more accurate model.[citation needed]
https://en.wikipedia.org/wiki/Homo_economicus
Tabula rasa (/ˈtæbjələ ˈrɑːsə, -zə, ˈreɪ-/; "blank slate") is the theory that individuals are born without built-in mental content, and therefore all knowledge comes from experience or perception. Epistemological proponents of tabula rasa disagree with the doctrine of innatism, which holds that the mind is born already in possession of certain knowledge. Proponents of the tabula rasa theory also favour the "nurture" side of the nature versus nurture debate when it comes to aspects of one's personality, social and emotional behaviour, knowledge, and sapience.
https://en.wikipedia.org/wiki/Tabula_rasa
Homo Sovieticus (cod Latin for 'Soviet Man') is a pejorative term for an average conformist person in the Soviet Union and other countries of the Eastern Bloc. The term was popularized by Soviet writer and sociologist Aleksandr Zinovyev, who wrote the book titled Homo Sovieticus.[1]
Michel Heller asserted that the term was coined in the introduction of a 1974 monograph "Sovetskye lyudi" ("Soviet People") to describe the next level of evolution of humanity, where the USSR becomes the "kingdom of freedom", the birthplace of "a new, higher type of Homo sapiens - Homo sovieticus".[2]
In a book published in 1981, but available in underground samizdat in the 1970s, Zinovyev also coined an abbreviation homosos (гомосос, literally a homosucker).[3] A synonym of Homo Sovieticus is Sovok.[4]
https://en.wikipedia.org/wiki/Homo_Sovieticus
Homo duplex is a view promulgated by Émile Durkheim, a macro-sociologist of the 19th century, holding that man is, on the one hand, a biological organism driven by instincts, desire and appetite, and, on the other, a being led by morality and other elements generated by society. What allows a person to go beyond this "animal" nature is most commonly religion, which imposes a specific normative system and serves as a way to regulate behaviour.
Left unchecked, individualism leads to a lifetime of seeking to slake selfish desires, which in turn leads to unhappiness and despair. The collective conscience, created through socialisation, serves as a check on the will. Highly anomic societies are characterized by weak ties to primary groups such as family, church, and community.
https://en.wikipedia.org/wiki/Homo_duplex
The post-autistic economics movement (French: autisme-économie),[1] or movement of students for the reform of economics teaching (French: mouvement des étudiants pour une réforme de l'enseignement de l'économie),[2] is a political movement that criticises neoclassical economics and advocates for pluralism in economics. The movement gained attention after an open letter signed by almost a thousand economics students at French universities and Grandes Écoles was published in Le Monde in 2000.[3]
Terminology
The French term autisme has an older meaning and signifies "abnormal subjectivity, acceptance of fantasy rather than reality". However, post-autistic economists also "assert that neoclassical economics has the characteristics of an autistic child".[4]
The pejorative reference to the neurodevelopmental disorder autism is considered offensive by some economists.[5] Greg Mankiw has said that "use of the term indicates a lack of empathy and understanding for those who live with actual, severe autism".[6]
Response
The French minister of education appointed a panel headed by Jean-Paul Fitoussi to inquire into economics teaching.[7] In 2000, the panel called for limited reform.[8]
Articles associated with the movement were published in the Post-Autistic Economics Newsletter from September 2000. This electronic newsletter became the Post-Autistic Economics Review and, since 2008, has existed as the peer-reviewed journal Real-World Economics Review.[9]
Several responses to the French students' open letter were also published in Le Monde. A counter-petition signed by 15 French economists was published in October 2000.[10] Robert Solow adhered to the "main thesis" of the French students' petition, but criticised the "opaque and almost incomprehensible" debate that followed among academics.[11] Olivier Blanchard also published a response defending mainstream economics.[9] Other notable economists, such as Steve Keen and James K. Galbraith, wrote elsewhere in support of the French students.[12]
See also
References
- Galbraith, James K. (January 2001). "A contribution on the state of economics in France and the world". In Fullbrook, Edward (ed.). The crisis in economics: the post-autistic economics movement: the first 600 days. p. 47. ISBN 0415308976. Retrieved 31 December 2016 – via Post-Autistic Economics Network.
Further reading
- Fullbrook, Edward, ed. (2007). Real world economics: a post-autistic economics reader. Anthem Press. ISBN 978-1843312369.
- Fullbrook, Edward, ed. (2003). The crisis in economics: the post-autistic economics movement: the first 600 days. Psychology Press. ISBN 0415308976.
- Jacobsen, Kurt (4 April 2001). "Unreal, man". The Guardian. Retrieved 31 December 2016.
- History of and documents from the PAE Movement
- Real-World Economics Review Blog
- What Every Economics Student Needs To Know. Routledge. 2014. ISBN 9780765639233.
Human | |
---|---|
An adult human male (left) and female (right) from the Akha tribe in Northern Thailand | |
Scientific classification | |
Kingdom: | Animalia |
Phylum: | Chordata |
Class: | Mammalia |
Order: | Primates |
Suborder: | Haplorhini |
Infraorder: | Simiiformes |
Family: | Hominidae |
Subfamily: | Homininae |
Tribe: | Hominini |
Genus: | Homo |
Species: | H. sapiens |
Binomial name | |
Homo sapiens Linnaeus, 1758 | |
In addition to the generally accepted taxonomic name Homo sapiens (Latin: "sapient human", Linnaeus 1758), other Latin-based names for the human species have been created to refer to various aspects of the human character.
The common name of the human species in English is historically man (from Germanic), often replaced by the Latinate human (since the 16th century).
In the world's languages
The Indo-European languages have a number of inherited terms for mankind. The etymon of man is found in the Germanic languages, and is cognate with Manu, the name of the human progenitor in Hindu mythology, and found in Indic terms for "man" (manuṣya, manush, manava etc.).
Latin homo is derived from an Indo-European root dʰǵʰm- "earth", as it were "earthling". It has cognates in Baltic (Old Prussian zmūi), Germanic (Gothic guma) and Celtic (Old Irish duine). This is comparable to the explanation given in the Genesis narrative to the Hebrew Adam (אָדָם) "man", derived from a word for "red, reddish-brown". Etymologically, it may be an ethnic or racial classification (after "reddish" skin colour contrasting with both "white" and "black"), but Genesis takes it to refer to the reddish colour of earth, as in the narrative the first man is formed from earth.[2]
Other Indo-European languages name man for his mortality, *mr̥tós meaning "mortal", so in Armenian mard, Persian mard, Sanskrit marta and Greek βροτός meaning "mortal; human". This is comparable to the Semitic word for "man", represented by Arabic insan إنسان (cognate with Hebrew ʼenōš אֱנוֹשׁ), from a root for "sick, mortal".[3] The Arabic word has been influential in the Islamic world, and was adopted in many Turkic languages. The native Turkic word is kiši.[4]
Greek ἄνθρωπος (anthropos) is of uncertain, possibly pre-Greek origin.[5] Slavic čelověkъ also is of uncertain etymology.[6]
The Chinese character used in East Asian languages is 人, originating as a pictogram of a human being. The reconstructed Old Chinese pronunciation of the Chinese word is /ni[ŋ]/.[7] A Proto-Sino-Tibetan r-mi(j)-n gives rise to Old Chinese /*miŋ/, modern Chinese 民 mín "people" and to Tibetan མི mi "person, human being".
In some tribal or band societies, the local endonym is indistinguishable from the word for "men, human beings". Examples include Ainu: ainu, Inuktitut: inuk, Bantu: bantu, Khoekhoe: khoe-khoe (etc.), possibly in Uralic: Hungarian magyar, Mansi mäńćī, mańśi, from a Proto-Ugric *mańć- "man, person".
In philosophy
The mixture of serious and tongue-in-cheek self-designation originates with Plato, who on one hand defined man as it were taxonomically as "featherless biped"[8] and on the other as ζῷον πολιτικόν zōon politikon, as "political" or "state-building animal" (Aristotle's term, based on Plato's Statesman).
Harking back to Plato's zōon politikon are a number of later descriptions of man as an animal with a certain characteristic. Notably animal rationabile "animal capable of rationality", a term used in medieval scholasticism (with reference to Aristotle), and also used by Carl von Linné (1760)[citation needed] and Immanuel Kant (1798).[citation needed] Based on the same pattern is animal sociale or "social animal"[according to whom?][year needed] animal laborans "laboring animal" (Hannah Arendt 1958[9]) and animal symbolicum "symbolizing animal" (Ernst Cassirer 1944).
Taxonomy
The binomial name Homo sapiens was coined by Carl Linnaeus (1758).[10] Names for other human species were introduced beginning in the second half of the 19th century (Homo neanderthalensis 1864, Homo erectus 1892).
There is no consensus on the taxonomic delineation between human species, human subspecies and the human races. On the one hand, there is the proposal that H. sapiens idaltu (2003) is not distinctive enough to warrant classification as a subspecies.[11] On the other, there is the position that genetic variation in the extant human population is large enough to justify its division into several subspecies[citation needed]. Linnaeus (1758) proposed division into five subspecies, H. sapiens europaeus alongside H. s. afer, H. s. americanus and H. s. asiaticus for Europeans, Africans, Americans and Asians. This convention remained commonly observed until the mid-20th century, sometimes with variations or additions such as H. s. tasmanianus for Australians.[12] The conventional division of extant human populations into taxonomic subspecies was gradually abandoned beginning in the 1970s.[13] Similarly, there are proposals to classify Neanderthals[14] and Homo rhodesiensis as subspecies of H. sapiens, although it remains more common to treat these last two as separate species within the genus Homo rather than as subspecies within H. sapiens.[15]
List of binomial names
The following names mimic binomial nomenclature, mostly consisting of Homo followed by a Latin adjective characterizing human nature. Most of them were coined since the mid-20th century in imitation of Homo sapiens in order to make some philosophical point (either serious or ironic), but some go back to the 18th to 19th century, as in Homo aestheticus vs. Homo oeconomicus; Homo loquens is a serious suggestion by Herder, taking the human species as defined by the use of language;[16] Homo creator is medieval, coined by Nicolaus Cusanus in reference to man as imago Dei.
Name | Translation | Notes | |
---|---|---|---|
Homo absconditus | "man the inscrutable" | Soloveitchik 1965 Lonely Man of Faith | |
Homo absurdus | “absurd man” | Giovanni Patriarca Homo Economicus, Absurdus, or Viator? 2014 | |
Homo adaptabilis | “adaptable man” | Giovanni Patriarca Homo Economicus, Absurdus, or Viator? 2014 | |
Homo adorans | "worshipping man" | Man as a worshipping agent, a servant of God or gods.[17] | |
Homo aestheticus | "aesthetic man" | in Goethe's Wilhelm Meisters Lehrjahre, the main antagonist of Homo oeconomicus in the internal conflict tormenting the philosopher. Homo aestheticus is "man the aristocrat" in feelings and emotions.[18] Dissanayake (1992) uses the term to suggest that the emergence of art was central to the formation of the human species. | |
Homo amans | "loving man" | man as a loving agent; Humberto Maturana 2008[19] | |
Homo animalis | "man with a soul" | Man as in possession of an animus sive mens (a soul or mind), Heidegger (1975).[18] | |
Homo apathetikos | “apathetic man” | Used by Abraham Joshua Heschel in his book The Prophets to refer to the Stoic notion of the ideal human being, one who has attained apatheia. | |
Homo avarus | "man the greedy" | used for Man "activated by greed" by Barnett (1977).[20] | |
Homo combinans | "combining man" | man as the only species that performs the unbounded combinatorial operations that underlie syntax and possibly other cognitive capacities; Cedric Boeckx 2009.[21] | |
Homo communicans | "communicating man" | | |
Homo contaminatus | "contaminated man" | suggested by Romeo (1979) alongside Homo inquinatus ("polluted man") "to designate contemporary Man polluted by his own technological advances".[22] | |
Homo creator | "creator man" | due to Nicolaus Cusanus in reference to man as imago Dei; expanded to Homo alter deus by K.-O. Apel (1955).[23] | |
Homo degeneratus | "degenerative man" | a man or the mankind as a whole if they undergo any regressive development (devolution); Andrej Poleev 2013[24] | |
Homo demens | "mad man" | man as the only being with irrational delusions. Edgar Morin 1973 [The Lost Paradigm: Human Nature] | |
Homo deus | "human god" | Man as god, endowed with supernatural abilities such as eternal life as outlined in Yuval Noah Harari's 2015 book Homo Deus: A Brief History of Tomorrow | |
Homo dictyous | "network man" | Humankind as having a brain evolved for social connections | |
Homo discens | "learning man" | human capability to learn and adapt, Heinrich Roth, Theodor Wilhelm[year needed][citation needed] | |
Homo documentator | "documenting man" | human need and propensity to document and organize knowledge, Suzanne Briet in What Is Documentation?, 1951 | |
Homo domesticus | "domestic man" | a human conditioned by the built environment; Oscar Carvajal 2005[25] Derrick Jensen 2006[26] | |
Homo donans et recipiens | "giving and receiving (hu)man" | a human conditioned by free gifting and receiving; Genevieve Vaughan 2021[27] | |
Homo duplex | "double man" | Georges-Louis Leclerc, Comte de Buffon 1754.[citation needed] Honoré de Balzac 1846. Joseph Conrad 1903. The idea of the double or divided man is developed by Émile Durkheim (1912) to figure the interaction of man's animal and social tendencies. | |
Homo economicus | "economic man" | man as a rational and self-interested agent (19th century). | |
Homo educandus | "to be educated" | human need of education before reaching maturity, Heinrich Roth 1966[citation needed] | |
Homo ethicus | "ethical man" | Man as an ethical agent. | |
Homo excentricus | "not self-centered" | human capability for objectivity, human self-reflection, theory of mind, Helmuth Plessner 1928[citation needed] | |
Homo faber | "toolmaker man", "fabricator man", "worker man" | Karl Marx, Kenneth Oakley 1949, Max Frisch 1957, Hannah Arendt.[9] | |
Homo ferox | "ferocious man" | T.H. White 1958 | |
Homo generosus | "generous man" | Tor Nørretranders, Generous Man (2005) | |
Homo geographicus | "man in place" | Robert D. Sack, Homo Geographicus (1997) | |
Homo grammaticus | "grammatical man" | human use of grammar, language, Frank Palmer 1971 | |
Homo hierarchicus | "hierarchical man" | Louis Dumont 1966 | |
Homo humanus | "human man" | used as a term for mankind considered as human in the cultural sense, as opposed to homo biologicus, man considered as a biological species (and thus synonymous with Homo sapiens); the distinction was made in these terms by John N. Deely (1973).[28] | |
Homo hypocritus | "hypocritical man" | Robin Hanson (2010);[29] also called "man the sly rule bender" | |
Homo imitans | "imitating man" | human capability of learning and adapting by imitation, Andrew N. Meltzoff 1988, Jürgen Lethmate 1992[citation needed] | |
Homo inermis | "helpless man" | man as defenseless, unprotected, devoid of animal instincts. J. F. Blumenbach 1779, J. G. Herder 1784–1791, Arnold Gehlen 1940[citation needed] | |
Homo interrogans | "questioning man" | The human is a questioning and inquiring being, one who not only asks questions but is capable of questioning without there being an object referent for the inquiry itself, and capable of ever-asking. Abraham Joshua Heschel discussed this idea in his 1965 book Who is Man?, but John Bruin coined the term in his 2001 book Homo Interrogans: Questioning and the Intentional Structure of Cognition | |
Homo ignorans | "ignorant man" | antonym to sciens (Bazán 1972, Romeo 1979:64) | |
Homo interreticulatus | "buried-within-the-rectangle man" | used by philosopher David Bentley Hart to describe humanity lost within the screens of computers and other devices [30] | |
Homo investigans | "investigating man" | human curiosity and capability to learn by deduction, Werner Luck 1976[citation needed] | |
Homo juridicus | "juridical man" | Homo juridicus identifies normative primacy of law, Alain Supiot, 2007.[31] | |
Homo laborans | "working man" | human capability for division of labour, specialization and expertise in craftsmanship and, Theodor Litt 1948[citation needed] | |
Homo liturgicus | "the man who participates with others in rituals that recognize and enact meaning" | Philosopher James K. A. Smith uses this terms to describe a basic way in which humans dwell together with habitual practices that both embody and reorient us toward shared higher goods.[32] | |
Homo logicus | "the man who wants to understand" | Homo logicus are driven by an irresistible desire to understand how things work. By contrast, Homo sapiens have a strong desire for success. Alan Cooper 1999 | |
Homo loquens | "talking man" | man as the only animal capable of language, J. G. Herder 1772, J. F. Blumenbach 1779.[citation needed] | |
Homo loquax | "chattering man" | parody variation of Homo loquens, used by Henri Bergson (1943), Tom Wolfe (2006),[33] also in A Canticle for Leibowitz (1960). | |
Homo ludens | "playing man" | Friedrich Schiller 1795; Johan Huizinga, Homo Ludens (1938); Hideo Kojima (2016). The characterization of human culture as essentially bearing the character of play. | |
Homo mendax | "lying man" | man with the ability to tell lies. Fernando Vallejo[citation needed] | |
Homo metaphysicus | "metaphysical man" | Arthur Schopenhauer 1819[citation needed] | |
Homo narrans | "storytelling man" | man not only as an intelligent species, but also as the only one who tells stories, used by Walter Fisher in 1984.[34] Also Pan narrans "storytelling ape" in The Science of Discworld II: The Globe by Terry Pratchett, Ian Stewart and Jack Cohen | |
Homo necans | "killing man" | Walter Burkert 1972 | |
Homo neophilus and Homo neophobus | "Novelty-loving man" and "Novelty-fearing man", respectively | coined by characters in the Illuminatus! Trilogy by Robert Shea and Robert Anton Wilson to describe two distinct types of human being: one which seeks out and embraces new ideas and situations (neophilus), and another which clings to habit and fears the new (neophobus). | |
Homo otiosus | “slacker man” | The 11th Edition of The Encyclopædia Britannica defines man as “a seeker after the greatest degree of comfort for the least necessary expenditure of energy”. In The Restless Compendium Michael Greaney credits Sociologist Robert Stebbins with coining the term “homo otiosus” to refer to the privileged economic class of “persons of leisure”, asserting that a distinctiveness of humans is that they (unlike other animals and machines) are capable of intentional laziness.[35] | |
Homo patiens | "suffering man" | human capability for suffering, Viktor Frankl 1988[citation needed] | |
Homo viator | "man the pilgrim" | man as on his way towards finding God, Gabriel Marcel 1945[citation needed] | |
Homo pictor | "depicting man", "man the artist" | human sense of aesthetics, Hans Jonas 1961 | |
Homo poetica | "man the poet", "man the meaning maker" | Ernest Becker, in The Structure of Evil: An Essay on the Unification of the Science of Man (1968). | |
Homo religiosus | "religious man" | Alister Hardy[year needed][citation needed] | |
Homo ridens | "laughing man" | G.B. Milner 1969[36] | |
Homo reciprocans | "reciprocal man" | man as a cooperative actor who is motivated by improving his environment and wellbeing; Samuel Bowles and Herbert Gintis 1997[37] | |
Homo sacer | "the sacred man" or "the accursed man" | in Roman law, a person who is banned and may be killed by anybody, but may not be sacrificed in a religious ritual. Italian philosopher Giorgio Agamben takes the concept as the starting point of his main work Homo Sacer: Sovereign Power and Bare Life (1998) | |
Homo sanguinis | "bloody man" | A comment on human foreign relations and the increasing ability of man to wage war by anatomist W. M. Cobb in the Journal of the National Medical Association in 1969 and 1975.[38][39] | |
Homo sciens | "knowing man" | used by Siger of Brabant, noted as a precedent of Homo sapiens by Bazán (1972) (Romeo 1979:128) | |
Homo sentimentalis | "sentimental man" | man born to a civilization of sentiment, who has raised feelings to a category of value; the human ability to empathize, but also to idealize emotions and make them servants of ideas. Milan Kundera in Immortality (1990), Eugene Halton in Bereft of Reason: On the Decline of Social Thought and Prospects for Its Renewal (1995). | |
Homo socius | "social man" | man as a social being. Inherent to humans as long as they have not lived entirely in isolation. Peter Berger & Thomas Luckmann in The Social Construction of Reality (1966). | |
Homo sociologicus | "sociological man" | parody term; the human species as prone to sociology, Ralf Dahrendorf.[year needed] | |
Homo Sovieticus | (Dog Latin for "Soviet Man") | a sarcastic and critical reference to an average conformist person in the USSR and other countries of the Eastern Bloc. The term was popularized by Soviet writer and sociologist Aleksandr Zinovyev, who wrote the book titled Homo Sovieticus. | |
Homo superior | “superior man” | Coined by the titular character in Olaf Stapledon's novel Odd John (1935) to refer to superpowered mutants like himself. Also occurs in Marvel Comics' The X-Men (1963–present), the BBC series The Tomorrow People (1973-1979), and David Bowie's song “Oh! You Pretty Things” 1971. | |
Homo symbolicus | "symbolic culture man" | The emergence of symbolic culture. 2011 [Editors Christopher S. Henshilwood & Francesco d'Errico, Homo Symbolicus: The dawn of language, imagination and spirituality[40]] and [41] | |
Homo sympathetikos | “sympathetic man” | The term used by Abraham Joshua Heschel in his book The Prophets to refer to the prophetic ideal for humans: sympathetic feeling or sharing in the concerns of others, the highest expression of which is sharing in God's concern / feeling / pathos. | |
Homo technologicus | "technological man" | Yves Gingras 2005, similar to homo faber, in a sense of man creating technology as an antithesis to nature.[42][43] | |
Jocko Homo | "ape-man" | Coined and defined by Bertram Henry Shadduck in his 1924 tract Jocko-Homo Heavenbound; the phrase gained prominence via the release of DEVO's 1977 song "Jocko Homo". |
In fiction
In fiction, specifically science fiction and fantasy, occasionally names for the human species are introduced reflecting the fictional situation of humans existing alongside other, non-human civilizations. In science fiction, Earthling (also "Terran", "Earther", and "Gaian") is frequently used, as it were naming humanity by its planet of origin. Incidentally, this situation parallels the naming motive of ancient terms for humanity, including "human" (homo, humanus) itself, derived from a word for "earth" to contrast humans as earth-bound with celestial beings (i.e. deities) in mythology.
See also
References
- Warwick, Kevin (2016). "Homo Technologicus: Threat or Opportunity?". Philosophies. 1 (3): 199–208. doi:10.3390/philosophies1030199.
Further reading
- Luigi Romeo, Ecce Homo!: A Lexicon of Man, John Benjamins Publishing, 1979.
https://en.wikipedia.org/wiki/Names_for_the_human_species
In economics, a public good (also referred to as a social good or collective good)[1] is a good that is both non-excludable and non-rivalrous. For such goods, users cannot be barred from accessing or using them for failing to pay for them. Also, use by one person neither prevents access by other people nor reduces availability to others.[1] Therefore, the good can be used simultaneously by more than one person.[2] This is in contrast to a common good, such as wild fish stocks in the ocean, which is non-excludable but rivalrous to a certain degree. If too many fish were harvested, the stocks would deplete, limiting the access to fish for others. A public good must be valuable to more than one user; otherwise, the fact that it can be used simultaneously by more than one person would be economically irrelevant.
Capital goods may be used to produce public goods or services that are "...typically provided on a large scale to many consumers."[3] Unlike other types of economic goods, public goods are described as “non-rivalrous” or “non-exclusive,” and use by one person neither prevents access of other people nor does it reduce availability to others.[1] Similarly, using capital goods to produce public goods may result in the creation of new capital goods. In some cases, public goods or services are considered "...insufficiently profitable to be provided by the private sector.... (and), in the absence of government provision, these goods or services would be produced in relatively small quantities or, perhaps, not at all."[3]
Public goods include knowledge,[4] official statistics, national security, common languages,[5] law enforcement, public parks, free roads, and television and radio broadcasts.[6] Flood control systems, lighthouses, and street lighting are also common social goods. Collective goods that are spread all over the face of the earth may be referred to as global public goods; these are not limited to physical book literature but also include media, pictures and videos.[7] Knowledge, for instance, is widely shared globally: information about health awareness for men, women and youth, environmental issues, and maintaining biodiversity is common knowledge that every individual in society can obtain without necessarily preventing others' access. Sharing and interpreting contemporary history with a cultural lexicon, particularly regarding protected cultural heritage sites and monuments, is another source of knowledge that people can freely access.
Public goods problems are often closely related to the "free-rider" problem, in which people not paying for the good may continue to access it. Thus, the good may be under-produced, overused or degraded.[8] Public goods may also become subject to restrictions on access and may then be considered to be club goods; exclusion mechanisms include toll roads, congestion pricing, and pay television with an encoded signal that can be decrypted only by paid subscribers.
There is a good deal of debate and literature on how to measure the significance of public goods problems in an economy, and to identify the best remedies.
Academic literature on public goods
Paul A. Samuelson is usually credited as the economist who articulated the modern theory of public goods in a mathematical formalism, building on earlier work of Wicksell and Lindahl. In his classic 1954 paper The Pure Theory of Public Expenditure,[9] he defined a public good, or as he called it in the paper a "collective consumption good", as follows:
[goods] which all enjoy in common in the sense that each individual's consumption of such a good leads to no subtractions from any other individual's consumption of that good...
A Lindahl tax is a type of taxation proposed by Erik Lindahl, a Swedish economist, in 1919. His idea was to tax individuals for the provision of a public good according to the marginal benefit they receive. Public goods are costly and eventually someone needs to pay the cost.[10] It is difficult to determine how much each person should pay, so Lindahl developed a theory of how the expense of public utilities should be settled. His argument was that people would pay for public goods according to the way they benefit from them: the more a person benefits from these goods, the higher the amount they pay. People are more willing to pay for goods that they value. Taxes are needed to fund public goods, and people are willing to bear the burden of taxes.[11] The theory thus dwells on people's willingness to pay for the public good. Because public goods are paid for through taxation under the Lindahl idea, the basic duty of providing these services and products falls to the government.[12] These services and public utilities are in most cases among the many activities that governments engage in purely for the satisfaction of the public rather than for the generation of profit.[13] In the introductory section of his book Public Good Theories of the Nonprofit Sector, Bruce R. Kingma stated that:
In the Weisbrod model nonprofit organizations satisfy a demand for public goods, which is left unfilled by government provision. The government satisfies the demand of the median voters and therefore provides a level of the public good less than some citizens'-with a level of demand greater than the median voter's-desire. This unfilled demand for the public good is satisfied by nonprofit organizations. These nonprofit organizations are financed by the donations of citizens who want to increase the output of the public good.[14]
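Returning to the Lindahl idea described above, the arithmetic of benefit-proportional cost shares can be sketched in a few lines. The willingness-to-pay numbers, the streetlight example, and the function name below are illustrative assumptions, not a description of any actual tax system:

```python
def lindahl_shares(marginal_benefits, cost):
    """Split the cost of a public good in proportion to each person's stated
    marginal benefit, so the personalized 'prices' sum to the full cost."""
    total_benefit = sum(marginal_benefits.values())
    return {person: cost * benefit / total_benefit
            for person, benefit in marginal_benefits.items()}

# Three residents value an extra streetlight at 30, 20 and 10; it costs 48.
print(lindahl_shares({"A": 30, "B": 20, "C": 10}, cost=48))
# {'A': 24.0, 'B': 16.0, 'C': 8.0} -- the shares add up to the 48 cost.
```

The practical difficulty noted later in this article is that each person has an incentive to understate their benefit, since a lower stated benefit means a lower personal share.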
Terminology, and types of goods
Non-rivalrous: accessible by all, while one person's usage of the product does not reduce its availability for use by others.[12]
Non-excludable: it is impossible to exclude any individuals from consuming the good. Paywalls and memberships are common ways to create excludability.
Pure public good: when a good exhibits both traits, non-rivalry and non-excludability, it is referred to as a pure public good. Pure public goods are rare.
Impure public goods: goods that satisfy the two public good conditions (non-rivalry and non-excludability) only to a certain extent or only some of the time.
Private good: The opposite of a public good which does not possess these properties. A loaf of bread, for example, is a private good; its owner can exclude others from using it, and once it has been consumed, it cannot be used by others.
Common-pool resource: A good that is rivalrous but non-excludable. Such goods raise similar issues to public goods: the mirror to the public goods problem for this case is the 'tragedy of the commons', where the unfettered access to a good sometimes results in the overconsumption and thus depletion of that resource. For example, it is so difficult to enforce restrictions on deep-sea fishing that the world's fish stocks can be seen as a non-excludable resource, but one which is finite and diminishing.
Club goods: goods that are excludable but non-rivalrous, such as private parks.
Mixed good: final goods that are intrinsically private but that are produced by the individual consumer by means of private and public good inputs. The benefits enjoyed from such a good for any one individual may depend on the consumption of others, as in the cases of a crowded road or a congested national park.[15]
Definition matrix
 | Excludable | Non-excludable |
---|---|---|
Rivalrous | Private goods: food, clothing, cars, parking spaces | Common-pool resources: fish stocks, timber, coal, free public transport |
Non-rivalrous | Club goods: cinemas, private parks, satellite television, public transport | Public goods: free-to-air television, air, national defense, free and open-source software |
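Because the matrix above is determined entirely by the two properties, the classification can be written as a small lookup. The sketch below simply restates the four cells in code; it adds no theory beyond the table:

```python
def classify_good(excludable: bool, rivalrous: bool) -> str:
    """Map the two defining properties onto the four cells of the matrix."""
    if excludable and rivalrous:
        return "private good"
    if excludable and not rivalrous:
        return "club good"
    if not excludable and rivalrous:
        return "common-pool resource"
    return "public good"

print(classify_good(excludable=True, rivalrous=True))    # private good (a loaf of bread)
print(classify_good(excludable=False, rivalrous=True))   # common-pool resource (fish stocks)
print(classify_good(excludable=True, rivalrous=False))   # club good (satellite television)
print(classify_good(excludable=False, rivalrous=False))  # public good (national defense)
```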
Challenges in identifying public goods
The definition of non-excludability states that it is impossible to exclude individuals from consumption. Technology now allows radio or TV broadcasts to be encrypted such that persons without a special decoder are excluded from the broadcast. Many forms of information goods have characteristics of public goods. For example, a poem can be read by many people without reducing the consumption of that good by others; in this sense, it is non-rivalrous. Similarly, the information in most patents can be used by any party without reducing consumption of that good by others. Official statistics provide a clear example of information goods that are public goods, since they are created to be non-excludable. Creative works may be excludable in some circumstances, however: the individual who wrote the poem may decline to share it with others by not publishing it. Copyrights and patents both encourage the creation of such non-rival goods by providing temporary monopolies, or, in the terminology of public goods, providing a legal mechanism to enforce excludability for a limited period of time. For public goods, the "lost revenue" of the producer of the good is not part of the definition: a public good is a good whose consumption does not reduce any other's consumption of that good.[16] Public goods can also incorporate private goods, which makes it challenging to define what is private and what is public. For instance, a community soccer field may seem to be a public good, yet players need to bring their own cleats and ball, and there may be a rental fee to occupy the space; it is a mixed case of public and private goods.
Debate has been generated among economists whether such a category of "public goods" exists. Steven Shavell has suggested the following:
when professional economists talk about public goods they do not mean that there are a general category of goods that share the same economic characteristics, manifest the same dysfunctions, and that may thus benefit from pretty similar corrective solutions...there is merely an infinite series of particular problems (some of overproduction, some of underproduction, and so on), each with a particular solution that cannot be deduced from the theory, but that instead would depend on local empirical factors.[17]
There is a common misconception that public goods are goods provided by the public sector. Although it is often the case that government is involved in producing public goods, this is not always true. Public goods may be naturally available, or they may be produced by private individuals, by firms, or by non-state groups through collective action.[18]
The theoretical concept of public goods does not distinguish geographic region in regards to how a good may be produced or consumed. However, some theorists, such as Inge Kaul, use the term "global public good" for a public good which is non-rivalrous and non-excludable throughout the whole world, as opposed to a public good which exists in just one national area. Knowledge has been argued as an example of a global public good,[4] but also as a commons, the knowledge commons.[19]
Graphically, non-rivalry means that if each of several individuals has a demand curve for a public good, then the individual demand curves are summed vertically to get the aggregate demand curve for the public good. This is in contrast to the procedure for deriving the aggregate demand for a private good, where individual demands are summed horizontally.
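Written out, with p_i(Q) denoting individual i's marginal willingness to pay for quantity Q of the public good, and q_i(p) denoting the quantity of a private good demanded by individual i at price p, the two aggregation rules are:

```latex
% Public good: willingness to pay is added at each quantity (vertical summation)
P(Q) \;=\; \sum_{i=1}^{n} p_i(Q)

% Private good: quantities demanded are added at each price (horizontal summation)
Q(p) \;=\; \sum_{i=1}^{n} q_i(p)
```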
Some writers have used the term "public good" to refer only to non-excludable "pure public goods" and refer to excludable public goods as "club goods".[20]
Digital public goods
Digital public goods include software, data sets, AI models, standards and content that are open source.
Use of the term "digital public good" appears as early as April 2017, when Nicholas Gruen wrote Building the Public Goods of the Twenty-First Century, and it has gained popularity with the growing recognition of the potential for new technologies to be implemented at scale to effectively serve people. Digital technologies have also been identified by countries, NGOs and private sector entities as a means to achieve the Sustainable Development Goals (SDGs).
A digital public good is defined by the UN Secretary-General's Roadmap for Digital Cooperation, as: “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs.”
Examples
Common examples of public goods include
- public fireworks
- clean air and other environmental goods
- information goods, such as official statistics
- open-source software
- authorship
- public television
- radio
- invention
- herd immunity
- Wikipedia
- national defense
- fire service
- flood defense
- street lights
Class and type of Good | Nonexcludable | Nonrival | Common problem |
---|---|---|---|
Ozone Layer | Yes | No | Overuse |
Atmosphere | Yes | No | Overuse |
Universal human rights | Partly | Yes | Underuse (Repression) |
Knowledge | Partly | Yes | Underuse (lack of access) |
Internet | Partly | Yes | Underuse (Entry Barriers) |
Shedding light on some misclassified public goods
- Some goods, like orphan drugs, require special governmental incentives to be produced, but cannot be classified as public goods since they do not fulfill the above requirements (non-excludable and non-rivalrous).
- Law enforcement, streets, libraries, museums, and education are commonly misclassified as public goods, but they are technically classified in economic terms as quasi-public goods because excludability is possible, but they do still fit some of the characteristics of public goods.[22][23]
- The provision of a lighthouse is a standard example of a public good, since it is difficult to exclude ships from using its services. No ship's use detracts from that of others, but since most of the benefit of a lighthouse accrues to ships using particular ports, lighthouse maintenance can be profitably bundled with port fees (Ronald Coase, The Lighthouse in Economics 1974). This has been sufficient to fund actual lighthouses.
- Technological progress can create new public goods. The simplest examples are street lights, which are relatively recent inventions (by historical standards). One person's enjoyment of them does not detract from other persons' enjoyment, and it currently would be prohibitively expensive to charge individuals separately for the amount of light they presumably use.
- Official statistics are another example. The government's ability to collect, process and provide high-quality information to guide decision-making at all levels has been strongly advanced by technological progress. On the other hand, a public good's status may change over time. Technological progress can significantly impact excludability of traditional public goods: encryption allows broadcasters to sell individual access to their programming. The costs for electronic road pricing have fallen dramatically, paving the way for detailed billing based on actual use.
Public goods are not restricted to human beings.[24] It is one aspect of the study of cooperation in biology.[25]
Free rider problem
The free rider problem is a primary issue in collective decision-making.[26] An example is that some firms in a particular industry will choose not to participate in a lobby whose purpose is to affect government policies that could benefit the industry, under the assumption that there are enough participants to result in a favourable outcome without them. The free rider problem is also a form of market failure, in which market-like behavior of individual gain-seeking does not produce economically efficient results. The production of public goods results in positive externalities which are not remunerated. If private organizations do not reap all the benefits of a public good which they have produced, their incentives to produce it voluntarily might be insufficient. Consumers can take advantage of public goods without contributing sufficiently to their creation. This is called the free rider problem, or occasionally, the "easy rider problem". If too many consumers decide to "free-ride", private costs exceed private benefits and the incentive to provide the good or service through the market disappears. The market thus fails to provide a good or service for which there is a need.[27]
The free rider problem depends on a conception of the human being as homo economicus: purely rational and also purely selfish—extremely individualistic, considering only those benefits and costs that directly affect him or her. Public goods give such a person an incentive to be a free rider.
For example, consider national defence, a standard example of a pure public good. Suppose homo economicus thinks about exerting some extra effort to defend the nation. The benefits to the individual of this effort would be very low, since the benefits would be distributed among all of the millions of other people in the country. There is also a very high possibility that he or she could get injured or killed during the course of his or her military service. On the other hand, the free rider knows that he or she cannot be excluded from the benefits of national defense, regardless of whether he or she contributes to it. There is also no way that these benefits can be split up and distributed as individual parcels to people. The free rider would not voluntarily exert any extra effort, unless there is some inherent pleasure or material reward for doing so (for example, money paid by the government, as with an all-volunteer army or mercenaries).
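The incentive to free ride can be made concrete with a standard linear public goods game: contributions to a common pool are multiplied and shared equally, so the private return on a contributed unit is below one even though the group return exceeds one. The group size, endowment and multiplier below are arbitrary illustrative values:

```python
def payoff(own_contribution, others_contributions, endowment=20.0, mpcr=0.4):
    """Linear public goods game payoff: keep whatever is not contributed, plus a
    share (the marginal per-capita return, mpcr) of everyone's contributions.
    With 0 < mpcr < 1, contributing less always raises one's own payoff, even
    though higher total contributions raise the group's combined payoff."""
    total = own_contribution + sum(others_contributions)
    return (endowment - own_contribution) + mpcr * total

others = [10.0, 10.0, 10.0]     # what the rest of the group contributes
print(payoff(10.0, others))     # contribute too: (20 - 10) + 0.4 * 40 = 26.0
print(payoff(0.0, others))      # free ride:       20        + 0.4 * 30 = 32.0
```

Because free riding pays more regardless of what the others contribute, the stylized homo economicus contributes nothing, which is the market failure described above.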
The free-riding problem is even more complicated than it was thought to be until recently. Any time non-excludability results in failure to pay the true marginal value (often called the "demand revelation problem"), it will also result in failure to generate proper income levels, since households will not give up valuable leisure if they cannot individually increment a good.[28] This implies that, for public goods without strong special interest support, under-provision is likely since cost-benefit analysis is being conducted at the wrong income levels, and all of the un-generated income would have been spent on the public good, apart from general equilibrium considerations.
In the case of information goods, an inventor of a new product may benefit all of society, but hardly anyone is willing to pay for the invention if they can benefit from it for free. In the case of an information good, however, because of its characteristics of non-excludability and also because of almost zero reproduction costs, commoditization is difficult and not always efficient even from a neoclassical economic point of view.[29]
Efficient production levels of public goods
The Pareto optimal provision of a public good in a society occurs when the sum of the marginal valuations of the public good (taken across all individuals) is equal to the marginal cost of providing that public good. These marginal valuations are, formally, marginal rates of substitution relative to some reference private good, and the marginal cost is a marginal rate of transformation that describes how much of that private good it costs to produce an incremental unit of the public good. This contrasts to the Pareto optimality condition of private goods, which equates each consumer's valuation of the private good to its marginal cost of production.[9][30]
For an example, consider a community of just two consumers and the government is considering whether or not to build a public park. One person is prepared to pay up to $200 for its use, while the other is willing to pay up to $100. The total value to the two individuals of having the park is $300. If it can be produced for $225, there is a $75 surplus to maintaining the park, since it provides services that the community values at $300 at a cost of only $225.
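In symbols this is the Samuelson condition, and the park example can be checked against the same logic. At the efficient level of provision the summed marginal valuations equal the marginal cost; for the discrete build-or-not decision above, the summed valuations exceed the cost, so building the park improves on not building it:

```latex
\sum_{i=1}^{n} MRS_i \;=\; MRT
\qquad\text{(Samuelson condition at the efficient quantity)}

\$200 + \$100 \;=\; \$300 \;>\; \$225
\qquad\Rightarrow\qquad \text{surplus} = \$300 - \$225 = \$75
```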
The classical theory of public goods defines efficiency under idealized conditions of complete information, a situation already acknowledged in Wicksell (1896).[31] Samuelson emphasized that this poses problems for the efficient provision of public goods in practice and the assessment of an efficient Lindahl tax to finance public goods, because individuals have incentives to underreport how much they value public goods.[9] Subsequent work, especially in mechanism design and the theory of public finance developed how valuations and costs could actually be elicited in practical conditions of incomplete information, using devices such as the Vickrey–Clarke–Groves mechanism. Thus, deeper analysis of problems of public goods motivated much work that is at the heart of modern economic theory.[32]
Local public goods
The basic theory of public goods as discussed above begins with situations where the level of a public good (e.g., quality of the air) is equally experienced by everyone. However, in many important situations of interest, the incidence of benefits and costs is not so simple. For example, when people keep an office clean or monitor a neighborhood for signs of trouble, the benefits of that effort accrue to some people (those in their neighborhoods) more than to others. The overlapping structure of these neighborhoods is often modeled as a network.[33] (When neighborhoods are totally separate, i.e., non-overlapping, the standard model is the Tiebout model.)
An example of a local public good that can benefit people beyond a single neighborhood is a public bus service. A college student visiting a friend who studies in another city benefits from the service just as the residents and students of that city do, and also becomes part of its pattern of benefits and costs: riding the bus spares the visitor a walk to their destination, while others may prefer to walk precisely so that they do not contribute to the pollution emitted by motor vehicles.
Recently, economists have developed the theory of local public goods with overlapping neighborhoods, or public goods in networks: both their efficient provision, and how much can be provided voluntarily in a non-cooperative equilibrium. When it comes to socially efficient provision, networks that are more dense or close-knit in terms of how much people can benefit each other have more scope for improving on an inefficient status quo.[34] On the other hand, voluntary provision is typically below the efficient level, and equilibrium outcomes tend to involve strong specialization, with a few individuals contributing heavily and their neighbors free-riding on those contributions.[33][35]
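The specialization result can be illustrated with a small best-response simulation. The sketch below is a simplified, hypothetical version of this kind of network public-goods model, assuming a linear contribution cost and a concave benefit whose individually optimal local provision level is normalized to 1; none of the names or parameters are taken from the cited papers.

```python
# Best-response dynamics for voluntary public-good provision on a network.
# Each agent benefits from the sum of their own and their neighbours' contributions;
# the best response is to top local provision up to 1, or to contribute nothing
# if the neighbours already provide at least 1.
def best_response_dynamics(neighbors, steps=100):
    n = len(neighbors)
    x = [0.0] * n                              # start with no one contributing
    for _ in range(steps):
        changed = False
        for i in range(n):
            local = sum(x[j] for j in neighbors[i])
            target = max(0.0, 1.0 - local)     # top up local provision to 1
            if abs(target - x[i]) > 1e-9:
                x[i] = target
                changed = True
        if not changed:                        # no one wants to deviate: Nash equilibrium
            break
    return x

# A star network: agent 0 is linked to agents 1-4.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(best_response_dynamics(star))
# Prints [1.0, 0.0, 0.0, 0.0, 0.0]: the centre specializes, the leaves free-ride.
```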
Global public goods
Academic discussion around the concept of public goods that are global in scope became prevalent in the 20th century with the rise of issues such as global warming and nuclear proliferation.[36] Global public goods are a unique category of public goods in that their benefits are non-rivalrous, non-excludable and are present on or affect large parts of the world.[37] Some examples of goods that fit this category include:
- Identification and containment of virulent pathogens (e.g. SARS-CoV-2, Cholera, Ebola virus)
- Geostationary orbit allocation
- Climate change action
- Solar radiation
These goods transcend national boundaries, and the relevant economic agents are nation states, global organisations or institutions. This differs from non-global public goods, where the economic agents are individuals or firms.[37] The containment and identification of SARS-CoV-2 during the COVID-19 pandemic is an example of a global public good: all countries benefit from containment of the highly contagious virus and from the development of a vaccine, regardless of whether they invest in its containment or in vaccine development.
Resolving the distribution of costs associated with maintaining or producing global public goods often requires significant cooperation between international bodies and governments. For example, acid rain produced by the sulphur emissions of power plants and factories was a significant issue in the 1980s and 1990s that affected large parts of Europe.[37] A trans-European solution was enacted in the Helsinki Protocol of 1987 and the Oslo Protocol of 1998, which sought to reduce sulphur emissions by 30% and 80% respectively relative to 1980 emission levels.[37]
However, significant economic issues often impede international cooperation attempts. The final cost-benefit values for participating countries under an international agreement would vary significantly from agent to agent, even when using a Pareto optimal solution for the world.[36] This would mean that for some countries, participating in the agreement is not an economically rational decision, despite the supposed benefit to the world as a public good.
Ownership
Economic theorists such as Oliver Hart (1995) have emphasized that ownership matters for investment incentives when contracts are incomplete.[38] The incomplete contracting paradigm has been applied to public goods by Besley and Ghatak (2001).[39] They consider the government and a non-governmental organization (NGO) who can both make investments to provide a public good. Besley and Ghatak argue that the party who has a larger valuation of the public good should be the owner, regardless of whether the government or the NGO has a better investment technology. This result contrasts with the case of private goods studied by Hart (1995), where the party with the better investment technology should be the owner. However, it has been shown that the investment technology may matter also in the public-good case when a party is indispensable or when there are bargaining frictions between the government and the NGO.[40][41] Halonen-Akatwijuka and Pafilis (2020) have demonstrated that Besley and Ghatak's results are not robust when there is a long-term relationship, such that the parties interact repeatedly.[42] Moreover, Schmitz (2021) has shown that when the parties have private information about their valuations of the public good, then the investment technology can be an important determinant of the optimal ownership structure.[43]
Quasi-public goods
Generally speaking, quasi-public goods are goods that approximate the non-excludability and non-rivalry of pure public goods without fully satisfying either condition. Roads are a good illustration. Once they have been provided, most people can make use of them, such as those who hold a driving licence, and, toll roads aside, there is no charge for their use, so roads are largely non-excludable. However, one person's use of a road limits the benefit available to others to some extent, because it adds to traffic congestion; in that sense road use is partly rival, since one person's enjoyment of a road can reduce someone else's. Education is another example of a quasi-public good, and different levels of schooling are classified differently: while elementary and secondary education are usually considered merit goods, higher education is better regarded as a quasi-public good. Knowledge, by comparison, is frequently referred to as a global public good (Chattopadhyay, 2012).
See also
- Anti-rival good
- Excludability
- Lindahl tax, a method proposed by Erik Lindahl for financing public goods
- Private-collective model of innovation, which explains the creation of public goods by private investors
- Public bad
- Public trust doctrine
- Public goods game, a standard of experimental economics
- Public works, government-financed constructions
- Privileged group
- Tragedy of the commons
- Tragedy of the anticommons
- Rivalry (economics)
- Quadratic funding, a mechanism to allocate funding for the production of public goods based on democratic principles
References
Knowledge is a pure public good: once something is known, that knowledge can be used by anyone, and its use by any one person does not preclude its use by others.
See also Samuelson, Paul A. (1955). "Diagrammatic Exposition of a Theory of Public Expenditure". Review of Economics and Statistics. 37 (4): 350–56. doi:10.2307/1925849. JSTOR 1925849.
- Schmitz, Patrick W. (2021). "Optimal ownership of public goods under asymmetric information". Journal of Public Economics. 198: 104424. doi:10.1016/j.jpubeco.2021.104424. ISSN 0047-2727. S2CID 236397476.
- Chattopadhyay, Saumen (2012). Education and Economics: Disciplinary Evolution and Policy Discourse. Oxford University Press. ISBN 9780198082255.
Bibliography
- Coase, Ronald (1974). "The Lighthouse in Economics". Journal of Law and Economics. 17 (2): 357–376. doi:10.1086/466796. S2CID 153715526.
Further reading
- Acoella, Nicola (2006), ‘Distributive issues in the provision and use of global public goods’, in: ‘Studi economici’, 88(1): 23–42.
- Anomaly, Jonathan (2015). "Public Goods and Government Action". Politics, Philosophy & Economics. 14 (2): 109–128. doi:10.1177/1470594X13505414. hdl:10161/9732. S2CID 154904308.
- Lipsey, Richard (2008). Economics (11 ed.). pp. 281–283.
- Zittrain, Jonathan, The Future of the Internet: And How to Stop It. 2008
- Lessig, Lawrence, Code 2.0, Chapter 7, What Things Regulate
External links
- Public Goods: A Brief Introduction, by The Linux Information Project (LINFO)
- Cowen, Tyler (2008). "Public Goods". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0-86597-665-8. OCLC 237794267.
- Global Public Goods – analysis from Global Policy Forum
- The Nature of Public Goods
- Hardin, Russell, "The Free Rider Problem", The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.)
https://en.wikipedia.org/wiki/Public_good_(economics)
In philosophy, economics, and political science, the common good (also Commonwealth, general welfare, or public benefit) is either what is shared and beneficial for all or most members of a given community, or alternatively, what is achieved by citizenship, collective action, and active participation in the realm of politics and public service. The concept of the common good differs significantly among philosophical doctrines.[1] Early conceptions of the common good were set out by Ancient Greek philosophers, including Aristotle and Plato. One understanding of the common good rooted in Aristotle's philosophy remains in common usage today, referring to what one contemporary scholar calls the "good proper to, and attainable only by, the community, yet individually shared by its members."[2]
The concept of common good developed through the work of political theorists, moral philosophers, and public economists, including Thomas Aquinas, Niccolò Machiavelli, John Locke, Jean-Jacques Rousseau, James Madison, Adam Smith, Karl Marx, John Stuart Mill, John Maynard Keynes, John Rawls, and many other thinkers. In contemporary economic theory, a common good is any good which is rivalrous yet non-excludable, while the common good, by contrast, arises in the subfield of welfare economics and refers to the outcome of a social welfare function. Such a social welfare function, in turn, would be rooted in a moral theory of the good (such as utilitarianism). Social choice theory aims to understand processes by which the common good may or may not be realized in societies through the study of collective decision rules. Public choice theory applies microeconomic methodology to the study of political science in order to explain how private interests affect political activities and outcomes.
https://en.wikipedia.org/wiki/Common_good
Human[1] | |
---|---|
An adult human male (left) and female (right) (Thailand, 2007) | |
Scientific classification | |
Kingdom: | Animalia |
Phylum: | Chordata |
Class: | Mammalia |
Order: | Primates |
Suborder: | Haplorhini |
Infraorder: | Simiiformes |
Family: | Hominidae |
Subfamily: | Homininae |
Tribe: | Hominini |
Genus: | Homo |
Species: | H. sapiens |
Binomial name | Homo sapiens Linnaeus, 1758 |
Homo sapiens population density (2005) | |
Humans (Homo sapiens) are the most common and widespread species of primate in the great ape family Hominidae. Humans are broadly characterized by their bipedalism and high intelligence. Humans' large brain and resulting cognitive skills have allowed them to thrive in a variety of environments and develop complex societies and civilizations. Humans are highly social and tend to live in complex social structures composed of many cooperating and competing groups, from families and kinship networks to political states. As such, social interactions between humans have established a wide variety of values, social norms, languages, and rituals, each of which bolsters human society. The desire to understand and influence phenomena has motivated humanity's development of science, technology, philosophy, mythology, religion, and other conceptual frameworks.
Although some scientists equate the term "humans" with all members of the genus Homo, in common usage it generally refers to Homo sapiens, the only extant member. Anatomically modern humans emerged around 300,000 years ago in Africa, evolving from Homo heidelbergensis or a similar species and migrating out of Africa, gradually replacing or interbreeding with local populations of archaic humans. For most of history, humans were nomadic hunter-gatherers. Humans began exhibiting behavioral modernity about 160,000–60,000 years ago. The Neolithic Revolution, which began in Southwest Asia around 13,000 years ago (and separately in a few other places), saw the emergence of agriculture and permanent human settlement. As populations became larger and denser, forms of governance developed within and between communities, and a large number of civilizations have risen and fallen. Humans have continued to expand, with a global population of over 8 billion as of 2022.
Genes and the environment influence human biological variation in visible characteristics, physiology, disease susceptibility, mental abilities, body size, and life span. Though humans vary in many traits (such as genetic predispositions and physical features), any two humans are at least 99% genetically similar. Humans are sexually dimorphic: generally, males have greater body strength and females have a higher body fat percentage. At puberty, humans develop secondary sexual characteristics. Females are capable of pregnancy, usually between puberty, at around 12 years old, and menopause, around the age of 50.
Humans are omnivorous, capable of consuming a wide variety of plant and animal material, and have used fire and other forms of heat to prepare and cook food since the time of Homo erectus. Humans can survive for up to eight weeks without food and three or four days without water. Humans are generally diurnal, sleeping on average seven to nine hours per day. Childbirth is dangerous, with a high risk of complications and death. Often, both the mother and the father provide care for their children, who are helpless at birth.
Humans have a large, highly developed, and complex prefrontal cortex, the region of the brain associated with higher cognition. Humans are highly intelligent, capable of episodic memory, have flexible facial expressions, self-awareness, and a theory of mind. The human mind is capable of introspection, private thought, imagination, volition, and forming views on existence. This has allowed great technological advancements and complex tool development to be possible through complex reasoning and the transmission of knowledge to subsequent generations. Language, art, and trade are defining characteristics of humans. Long-distance trade routes might have led to cultural explosions and resource distribution that gave humans an advantage over other similar species.
Etymology and definition
All modern humans are classified into the species Homo sapiens, coined by Carl Linnaeus in his 1735 work Systema Naturae.[2] The generic name "Homo" is a learned 18th-century derivation from Latin homō, which refers to humans of either sex.[3][4] The word human can refer to all members of the Homo genus,[5] although in common usage it generally just refers to Homo sapiens, the only extant species.[6] The name "Homo sapiens" means 'wise man' or 'knowledgeable man'.[7] There is disagreement if certain extinct members of the genus, namely Neanderthals, should be included as a separate species of humans or as a subspecies of H. sapiens.[5]
Human is a loanword of Middle English from Old French humain, ultimately from Latin hūmānus, the adjectival form of homō ('man' – in the sense of humankind).[8] The native English term man can refer to the species generally (a synonym for humanity) as well as to human males. It may also refer to individuals of either sex, though this form is less common in contemporary English.[9]
Despite the fact that the word animal is colloquially used as an antonym for human,[10] and contrary to a common biological misconception, humans are animals.[11] The word person is often used interchangeably with human, but philosophical debate exists as to whether personhood applies to all humans or all sentient beings, and further if one can lose personhood (such as by going into a persistent vegetative state).[12]
Evolution
Humans are apes (superfamily Hominoidea).[13] The lineage of apes that eventually gave rise to humans first split from gibbons (family Hylobatidae) and orangutans (genus Pongo), then gorillas (genus Gorilla), and finally, chimpanzees and bonobos (genus Pan). The last split, between the human and chimpanzee–bonobo lineages, took place around 8–4 million years ago, in the late Miocene epoch.[14][15] During this split, chromosome 2 was formed from the joining of two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes.[16] Following their split with chimpanzees and bonobos, the hominins diversified into many species and at least two distinct genera. All but one of these lineages – representing the genus Homo and its sole extant species Homo sapiens – are now extinct.[17]
[Cladogram of Hominoidea (hominoids, apes)]
The genus Homo evolved from Australopithecus.[18][19] Though fossils from the transition are scarce, the earliest members of Homo share several key traits with Australopithecus.[20][21] The earliest record of Homo is the 2.8 million-year-old specimen LD 350-1 from Ethiopia, and the earliest named species are Homo habilis and Homo rudolfensis which evolved by 2.3 million years ago.[21] H. erectus (the African variant is sometimes called H. ergaster) evolved 2 million years ago and was the first archaic human species to leave Africa and disperse across Eurasia.[22] H. erectus also was the first to evolve a characteristically human body plan. Homo sapiens emerged in Africa around 300,000 years ago from a species commonly designated as either H. heidelbergensis or H. rhodesiensis, the descendants of H. erectus that remained in Africa.[23] H. sapiens migrated out of the continent, gradually replacing or interbreeding with local populations of archaic humans.[24][25][26] Humans began exhibiting behavioral modernity about 160,000–70,000 years ago,[27] and possibly earlier.[28]
The "out of Africa" migration took place in at least two waves, the first around 130,000 to 100,000 years ago, the second (Southern Dispersal) around 70,000 to 50,000 years ago.[29][30] H. sapiens proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000 years ago,[31][32] Australia around 65,000 years ago,[33] the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand between the years 300 and 1280 CE.[34][35]
Human evolution was not a simple linear or branched progression but involved interbreeding between related species.[36][37][38] Genomic research has shown that hybridization between substantially diverged lineages was common in human evolution.[39] DNA evidence suggests that several genes of Neanderthal origin are present among all modern human populations outside sub-Saharan Africa, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans outside sub-Saharan Africa.[36][40][41]
Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are obligate bipedalism, increased brain size and decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate.[42]
History
Until about 12,000 years ago, all humans lived as hunter-gatherers.[43][44] The Neolithic Revolution (the invention of agriculture) first took place in Southwest Asia and spread through large parts of the Old World over the following millennia.[45] It also occurred independently in Mesoamerica (about 6,000 years ago),[46] China,[47][48] Papua New Guinea,[49] and the Sahel and West Savanna regions of Africa.[50][51][52] Access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history. Agriculture and sedentary lifestyle led to the emergence of early civilizations.[53][54][55]
An urban revolution took place in the 4th millennium BCE with the development of city-states, particularly Sumerian cities located in Mesopotamia.[56] It was in these cities that the earliest known form of writing, cuneiform script, appeared around 3000 BCE.[57] Other major civilizations to develop around this time were Ancient Egypt and the Indus Valley Civilisation.[58] They eventually traded with each other and invented technology such as wheels, plows and sails.[59][60][61][62] Astronomy and mathematics were also developed and the Great Pyramid of Giza was built.[63][64][65] There is evidence of a severe drought lasting about a hundred years that may have caused the decline of these civilizations,[66] with new ones appearing in the aftermath. Babylonians came to dominate Mesopotamia while others,[67] such as the Poverty Point culture, Minoans and the Shang dynasty, rose to prominence in new areas.[68][69][70] The Late Bronze Age collapse around 1200 BCE resulted in the disappearance of a number of civilizations and the beginning of the Greek Dark Ages.[71][72] During this period iron started replacing bronze, leading to the Iron Age.[73]
In the 5th century BCE, history started being recorded as a discipline, which provided a much clearer picture of life at the time.[74] Between the 8th and 6th century BCE, Europe entered the classical antiquity age, a period when ancient Greece and ancient Rome flourished.[75][76] Around this time other civilizations also came to prominence. The Maya civilization started to build cities and create complex calendars.[77][78] In Africa, the Kingdom of Aksum overtook the declining Kingdom of Kush and facilitated trade between India and the Mediterranean.[79] In West Asia, the Achaemenid Empire's system of centralized governance became the precursor to many later empires,[80] while the Gupta Empire in India and the Han dynasty in China have been described as golden ages in their respective regions.[81][82]
Following the fall of the Western Roman Empire in 476, Europe entered the Middle Ages.[83] During this period, Christianity and the Church would provide centralized authority and education.[84] In the Middle East, Islam became the prominent religion and expanded into North Africa. It led to an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life.[85][86] The Christian and Islamic worlds would eventually clash, with the Kingdom of England, the Kingdom of France and the Holy Roman Empire declaring a series of holy wars to regain control of the Holy Land from Muslims.[87] In the Americas, complex Mississippian societies would arise starting around 800 CE,[88] while further south, the Aztecs and Incas would become the dominant powers.[89] The Mongol Empire would conquer much of Eurasia in the 13th and 14th centuries.[90] Over this same time period, the Mali Empire in Africa grew to be the largest empire on the continent, stretching from Senegambia to Ivory Coast.[91] Oceania would see the rise of the Tuʻi Tonga Empire which expanded across many islands in the South Pacific.[92]
The early modern period in Europe and the Near East (c. 1450–1800) began with the final defeat of the Byzantine Empire, and the rise of the Ottoman Empire.[93] Meanwhile, Japan entered the Edo period,[94] the Qing dynasty rose in China[95] and the Mughal Empire ruled much of India.[96] Europe underwent the Renaissance, starting in the 15th century,[97] and the Age of Discovery began with the exploring and colonizing of new regions.[98] This includes the British Empire expanding to become the world's largest empire[99] and the colonization of the Americas.[100] This expansion led to the Atlantic slave trade[101] and the genocide of Native American peoples.[102] This period also marked the Scientific Revolution, with great advances in mathematics, mechanics, astronomy and physiology.[103]
The late modern period (1800–present) saw the Technological and Industrial Revolution bring such discoveries as imaging technology, major innovations in transport and energy development.[104] The United States of America underwent great change, going from a small group of colonies to one of the global superpowers.[105] The Napoleonic Wars raged through Europe in the early 1800s,[106] Spain lost most of its colonies in the New World,[107] while Europeans continued expansion into Africa – where European control went from 10% to almost 90% in less than 50 years[108] – and Oceania.[109] A tenuous balance of power among European nations collapsed in 1914 with the outbreak of the First World War, one of the deadliest conflicts in history.[110] In the 1930s, a worldwide economic crisis led to the rise of authoritarian regimes and a Second World War, involving almost all of the world's countries.[111] Following its conclusion in 1945, the Cold War between the USSR and the United States saw a struggle for global influence, including a nuclear arms race and a space race.[112][113] The current Information Age sees the world becoming increasingly globalized and interconnected.[114]
Habitat and population
World population | 8 billion |
---|---|
Population density | 16/km2 (41/sq mi) by total area; 54/km2 (139/sq mi) by land area |
Largest cities[n 2] | Tokyo, Delhi, Shanghai, São Paulo, Mexico City, Cairo, Mumbai, Beijing, Dhaka, Osaka, New York-Newark, Karachi, Buenos Aires, Chongqing, Istanbul, Kolkata, Manila, Lagos, Rio de Janeiro, Tianjin, Kinshasa, Guangzhou, Los Angeles-Long Beach-Santa Ana, Moscow, Shenzhen, Lahore, Bangalore, Paris, Jakarta, Chennai, Lima, Bogota, Bangkok, London |
Early human settlements were dependent on proximity to water and – depending on the lifestyle – other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock.[118] Modern humans, however, have a great capacity for altering their habitats by means of technology, irrigation, urban planning, construction, deforestation and desertification.[119] Human settlements continue to be vulnerable to natural disasters, especially those placed in hazardous locations and with low quality of construction.[120] Grouping and deliberate habitat alteration is often done with the goals of providing protection, accumulating comforts or material wealth, expanding the available food, improving aesthetics, increasing knowledge or enhancing the exchange of resources.[121]
Humans are one of the most adaptable species, despite having a low or narrow tolerance for many of the earth's extreme environments.[122] Through advanced tools, humans have been able to extend their tolerance to a wide variety of temperatures, humidity, and altitudes.[122] As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforest, arid desert, extremely cold arctic regions, and heavily polluted cities; in comparison, most other species are confined to a few geographical areas by their limited adaptability.[123] The human population is not, however, uniformly distributed on the Earth's surface, because the population density varies from one region to another, and large stretches of surface are almost completely uninhabited, like Antarctica and vast swathes of the ocean.[122][124] Most humans (61%) live in Asia; the remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%).[125]
Within the last century, humans have explored challenging environments such as Antarctica, the deep sea, and outer space.[126] Human habitation within these hostile environments is restrictive and expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions.[126] Humans have briefly visited the Moon and made their presence felt on other celestial bodies through human-made robotic spacecraft.[127][128][129] Since the early 20th century, there has been continuous human presence in Antarctica through research stations and, since 2000, in space through habitation on the International Space Station.[130]
Estimates of the population at the time agriculture emerged in around 10,000 BC have ranged between 1 million and 15 million.[132][133] Around 50–60 million people lived in the combined eastern and western Roman Empire in the 4th century AD.[134] Bubonic plagues, first recorded in the 6th century AD, reduced the population by 50%, with the Black Death killing 75–200 million people in Eurasia and North Africa alone.[135] Human population is believed to have reached one billion in 1800. It has since then increased exponentially, reaching two billion in 1930 and three billion in 1960, four in 1975, five in 1987 and six billion in 1999.[136] It passed seven billion in 2011[137] and eight billion in November 2022.[138] It took over two million years of human prehistory and history for the human population to reach one billion and only 207 years more to grow to 7 billion.[139] The combined carbon biomass of all humans on Earth in 2018 was estimated at 60 million tons, about 10 times larger than that of all non-domesticated mammals.[131]
In 2018, 4.2 billion humans (55%) lived in urban areas, up from 751 million in 1950.[140] The most urbanized regions are Northern America (82%), Latin America (81%), Europe (74%) and Oceania (68%), with Africa and Asia having nearly 90% of the world's 3.4 billion rural population.[140] Problems for humans living in cities include various forms of pollution and crime,[141] especially in inner city and suburban slums. Humans have had a dramatic effect on the environment. They are apex predators, being rarely preyed upon by other species.[142] Human population growth, industrialization, land development, overconsumption and combustion of fossil fuels have led to environmental destruction and pollution that significantly contributes to the ongoing mass extinction of other forms of life.[143][144]
Biology
Anatomy and physiology
Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The human body consists of the legs, the torso, the arms, the neck, and the head. An adult human body consists of about 100 trillion (10^14) cells. The most commonly defined body systems in humans are the nervous, the cardiovascular, the digestive, the endocrine, the immune, the integumentary, the lymphatic, the musculoskeletal, the reproductive, the respiratory, and the urinary system.[145][146] The dental formula of humans is 2.1.2.3 / 2.1.2.3 (upper and lower, per quadrant). Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent.[147]
Humans share with chimpanzees a vestigial tail, appendix, flexible shoulder joints, grasping fingers and opposable thumbs.[148] Apart from bipedalism and brain size, humans differ from chimpanzees mostly in smelling, hearing and digesting proteins.[149] While humans have a density of hair follicles comparable to other apes, it is predominantly vellus hair, most of which is so short and wispy as to be practically invisible.[150][151] Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and are mainly located on the palm of the hand and on the soles of the feet.[152]
It is estimated that the worldwide average height for an adult human male is about 171 cm (5 ft 7 in), while the worldwide average height for adult human females is about 159 cm (5 ft 3 in).[153] Shrinkage of stature may begin in middle age in some individuals but tends to be typical in the extremely aged.[154] Throughout history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions.[155] The average mass of an adult human is 59 kg (130 lb) for females and 77 kg (170 lb) for males.[156][157] Like many other conditions, body weight and body type are influenced by both genetic susceptibility and environment and varies greatly among individuals.[158][159]
Humans have a far faster and more accurate throw than other animals.[160] Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances.[161][149] Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.[162]
Genetics
Like most animals, humans are a diploid and eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes, there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY.[163] Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood.[164][165]
While no humans – not even monozygotic twins – are genetically identical,[166] two humans on average will have a genetic similarity of 99.5%-99.9%.[167][168] This makes them more homogeneous than other great apes, including chimpanzees.[169][170] This small variation in human DNA compared to many other species suggests a population bottleneck during the Late Pleistocene (around 100,000 years ago), in which the human population was reduced to a small number of breeding pairs.[171][172] The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome display directional selection in the past 15,000 years.[173]
The human genome was first sequenced in 2001[174] and by 2020 hundreds of thousands of genomes had been sequenced.[175] In 2012 the International HapMap Project had compared the genomes of 1,184 individuals from 11 populations and identified 1.6 million single nucleotide polymorphisms.[176] African populations harbor the highest number of private genetic variants. While many of the common variants found in populations outside of Africa are also found on the African continent, there are still large numbers that are private to these regions, especially Oceania and the Americas.[177] By 2010 estimates, humans have approximately 22,000 genes.[178] By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago.[179][180][181][182]
Life cycle
Most human reproduction takes place by internal fertilization via sexual intercourse, but can also occur through assisted reproductive technology procedures.[183] The average gestation period is 38 weeks, but a normal pregnancy can vary by up to 37 days.[184] Embryonic development in the human covers the first eight weeks of development; at the beginning of the ninth week the embryo is termed a fetus.[185] Humans are able to induce early labor or perform a caesarean section if the child needs to be born earlier for medical reasons.[186] In developed countries, infants are typically 3–4 kg (7–9 lb) in weight and 47–53 cm (19–21 in) in height at birth.[187][188] However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions.[189]
Compared with other species, human childbirth is dangerous, with a much higher risk of complications and death.[190] The size of the fetus's head is more closely matched to the pelvis than in other primates.[191] The reason for this is not completely understood,[n 3] but it contributes to a painful labor that can last 24 hours or more.[193] The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.[194]
Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly done by the mother.[195] Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 15 to 17 years of age.[196][197][198] The human life span has been split into various stages ranging from three to twelve. Common stages include infancy, childhood, adolescence, adulthood and old age.[199] The lengths of these stages have varied across cultures and time periods but are typified by an unusually rapid growth spurt during adolescence.[200] Human females undergo menopause and become infertile at around the age of 50.[201] It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.[202][203]
The life span of an individual depends on two major factors, genetics and lifestyle choices.[204] For various reasons, including biological/genetic causes, women live on average about four years longer than men.[205] As of 2018, the global average life expectancy at birth of a girl is estimated to be 74.9 years compared to 70.4 for a boy.[206][207] There are significant geographical variations in human life expectancy, mostly correlated with economic development – for example, life expectancy at birth in Hong Kong is 87.6 years for girls and 81.8 for boys, while in the Central African Republic, it is 55.0 years for girls and 50.6 for boys.[208][209] The developed world is generally aging, with the median age around 40 years. In the developing world, the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older.[210] In 2012, the United Nations estimated that there were 316,600 living centenarians (humans of age 100 or older) worldwide.[211]
Human life stages | ||||
---|---|---|---|---|
Infant boy and girl | Boy and girl before puberty (children) | Adolescent male and female | Adult man and woman | Elderly man and woman |
Diet
Humans are omnivorous, capable of consuming a wide variety of plant and animal material.[212][213] Human groups have adopted a range of diets from purely vegan to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources.[214] The human diet is prominently reflected in human culture and has led to the development of food science.[215]
Until the development of agriculture approximately 10,000 years ago, Homo sapiens employed a hunter-gatherer method as their sole means of food collection.[215] This involved combining stationary food sources (such as fruits, grains, tubers, and mushrooms, insect larvae and aquatic mollusks) with wild game, which must be hunted and captured in order to be consumed.[216] It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus.[217] Around ten thousand years ago, humans developed agriculture,[218][219][220] which substantially altered their diet. This change in diet may also have altered human biology; with the spread of dairy farming providing a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults.[221][222] The types of food consumed, and how they are prepared, have varied widely by time, location, and culture.[223][224]
In general, humans can survive for up to eight weeks without food, depending on stored body fat.[225] Survival without water is usually limited to three or four days, with a maximum of one week.[226] As of 2020, an estimated 9 million humans die every year from causes directly or indirectly related to starvation.[227][228] Childhood malnutrition is also common and contributes to the global burden of disease.[229] However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed and a few developing countries. Worldwide, over one billion people are obese,[230] while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic."[231] Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet.[230]
Biological variation
There is biological variation in the human species – with traits such as blood type, genetic diseases, cranial features, facial features, organ systems, eye color, hair color and texture, height and build, and skin color varying across the globe. The typical height of an adult human is between 1.4 and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending on sex, ethnic origin, and family bloodlines.[232][233] Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns.[234]
There is evidence that populations have adapted genetically to various external factors. The genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication and are more dependent on cow milk.[235] Sickle cell anemia, which may provide increased resistance to malaria, is frequent in populations where malaria is endemic.[236][237] Populations that have for a very long time inhabited specific climates tend to have developed specific phenotypes that are beneficial for those environments – short stature and stocky build in cold regions, tall and lanky in hot regions, and with high lung capacities or other adaptations at high altitudes.[238] Some populations have evolved highly unique adaptations to very specific environmental conditions, such as those advantageous to ocean-dwelling lifestyles and freediving in the Bajau.[239]
Human hair ranges in color from red to blond to brown to black, which is the most frequent.[240] Hair color depends on the amount of melanin, with concentrations fading with increased age, leading to grey or even white hair. Skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism.[241] It tends to vary clinally and generally correlates with the level of ultraviolet radiation in a particular geographic area, with darker skin mostly around the equator.[242] Skin darkening may have evolved as protection against ultraviolet solar radiation.[243] Light skin pigmentation protects against depletion of vitamin D, which requires sunlight to make.[244] Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation.[245][246]
There is relatively little variation between human geographical populations, and most of the variation that occurs is at the individual level.[241][247][248] Much of human variation is continuous, often with no clear points of demarcation.[249][250][251][252] Genetic data shows that no matter how population groups are defined, two people from the same population group are almost as different from each other as two people from any two different population groups.[253][254][255] Dark-skinned populations that are found in Africa, Australia, and South Asia are not closely related to each other.[256][257]
Genetic research has demonstrated that human populations native to the African continent are the most genetically diverse[258] and genetic diversity decreases with migratory distance from Africa, possibly the result of bottlenecks during human migration.[259][260] These non-African populations acquired new genetic inputs from local admixture with archaic populations and have much greater variation from Neanderthals and Denisovans than is found in Africa,[177] though Neanderthal admixture into African populations may be underestimated.[261] Furthermore, recent studies have found that populations in sub-Saharan Africa, and particularly West Africa, have ancestral genetic variation which predates modern humans and has been lost in most non-African populations. Some of this ancestry is thought to originate from admixture with an unknown archaic hominin that diverged before the split of Neanderthals and modern humans.[262][263]
Humans are a gonochoric species, meaning they are divided into male and female sexes.[264][265][266] The greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%–0.5%, the genetic difference between males and females is between 1% and 2%. Males on average are 15% heavier and 15 cm (6 in) taller than females.[267][268] On average, men have about 40–50% more upper body strength and 20–30% more lower body strength than women at the same weight, due to higher amounts of muscle and larger muscle fibers.[269] Women generally have a higher body fat percentage than men.[270] Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D in females during pregnancy and lactation.[271] As there are chromosomal differences between females and males, some X and Y chromosome-related conditions and disorders only affect either men or women.[272] After allowing for body weight and volume, the male voice is usually an octave deeper than the female voice.[273] Women have a longer life span in almost every population around the world.[274] There are intersex conditions in the human population, although these are rare.[275]
Psychology
The human brain, the focal point of the central nervous system in humans, controls the peripheral nervous system. In addition to controlling "lower," involuntary, or primarily autonomic activities such as respiration and digestion, it is also the locus of "higher" order functioning such as thought, reasoning, and abstraction.[276] These cognitive processes constitute the mind, and, along with their behavioral consequences, are studied in the field of psychology.
Humans have a larger and more developed prefrontal cortex than other primates, the region of the brain associated with higher cognition.[277] This has led humans to proclaim themselves to be more intelligent than any other known species.[278] Objectively defining intelligence is difficult, however, since other animals rely on different senses and excel in areas where humans cannot.[279]
There are some traits that, although not strictly unique, do set humans apart from other animals.[280] Humans may be the only animals who have episodic memory and who can engage in "mental time travel".[281] Even compared with other social animals, humans have an unusually high degree of flexibility in their facial expressions.[282] Humans are the only animals known to cry emotional tears.[283] Humans are one of the few animals able to self-recognize in mirror tests[284] and there is also debate over to what extent humans are the only animals with a theory of mind.[285]
Sleep and dreaming
Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours. Having less sleep than this is common among humans, even though sleep deprivation can have negative health effects. A sustained restriction of adult sleep to four hours per day has been shown to correlate with changes in physiology and mental state, including reduced memory, fatigue, aggression, and bodily discomfort.[286]
During sleep humans dream, where they experience sensory images and sounds. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep.[287] The length of a dream can vary, from a few seconds up to 30 minutes.[288] Humans have three to five dreams per night, and some may have up to seven.[289] Dreamers are more likely to remember the dream if awakened during the REM phase. The events in dreams are generally outside the control of the dreamer, with the exception of lucid dreaming, where the dreamer is self-aware.[290] Dreams can at times make a creative thought occur or give a sense of inspiration.[291]
Consciousness and thought
Human consciousness, at its simplest, is sentience or awareness of internal or external existence.[292] Despite centuries of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial,[293] being "at once the most familiar and most mysterious aspect of our lives".[294] The only widely agreed notion about the topic is the intuition that it exists.[295] Opinions differ about what exactly needs to be studied and explained as consciousness. Some philosophers divide consciousness into phenomenal consciousness, which is sensory experience itself, and access consciousness, which can be used for reasoning or directly controlling actions.[296] It is sometimes synonymous with 'the mind', and at other times, an aspect of it. Historically it is associated with introspection, private thought, imagination and volition.[297] It now often includes some kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness.[298] There might be different levels or orders of consciousness,[299] or different kinds of consciousness, or just one kind with different features.[300]
The process of acquiring knowledge and understanding through thought, experience, and the senses is known as cognition.[301] The human brain perceives the external world through the senses, and each individual human is influenced greatly by his or her experiences, leading to subjective views of existence and the passage of time.[302] The nature of thought is central to psychology and related fields. Cognitive psychology studies cognition, the mental processes underlying behavior.[303] Largely focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age.[304][305] This may focus on intellectual, cognitive, neural, social, or moral development. Psychologists have developed intelligence tests and the concept of intelligence quotient in order to assess the relative intelligence of human beings and study its distribution among population.[306]
Motivation and emotion
Human motivation is not yet wholly understood. From a psychological perspective, Maslow's hierarchy of needs is a well-established theory that can be defined as the process of satisfying certain needs in ascending order of complexity.[307] From a more general, philosophical perspective, human motivation can be defined as a commitment to, or withdrawal from, various goals requiring the application of human ability. Furthermore, incentive and preference are both factors, as are any perceived links between incentives and preferences. Volition may also be involved, in which case willpower is also a factor. Ideally, both motivation and volition ensure the selection, striving for, and realization of goals in an optimal manner, a function beginning in childhood and continuing throughout a lifetime in a process known as socialization.[308]
Emotions are biological states associated with the nervous system[309][310] brought on by neurophysiological changes variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure.[311][312] They are often intertwined with mood, temperament, personality, disposition, creativity,[313] and motivation. Emotion has a significant influence on human behavior and their ability to learn.[314] Acting on extreme or uncontrolled emotions can lead to social disorder and crime,[315] with studies showing criminals may have a lower emotional intelligence than normal.[316]
Emotional experiences perceived as pleasant, such as joy, interest or contentment, contrast with those perceived as unpleasant, like anxiety, sadness, anger, and despair.[317] Happiness, or the state of being happy, is a human emotional condition. The definition of happiness is a common philosophical topic. Some define it as experiencing the feeling of positive emotional affects, while avoiding the negative ones.[318][319] Others see it as an appraisal of life satisfaction or quality of life.[320] Recent research suggests that being happy might involve experiencing some negative emotions when humans feel they are warranted.[321]
Sexuality and love
For humans, sexuality involves biological, erotic, physical, emotional, social, or spiritual feelings and behaviors.[322][323] Because it is a broad term, which has varied with historical contexts over time, it lacks a precise definition.[323] The biological and physical aspects of sexuality largely concern the human reproductive functions, including the human sexual response cycle.[322][323] Sexuality also affects and is affected by cultural, political, legal, philosophical, moral, ethical, and religious aspects of life.[322][323] Sexual desire, or libido, is a basic mental state present at the beginning of sexual behavior. Studies show that men desire sex more than women and masturbate more often.[324]
Humans can fall anywhere along a continuous scale of sexual orientation,[325] although most humans are heterosexual.[326][327] While homosexual behavior occurs in some other animals, only humans and domestic sheep have so far been found to exhibit exclusive preference for same-sex relationships.[326] Most evidence supports nonsocial, biological causes of sexual orientation,[326] as cultures that are very tolerant of homosexuality do not have significantly higher rates of it.[327][328] Research in neuroscience and genetics suggests that other aspects of human sexuality are biologically influenced as well.[329]
Love most commonly refers to a feeling of strong attraction or emotional attachment. It can be impersonal (the love of an object, ideal, or strong political or spiritual connection) or interpersonal (love between humans).[330] When in love, dopamine, norepinephrine, serotonin and other chemicals stimulate the brain's pleasure center, leading to side effects such as increased heart rate, loss of appetite and sleep, and an intense feeling of excitement.[331]
Culture
| Most widely spoken languages[332][333] | English, Mandarin Chinese, Hindi, Spanish, Standard Arabic, Bengali, French, Russian, Portuguese, Urdu |
| --- | --- |
| Most practiced religions[333][334] | Christianity, Islam, Hinduism, Buddhism, folk religions, Sikhism, Judaism, unaffiliated |
Humanity's unprecedented set of intellectual skills was a key factor in the species' eventual technological advancement and concomitant domination of the biosphere.[335] Disregarding extinct hominids, humans are the only animals known to teach generalizable information,[336] innately deploy recursive embedding to generate and communicate complex concepts,[337] engage in the "folk physics" required for competent tool design,[338][339] or cook food in the wild.[340] Teaching and learning preserve the cultural and ethnographic identity of human societies.[341] Other traits and behaviors that are mostly unique to humans include starting fires,[342] phoneme structuring[343] and vocal learning.[344]
Language
While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal.[345] Unlike the limited systems of other animals, human language is open – an infinite number of meanings can be produced by combining a limited number of symbols.[346][347] Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring but reside in the shared imagination of interlocutors.[147]
Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media, audibly in speech, visually by sign language or writing, and through tactile media such as braille.[348] Language is central to the communication between humans, and to the sense of identity that unites nations, cultures and ethnic groups.[349] There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct.[350]
The arts
Human arts can take many forms including visual, literary and performing. Visual art can range from paintings and sculptures to film, interaction design and architecture.[351] Literary arts can include prose, poetry and dramas; while the performing arts generally involve theatre, music and dance.[352][353] Humans often combine the different forms (for example, music videos).[354] Other entities that have been described as having artistic qualities include food preparation, video games and medicine.[355][356][357] As well as providing entertainment and transferring knowledge, the arts are also used for political purposes.[358]
Art is a defining characteristic of humans and there is evidence for a relationship between creativity and language.[359] The earliest evidence of art was shell engravings made by Homo erectus 300,000 years before modern humans evolved.[360] Art attributed to H. sapiens existed at least 75,000 years ago, with jewellery and drawings found in caves in South Africa.[361][362] There are various hypotheses as to why humans have adapted to the arts. These include allowing them to better solve problems, providing a means to control or influence other humans, encouraging cooperation and contribution within a society or increasing the chance of attracting a potential mate.[363] The use of imagination developed through art, combined with logic, may have given early humans an evolutionary advantage.[359]
Evidence of humans engaging in musical activities predates cave art, and so far music has been practiced by virtually all known human cultures.[364] There exists a wide variety of music genres and ethnic musics, and humans' musical abilities are related to other abilities, including complex social behaviours.[364] It has been shown that human brains respond to music by becoming synchronized with the rhythm and beat, a process called entrainment.[365] Dance is also a form of human expression found in all cultures[366] and may have evolved as a way to help early humans communicate.[367] Listening to music and observing dance stimulates the orbitofrontal cortex and other pleasure-sensing areas of the brain.[368]
Unlike speaking, reading and writing do not come naturally to humans and must be taught.[369] Still, storytelling has been present since before the invention of writing, with 30,000-year-old paintings on walls inside some caves portraying a series of dramatic scenes.[370] One of the oldest surviving works of literature is the Epic of Gilgamesh, first engraved on ancient Babylonian tablets about 4,000 years ago.[371] Beyond simply passing down knowledge, the use and sharing of imaginative fiction through stories might have helped develop humans' capabilities for communication and increased the likelihood of securing a mate.[372] Storytelling may also be used as a way to provide the audience with moral lessons and encourage cooperation.[370]
Tools and technologies
Stone tools were used by proto-humans at least 2.5 million years ago.[374] The use and manufacture of tools has been put forward as the ability that defines humans more than anything else[375] and has historically been seen as an important evolutionary step.[376] The technology became much more sophisticated about 1.8 million years ago,[375] with the controlled use of fire beginning around 1 million years ago.[377][378] The wheel and wheeled vehicles appeared simultaneously in several regions some time in the fourth millennium BC.[60] The development of more complex tools and technologies allowed land to be cultivated and animals to be domesticated, thus proving essential in the development of agriculture – what is known as the Neolithic Revolution.[379]
China developed paper, printing, gunpowder, the compass and other important inventions.[380] The continued improvements in smelting allowed forging of copper, bronze, iron and eventually steel, which is used in railways, skyscrapers and many other products.[381] This coincided with the Industrial Revolution, where the invention of automated machines brought major changes to humans' lifestyles.[382] Modern technology has been observed to progress exponentially,[383] with major innovations in the 20th century including: electricity, penicillin, semiconductors, internal combustion engines, the Internet, nitrogen-fixing fertilisers, airplanes, computers, automobiles, contraceptive pills, nuclear fission, the green revolution, radio, scientific plant breeding, rockets, air conditioning, television and the assembly line.[384]
Religion and spirituality
Religion is generally defined as a belief system concerning the supernatural, sacred or divine, and practices, values, institutions and rituals associated with such belief. Some religions also have a moral code. The evolution and the history of the first religions have recently become areas of active scientific investigation.[385][386][387] While the exact time when humans first became religious remains unknown, research shows credible evidence of religious behaviour from around the Middle Paleolithic era (45–200 thousand years ago).[388] It may have evolved to play a role in helping enforce and encourage cooperation between humans.[389]
There is no accepted academic definition of what constitutes religion.[390] Religion has taken on many forms that vary by culture and individual perspective in alignment with the geographic, social, and linguistic diversity of the planet.[390] Religion can include a belief in life after death (commonly involving belief in an afterlife),[391] the origin of life,[392] the nature of the universe (religious cosmology) and its ultimate fate (eschatology), and what is moral or immoral.[393] A common source for answers to these questions are beliefs in transcendent divine beings such as deities or a singular God, although not all religions are theistic.[394][395]
Although the exact level of religiosity can be hard to measure,[396] a majority of humans profess some variety of religious or spiritual belief.[397] In 2015 the plurality were Christian followed by Muslims, Hindus and Buddhists.[398] As of 2015, about 16%, or slightly under 1.2 billion humans, were irreligious, including those with no religious beliefs or no identity with any religion.[399]
Science and philosophy
An aspect unique to humans is their ability to transmit knowledge from one generation to the next and to continually build on this information to develop tools, scientific laws and other advances to pass on further.[400] This accumulated knowledge can be tested to answer questions or make predictions about how the universe functions and has been very successful in advancing human ascendancy.[401]
Aristotle has been described as the first scientist,[402] and his work preceded the rise of scientific thought through the Hellenistic period.[403] Other early advances in science came from the Han Dynasty in China and during the Islamic Golden Age.[404][85] The scientific revolution, near the end of the Renaissance, led to the emergence of modern science.[405]
A chain of events and influences led to the development of the scientific method, a process of observation and experimentation that is used to differentiate science from pseudoscience.[406] An understanding of mathematics is unique to humans, although other species of animals have some numerical cognition.[407] All of science can be divided into three major branches, the formal sciences (e.g., logic and mathematics), which are concerned with formal systems, the applied sciences (e.g., engineering, medicine), which are focused on practical applications, and the empirical sciences, which are based on empirical observation and are in turn divided into natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., psychology, economics, sociology).[408]
Philosophy is a field of study where humans seek to understand fundamental truths about themselves and the world in which they live.[409] Philosophical inquiry has been a major feature in the development of humans' intellectual history.[410] It has been described as the "no man's land" between definitive scientific knowledge and dogmatic religious teachings.[411] Philosophy relies on reason and evidence, unlike religion, but does not require the empirical observations and experiments provided by science.[412] Major fields of philosophy include metaphysics, epistemology, logic, and axiology (which includes ethics and aesthetics).[413]
Society
Society is the system of organizations and institutions arising from interaction between humans. Humans are highly social and tend to live in large complex social groups. They can be divided into different groups according to their income, wealth, power, reputation and other factors. The structure of social stratification and the degree of social mobility differ, especially between modern and traditional societies.[414][unreliable source?] Human groups range in size from families to nations. The first form of human social organization is thought to have resembled hunter-gatherer band societies.[415][better source needed]
Gender
Human societies typically exhibit gender identities and gender roles that distinguish between masculine and feminine characteristics and prescribe the range of acceptable behaviours and attitudes for their members based on their sex.[416][417] The most common categorisation is a gender binary of men and women.[418] Many societies recognise a third gender,[419] or less commonly a fourth or fifth.[420][421] In some other societies, non-binary is used as an umbrella term for a range of gender identities that are not solely male or female.[422]
Gender roles are often associated with a division of norms, practices, dress, behavior, rights, duties, privileges, status, and power, with men enjoying more rights and privileges than women in most societies, both today and in the past.[423] As a social construct,[424] gender roles are not fixed and vary historically within a society. Challenges to predominant gender norms have recurred in many societies.[425][426] Little is known about gender roles in the earliest human societies. Early modern humans probably had a range of gender roles similar to that of modern cultures from at least the Upper Paleolithic, while the Neanderthals were less sexually dimorphic and there is evidence that the behavioural difference between males and females was minimal.[427]
Kinship
All human societies organize, recognize and classify types of social relationships based on relations between parents, children and other descendants (consanguinity), and relations through marriage (affinity). There is also a third type applied to godparents or adoptive children (fictive kinship). These culturally defined relationships are referred to as kinship. In many societies, it is one of the most important social organizing principles and plays a role in transmitting status and inheritance.[428] All societies have rules of incest taboo, according to which marriage between certain kinds of kin relations is prohibited, and some also have rules of preferential marriage with certain kin relations.[429]
Ethnicity
A human ethnic group is a social category whose members identify with one another on the basis of shared attributes that distinguish them from other groups. These can be a common set of traditions, ancestry, language, history, society, culture, nation, religion, or social treatment within their residing area.[430][431] Ethnicity is separate from the concept of race, which is based on physical characteristics, although both are socially constructed.[432] Assigning ethnicity to a certain population is complicated, as even within common ethnic designations there can be a diverse range of subgroups, and the makeup of these ethnic groups can change over time at both the collective and individual level.[169] Also, there is no generally accepted definition of what constitutes an ethnic group.[433] Ethnic groupings can play a powerful role in the social identity and solidarity of ethnopolitical units. This has been closely tied to the rise of the nation state as the predominant form of political organization in the 19th and 20th centuries.[434][435][436]
Government and politics
As farming populations gathered in larger and denser communities, interactions between these different groups increased. This led to the development of governance within and between the communities.[437] Humans have evolved the ability to change affiliation with various social groups relatively easily, including previously strong political alliances, if doing so is seen as providing personal advantages.[438] This cognitive flexibility allows individual humans to change their political ideologies, with those with higher flexibility less likely to support authoritarian and nationalistic stances.[439]
Governments create laws and policies that affect the citizens that they govern. There have been many forms of government throughout human history, each having various means of obtaining power and the ability to exert diverse controls on the population.[440] As of 2017, more than half of all national governments are democracies, with 13% being autocracies and 28% containing elements of both.[441] Many countries have formed international political organizations and alliances, the largest being the United Nations with 193 member states.[442]
Trade and economics
Trade, the voluntary exchange of goods and services, is seen as a characteristic that differentiates humans from other animals and has been cited as a practice that gave Homo sapiens a major advantage over other hominids.[443] Evidence suggests early H. sapiens made use of long-distance trade routes to exchange goods and ideas, leading to cultural explosions and providing additional food sources when hunting was sparse, while such trade networks did not exist for the now extinct Neanderthals.[444][445] Early trade likely involved materials for creating tools like obsidian.[446] The first truly international trade routes developed around the spice trade during the Roman and medieval periods.[447]
Early human economies were more likely to be based around gift giving instead of a bartering system.[448] Early money consisted of commodities; the oldest being in the form of cattle and the most widely used being cowrie shells.[449] Money has since evolved into governmental issued coins, paper and electronic money.[449] Human study of economics is a social science that looks at how societies distribute scarce resources among different people.[450] There are massive inequalities in the division of wealth among humans; the eight richest humans are worth the same monetary value as the poorest half of all the human population.[451]
Conflict
Humans commit violence on other humans at a rate comparable to other primates, but have an increased preference for killing adults, infanticide being more common among other primates.[452] It is estimated that about 2% of early H. sapiens were murdered, rising to 12% during the medieval period, before dropping to below 2% in modern times.[453] There is great variation in violence between human populations, with rates of homicide in societies that have legal systems and strong cultural attitudes against violence at about 0.01%.[454]
The willingness of humans to kill other members of their species en masse through organized conflict (i.e., war) has long been the subject of debate. One school of thought holds that war evolved as a means to eliminate competitors, and has always been an innate human characteristic. Another suggests that war is a relatively recent phenomenon and has appeared due to changing social conditions.[455] While not settled, current evidence indicates warlike predispositions only became common about 10,000 years ago, and in many places much more recently than that.[455] War has had a high cost on human life; it is estimated that during the 20th century, between 167 million and 188 million people died as a result of war.[456]
See also
- List of human evolution fossils
- Timeline of human evolution – Chronological outline of major events in the development of the human species
Notes
- Traditionally this has been explained by conflicting evolutionary pressures involved in bipedalism and encephalization (called the obstetrical dilemma), but recent research suggests it might be more complicated than that.[191][192]
References
Definition 2: a man belonging to a particular category (as by birth, residence, membership, or occupation) – usually used in combination
For the first time at a global scale, the report has ranked the causes of damage. Topping the list, changes in land use – principally agriculture – that have destroyed habitat. Second, hunting and other kinds of exploitation. These are followed by climate change, pollution, and invasive species, which are being spread by trade and other activities. Climate change will likely overtake the other threats in the next decades, the authors note. Driving these threats are the growing human population, which has doubled since 1970 to 7.6 billion, and consumption. (Per capita use of materials is up 15% over the past 5 decades.)
Between any two humans, the amount of genetic variation – biochemical individuality – is about 0.1%.
Populations in central and southern Africa, the Americas, and Oceania each harbor tens to hundreds of thousands of private, common genetic variants. Most of these variants arose as new mutations rather than through archaic introgression, except in Oceanian populations, where many private variants derive from Denisovan admixture.
The fetal stage is from the beginning of the 9th week after fertilization and continues until birth
A woman dies in childbirth every minute, most often due to uncontrolled bleeding and infection, with the world's poorest women most vulnerable. The lifetime risk is 1 in 16 in sub-Saharan Africa, compared to 1 in 2,800 in developed countries.
The changes that occur during puberty usually happen in an ordered sequence, beginning with thelarche (breast development) at around age 10 or 11, followed by adrenarche (growth of pubic hair due to androgen stimulation), peak height velocity, and finally menarche (the onset of menses), which usually occurs around age 12 or 13.
On average, the onset of puberty is about 18 months earlier for girls (usually starting around the age of 10 or 11 and lasting until they are 15 to 17) than for boys (who usually begin puberty at about the age of 11 to 12 and complete it by the age of 16 to 17, on average).
Since the evolutionary split between hominins and pongids approximately 7 million years ago, the available evidence shows that all species of hominins ate an omnivorous diet composed of minimally processed, wild-plant, and animal foods.
Almost all (99.9%) nucleotide bases are exactly the same in all people.
In fact, research results consistently demonstrate that about 85 percent of all human genetic variation exists within human populations, whereas about only 15 percent of variation exists between populations.
genetic evidence [demonstrate] that strong levels of natural selection acted about 1.2 mya to produce darkly pigmented skin in early members of the genus Homo
An analysis of archaic sequences in modern populations identifies ancestral genetic variation in African populations that likely predates modern humans and has been lost in most non-African populations.
Our analyses of site frequency spectra indicate that these populations derive 2 to 19% of their genetic ancestry from an archaic population that diverged before the split of Neanderthals and modern humans.
Maslow's hierarchy of needs is a motivational theory in psychology comprising a five-tier model of human needs, often depicted as hierarchical levels within a pyramid. Needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up.
Emotional processing, but not emotions, can occur unconsciously.
Emotion is any mental experience with high intensity and high hedonic content (pleasure/displeasure)
I would suggest that when we talk about happiness, we are actually referring, much of the time, to a complex emotional phenomenon. Call it emotional well-being. Happiness as emotional well-being concerns your emotions and moods, more broadly your emotional condition as a whole. To be happy is to inhabit a favorable emotional state.... On this view, we can think of happiness, loosely, as the opposite of anxiety and depression. Being in good spirits, quick to laugh and slow to anger, at peace and untroubled, confident and comfortable in your own skin, engaged, energetic and full of life.
Human sexuality is a part of your total personality. It involves the interrelationship of biological, psychological, and sociocultural dimensions. [...] It is the total of our physical, emotional, and spiritual responses, thoughts, and feelings.
Homo sapiens and our close relatives may have some unique physical attributes, such as our dextrous hands, upright walking and resonant voices. However, these on their own cannot explain our success. They went together with our intelligence...
In short, the evidence to date that animals have an understanding of folk physics is at best mixed.
Most animals are not vocal learners.
Most cultures currently construct their societies based on the understanding of gender binary – the two gender categorizations (male and female). Such societies divide their population based on biological sex assigned to individuals at birth to begin the process of gender socialization.
In essence, an ethnic group is a named social category of people based on perceptions of shared social experience or one's ancestors' experiences. Members of the ethnic group see themselves as sharing cultural traditions and history that distinguish them from other groups. Ethnic group identity has a strong psychological or emotional component that divides the people of the world into opposing categories of 'us' and 'them.' In contrast to social stratification, which divides and unifies people along a series of horizontal axes based on socioeconomic factors, ethnic identities divide and unify people along a series of vertical axes. Thus, ethnic groups, at least theoretically, cut across socioeconomic class differences, drawing members from all strata of the population.
- Ferguson N (September–October 2006). "The Next War of the World". Foreign Affairs. Archived from the original on 25 April 2022. Retrieved 30 July 2022.
External links
https://en.wikipedia.org/wiki/Human
https://en.wikipedia.org/wiki/Homo_reciprocans
https://en.wikipedia.org/wiki/Life_satisfaction
https://en.wikipedia.org/wiki/Gratitude
The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961.[1] One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer.
https://en.wikipedia.org/wiki/Ultimatum_game
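The mechanics of a single round can be made concrete with a short sketch. The Python below is illustrative only: the total sum, the offer, and the responder's acceptance threshold are assumed example values, not parameters from any particular experiment.

```python
# Minimal sketch of one ultimatum-game round (illustrative assumptions only).

def ultimatum_round(total: float, offer: float, acceptance_threshold: float):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if not 0 <= offer <= total:
        raise ValueError("offer must be between 0 and the total sum")
    accepted = offer >= acceptance_threshold
    if accepted:
        return total - offer, offer      # money is split per the proposal
    return 0.0, 0.0                      # rejection: both players receive nothing


if __name__ == "__main__":
    # Proposer offers 30 out of 100; this responder rejects anything below 25.
    print(ultimatum_round(100, 30, acceptance_threshold=25))   # (70, 30)
    print(ultimatum_round(100, 10, acceptance_threshold=25))   # (0.0, 0.0)
```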
https://en.wikipedia.org/wiki/Subgame_perfect_equilibrium
https://en.wikipedia.org/wiki/Observer-expectancy_effect
https://en.wikipedia.org/wiki/Impunity_game
Social preferences describe the human tendency to not only care about one's own material payoff, but also about the reference group's payoff and/or the intention that leads to the payoff.[1] Social preferences are studied extensively in behavioral and experimental economics and social psychology. Types of social preferences include altruism, fairness, reciprocity, and inequity aversion.[2] The field of economics originally assumed that humans were rational economic actors, and as it became apparent that this was not the case, the field began to change. The research of social preferences in economics started with lab experiments in 1980, where experimental economists found subjects' behavior deviated systematically from self-interested behavior in economic games such as the ultimatum game and the dictator game. These experimental findings then inspired various new economic models to characterize agents' altruism, fairness and reciprocity concerns between 1990 and 2010. More recently, a growing number of field experiments have studied the shaping of social preferences and their applications throughout society.[1][3]
https://en.wikipedia.org/wiki/Social_preferences
Reciprocity selection
Reciprocity selection suggests that one's altruistic act may evolve from the anticipation of future reciprocal altruistic behavior from others.[11] An application of reciprocity selection in game theory is the Tit-For-Tat strategy in the prisoner's dilemma, in which the player cooperates on the first encounter and thereafter mirrors the opponent's behavior from the previous encounter.[12] Robert Axelrod and W. D. Hamilton showed that the Tit-For-Tat strategy can be an evolutionarily stable strategy in a population where the probability of repeated encounters between two persons is above a certain threshold.[13]
https://en.wikipedia.org/wiki/Social_preferences
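As a rough illustration of the strategy described above, the following Python sketch plays a repeated prisoner's dilemma. The payoff matrix and the ten-round horizon are standard textbook assumptions, not values taken from the cited studies.

```python
# Sketch of Tit-For-Tat in a repeated prisoner's dilemma (illustrative payoffs).

C, D = "cooperate", "defect"
PAYOFFS = {  # (my move, opponent's move) -> my payoff
    (C, C): 3, (C, D): 0,
    (D, C): 5, (D, D): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first encounter, then copy the opponent's previous move."""
    return C if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees the other's history
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    def always_defect(_history):
        return D
    print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (30, 30)
    print(play(tit_for_tat, always_defect))   # exploited once, then retaliates: (9, 14)
```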
https://en.wikipedia.org/wiki/Ophelimity
https://en.wikipedia.org/wiki/Moral_development
https://en.wikipedia.org/wiki/Moral_reasoning
In social psychology, social value orientation (SVO) is a person's preference about how to allocate resources (e.g. money) between the self and another person. SVO corresponds to how much weight a person attaches to the welfare of others in relation to their own. Since people are assumed to vary in the weight they attach to other people's outcomes in relation to their own, SVO is an individual difference variable. The general concept underlying SVO has become widely studied in a variety of different scientific disciplines, such as economics, sociology, and biology under a multitude of different names (e.g. social preferences, other-regarding preferences, welfare tradeoff ratios, social motives, etc.).
https://en.wikipedia.org/wiki/Social_value_orientations
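One simple way to picture an SVO-style weighting is as a utility that adds some fraction of the other person's payoff to one's own. The sketch below is hypothetical: the orientation labels and weight values are assumptions chosen for illustration, not the standardized SVO measurement instrument.

```python
# Illustrative sketch of weighting another person's payoff (assumed example weights).

def weighted_utility(own_payoff: float, other_payoff: float, weight_other: float) -> float:
    """Utility = own payoff plus `weight_other` times the other person's payoff."""
    return own_payoff + weight_other * other_payoff

# Three stylized orientations (example weights, not measured values):
orientations = {
    "individualistic": 0.0,   # cares only about own outcome
    "prosocial":       1.0,   # weighs the other's outcome equally
    "competitive":    -0.5,   # prefers to come out ahead of the other
}

for label, w in orientations.items():
    # Compare splitting 100 equally (50/50) with keeping 80 and giving 20.
    equal = weighted_utility(50, 50, w)
    skewed = weighted_utility(80, 20, w)
    print(f"{label:>15}: 50/50 split = {equal:5.1f}, 80/20 split = {skewed:5.1f}")
```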
The norm of reciprocity requires that we repay in kind what another has done for us.[1] It can be understood as the expectation that people will respond favorably to each other by returning benefits for benefits, and responding with either indifference or hostility to harms. The social norm of reciprocity often takes different forms in different areas of social life, or in different societies. All of them, however, are distinct from related ideas such as gratitude, the Golden Rule, or mutual goodwill. See reciprocity (social and political philosophy) for an analysis of the concepts involved.
https://en.wikipedia.org/wiki/Norm_of_reciprocity
The Golden Rule is the principle of treating others as one wants to be treated. Various expressions of this rule can be found in the tenets of most religions and creeds through the ages.[1] It can be considered an ethic of reciprocity in some religions, although different religions treat it differently.
The maxim may appear as a positive or negative injunction governing conduct:
- Treat others as you would like others to treat you (positive or directive form)[1]
- Do not treat others in ways that you would not like to be treated (negative or prohibitive form)
- What you wish upon others, you wish upon yourself (empathetic or responsive form)
The idea dates at least to the early Confucian times (551–479 BCE), according to Rushworth Kidder, who identifies the concept appearing prominently in Buddhism, Christianity, Hinduism, Islam, Judaism, Taoism, Zoroastrianism, and "the rest of the world's major religions".[2] As part of the 1993 "Declaration Toward a Global Ethic", 143 leaders of the world's major faiths endorsed the Golden Rule.[3][4] According to Greg M. Epstein, it is "a concept that essentially no religion misses entirely", but belief in God is not necessary to endorse it.[5] Simon Blackburn also states that the Golden Rule can be "found in some form in almost every ethical tradition".[6]
Etymology
The term "Golden Rule", or "Golden law", began to be used widely in the early 17th century in Britain by Anglican theologians and preachers;[7] the earliest known usage is that of Anglicans Charles Gibbon and Thomas Jackson in 1604.[8]
https://en.wikipedia.org/wiki/Golden_Rule
An autotelic[1] is someone or something that has a purpose in, and not apart from, itself.
https://en.wikipedia.org/wiki/Autotelic
In game theory, an extensive-form game is a specification of a game allowing (as the name suggests) for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.
https://en.wikipedia.org/wiki/Extensive-form_game
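A minimal way to picture this representation is a tree whose internal nodes record which player moves and whose leaves record payoffs; backward induction over that tree then yields the outcome rational players would reach (compare the subgame perfect equilibrium link above). The two-stage example game and its payoffs below are assumptions chosen purely for illustration.

```python
# Sketch of an extensive-form game as a tree, solved by backward induction.

from dataclasses import dataclass, field
from typing import Dict, Tuple, Union

Payoff = Tuple[float, float]          # (payoff to player 0, payoff to player 1)

@dataclass
class Node:
    player: int                       # whose turn it is at this node
    children: Dict[str, Union["Node", Payoff]] = field(default_factory=dict)

def backward_induction(node: Union[Node, Payoff]) -> Payoff:
    """Return the payoff pair reached when each player maximizes at their node."""
    if not isinstance(node, Node):
        return node                   # terminal node: payoffs are given
    outcomes = {move: backward_induction(child) for move, child in node.children.items()}
    best_move = max(outcomes, key=lambda m: outcomes[m][node.player])
    return outcomes[best_move]

# Player 0 moves first ("in" or "out"); if "in", player 1 chooses "share" or "grab".
game = Node(player=0, children={
    "out": (1.0, 1.0),
    "in": Node(player=1, children={"share": (2.0, 2.0), "grab": (0.0, 3.0)}),
})

print(backward_induction(game))       # (1.0, 1.0): player 0 stays out, anticipating "grab"
```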
In convex geometry and vector algebra, a convex combination is a linear combination of points (which can be vectors, scalars, or more generally points in an affine space) where all coefficients are non-negative and sum to 1.[1] In other words, the operation is equivalent to a standard weighted average, but whose weights are expressed as a percent of the total weight, instead of as a fraction of the count of the weights as in a standard weighted average.
https://en.wikipedia.org/wiki/Convex_combination
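Concretely, a convex combination is a weighted average whose weights are non-negative and sum to 1, applied coordinate-wise. The short Python sketch below is an illustration; the points and weights are arbitrary example values.

```python
# Sketch of a convex combination of points (pure Python, example values).

from typing import Sequence

def convex_combination(points: Sequence[Sequence[float]], weights: Sequence[float]):
    """Return sum_i weights[i] * points[i], checking the convexity conditions."""
    if any(w < 0 for w in weights):
        raise ValueError("weights must be non-negative")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return [sum(w * p[k] for w, p in zip(weights, points))
            for k in range(len(points[0]))]

# Weighted average of three points in the plane:
print(convex_combination([(0, 0), (4, 0), (0, 8)], [0.5, 0.25, 0.25]))  # [1.0, 2.0]
```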
The pirate game is a simple mathematical game. It is a multi-player version of the ultimatum game.
https://en.wikipedia.org/wiki/Pirate_game
In psychology, the false consensus effect, also known as consensus bias, is a pervasive cognitive bias that causes people to “see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances”.[1] In other words, they assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population.
This false consensus is significant because it increases self-esteem (overconfidence effect). It can be derived from a desire to conform and be liked by others in a social environment. This bias is especially prevalent in group settings where one thinks the collective opinion of their own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. The false-consensus effect is not restricted to cases where people believe that their values are shared by the majority, but it still manifests as an overestimate of the extent of their belief.[2]
Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way.[3] There is no single cause for this cognitive bias; the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors. The bias may also result, at least in part, from non-social stimulus-reward associations.[4] Maintenance of this cognitive bias may be related to the tendency to make decisions with relatively little information. When faced with uncertainty and a limited sample from which to make decisions, people often "project" themselves onto the situation. When this personal knowledge is used as input to make generalizations, it often results in the false sense of being part of the majority.[5]
The false consensus effect has been widely observed and supported by empirical evidence. Previous research has suggested that cognitive and perceptual factors (motivated projection, accessibility of information, emotion, etc.) may contribute to the consensus bias, while recent studies have focused on its neural mechanisms. One recent study has shown that consensus bias may improve decisions about other people's preferences.[4] Ross, Greene and House first defined the false consensus effect in 1977, with emphasis on the relative commonness that people perceive about their own responses; however, similar projection phenomena had already caught attention in psychology. Specifically, concerns about connections between individuals' personal predispositions and their estimates of peers had appeared in the literature for some time. For instance, Katz and Allport showed in 1931 that students' estimates of how frequently others cheated were positively correlated with their own behavior. Later, around 1970, similar phenomena were found for political beliefs and in prisoner's dilemma situations. In 2017, researchers identified a persistent egocentric bias when participants learned about other people's snack-food preferences.[4] Moreover, recent studies suggest that the false consensus effect can also affect professional decision makers; specifically, it has been shown that even experienced marketing managers project their personal product preferences onto consumers.[6][7]
https://en.wikipedia.org/wiki/False_consensus_effect
Hindsight bias, also known as the knew-it-all-along phenomenon[1] or creeping determinism,[2] is the common tendency for people to perceive past events as having been more predictable than they were.[3][4]
People often believe that after an event has occurred, they would have predicted or perhaps even would have known with a high degree of certainty what the outcome of the event would have been before the event occurred. Hindsight bias may cause distortions of memories of what was known or believed before an event occurred, and is a significant source of overconfidence regarding an individual's ability to predict the outcomes of future events.[5] Examples of hindsight bias can be seen in the writings of historians describing outcomes of battles, physicians recalling clinical trials, and in judicial systems as individuals attribute responsibility on the basis of the supposed predictability of accidents.[6][7][2]
In some countries, 20/20 indicates normal visual acuity at 20 feet, from which derives the idiom "hindsight is 20/20".
https://en.wikipedia.org/wiki/Hindsight_bias