Monday, May 15, 2023

05-15-2023-0722 - variety, etc. continued... (draft)

https://en.wikipedia.org/wiki/Measurement

In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape.[1] It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock.

https://en.wikipedia.org/wiki/Key_relevance

Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables.[1] Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio.[1][2] This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others.[3] Other classifications include those by Mosteller and Tukey,[4] and by Chrisman.[5] 
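
As a quick illustration (mine, not the article's), Stevens' four levels are often summarized by which comparisons and summary statistics are considered meaningful at each level; here is a minimal Python sketch of that textbook table:

    # Sketch of Stevens' levels of measurement and the operations
    # conventionally considered meaningful at each (illustrative layout,
    # not from any library).
    LEVELS = {
        "nominal":  {"comparisons": "=, !=",
                     "central_tendency": "mode"},
        "ordinal":  {"comparisons": "=, !=, <, >",
                     "central_tendency": "median"},
        "interval": {"comparisons": "=, !=, <, >, +, -",
                     "central_tendency": "arithmetic mean"},
        "ratio":    {"comparisons": "=, !=, <, >, +, -, *, /",
                     "central_tendency": "geometric mean"},
    }

    for level, props in LEVELS.items():
        print(f"{level:>8}: {props['comparisons']}; "
              f"typical central tendency: {props['central_tendency']}")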

https://en.wikipedia.org/wiki/Level_of_measurement

The limit of detection (LOD or LoD) is the lowest signal, or the lowest corresponding quantity to be determined (or extracted) from the signal, that can be observed with a sufficient degree of confidence or statistical significance. However, the exact threshold (level of decision) used to decide when a signal significantly emerges above the continuously fluctuating background noise remains arbitrary and is a matter of policy and often of debate among scientists, statisticians and regulators depending on the stakes in different fields. 
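
One widely used decision rule, with exactly the caveat the passage raises, places the detection limit at the blank's mean signal plus three standard deviations; the factor of three is a convention, not a law. A minimal Python sketch with made-up blank readings:

    import statistics

    def detection_limit(blank_signals, k=3.0):
        """Estimate an LOD as mean(blank) + k * stdev(blank).

        k = 3 is a common convention; as noted above, the choice of
        threshold is ultimately a matter of policy.
        """
        return statistics.mean(blank_signals) + k * statistics.stdev(blank_signals)

    # Hypothetical blank readings (illustrative numbers only):
    blanks = [0.021, 0.019, 0.023, 0.020, 0.022, 0.018]
    print(f"LOD ~ {detection_limit(blanks):.4f} signal units")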

https://en.wikipedia.org/wiki/Detection_limit

In the branch of experimental psychology focused on sense, sensation, and perception, which is called psychophysics, a just-noticeable difference or JND is the amount something must be changed in order for a difference to be noticeable, detectable at least half the time.[1] This limen is also known as the difference limen, difference threshold, or least perceptible difference.[2] 
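
The classic quantitative statement here is Weber's law: the JND is a roughly constant fraction (the Weber fraction) of the baseline intensity. A minimal sketch; the fraction below is illustrative, not a measured value:

    def jnd(intensity, weber_fraction):
        """Weber's law: the just-noticeable difference scales with intensity."""
        return weber_fraction * intensity

    k = 0.02  # illustrative Weber fraction (an assumption, not a citation)
    for grams in (100, 500, 1000):
        print(f"baseline {grams} g -> JND ~ {jnd(grams, k):.1f} g")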

https://en.wikipedia.org/wiki/Just-noticeable_difference

An environmental error is an error in an observed or calculated value that is caused by the environment in which the measurement is made. Every experiment, performed anywhere in the universe, has surroundings from which the system under study cannot be isolated. The primary advantage of studying environmental effects is that it allows us to account for the environment's impact on an experiment; a favorable environment will not only rectify a result but also improve it.

https://en.wikipedia.org/wiki/Environmental_error

In behavioral psychology (or applied behavior analysis), stimulus control is a phenomenon in operant conditioning (also called contingency management) that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus (Sd) or a stimulus delta (S-delta). Stimulus-based control of behavior occurs when the presence or absence of an Sd or S-delta controls the performance of a particular behavior. For example, the presence of a stop sign (an Sd) at a traffic intersection alerts the driver to stop and increases the probability that "braking" behavior will occur. Such behavior is said to be emitted because the stimulus does not force it to occur; stimulus control is a direct result of historical reinforcement contingencies, as opposed to reflexive behavior, which is said to be elicited through respondent conditioning.

Some theorists believe that all behavior is under some form of stimulus control.[1] For example, in the analysis of B. F. Skinner,[2] verbal behavior is a complicated assortment of behaviors with a variety of controlling stimuli.[3] 

https://en.wikipedia.org/wiki/Stimulus_control

Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself as the fading of a non-reinforced conditioned response over time. When operant behavior that has been previously reinforced no longer produces reinforcing consequences, the behavior gradually stops occurring.[1] In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders, such as post-traumatic stress disorder, are believed to reflect, at least in part, a failure to extinguish conditioned fear.[2]
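
One standard way to formalize this fading curve (my choice of model, not the article's) is the Rescorla-Wagner update, in which associative strength moves a fixed fraction of the way toward the outcome on each trial; setting the outcome to zero, as when the metronome sounds but no food comes, makes the strength decay trial by trial. A minimal sketch with an illustrative learning rate:

    def rescorla_wagner(v, outcome, rate=0.2):
        """One trial of the Rescorla-Wagner update: dV = rate * (outcome - V)."""
        return v + rate * (outcome - v)

    v = 1.0  # associative strength after conditioning is complete
    for trial in range(1, 11):
        v = rescorla_wagner(v, outcome=0.0)  # CS presented, no US follows
        print(f"extinction trial {trial:2d}: V = {v:.3f}")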

https://en.wikipedia.org/wiki/Extinction_(psychology)

Discrimination is the act of making distinctions between people, in ways that disadvantage them, based on the groups, classes, or other categories to which they belong or are perceived to belong.[1] People may be discriminated against on the basis of race, gender identity, sex, age, religion, disability, or sexual orientation, as well as other categories.[2] Discrimination especially occurs when individuals or groups are unfairly treated in a way which is worse than other people are treated, on the basis of their actual or perceived membership in certain groups or social categories.[2][3] It involves restricting members of one group from opportunities or privileges that are available to members of another group.[4]

Discriminatory traditions, policies, ideas, practices and laws exist in many countries and institutions in all parts of the world, including territories where discrimination is generally looked down upon. In some places, attempts such as quotas have been used to benefit those who are believed to be current or past victims of discrimination. These attempts have often been met with controversy, and have sometimes been called reverse discrimination.

https://en.wikipedia.org/wiki/Discrimination

In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc, which can support up to 24 bits per sample.

In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping, and oversampling can mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size.
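
For an ideal PCM quantizer driven by a full-scale sine wave, the quantization-limited SNR is approximately 6.02·N + 1.76 dB for N bits, and the uncompressed bit rate is sample rate × bit depth × channels. A minimal sketch of both relationships:

    def ideal_snr_db(bits):
        """Approximate SNR of an ideal N-bit quantizer (full-scale sine input)."""
        return 6.02 * bits + 1.76

    def pcm_bit_rate(sample_rate_hz, bits, channels):
        """Uncompressed PCM bit rate in bits per second."""
        return sample_rate_hz * bits * channels

    for bits in (16, 24):
        print(f"{bits}-bit: SNR ~ {ideal_snr_db(bits):.1f} dB")

    # CD audio: 44.1 kHz, 16-bit, stereo
    print(f"CD bit rate: {pcm_bit_rate(44_100, 16, 2):,} bit/s")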

Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths.[a] 

https://en.wikipedia.org/wiki/Audio_bit_depth

Image resolution is the detail an image holds. The term applies to digital images, film images, and other types of images. "Higher resolution" means more image detail.

Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense. Instead of single lines, line pairs are often used, composed of a dark line and an adjacent light line; for example, a resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter (5 LP/mm). Photographic lens and film resolutions are most often quoted in line pairs per millimeter.
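
Since a line pair is one dark line plus one adjacent light line, converting lines to line pairs is just a halving; a tiny sketch making the example's arithmetic explicit:

    def lines_to_line_pairs(lines_per_mm):
        """A line pair is one dark line plus one adjacent light line."""
        return lines_per_mm / 2

    print(f"10 lines/mm = {lines_to_line_pairs(10):.0f} LP/mm")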

https://en.wikipedia.org/wiki/Image_resolution

Gestalt psychology, gestaltism, or configurationism is a school of psychology that emerged in the early twentieth century in Austria and Germany as a theory of perception that was a rejection of basic principles of Wilhelm Wundt's and Edward Titchener's elementalist and structuralist psychology.[1][2][3] 

https://en.wikipedia.org/wiki/Gestalt_psychology

In psychology and cognitive science, a schema (plural schemata or schemas) describes a pattern of thought or behavior that organizes categories of information and the relationships among them.[1][2] It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information,[3] such as a mental schema or conceptual model. Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information.[4] Schemata can help in understanding the world and the rapidly changing environment.[5] People can organize new perceptions into schemata quickly as most situations do not require complex thought when using schema, since automatic thought is all that is required.[5]

People use schemata to organize current knowledge and provide a framework for future understanding. Examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. In Piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world.[6] 

https://en.wikipedia.org/wiki/Schema_(psychology)

The earliest recorded systems of weights and measures originate in the 3rd or 4th millennium BC. Even the very earliest civilizations needed measurement for purposes of agriculture, construction and trade. Early standard units might only have applied to a single community or small region, with every area developing its own standards for lengths, areas, volumes and masses. Often such systems were closely tied to one field of use, so that volume measures used, for example, for dry grains were unrelated to those for liquids, with neither bearing any particular relationship to units of length used for measuring cloth or land. With the development of manufacturing technologies, and the growing importance of trade between communities and ultimately across the Earth, standardized weights and measures became critical. Starting in the 18th century, modernized, simplified and uniform systems of weights and measures were developed, with the fundamental units defined by ever more precise methods in the science of metrology. The discovery and application of electricity was one factor motivating the development of standardized internationally applicable units.

https://en.wikipedia.org/wiki/History_of_measurement

The history of science and technology (HST) is a field of history that examines the understanding of the natural world (science) and the ability to manipulate it (technology) at different points in time. This academic discipline also studies the cultural, economic, and political impacts of and contexts for scientific practices. 

https://en.wikipedia.org/wiki/History_of_science_and_technology

Instrumentation is a collective term for measuring instruments that are used for indicating, measuring and recording physical quantities. The term has its origins in the art and science of scientific instrument-making.

Instrumentation can refer to devices as simple as direct-reading thermometers, or as complex as multi-sensor components of industrial control systems. Today, instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use (e.g., smoke detectors and thermostats).

https://en.wikipedia.org/wiki/Instrumentation

ISO 10012:2003, Measurement management systems - Requirements for measurement processes and measuring equipment is the ISO standard that specifies generic requirements and provides guidance for the management of measurement processes and metrological confirmation of measuring equipment used to support and demonstrate compliance with metrological requirements. It specifies quality management requirements of a measurement management system that can be used by an organization performing measurements as part of the overall management system, and to ensure metrological requirements are met.

ISO 10012:2003 is not intended to be used as a requisite for demonstrating conformance with ISO 9001, ISO 14001 or any other standard. Interested parties can agree to use ISO 10012:2003 as an input for satisfying measurement management system requirements in certification activities.

Other standards and guides exist for particular elements affecting measurement results, e.g. details of measurement methods, competence of personnel, and interlaboratory comparisons.

ISO 10012:2003 is not intended as a substitute for, or as an addition to, the requirements of ISO/IEC 17025.

https://en.wikipedia.org/wiki/ISO_10012

A primary instrument is a scientific instrument that, by its physical characteristics, is accurate and is not calibrated against anything else. A primary instrument must be able to be exactly duplicated anywhere, anytime, with identical results.

https://en.wikipedia.org/wiki/Primary_instrument

An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature, and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is 10, the order of magnitude can be understood as the number of digits in the base-10 representation of the value. Similarly, if the reference value is a power of 2, since computers store data in a binary format, the magnitude can be understood in terms of the amount of computer memory needed to store that value.

Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten).[1] Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
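
In the digit-count reading above, the order of magnitude of a positive number is floor(log10(x)), i.e. one less than its number of decimal digits; a minimal sketch:

    import math

    def order_of_magnitude(x):
        """floor(log10(x)) for x > 0: one less than the decimal digit count."""
        return math.floor(math.log10(x))

    for value in (7, 42, 999, 1000, 5_280_000):
        print(f"{value:>9} -> order of magnitude {order_of_magnitude(value)}")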

https://en.wikipedia.org/wiki/Order_of_magnitude

In statistics, latent variables (from Latin: present participle of lateo, “lie hidden”) are variables that can only be inferred indirectly, through a mathematical model, from other variables that are directly observed or measured.[1] Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management and the social sciences.

Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. In this situation, the term hidden variables is commonly used (reflecting the fact that the variables are meaningful, but not observable). Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The terms hypothetical variables or hypothetical constructs may be used in these situations.

The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world. 
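
As a toy illustration (not from the article) of latent variables reducing dimensionality: when one hidden factor drives several noisy observed variables, the first principal component of the observations recovers it up to sign and scale. A minimal numpy sketch with synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: one hidden factor drives three noisy observed variables.
    n = 200
    latent = rng.normal(size=n)           # the unobserved variable
    loadings = np.array([0.9, 0.8, 0.7])  # illustrative weights
    observed = np.outer(latent, loadings) + 0.3 * rng.normal(size=(n, 3))

    # First principal component as an estimate of the latent factor.
    centered = observed - observed.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    estimate = centered @ vt[0]

    # The estimate should correlate strongly (up to sign) with the truth.
    r = np.corrcoef(latent, estimate)[0, 1]
    print(f"|correlation| between latent factor and PC1 score: {abs(r):.2f}")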

https://en.wikipedia.org/wiki/Latent_and_observable_variables

Hidden variables may refer to:

  • Confounding, in statistics, an extraneous variable in a statistical model that correlates (directly or inversely) with both the dependent variable and the independent variable
  • Hidden transformation, in computer science, a way to transform a generic constraint satisfaction problem into a binary one by introducing new hidden variables
  • Hidden-variable theories, in physics, the proposition that statistical models of physical systems (such as quantum mechanics) are inherently incomplete, and that the apparent randomness of a system depends not on collapsing wave functions but on unseen or unmeasurable (and thus "hidden") variables
    • Local hidden-variable theory, in quantum mechanics, a hidden-variable theory in which distant events are assumed to have no instantaneous (or at least faster-than-light) effect on local events
  • Latent variables, in statistics, variables that are inferred from other observed variables
