The measure in quantum physics is the integration measure used for performing a path integral.
In quantum field theory, one must sum over all possible histories of a system. Since histories may be very similar to one another, one has to decide when two histories count as different and when they count as the same, so as not to count the same history twice. This decision is encoded in the concept of the measure.
In fact, the possible histories can be deformed continuously into one another, and therefore the sum is really an integral, known as a path integral.
In the limit where the sum becomes an integral, the concept of the measure described above is replaced by an integration measure.
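The passage above can be made concrete numerically. The sketch below is a toy, not taken from the text: it works in Euclidean (imaginary) time for numerical stability, for a free particle with ħ = m = 1. The path integral is discretized into N time slices; the integration measure becomes the product of dx integrals over the intermediate positions, each slice carrying its own normalization factor.

```python
import numpy as np

# Euclidean (imaginary-time) path integral for a free particle, hbar = m = 1.
# The "integration measure" is the product dx_1 ... dx_{N-1} over intermediate
# positions, together with the normalization factor attached to each time slice.
x = np.linspace(-10, 10, 401)           # spatial grid
dx = x[1] - x[0]
T, N = 1.0, 50                          # total time, number of slices
eps = T / N

# Short-time kernel K(x, x'; eps), normalization included.
K = np.sqrt(1 / (2 * np.pi * eps)) * np.exp(
    -(x[:, None] - x[None, :]) ** 2 / (2 * eps))

# Compose N slices: each intermediate integration carries the measure dx.
prop = np.eye(len(x)) / dx              # delta function w.r.t. the dx measure
for _ in range(N):
    prop = prop @ K * dx

# The composed slices should reproduce the exact heat kernel over total time T.
exact = np.sqrt(1 / (2 * np.pi * T)) * np.exp(-x ** 2 / (2 * T))
err = np.max(np.abs(prop[:, len(x) // 2] - exact))
print(err)                              # small: slicing converges to the kernel
```

The key point is that the factor `* dx` in the loop is exactly the discrete stand-in for the measure: changing how intermediate histories are counted changes the answer.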
The observable universe is a ball-shaped region of the universe comprising all matter that can be observed from Earth or its space-based telescopes and exploratory probes at the present time, because the electromagnetic radiation from these objects has had time to reach the Solar System and Earth since the beginning of the cosmological expansion. There may be 2 trillion galaxies in the observable universe,[7][8] although that number has recently been estimated at only several hundred billion based on new data from New Horizons.[9][10] Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe has a spherical volume (a ball) centered on the observer. Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth.
The word observable in this sense does not refer to the capability of modern technology to detect light or other information from an object, or whether there is anything to be detected. It refers to the physical limit created by the speed of light itself. No signal can travel faster than light, hence there is a maximum distance (called the particle horizon) beyond which nothing can be detected, as the signals could not have reached us yet. Sometimes astrophysicists distinguish between the visible universe, which includes only signals emitted since recombination (when hydrogen atoms were formed from protons and electrons and photons were emitted)—and the observable universe, which includes signals since the beginning of the cosmological expansion (the Big Bang in traditional physical cosmology, the end of the inflationary epoch in modern cosmology).
According to calculations, the current comoving distance—proper distance, which takes into account that the universe has expanded since the light was emitted—to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years),[11] about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years[12][13] and its diameter about 28.5 gigaparsecs (93 billion light-years, or 8.8×10^26 metres or 2.89×10^27 feet), which equals 880 yottametres.[14] Using the critical density and the diameter of the observable universe, the total mass of ordinary matter in the universe can be calculated to be about 1.5 × 10^53 kg.[15] In November 2018, astronomers reported that the extragalactic background light (EBL) amounted to 4 × 10^84 photons.[16][17]
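The ~1.5 × 10^53 kg figure can be roughly reproduced from the critical density and the quoted 46.5-billion-light-year radius. The sketch below assumes a Hubble constant of 67.7 km/s/Mpc and a baryon (ordinary-matter) fraction of about 0.048 — Planck-era values not stated in the text above.

```python
import math

# Rough reproduction of the ~1.5e53 kg mass of ordinary matter, assuming
# H0 ~ 67.7 km/s/Mpc and baryon fraction Omega_b ~ 0.048 (assumed values).
G   = 6.674e-11                            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22                            # metres per megaparsec
ly  = 9.4607e15                            # metres per light-year

H0 = 67.7e3 / Mpc                          # Hubble constant in s^-1
rho_crit = 3 * H0 ** 2 / (8 * math.pi * G) # critical density, kg/m^3

r = 46.5e9 * ly                            # radius of the observable universe
V = 4 / 3 * math.pi * r ** 3               # comoving volume of the ball

mass_ordinary = rho_crit * 0.048 * V
print(f"{mass_ordinary:.1e} kg")           # on the order of 1.5e53 kg
```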
As the universe's expansion is accelerating, all currently observable objects, outside our local supercluster, will eventually appear to freeze in time, while emitting progressively redder and fainter light. For instance, objects with the current redshift z from 5 to 10 will remain observable for no more than 4–6 billion years. In addition, light emitted by objects currently situated beyond a certain comoving distance (currently about 19 billion parsecs) will never reach Earth.[18]
https://en.wikipedia.org/wiki/Observable_universe
Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon.[1] The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement.[2]: 55
The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute.[3] A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. There is no credible peer-reviewed research that backs such claims. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties."[4]
The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists,[1][5]: 248 posits that an "observer" or a "measurement" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote:
Niels Bohr, also a founder of the Copenhagen interpretation, wrote:
Likewise, Asher Peres stated that "observers" in quantum physics are
Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?"[9]
In physics, an observable is a physical quantity that can be measured. Examples include position and momentum. In systems governed by classical mechanics, it is a real-valued "function" on the set of all possible system states. In quantum physics, it is an operator, or gauge, where the property of the quantum state can be determined by some sequence of operations. For example, these operations might involve submitting the system to various electromagnetic fields and eventually reading a value.
Physically meaningful observables must also satisfy transformation laws which relate observations performed by different observers in different frames of reference. These transformation laws are automorphisms of the state space, that is, bijective transformations which preserve certain mathematical properties of the space in question.
https://en.wikipedia.org/wiki/Observable
Incompatibility of observables in quantum mechanics
A crucial difference between classical quantities and quantum mechanical observables is that the latter may not be simultaneously measurable, a property referred to as complementarity. This is mathematically expressed by non-commutativity of the corresponding operators, to the effect that the commutator [A, B] := AB − BA ≠ 0.
This inequality expresses a dependence of measurement results on the order in which measurements of observables A and B are performed. Observables corresponding to non-commuting operators are called incompatible observables. Incompatible observables cannot have a complete set of common eigenfunctions. Note that there can be some simultaneous eigenvectors of A and B, but not enough in number to constitute a complete basis.[1][2]
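For spin-1/2 this can be checked directly with the Pauli matrices σx and σz — a standard example, not drawn from the text above. Their commutator is nonzero, and their eigenbases differ, so no complete common eigenbasis exists.

```python
import numpy as np

# Two incompatible spin-1/2 observables: sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# [A, B] = AB - BA is nonzero, so the observables do not commute.
commutator = sx @ sz - sz @ sx
print(commutator)

# Their eigenvectors differ: eigh returns eigenvectors as columns,
# and the x-basis {|+x>, |-x>} is not the z-basis {|0>, |1>}.
_, vx = np.linalg.eigh(sx)
_, vz = np.linalg.eigh(sz)
print(np.allclose(vx, vz))   # False: no shared complete eigenbasis
```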
https://en.wikipedia.org/wiki/Observable#Incompatibility_of_observables_in_quantum_mechanics
https://en.wikipedia.org/wiki/Gravity
https://en.wikipedia.org/wiki/Philosophy_of_science
https://en.wikipedia.org/wiki/Belief
https://en.wikipedia.org/wiki/Causality
https://en.wikipedia.org/wiki/Primary/secondary_quality_distinction
Scientific realism is the view that the universe described by science is real regardless of how it may be interpreted.
Within philosophy of science, this view is often an answer to the question "how is the success of science to be explained?" The discussion on the success of science in this context centers primarily on the status of unobservable entities apparently talked about by scientific theories. Generally, those who are scientific realists assert that one can make valid claims about unobservables (viz., that they have the same ontological status) as observables, as opposed to instrumentalism.
https://en.wikipedia.org/wiki/Scientific_realism
In philosophy of science and in epistemology, instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting phenomena.[1]
According to instrumentalists, a successful scientific theory reveals nothing known either true or false about nature's unobservable objects, properties or processes.[2] Scientific theory is merely a tool whereby humans predict observations in a particular domain of nature by formulating laws, which state or summarize regularities, while theories themselves do not reveal supposedly hidden aspects of nature that somehow explain these laws.[3] Instrumentalism is a perspective originally introduced by Pierre Duhem in 1906.[3]
Rejecting scientific realism's ambitions to uncover metaphysical truth about nature,[3] instrumentalism is usually categorized as an antirealism, although its mere lack of commitment to scientific theory's realism can be termed nonrealism. Instrumentalism merely bypasses debate concerning whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, or is an excitation mode of a region of a field, or is something else altogether.[4][5][6] Instrumentalism holds that theoretical terms need only be useful to predict the phenomena, the observed outcomes.[4]
There are multiple versions of instrumentalism.
https://en.wikipedia.org/wiki/Instrumentalism
An ontological commitment of a language is one or more objects postulated to exist by that language. The 'existence' referred to need not be 'real', but exist only in a universe of discourse. As an example, legal systems use vocabulary referring to 'legal persons' that are collective entities that have rights. One says the legal doctrine has an ontological commitment to non-singular individuals.[1]
In information systems and artificial intelligence, where an ontology refers to a specific vocabulary and a set of explicit assumptions about the meaning and usage of these words, an ontological commitment is an agreement to use the shared vocabulary in a coherent and consistent manner within a specific context.[2]
In philosophy, a "theory is ontologically committed to an object only if that object occurs in all the ontologies of that theory."[3]
https://en.wikipedia.org/wiki/Ontological_commitment
https://en.wikipedia.org/wiki/Unobservable
In physics, hidden-variable theories are proposals to provide explanations of quantum mechanical phenomena through the introduction of unobservable hypothetical entities. The existence of fundamental indeterminacy for some measurements is assumed as part of the mathematical formulation of quantum mechanics; moreover, bounds for indeterminacy can be expressed in a quantitative form by the Heisenberg uncertainty principle. Most hidden-variable theories are attempts at a deterministic description of quantum mechanics, to avoid quantum indeterminacy, but at the expense of requiring the existence of nonlocal interactions.
Albert Einstein objected to the fundamentally probabilistic nature of quantum mechanics,[1] and famously declared "I am convinced God does not play dice".[2][3] Einstein, Podolsky, and Rosen argued that quantum mechanics is an incomplete description of reality.[4][5] Bell's theorem would later suggest that local hidden variables (a way of finding a complete description of reality) of certain types are impossible. A famous non-local theory is the De Broglie–Bohm theory.
https://en.wikipedia.org/wiki/Hidden-variable_theory
In statistics, a proxy or proxy variable is a variable that is not in itself directly relevant, but that serves in place of an unobservable or immeasurable variable.[1] In order for a variable to be a good proxy, it must have a close correlation, not necessarily linear, with the variable of interest. This correlation might be either positive or negative.
A proxy variable must relate to the unobserved variable, must correlate with the disturbance, and must not correlate with the regressors once the disturbance is controlled for.
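A minimal simulation of the proxy idea (all variable names here are hypothetical, chosen for illustration): an unobservable latent "ability" is stood in for by an observable test score that correlates closely with it.

```python
import numpy as np

# Toy proxy-variable setup: "ability" is latent (unobservable in practice);
# "test_score" is an observable proxy; "wage" is the outcome of interest.
rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)                       # unobservable variable
test_score = ability + 0.5 * rng.normal(size=n)    # noisy observable proxy
wage = 2.0 * ability + rng.normal(size=n)          # outcome driven by ability

# A good proxy has a close correlation with the variable of interest ...
r = np.corrcoef(ability, test_score)[0, 1]
print(round(r, 2))

# ... so regressing the outcome on the proxy recovers much of the latent
# effect (attenuated toward zero by the proxy's measurement noise).
slope = np.cov(wage, test_score)[0, 1] / np.var(test_score)
print(round(slope, 2))   # below the true coefficient 2.0
```

The attenuation visible in the slope is exactly why a proxy must track the unobserved variable closely: the noisier the proxy, the more the estimated effect shrinks.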
https://en.wikipedia.org/wiki/Proxy_(statistics)
In philosophy, empiricism is a theory that states that knowledge comes only or primarily from sensory experience.[1] It is one of several views of epistemology, along with rationalism and skepticism. Empiricism emphasizes the role of empirical evidence in the formation of ideas, rather than innate ideas or traditions.[2] However, empiricists may argue that traditions (or customs) arise due to relations of previous sense experiences.[3]
Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through experience.[4]
Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.
Empiricism, often used by natural scientists, says that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification".[5] Empirical research, including experiments and validated measurement tools, guides the scientific method.
https://en.wikipedia.org/wiki/Empiricism
In philosophy, rationalism is the epistemological view that "regards reason as the chief source and test of knowledge"[1] or "any view appealing to reason as a source of knowledge or justification".[2] More formally, rationalism is defined as a methodology or a theory "in which the criterion of the truth is not sensory but intellectual and deductive".[3]
In an old controversy, rationalism was opposed to empiricism, where the rationalists believed that reality has an intrinsically logical structure. Because of this, the rationalists argued that certain truths exist and that the intellect can directly grasp these truths. That is to say, rationalists asserted that certain rational principles exist in logic, mathematics, ethics, and metaphysics that are so fundamentally true that denying them causes one to fall into contradiction. The rationalists had such a high confidence in reason that empirical proof and physical evidence were regarded as unnecessary to ascertain certain truths – in other words, "there are significant ways in which our concepts and knowledge are gained independently of sense experience".[4]
Different degrees of emphasis on this method or theory lead to a range of rationalist standpoints, from the moderate position "that reason has precedence over other ways of acquiring knowledge" to the more extreme position that reason is "the unique path to knowledge".[5] Given a pre-modern understanding of reason, rationalism is identical to philosophy, the Socratic life of inquiry, or the zetetic (skeptical) clear interpretation of authority (open to the underlying or essential cause of things as they appear to our sense of certainty). In recent decades, Leo Strauss sought to revive "Classical Political Rationalism" as a discipline that understands the task of reasoning, not as foundational, but as maieutic.
In the 17th-century Dutch Republic, the rise of early modern rationalism – as a highly systematic school of philosophy in its own right for the first time in history – exerted an immense and profound influence on modern Western thought in general,[6][7] with the birth of two influential rationalistic philosophical systems of Descartes[8][9] (who spent most of his adult life and wrote all his major work in the United Provinces of the Netherlands)[10][11] and Spinoza[12][13]–namely Cartesianism[14][15][16] and Spinozism.[17] 17th-century arch-rationalists[18][19][20][21] such as Descartes, Spinoza and Leibniz gave the "Age of Reason" its name and place in history.[22]
In politics, rationalism, since the Enlightenment, historically emphasized a "politics of reason" centered upon rational choice, deontology, utilitarianism, secularism, and irreligion[23] – the latter aspect's antitheism was later softened by the adoption of pluralistic reasoning methods practicable regardless of religious or irreligious ideology.[24][25] In this regard, the philosopher John Cottingham[26] noted how rationalism, a methodology, became socially conflated with atheism, a worldview:
https://en.wikipedia.org/wiki/Rationalism
Logical positivism, later called logical empiricism, and both of which together are also known as neopositivism, was a movement in Western philosophy whose central thesis was the verification principle (also known as the verifiability criterion of meaning).[1] This theory of knowledge asserted that only statements verifiable through direct observation or logical proof are meaningful in terms of conveying truth value, information or factual content. Starting in the late 1920s, groups of philosophers, scientists, and mathematicians formed the Berlin Circle and the Vienna Circle, which, in these two cities, would propound the ideas of logical positivism.
Flourishing in several European centres through the 1930s, the movement sought to prevent confusion rooted in unclear language and unverifiable claims by converting philosophy into "scientific philosophy", which, according to the logical positivists, ought to share the bases and structures of empirical sciences' best examples, such as Albert Einstein's general theory of relativity.[2] Despite its ambition to overhaul philosophy by studying and mimicking the extant conduct of empirical science, logical positivism became erroneously stereotyped as a movement to regulate the scientific process and to place strict standards on it.[2]
After World War II, the movement shifted to a milder variant, logical empiricism, led mainly by Carl Hempel, who, during the rise of Nazism, had immigrated to the United States. In the ensuing years, the movement's central premises, still unresolved, were heavily criticised by leading philosophers, particularly Willard van Orman Quine and Karl Popper, and even, within the movement itself, by Hempel. The 1962 publication of Thomas Kuhn's landmark book The Structure of Scientific Revolutions dramatically shifted academic philosophy's focus. In 1967 philosopher John Passmore pronounced logical positivism "dead, or as dead as a philosophical movement ever becomes".[3]
https://en.wikipedia.org/wiki/Logical_positivism
Phenomenology (from Greek phainómenon "that which appears" and lógos "study") is the philosophical study of the structures of experience and consciousness. As a philosophical movement it was founded in the early years of the 20th century by Edmund Husserl and was later expanded upon by a circle of his followers at the universities of Göttingen and Munich in Germany. It then spread to France, the United States, and elsewhere, often in contexts far removed from Husserl's early work.[1]
https://en.wikipedia.org/wiki/Phenomenology_(philosophy)
Philosophy of science is a branch of philosophy concerned with the foundations, methods, and implications of science. The central questions of this study concern what qualifies as science, the reliability of scientific theories, and the ultimate purpose of science. This discipline overlaps with metaphysics, ontology, and epistemology, for example, when it explores the relationship between science and truth. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of science. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than philosophy of science.
https://en.wikipedia.org/wiki/Philosophy_of_science
Structuralism[α] (also known as scientific structuralism[1] or as the structuralistic theory-concept)[2] is an active research program in the philosophy of science, which was first developed in the late 1960s and throughout the 1970s by several analytic philosophers.
Structuralism asserts that all aspects of reality are best understood in terms of empirical scientific constructs of entities and their relations, rather than in terms of concrete entities in themselves.[3] For instance, the concept of matter should be interpreted not as an absolute property of nature in itself, but instead in terms of how scientifically grounded mathematical relations describe how the concept of matter interacts with other properties, whether that be in a broad sense such as the gravitational fields that mass produces or more empirically as how matter interacts with sense systems of the body to produce sensations such as weight.[4] Its aim is to comprise all important aspects of an empirical theory in one formal framework. The proponents of this meta-theoretic theory are Bas van Fraassen, Frederick Suppe, Patrick Suppes, Ronald Giere,[5][3] Joseph D. Sneed, Wolfgang Stegmüller, Carlos Ulises Moulines, Wolfgang Balzer, John Worrall, Elie Georges Zahar, Pablo Lorenzano, Otávio Bueno, Anjan Chakravartty, Tian Yu Cao, Steven French, and Michael Redhead.
The term "structural realism", for the variation of scientific realism motivated by structuralist arguments, was coined by American philosopher Grover Maxwell in 1968.[6] In 1998, the British structural realist philosopher James Ladyman distinguished epistemic and ontic forms of structural realism.[7][3]
https://en.wikipedia.org/wiki/Structuralism_(philosophy_of_science)
In philosophy of science, constructive empiricism is a form of empiricism. While it is sometimes referred to as an empiricist form of structuralism, its main proponent, Bas van Fraassen, has consistently distinguished between the two views.[1]
Bas van Fraassen is nearly solely responsible for the initial development of constructive empiricism; its historically most important presentation appears in his The Scientific Image (1980). Constructive empiricism states that scientific theories are semantically literal, that they aim to be empirically adequate, and that their acceptance involves, as belief, only that they are empirically adequate. A theory is empirically adequate if and only if everything that it says about observable entities is true (regardless of what it says about unobservable entities). A theory is semantically literal if and only if the language of the theory is interpreted in such a way that the claims of the theory are either true or false (as opposed to an instrumentalist reading).
Constructive empiricism is thus a normative, semantic and epistemological thesis. That science aims to be empirically adequate expresses the normative component. That scientific theories are semantically literal expresses the semantic component. That acceptance involves, as belief, only that a theory is empirically adequate expresses the epistemological component.
Constructive empiricism opposes scientific realism, logical positivism (or logical empiricism) and instrumentalism. Constructive empiricism and scientific realism agree that theories are semantically literal, which logical positivism and instrumentalism deny. Constructive empiricism, logical positivism and instrumentalism agree that theories do not aim for truth about unobservables, which scientific realism denies.
Constructive empiricism has been used to analyze various scientific fields, from physics to psychology (especially computational psychology).
https://en.wikipedia.org/wiki/Constructive_empiricism
Kant on noumena
The distinction between "observable" and "unobservable" is similar to Immanuel Kant's distinction between noumena and phenomena. Noumena are the things-in-themselves, i.e., raw things in their necessarily unknowable state,[3] before they pass through the formalizing apparatus of the senses and the mind in order to become perceived objects, which he refers to as "phenomena". According to Kant, humans can never know noumena; all that humans know is the phenomena.
Locke on primary and secondary qualities
Kant's distinction is similar to John Locke's distinction between primary and secondary qualities. Secondary qualities are what humans perceive such as redness, chirping, heat, mustiness or sweetness. Primary qualities would be the actual qualities of the things themselves which give rise to the secondary qualities which humans perceive.
Philosophy of science
The ontological nature and epistemological issues concerning unobservables are central topics in philosophy of science. The theory that unobservables posited by scientific theories exist is referred to as scientific realism. It contrasts with instrumentalism, which asserts that we should withhold ontological commitments to unobservables even though it is useful for scientific theories to refer to them.
The notion of observability plays a central role in constructive empiricism. According to van Fraassen, the goal of scientific theories is not truth about all entities but only truth about all observable entities.[4] If a theory is true in this restricted sense, it is called an empirically adequate theory. Van Fraassen characterizes observability counterfactually: "X is observable if there are circumstances which are such that, if X is present to us under those circumstances, then we observe it".[5]
A problem with this and similar characterizations is to determine the exact extension of what is unobservable. There is little controversy that regular everyday objects that we can perceive without any aids are observable. Such objects include e.g. trees, chairs or dogs. But controversy starts with cases where unaided perception fails. This includes cases like using telescopes to study distant galaxies,[6] using microscopes to study bacteria or using cloud chambers to study positrons.[5]
Some philosophers have been motivated by these and similar examples to question the value of the distinction between observable and unobservable in general.[7]
Kinds of unobservables
W. V. Metcalf distinguishes three kinds of unobservables.[8] One is the logically unobservable, which involves a contradiction. An example would be a length which is both longer and shorter than a given length. The second is the practically unobservable, that which we can conceive of as observable by the known sense-faculties of man but we are prevented from observing by practical difficulties. The third kind is the physically unobservable, that which can never be observed by any existing sense-faculties of man.
https://en.wikipedia.org/wiki/Unobservable
https://en.wikipedia.org/wiki/Observable_universe
Causality is the relationship between causes and effects.[1][2] While causality is also a topic studied from the perspectives of philosophy, from the perspective of physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone.
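The light-cone condition above is easy to state as a predicate: event A can cause event B only if B lies in A's future light cone, i.e. a signal at light speed could have reached it. The sketch below uses one spatial dimension for simplicity; `Event` and `can_cause` are illustrative names, not from any physics library.

```python
from dataclasses import dataclass

C = 299_792_458.0  # speed of light, m/s

@dataclass
class Event:
    t: float  # time in seconds
    x: float  # position in metres (one spatial dimension for simplicity)

def can_cause(a: Event, b: Event) -> bool:
    """True if event a lies in (or on) the past light cone of event b."""
    dt = b.t - a.t
    # b must be later than a, and close enough for light to have reached it.
    return dt >= 0 and abs(b.x - a.x) <= C * dt

# A signal can reach a point one light-second away after one second ...
print(can_cause(Event(0, 0), Event(1.0, C)))        # True
# ... but not a point farther than light could have travelled.
print(can_cause(Event(0, 0), Event(1.0, 2 * C)))    # False
```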
https://en.wikipedia.org/wiki/Causality_(physics)
https://en.wikipedia.org/wiki/Measure_(physics)
https://en.wikipedia.org/wiki/Action_(physics)
https://en.wikipedia.org/wiki/Supersymmetry
https://en.wikipedia.org/wiki/Unobservable
Mechanism is the belief that natural wholes (principally living things) are similar to complicated machines or artifacts, composed of parts lacking any intrinsic relationship to each other.
The doctrine of mechanism in philosophy comes in two different flavors. They are both doctrines of metaphysics, but they are different in scope and ambitions: the first is a global doctrine about nature; the second is a local doctrine about humans and their minds, which is hotly contested. For clarity, we might distinguish these two doctrines as universal mechanism and anthropic mechanism.
https://en.wikipedia.org/wiki/Mechanism_(philosophy)
Quantum entanglement is a physical phenomenon that occurs when a group of particles are generated, interact, or share spatial proximity in a way such that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics lacking in classical mechanics.
Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can, in some cases, be found to be perfectly correlated. For example, if a pair of entangled particles is generated such that their total spin is known to be zero, and one particle is found to have clockwise spin on a first axis, then the spin of the other particle, measured on the same axis, is found to be counterclockwise. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a particle's properties results in an irreversible wave function collapse of that particle and changes the original quantum state. With entangled particles, such measurements affect the entangled system as a whole.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen,[1] and several papers by Erwin Schrödinger shortly thereafter,[2][3] describing what came to be known as the EPR paradox. Einstein and others considered such behavior impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance")[4] and argued that the accepted formulation of quantum mechanics must therefore be incomplete.
Later, however, the counterintuitive predictions of quantum mechanics were verified[5][6][7] in tests where polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality. In earlier tests, it couldn't be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location.[7] However, so-called "loophole-free" Bell tests have been performed where the locations were sufficiently separated that communications at the speed of light would have taken longer—in one case, 10,000 times longer—than the interval between the measurements.[6][5]
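The violated bound can be computed directly from the spin singlet state described above. The sketch below evaluates the CHSH combination S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′), where E(a,b) is the correlation of spin measurements along angles a and b; the measurement angles are the standard choice that maximizes the quantum value, and the local-hidden-variable (classical) bound is |S| ≤ 2.

```python
import numpy as np

# CHSH value for the spin singlet, computed from the quantum state itself.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

def E(a, b):
    """Correlation <psi|(n_a.sigma) x (n_b.sigma)|psi> for angles in the x-z plane."""
    A = np.cos(a) * sz + np.sin(a) * sx   # spin observable along angle a
    B = np.cos(b) * sz + np.sin(b) * sx
    return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

# Standard angle choice for maximal violation.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))   # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

Note that for the singlet E(a, a) = −1 for any common axis, which is exactly the perfect anticorrelation described earlier: measuring one particle clockwise forces the other, on the same axis, to be counterclockwise.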
According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations which don't recognize wavefunction collapse dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.[8][9]
Quantum entanglement has been demonstrated experimentally with photons,[10][11] neutrinos,[12] electrons,[13][14] molecules as large as buckyballs,[15][16] and even small diamonds.[17][18] The utilization of entanglement in communication, computation and quantum radar is a very active area of research and development.
https://en.wikipedia.org/wiki/Quantum_entanglement