09-17-2021-0217 - There are three basic types of spatial symmetry: reflection, rotation, and translation. The known elementary particles respect rotation and translation symmetry but do not respect mirror reflection symmetry (also called P-symmetry or parity).
Parity violation in weak interactions was first postulated by Tsung-Dao Lee and Chen-Ning Yang[2] in 1956 as a solution to the τ-θ puzzle. They suggested a number of experiments to test whether the weak interaction is invariant under parity. These experiments were performed half a year later and confirmed that the weak interactions of the known particles violate parity.[3][4][5]
However, parity symmetry can be restored as a fundamental symmetry of nature if the particle content is enlarged so that every particle has a mirror partner. The theory in its modern form was described in 1991,[6] although the basic idea dates back further.[2][7][8] Mirror particles interact amongst themselves in the same way as ordinary particles, except where ordinary particles have left-handed interactions, mirror particles have right-handed interactions. In this way, it turns out that mirror reflection symmetry can exist as an exact symmetry of nature, provided that a "mirror" particle exists for every ordinary particle. Parity can also be spontaneously broken depending on the Higgs potential.[9][10] While in the case of unbroken parity symmetry the masses of particles are the same as their mirror partners, in case of broken parity symmetry the mirror partners are lighter or heavier.
Mirror matter, if it exists, would need to use the weak force to interact with ordinary matter. This is because the forces between mirror particles are mediated by mirror bosons. With the exception of the graviton, none of the known bosons can be identical to their mirror partners. The only way mirror matter can interact with ordinary matter via forces other than gravity is via kinetic mixing of mirror bosons with ordinary bosons or via the exchange of Holdom particles.[11] These interactions can only be very weak. Mirror particles have therefore been suggested as candidates for the inferred dark matter in the universe.[12][13][14][15][16]
In another context, mirror matter has been proposed to give rise to an effective Higgs mechanism responsible for the electroweak symmetry breaking. In such a scenario, mirror fermions have masses on the order of 1 TeV since they interact with an additional interaction, while some of the mirror bosons are identical to the ordinary gauge bosons. In order to emphasize the distinction of this model from the ones above, these mirror particles are usually called katoptrons.[17][18]
Most modern cosmological models are based on the cosmological principle, which states that our observational location in the universe is not unusual or special; on a large-enough scale, the universe looks the same in all directions (isotropy) and from every location (homogeneity).[6]
The model includes an expansion of metric space that is well documented both as the red shift of prominent spectral absorption or emission lines in the light from distant galaxies and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. It also allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light.
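The superluminal recession described above follows directly from Hubble's law, v = H0 · d: beyond the Hubble distance c/H0, the summed expansion exceeds the speed of light. A minimal sketch in Python, assuming H0 ≈ 67.7 km/s/Mpc (a value consistent with recent Planck-based estimates):

```python
# Hubble's law sketch: recession speed grows linearly with distance.
H0 = 67.7                # Hubble constant, km/s per megaparsec (assumed value)
C = 299_792.458          # speed of light, km/s

def recession_speed(d_mpc):
    """Recession speed (km/s) of an object at comoving distance d_mpc (Mpc)."""
    return H0 * d_mpc

# Hubble distance: the distance at which recession speed reaches c.
hubble_distance = C / H0  # ~4400 Mpc, roughly 14 billion light-years

# A galaxy at 6000 Mpc recedes faster than light. This is allowed because it
# is space itself expanding, not motion through a local region of space.
v = recession_speed(6000)
print(hubble_distance, v, v > C)
```

The key point the numbers make concrete: no local motion exceeds c, but the cumulative stretching of space over a long enough baseline does.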
The Greek letter Λ (lambda) represents the cosmological constant, which is currently associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, p = −ρc², which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, Ω_Λ, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae[7] or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data, or more than 68.3% (2018 estimate) of the mass-energy density of the universe.[8]
Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "flat" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter.
non-baryonic
It consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons).
cold
Its velocity is far less than the speed of light at the epoch of radiation-matter equality (thus neutrinos are excluded, being non-baryonic but not cold).
dissipationless
It cannot cool by radiating photons.
collisionless
The dark matter particles interact with each other and other particles only through gravity and possibly the weak force.
Dark matter constitutes about 26.5%[9] of the mass-energy density of the universe. The remaining 4.9%[9] comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass-energy density of the universe.[10]
Also, the energy density includes a very small fraction (~ 0.01%) in cosmic microwave background radiation, and not more than 0.5% in relic neutrinos. Although very small today, these were much more important in the distant past, dominating over matter at redshift > 3200.
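As a quick consistency check, the fractions quoted above should sum to approximately 1 for a flat universe. A sketch using the rounded values from the text (the photon and neutrino entries are rough illustrative figures, not measured values):

```python
# Lambda-CDM energy budget, as fractions of the critical density.
# 2018-era rounded values; the last two entries are illustrative.
budget = {
    "dark energy":     0.685,
    "dark matter":     0.265,
    "ordinary matter": 0.049,
    "CMB photons":     0.0001,
    "relic neutrinos": 0.001,   # below the <0.5% bound quoted in the text
}
total = sum(budget.values())
print(round(total, 3))  # should land very close to 1.0 for a flat universe
```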
https://en.wikipedia.org/wiki/Lambda-CDM_model
In modern physical cosmology, the cosmological principle is the notion that the spatial distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe and should therefore produce no observable irregularities in the large-scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
The cosmological principle is usually stated formally as 'Viewed on a sufficiently large scale, the properties of the universe are the same for all observers.' This amounts to the strongly philosophical statement that the part of the universe which we can see is a fair sample, and that the same physical laws apply throughout. In essence, this in a sense says that the universe is knowable and is playing fair with scientists.[1]
The cosmological principle depends on a definition of "observer," and contains an implicit qualification and two testable consequences.
"Observers" means any observer at any location in the universe, not simply any human observer at any location on Earth: as Andrew Liddle puts it, "the cosmological principle [means that] the universe looks the same whoever and wherever you are."[2]
The qualification is that variation in physical structures can be overlooked, provided this does not imperil the uniformity of conclusions drawn from observation: the Sun is different from the Earth, our galaxy is different from a black hole, some galaxies advance toward rather than recede from us, and the universe has a "foamy" texture of galaxy clusters and voids, but none of these different structures appears to violate the basic laws of physics.
The two testable structural consequences of the cosmological principle are homogeneity and isotropy. Homogeneity means that the same observational evidence is available to observers at different locations in the universe ("the part of the universe which we can see is a fair sample"). Isotropy means that the same observational evidence is available by looking in any direction in the universe ("the same physical laws apply throughout"). The principles are distinct but closely related, because a universe that appears isotropic from any two (for a spherical geometry, three) locations must also be homogeneous.
The cosmological principle is first clearly asserted in the Philosophiæ Naturalis Principia Mathematica (1687) of Isaac Newton. In contrast to earlier classical or medieval cosmologies, in which Earth rested at the center of the universe, Newton conceptualized the Earth as a sphere in orbital motion around the Sun within an empty space that extended uniformly in all directions to immeasurably large distances. He then showed, through a series of mathematical proofs on detailed observational data of the motions of planets and comets, that their motions could be explained by a single principle of "universal gravitation" that applied as well to the orbits of the Galilean moons around Jupiter, the Moon around the Earth, the Earth around the Sun, and to falling bodies on Earth. That is, he asserted the equivalent material nature of all bodies within the Solar System, the identical nature of the Sun and distant stars, and thus the uniform extension of the physical laws of motion to a great distance beyond the observational location of Earth itself.
Karl Popper criticized the cosmological principle on the grounds that it makes "our lack of knowledge a principle of knowing something". He summarized his position as:
the “cosmological principles” were, I fear, dogmas that should not have been proposed.[7]
The perfect cosmological principle is an extension of the cosmological principle, and states that the universe is homogeneous and isotropic in space and time. In this view the universe looks the same everywhere (on the large scale), the same as it always has and always will. The perfect cosmological principle underpins Steady State theory and emerges from chaotic inflation theory.[30][31][32]
The size of the whole universe is unknown, and it might be infinite in extent.[19] Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth or space-based instruments, and therefore lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable. However, owing to Hubble's law, regions sufficiently distant from the Earth are expanding away from it faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion) and furthermore the expansion rate appears to be accelerating owing to dark energy.
Visualization of the whole observable universe. The scale is such that the fine grains represent collections of large numbers of superclusters. The Virgo Supercluster—home of Milky Way—is marked at the center, but is too small to be seen.
The observable universe is a ball-shaped region of the universe comprising all matter that can be observed from Earth or its space-based telescopes and exploratory probes at the present time, because the electromagnetic radiation from these objects has had time to reach the Solar System and Earth since the beginning of the cosmological expansion. There may be 2 trillion galaxies in the observable universe,[7][8] although that number has recently been estimated at only several hundred billion based on new data from New Horizons.[9][10] Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe has a spherical volume (a ball) centered on the observer. Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth.
The word observable in this sense does not refer to the capability of modern technology to detect light or other information from an object, or whether there is anything to be detected. It refers to the physical limit created by the speed of light itself. No signal can travel faster than light, hence there is a maximum distance (called the particle horizon) beyond which nothing can be detected, as the signals could not have reached us yet. Sometimes astrophysicists distinguish between the visible universe, which includes only signals emitted since recombination (when hydrogen atoms were formed from protons and electrons and photons were emitted)—and the observable universe, which includes signals since the beginning of the cosmological expansion (the Big Bang in traditional physical cosmology, the end of the inflationary epoch in modern cosmology).
According to calculations, the current comoving distance—proper distance, which takes into account that the universe has expanded since the light was emitted—to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years),[11] about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years[12][13] and its diameter about 28.5 gigaparsecs (93 billion light-years, or 8.8×10^26 metres or 2.89×10^27 feet), which equals 880 yottametres.[14] Using the critical density and the diameter of the observable universe, the total mass of ordinary matter in the universe can be calculated to be about 1.5 × 10^53 kg.[15] In November 2018, astronomers reported that the extragalactic background light (EBL) amounted to 4 × 10^84 photons.[16][17]
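The ~1.5 × 10^53 kg figure can be reproduced from the critical density, the volume of the observable universe, and the baryon fraction. A sketch using assumed round values (H0 ≈ 67.7 km/s/Mpc, baryon fraction ≈ 4.9%):

```python
import math

# Mass of ordinary matter ≈ critical density x volume x baryon fraction.
G  = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.7e3 / 3.086e22    # Hubble constant converted to s^-1 (67.7 km/s/Mpc)
LY = 9.461e15             # metres per light-year

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~8.6e-27 kg/m^3
radius   = 46.5e9 * LY                     # radius of observable universe, m
volume   = 4 / 3 * math.pi * radius**3
mass_ordinary = rho_crit * volume * 0.049  # baryon fraction ~4.9%
print(f"{mass_ordinary:.2e} kg")           # on the order of 1.5e53 kg
```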
As the universe's expansion is accelerating, all currently observable objects, outside our local supercluster, will eventually appear to freeze in time, while emitting progressively redder and fainter light. For instance, objects with the current redshift z from 5 to 10 will remain observable for no more than 4–6 billion years. In addition, light emitted by objects currently situated beyond a certain comoving distance (currently about 19 billion parsecs) will never reach Earth.[18]
Newton's law of universal gravitation states that every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers.[note 1] The publication of the theory has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors.[1][2][3]
In today's language, the law states that every point mass attracts every other point mass by a force acting along the line intersecting the two points. The force is proportional to the product of the two masses, and inversely proportional to the square of the distance between them.[5]
The equation for universal gravitation thus takes the form:

F = G m1 m2 / r^2

where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant.
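The law can be evaluated directly. A minimal sketch in Python, using approximate Earth-Moon values (assumed round figures, for illustration only):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude of Newton's gravitational force between two point masses."""
    return G * m1 * m2 / r**2

# Earth-Moon attraction, with approximate masses and mean distance.
F = gravitational_force(5.972e24, 7.348e22, 3.844e8)
print(f"{F:.2e} N")  # roughly 2e20 newtons
```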
The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798.[6] It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death.
Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and the Coulomb constant in place of the gravitational constant.
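A quick way to make the resemblance concrete is to take the ratio of the two forces for an electron-proton pair; since both laws scale as 1/r^2, the separation cancels out entirely. A sketch with standard physical constants:

```python
# Ratio of Coulomb to gravitational attraction for an electron-proton pair.
# Both forces go as 1/r^2, so the ratio is independent of separation.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K   = 8.988e9      # Coulomb constant, N m^2 C^-2
E   = 1.602e-19    # elementary charge, C
M_E = 9.109e-31    # electron mass, kg
M_P = 1.673e-27    # proton mass, kg

ratio = (K * E**2) / (G * M_E * M_P)
print(f"{ratio:.1e}")  # electric force wins by ~39 orders of magnitude
```

The enormous ratio is why gravity is negligible at atomic scales yet dominant at astronomical ones: bulk matter is electrically neutral, so only gravity accumulates.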
Newton's law has since been superseded by Albert Einstein's theory of general relativity, but it continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun).
Dark matter is a hypothetical form of matter thought to account for approximately 85% of the matter in the universe.[1] Its presence is implied in a variety of astrophysical observations, including gravitational effects that cannot be explained by accepted theories of gravity unless more matter is present than can be seen. For this reason, most experts think that dark matter is abundant in the universe and that it has had a strong influence on its structure and evolution. Dark matter is called dark because it does not appear to interact with the electromagnetic field, which means it does not absorb, reflect or emit electromagnetic radiation, and is therefore difficult to detect.[2]
Because dark matter has not yet been observed directly, if it exists, it must barely interact with ordinary baryonic matter and radiation, except through gravity. Most dark matter is thought to be non-baryonic in nature; it may be composed of some as-yet undiscovered subatomic particles.[b] The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly interacting massive particles (WIMPs).[14] Many experiments to directly detect and study dark matter particles are being actively undertaken, but none have yet succeeded.[15] Dark matter is classified as "cold", "warm", or "hot" according to its velocity (more precisely, its free streaming length). Current models favor a cold dark matter scenario, in which structures emerge by gradual accumulation of particles.
Although the existence of dark matter is generally accepted by the scientific community,[16] some astrophysicists, intrigued by certain observations which are not well-explained by standard dark matter, argue for various modifications of the standard laws of general relativity, such as modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.
In physical cosmology and astronomy, dark energy is an unknown form of energy that affects the universe on the largest scales. The first observational evidence for its existence came from measurements of supernovae, which showed that the universe does not expand at a constant rate; rather, the expansion of the universe is accelerating.[1][2] Understanding the evolution of the universe requires knowledge of its starting conditions and its composition. Prior to these observations, it was thought that all forms of matter and energy in the universe would only cause the expansion to slow down over time. Measurements of the cosmic microwave background suggest the universe began in a hot Big Bang, from which general relativity explains its evolution and the subsequent large-scale motion. Without introducing a new form of energy, there was no way to explain how an accelerating universe could be measured. Since the 1990s, dark energy has been the most accepted premise to account for the accelerated expansion. As of 2021, there are active areas of cosmology research aimed at understanding the fundamental nature of dark energy.[3]
Assuming that the lambda-CDM model of cosmology is correct, the best current measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contributes 26% and 5%, respectively, and other components such as neutrinos and photons contribute a very small amount.[4][5][6][7] The density of dark energy is very low (~ 7 × 10^−30 g/cm³), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the mass–energy of the universe because it is uniform across space.[8][9][10]
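The quoted dark energy density can be cross-checked by multiplying the dark energy fraction (~0.68) by the critical density of a flat universe. A sketch with assumed round input values:

```python
import math

# Dark energy density = (dark energy fraction) x (critical density).
G  = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.7e3 / 3.086e22    # Hubble constant in s^-1 (assumed 67.7 km/s/Mpc)

rho_crit   = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_de     = 0.68 * rho_crit                 # dark energy density, kg/m^3
rho_de_cgs = rho_de * 1e-3                   # convert kg/m^3 -> g/cm^3
print(f"{rho_de_cgs:.1e} g/cm^3")            # ~6e-30, same order as the quoted ~7e-30
```

The small discrepancy against the quoted ~7 × 10^−30 g/cm³ comes from the rounded input values; the order of magnitude is the point.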
Two proposed forms of dark energy are the cosmological constant,[11][12] representing a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities having energy densities that can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space i.e. the vacuum energy.[13] Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.
Due to the toy model nature of concordance cosmology, some experts believe[14] that a more accurate general relativistic treatment of the structures that exist on all scales[15] in the real universe may do away with the need to invoke dark energy. Inhomogeneous cosmologies, which attempt to account for the back-reaction of structure formation on the metric, generally do not acknowledge any dark energy contribution to the energy density of the Universe.
In astronomy and cosmology, dark fluid theories attempt to explain dark matter and dark energy in a single framework.[1][2] The theory proposes that dark matter and dark energy are not separate physical phenomena, nor do they have separate origins, but that they are strongly linked together and can be considered as two facets of a single fluid. At galactic scales, the dark fluid behaves like dark matter, and at larger scales its behavior becomes similar to dark energy.
In 2018, astrophysicist Jamie Farnes proposed that a dark fluid with negative mass would have the properties required to explain both dark matter and dark energy.[3][4]
https://en.wikipedia.org/wiki/Dark_fluid
In astrophysics, dark flow is a theoretical non-random component of the peculiar velocity of galaxy clusters. The actual measured velocity is the sum of the velocity predicted by Hubble's Law plus a possible small and unexplained (or dark) velocity flowing in a common direction.
The researchers had suggested that the motion may be a remnant of the influence of no-longer-visible regions of the universe prior to inflation. Telescopes cannot see events earlier than about 380,000 years after the Big Bang, when the universe became transparent (the cosmic microwave background); this corresponds to the particle horizon at a distance of about 46 billion (4.6×10^10) light years. Since the matter causing the net motion in this proposal is outside this range, it would in a certain sense be outside our visible universe; however, it would still be in our past light cone.
In 2013, data from the Planck space telescope showed no evidence of "dark flow" on that sort of scale, discounting the claims of evidence for either gravitational effects reaching beyond the visible universe or the existence of a multiverse.[5] However, in 2015 Kashlinsky et al. claimed to have found support for its existence using both Planck and WMAP data.[6]
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximalregions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional.
An anisotropic liquid has the fluidity of a normal liquid, but its molecules exhibit an average structural order along the molecular axis, unlike water or chloroform, whose molecules show no structural ordering. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat in a way that is isotropic, that is, independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.[2]
Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity/resistivity and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for some of the sedimentary rocks like coal and shale can change with corresponding changes in their surface properties like sorption when gases are produced from the coal and shale reservoirs.[3]
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains[4] or to wells,[5] the difference between horizontal and vertical permeability must be taken into account, otherwise the results may be subject to error.
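The standard textbook averaging rules make this concrete: parallel to the layers, the effective conductivity of a layered aquifer is a thickness-weighted arithmetic mean, while perpendicular to them it is a harmonic mean, so a single thin clay layer can dominate vertical flow. A sketch with illustrative (assumed) layer values:

```python
# Effective hydraulic conductivity of a layered aquifer.
# Horizontal flow: thickness-weighted arithmetic mean of layer conductivities.
# Vertical flow: harmonic mean, dominated by the least-conductive layer.
layers = [  # (thickness in m, conductivity in m/day) -- illustrative values
    (2.0, 10.0),   # sand
    (0.5, 0.01),   # clay
    (3.0, 5.0),    # silty sand
]

total_b = sum(b for b, _ in layers)
k_horizontal = sum(b * k for b, k in layers) / total_b
k_vertical   = total_b / sum(b / k for b, k in layers)

print(k_horizontal, k_vertical)  # k_horizontal >> k_vertical: strong anisotropy
```

Even though the clay layer is only half a metre thick, it reduces the vertical conductivity by roughly two orders of magnitude relative to the horizontal one, which is exactly the error source the text warns about.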
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis that is normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property.[6][7] When a material is polycrystalline, the directional dependence of properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will often be anisotropic. Textured materials are often the result of processing techniques like hot rolling, wire-drawing, and heat treatment.
Mechanical properties of materials such as Young's modulus, ductility, yield strength, and high temperature creep rate, are often dependent on the direction of measurement.[8] Fourth rank tensor properties, like the elastic constants, are anisotropic, even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead.
In metals, anisotropic elasticity behavior is present in all single crystals, with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the <111> direction, normal to the close-packed planes, and smallest parallel to <100>. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminum is another metal that is nearly isotropic.
For an isotropic material,

G = E / (2(1 + ν)),

where G is the shear modulus, E is the Young's modulus, and ν is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy, a_r, as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent:

a_r = G / (E / (2(1 + ν))) = 2(1 + ν)G / E.
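That ratio (measured shear modulus over the isotropic value implied by E and ν) can be computed directly from measured elastic constants. A minimal sketch using approximate room-temperature values for tungsten (assumed figures); since tungsten is nearly isotropic, the result should come out close to 1:

```python
# Anisotropy ratio: measured shear modulus over the shear modulus an
# isotropic material with the same E and nu would have, G_iso = E / (2(1+nu)).
def anisotropy_ratio(G, E, nu):
    G_isotropic = E / (2 * (1 + nu))
    return G / G_isotropic

# Approximate polycrystalline values for tungsten (illustrative assumptions):
# G ~ 161 GPa, E ~ 411 GPa, nu ~ 0.28.
r = anisotropy_ratio(G=161e9, E=411e9, nu=0.28)
print(round(r, 2))  # close to 1.0 -> nearly isotropic
```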
Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material.[9] The tunability of orientation of the fibers, allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material.
Amorphous materials such as glass and polymers are typically isotropic; the highly randomized orientation of macromolecules in polymeric materials makes them, in general, isotropic. However, polymers can be engineered to have directionally dependent properties through processing techniques or the introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter.[10] 3D printing, especially fused deposition modeling (FDM), can introduce anisotropy into printed parts because FDM extrudes and prints layers of thermoplastic material.[11] This creates parts that are strong when tensile stress is applied parallel to the layers and weak when it is applied perpendicular to the layers.
Anisotropic etching techniques (such as deep reactive ion etching) are used in microfabrication processes to create well-defined microscopic features with a high aspect ratio. These features are commonly used in MEMS and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants that etch a material preferentially along certain crystallographic planes (e.g., KOH etching of silicon (100) produces pyramid-like structures).
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the individual's brain.
Radiance fields (see BRDF) from a reflective surface are often not isotropic in nature. This makes calculating the total energy reflected from any scene difficult. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene. For example, let the BRDF be f(ω_i, ω_v), where 'i' denotes the incident direction and 'v' the viewing direction (as if from a satellite or other instrument), and let P be the planar albedo, which represents the total reflectance from the scene. The anisotropy function A(ω_i, ω_v) = f(ω_i, ω_v) / P(ω_i) then relates each viewing geometry to the total reflectance.
It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction (say, ω_v) yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry (say, ω_i): P(ω_i) = f(ω_i, ω_v) / A(ω_i, ω_v).
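The relationship between BRDF, planar albedo, and anisotropy function can be sketched numerically. The toy BRDF below depends only on the viewing zenith angle (real scene BRDFs also depend on the incident direction and azimuth; that dependence is dropped here purely for illustration, and the coefficients K0, K1 are invented):

```python
import math

# Toy BRDF, assumed for illustration: f(theta_v) = (K0 + K1*cos(theta_v)) / pi.
K0, K1 = 0.2, 0.4

def brdf(theta_v):
    return (K0 + K1 * math.cos(theta_v)) / math.pi

def planar_albedo(n=2000):
    """P = integral of f * cos(theta_v) over the viewing hemisphere.

    Midpoint rule in theta; the azimuthal integral contributes 2*pi since
    this toy BRDF has no azimuthal dependence.
    """
    total, d = 0.0, (math.pi / 2) / n
    for k in range(n):
        t = (k + 0.5) * d
        total += brdf(t) * math.cos(t) * math.sin(t) * 2.0 * math.pi * d
    return total

def anisotropy_function(theta_v):
    """A = f / P: relates one viewing direction to the total reflectance."""
    return brdf(theta_v) / planar_albedo()

# With A known, a single BRDF measurement recovers the planar albedo:
theta0 = math.radians(30.0)
P_recovered = brdf(theta0) / anisotropy_function(theta0)
print(P_recovered)
```

For this particular BRDF the hemispherical integral has the closed form P = K0 + 2*K1/3, which the numerical integration reproduces, and dividing any single measurement f(θ) by A(θ) returns the same P, mirroring the single-view recovery described above.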
On the other hand, in places where there is relatively little matter, as in the voids between galactic superclusters, this hypothesis predicts that the dark fluid relaxes and acquires a negative pressure. Thus dark fluid becomes a repulsive force, with an effect similar to that of dark energy.
Dark fluid is not analyzed like a standard fluid mechanics model, because the complete equations in fluid mechanics are as yet too difficult to solve. A formalized fluid mechanical approach, like the generalized Chaplygin gas model, would be an ideal method for modeling dark fluid, but it currently requires too many observational data points for the computations to be feasible, and not enough data points are available to cosmologists. A simplification step was undertaken by modeling the hypothesis through scalar field models instead, as is done in other alternative approaches to dark energy and dark matter.[2][5]
https://en.wikipedia.org/wiki/Dark_fluid
[Submitted on 28 Jul 2020 (v1), last revised 31 Jul 2020 (this version, v2)]
In 2005, Dharam Ahluwalia and Daniel Grumiller reported an unexpected theoretical discovery of mass dimension one fermions. These are an entirely new class of spin-one-half particles, and because of their mass-dimensionality mismatch with the Standard Model fermions they are a first-principles dark matter candidate. Written by one of the physicists involved in the discovery, this is the first book to outline the discovery of mass dimension one fermions. Using a foundation of Lorentz algebra, it provides a detailed construction of the eigenspinors of the charge conjugation operator (Elko) and their properties. The theory of dual spaces is then covered, before mass dimension one fermions are discussed in detail. With mass dimension one fermions having applications to cosmology and high energy physics, this book is essential for graduate students and researchers in quantum field theory, mathematical physics, and particle theory.
In particle physics, the lightest supersymmetric particle (LSP) is the generic name given to the lightest of the additional hypothetical particles found in supersymmetric models. In models with R-parity conservation, the LSP is stable; in other words, it cannot decay into any Standard Model particle, since all SM particles have the opposite R-parity. There is extensive observational evidence for an additional component of the matter density in the universe, which goes under the name dark matter. The LSP of supersymmetric models is a dark matter candidate and is a weakly interacting massive particle (WIMP).[1]