Blog Archive

Friday, March 18, 2022

03-17-2022-2306 - Photoevaporation



Photoevaporation denotes the process where energetic radiation ionises gas and causes it to disperse away from the ionising source. This typically refers to an astrophysical context where ultraviolet radiation from hot stars acts on clouds of material such as molecular clouds, protoplanetary disks, or planetary atmospheres.[1][2][3] 

https://en.wikipedia.org/wiki/Photoevaporation

An evaporating gas globule or EGG is a region of hydrogen gas in outer space approximately 100 astronomical units in size, such that gases shaded by it are shielded from ionizing UV rays.[2] Dense areas of gas shielded by an evaporating gas globule can be conducive to the birth of stars.[2] Evaporating gas globules were first conclusively identified via photographs of the Pillars of Creation in the Eagle Nebula taken by the Hubble Space Telescope in 1995.[2][3]

EGGs are the likely predecessors of new protostars. Inside an EGG the gas and dust are denser than in the surrounding dust cloud. Gravity pulls the cloud even more tightly together as the EGG continues to draw in material from its surroundings. As the cloud's density builds up, the globule grows hotter under the weight of its outer layers, and a protostar forms inside the EGG.

A protostar may have too little mass to become a star. If so it becomes a brown dwarf. If the protostar has sufficient mass, the density reaches a critical level where the temperature exceeds 10 million kelvin at its center. At this point, a nuclear reaction starts converting hydrogen to helium and releasing large amounts of energy. The protostar then becomes a star and joins the main sequence on the HR diagram.[4]
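
A rough sense of why this hydrogen-to-helium conversion powers a star comes from the mass defect of the net reaction 4 ¹H → ⁴He. The following Python sketch (standard constants, not taken from the text) computes the energy released:

# Back-of-the-envelope: energy released when four protons fuse into one helium-4
# nucleus, the reaction that marks a protostar joining the main sequence.
# Masses in atomic mass units (u); constants are standard values, not from the text.

U_TO_KG = 1.66053906660e-27   # kg per atomic mass unit
C = 2.99792458e8              # speed of light, m/s
M_PROTON = 1.007276           # u  (hydrogen-1 nucleus)
M_HELIUM4 = 4.001506          # u  (helium-4 nucleus)

mass_defect_u = 4 * M_PROTON - M_HELIUM4      # ~0.0276 u lost per reaction
energy_j = mass_defect_u * U_TO_KG * C**2     # E = delta_m * c^2
energy_mev = energy_j / 1.602176634e-13       # joules -> MeV

print(f"mass defect: {mass_defect_u:.4f} u "
      f"({mass_defect_u / (4 * M_PROTON):.2%} of the input mass)")
print(f"energy per He-4 formed: {energy_mev:.1f} MeV")   # roughly 26 MeV

About 0.7% of the hydrogen's rest mass is converted to energy, which is why even a slow fusion rate can balance a star against gravity for billions of years.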

A study of 73 EGGs in the Pillars of Creation (Eagle Nebula) with the Very Large Telescope showed that only 15% of the EGGs show signs of star formation. Star formation is not uniform across the structure: the largest pillar has a small cluster of these sources at its head.[5]

https://en.wikipedia.org/wiki/Evaporating_gaseous_globule

Hydra (/ˈhaɪdrə/ HY-drə) is a genus of small, freshwater organisms of the phylum Cnidaria and class Hydrozoa. They are native to temperate and tropical regions.[2][3] Biologists are especially interested in Hydra because of their regenerative ability; they do not appear to die of old age, or to age at all.

https://en.wikipedia.org/wiki/Hydra_(genus)

Turritopsis dohrnii, also known as the immortal jellyfish, is a species of small, biologically immortal jellyfish[2][3] found worldwide in temperate to tropic waters. It is one of the few known cases of animals capable of reverting completely to a sexually immature, colonial stage after having reached sexual maturity as a solitary individual. Others include the jellyfish Laodicea undulata[4] and species of the genus Aurelia.[5]

Like most other hydrozoans, T. dohrnii begin their life as tiny, free-swimming larvae known as planulae. As a planula settles down, it gives rise to a colony of polyps that are attached to the sea floor. All the polyps and jellyfish arising from a single planula are genetically identical clones.[6] The polyps form into an extensively branched form, which is not commonly seen in most jellyfish. Jellyfish, also known as medusae, then bud off these polyps and continue their life in a free-swimming form, eventually becoming sexually mature. When sexually mature, they have been known to prey on other jellyfish species at a rapid pace. If the T. dohrnii jellyfish is exposed to environmental stress, physical assault, or is sick or old, it can revert to the polyp stage, forming a new polyp colony.[7] It does this through the cell development process of transdifferentiation, which alters the differentiated state of the cells and transforms them into new types of cells.

Theoretically, this process can go on indefinitely, effectively rendering the jellyfish biologically immortal,[3][8] although in practice individuals can still die. In nature, most Turritopsis dohrnii are likely to succumb to predation or disease in the medusa stage without reverting to the polyp form.[9]

The capability of biological immortality with no maximum lifespan makes T. dohrnii an important target of basic biological, aging and pharmaceutical research.[10]

The "immortal jellyfish" was formerly classified as T. nutricula.[11]

https://en.wikipedia.org/wiki/Turritopsis_dohrnii

Hydroids are a life stage for most animals of the class Hydrozoa, small predators related to jellyfish.

https://en.wikipedia.org/wiki/Hydroid_(zoology)

Brown dwarfs are substellar objects that are not massive enough to sustain nuclear fusion of ordinary hydrogen (¹H) into helium in their cores, unlike a main-sequence star. They have a mass between the most massive gas giant planets and the least massive stars, approximately 13 to 80 times that of Jupiter (MJ).[2][3] However, they are able to fuse deuterium (²H), and the most massive ones (> 65 MJ) are able to fuse lithium (⁷Li).[3]
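
To relate the quoted 13–80 Jupiter-mass range to the stellar mass scale, here is a minimal Python sketch (standard constants, not taken from the text) that converts the bounds into solar masses:

# Express the quoted brown-dwarf mass range (13-80 Jupiter masses) in solar
# masses, to see how far below the hydrogen-burning limit it sits.
# Constants are standard values, not taken from the text.

M_JUPITER = 1.898e27   # kg
M_SUN = 1.989e30       # kg

for m_jup in (13, 65, 80):   # deuterium limit, lithium limit, hydrogen limit
    m_solar = m_jup * M_JUPITER / M_SUN
    print(f"{m_jup:>3} MJ  ~= {m_solar:.3f} solar masses")
# 80 MJ comes out near 0.076 solar masses, consistent with the ~0.07-0.08
# solar-mass hydrogen-burning threshold quoted for the least massive stars.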

https://en.wikipedia.org/wiki/Brown_dwarf

Magenta (/məˈdʒɛntə/) is a color that is variously defined as purplish-red,[1] reddish-purple or mauvish-crimson.[2] On color wheels of the RGB (additive) and CMY (subtractive) color models, it is located exactly midway between red and blue. It is one of the four colors of ink used in color printing by an inkjet printer, along with yellow, black, and cyan, to make all other colors. The tone of magenta used in printing is called "printer's magenta". It is also a shade of purple.

https://en.wikipedia.org/wiki/Magenta

A black dwarf is a theoretical stellar remnant, specifically a white dwarf that has cooled sufficiently that it no longer emits significant heat or light. Because the time required for a white dwarf to reach this state is calculated to be longer than the current age of the universe (13.77 billion years), no black dwarfs are expected to exist in the universe as of now, and the temperature of the coolest white dwarfs is one observational limit on the age of the universe.[1]

The name "black dwarf" has also been applied to hypothetical late-stage cooled brown dwarfs – substellar objects that do not have sufficient mass (less than approximately 0.07 M) to maintain hydrogen-burning nuclear fusion.[2][3][4][5]

https://en.wikipedia.org/wiki/Black_dwarf



Electron degeneracy pressure is a particular manifestation of the more general phenomenon of quantum degeneracy pressure. The Pauli exclusion principle disallows two identical half-integer spin particles (electrons and all other fermions) from simultaneously occupying the same quantum state. The result is an emergent pressure against compression of matter into smaller volumes of space. Electron degeneracy pressure results from the same underlying mechanism that defines the electron orbital structure of elemental matter. For bulk matter with no net electric charge, the attraction between electrons and nuclei exceeds (at any scale) the mutual repulsion of electrons plus the mutual repulsion of nuclei; so in absence of electron degeneracy pressure, the matter would collapse into a single nucleus. In 1967, Freeman Dyson showed that solid matter is stabilized by quantum degeneracy pressure rather than electrostatic repulsion.[1][2][3] Because of this, electron degeneracy creates a barrier to the gravitational collapse of dying stars and is responsible for the formation of white dwarfs.

https://en.wikipedia.org/wiki/Electron_degeneracy_pressure

https://en.wikipedia.org/wiki/Thermonuclear_fusion

https://en.wikipedia.org/wiki/Plasma_(physics)

https://en.wikipedia.org/wiki/Nuclear_fusion#Important_reactions

Aneutronic fusion is any form of fusion power in which very little of the energy released is carried by neutrons. While the lowest-threshold nuclear fusion reactions release up to 80% of their energy in the form of neutrons, aneutronic reactions release energy in the form of charged particles, typically protons or alpha particles. Successful aneutronic fusion would greatly reduce problems associated with neutron radiation such as damaging ionizing radiation, neutron activation, and requirements for biological shielding, remote handling and safety.
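
The 80% figure for the lowest-threshold reaction can be checked with simple momentum balance for D + T → ⁴He + n, which releases 17.6 MeV in total. A short Python sketch of that arithmetic (standard reaction values, not taken from the text):

# Rough check of the claim that the lowest-threshold fusion reaction releases
# up to ~80% of its energy as neutrons: in D + T -> He-4 + n the 17.6 MeV
# yield is split in inverse proportion to the product masses (momentum balance).

E_TOTAL_MEV = 17.6     # total energy of the D-T reaction
M_ALPHA = 4.0          # He-4 mass, in rough atomic mass units
M_NEUTRON = 1.0

# Equal and opposite momenta -> kinetic energy shared inversely with mass.
e_neutron = E_TOTAL_MEV * M_ALPHA / (M_ALPHA + M_NEUTRON)
print(f"neutron carries ~= {e_neutron:.1f} MeV "
      f"({e_neutron / E_TOTAL_MEV:.0%} of the total)")   # ~14.1 MeV, ~80%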

Since it is simpler to convert the energy of charged particles into electrical power than it is to convert energy from uncharged particles, an aneutronic reaction would be attractive for power systems. Some proponents see a potential for dramatic cost reductions by converting energy directly to electricity, as well as in eliminating the radiation from neutrons, which are difficult to shield against.[1][2] However, the conditions required to harness aneutronic fusion are much more extreme than those required for deuterium-tritium fusion being investigated in ITER or Wendelstein 7-X.

https://en.wikipedia.org/wiki/Aneutronic_fusion

Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new isotopes—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, plus an electron antineutrino with a mean lifetime of 887 seconds (14 minutes, 47 seconds).[1]
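
Given the quoted 887 s mean lifetime, the surviving fraction of free neutrons after a time t follows N(t)/N0 = exp(−t/τ). A small Python sketch of that decay curve:

# Fraction of free neutrons surviving after a given flight time, using the
# quoted mean lifetime of 887 s and exponential decay N(t) = N0 * exp(-t/tau).
import math

TAU = 887.0   # mean neutron lifetime in seconds (value quoted above)

for t in (1.0, 60.0, 887.0, 3600.0):
    surviving = math.exp(-t / TAU)
    print(f"after {t:7.0f} s: {surviving:.3f} of free neutrons remain")
# After one mean lifetime (887 s) about 37% (1/e) remain; after an hour only ~1.7%.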

https://en.wikipedia.org/wiki/Neutron_radiation

The electron neutrino has a corresponding antiparticle, the electron antineutrino (ν̄e), which differs only in that some of its properties have equal magnitude but opposite sign. One major open question in particle physics is whether or not neutrinos and anti-neutrinos are the same particle, in which case it would be a Majorana fermion, or whether they are different particles, in which case they would be Dirac fermions. They are produced in beta decay and other types of weak interactions.

https://en.wikipedia.org/wiki/Electron_neutrino#Electron_antineutrino


In chemistry and physics, the exchange interaction (with an exchange energy and exchange term) is a quantum mechanical effect that only occurs between identical particles. Despite sometimes being called an exchange force in an analogy to classical force, it is not a true force as it lacks a force carrier.


https://en.wikipedia.org/wiki/Exchange_interaction

In quantum field theory, a force carrier, also known as messenger particle or intermediate particle, is a type of particle that gives rise to forces between other particles. These particles serve as the quanta of a particular kind of physical field.[1][2]
https://en.wikipedia.org/wiki/Force_carrier

The photon (Greek: φῶς, phōs, light) is a type of elementary particle that serves as the quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless,[a] so they always move at the speed of light in vacuum, 299,792,458 m/s (or about 186,282 mi/s). The photon belongs to the class of bosons.
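
Since the photon is the quantum of the electromagnetic field, each photon carries an energy E = hc/λ. A minimal Python sketch (the Planck constant is a standard value, not from the text) compares a visible photon with a radio photon:

# Photon energy from wavelength, E = h*c / wavelength, using the exact speed of
# light quoted above and the standard Planck constant (not from the text).

H = 6.62607015e-34      # Planck constant, J*s
C = 299792458.0         # speed of light in vacuum, m/s

for name, wavelength_nm in (("green light", 532), ("FM radio (100 MHz)", 3.0e9)):
    wavelength_m = wavelength_nm * 1e-9
    energy_j = H * C / wavelength_m
    energy_ev = energy_j / 1.602176634e-19
    print(f"{name:>20}: {energy_ev:.3e} eV")
# Visible photons carry a few eV; radio photons carry ~1e-7 eV, which is why
# radio waves behave so classically.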
https://en.wikipedia.org/wiki/Photon

In physics, spacetime is any mathematical model which fuses the three dimensions of space and the one dimension of time into a single four-dimensional manifold. Spacetime diagrams can be used to visualize relativistic effects, such as why different observers perceive differently where and when events occur.
https://en.wikipedia.org/wiki/Spacetime

In physics, a gauge theory is a type of field theory in which the Lagrangian (and hence the dynamics of the system itself) does not change (is invariant) under local transformations according to certain smooth families of operations (Lie groups).
https://en.wikipedia.org/wiki/Gauge_theory

An electromagnetic four-potential is a relativistic vector function from which the electromagnetic field can be derived. It combines both an electric scalar potential and a magnetic vector potential into a single four-vector.[1]
https://en.wikipedia.org/wiki/Electromagnetic_four-potential

In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis.
https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors

In particle physics, the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix), Maki–Nakagawa–Sakata matrix (MNS matrix), lepton mixing matrix, or neutrino mixing matrix is a unitary[a] mixing matrix which contains information on the mismatch of quantum states of neutrinos when they propagate freely and when they take part in weak interactions. It is a model of neutrino oscillation. This matrix was introduced in 1962 by Ziro Maki, Masami Nakagawa, and Shoichi Sakata,[1] to explain the neutrino oscillations predicted by Bruno Pontecorvo.[2]
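
As a concrete illustration of what a unitary lepton mixing matrix looks like, here is a minimal Python sketch that builds a PMNS-style matrix from three mixing angles in the standard rotation parametrization (CP phase omitted; the angle values are illustrative assumptions, not from the text) and checks unitarity:

# Minimal sketch of a PMNS-style mixing matrix built from three mixing angles
# (CP phase set to zero for simplicity), followed by a unitarity check.
import numpy as np

def pmns(theta12, theta13, theta23):
    """3x3 lepton mixing matrix U = R23 @ R13 @ R12 (delta_CP = 0)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    r12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    r13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    r23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    return r23 @ r13 @ r12

# Illustrative angles, in degrees (assumed values for the sketch only).
u = pmns(np.radians(33.4), np.radians(8.6), np.radians(49.0))
print(np.round(u, 3))
# Unitarity: U @ U^dagger should be the identity matrix.
print(np.allclose(u @ u.T.conj(), np.eye(3)))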
https://en.wikipedia.org/wiki/Pontecorvo%E2%80%93Maki%E2%80%93Nakagawa%E2%80%93Sakata_matrix

Charged-current interaction

The Feynman diagram for beta-minus decay of a neutron into a proton, electron and electron antineutrino, via an intermediate heavy W− boson

In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W+ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron, muon or tau) is the same as the type of lepton in the interaction, for example: μ− + W+ → νμ.

Similarly, a down-type quark (d, with a charge of −1/3) can be converted into an up-type quark (u, with a charge of +2/3) by emitting a W− boson or by absorbing a W+ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W+ boson, or absorb a W− boson, and thereby be converted into a down-type quark, for example: u → d + W+.

The W boson is unstable, so it will rapidly decay, with a very short lifetime, for example: W− → e− + ν̄e.

Decay of a W boson to other products can happen, with varying probabilities.[16]

In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W− boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual W− boson can only carry sufficient energy to produce an electron and an electron-antineutrino – the two lowest-possible masses among its prospective decay products.[17] At the quark level, the process can be represented as: d → u + e− + ν̄e.

Neutral-current interaction

In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z boson, for example: e− → e− + Z0.

Like the W± bosons, the Z0 boson also decays rapidly,[16] for example: Z0 → e− + e+.

Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and / or weak isospin, the neutral-current Z0 interaction can cause any two fermions in the standard model to deflect: either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs.[g]

The quantum number weak charge (QW) serves the same role in the neutral current interaction with the Z0 that electric charge (Q, with no subscript) does in the electromagnetic interaction: it quantifies the vector part of the interaction. Its value is given by:[19]

QW = 2T3 − 4Q sin²θW = 2T3 − Q + (1 − 4 sin²θW) Q

Since the weak mixing angle θW ≈ 29°, the parenthetic expression (1 − 4 sin²θW) is a small number, with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence

QW ≈ 2T3 − Q,

since by convention sin²θW ≈ 1/4, and for all fermions involved in the weak interaction T3 = ±1/2. The weak charge of charged leptons is then close to zero, so these mostly interact with the Z boson through the axial coupling.
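
As a quick numerical illustration of the weak-charge expression above, this Python sketch evaluates QW = 2T3 − 4Q sin²θW for a few Standard Model fermions (the value sin²θW ≈ 0.24 is approximate and used only for illustration):

# Weak (vector) charge Q_W = 2*T3 - 4*Q*sin^2(theta_W) for a few fermions,
# using the expression reconstructed above and an approximate mixing angle.

SIN2_THETA_W = 0.24   # weak mixing angle parameter (approximate, illustrative)

fermions = {
    # name: (weak isospin T3, electric charge Q)
    "neutrino":        (+0.5,  0.0),
    "charged lepton":  (-0.5, -1.0),
    "up-type quark":   (+0.5, +2/3),
    "down-type quark": (-0.5, -1/3),
}

for name, (t3, q) in fermions.items():
    qw = 2 * t3 - 4 * q * SIN2_THETA_W
    print(f"{name:>15}: Q_W ~= {qw:+.2f}")
# Charged leptons come out close to zero, matching the statement that they
# couple to the Z mostly through the axial part of the interaction.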

https://en.wikipedia.org/wiki/Weak_interaction#Charged-current_interaction

In quantum field theory the vacuum expectation value (also called condensate or simply VEV) of an operator is its average or expectation value in the vacuum. The vacuum expectation value of an operator O is usually denoted by ⟨O⟩. One of the most widely used examples of an observable physical effect that results from the vacuum expectation value of an operator is the Casimir effect.

This concept is important for working with correlation functions in quantum field theory. It is also important in spontaneous symmetry breaking. Examples are:

  • The Higgs field has a vacuum expectation value of 246 GeV.[1] This nonzero value underlies the Higgs mechanism of the Standard Model. This value is given by v = 1/√(√2 GF⁰) = 2MW/g ≈ 246 GeV, where MW is the mass of the W boson, GF⁰ the reduced Fermi constant, and g the weak isospin coupling, in natural units (see the numerical sketch after this list). It is also near the limit of the most massive nuclei, at v = 264.3 Da.
  • The chiral condensate in Quantum chromodynamics, about a factor of a thousand smaller than the above, gives a large effective mass to quarks, and distinguishes between phases of quark matter. This underlies the bulk of the mass of most hadrons.
  • The gluon condensate in Quantum chromodynamics may also be partly responsible for masses of hadrons.
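
To make the Higgs-field bullet above concrete, here is a minimal Python sketch evaluating v = 1/√(√2 GF⁰) using the standard value of the reduced Fermi constant (a known constant, not taken from the text):

# Sketch of the numerical relation quoted for the Higgs vacuum expectation
# value, v = (sqrt(2) * G_F)^(-1/2), in natural units (GeV).

G_F = 1.1663787e-5   # reduced Fermi constant, GeV^-2 (standard value)

v = (2 ** 0.5 * G_F) ** -0.5
print(f"v ~= {v:.1f} GeV")   # ~246.2 GeV, matching the value quoted above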

The observed Lorentz invariance of space-time allows only the formation of condensates which are Lorentz scalars and have vanishing charge.[citation needed] Thus fermion condensates must be of the form ⟨ψ̄ψ⟩, where ψ is the fermion field. Similarly a tensor field, Gμν, can only have a scalar expectation value such as ⟨GμνGμν⟩.

In some vacua of string theory, however, non-scalar condensates are found.[which?] If these describe our universe, then Lorentz symmetry violation may be observable.

https://en.wikipedia.org/wiki/Vacuum_expectation_value


Vacuum energy is an underlying background energy that exists in space throughout the entire Universe.[1] The vacuum energy is a special case of zero-point energy that relates to the quantum vacuum.[2]

Unsolved problem in physics:

Why does the zero-point energy of the vacuum not cause a large cosmological constant? What cancels it out?

The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy of free space has been estimated to be 10⁻⁹ joules (10⁻² ergs), or ~5 GeV per cubic meter.[3] However, in quantum electrodynamics, consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant suggest a much larger value of 10¹¹³ joules per cubic meter. This huge discrepancy is known as the cosmological constant problem.
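
The size of the mismatch described above is easy to quantify: dividing the naive quantum-field-theory estimate by the observed upper limit gives the famous gap of roughly 120 orders of magnitude. A tiny Python sketch of that ratio:

# Order-of-magnitude gap between the observed vacuum energy density and the
# naive QFT estimate quoted above (the cosmological constant problem).
import math

OBSERVED = 1e-9    # J per cubic meter (upper limit quoted above)
QFT_NAIVE = 1e113  # J per cubic meter (quoted QED-based estimate)

ratio = QFT_NAIVE / OBSERVED
print(f"discrepancy ~= 10^{math.log10(ratio):.0f}")   # about 122 orders of magnitude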

https://en.wikipedia.org/wiki/Vacuum_energy


Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle.[1] Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but as continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy.[2] These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics[1][3] since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity.[1]

https://en.wikipedia.org/wiki/Zero-point_energy



Structures of hydrazinium N₂H₅⁺ and hydrazinediium N₂H₆²⁺.

Hydrazinium is the cation with the formula N₂H₅⁺. It can be derived from hydrazine by protonation (treatment with a strong acid). Hydrazinium is a weak acid with pKa = 8.1.
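
Because hydrazinium is a weak acid with pKa = 8.1, the balance between N₂H₅⁺ and neutral hydrazine at a given pH follows the Henderson–Hasselbalch relation. A short Python sketch of that fraction (the relation itself is standard acid–base chemistry, not something stated in the text):

# Henderson-Hasselbalch sketch: fraction of hydrazine present as the hydrazinium
# cation (N2H5+) at a given pH, using the pKa = 8.1 quoted above.

PKA = 8.1   # pKa of hydrazinium (N2H5+), from the text

def fraction_protonated(ph: float) -> float:
    """[N2H5+] / ([N2H5+] + [N2H4]) at the given pH."""
    ratio_base_to_acid = 10 ** (ph - PKA)        # [N2H4] / [N2H5+]
    return 1.0 / (1.0 + ratio_base_to_acid)

for ph in (5.0, 7.0, 8.1, 10.0):
    print(f"pH {ph:4.1f}: {fraction_protonated(ph):.1%} hydrazinium")
# At pH 7 most of the hydrazine is still protonated (~93%); at the pKa it is 50%.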

Salts of hydrazinium are common reagents in chemistry and are often used in certain industrial processes.[1] Notable examples are hydrazinium hydrogensulfate, N₂H₆SO₄ or [N₂H₅]⁺[HSO₄]⁻, and hydrazinium azide, N₅H₅ or [N₂H₅]⁺[N₃]⁻. In the common names of such salts, the cation is often called "hydrazine", as in "hydrazine sulfate" for hydrazinium hydrogensulfate.

The terms "hydrazinium" and "hydrazine" may also be used for the doubly protonated cation N
2
H2+
6
, more properly called hydrazinediium or hydrazinium(2+). This cation has an ethane-like structure. Salts of this cation include hydrazinediium sulfate [N
2
H2+
6
]
[SO2−
4
]
[2] and hydrazinediium bis(6-carboxypyridazine-3-carboxylate), [N
2
H2+
6
]
[C
6
H
3
N
2
O
4
]
2
.[3]


https://en.wikipedia.org/wiki/Hydrazinium

https://en.wikipedia.org/wiki/Onium_ion

Tetraphenylphosphonium chloride is the chemical compound with the formula (C6H5)4PCl, abbreviated Ph4PCl or PPh4Cl. Tetraphenylphosphonium and especially tetraphenylarsonium salts were formerly of interest in gravimetric analysis of perchlorate and related oxyanions. This colourless salt is used to generate lipophilic salts from inorganic and organometallic anions. Thus, Ph4P+ is useful as a phase-transfer catalyst, again because it allows inorganic anions to dissolve in organic solvents.
https://en.wikipedia.org/wiki/Tetraphenylphosphonium_chloride

Quaternary ammonium cations, also known as quats, are positively charged polyatomic ions of the structure NR₄⁺, R being an alkyl group or an aryl group.[1] Unlike the ammonium ion (NH₄⁺) and the primary, secondary, or tertiary ammonium cations, the quaternary ammonium cations are permanently charged, independent of the pH of their solution. Quaternary ammonium salts or quaternary ammonium compounds (called quaternary amines in oilfield parlance) are salts of quaternary ammonium cations. Polyquats are a variety of engineered polymer forms which provide multiple quat molecules within a larger molecule.

Quats are used in consumer applications including as antimicrobials (such as detergents and disinfectants), fabric softeners, and hair conditioners. As an antimicrobial, they are able to inactivate enveloped viruses (such as SARS-CoV-2). Quats tend to be gentler on surfaces than bleach-based disinfectants, and are generally fabric-safe.[2]

https://en.wikipedia.org/wiki/Quaternary_ammonium_cation


Polyquaternium is the International Nomenclature for Cosmetic Ingredients designation for several polycationic polymers that are used in the personal care industry. Polyquaternium is a neologism used to emphasize the presence of quaternary ammonium centers in the polymer. INCI has approved at least 40 different polymers under the polyquaternium designation. Different polymers are distinguished by the numerical value that follows the word "polyquaternium". Polyquaternium-5, polyquaternium-7, and polyquaternium-47 are three examples, each a chemically different type of polymer. The numbers are assigned in the order in which they are registered rather than because of their chemical structure.

Polyquaterniums find particular application in conditioners, shampoo, hair mousse, hair spray, hair dye, personal lubricant, and contact lens solutions. Because they are positively charged, they neutralize the negative charges of most shampoos and hair proteins and help hair lie flat. Their positive charges also ionically bond them to hair and skin. Some have antimicrobial properties.

https://en.wikipedia.org/wiki/Polyquaternium


An ionomer (/aɪˈɒnəmər/) (iono- + -mer) is a polymer composed of both electrically neutral repeating units and ionized units covalently bonded to the polymer backbone as pendant group moieties. Usually no more than 15 mole percent are ionized. The ionized units are often carboxylic acid groups.

The classification of a polymer as an ionomer depends on the level of substitution of ionic groups as well as how the ionic groups are incorporated into the polymer structure. For example, polyelectrolytes also have ionic groups covalently bonded to the polymer backbone, but have a much higher ionic group molar substitution level (usually greater than 80%); ionenes are polymers where ionic groups are part of the actual polymer backbone. These two classes of ionic-group-containing polymers have vastly different morphological and physical properties and are therefore not considered ionomers.

Ionomers have unique physical properties including electrical conductivity and viscosity—increase in ionomer solution viscosity with increasing temperatures (see conducting polymer). Ionomers also have unique morphological properties as the non-polar polymer backbone is energetically incompatible with the polar ionic groups. As a result, the ionic groups in most ionomers will undergo microphase separation to form ionic-rich domains.

Commercial applications for ionomers include golf ball covers, semipermeable membranes, sealing tape and thermoplastic elastomers. Common examples of ionomers include polystyrene sulfonate, Nafion and Hycar.

https://en.wikipedia.org/wiki/Ionomer


Thermoplastic elastomers (TPE), sometimes referred to as thermoplastic rubbers, are a class of copolymers or a physical mix of polymers (usually a plastic and a rubber) that consist of materials with both thermoplastic and elastomeric properties. While most elastomers are thermosets, thermoplastics are in contrast relatively easy to use in manufacturing, for example, by injection molding. Thermoplastic elastomers show advantages typical of both rubbery materials and plastic materials. The benefit of using thermoplastic elastomers is the ability to stretch to moderate elongations and return to their near-original shape, creating a longer life and better physical range than other materials.[1] The principal difference between thermoset elastomers and thermoplastic elastomers is the type of cross-linking bond in their structures. In fact, crosslinking is a critical structural factor which imparts high elastic properties.

https://en.wikipedia.org/wiki/Thermoplastic_elastomer


Compression molding is a method of molding in which the molding material, generally preheated, is first placed in an open, heated mold cavity. The mold is closed with a top force or plug member, and pressure is applied to force the material into contact with all mold areas, while heat and pressure are maintained until the molding material has cured; this process is known as compression molding, and in the case of rubber it is also known as vulcanisation.[1] The process employs thermosetting resins in a partially cured stage, either in the form of granules, putty-like masses, or preforms.

https://en.wikipedia.org/wiki/Compression_molding


Flash, also known as flashing, is excess material attached to a molded, forged, or cast product, which must usually be removed. This is typically caused by leakage of the material between the two surfaces of a mold (beginning along the parting line[1]) or between the base material and the mold in the case of overmolding.

https://en.wikipedia.org/wiki/Flash_(manufacturing)

Cryogenic deflashing is a deflashing process that uses cryogenic temperatures to aid in the removal of flash on cast or molded workpieces. These temperatures cause the flash to become stiff or brittle and to break away cleanly. Cryogenic deflashing is the preferred process when removing excess material from oddly shaped, custom molded products.

A wide range of molded materials can be processed by cryogenic deflashing with proven results. Examples of applications that use cryogenic deflashing include:

  • O-rings & gaskets
  • Catheters and other in-vitro medical
  • Insulators and other electric / electronic
  • Valve stems, washers and fittings
  • Tubes and flexible boots
  • Face masks & goggles

Today, many molding operations are using cryogenic deflashing instead of rebuilding or repairing molds on products that are approaching their end-of-life. It is often more prudent and economical to add a few cents of production cost for a part than invest in a new molding tool that can cost hundreds of thousands of dollars and has a limited service life due to declining production forecasts.

In other cases, cryogenic deflashing has proven to be an enabling technology, permitting the economical manufacture of high quality, high precision parts fabricated with cutting edge materials and compounds.

https://en.wikipedia.org/wiki/Cryogenic_deflashing


Liquid crystal polymers (LCPs) are polymers with the property of liquid crystal, usually containing aromatic rings as mesogens. Besides uncrosslinked LCPs, polymeric materials like liquid crystal elastomers (LCEs) and liquid crystal networks (LCNs) can exhibit liquid crystallinity as well. They are both crosslinked LCPs but have different crosslink densities.[1] They are widely used in the digital display market.[2] In addition, LCPs have unique properties like thermal actuation, anisotropic swelling, and soft elasticity. Therefore, they can be good actuators and sensors.[3] One of the most famous and classical applications for LCPs is Kevlar, a strong but light fiber with wide applications including bulletproof vests.

https://en.wikipedia.org/wiki/Liquid-crystal_polymer


An artificial cell, synthetic cell or minimal cell is an engineered particle that mimics one or many functions of a biological cell. Often, artificial cells are biological or polymeric membranes which enclose biologically active materials.[1] As such, liposomes, polymersomes, nanoparticles, microcapsules and a number of other particles can qualify as artificial cells.

https://en.wikipedia.org/wiki/Artificial_cell

Electric field actuation

Electro-Active Polymers (EAPs) are polymers that can be actuated through the application of electric fields. Currently, the most prominent EAPs include piezoelectric polymers, dielectric actuators (DEAs), electrostrictive graft elastomers, liquid crystal elastomers (LCE) and ferroelectric polymers. While these EAPs can be made to bend, their low capacities for torque motion currently limit their usefulness as artificial muscles. Moreover, without an accepted standard material for creating EAP devices, commercialization has remained impractical. However, significant progress has been made in EAP technology since the 1990s.[7]

Ion-based actuation

Ionic EAPs are polymers that can be actuated through the diffusion of ions in an electrolyte solution (in addition to the application of electric fields). Current examples of ionic electroactive polymers include polyelectrode gels, ionomeric polymer metallic composites (IPMC), conductive polymers and electrorheological fluids (ERF). In 2011, it was demonstrated that twisted carbon nanotubes could also be actuated by applying an electric field.[8]

https://en.wikipedia.org/wiki/Artificial_muscle#Electric_field_actuation


An electroactive polymer (EAP) is a polymer that exhibits a change in size or shape when stimulated by an electric field. The most common applications of this type of material are in actuators[1] and sensors.[2][3] A typical characteristic property of an EAP is that they will undergo a large amount of deformation while sustaining large forces.

The majority of historic actuators are made of ceramic piezoelectric materials. While these materials are able to withstand large forces, they commonly will only deform a fraction of a percent. In the late 1990s, it was demonstrated that some EAPs can exhibit up to 380% strain, which is much more than any ceramic actuator.[1] One of the most common applications for EAPs is in the field of robotics in the development of artificial muscles; thus, an electroactive polymer is often referred to as an artificial muscle.

https://en.wikipedia.org/wiki/Electroactive_polymer


Piezoelectricity is the electric charge that accumulates in certain solid materials—such as crystals, certain ceramics, and biological matter such as bone, DNA, and various proteins—in response to applied mechanical stress.[2] The word piezoelectricity means electricity resulting from pressure and latent heat. It is derived from the Greek word πιέζειν (piezein), which means to squeeze or press, and ἤλεκτρον (ēlektron), which means amber, an ancient source of electric charge.[3][4]

https://en.wikipedia.org/wiki/Piezoelectricity

A clock or a timepiece[1] is a device used to measure and indicate time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units: the day, the lunar month, the year and the galactic year. Devices operating on several physical processes have been used over the millennia.

https://en.wikipedia.org/wiki/Clock

A crystal oscillator is an electronic oscillator that makes use of a piezoelectric crystal as a frequency-selective element, exploiting the inverse piezoelectric effect. (The term crystal oscillator refers to the circuit, not the resonator.)[1][2] This frequency is often used to keep track of time, as in quartz wristwatches, to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for radio transmitters and receivers. The most common type of piezoelectric resonator used is a quartz crystal, so oscillator circuits incorporating them became known as crystal oscillators.[3] However, other piezoelectric materials including polycrystalline ceramics are used in similar circuits.

https://en.wikipedia.org/wiki/Crystal_oscillator

A pressure sensor is a device for pressure measurement of gases or liquids. Pressure is an expression of the force required to stop a fluid from expanding, and is usually stated in terms of force per unit area. A pressure sensor usually acts as a transducer; it generates a signal as a function of the pressure imposed. For the purposes of this article, such a signal is electrical.

https://en.wikipedia.org/wiki/Pressure_sensor


A hermetic seal is any type of sealing that makes a given object airtight (preventing the passage of air, oxygen, or other gases). The term originally applied to airtight glass containers, but as technology advanced it applied to a larger category of materials, including rubber and plastics. Hermetic seals are essential to the correct and safe functionality of many electronic and healthcare products. Used technically, it is stated in conjunction with a specific test method and conditions of use.

https://en.wikipedia.org/wiki/Hermetic_seal

A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another.[1]

Transducers are often employed at the boundaries of automation, measurement, and control systems, where electrical signals are converted to and from other physical quantities (energy, force, torque, light, motion, position, etc.). The process of converting one form of energy to another is known as transduction.[2]

https://en.wikipedia.org/wiki/Transducer


A transductor is a type of magnetic amplifier used in power systems for compensating reactive power. It consists of an iron-cored inductor with two windings: a main winding through which an alternating current flows from the power system, and a secondary control winding which carries a small direct current. By varying the direct current, the iron core of the transductor can be arranged to saturate at different levels and thus vary the amount of reactive power absorbed.

https://en.wikipedia.org/wiki/Transductor


A Hall effect sensor (or simply Hall sensor) is a type of sensor which detects the presence and magnitude of a magnetic field using the Hall effect. The output voltage of a Hall sensor is directly proportional to the strength of the field. It is named for the American physicist Edwin Hall.[1]
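
The proportionality between field strength and output voltage follows from the standard Hall relation V_H = I*B / (n*q*t) for a thin conductor. A minimal Python sketch with illustrative (assumed) material and geometry values:

# Illustrative Hall-effect calculation: V_H = I*B / (n*q*t) for a thin conductor,
# showing that the output voltage scales linearly with the applied field.
# Material and geometry values below are assumptions for illustration only.

Q_E = 1.602176634e-19   # elementary charge, C
N = 8.5e28              # carrier density, m^-3 (roughly copper-like, illustrative)
THICKNESS = 1e-4        # conductor thickness, m
CURRENT = 0.1           # A

for b_field in (0.1, 0.5, 1.0):   # tesla
    v_hall = CURRENT * b_field / (N * Q_E * THICKNESS)
    print(f"B = {b_field:.1f} T -> V_H ~= {v_hall:.2e} V")
# Doubling B doubles V_H, which is the proportionality the sensor exploits;
# practical sensors use semiconductors (much lower n) to get larger voltages.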

https://en.wikipedia.org/wiki/Hall_effect_sensor


In a spark ignition internal combustion engine, ignition timing refers to the timing, relative to the current piston position and crankshaft angle, of the release of a spark in the combustion chamber near the end of the compression stroke.

https://en.wikipedia.org/wiki/Ignition_timing

Compression stroke

The compression stroke is the second of the four stages in a four-stroke engine.

In this stage, the air-fuel mixture (or air alone, in the case of a direct injection engine) is compressed to the top of the cylinder by the piston. This is the result of the piston moving upwards, reducing the volume of the chamber. Towards the end of this phase, the mixture is ignited, by a spark plug for petrol engines or by self-ignition for diesel engines.

Exhaust stroke

The exhaust stroke is the final phase in a four-stroke engine. In this phase, the piston moves upwards, squeezing out the gases that were created during the combustion stroke. The gases exit the cylinder through an exhaust valve at the top of the cylinder. At the end of this phase, the exhaust valve closes and the intake valve opens to allow a fresh air-fuel mixture into the cylinder so the process can repeat itself.

https://en.wikipedia.org/wiki/Stroke_(engine)#Compression_stroke


An oscillating cylinder steam engine (also known as a wobbler in the US)[citation needed] is a simple steam-engine design (proposed by William Murdoch at the end of 18th century) that requires no valve gear. Instead the cylinder rocks, or oscillates, as the crank moves the piston, pivoting in the mounting trunnion so that ports in the cylinder line up with ports in a fixed port face alternately to direct steam into or out of the cylinder.

Oscillating cylinder steam engines are now mainly used in toys and models but, in the past, have been used in full-size working engines, mainly on ships and small stationary engines. They have the advantage of simplicity and, therefore, low manufacturing costs. They also tend to be more compact than other types of cylinder of the same capacity, which makes them advantageous for use in ships.

https://en.wikipedia.org/wiki/Oscillating_cylinder_steam_engine


Stationary steam engines are fixed steam engines used for pumping or driving mills and factories, and for power generation. They are distinct from locomotive engines used on railways, traction engines for heavy steam haulage on roads, steam cars (and other motor vehicles), agricultural engines used for ploughing or threshing, marine engines, and the steam turbines used as the mechanism of power generation for most nuclear power plants.

They were introduced during the 18th century and widely made for the whole of the 19th century and most of the first half of the 20th century, only declining as electricity supply and the internal combustion engine became more widespread.

https://en.wikipedia.org/wiki/Stationary_steam_engine


A beam engine is a type of steam engine where a pivoted overhead beam is used to apply the force from a vertical piston to a vertical connecting rod. This configuration, with the engine directly driving a pump, was first used by Thomas Newcomen around 1705 to remove water from mines in Cornwall. The efficiency of the engines was improved by engineers including James Watt, who added a separate condenser; Jonathan Hornblower and Arthur Woolf, who compounded the cylinders; and William McNaught, who devised a method of compounding an existing engine. Beam engines were first used to pump water out of mines or into canals but could be used to pump water to supplement the flow for a waterwheel powering a mill.

The cast-iron beam of the 1812 Boulton & Watt engine at Crofton Pumping Station – the oldest working, in situ example in the world
Back of Museum De Cruquius near Amsterdam, an old pumping station used to pump dry the Haarlemmermeer. It shows the beams of the pumping engine and the 9 meter drop in water level from the Spaarne river. The beam engine is the largest ever constructed, and was in use till 1933.
The remains of a water-powered beam engine at Wanlockhead

The rotative beam engine is a later design of beam engine where the connecting rod drives a flywheel by means of a crank (or, historically, by means of a sun and planet gear). These beam engines could be used to directly power the line-shafting in a mill. They also could be used to power steam ships.

Watt engine: showing entry of steam and water

https://en.wikipedia.org/wiki/Beam_engine


A line shaft is a power-driven rotating shaft for power transmission that was used extensively from the Industrial Revolution until the early 20th century. Prior to the widespread use of electric motors small enough to be connected directly to each piece of machinery, line shafting was used to distribute power from a large central power source to machinery throughout a workshop or an industrial complex. The central power source could be a water wheel, turbine, windmill, animal power or a steam engine. Power was distributed from the shaft to the machinery by a system of belts, pulleys and gears known as millwork.[1]

https://en.wikipedia.org/wiki/Line_shaft

A plain bearing, or more commonly sliding contact bearing and slide bearing (in railroading sometimes called a solid bearing, journal bearing, or friction bearing[1]), is the simplest type of bearing, comprising just a bearing surface and no rolling elements. Therefore, the journal (i.e., the part of the shaft in contact with the bearing) slides over the bearing surface. The simplest example of a plain bearing is a shaft rotating in a hole. A simple linear bearing can be a pair of flat surfaces designed to allow motion; e.g., a drawer and the slides it rests on[2] or the ways on the bed of a lathe.

Plain bearings, in general, are the least expensive type of bearing. They are also compact and lightweight, and they have a high load-carrying capacity.[3]

https://en.wikipedia.org/wiki/Plain_bearing

A bearing surface in mechanical engineering is the area of contact between two objects. It usually is used in reference to bolted joints and bearings, but can be applied to a wide variety of engineering applications.

https://en.wikipedia.org/wiki/Bearing_surface

Helical Compression Spring Technology

The notes on helical compression spring terminology refer to the diagram at the bottom of the archived page.

https://web.archive.org/web/20101101174850/http://www.masterspring.com/technical_resources/helical_compression_spring_terminology/default.html

A diamond anvil cell (DAC) is a high-pressure device used in geology, engineering, and materials science experiments. It enables the compression of a small (sub-millimeter-sized) piece of material to extreme pressures, typically up to around 100–200 gigapascals, although it is possible to achieve pressures up to 770 gigapascals (7,700,000 bars or 7.7 million atmospheres).[1][2]

The device has been used to recreate the pressure existing deep inside planets to synthesise materials and phases not observed under normal ambient conditions. Notable examples include the non-molecular ice X,[3] polymeric nitrogen[4] and metallic phases of xenon,[5] lonsdaleite, and potentially hydrogen.[6]

A DAC consists of two opposing diamonds with a sample compressed between the polished culets (tips). Pressure may be monitored using a reference material whose behavior under pressure is known. Common pressure standards include ruby fluorescence,[7] and various structurally simple metals, such as copper or platinum.[8] The uniaxial pressure supplied by the DAC may be transformed into uniform hydrostatic pressure using a pressure-transmitting medium, such as argon, xenon, hydrogen, helium, paraffin oil or a mixture of methanol and ethanol.[9] The pressure-transmitting medium is enclosed by a gasket and the two diamond anvils. The sample can be viewed through the diamonds and illuminated by X-rays and visible light. In this way, X-ray diffraction and fluorescence, optical absorption and photoluminescence, Mössbauer, Raman and Brillouin scattering, positron annihilation and other signals can be measured from materials under high pressure. Magnetic and microwave fields can be applied externally to the cell allowing nuclear magnetic resonance, electron paramagnetic resonance and other magnetic measurements.[10] Attaching electrodes to the sample allows electrical and magnetoelectrical measurements as well as heating up the sample to a few thousand degrees. Much higher temperatures (up to 7000 K)[11] can be achieved with laser-induced heating,[12] and cooling down to millikelvins has been demonstrated.[9]
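
The extreme pressures quoted above follow directly from pressure = force / area over the tiny culet. A back-of-the-envelope Python sketch with illustrative (assumed) numbers:

# Why a diamond anvil cell reaches such extreme pressures: pressure = force / area,
# and the culet area is tiny. Numbers below are illustrative assumptions only.
import math

FORCE_N = 2000.0           # applied force, N (illustrative)
CULET_DIAMETER_M = 100e-6  # 100-micrometer culet (illustrative)

area = math.pi * (CULET_DIAMETER_M / 2) ** 2
pressure_pa = FORCE_N / area
print(f"pressure ~= {pressure_pa / 1e9:.0f} GPa")
# A couple of kilonewtons over a ~100 um culet already gives pressures of
# order hundreds of GPa, comparable to the range quoted above.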

https://en.wikipedia.org/wiki/Diamond_anvil_cell

Magnetoresistance is the tendency of a material (often ferromagnetic) to change the value of its electrical resistance in an externally-applied magnetic field. There are a variety of effects that can be called magnetoresistance. Some occur in bulk non-magnetic metals and semiconductors, such as geometrical magnetoresistance, Shubnikov–de Haas oscillations, or the common positive magnetoresistance in metals.[1] Other effects occur in magnetic metals, such as negative magnetoresistance in ferromagnets[2] or anisotropic magnetoresistance (AMR). Finally, in multicomponent or multilayer systems (e.g. magnetic tunnel junctions), giant magnetoresistance (GMR), tunnel magnetoresistance (TMR), colossal magnetoresistance (CMR), and extraordinary magnetoresistance (EMR) can be observed.

https://en.wikipedia.org/wiki/Magnetoresistance


Filling of the electronic states in various types of materials at equilibrium. Here, height is energy while width is the density of available states for a certain energy in the material listed. The shade follows the Fermi–Dirac distribution (black: all states filled, white: no state filled). In metals and semimetals the Fermi level EF lies inside at least one band.
In insulators and semiconductors the Fermi level is inside a band gap; however, in semiconductors the bands are near enough to the Fermi level to be thermally populated with electrons or holes.

A semimetal is a material with a very small overlap between the bottom of the conduction band and the top of the valence band. According to electronic band theory, solids can be classified as insulators, semiconductors, semimetals, or metals. In insulators and semiconductors the filled valence band is separated from an empty conduction band by a band gap. For insulators, the magnitude of the band gap is larger (e.g., > 4 eV) than that of a semiconductor (e.g., < 4 eV). Because of the slight overlap between the conduction and valence bands, semimetals have no band gap and a negligible density of states at the Fermi level. A metal, by contrast, has an appreciable density of states at the Fermi level because the conduction band is partially filled.[1]


https://en.wikipedia.org/wiki/Semimetal


In solid state physics and condensed matter physics, the density of states (DOS) of a system describes the proportion of states that are to be occupied by the system at each energy. The density of states is defined as D(E) = N(E)/V, where N(E)δE is the number of states in the system of volume V whose energies lie in the range from E to E + δE. It is mathematically represented as a distribution by a probability density function, and it is generally an average over the space and time domains of the various states occupied by the system. The density of states is directly related to the dispersion relations of the properties of the system. High DOS at a specific energy level means that many states are available for occupation.
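
As a concrete example of a density of states, the free-electron gas in three dimensions has g(E) = (1/2π²)(2m/ħ²)^(3/2)·√E per unit volume. A short Python sketch evaluating this standard textbook formula (not a formula stated in the text) at a few energies:

# Free-electron-gas density of states, g(E) = (1/(2*pi^2)) * (2m/hbar^2)^1.5 * sqrt(E),
# as a concrete example of a DOS that grows with energy. Standard constants only.
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def dos_free_electron(energy_ev: float) -> float:
    """States per joule per cubic meter at the given energy (measured from the band bottom)."""
    e_j = energy_ev * EV
    return (1.0 / (2 * math.pi ** 2)) * (2 * M_E / HBAR ** 2) ** 1.5 * math.sqrt(e_j)

for e in (0.1, 1.0, 5.0):   # eV
    print(f"E = {e:3.1f} eV: g(E) ~= {dos_free_electron(e) * EV:.2e} states per eV per m^3")
# The sqrt(E) growth means more states are available at higher energies,
# which is what "high DOS at a specific energy" quantifies.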

https://en.wikipedia.org/wiki/Density_of_states

A Luttinger liquid, or Tomonaga–Luttinger liquid, is a theoretical model describing interacting electrons (or other fermions) in a one-dimensional conductor (e.g. quantum wires such as carbon nanotubes).[1] Such a model is necessary as the commonly used Fermi liquid model breaks down for one dimension.

The Tomonaga–Luttinger liquid was first proposed by Tomonaga in 1950. The model showed that under certain constraints, second-order interactions between electrons could be modelled as bosonic interactions. In 1963, J.M. Luttinger reformulated the theory in terms of Bloch sound waves and showed that the constraints proposed by Tomonaga were unnecessary in order to treat the second-order perturbations as bosons. But his solution of the model was incorrect; the correct solution was given by Daniel C. Mattis and Elliott H. Lieb in 1965.[2]

https://en.wikipedia.org/wiki/Luttinger_liquid


A carbon nanotube (CNT) is a tube made of carbon with diameters typically measured in nanometers.

Single-wall carbon nanotubes (SWCNTs) are one of the allotropes of carbon, intermediate between fullerene cages and flat graphene, with diameters in the range of a nanometer. Although not made this way, single-wall carbon nanotubes can be idealized as cutouts from a two-dimensional hexagonal lattice of carbon atoms rolled up along one of the Bravais lattice vectors of the hexagonal lattice to form a hollow cylinder. In this construction, periodic boundary conditions are imposed over the length of this roll-up vector to yield a helical lattice of seamlessly bonded carbon atoms on the cylinder surface.[1]

https://en.wikipedia.org/wiki/Carbon_nanotube


Carbon fibers or carbon fibres (alternatively CF, graphite fiber or graphite fibre) are fibers about 5 to 10 micrometers (0.00020–0.00039 in) in diameter and composed mostly of carbon atoms.[1] Carbon fibers have several advantages: high stiffness, high tensile strength, high strength to weight ratio, high chemical resistance, high-temperature tolerance, and low thermal expansion.[2] These properties have made carbon fiber very popular in aerospace, civil engineering, military, motorsports, and other competition sports. However, they are relatively expensive compared to similar fibers, such as glass fiber, basalt fibers, or plastic fibers.[3]

https://en.wikipedia.org/wiki/Carbon_fibers


Glass fiber (or glass fibre) is a material consisting of numerous extremely fine fibers of glass.

https://en.wikipedia.org/wiki/Glass_fiber


Glass wool is an insulating material made from fibres of glass arranged using a binder into a texture similar to wool. The process traps many small pockets of air between the glass, and these small air pockets result in high thermal insulation properties. Glass wool is produced in rolls or in slabs, with different thermal and mechanical properties. It may also be produced as a material that can be sprayed or applied in place, on the surface to be insulated. The modern method for producing glass wool was invented by Games Slayter while he was working at the Owens-Illinois Glass Co. (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933.[1]

https://en.wikipedia.org/wiki/Glass_wool


In polymer chemistry and materials science, a resin is a solid or highly viscous substance of plant or synthetic origin that is typically convertible into polymers.[1] Resins are usually mixtures of organic compounds. This article focuses on naturally occurring resins.

Plants secrete resins for their protective benefits in response to injury. The resin protects the plant from insects and pathogens.[2] Resins confound a wide range of herbivores, insects, and pathogens, while the volatile phenolic compounds may attract benefactors such as parasitoids or predators of the herbivores that attack the plant.[3]

https://en.wikipedia.org/wiki/Resin



Calendering of textiles is a finishing process used to smooth, coat, or thin a material. With textiles, fabric is passed between calender rollers at high temperatures and pressures. Calendering is used on fabrics such as moire to produce its watered effect and also on cambric and some types of sateens.

https://en.wikipedia.org/wiki/Calendering_(textiles)

Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen.[1] Coal is formed when dead plant matter decays into peat and is converted into coal by the heat and pressure of deep burial over millions of years.[2] Vast deposits of coal originate in former wetlands—called coal forests—that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times.[3][4] However, many significant coal deposits are younger than this and originate from the Mesozoic and Cenozoic eras.

Coal is primarily used as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020 coal supplied about a quarter of the world's primary energy and over a third of its electricity.[5] Some iron and steel making and other industrial processes burn coal.

Map of coal production

The extraction and use of coal causes premature deaths and illness.[6] The use of coal damages the environment, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. 14 billion tonnes of carbon dioxide was emitted by burning coal in 2020,[7] which is 40% of the total fossil fuel emissions[8] and over 25% of total global greenhouse gas emissions.[9] As part of the worldwide energy transition many countries have reduced or eliminated their use of coal power.[10][11] The UN Secretary General asked governments to stop building new coal plants by 2020.[12] Global coal use peaked in 2013.[13] To meet the Paris Agreement target of keeping global warming to below 2 °C (3.6 °F) coal use needs to halve from 2020 to 2030,[14] and phasing down coal was agreed in the Glasgow Climate Pact.

The largest consumer and importer of coal in 2020 was China. China accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia.[15]


Coal (sedimentary rock). Primary composition: carbon. Related types: lignite (brown coal), anthracite (hard coal).

https://en.wikipedia.org/wiki/Coal


Anthracite (Ibbenbüren, Germany)


Anthracite coal breaker and power house buildings, New Mexico, circa 1935


Mining

China today mines by far the largest share of global anthracite production, accounting for more than three-quarters of global output.[7] Most Chinese production is of standard-grade anthracite, which is used in power generation.[citation needed] Increased demand in China has made that country into a net importer of the fuel, mostly from Vietnam, another major producer of anthracite for power generation, although increasing domestic consumption in Vietnam means that exports may be scaled back.[20]

Current U.S. anthracite production averages around five million tons per year. Of that, about 1.8 million tons were mined in the state of Pennsylvania.[21] Mining of anthracite coal continues to this day in eastern Pennsylvania, and contributes up to 1% to the gross state product. More than 2,000 people were employed in the mining of anthracite coal in 1995. Most of the mining as of that date involved reclaiming coal from slag heaps (waste piles from past coal mining) at nearby closed mines. Some underground anthracite coal is also being mined.

Countries producing HG and UHG anthracite include Russia and South Africa. HG and UHG anthracite are used as a coke or coal substitute in various metallurgical coal applications (sintering, PCI, direct BF charge, pelletizing). It plays an important role in cost reduction in the steel making process and is also used in production of ferroalloys, silicomanganese, calcium carbide and silicon carbide. South Africa exports lower-quality, higher-ash anthracite to Brazil to be used in steel-making.[citation needed]


https://en.wikipedia.org/wiki/Anthracite

The Carboniferous (/ˌkɑːr.bəˈnɪf.ər.əs/ KAHR-bə-NIF-ər-əs)[6] is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period 358.9 million years ago (Mya), to the beginning of the Permian Period, 298.9 million years ago. The name Carboniferous means "coal-bearing", from the Latin carbō ("coal") and ferō ("bear, carry"), and refers to the many coal beds formed globally during that time.[7]

https://en.wikipedia.org/wiki/Carboniferous

Mining is the extraction of valuable minerals or other geological materials from the Earth, usually from an ore body, lode, vein, seam, reef, or placer deposit. Exploitation of these deposits for raw material is based on the economic viability of investing in the equipment, labor, and energy required to extract, refine and transport the materials found at the mine to manufacturers who can use the material.

Ores recovered by mining include metals, coal, oil shale, gemstones, limestone, chalk, dimension stone, rock salt, potash, gravel, and clay. Mining is required to obtain most materials that cannot be grown through agricultural processes, or feasibly created artificially in a laboratory or factory. Mining in a wider sense includes extraction of any non-renewable resource such as petroleum, natural gas, or even water. Modern mining processes involve prospecting for ore bodies, analysis of the profit potential of a proposed mine, extraction of the desired materials, and final reclamation of the land after the mine is closed.[1]

https://en.wikipedia.org/wiki/Mining


De re metallica (Latin for On the Nature of Metals [Minerals]) is a book in Latin cataloguing the state of the art of mining, refining, and smelting metals, published posthumously in 1556, a year after the author's death, because of a delay in preparing the woodcuts for the text. The author was Georg Bauer, whose pen name was the Latinized Georgius Agricola ("Bauer" and "Agricola" being respectively the German and Latin words for "farmer"). The book remained the authoritative text on mining for 180 years after its publication. It was also an important chemistry text for the period and is significant in the history of chemistry.[1]

Title page of the 1561 edition
Author: Georgius Agricola
Translators: Herbert Hoover and Lou Henry Hoover
Publication date: 1556 (published in English in 1912)
ISBN 0-486-60006-8; OCLC 34181557

https://en.wikipedia.org/wiki/De_re_metallica


A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies to store data persistently, typically using flash memory, and functioning as secondary storage in the hierarchy of computer storage. It is also sometimes called a semiconductor storage device, a solid-state device or a solid-state disk,[1] even though SSDs lack the physical spinning disks and movable read–write heads used in hard disk drives (HDDs) and floppy disks.[2]

https://en.wikipedia.org/wiki/Solid-state_drive

A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.

In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD,[1] are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor–oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time, and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors.
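The readout idea described above, charge packets handed from one capacitor to the next toward an output node, can be illustrated with a toy simulation. This is a schematic sketch only, not any real sensor interface, and the transfer-efficiency value is an assumed placeholder:

    # Toy model of CCD readout: charge packets are shifted one stage per clock
    # toward an output node. The transfer efficiency below is illustrative only.
    def read_out_ccd(pixels, transfer_efficiency=0.99999):
        """Shift charge packets out of a 1-D CCD register, one per clock cycle."""
        register = list(pixels)          # charge collected in each MOS capacitor
        output = []
        while register:
            # The packet nearest the output node is read out first.
            packet = register.pop(0)
            # Every remaining packet has now been transferred one more stage,
            # losing a tiny fraction of its charge at each transfer.
            register = [q * transfer_efficiency for q in register]
            output.append(packet)
        return output

    print(read_out_ccd([100.0, 200.0, 50.0]))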

https://en.wikipedia.org/wiki/Charge-coupled_device

The metal–oxide–semiconductor field-effect transistor (MOSFETMOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS),[1] is a type of insulated-gate field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the gate terminal determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals.


A key advantage of a MOSFET is that it requires almost no input current to control the load current, compared with bipolar junction transistors (BJTs). In an enhancement mode MOSFET, voltage applied to the gate terminal can increase the conductivity from the "normally off" state. In a depletion mode MOSFET, voltage applied at the gate can reduce the conductivity from the "normally on" state.[5] MOSFETs are also highly scalable and can easily be miniaturized to smaller dimensions.
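As a hedged illustration of how the gate voltage controls the channel, the standard long-channel "square-law" textbook model of an enhancement-mode n-channel MOSFET can be sketched as follows; the threshold voltage and transconductance parameter are invented for illustration, not taken from any datasheet:

    # Textbook long-channel ("square-law") model of an enhancement-mode n-MOSFET.
    # Parameter values are invented for illustration.
    def drain_current(v_gs, v_ds, v_th=0.7, k_n=2e-3):
        """Drain current in amperes; k_n = mu_n * C_ox * W / L (A/V^2)."""
        v_ov = v_gs - v_th                      # overdrive voltage
        if v_ov <= 0:
            return 0.0                          # cut-off: device is "normally off"
        if v_ds < v_ov:                         # triode (linear) region
            return k_n * (v_ov * v_ds - v_ds**2 / 2)
        return 0.5 * k_n * v_ov**2              # saturation region

    for v_gs in (0.5, 1.0, 1.5, 2.0):
        print(f"Vgs = {v_gs:.1f} V -> Id = {drain_current(v_gs, v_ds=3.0)*1e3:.3f} mA")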

https://en.wikipedia.org/wiki/MOSFET

A metal gate, in the context of a lateral metal–oxide–semiconductor (MOS) stack, is the gate electrode separated by an oxide from the transistor's channel, with the gate material made from a metal. In most MOS transistors since about the mid-1970s, the "M" for metal has been replaced by a non-metal gate material.

Aluminum gate

The first MOSFET (metal–oxide–semiconductor field-effect transistor) was made by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, and demonstrated in 1960.[1] They used silicon as channel material and a non-self-aligned aluminum gate.[2] Aluminum gate metal (typically deposited in an evaporation vacuum chamber onto the wafer surface) was common through the early 1970s.

Polysilicon

By the late 1970s, the industry had moved away from aluminum as the gate material in the metal–oxide–semiconductor stack due to fabrication complications and performance issues.[citation needed] A material called polysilicon (polycrystalline silicon, highly doped with donors or acceptors to reduce its electrical resistance) was used to replace aluminum.

Polysilicon can be deposited easily via chemical vapor deposition (CVD) and is tolerant of subsequent manufacturing steps that involve extremely high temperatures (in excess of 900–1000 °C), whereas metal is not. In particular, metal (most commonly aluminum, a group III, p-type dopant) has a tendency to disperse into (alloy with) silicon during these thermal annealing steps.[citation needed] When used on a silicon wafer with a ⟨111⟩ crystal orientation, excessive alloying of aluminum (from extended high-temperature processing steps) with the underlying silicon can create a short circuit between the diffused FET source or drain areas under the aluminum and, across the metallurgical junction, the underlying substrate, causing irreparable circuit failures. These shorts are created by pyramid-shaped spikes of silicon–aluminum alloy pointing vertically "down" into the silicon wafer. The practical high-temperature limit for annealing aluminum on silicon is on the order of 450 °C. Polysilicon is also attractive because it makes self-aligned gates easy to manufacture: the implantation or diffusion of the source and drain dopant impurities is carried out with the gate itself acting as a mask, so that the source and drain line up automatically with the gate edges (see the self-aligned gate excerpt below).

https://en.wikipedia.org/wiki/Metal_gate

In semiconductor electronics fabrication technology, a self-aligned gate is a transistor manufacturing feature whereby the gate electrode of a MOSFET (metal–oxide–semiconductor field-effect transistor) is used as a mask for the doping of the source and drain regions. This technique ensures that the gate is naturally and precisely aligned to the edges of the source and drain.

The use of self-aligned gates in MOS transistors is one of the key innovations that led to the large increase in computing power in the 1970s. Self-aligned gates are still used in most modern integrated circuit processes.

https://en.wikipedia.org/wiki/Self-aligned_gate

Carbon dioxide (chemical formula CO2) is a chemical compound occurring as an acidic colorless gas with a density about 53% higher than that of dry air. Carbon dioxide molecules consist of a carbon atom covalently double bonded to two oxygen atoms. It occurs naturally in Earth's atmosphere as a trace gas. The current concentration is about 0.04% (412 ppm) by volume, having risen from pre-industrial levels of 280 ppm.[9][10] Natural sources include volcanoes, forest fires, hot springs and geysers, and it is freed from carbonate rocks by dissolution in water and acids. Because carbon dioxide is soluble in water, it occurs naturally in groundwater, rivers and lakes, ice caps, glaciers and seawater. It is present in deposits of petroleum and natural gas. Carbon dioxide has a sharp and acidic odor and generates the taste of soda water in the mouth.[11] However, at normally encountered concentrations it is odorless.[1]
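Two of the figures above are easy to verify with a line or two of arithmetic: the ppm-to-percent conversion, and the "about 53% higher density than dry air" claim, which follows (under an ideal-gas assumption) from the molar masses of CO2 and dry air:

    # Quick checks of two figures quoted above (ideal-gas assumption for density).
    ppm = 412
    print(f"{ppm} ppm = {ppm / 1e6 * 100:.4f}% by volume")   # ~0.0412%

    M_co2 = 44.01   # g/mol
    M_air = 28.96   # g/mol, mean molar mass of dry air
    excess = (M_co2 / M_air - 1) * 100
    print(f"CO2 is ~{excess:.0f}% denser than dry air at the same T and p")  # ~52%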

https://en.wikipedia.org/wiki/Carbon_dioxide#In_Earth's_atmosphere

In chemistry, triiodide usually refers to the triiodide ion, I3−. This anion, one of the polyhalogen ions, is composed of three iodine atoms. It is formed by combining aqueous solutions of iodide salts and iodine. Some salts of the anion have been isolated, including thallium(I) triiodide (Tl+[I3]−) and ammonium triiodide ([NH4]+[I3]−). Triiodide is observed to be a red colour in solution.[1]

https://en.wikipedia.org/wiki/Triiodide

The trihydrogen cation or protonated molecular hydrogen is a cation (positive ion) with formula H3+, consisting of three hydrogen nuclei (protons) sharing two electrons.

The trihydrogen cation is one of the most abundant ions in the universe. It is stable in the interstellar medium (ISM) due to the low temperature and low density of interstellar space. The role that H3+ plays in the gas-phase chemistry of the ISM is unparalleled by any other molecular ion.

The trihydrogen cation is the simplest triatomic molecule, because its two electrons are the only valence electrons in the system. It is also the simplest example of a three-center two-electron bond system.

https://en.wikipedia.org/wiki/Trihydrogen_cation

Phosphorous acid (also called phosphonic acid) is the compound described by the formula H3PO3. This acid is diprotic (it readily ionizes two protons), not triprotic as might be suggested by the formula. Phosphorous acid is an intermediate in the preparation of other phosphorus compounds. Organic derivatives of phosphorous acid, compounds with the formula RPO3H2, are called phosphonic acids.

https://en.wikipedia.org/wiki/Phosphorous_acid

Phosphonates or phosphonic acids are organophosphorus compounds containing C−PO(OH)2 or C−PO(OR)2 groups (where R = alkyl or aryl). Phosphonic acids, typically handled as salts, are generally nonvolatile solids that are poorly soluble in organic solvents, but soluble in water and common alcohols. Many commercially important compounds are phosphonates, including glyphosate (the active molecule of the herbicide "Roundup"), and ethephon, a widely used plant growth regulator. Bisphosphonates are popular drugs for treatment of osteoporosis.[1]

Clodronic acid is a bisphosphonate used as a drug to treat osteoporosis.

In biology and medicinal chemistry, phosphonate groups are used as stable bioisosteres for phosphate, such as in the antiviral nucleotide analog Tenofovir, one of the cornerstones of anti-HIV therapy. There is also an indication that phosphonate derivatives are "promising ligands for nuclear medicine."[2]




https://en.wikipedia.org/wiki/Phosphonate


Potassium-40 (40K) is a radioactive isotope of potassium which has a long half-life of 1.251×10⁹ years. It makes up 0.012% (120 ppm) of the total amount of potassium found in nature.

Potassium-40 is a rare example of an isotope that undergoes both types of beta decay. In about 89.28% of events, it decays to calcium-40 (40Ca) with emission of a beta particle (β−, an electron) with a maximum energy of 1.31 MeV and an antineutrino. In about 10.72% of events, it decays to argon-40 (40Ar) by electron capture (EC), with the emission of a neutrino and then a 1.460 MeV gamma ray.[1] The radioactive decay of this particular isotope explains the large abundance of argon (nearly 1%) in the Earth's atmosphere, as well as the prevalence of 40Ar over other isotopes. Very rarely (0.001% of events), it decays to 40Ar by emitting a positron (β+) and a neutrino.[2]
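The half-life and branching ratios quoted above plug directly into the exponential decay law, which is the basis of potassium–argon dating. The sketch below is illustrative only and ignores argon loss and other real-world corrections:

    import math

    # Exponential decay of 40K using the half-life and branching ratio quoted above.
    HALF_LIFE_YR = 1.251e9
    LAMBDA = math.log(2) / HALF_LIFE_YR      # decay constant per year
    BRANCH_TO_AR40 = 0.1072                  # electron-capture branch to 40Ar

    def k40_remaining(t_years, n0=1.0):
        """Fraction of the original 40K nuclei still present after t_years."""
        return n0 * math.exp(-LAMBDA * t_years)

    def ar40_produced(t_years, n0=1.0):
        """40Ar atoms produced per original 40K atom after t_years."""
        return BRANCH_TO_AR40 * (n0 - k40_remaining(t_years, n0))

    t = 4.5e9  # roughly the age of the Earth, in years
    print(f"40K remaining after {t:.1e} yr: {k40_remaining(t):.3f}")
    print(f"40Ar produced per initial 40K:  {ar40_produced(t):.3f}")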

https://en.wikipedia.org/wiki/Potassium-40

Phosphorus is a chemical element with the symbol P and atomic number 15. Elemental phosphorus exists in two major forms, white phosphorus and red phosphorus, but because it is highly reactive, phosphorus is never found as a free element on Earth. It has a concentration in the Earth's crust of about one gram per kilogram (compare copper at about 0.06 grams). In minerals, phosphorus generally occurs as phosphate.

https://en.wikipedia.org/wiki/Phosphorus

Hydrogen is the chemical element with the symbol H and atomic number 1. Hydrogen is the lightest element. At standard conditions hydrogen is a gas of diatomic molecules having the formula H2. It is colorless, odorless, tasteless,[8] non-toxic, and highly combustible. Hydrogen is the most abundant chemical substance in the universe, constituting roughly 75% of all normal matter.[9][note 1] Stars such as the Sun are mainly composed of hydrogen in the plasma state. Most of the hydrogen on Earth exists in molecular forms such as water and organic compounds. For the most common isotope of hydrogen (symbol 1H) each atom has one proton, one electron, and no neutrons.

https://en.wikipedia.org/wiki/Hydrogen

Sodium is a chemical element with the symbol Na (from Latin natrium) and atomic number 11. It is a soft, silvery-white, highly reactive metal. Sodium is an alkali metal, being in group 1 of the periodic table. Its only stable isotope is 23Na. The free metal does not occur in nature, and must be prepared from compounds. Sodium is the sixth most abundant element in the Earth's crust and exists in numerous minerals such as feldspars, sodalite, and rock salt (NaCl). Many salts of sodium are highly water-soluble: sodium ions have been leached by the action of water from the Earth's minerals over eons, and thus sodium and chlorine are the most common dissolved elements by weight in the oceans.

https://en.wikipedia.org/wiki/Sodium

Saline water (more commonly known as salt water) is water that contains a high concentration of dissolved salts (mainly sodium chloride). The salt concentration is usually expressed in parts per thousand (permille, ‰) and parts per million (ppm). The United States Geological Survey classifies saline water in three salinity categories. Salt concentration in slightly saline water is around 1,000 to 3,000 ppm (0.1–0.3%), in moderately saline water 3,000 to 10,000 ppm (0.3–1%) and in highly saline water 10,000 to 35,000 ppm (1–3.5%). Seawater has a salinity of roughly 35,000 ppm, equivalent to 35 grams of salt per one liter (or kilogram) of water. The saturation level is only nominally dependent on the temperature of the water.[1] At 20 °C one liter of water can dissolve about 357 grams of salt, a concentration of 26.3% w/w. At boiling (100 °C) the amount that can be dissolved in one liter of water increases to about 391 grams, a concentration of 28.1% w/w.
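The USGS categories quoted above amount to a simple threshold lookup. A minimal sketch (the "fresh water" label for anything below 1,000 ppm is a common convention, not stated in the excerpt):

    # Salinity categories from the thresholds quoted above (ppm of dissolved salts).
    def classify_salinity(ppm):
        if ppm < 1_000:
            return "fresh water (below the saline range)"
        if ppm <= 3_000:
            return "slightly saline"
        if ppm <= 10_000:
            return "moderately saline"
        if ppm <= 35_000:
            return "highly saline"
        return "above the highly saline range (brine)"

    for sample_ppm in (500, 2_000, 8_000, 35_000, 50_000):
        print(sample_ppm, "ppm ->", classify_salinity(sample_ppm))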

https://en.wikipedia.org/wiki/Saline_water

Seawater, or salt water, is water from a sea or ocean. On average, seawater in the world's oceans has a salinity of about 3.5% (35 g/l, 35 ppt, 600 mM). This means that every kilogram (roughly one liter by volume) of seawater has approximately 35 grams (1.2 oz) of dissolved salts (predominantly sodium (Na+) and chloride (Cl−) ions). Average density at the surface is 1.025 kg/l. Seawater is denser than both fresh water and pure water (density 1.0 kg/l at 4 °C (39 °F)) because the dissolved salts increase the mass by a larger proportion than the volume. The freezing point of seawater decreases as salt concentration increases. At typical salinity, it freezes at about −2 °C (28 °F).[1] The coldest seawater still in the liquid state ever recorded was found in 2010, in a stream under an Antarctic glacier: the measured temperature was −2.6 °C (27.3 °F).[2] Seawater pH is typically limited to a range between 7.5 and 8.4.[3] However, there is no universally accepted reference pH scale for seawater, and the difference between measurements based on different reference scales may be up to 0.14 units.[4]
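A short calculation shows that the salinity figures above are mutually consistent. Treating the dissolved salt as pure NaCl (molar mass about 58.44 g/mol) is a simplification, since real seawater contains many other ions:

    # Consistency check of the seawater figures quoted above, assuming the
    # dissolved salt is NaCl only (a simplification for illustration).
    salt_g_per_l = 35.0
    molar_mass_nacl = 58.44            # g/mol
    molarity = salt_g_per_l / molar_mass_nacl
    print(f"35 g/L of NaCl is about {molarity*1000:.0f} mM")   # ~599 mM, i.e. roughly 600 mM

    density_kg_per_l = 1.025
    mass_fraction = salt_g_per_l / (density_kg_per_l * 1000)
    print(f"Mass fraction is about {mass_fraction*100:.1f}%")  # ~3.4%, close to the quoted 3.5%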

https://en.wikipedia.org/wiki/Seawater

Dry ice is the solid form of carbon dioxide. It is commonly used because, at normal atmospheric pressure, it has no liquid state and sublimates directly from the solid state to the gas state. It is used primarily as a cooling agent, but is also used in fog machines at theatres for dramatic effects. Its advantages include a lower temperature than that of water ice and not leaving any residue (other than incidental frost from moisture in the atmosphere). It is useful for preserving frozen foods (such as ice cream) where mechanical cooling is unavailable.

Dry ice sublimates at 194.7 K (−78.5 °C; −109.2 °F) at Earth atmospheric pressure. This extreme cold makes the solid dangerous to handle without protection from frostbite injury. While generally not very toxic, the outgassing from it can cause hypercapnia (abnormally elevated carbon dioxide levels in the blood) due to buildup in confined locations.

https://en.wikipedia.org/wiki/Dry_ice

The atmosphere of Earth, commonly known as air, is the layer of gases retained by Earth's gravity that surrounds the planet and forms its planetary atmosphere. The atmosphere of Earth protects life on Earth by creating pressure allowing for liquid water to exist on the Earth's surface, absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night (the diurnal temperature variation).

https://en.wikipedia.org/wiki/Atmosphere_of_Earth

https://en.wikipedia.org/w/index.php?title=Air&redirect=no

https://en.wikipedia.org/wiki/Cumulonimbus_cloud

Ozone depletion consists of two related events observed since the late 1970s: a steady lowering of about four percent in the total amount of ozone in Earth's atmosphere, and a much larger springtime decrease in stratospheric ozone (the ozone layer) around Earth's polar regions.[1] The latter phenomenon is referred to as the ozone hole. There are also springtime polar tropospheric ozone depletion events in addition to these stratospheric events.

https://en.wikipedia.org/wiki/Ozone_depletion

https://en.wikipedia.org/wiki/Acid_rain

A mirror is an object that reflects an image. Light that bounces off a mirror will show an image of whatever is in front of it, when focused through the lens of the eye or a camera. Mirrors reverse the direction of the image in an equal yet opposite angle from which the light shines upon it. This allows the viewer to see themselves or objects behind them, or even objects that are at an angle from them but out of their field of view, such as around a corner. Natural mirrors have existed since prehistoric times, such as the surface of water, but people have been manufacturing mirrors out of a variety of materials for thousands of years, like stone, metals, and glass. In modern mirrors, metals like silver or aluminum are often used due to their high reflectivity, applied as a thin coating on glass because of its naturally smooth and very hard surface.

https://en.wikipedia.org/wiki/Mirror


Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface.[1]

The law of reflection states that a reflected ray of light emerges from the reflecting surface at the same angle to the surface normal as the incident ray, but on the opposing side of the surface normal in the plane formed by the incident and reflected rays. This behavior was first described by Hero of Alexandria (AD c. 10–70).[2]

Specular reflection may be contrasted with diffuse reflection, in which light is scattered away from the surface in a range of directions.
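The law of reflection stated above can be written as r = d − 2(d·n)n for an incident direction d and unit surface normal n. A small self-contained sketch verifying that the incident and reflected rays make equal angles with the normal:

    import math

    def reflect(d, n):
        """Reflect direction vector d off a surface with unit normal n: r = d - 2(d.n)n."""
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

    def angle_with_normal(v, n):
        """Angle (degrees) between vector v and the unit normal n."""
        dot = abs(sum(vi * ni for vi, ni in zip(v, n)))
        norm = math.sqrt(sum(vi * vi for vi in v))
        return math.degrees(math.acos(dot / norm))

    d = (1.0, -1.0, 0.0)      # incoming ray travelling down onto the surface
    n = (0.0, 1.0, 0.0)       # unit normal of a horizontal surface
    r = reflect(d, n)
    print("reflected direction:", r)                       # (1.0, 1.0, 0.0)
    print("incident angle:", angle_with_normal(d, n))      # 45.0 degrees
    print("reflected angle:", angle_with_normal(r, n))     # 45.0 degrees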

https://en.wikipedia.org/wiki/Specular_reflection


In describing reflection and refraction in optics, the plane of incidence (also called the incidence plane or the meridional plane[citation needed]) is the plane which contains the surface normal and the propagation vector of the incoming radiation.[1] (In wave optics, the latter is the k-vector, or wavevector, of the incoming wave.)

When reflection is specular, as it is for a mirror or other shiny surface, the reflected ray also lies in the plane of incidence; when refraction also occurs, the refracted ray lies in the same plane. The condition of co-planarity among incident ray, surface normal, and reflected ray (refracted ray) is known as the first law of reflection (first law of refraction, respectively).[2]


https://en.wikipedia.org/wiki/Plane_of_incidence

The term plane of polarization refers to the direction of polarization of linearly-polarized light or other electromagnetic radiation. Unfortunately the term is used with two contradictory meanings. As originally defined by Étienne-Louis Malus in 1811,[2] the plane of polarization coincided (although this was not known at the time) with the plane containing the direction of propagation and the magnetic vector.[3] In modern literature, the term plane of polarization, if it is used at all, is likely to mean the plane containing the direction of propagation and the electric vector,[4] because the electric field has the greater propensity to interact with matter.[5]
https://en.wikipedia.org/wiki/Plane_of_polarization



In optics, the corpuscular theory of light, arguably set forward by Descartes in 1637, states that light is made up of small discrete particles called "corpuscles" (little particles) which travel in a straight line with a finite velocity and possess impetus. This was based on an alternate description of atomism of the time period.

Isaac Newton was a pioneer of this theory; he notably elaborated upon it in 1672. This early conception of the particle theory of light was an early forerunner to the modern understanding of the photon. This theory cannot explain refraction, diffraction and interference, which require an understanding of the wave theory of light of Christiaan Huygens.

https://en.wikipedia.org/wiki/Corpuscular_theory_of_light


In physics, mirror matter, also called shadow matter or Alice matter, is a hypothetical counterpart to ordinary matter.[1]

https://en.wikipedia.org/wiki/Mirror_matter


In quantum mechanics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection): P: (x, y, z) ↦ (−x, −y, −z).

It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics. In interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.

A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180°-rotation.
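A short numeric illustration of the determinant statement above, showing that the 3-D point reflection has determinant −1 while the 2-D flip of both coordinates has determinant +1 and coincides with a 180° rotation:

    import numpy as np

    P3 = -np.eye(3)                # 3-D point reflection (x, y, z) -> (-x, -y, -z)
    P2 = -np.eye(2)                # 2-D flip of both coordinates
    theta = np.pi                  # 180-degree rotation in the plane
    R180 = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

    print("det(P3) =", round(np.linalg.det(P3)))   # -1: a true parity transformation
    print("det(P2) =", round(np.linalg.det(P2)))   # +1: not a parity transformation
    print("P2 equals a 180-degree rotation:", np.allclose(P2, R180))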

In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.

https://en.wikipedia.org/wiki/Parity_(physics)



Certain epoxy resins and their processes can create a hermetic bond to copper, brass, stainless steel, specialty alloys, plastic, or epoxy itself with similar coefficients of thermal expansion, and are used in the manufacture of hermetic electrical and fiber optic hermetic seals. Epoxy-based seals can increase signal density within a feedthrough design compared to other technologies with minimal spacing requirements between electrical conductors. Epoxy hermetic seal designs can be used in hermetic seal applications for low or high vacuum or pressures, effectively sealing gases or fluids including helium gas to very low helium gas leak rates similar to glass or ceramic. Hermetic epoxy seals also offer the design flexibility of sealing either copper alloy wires or pins instead of the much less electrically conductive Kovar pin materials required in glass or ceramic hermetic seals. With a typical operating temperature range of −70 °C to +125 °C or 150 °C, epoxy hermetic seals are more limited in comparison to glass or ceramic seals, although some hermetic epoxy designs are capable of withstanding 200 °C.[2]

https://en.wikipedia.org/wiki/Hermetic_seal

https://en.wikipedia.org/wiki/Oxygen_transmission_rate

https://en.wikipedia.org/wiki/Lubricant

https://en.wikipedia.org/wiki/Solvent

https://en.wikipedia.org/wiki/Proton

https://en.wikipedia.org/wiki/Phonon

https://en.wikipedia.org/wiki/Igor_Tamm

https://en.wikipedia.org/wiki/Condensed_matter_physics

https://en.wikipedia.org/wiki/Clock_generator

https://en.wikipedia.org/wiki/Mirror

https://en.wikipedia.org/wiki/Vacuum


See also

https://en.wikipedia.org/wiki/Zero_point

https://en.wikipedia.org/wiki/Nucleic_acid_analogue


radioligand

phosphor rad trans

hydrogen, oxygen, ozone, trihydrogen cation, hydrag prop, hydrazine, proton, phonon, photon, mirror matter, vacuum, dark matter, plane, form, salt, salt water, air, atmosphere, water, triangle, triplet, material, rock, iodine, triiodide, cation, cyclic triatom, hydro PH4, hydro solvent, oxygen lube, oxy tri cascade


draft


Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.

The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.

Limitations in the conversion of thermal energy

Conversions to thermal energy from other forms of energy may occur with 100% efficiency.[1][self-published source?] Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.

Thermal energy is unique because it cannot, in general, be fully converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.

Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states[2] because it is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.

In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%.[3][4] By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.[5]
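The ceiling on converting heat to work between two reservoirs is the Carnot efficiency, 1 − Tcold/Thot (temperatures in kelvin). The temperatures in the sketch below are illustrative assumptions, not figures from the text; they show how the ideal limit already sits far below 100% even before real-world losses:

    def carnot_efficiency(t_hot_k, t_cold_k):
        """Maximum fraction of heat convertible to work between two reservoirs (kelvin)."""
        return 1.0 - t_cold_k / t_hot_k

    # Illustrative temperatures (assumed): ~300 C steam versus ~30 C cooling water.
    t_hot, t_cold = 573.0, 303.0
    print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")   # about 47%
    # Real plants fall well below this ideal limit; the text quotes ~35% for nuclear reactors.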

History of energy transformation

Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.

Release of energy from gravitational potential

A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.[why?][citation needed]

On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.[citation needed]

Release of energy from radioactive potential

Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.

Release of energy from hydrogen fusion potential

In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory[which?], space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).

Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.

Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.

Examples

Examples of sets of energy conversions in machines

A coal-fired power plant involves these energy transformations:

  1. Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
  2. Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
  3. Thermal energy of steam converted to mechanical energy in the turbine
  4. Mechanical energy of the turbine converted to electrical energy by the generator, which is the ultimate output

In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency.[citation needed] Oil- and coal-fired stations are less efficient.
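Because the steps act in series, the overall efficiency is the product of the step efficiencies. The per-step values in the sketch below are invented purely for illustration; they are chosen only to show why the plant-level figure ends up far below the best individual step:

    # Overall efficiency of a conversion chain is the product of its steps.
    # The per-step values below are invented for illustration, not measured data.
    steps = {
        "combustion (chemical -> heat in flue gas)": 0.98,
        "heat exchange (flue gas -> steam)":         0.90,
        "turbine (steam heat -> mechanical)":        0.45,
        "generator (mechanical -> electrical)":      0.98,
    }

    overall = 1.0
    for name, eff in steps.items():
        overall *= eff
        print(f"{name}: {eff:.0%}")
    print(f"Overall plant efficiency: {overall:.0%}")   # roughly 39% with these assumptions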

In a conventional automobile, the following energy transformations occur:

  1. Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
  2. Kinetic energy of expanding gas converted to the linear piston movement
  3. Linear piston movement converted to rotary crankshaft movement
  4. Rotary crankshaft movement passed into transmission assembly
  5. Rotary movement passed out of transmission assembly
  6. Rotary movement passed through a differential
  7. Rotary movement passed out of differential to drive wheels
  8. Rotary movement of drive wheels converted to linear motion of the vehicle

Other energy conversions

Lamatalaventosa Wind Farm

There are many different machines and transducers that convert one energy form into another.

See also

https://en.wikipedia.org/wiki/Energy_transformation




The groundwater energy balance is the energy balance of a groundwater body in terms of incoming hydraulic energy associated with groundwater inflow into the body, energy associated with the outflow, energy conversion into heat due to friction of flow, and the resulting change of energy status and groundwater level.

https://en.wikipedia.org/wiki/Groundwater_energy_balance


A hygrometer is an instrument used to measure the amount of water vapor in air, in soil, or in confined spaces. Humidity measurement instruments usually rely on measurements of some other quantity, such as temperature, pressure, mass, or a mechanical or electrical change in a substance as moisture is absorbed. By calibration and calculation, these measured quantities can lead to a measurement of humidity. Modern electronic devices use the temperature of condensation (called the dew point), or changes in electrical capacitance or resistance, to measure humidity differences. A crude hygrometer was invented by Leonardo da Vinci in 1480. Major advances came during the 1600s; Francesco Folli invented a more practical version of the device, while Robert Hooke improved a number of meteorological devices including the hygrometer. A more modern version was created by Swiss polymath Johann Heinrich Lambert in 1755. Later, in the year 1783, Swiss physicist and geologist Horace Bénédict de Saussure invented the first hygrometer using human hair to measure humidity.

The maximum amount of water vapor that can be held in a given volume of air (saturation) varies greatly by temperature; cold air can hold less mass of water per unit volume than hot air. Changing the temperature of an air parcel therefore changes its relative humidity even when its moisture content stays the same.

Ancient hygrometers

Prototype hygrometers were devised and developed during the Shang dynasty in Ancient China to study weather.[1] The Chinese used a bar of charcoal and a lump of earth: its dry weight was taken, then compared with its damp weight after being exposed in the air. The differences in weight were used to tally the humidity level.

Other techniques used mass to measure humidity: when the air was dry, the bar of charcoal would be light, while when the air was humid, the bar of charcoal would be heavy. By hanging a lump of earth and a bar of charcoal on the two ends of a staff separately and adding a fixed lifting string at the middle point to make the staff horizontal in dry air, an ancient hygrometer was made.[2][1]

Metal-paper coil type

The metal-paper coil hygrometer is very useful for giving a dial indication of humidity changes. It appears most often in inexpensive devices, and its accuracy is limited, with variations of 10% or more. In these devices, water vapor is absorbed by a salt-impregnated paper strip attached to a metal coil, causing the coil to change shape. These changes (analogous to those in a bimetallic thermometer) cause an indication on a dial, typically via a metal needle on the front of the gauge that moves as the humidity changes.

Hair tension hygrometers

Deluc's hair tension whalebone hygrometer (MHS Geneva)

These devices use a human or animal hair under some tension. The hair is hygroscopic (tending toward retaining moisture); its length changes with humidity, and the length change may be magnified by a mechanism and indicated on a dial or scale. In the late 17th century, such devices were called by some scientists hygroscopes; that word is no longer in use, but hygroscopic and hygroscopy, which derive from it, still are. The traditional folk art device known as a weather house works on this principle. Whale bone and other materials may be used in place of hair.

In 1783, Swiss physicist and geologist Horace Bénédict de Saussure built the first hair-tension hygrometer using human hair.

It consists of a human hair eight to ten inches[3] long, b c, Fig. 37, fastened at one extremity to a screw, a, and at the other passing over a pulley, c, being strained tight by a silk thread and weight, d.

— John William Draper, A Textbook on Chemistry

The pulley is connected to an index which moves over a graduated scale (e). The instrument can be made more sensitive by removing oils from the hair, such as by first soaking the hair in diethyl ether.[4]

Psychrometer (wet-and-dry-bulb thermometer)

The interior of a Stevenson screen showing a motorized psychrometer

A psychrometer, or a wet and dry-bulb thermometer, consists of two calibrated thermometers, one that is dry and one that is kept moist with distilled water on a sock or wick.[5] At temperatures above the freezing point of water, evaporation of water from the wick lowers the temperature, such that the wet-bulb thermometer will be at a lower temperature than that of the dry-bulb thermometer. When the air temperature is below freezing, however, the wet-bulb must be covered with a thin coating of ice, in order to be accurate. As a result of the heat of sublimation, the wet-bulb temperature will eventually be lower than the dry bulb, although this may take many minutes of continued use of the psychrometer.

Psychrometer probably made in Switzerland circa 1850 by Kappeller (MHS Geneva)

Relative humidity (RH) is computed from the ambient temperature, shown by the dry-bulb thermometer and the difference in temperatures as shown by the wet-bulb and dry-bulb thermometers. Relative humidity can also be determined by locating the intersection of the wet and dry-bulb temperatures on a psychrometric chart. The dry and wet thermometers coincide when the air is fully saturated, and the greater the difference the drier the air. Psychrometers are commonly used in meteorology, and in the HVAC industry for proper refrigerant charging of residential and commercial air conditioning systems.
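One common way to turn the two readings into a relative humidity figure is the psychrometer equation combined with the Magnus approximation for saturation vapour pressure. This is a textbook approximation with an assumed psychrometer constant, not the calibration of any particular instrument:

    import math

    def saturation_vapour_pressure(t_c):
        """Saturation vapour pressure in hPa (Magnus approximation)."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def relative_humidity(t_dry_c, t_wet_c, pressure_hpa=1013.25, gamma=0.000662):
        """Approximate RH (%) from dry- and wet-bulb temperatures (psychrometer equation)."""
        e_sat_wet = saturation_vapour_pressure(t_wet_c)
        e_actual = e_sat_wet - gamma * pressure_hpa * (t_dry_c - t_wet_c)
        return 100.0 * e_actual / saturation_vapour_pressure(t_dry_c)

    print(f"RH is roughly {relative_humidity(25.0, 20.0):.0f}%")   # about 63% for this example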

Sling psychrometer

A sling psychrometer for outdoor use

A sling psychrometer, which uses thermometers attached to a handle, is spun by hand in a free air flow until both temperatures stabilize. This is sometimes used for field measurements, but is being replaced by more convenient electronic sensors. A whirling psychrometer uses the same principle, but the two thermometers are fitted into a device that resembles a ratchet or football rattle.

Chilled mirror dew point hygrometer

Dew point is the temperature at which a sample of moist air (or any other water vapor) at constant pressure reaches water vapor saturation. At this saturation temperature, further cooling results in condensation of water. Chilled mirror dewpoint hygrometers are some of the most precise instruments commonly available. They use a chilled mirror and optoelectronic mechanism to detect condensation on the mirror's surface. The temperature of the mirror is controlled by electronic feedback to maintain a dynamic equilibrium between evaporation and condensation, thus closely measuring the dew point temperature. An accuracy of 0.2 °C is attainable with these devices, which correlates at typical office environments to a relative humidity accuracy of about ±1.2%. These devices need frequent cleaning, a skilled operator and periodic calibration to attain these levels of accuracy. Even so, they are prone to heavy drifting in environments where smoke or otherwise impure air may be present.

More recently, spectroscopic chilled-mirrors have been introduced. Using this method, the dew point is determined with spectroscopic light detection which ascertains the nature of the condensation. This method avoids many of the pitfalls of the previous chilled-mirrors and is capable of operating drift free.

Capacitive

For applications where cost, space, or fragility are relevant, other types of electronic sensors are used, at the price of a lower accuracy. In capacitive hygrometers, the effect of humidity on the dielectric constant of a polymer or metal oxide material is measured. With calibration, these sensors have an accuracy of ±2% RH in the range 5–95% RH. Without calibration, the accuracy is 2 to 3 times worse. Capacitive sensors are robust against effects such as condensation and temporary high temperatures.[6] Capacitive sensors are subject to contamination, drift and aging effects, but they are suitable for many applications.

Resistive

In resistive hygrometers, the change in electrical resistance of a material due to humidity is measured.[6] Typical materials are salts and conductive polymers. Resistive sensors are less sensitive than capacitive sensors – the change in material properties is less, so they require more complex circuitry. The material properties also tend to depend both on humidity and temperature, which means in practice that the sensor must be combined with a temperature sensor. The accuracy and robustness against condensation vary depending on the chosen resistive material. Robust, condensation-resistant sensors exist with an accuracy of up to ±3% RH (relative humidity).

Thermal

In thermal hygrometers, the change in thermal conductivity of air due to humidity is measured. These sensors measure absolute humidity rather than relative humidity.[6]

Gravimetric

A gravimetric hygrometer measures the mass of an air sample compared to an equal volume of dry air. This is considered the most accurate primary method to determine the moisture content of the air.[7] National standards based on this type of measurement have been developed in the US, UK, EU and Japan. The inconvenience of using this device means that it is usually only used to calibrate less accurate instruments, called transfer standards.

Optical

An optical hygrometer measures the absorption of light by water in the air.[8] A light emitter and a light detector are arranged with a volume of air between them. The attenuation of the light, as seen by the detector, indicates the humidity, according to the Beer–Lambert law. Types include the Lyman-alpha hygrometer (using Lyman-alpha light emitted by hydrogen), the krypton hygrometer (using 123.58 nm light emitted by krypton), and the differential absorption hygrometer (using light emitted by two lasers operating at different wavelengths, one absorbed by humidity and the other not).
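The underlying relation is the Beer–Lambert law, I = I0·exp(−σNL). The sketch below inverts it to recover the absorber number density from a measured attenuation; the cross-section, density and path length are placeholder values chosen only to make the arithmetic visible, not real krypton or Lyman-alpha data:

    import math

    # Beer–Lambert law: I = I0 * exp(-sigma * N * L).
    def transmitted_intensity(i0, sigma, n, path_length):
        """Light intensity after passing through an absorbing column."""
        return i0 * math.exp(-sigma * n * path_length)

    def number_density_from_attenuation(i0, i, sigma, path_length):
        """Invert Beer–Lambert to recover absorber number density (per m^3)."""
        return math.log(i0 / i) / (sigma * path_length)

    sigma = 5e-22      # absorption cross-section, m^2 (placeholder)
    n_true = 3e23      # water-vapour number density, m^-3 (placeholder)
    L = 0.01           # emitter–detector separation, m

    i = transmitted_intensity(1.0, sigma, n_true, L)
    print(f"Transmittance: {i:.3f}")
    print(f"Recovered density: {number_density_from_attenuation(1.0, i, sigma, L):.2e} m^-3")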

Calibration standards

Psychrometer calibration

Accurate calibration of the thermometers used is fundamental to precise humidity determination by the wet-dry method. The thermometers must be protected from radiant heat and must have a sufficiently high flow of air over the wet bulb for the most accurate results. One of the most precise types of wet-dry bulb psychrometer was invented in the late 19th century by Adolph Richard Assmann (1845–1918);[12] in English-language references the device is usually spelled "Assmann psychrometer." In this device, each thermometer is suspended within a vertical tube of polished metal, and that tube is in turn suspended within a second metal tube of slightly larger diameter; these double tubes serve to isolate the thermometers from radiant heating. Air is drawn through the tubes with a fan that is driven by a clockwork mechanism to ensure a consistent speed (some modern versions use an electric fan with electronic speed control).[13] According to Middleton, 1966, "an essential point is that air is drawn between the concentric tubes, as well as through the inner one."[14]

It is very challenging, particularly at low relative humidity, to obtain the maximal theoretical depression of the wet-bulb temperature; an Australian study in the late 1990s found that liquid-in-glass wet-bulb thermometers were warmer than theory predicted even when considerable precautions were taken;[15] this could lead to RH readings that are 2 to 5 percentage points too high.

One solution sometimes used for accurate humidity measurement when the air temperature is below freezing is to use a thermostatically-controlled electric heater to raise the temperature of outside air to above freezing. In this arrangement, a fan draws outside air past (1) a thermometer to measure the ambient dry-bulb temperature, (2) the heating element, (3) a second thermometer to measure the dry-bulb temperature of the heated air, then finally (4) a wet-bulb thermometer. According to the World Meteorological Organization Guide, "The principle of the heated psychrometer is that the water vapor content of an air mass does not change if it is heated. This property may be exploited to the advantage of the psychrometer by avoiding the need to maintain an ice bulb under freezing conditions."[16]

Since the humidity of the ambient air is calculated indirectly from three temperature measurements, in such a device accurate thermometer calibration is even more important than for a two-bulb configuration.

Saturated salt calibration

Various researchers[17] have investigated the use of saturated salt solutions for calibrating hygrometers. Slushy mixtures of certain pure salts and distilled water have the property that they maintain an approximately constant humidity in a closed container. A saturated table salt (sodium chloride) bath will eventually give a reading of approximately 75%. Other salts have other equilibrium humidity levels: lithium chloride ~11%; magnesium chloride ~33%; potassium carbonate ~43%; potassium sulfate ~97%. Salt solutions will vary somewhat in humidity with temperature and they can take relatively long times to come to equilibrium, but their ease of use compensates somewhat for these disadvantages in low-precision applications, such as checking mechanical and electronic hygrometers.
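The equilibrium humidities quoted above make a convenient reference table for spot-checking a hygrometer against whichever salt is in the container; the values below are the approximate room-temperature figures from the text:

    # Approximate equilibrium relative humidities over saturated salt solutions,
    # taken from the figures quoted above (room-temperature values).
    SALT_RH = {
        "lithium chloride": 11,
        "magnesium chloride": 33,
        "potassium carbonate": 43,
        "sodium chloride": 75,
        "potassium sulfate": 97,
    }

    def calibration_error(salt, measured_rh):
        """Difference between a hygrometer reading and the salt's reference RH."""
        return measured_rh - SALT_RH[salt]

    print(f"Reading 78% over saturated NaCl -> error {calibration_error('sodium chloride', 78):+d} % RH")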

See also

https://en.wikipedia.org/wiki/Hygrometer#Chilled_mirror_hygrometer


The ice accretion indicator is an L-shaped piece of aluminium 38 cm (15 in) long by 4 to 5 cm (1.6 to 2.0 in) wide.[1][2] It is used to indicate the formation of ice or frost, or the presence of freezing rain or freezing drizzle.

It is normally attached to a Stevenson screen, about 1 m (3 ft 3 in) above ground,[2] but may be mounted in other areas away from any artificial heat sources. The weather station would have two on site and they would be exchanged after every weather observation. The spare indicator should always be at the outside air temperature to ensure that it is ready for use and would normally be stored inside the screen.[3]

If the observer notes the presence of ice or frost on the indicator then a remark to that effect should be sent in the next weather observation. Examples of these are 'rime icing on indicator' and 'FROIN' (frost on indicator). As the indicator is at air temperature and is kept horizontal it provides an excellent surface on which to observe freezing precipitation.

https://en.wikipedia.org/wiki/Ice_accretion_indicator




A pyranometer is a type of actinometer used for measuring solar irradiance on a planar surface; it is designed to measure the solar radiation flux density (W/m2) from the hemisphere above, within a wavelength range of 0.3 μm to 3 μm. The name pyranometer stems from the Greek words πῦρ (pyr), meaning "fire", and ἄνω (ano), meaning "above, sky".

A typical pyranometer does not require any power to operate. However, recent technical development includes use of electronics in pyranometers, which do require (low) external power.

https://en.wikipedia.org/wiki/Pyranometer


Types of pyrolysis

Complete pyrolysis of organic matter usually leaves a solid residue that consists mostly of elemental carbon; the process is then called carbonization. More specific cases of pyrolysis include:

https://en.wikipedia.org/wiki/Pyrolysis

Dry distillation is the heating of solid materials to produce gaseous products (which may condense into liquids or solids). The method may involve pyrolysis or thermolysis, or it may not (for instance, a simple mixture of ice and glass could be separated without breaking any chemical bonds, but organic matter contains a greater diversity of molecules, some of which are likely to break). If there are no chemical changes, just phase changes, it resembles classical distillation, although it will generally need higher temperatures. Dry distillation in which chemical changes occur is a type of destructive distillation or cracking.
Derivation of a wood-tar creosote from resinous woods[1]


Uses

The method has been used to obtain liquid fuels from coal and wood. It can also be used to break down mineral salts such as sulfates (SO42−) through thermolysis, in this case producing sulfur dioxide (SO2) or sulfur trioxide (SO3) gas which can be dissolved in water to obtain sulfuric acid. By this method sulfuric acid was first identified and artificially produced. When substances of vegetable origin, e.g. coal, oil shale, peat or wood, are heated in the absence of air (dry distillation), they decompose into gas, liquid products and coke/charcoal. The yield and chemical nature of the decomposition products depend on the nature of the raw material and the conditions under which the dry distillation is done. Decomposition within a temperature range of 450 to about 600 °C is called carbonization or low-temperature degassing. At temperatures above 900 °C, the process is called coking or high-temperature degassing.[2] If coal is gasified to make coal gas or carbonized to make coke, then coal tar is among the by-products.

https://en.wikipedia.org/wiki/Dry_distillation

Coal tar is a thick dark liquid which is a by-product of the production of coke and coal gas from coal.[2][3] It has both medical and industrial uses.[2][4] Medicinally it is a topical medication applied to skin to treat psoriasis and seborrheic dermatitis (dandruff).[5] It may be used in combination with ultraviolet light therapy.[5] Industrially it is a railroad tie preservative and used in the surfacing of roads.[6] Coal tar was listed as a known human carcinogen in the first Report on Carcinogens from the U.S. Federal Government.[7]

Coal tar was discovered circa 1665 and used for medical purposes as early as the 1800s.[6][8] Circa 1850, the discovery that it could be used as the main ingredient in synthetic dyes engendered an entire industry.[9] It is on the World Health Organization's List of Essential Medicines.[10] Coal tar is available as a generic medication and over the counter.[4]

Side effects include skin irritation, sun sensitivity, allergic reactions, and skin discoloration.[5] It is unclear if use during pregnancy is safe for the baby and use during breastfeeding is not typically recommended.[11] The exact mechanism of action is unknown.[12] It is a complex mixture of phenols, polycyclic aromatic hydrocarbons (PAHs), and heterocyclic compounds.[2] It demonstrates antifungal, anti-inflammatory, anti-itch, and antiparasitic properties.[12]

https://en.wikipedia.org/wiki/Coal_tar



Oil shale is a sedimentary rock. (Pictured: combustion of oil shale.)

https://en.wikipedia.org/wiki/Oil_shale

Oil shale gas (also: retort gas or retorting gas) is a synthetic non-condensable gas mixture (syngas) produced by oil shale thermal processing (pyrolysis). Although often referred to as shale gas, it differs from the natural gas produced from shale formations, which is also known as shale gas.[1]
https://en.wikipedia.org/wiki/Oil_shale_gas

Carbon monoxide (chemical formula CO) is a colorless, odorless, tasteless, flammable gas that is slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom. It is the simplest molecule of the oxocarbon family. In coordination complexes the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry.[5]

Thermal combustion is the most common source of carbon monoxide; however, there are numerous environmental and biological sources that generate and emit a significant amount of carbon monoxide. Carbon monoxide is important in the production of many compounds, ranging from drugs and fragrances to fuels. It is produced by many organisms, including humans.[6] Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change.[7]

Carbon monoxide has important biological roles across phylogenetic kingdoms. In mammalian physiology, carbon monoxide is a classical example of hormesis, where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic, resulting in carbon monoxide poisoning. It is isoelectronic with the cyanide anion CN−.

https://en.wikipedia.org/wiki/Carbon_monoxide



Gasotransmitters are a class of neurotransmitters. The molecules are distinguished from other bioactive endogenous gaseous signaling molecules based on a need to meet distinct characterization criteria. Currently, only nitric oxide, carbon monoxide, and hydrogen sulfide are accepted as gasotransmitters.[1]

The name gasotransmitter is not intended to suggest a gaseous physical state such as infinitesimally small gas bubbles; the physical state is dissolution in complex body fluids and cytosol.[2] These particular gases share many common features in their production and function but carry out their tasks in unique ways that differ from those of classical signaling molecules.

https://en.wikipedia.org/wiki/Gasotransmitter

Gaseous signaling molecules are gaseous molecules that are either synthesized internally (endogenously) in the organism, tissue or cell or are received by the organism, tissue or cell from outside (say, from the atmosphere or hydrosphere, as in the case of oxygen) and that are used to transmit chemical signals which induce certain physiological or biochemical changes in the organism, tissue or cell. The term is applied to, for example, oxygen, carbon dioxide, sulfur dioxide, nitrous oxide, hydrogen cyanide, ammonia, methane, hydrogen, ethylene, etc.

Select gaseous signaling molecules behave as neurotransmitters and are called gasotransmitters. These include nitric oxide, carbon monoxide, and hydrogen sulfide.

Historically, the study of gases and physiological effects was categorized under factitious airs.

The biological roles of each of the gaseous signaling molecules are outlined below.

https://en.wikipedia.org/wiki/Gaseous_signaling_molecules



Biochemistry is the chemistry of life. Biochemists study the elements, compounds and chemical reactions that are controlled by biomolecules (such as polypeptides, polynucleotides, polysaccharides, lipids and chemical messengers) and take place in all living organisms.

https://en.wikipedia.org/wiki/Category:Biochemistry



In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation states.[1][2] More generally, the term can be applied to any desymmetrizing reaction of the following type: 2 A → A' + A", regardless of whether it is a redox or some other type of process.[3]

Examples

Hg2Cl2 → Hg + HgCl2
4 H3PO3 → 3 H3PO4 + PH3
  • Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate:
2 HCO3− → CO32− + H2CO3
The oxidation numbers remain constant in this acid–base reaction. This process is also called autoionization.
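
To make the redox bookkeeping behind the phosphorous acid example explicit, here is a minimal sketch (not from the source) that checks the electron balance using standard oxidation-state assignments (phosphorus is +3 in H3PO3, +5 in H3PO4 and −3 in PH3):

# Minimal sketch: electron balance for 4 H3PO3 -> 3 H3PO4 + PH3.
ox_start, ox_oxidized, ox_reduced = +3, +5, -3   # oxidation states of P
n_oxidized, n_reduced = 3, 1                     # coefficients of H3PO4 and PH3

electrons_released = n_oxidized * (ox_oxidized - ox_start)   # 3 * 2 = 6
electrons_consumed = n_reduced * (ox_start - ox_reduced)     # 1 * 6 = 6

assert electrons_released == electrons_consumed
print(electrons_released, "electrons transferred")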

Reverse reaction

The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as synproportionation.

History

The first disproportionation reaction to be studied in detail was:

2 Sn2+ → Sn4+ + Sn

This was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it 'söndring'.[4][5]

Further examples

Polymer chemistry

In free-radical chain-growth polymerization, chain termination can occur by a disproportionation step in which a hydrogen atom is transferred from one growing chain molecule to another, producing two dead (non-growing) chains.[10]

-------CH2–•CHX + -------CH2–•CHX → -------CH=CHX + -------CH2–CH2X
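
As a small illustration of how the two termination modes are usually compared, here is a sketch under assumed values (not taken from the source; k_td and k_tc are hypothetical disproportionation and combination rate constants):

# Minimal sketch: both termination modes are second order in radical
# concentration, so the fraction of termination events going by
# disproportionation is k_td / (k_td + k_tc). Disproportionation yields
# two dead chains per event, combination yields one.
def disproportionation_fraction(k_td: float, k_tc: float) -> float:
    return k_td / (k_td + k_tc)

def dead_chains_per_termination(k_td: float, k_tc: float) -> float:
    x = disproportionation_fraction(k_td, k_tc)
    return 2.0 * x + 1.0 * (1.0 - x)

# Hypothetical values for illustration only:
k_td, k_tc = 3.0e7, 1.0e7
print(f"fraction by disproportionation: {disproportionation_fraction(k_td, k_tc):.2f}")
print(f"dead chains per termination:    {dead_chains_per_termination(k_td, k_tc):.2f}")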

Biochemistry

In 1937, Hans Adolf Krebs, who discovered the citric acid cycle bearing his name, confirmed the anaerobic dismutation of pyruvic acid into lactic acid, acetic acid and CO2 by certain bacteria according to the global reaction:[11]

2 pyruvic acid + H2O → lactic acid + acetic acid + CO2
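
A quick element-balance check of this global reaction (a minimal sketch, not from the source; the molecular formulas are the standard ones: pyruvic acid C3H4O3, lactic acid C3H6O3, acetic acid C2H4O2):

# Minimal sketch: verify that 2 pyruvic acid + H2O -> lactic acid + acetic acid + CO2
# is balanced in C, H and O.
from collections import Counter

def atoms(formula: dict, coeff: int = 1) -> Counter:
    return Counter({el: n * coeff for el, n in formula.items()})

pyruvic = {"C": 3, "H": 4, "O": 3}
water   = {"H": 2, "O": 1}
lactic  = {"C": 3, "H": 6, "O": 3}
acetic  = {"C": 2, "H": 4, "O": 2}
co2     = {"C": 1, "O": 2}

left  = atoms(pyruvic, 2) + atoms(water)
right = atoms(lactic) + atoms(acetic) + atoms(co2)
print(dict(left), dict(right), left == right)   # both {'C': 6, 'H': 10, 'O': 7}, True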

The dismutation of pyruvic acid into other small organic molecules (ethanol + CO2, or lactate and acetate, depending on the environmental conditions) is also an important step in fermentation reactions. Fermentation reactions can also be considered disproportionation or dismutation biochemical reactions. Indeed, the donor and acceptor of electrons in the redox reactions supplying the chemical energy in these complex biochemical systems are the same organic molecules, simultaneously acting as reductant and oxidant.

Another example of biochemical dismutation reaction is the disproportionation of acetaldehyde into ethanol and acetic acid.[12]

While in respiration electrons are transferred from substrate (electron donor) to an electron acceptor, in fermentation part of the substrate molecule itself accepts the electrons. Fermentation is therefore a type of disproportionation, and does not involve an overall change in oxidation state of the substrate. Most of the fermentative substrates are organic molecules. However, a rare type of fermentation may also involve the disproportionation of inorganic sulfur compounds in certain sulfate-reducing bacteria.[13]


https://en.wikipedia.org/wiki/Disproportionation

Xenon is a chemical element with the symbol Xe and atomic number 54. It is a colorless, dense, odorless noble gas found in Earth's atmosphere in trace amounts.[11] Although generally unreactive, it can undergo a few chemical reactions such as the formation of xenon hexafluoroplatinate, the first noble gas compound to be synthesized.[12][13][14]

Xenon is used in flash lamps[15] and arc lamps,[16] and as a general anesthetic.[17] The first excimer laser design used a xenon dimer molecule (Xe2) as the lasing medium,[18] and the earliest laser designs used xenon flash lamps as pumps.[19] Xenon is also used to search for hypothetical weakly interacting massive particles[20] and as a propellant for ion thrusters in spacecraft.[21]

Naturally occurring xenon consists of seven stable isotopes and two long-lived radioactive isotopes. More than 40 unstable xenon isotopes undergo radioactive decay, and the isotope ratios of xenon are an important tool for studying the early history of the Solar System.[22] Radioactive xenon-135 is produced by beta decay from iodine-135 (a product of nuclear fission), and is the most significant (and unwanted) neutron absorber in nuclear reactors.[23]

https://en.wikipedia.org/wiki/Xenon

https://aip.scitation.org/doi/10.1063/1.1734052

https://www.youtube.com/watch?v=Va2F1e7VIKw

https://en.wikipedia.org/wiki/Neon_compounds



Ligands

Neon can form a very weak bond to a transition metal atom as a ligand, for example Cr(CO)5Ne,[15] Mo(CO)5Ne, and W(CO)5Ne.[16]

NeNiCO is predicted to have a binding energy of 2.16 kcal/mol. The presence of neon changes the bending frequency of Ni−C−O by 36 cm−1.[17][18]

NeAuF[19] and NeBeS[20] have been isolated in noble gas matrixes.[21] NeBeCO3 has been detected by infrared spectroscopy in a solid neon matrix. It was made from beryllium gas, dioxygen and carbon monoxide.[16]

The cyclic molecule Be2O2 can be made by evaporating Be with a laser with oxygen and an excess of inert gas. It coordinates two noble gas atoms and has had spectra measured in solid neon matrices. Known neon-containing molecules are the homoleptic Ne.Be2O2.Ne, and the heteroleptic Ne.Be2O2.Ar and Ne.Be2O2.Kr. The neon atoms are attracted to the beryllium atoms, which carry a positive charge in this molecule.[22]

Beryllium sulfite molecules, BeO2S, can also coordinate neon onto the beryllium atom. The dissociation energy for neon is 0.9 kcal/mol. When neon is added to the cyclic molecule, the ∠O-Be-O angle decreases and the O-Be bond lengths increase.[23]
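
To put the quoted kcal/mol values in context, here is a minimal sketch (not from the source) comparing them with the thermal energy RT at room temperature and at an illustrative solid-neon matrix temperature of about 10 K:

# Minimal sketch: convert the quoted energies to kJ/mol and compare with RT.
R = 8.314462618e-3   # gas constant, kJ mol^-1 K^-1
KCAL_TO_KJ = 4.184

def rt(temperature_k: float) -> float:
    return R * temperature_k

for name, e_kcal in [("NeNiCO binding", 2.16), ("Ne-BeO2S dissociation", 0.9)]:
    e_kj = e_kcal * KCAL_TO_KJ
    print(f"{name}: {e_kj:.2f} kJ/mol = "
          f"{e_kj / rt(300):.1f} x RT at 300 K = "
          f"{e_kj / rt(10):.0f} x RT at 10 K")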

Solids

High pressure Van der Waals solids include (N2)6Ne7.[24]

Neon hydrate or neon clathrate, a clathrate, can form in ice II at 480 MPa pressure between 70 K and 260 K.[25] Other neon hydrates, resembling the hydrogen clathrate and the clathrates of helium, are also predicted. These include the C0, ice Ih and ice Ic forms.[25]

Neon atoms can be trapped inside fullerenes such as C60 and C70. The isotope 22Ne is strongly enriched in carbonaceous chondrite meteorites, by more than 1,000 times its occurrence on Earth. This neon is given off when a meteorite is heated.[26] An explanation for this is that originally, when carbon was condensing from the aftermath of a supernova explosion, cages of carbon formed that preferentially trapped sodium atoms, including 22Na. Forming fullerenes trap sodium orders of magnitude more often than neon, so Na@C60 is formed rather than the more common 20Ne@C60. The 22Na@C60 then decays radioactively to 22Ne@C60, without any other neon isotopes.[27] To make buckyballs with neon inside, buckminsterfullerene can be heated to 600 °C with neon under pressure. With three atmospheres for one hour, about 1 in 8,500,000 molecules ends up as Ne@C60. The concentration inside the buckyballs is about the same as in the surrounding gas. This neon comes back out when heated to 900 °C.[28]


https://en.wikipedia.org/wiki/Neon_compounds


An excimer (originally short for excited dimer) is a short-lived dimeric or heterodimeric molecule formed from two species, at least one of which has a valence shell completely filled with electrons (for example, noble gases). In this case, formation of molecules is possible only if such an atom is in an electronic excited state.[1] Heteronuclear molecules and molecules that have more than two species are also called exciplex molecules (originally short for excited complex). Excimers are often diatomic and are composed of two atoms or molecules that would not bond if both were in the ground state. The lifetime of an excimer is very short, on the order of nanoseconds. Binding of a larger number of excited atoms forms Rydberg matter clusters, the lifetime of which can exceed many seconds.
https://en.wikipedia.org/wiki/Excimer

A clathrate is a chemical substance consisting of a lattice that traps or contains molecules. The word clathrate is derived from the Latin clathratus (clatratus), meaning ‘with bars, latticed’.[1] Most clathrate compounds are polymeric and completely envelop the guest molecule, but in modern usage clathrates also include host–guest complexes and inclusion compounds.[2] According to IUPAC, clathrates are inclusion compounds "in which the guest molecule is in a cage formed by the host molecule or by a lattice of host molecules."[3] The term refers to many molecular hosts, including calixarenes and cyclodextrins and even some inorganic polymers such as zeolites.

Many clathrates are derived from organic hydrogen-bonded frameworks. These frameworks are prepared from molecules that "self-associate" by multiple hydrogen-bonding interactions.

https://en.wikipedia.org/wiki/Clathrate_compound


Aniline is an organic compound with the formula C6H5NH2. Consisting of a phenyl group attached to an amino group, aniline is the simplest aromatic amine. It is an industrially significant commodity chemical, as well as a versatile starting material for fine chemical synthesis. Its main use is in the manufacture of precursors to polyurethane, dyes, and other industrial chemicals. Like most volatile amines, it has the odor of rotten fish. It ignites readily, burning with a smoky flame characteristic of aromatic compounds.[6]

Relative to benzene, it is electron-rich. It thus participates more rapidly in electrophilic aromatic substitution reactions. Likewise, it is also prone to oxidation: while freshly purified aniline is an almost colorless oil, exposure to air results in gradual darkening to yellow or red, due to the formation of strongly colored, oxidized impurities. Aniline can be diazotized to give a diazonium salt, which can then undergo various nucleophilic substitution reactions.

“Aniline” is ultimately from Portuguese anil, meaning "the indigo shrub", with the suffix -ine indicating "derived substance".[7]

Like other amines, aniline is both a base (pKaH = 4.6) and a nucleophile, although less so than structurally similar aliphatic amines.
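
As an illustration of what pKaH = 4.6 implies, here is a minimal sketch (not from the source) using the Henderson–Hasselbalch relation; the comparison value for aliphatic amines, pKaH around 10, is a typical textbook figure:

# Minimal sketch: fraction of aniline protonated (as anilinium) at a given pH.
def fraction_protonated(ph: float, pkah: float = 4.6) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pkah))

for ph in (3.0, 4.6, 7.0):
    print(f"pH {ph}: {100 * fraction_protonated(ph):5.1f}% protonated")
# ~97.5% at pH 3, 50% at pH 4.6, only ~0.4% at pH 7 -- far less basic than a
# typical aliphatic amine with pKaH near 10.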

Because an early source of the benzene from which they are derived was coal tar, aniline dyes are also called coal tar dyes.

https://en.wikipedia.org/wiki/Aniline


Matrix isolation is an experimental technique used in chemistry and physics. It generally involves a material being trapped within an unreactive matrix. A host matrix is a continuous solid phase in which guest particles (atoms, molecules, ions, etc.) are embedded. The guest is said to be isolated within the host matrix. Initially the term matrix-isolation was used to describe the placing of a chemical species in any unreactive material, often polymers or resins, but more recently has referred specifically to gases in low-temperature solids. A typical matrix isolation experiment involves a guest sample being diluted in the gas phase with the host material, usually a noble gas or nitrogen. This mixture is then deposited on a window that is cooled to below the melting point of the host gas. The sample may then be studied using various spectroscopic procedures.

https://en.wikipedia.org/wiki/Matrix_isolation


https://www.nbcnews.com/video/drone-captures-images-of-ice-cave-caused-by-climate-change-in-switzerland-129388613738
https://cen.acs.org/physical-chemistry/geochemistry/Naicas-crystal-cave-captivates-chemists/97/i6
https://www.bbc.com/news/science-environment-39013829

https://www.researchgate.net/publication/307833603_The_mass_and_energy_balance_of_ice_within_the_Eisriesenwelt_cave_Austria
https://www.intechopen.com/chapters/69113
https://ui.adsabs.harvard.edu/abs/2011TCry....5..245O/abstract


Wideband Amplifiers

By Peter Staric, Erik Margan



https://books.google.com/books?id=dzsrxlafZAgC&pg=SA3-PA72&lpg=SA3-PA72&dq=ice+mirror+amplifier&source=bl&ots=6MMzesOsvK&sig=ACfU3U1nDXhzPldsh3S8lqQRyjSnOR0F6w&hl=en&sa=X&ved=2ahUKEwjXgu3b7s72AhVimeAKHb36BIkQ6AF6BAgfEAM#v=onepage&q=ice%20mirror%20amplifier&f=false




Ice crystals are solid ice exhibiting atomic ordering on various length scales and include hexagonal columns, hexagonal plates, dendritic crystals, and diamond dust.
https://en.wikipedia.org/wiki/Ice_crystals


An ice nucleus, also known as an ice nucleating particle (INP), is a particle which acts as the nucleus for the formation of an ice crystal in the atmosphere.
https://en.wikipedia.org/wiki/Ice_nucleus

