Blog Archive

Monday, September 13, 2021

09-13-2021-0327 - Free radical reactions catalyzed by ultraviolet light from the sun oxidize unburned hydrocarbons to aldehydes, ketones, and dicarbonyl compounds, whose secondary reactions create peroxyacyl radicals, which combine with nitrogen dioxide to form peroxyacyl nitrates.

PANs are secondary pollutants, which means they are not directly emitted as exhaust from power plants or internal combustion engines, but they are formed from other pollutants by chemical reactions in the atmosphere.

Free radical reactions catalyzed by ultraviolet light from the sun oxidize unburned hydrocarbons to aldehydes, ketones, and dicarbonyl compounds, whose secondary reactions create peroxyacyl radicals, which combine with nitrogen dioxide to form peroxyacyl nitrates.

The most common peroxyacyl radical is peroxyacetyl, which can be formed from the free radical oxidation of acetaldehyde, various ketones, or the photolysis of dicarbonyl compounds such as methylglyoxal or diacetyl.


Since they dissociate quite slowly in the atmosphere into radicals and NO2, PANs are able to transport these unstable compounds far away from their urban and industrial sources.

Both PANs and their chlorinated derivatives are mutagenic and may be a contributing factor in skin cancer.

https://en.wikipedia.org/wiki/Peroxyacyl_nitrates


Photodissociation, photolysis, or photodecomposition is a chemical reaction in which a chemical compound is broken down by photons. It is defined as the interaction of one or more photons with one target molecule. Photodissociation is not limited to visible light. Any photon with sufficient energy can affect the chemical bonds of a chemical compound. Since a photon's energy is inversely proportional to its wavelength, electromagnetic waves with the energy of visible light or higher, such as ultraviolet light, X-rays and gamma rays are usually involved in such reactions.
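Since the claim that photon energy scales inversely with wavelength is what makes UV and shorter wavelengths chemically potent, a quick back-of-the-envelope check makes it concrete. The sketch below (a simple illustration, not from the source article) computes E = hc/λ for a few representative wavelengths; typical single covalent bonds sit roughly in the 3–5 eV range, which is why red light generally cannot break them but UV can.

```python
# Illustrative only: photon energy E = h*c/lambda for representative wavelengths.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

# Typical single-bond energies are roughly 3-5 eV for comparison.
for name, wavelength_nm in [("red light", 700.0), ("UV", 300.0),
                            ("X-ray", 1.0), ("gamma ray", 0.001)]:
    e_ev = H * C / (wavelength_nm * 1e-9) / EV
    print(f"{name:10s} {wavelength_nm:>8} nm  ->  {e_ev:12.1f} eV")
```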

Photolysis in photosynthesis

Photolysis is part of the light-dependent reaction (also called the light phase, photochemical phase, or Hill reaction) of photosynthesis. The general reaction of photosynthetic photolysis can be given as

H2A + 2 photons (light) → 2 e− + 2 H+ + A

The chemical nature of "A" depends on the type of organism. In purple sulfur bacteria, hydrogen sulfide (H2S) is oxidized to sulfur (S). In oxygenic photosynthesis, water (H2O) serves as a substrate for photolysis resulting in the generation of diatomic oxygen (O2). This is the process which returns oxygen to Earth's atmosphere. Photolysis of water occurs in the thylakoids of cyanobacteria and the chloroplasts of green algae and plants.

Energy transfer models

The conventional, semi-classical model describes the photosynthetic energy transfer process as one in which excitation energy hops from light-capturing pigment molecules to reaction center molecules step-by-step down the molecular energy ladder.

The effectiveness of photons of different wavelengths depends on the absorption spectra of the photosynthetic pigments in the organism. Chlorophylls absorb light in the violet-blue and red parts of the spectrum, while accessory pigments capture other wavelengths as well. The phycobilins of red algae absorb blue-green light which penetrates deeper into water than red light, enabling them to photosynthesize in deep waters. Each absorbed photon causes the formation of an exciton (an electron excited to a higher energy state) in the pigment molecule. The energy of the exciton is transferred to a chlorophyll molecule (P680, where P stands for pigment and 680 for its absorption maximum at 680 nm) in the reaction center of photosystem II via resonance energy transfer. P680 can also directly absorb a photon at a suitable wavelength.

Photolysis during photosynthesis occurs in a series of light-driven oxidation events. The energized electron (exciton) of P680 is captured by a primary electron acceptor of the photosynthetic electron transfer chain and thus exits photosystem II. In order to repeat the reaction, the electron in the reaction center needs to be replenished. This occurs by oxidation of water in the case of oxygenic photosynthesis. The electron-deficient reaction center of photosystem II (P680*) is the strongest biological oxidizing agent yet discovered, which allows it to break apart molecules as stable as water.[1]

The water-splitting reaction is catalyzed by the oxygen evolving complex of photosystem II. This protein-bound inorganic complex contains four manganese ions, plus calcium and chloride ions as cofactors. Two water molecules are complexed by the manganese cluster, which then undergoes a series of four electron removals (oxidations) to replenish the reaction center of photosystem II. At the end of this cycle, free oxygen (O2) is generated and the hydrogen of the water molecules has been converted to four protons released into the thylakoid lumen (Dolai's S-state diagrams).[citation needed]

These protons, as well as additional protons pumped across the thylakoid membrane coupled with the electron transfer chain, form a proton gradient across the membrane that drives photophosphorylation and thus the generation of chemical energy in the form of adenosine triphosphate (ATP). The electrons reach the P700 reaction center of photosystem I where they are energized again by light. They are passed down another electron transfer chain and finally combine with the coenzyme NADP+ and protons outside the thylakoids to form NADPH. Thus, the net oxidation reaction of water photolysis can be written as:

2 H2O + 2 NADP+ + 8 photons (light) → 2 NADPH + 2 H+ + O2

The free energy change (ΔG) for this reaction is 102 kilocalories per mole. Since the energy of light at 700 nm is about 40 kilocalories per mole of photons, approximately 320 kilocalories of light energy are available for the reaction. Therefore, approximately one-third of the available light energy is captured as NADPH during photolysis and electron transfer. An equal amount of ATP is generated by the resulting proton gradient. Oxygen as a byproduct is of no further use to the reaction and thus released into the atmosphere.[2]
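The one-third figure follows directly from the numbers quoted above. A minimal sketch of the bookkeeping, using only the article's own values (8 photons, ~40 kcal per mole of photons at 700 nm, ΔG = 102 kcal/mol):

```python
# Energy bookkeeping for water photolysis, using the figures quoted above.
photons = 8
kcal_per_mol_photons = 40.0   # light at 700 nm, per the text
delta_g = 102.0               # kcal/mol, per the text

light_in = photons * kcal_per_mol_photons             # ~320 kcal of light energy
print(f"light input: {light_in:.0f} kcal")
print(f"captured as NADPH: {delta_g / light_in:.0%}") # ~ one-third
```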

Quantum models

In 2007 a quantum model was proposed by Graham Fleming and his co-workers which includes the possibility that photosynthetic energy transfer might involve quantum oscillations, explaining its unusually high efficiency.[3]

According to Fleming[4] there is direct evidence that remarkably long-lived wavelike electronic quantum coherence plays an important part in energy transfer processes during photosynthesis, which can explain the extreme efficiency of the energy transfer because it enables the system to sample all the potential energy pathways, with low loss, and choose the most efficient one. This claim has, however, since been proven wrong in several publications.[5][6][7][8][9]

This approach has been further investigated by Gregory Scholes and his team at the University of Toronto, which in early 2010 published research results that indicate that some marine algae make use of quantum-coherent electronic energy transfer (EET) to enhance the efficiency of their energy harnessing.[10][11][12]

Photoinduced proton transfer

Photoacids are molecules that upon light absorption undergo a proton transfer to form the photobase.

In these reactions the dissociation occurs in the electronically excited state. After proton transfer and relaxation to the electronic ground state, the proton and acid recombine to form the photoacid again.

Photoacids are a convenient source of protons for inducing pH jumps in ultrafast laser spectroscopy experiments.

Photolysis in the atmosphere

Photolysis occurs in the atmosphere as part of a series of reactions by which primary pollutants such as hydrocarbons and nitrogen oxides react to form secondary pollutants such as peroxyacyl nitrates. See photochemical smog.

The two most important photodissociation reactions in the troposphere are firstly:

O3 + hν → O2 + O(1D)     λ < 320 nm

which generates an excited oxygen atom which can react with water to give the hydroxyl radical:

O(1D) + H2O → 2 OH

The hydroxyl radical is central to atmospheric chemistry as it initiates the oxidation of hydrocarbons in the atmosphere and so acts as a detergent.

Secondly the reaction:

NO2 + hν → NO + O

is a key reaction in the formation of tropospheric ozone.

The formation of the ozone layer is also caused by photodissociation. Ozone in the Earth's stratosphere is created by ultraviolet light striking oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen). The atomic oxygen then combines with unbroken O2 to create ozone, O3. In addition, photolysis is the process by which CFCs are broken down in the upper atmosphere to form ozone-destroying chlorine free radicals.

Astrophysics

In astrophysics, photodissociation is one of the major processes through which molecules are broken down (even as new molecules are being formed). Because of the vacuum of the interstellar medium, molecules and free radicals can exist for a long time, and photodissociation is the main path by which they are broken down. Photodissociation rates are important in the study of the composition of interstellar clouds in which stars are formed.

Examples of photodissociation in the interstellar medium are (hν is the energy of a single photon of frequency ν):

H2O + hν → H + OH
CH4 + hν → CH3 + H

Atmospheric gamma-ray bursts

Currently orbiting satellites detect an average of about one gamma-ray burst per day. Because gamma-ray bursts are visible to distances encompassing most of the observable universe, a volume encompassing many billions of galaxies, this suggests that gamma-ray bursts must be exceedingly rare events per galaxy.

Measuring the exact rate of gamma-ray bursts is difficult, but for a galaxy of approximately the same size as the Milky Way, the expected rate (for long GRBs) is about one burst every 100,000 to 1,000,000 years.[13] Only a few percent of these would be beamed towards Earth. Estimates of rates of short GRBs are even more uncertain because of the unknown beaming fraction, but are probably comparable.[14]

A gamma-ray burst in the Milky Way, if close enough to Earth and beamed towards it, could have significant effects on the biosphere. The absorption of radiation in the atmosphere would cause photodissociation of nitrogen, generating nitric oxide that would act as a catalyst to destroy ozone.[15]

The atmospheric photodissociation would yield:

  • NO2 (consumes up to 400 ozone molecules)
  • CH2 (nominal)
  • CH4 (nominal)
  • CO2

(incomplete)

According to a 2004 study, a GRB at a distance of about a kiloparsec could destroy up to half of Earth's ozone layer; the direct UV irradiation from the burst combined with additional solar UV radiation passing through the diminished ozone layer could then have potentially significant impacts on the food chain and potentially trigger a mass extinction.[16][17] The authors estimate that one such burst is expected per billion years, and hypothesize that the Ordovician-Silurian extinction event could have been the result of such a burst.

There are strong indications that long gamma-ray bursts preferentially or exclusively occur in regions of low metallicity. Because the Milky Way has been metal-rich since before the Earth formed, this effect may diminish or even eliminate the possibility that a long gamma-ray burst has occurred within the Milky Way within the past billion years.[18] No such metallicity biases are known for short gamma-ray bursts. Thus, depending on their local rate and beaming properties, the possibility for a nearby event to have had a large impact on Earth at some point in geological time may still be significant.[19]

Multiple photon dissociation

Single photons in the infrared spectral range usually are not energetic enough for direct photodissociation of molecules. However, after absorption of multiple infrared photons a molecule may gain internal energy to overcome its barrier for dissociation. Multiple photon dissociation (MPD, IRMPD with infrared radiation) can be achieved by applying high power lasers, e.g. a carbon dioxide laser, or a free electron laser, or by long interaction times of the molecule with the radiation field without the possibility for rapid cooling, e.g. by collisions. The latter method allows even for MPD induced by black-body radiation, a technique called blackbody infrared radiative dissociation (BIRD).
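To see why many infrared photons are required, compare a single photon's energy with a typical dissociation barrier. The numbers below are illustrative assumptions (a CO2-laser wavelength of 10.6 μm and a nominal barrier of ~3 eV), not values from the article:

```python
# Rough estimate: how many 10.6-um (CO2 laser) photons to climb a ~3 eV barrier.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

photon_ev = H * C / 10.6e-6 / EV    # ~0.12 eV per infrared photon
barrier_ev = 3.0                    # assumed dissociation barrier
print(f"photon energy: {photon_ev:.3f} eV")
print(f"photons needed: ~{barrier_ev / photon_ev:.0f}")   # a few dozen
```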


References

  1. ^ Campbell, Neil A.; Reece, Jane B. (2005). Biology (7th ed.). San Francisco: Pearson – Benjamin Cummings. pp. 186–191. ISBN 0-8053-7171-0.
  2. ^ Raven, Peter H.; Evert, Ray F.; Eichhorn, Susan E. (2005). Biology of Plants (7th ed.). New York: W.H. Freeman and Company Publishers. pp. 115–127. ISBN 0-7167-1007-2.
  3. ^ Engel, Gregory S.; Calhoun, Tessa R.; Read, Elizabeth L.; Ahn, Tae-Kyu; Mančal, Tomáš; Cheng, Yuan-Chung; Blankenship, Robert E.; Fleming, Graham R. (2007). "Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems". Nature 446: 782–786. Bibcode:2007Natur.446..782E. doi:10.1038/nature05678. PMID 17429397.
  4. ^ "Quantum secrets of photosynthesis revealed". http://www.physorg.com/news95605211.html
  5. ^ Tempelaar, R.; Jansen, T. L. C.; Knoester, J. (2014). "Vibrational Beatings Conceal Evidence of Electronic Coherence in the FMO Light-Harvesting Complex". J. Phys. Chem. B 118 (45): 12865–12872. doi:10.1021/jp510074q. PMID 25321492.
  6. ^ Christenson, N.; Kauffmann, H. F.; Pullerits, T.; Mancal, T. (2012). "Origin of Long-Lived Coherences in Light-Harvesting Complexes". J. Phys. Chem. B 116: 7449–7454. arXiv:1201.6325. doi:10.1021/jp304649c. PMC 3789255. PMID 22642682.
  7. ^ Thyrhaug, E.; Zidek, K.; Dostal, J.; Bina, D.; Zigmantas, D. (2016). "Exciton Structure and Energy Transfer in the Fenna–Matthews–Olson Complex". J. Phys. Chem. Lett. 7 (9): 1653–1660. doi:10.1021/acs.jpclett.6b00534. PMID 27082631.
  8. ^ Dijkstra, A. G.; Tanimura, Y. (2012). "The role of the environment time scale in light-harvesting efficiency and coherent oscillations". New J. Phys. 14 (7): 073027. Bibcode:2012NJPh...14g3027D. doi:10.1088/1367-2630/14/7/073027.
  9. ^ Monahan, D. M.; Whaley-Mayda, L.; Ishizaki, A.; Fleming, G. R. (2015). "Influence of weak vibrational-electronic couplings on 2D electronic spectra and inter-site coherence in weakly coupled photosynthetic complexes". J. Chem. Phys. 143 (6): 065101. Bibcode:2015JChPh.143f5101M. doi:10.1063/1.4928068. PMID 26277167.
  10. ^ "Scholes Group Research". Archived from the original on 2018-09-30. Retrieved 2010-03-23.
  11. ^ Scholes, Gregory D. (7 January 2010). "Quantum-coherent electronic energy transfer: Did Nature think of it first?". Journal of Physical Chemistry Letters 1 (1): 2–8. doi:10.1021/jz900062f.
  12. ^ Collini, Elisabetta; Wong, Cathy Y.; Wilk, Krystyna E.; Curmi, Paul M. G.; Brumer, Paul; Scholes, Gregory D. (4 February 2010). "Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature". Nature 463 (7281): 644–7. Bibcode:2010Natur.463..644C. doi:10.1038/nature08811. PMID 20130647.
  13. ^ Podsiadlowski 2004[citation not found]
  14. ^ Guetta 2006[citation not found]
  15. ^ Thorsett 1995[citation not found]
  16. ^ Melott 2004[citation not found]
  17. ^ Wanjek 2005[citation not found]
  18. ^ Stanek 2006[citation not found]
  19. ^ Ejzak 2007[citation not found]



https://en.wikipedia.org/wiki/Photodissociation


Photoacids are molecules which become more acidic upon absorption of light. Either the light causes a photodissociation to produce a strong acid or the light causes photoassociation (such as a ring forming reaction) that leads to an increased acidity and dissociation of a proton.

There are two main types of molecules that release protons upon illumination: photoacid generators (PAGs) and photoacids (PAHs). PAGs undergo proton photodissociation irreversibly, while PAHs undergo proton photodissociation followed by thermal reassociation.[1] In the latter case, the excited state is strongly acidic, but the dissociation is reversible.

Photoacid generators

An example of a photoacid generator that works by photodissociation is triphenylsulfonium triflate. This colourless salt consists of a sulfonium cation and the triflate anion. Many related salts are known, including those with other noncoordinating anions and those with diverse substituents on the phenyl rings.

The triphenylsulfonium salts absorb at a wavelength of 233 nm, which induces dissociation of one of the three phenyl rings. This dissociated phenyl radical then recombines with the remaining diphenylsulfonium radical cation to liberate an H+ ion.[2] The second reaction is irreversible, and therefore the entire process is irreversible, so triphenylsulfonium triflate is a photoacid generator. The ultimate products are thus a neutral organic sulfide and the strong acid triflic acid.

[(C6H5)3S+][CF3SO3−] + hν → [(C6H5)2S+•][CF3SO3−] + C6H5•
[(C6H5)2S+•][CF3SO3−] + C6H5• → (C6H5C6H4)(C6H5)S + CF3SO3H

Applications of these photoacids include photolithography[3] and catalysis of the polymerization of epoxides.

Photoacids

An example of a photoacid which undergoes excited-state proton transfer without prior photolysis is the fluorescent dye pyranine (8-hydroxy-1,3,6-pyrenetrisulfonate, or HPTS).[4]

The Förster cycle was proposed by Theodor Förster[5] and combines knowledge of the ground state acid dissociation constant (pKa), absorption, and fluorescence spectra to predict the pKa in the excited state of a photoacid.
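As a rough illustration of how the Förster cycle is applied, the sketch below estimates the excited-state pKa from the 0-0 transition wavelengths of the acid and conjugate-base forms via the standard relation pKa* = pKa − N_A·h·c·(1/λ_HA − 1/λ_A−)/(ln 10 · RT). The pyranine-like inputs (ground-state pKa ≈ 7.2, acid form absorbing near 405 nm, base form near 455 nm) are illustrative assumptions, not figures from the article:

```python
# Forster-cycle estimate of the excited-state pKa (pKa*), illustrative values.
import math

NA, H, C, R = 6.022e23, 6.626e-34, 2.998e8, 8.314  # SI constants

def pka_star(pka, lam_acid_nm, lam_base_nm, T=298.0):
    # Molar energy gap between the acid and base 0-0 transitions, in J/mol.
    de = NA * H * C * (1 / (lam_acid_nm * 1e-9) - 1 / (lam_base_nm * 1e-9))
    return pka - de / (math.log(10) * R * T)

# Assumed pyranine-like inputs: pKa ~7.2, acid ~405 nm, base ~455 nm.
print(f"estimated pKa* = {pka_star(7.2, 405, 455):.1f}")   # ~1.5: far more acidic
```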

References

  1. ^ Johns, V. K.; Patel, P. K.; Hassett, S.; Calvo-Marzal, P.; Qin, Y.; Chumbimuni-Torres, K. Y. "Visible Light Activated Ion Sensing Using a Photoacid Polymer for Calcium Detection". Anal. Chem. 2014, 86, 6184–6187. doi:10.1021/ac500956j
  2. ^ Hinsberg, W. D.; Wallraff, G. M. "Lithographic Resists". Kirk-Othmer Encyclopedia of Chemical Technology, Wiley-VCH, Weinheim, 2005. doi:10.1002/0471238961.1209200808091419.a01.pub2
  3. ^ Crivello, J. V. "The Discovery and Development of Onium Salt Cationic Photoinitiators". J. Polym. Sci., Part A: Polym. Chem. 1999, 37, 4241–4254. doi:10.1002/(SICI)1099-0518(19991201)37:23<4241::AID-POLA1>3.0.CO;2-R
  4. ^ Amdursky, N.; Simkovitch, R.; Huppert, D. "Excited-state proton transfer of photoacids adsorbed on biomaterials". J. Phys. Chem. B 2014, 118, 13859–13869. doi:10.1021/jp509153r
  5. ^ Kramer, Horst E. A.; Fischer, Peter (9 November 2010). "The Scientific Work of Theodor Förster: A Brief Sketch of his Life and Personality". ChemPhysChem 12 (3): 555–558. doi:10.1002/cphc.201000733. PMID 21344592.

https://en.wikipedia.org/wiki/Photoacid


Pyranine is a hydrophilic, pH-sensitive fluorescent dye from the group of chemicals known as arylsulfonates.[1][2] Pyranine is soluble in water and has applications as a coloring agent, biological stain, optical detecting reagent, and a pH indicator.[3][4] One example would be the measurement of intracellular pH.[5] Pyranine is also found in yellow highlighters, giving them their characteristic fluorescence and bright yellow-green colour. It is also found in some types of soap.[6]

It is synthesized from pyrenetetrasulfonic acid and a solution of sodium hydroxide in water under reflux.[7] The trisodium salt crystallizes as yellow needles when adding an aqueous solution of sodium chloride.

https://en.wikipedia.org/wiki/Pyranine

Triflic acid, the short name for trifluoromethanesulfonic acid (also TFMS, TFSA, HOTf, or TfOH), is a sulfonic acid with the chemical formula CF3SO3H. It is one of the strongest known acids. Triflic acid is mainly used in research as a catalyst for esterification.[2][3] It is a hygroscopic, colorless, slightly viscous liquid and is soluble in polar solvents.


Trifluoromethanesulfonic acid is produced industrially by electrochemical fluorination (ECF) of methanesulfonic acid:

CH3SO3H + 4 HF → CF3SO2F + H2O + 3 H2

The resulting CF3SO2F is hydrolyzed, and the resulting triflate salt is reprotonated. Alternatively, trifluoromethanesulfonic acid arises by oxidation of trifluoromethylsulfenyl chloride:[4]

CF3SCl + 2 Cl2 + 3 H2O → CF3SO3H + 5 HCl

Triflic acid is purified by distillation from triflic anhydride.[3]

Trifluoromethanesulfonic acid was first synthesized in 1954 by Robert Haszeldine and Kidd by the following reaction:[5]

[reaction scheme image: original synthesis of trifluoromethanesulfonic acid]

In the laboratory, triflic acid is useful in protonations because the conjugate base of triflic acid is nonnucleophilic. It is also used as an acidic titrant in nonaqueous acid-base titration because it behaves as a strong acid in many solvents (acetonitrile, acetic acid, etc.) where common mineral acids (such as HCl or H2SO4) are only moderately strong.

With Ka = 5×10^14 (pKa = −14.7 ± 2.0),[1] triflic acid qualifies as a superacid. It owes many of its useful properties to its great thermal and chemical stability. Both the acid and its conjugate base CF3SO3−, known as triflate, resist oxidation/reduction reactions, whereas many strong acids are oxidizing, e.g. perchloric or nitric acid. Further recommending its use, triflic acid does not sulfonate substrates, which can be a problem with sulfuric acid, fluorosulfuric acid, and chlorosulfonic acid. Below is a prototypical sulfonation, which HOTf does not undergo:

C6H6 + H2SO4 → C6H5(SO3H) + H2O
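As a quick sanity check on the acidity figures quoted above, the pKa follows from the Ka by definition:

```python
# pKa = -log10(Ka); with Ka = 5e14 this gives ~ -14.7, matching the text.
import math
print(-math.log10(5e14))
```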

Triflic acid fumes in moist air and forms a stable solid monohydrate, CF3SO3H·H2O, melting point 34 °C.

Salt and complex formation

The triflate ligand is labile, reflecting its low basicity. Trifluoromethanesulfonic acid exothermically reacts with metal carbonates, hydroxides, and oxides. Illustrative is the synthesis of Cu(OTf)2.[6]

CuCO3 + 2 CF3SO3H → Cu(O3SCF3)2 + H2O + CO2

Chloride ligands can be converted to the corresponding triflates:

3 CF3SO3H + [Co(NH3)5Cl]Cl2 → [Co(NH3)5O3SCF3](O3SCF3)2 + 3 HCl

This conversion is conducted in neat HOTf at 100 °C, followed by precipitation of the salt upon the addition of ether.

Organic chemistry

Triflic acid reacts with acyl halides to give mixed triflate anhydrides, which are strong acylating agents, e.g. in Friedel–Crafts reactions.

CH3C(O)Cl + CF3SO3H → CH3C(O)OSO2CF3 + HCl
CH3C(O)OSO2CF3 + C6H6 → CH3C(O)C6H5 + CF3SO3H

Triflic acid catalyzes the reaction of aromatic compounds with sulfonyl chlorides, probably also through the intermediacy of a mixed anhydride of the sulfonic acid.

Triflic acid promotes other Friedel–Crafts-like reactions including the cracking of alkanes and alkylation of alkenes, which are very important to the petroleum industry. Catalysts derived from triflic acid are very effective in isomerizing straight-chain or slightly branched hydrocarbons, which can increase the octane rating of a petroleum-based fuel.

Triflic acid reacts exothermically with alcohols to produce ethers and olefins.


Dehydration gives the acid anhydride, trifluoromethanesulfonic anhydride, (CF3SO2)2O.

Triflic acid is one of the strongest acids. Contact with skin causes severe burns with delayed tissue destruction. On inhalation it causes fatal spasms, inflammation and edema.[7]

Like sulfuric acid, triflic acid must be slowly added to polar solvents to prevent thermal runaway.

https://en.wikipedia.org/wiki/Triflic_acid


Thermal runaway describes a process that is accelerated by increased temperature, in turn releasing energy that further increases temperature. Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback.

In chemistry (and chemical engineering), thermal runaway is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering, thermal runaway is typically associated with increased current flow and power dissipation. Thermal runaway can occur in civil engineering, notably when the heat released by large amounts of curing concrete is not controlled.[citation needed] In astrophysics, runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the "helium flash".

Some climate researchers have postulated that a global average temperature increase of 3–4 degrees Celsius above the preindustrial baseline could lead to a further unchecked increase in surface temperatures. For example, releases of methane, a greenhouse gas more potent than CO2, from wetlands, melting permafrost and continental margin seabed clathrate deposits could be subject to positive feedback.[1][2]

Thermal runaway is also called thermal explosion in chemical engineering, or runaway reaction in organic chemistry. It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents, most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship's hold, and the 1976 explosion of zoalene, in a drier, at King's Lynn.[3] Frank-Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of rapidly increasing reaction rate.

Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy. Many reactions are highly exothermic, so many industrial-scale and oil refinery processes have some level of risk of thermal runaway. These include hydrocracking, hydrogenation, alkylation (SN2), oxidation, metalation and nucleophilic aromatic substitution. For example, oxidation of cyclohexane into cyclohexanol and cyclohexanone and ortho-xylene into phthalic anhydride have led to catastrophic explosions when reaction control failed.

Thermal runaway may result from unwanted exothermic side reaction(s) that begin at higher temperatures, following an initial accidental overheating of the reaction mixture. This scenario was behind the Seveso disaster, where thermal runaway heated a reaction to temperatures such that in addition to the intended 2,4,5-trichlorophenol, poisonous 2,3,7,8-tetrachlorodibenzo-p-dioxin was also produced, and was vented into the environment after the reactor's rupture disk burst.[4]

Thermal runaway is most often caused by failure of the reactor vessel's cooling system. Failure of the mixer can result in localized heating, which initiates thermal runaway. Similarly, in flow reactors, localized insufficient mixing causes hotspots to form, wherein thermal runaway conditions occur, which causes violent blowouts of reactor contents and catalysts. Incorrect equipment component installation is also a common cause. Many chemical production facilities are designed with high-volume emergency venting, a measure to limit the extent of injury and property damage when such accidents occur.

At large scale, it is unsafe to "charge all reagents and mix", as is done in laboratory scale. This is because the amount of reaction scales with the cube of the size of the vessel (V ∝ r³), but the heat transfer area scales with the square of the size (A ∝ r²), so that the heat production-to-area ratio scales with the size (V/A ∝ r). Consequently, reactions that easily cool fast enough in the laboratory can dangerously self-heat at ton scale. In 2007, this kind of erroneous procedure caused an explosion of a 2,400 U.S. gallons (9,100 L)-reactor used to metalate methylcyclopentadiene with metallic sodium, causing the loss of four lives and parts of the reactor being flung 400 feet (120 m) away.[5][6] Thus, industrial scale reactions prone to thermal runaway are preferably controlled by the addition of one reagent at a rate corresponding to the available cooling capacity.
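The cube/square argument above is easy to make quantitative. The sketch below uses invented heat-generation and wall-cooling figures (placeholders, not data from the article) just to show the generation-to-cooling ratio growing linearly with vessel radius:

```python
# Illustrative scaling check: heat generation ~ volume (r^3), cooling ~ area (r^2).
import math

q_gen = 5.0      # W per litre of reacting mixture (assumed)
q_cool = 500.0   # W per m^2 of cooled wall at the working temperature difference (assumed)

for r in (0.1, 0.5, 1.0):                       # vessel radius in metres
    volume_l = (4 / 3) * math.pi * r**3 * 1000  # litres
    area_m2 = 4 * math.pi * r**2
    ratio = (q_gen * volume_l) / (q_cool * area_m2)
    print(f"r={r:3.1f} m  generation/cooling = {ratio:5.2f}")  # grows ~linearly in r
```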

Some laboratory reactions must be run under extreme cooling, because they are very prone to hazardous thermal runaway. For example, in Swern oxidation, the formation of the sulfonium chloride must be performed in a cooled system (−30 °C), because at room temperature the reaction undergoes explosive thermal runaway.[6]

Microwave heating

Microwaves are used for heating of various materials in cooking and various industrial processes. The rate of heating of the material depends on the energy absorption, which depends on the dielectric constant of the material. The dependence of dielectric constant on temperature varies for different materials; some materials display significant increase with increasing temperature. This behavior, when the material gets exposed to microwaves, leads to selective local overheating, as the warmer areas are better able to accept further energy than the colder areas—potentially dangerous especially for thermal insulators, where the heat exchange between the hot spots and the rest of the material is slow. These materials are called thermal runaway materials. This phenomenon occurs in some ceramics.

Electrical engineering

Some electronic components develop lower resistances or lower triggering voltages (for nonlinear resistances) as their internal temperature increases. If circuit conditions cause markedly increased current flow in these situations, increased power dissipation may raise the temperature further by Joule heating. A vicious circle or positive feedback effect of thermal runaway can cause failure, sometimes in a spectacular fashion (e.g. electrical explosion or fire). To prevent these hazards, well-designed electronic systems typically incorporate current limiting protection, such as thermal fuses, circuit breakers, or PTC current limiters.

To handle larger currents, circuit designers may connect multiple lower-capacity devices (e.g. transistors, diodes, or MOVs) in parallel. This technique can work well, but is susceptible to a phenomenon called current hogging, in which the current is not shared equally across all devices. Typically, one device may have a slightly lower resistance, and thus draws more current, heating it more than its sibling devices, causing its resistance to drop further. The electrical load ends up funneling into a single device, which then rapidly fails. Thus, an array of devices may end up no more robust than its weakest component.

The current-hogging effect can be reduced by carefully matching the characteristics of each paralleled device, or by using other design techniques to balance the electrical load. However, maintaining load balance under extreme conditions may not be straightforward. Devices with an intrinsic positive temperature coefficient (PTC) of electrical resistance are less prone to current hogging, but thermal runaway can still occur because of poor heat sinking or other problems.
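A toy simulation can show how a small initial mismatch gets amplified. The sketch below (all device parameters invented for illustration) forces a fixed total current through two parallel devices whose resistance falls as they heat; the initially lower-resistance device draws more current, dissipates more power, heats more, and ends up with a disproportionate share of the load:

```python
# Illustrative current-hogging simulation: two parallel NTC-like devices
# sharing a fixed total current. All parameters are invented placeholders.
I_TOTAL = 6.0          # amps forced through the parallel pair
R0 = [1.00, 0.98]      # ohms at ambient; the second device starts 2% lower
K = 0.004              # fractional resistance drop per kelvin (assumed NTC)
R_TH = 20.0            # K/W thermal resistance to ambient, per device

temps = [0.0, 0.0]     # temperature rise above ambient
for _ in range(50):    # relax to the coupled electro-thermal steady state
    R = [r0 * max(1 - K * t, 0.05) for r0, t in zip(R0, temps)]
    G = [1 / r for r in R]
    currents = [I_TOTAL * g / sum(G) for g in G]        # current divider
    power = [i * i * r for i, r in zip(currents, R)]
    temps = [p * R_TH for p in power]                   # steady-state rise

for i, (amp, t) in enumerate(zip(currents, temps)):
    print(f"device {i}: {amp:.2f} A, +{t:.0f} K")
# The initial 2% resistance mismatch ends up amplified severalfold.
```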

Many electronic circuits contain special provisions to prevent thermal runaway. This is most often seen in transistor biasing arrangements for high-power output stages. However, when equipment is used above its designed ambient temperature, thermal runaway can still occur in some cases. This occasionally causes equipment failures in hot environments, or when air cooling vents are blocked.

Semiconductors

Silicon shows a peculiar profile, in that its electrical resistance increases with temperature up to about 160 °C, then starts decreasing, and drops further when the melting point is reached. This can lead to thermal runaway phenomena within internal regions of the semiconductor junction; the resistance decreases in the regions which become heated above this threshold, allowing more current to flow through the overheated regions, in turn causing yet more heating in comparison with the surrounding regions, which leads to further temperature increase and resistance decrease. This leads to the phenomenon of current crowding and formation of current filaments (similar to current hogging, but within a single device), and is one of the underlying causes of many semiconductor junction failures.

Bipolar junction transistors (BJTs)

Leakage current increases significantly in bipolar transistors (especially germanium-based bipolar transistors) as they increase in temperature. Depending on the design of the circuit, this increase in leakage current can increase the current flowing through a transistor and thus the power dissipation, causing a further increase in collector-to-emitter leakage current. This is frequently seen in a push–pull stage of a class AB amplifier. If the pull-up and pull-down transistors are biased to have minimal crossover distortion at room temperature, and the biasing is not temperature-compensated, then as the temperature rises both transistors will be increasingly biased on, causing current and power to further increase, and eventually destroying one or both devices.

One rule of thumb to avoid thermal runaway is to keep the operating point of a BJT so that Vce ≤ Vcc/2.
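The rule can be checked with the power expression for a resistively loaded stage: with P = Ic·Vce and Vce = Vcc − Ic·Rc, the slope dP/dIc = Vcc − 2·Ic·Rc changes sign exactly at Vce = Vcc/2, so below that point extra collector current actually reduces dissipation, opposing runaway. A small numeric check with assumed component values:

```python
# Sign of dP/dIc around the Vce = Vcc/2 point (assumed Vcc and Rc values).
Vcc, Rc = 12.0, 100.0          # illustrative supply voltage and collector load

for ic in (0.02, 0.06, 0.10):  # gives Vce = 10 V, 6 V, 2 V
    vce = Vcc - ic * Rc
    slope = Vcc - 2 * ic * Rc  # dP/dIc: positive = destabilizing, negative = stabilizing
    print(f"Vce={vce:4.1f} V  dP/dIc={slope:+5.1f} W/A")
```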

Another practice is to mount a thermal feedback sensing transistor or other device on the heat sink, to control the crossover bias voltage. As the output transistors heat up, so does the thermal feedback transistor. This in turn causes the thermal feedback transistor to turn on at a slightly lower voltage, reducing the crossover bias voltage, and so reducing the heat dissipated by the output transistors.

If multiple BJT transistors are connected in parallel (which is typical in high current applications), a current hogging problem can occur. Special measures must be taken to control this characteristic vulnerability of BJTs.

In power transistors (which effectively consist of many small transistors in parallel), current hogging can occur between different parts of the transistor itself, with one part of the transistor becoming hotter than the others. This is called second breakdown, and can result in destruction of the transistor even when the average junction temperature seems to be at a safe level.

Power MOSFETs

Power MOSFETs typically increase their on-resistance with temperature. Under some circumstances, power dissipated in this resistance causes more heating of the junction, which further increases the junction temperature, in a positive feedback loop. As a consequence, power MOSFETs have stable and unstable regions of operation.[7] However, the increase of on-resistance with temperature helps balance current across multiple MOSFETs connected in parallel, so current hogging does not occur. If a MOSFET transistor produces more heat than the heatsink can dissipate, then thermal runaway can still destroy the transistors. This problem can be alleviated to a degree by lowering the thermal resistance between the transistor die and the heatsink. See also Thermal Design Power.

Metal oxide varistors (MOVs)

Metal oxide varistors typically develop lower resistance as they heat up. If connected directly across an AC or DC power bus (a common usage for protection against electrical transients), a MOV which has developed a lowered trigger voltage can slide into catastrophic thermal runaway, possibly culminating in a small explosion or fire.[8] To prevent this possibility, fault current is typically limited by a thermal fuse, circuit breaker, or other current limiting device.

Tantalum capacitors

Tantalum capacitors are, under some conditions, prone to self-destruction by thermal runaway. The capacitor typically consists of a sintered tantalum sponge acting as the anode, a manganese dioxide cathode, and a dielectric layer of tantalum pentoxide created on the tantalum sponge surface by anodizing. It may happen that the tantalum oxide layer has weak spots that undergo dielectric breakdown during a voltage spike. The tantalum sponge then comes into direct contact with the manganese dioxide, and increased leakage current causes localized heating; usually, this drives an endothermic chemical reaction that produces manganese(III) oxide and regenerates (self-heals) the tantalum oxide dielectric layer.

However, if the energy dissipated at the failure point is high enough, a self-sustaining exothermic reaction can start, similar to the thermite reaction, with metallic tantalum as fuel and manganese dioxide as oxidizer. This undesirable reaction will destroy the capacitor, producing smoke and possibly flame.[9]

Therefore, tantalum capacitors can be freely deployed in small-signal circuits, but application in high-power circuits must be carefully designed to avoid thermal runaway failures.

Digital logic

The leakage current of logic switching transistors increases with temperature. In rare instances, this may lead to thermal runaway in digital circuits. This is not a common problem, since leakage currents usually make up a small portion of overall power consumption, so the increase in power is fairly modest — for an Athlon 64, the power dissipation increases by about 10% for every 30 degrees Celsius.[10] For a device with a TDP of 100 W, for thermal runaway to occur, the heat sink would have to have a thermal resistivity of over 3 K/W (kelvins per watt), which is about 6 times worse than a stock Athlon 64 heat sink. (A stock Athlon 64 heat sink is rated at 0.34 K/W, although the actual thermal resistance to the environment is somewhat higher, due to the thermal boundary between processor and heatsink, rising temperatures in the case, and other thermal resistances.[citation needed]) Regardless, an inadequate heat sink with a thermal resistance of over 0.5 to 1 K/W would result in the destruction of a 100 W device even without thermal runaway effects.
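The 3 K/W threshold can be reproduced from the figures in the paragraph: a 10% power increase per 30 °C near 100 W gives a slope dP/dT ≈ 100·ln(1.1)/30 ≈ 0.32 W/K, and runaway requires the thermal loop gain R_th·dP/dT to reach 1:

```python
# Reproducing the paragraph's arithmetic (values from the text).
import math

P = 100.0                                # watts (the quoted TDP)
dP_dT = P * math.log(1.1) / 30.0         # W per kelvin, from "10% per 30 degC"
print(f"dP/dT = {dP_dT:.2f} W/K")
print(f"runaway threshold R_th = {1.0 / dP_dT:.1f} K/W")  # ~3.1 K/W, vs 0.34 K/W stock
```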

Batteries

When handled improperly, or if manufactured defectively, some rechargeable batteries can experience thermal runaway resulting in overheating. Sealed cells will sometimes explode violently if safety vents are overwhelmed or nonfunctional.[11] Especially prone to thermal runaway are lithium-ion batteries, most markedly in the form of the lithium polymer battery.[citation needed] Reports of exploding cellphones occasionally appear in newspapers. In 2006, batteries from Apple, HP, Toshiba, Lenovo, Dell and other notebook manufacturers were recalled because of fire and explosions.[12][13][14][15] The Pipeline and Hazardous Materials Safety Administration (PHMSA) of the U.S. Department of Transportation has established regulations regarding the carrying of certain types of batteries on airplanes because of their instability in certain situations. This action was partially inspired by a cargo bay fire on a UPS airplane.[16] One of the possible solutions is in using safer and less reactive anode (lithium titanates) and cathode (lithium iron phosphate) materials — thereby avoiding the cobalt electrodes in many lithium rechargeable cells — together with non-flammable electrolytes based on ionic liquids.

Astrophysics

Runaway thermonuclear reactions can occur in stars when nuclear fusion is ignited in conditions under which the gravitational pressure exerted by overlying layers of the star greatly exceeds thermal pressure, a situation that makes possible rapid increases in temperature through gravitational compression. Such a scenario may arise in stars containing degenerate matter, in which electron degeneracy pressure rather than normal thermal pressure does most of the work of supporting the star against gravity, and in stars undergoing implosion. In all cases, the imbalance arises prior to fusion ignition; otherwise, the fusion reactions would be naturally regulated to counteract temperature changes and stabilize the star. When thermal pressure is in equilibrium with overlying pressure, a star will respond to the increase in temperature and thermal pressure due to initiation of a new exothermic reaction by expanding and cooling. A runaway reaction is only possible when this response is inhibited.

Helium flashes in red giant stars

When stars in the 0.8–2.0 solar mass range exhaust the hydrogen in their cores and become red giants, the helium accumulating in their cores reaches degeneracy before it ignites. When the degenerate core reaches a critical mass of about 0.45 solar masses, helium fusion is ignited and takes off in a runaway fashion, called the helium flash, briefly increasing the star's energy production to a rate 100 billion times normal. About 6% of the core is quickly converted into carbon.[17] While the release is sufficient to convert the core back into normal plasma after a few seconds, it does not disrupt the star,[18][19] nor immediately change its luminosity. The star then contracts, leaving the red giant phase and continuing its evolution into a stable helium-burning phase.

Novae

A nova results from runaway hydrogen fusion (via the CNO cycle) in the outer layer of a carbon-oxygen white dwarf star. If a white dwarf has a companion star from which it can accrete gas, the material will accumulate in a surface layer made degenerate by the dwarf's intense gravity. Under the right conditions, a sufficiently thick layer of hydrogen is eventually heated to a temperature of 20 million K, igniting runaway fusion. The surface layer is blasted off the white dwarf, increasing luminosity by a factor on the order of 50,000. The white dwarf and companion remain intact, however, so the process can repeat.[20] A much rarer type of nova may occur when the outer layer that ignites is composed of helium.[21]

X-ray bursts

Analogous to the process leading to novae, degenerate matter can also accumulate on the surface of a neutron star that is accreting gas from a close companion. If a sufficiently thick layer of hydrogen accumulates, ignition of runaway hydrogen fusion can then lead to an X-ray burst. As with novae, such bursts tend to repeat and may also be triggered by helium or even carbon fusion.[22][23] It has been proposed that in the case of "superbursts", runaway breakup of accumulated heavy nuclei into iron-group nuclei via photodissociation rather than nuclear fusion could contribute the majority of the energy of the burst.[23]

Type Ia supernovae

A type Ia supernova results from runaway carbon fusion in the core of a carbon-oxygen white dwarf star. If a white dwarf, which is composed almost entirely of degenerate matter, can gain mass from a companion, the increasing temperature and density of material in its core will ignite carbon fusion if the star's mass approaches the Chandrasekhar limit. This leads to an explosion that completely disrupts the star. Luminosity increases by a factor of greater than 5 billion. One way to gain the additional mass would be by accreting gas from a giant star (or even main sequence) companion.[24] A second and apparently more common mechanism to generate the same type of explosion is the merger of two white dwarfs.[24][25]

Pair-instability supernovae

A pair-instability supernova is believed to result from runaway oxygen fusion in the core of a massive, 130–250 solar mass, low to moderate metallicity star.[26] According to theory, in such a star, a large but relatively low density core of nonfusing oxygen builds up, with its weight supported by the pressure of gamma rays produced by the extreme temperature. As the core heats further, the gamma rays eventually begin to pass the energy threshold needed for collision-induced decay into electron-positron pairs, a process called pair production. This causes a drop in the pressure within the core, leading it to contract and heat further, causing more pair production, a further pressure drop, and so on. The core starts to undergo gravitational collapse. At some point this ignites runaway oxygen fusion, releasing enough energy to obliterate the star. These explosions are rare, perhaps about one per 100,000 supernovae.

Comparison to nonrunaway supernovae

Not all supernovae are triggered by runaway nuclear fusion. Type Ib, Ic and type II supernovae also undergo core collapse, but because they have exhausted their supply of atomic nuclei capable of undergoing exothermic fusion reactions, they collapse all the way into neutron stars, or in the higher-mass cases, stellar black holes, powering explosions by the release of gravitational potential energy (largely via release of neutrinos). It is the absence of runaway fusion reactions that allows such supernovae to leave behind compact stellar remnants.



https://en.wikipedia.org/wiki/Thermal_runaway


A cascading failure is a process in a system of interconnected parts in which the failure of one or a few parts can trigger the failure of other parts, and so on. Such a failure may happen in many types of systems, including power transmission, computer networking, finance, transportation systems, organisms, the human body, and ecosystems.

Cascading failures may occur when one part of the system fails. When this happens, other parts must then compensate for the failed component. This in turn overloads these nodes, causing them to fail as well, prompting additional nodes to fail one after another.

Cascading failure is common in power grids when one of the elements fails (completely or partially) and shifts its load to nearby elements in the system. Those nearby elements are then pushed beyond their capacity so they become overloaded and shift their load onto other elements. Cascading failure is a common effect seen in high voltage systems, where a single point of failure (SPF) on a fully loaded or slightly overloaded system results in a sudden spike across all nodes of the system. This surge current can induce the already overloaded nodes into failure, setting off more overloads and thereby taking down the entire system in a very short time.

This failure process cascades through the elements of the system like a ripple on a pond and continues until substantially all of the elements in the system are compromised and/or the system becomes functionally disconnected from the source of its load. For example, under certain conditions a large power grid can collapse after the failure of a single transformer.

Monitoring the operation of a system, in real-time, and judicious disconnection of parts can help stop a cascade. Another common technique is to calculate a safety margin for the system by computer simulation of possible failures, to establish safe operating levels below which none of the calculated scenarios is predicted to cause cascading failure, and to identify the parts of the network which are most likely to cause cascading failures.[1]

One of the primary problems with preventing electrical grid failures is that the speed of the control signal is no faster than the speed of the propagating power overload, i.e. since both the control signal and the electrical power are moving at the same speed, it is not possible to isolate the outage by sending a warning ahead to isolate the element.

The question of whether power grid failures are correlated has been studied by Daqing Li et al.[2] as well as by Paul D. H. Hines et al.[3]

Examples

Cascading failure caused the following power outages:

Cascading structural failure

Certain load-bearing structures with discrete structural components can be subject to the "zipper effect", where the failure of a single structural member increases the load on adjacent members. In the case of the Hyatt Regency walkway collapse, a suspended walkway (which was already overstressed due to an error in construction) failed when a single vertical suspension rod failed, overloading the neighboring rods which failed sequentially (i.e. like a zipper). A bridge that can have such a failure is called fracture critical, and numerous bridge collapses have been caused by the failure of a single part. Properly designed structures use an adequate factor of safety and/or alternate load paths to prevent this type of mechanical cascade failure.[5]

Biology

Biochemical cascades exist in biology, where a small reaction can have system-wide implications. One negative example is the ischemic cascade, in which a small ischemic attack releases toxins that kill off far more cells than the initial damage, resulting in more toxins being released. Current research aims to find a way to block this cascade in stroke patients to minimize the damage.

In the study of extinction, sometimes the extinction of one species will cause many other extinctions to happen. Such a species is known as a keystone species.

Electronics

Another example is the Cockcroft–Walton generator, which can also experience cascade failures wherein one failed diode can result in all the diodes failing in a fraction of a second.

Yet another example of this effect in a scientific experiment was the implosion in 2001 of several thousand fragile glass photomultiplier tubes used in the Super-Kamiokande experiment, where the shock wave caused by the failure of a single detector appears to have triggered the implosion of the other detectors in a chain reaction.

Diverse infrastructures such as water supply, transportation, fuel and power stations are coupled together and depend on each other for functioning. Owing to this coupling, interdependent networks are extremely sensitive to random failures, and in particular to targeted attacks, such that a failure of a small fraction of nodes in one network can trigger an iterative cascade of failures in several interdependent networks.[12][13] Electrical blackouts frequently result from a cascade of failures between interdependent networks, and the problem has been dramatically exemplified by the several large-scale blackouts that have occurred in recent years. Blackouts are a fascinating demonstration of the important role played by the dependencies between networks. For example, the 2003 Italy blackout resulted in a widespread failure of the railway network, health care systems, and financial services and, in addition, severely influenced the telecommunication networks. The partial failure of the communication system in turn further impaired the electrical grid management system, thus producing a positive feedback on the power grid.[14] This example emphasizes how interdependence can significantly magnify the damage in an interacting network system. A framework to study the cascading failures between coupled networks based on percolation theory was developed recently.[15] Such cascading failures can lead to abrupt collapse, in contrast to percolation in a single network, where the breakdown of the network is continuous. Cascading failures in spatially embedded systems have been shown to lead to extreme vulnerability.[16] For the dynamic process of cascading failures see ref.[17] A model for repairing failures in order to avoid cascading failures was developed by Di Muro et al.[18]

Furthermore, it was shown that such interdependent systems when embedded in space are extremely vulnerable to localized attacks or failures. Above a critical radius of damage, the failure may spread to the entire system.[19]

The spreading of cascading failures from localized attacks on spatial multiplex networks with a community structure has been studied by Vaknin et al.[20] Universal features of cascading failures in interdependent networks have been reported by Duan et al.[21] A method for mitigating cascading failures in networks using localized information has been developed by Smolyak et al.[22]

For a comprehensive review on cascading failures in complex networks see Valdez et al.[23]

Model for overload cascading failures

A model for cascading failures due to overload propagation is the Motter–Lai model.[24] The spatio-temporal propagation of such failures has been studied by Jichang Zhao et al.[25]
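For readers who want to experiment, here is a minimal sketch of the overload-cascade idea behind the Motter–Lai model, assuming the networkx library: node load is approximated by betweenness centrality, each node receives capacity (1 + α) times its initial load, and removing one node redistributes shortest-path load until no node exceeds its capacity. This is a simplified illustration of the mechanism, not a reproduction of the published implementation:

```python
import networkx as nx

def motter_lai_cascade(G, trigger, alpha=0.2):
    """Remove `trigger`, then repeatedly fail any node whose recomputed
    betweenness load exceeds its capacity (1 + alpha) * initial load."""
    load = nx.betweenness_centrality(G)
    capacity = {n: (1 + alpha) * load[n] for n in G}
    # (Nodes with zero initial load get zero capacity in this toy version.)
    H = G.copy()
    H.remove_node(trigger)
    failed = {trigger}
    while True:
        new_load = nx.betweenness_centrality(H)
        overloaded = [n for n in H if new_load[n] > capacity[n]]
        if not overloaded:
            return failed
        H.remove_nodes_from(overloaded)
        failed.update(overloaded)

G = nx.barabasi_albert_graph(200, 2, seed=1)
bc = nx.betweenness_centrality(G)
trigger = max(bc, key=bc.get)            # attack the most central node
failed = motter_lai_cascade(G, trigger)
print(f"{len(failed)} of {G.number_of_nodes()} nodes failed")
```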


https://en.wikipedia.org/wiki/Cascading_failure



Cogeneration or combined heat and power (CHP) is the use of a heat engine[1] or power station to generate electricity and useful heat at the same time.

https://en.wikipedia.org/wiki/Cogeneration


Geothermal Power

Geothermal power is electrical power generated from geothermal energy. Technologies in use include dry steam power stations, flash steam power stations and binary cycle power stations. Geothermal electricity generation is currently used in 26 countries,[1][2] while geothermal heating is in use in 70 countries.[3]

As of 2019, worldwide geothermal power capacity amounts to 15.4 gigawatts (GW), of which 23.86 percent or 3.68 GW are installed in the United States.[4] International markets grew at an average annual rate of 5 percent over the three years to 2015, and global geothermal power capacity is expected to reach 14.5–17.6 GW by 2020.[5] Based on current geologic knowledge and technology the GEA publicly discloses, the Geothermal Energy Association (GEA) estimates that only 6.9 percent of total global potential has been tapped so far, while the IPCC reported geothermal power potential to be in the range of 35 GW to 2 TW.[3] Countries generating more than 15 percent of their electricity from geothermal sources include El Salvador, Kenya, the Philippines, Iceland, New Zealand,[6] and Costa Rica.

Geothermal power is considered to be a sustainable, renewable source of energy because the heat extraction is small compared with the Earth's heat content.[7] The greenhouse gas emissions of geothermal electric stations are on average 45 grams of carbon dioxide per kilowatt-hour of electricity, or less than 5 percent of that of conventional coal-fired plants.[8]

As a source of renewable energy for both power and heating, geothermal has the potential to meet 3-5% of global demand by 2050. With economic incentives, it is estimated that by 2100 it will be possible to meet 10% of global demand.[6]

[Diagram: Enhanced geothermal system. 1: Reservoir, 2: Pump house, 3: Heat exchanger, 4: Turbine hall, 5: Production well, 6: Injection well, 7: Hot water to district heating, 8: Porous sediments, 9: Observation well, 10: Crystalline bedrock]

The Earth's heat content is about 1×10^19 TJ (2.8×10^15 TWh).[3] This heat naturally flows to the surface by conduction at a rate of 44.2 TW[20] and is replenished by radioactive decay at a rate of 30 TW.[7] These power rates are more than double humanity's current energy consumption from primary sources, but most of this power is too diffuse (approximately 0.1 W/m^2 on average) to be recoverable. The Earth's crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other) to release the heat underneath.
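The "too diffuse" figure checks out against the quoted total: dividing the 44.2 TW conductive heat flow by Earth's surface area (about 5.1×10^14 m^2, a standard value not quoted in the text) gives roughly 0.1 W/m^2.

```python
# 44.2 TW spread over Earth's surface is on the order of 0.1 W/m^2.
heat_flow_w = 44.2e12        # from the text
surface_m2 = 5.1e14          # approximate Earth surface area (assumed)
print(f"{heat_flow_w / surface_m2:.3f} W/m^2")   # ~0.087
```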

Electricity generation requires high-temperature resources that can only come from deep underground. The heat must be carried to the surface by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation, oil wells, drilled water wells, or a combination of these. This circulation sometimes exists naturally where the crust is thin: magma conduits bring heat close to the surface, and hot springs bring the heat to the surface. If no hot spring is available, a well must be drilled into a hot aquifer. Away from tectonic plate boundaries the geothermal gradient is 25–30 °C per kilometre (km) of depth in most of the world, so wells would have to be several kilometres deep to permit electricity generation.[3] The quantity and quality of recoverable resources improves with drilling depth and proximity to tectonic plate boundaries.
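Those gradient figures translate directly into required well depths. Assuming, for illustration, a surface temperature of 15 °C and a generation-grade target of about 150 °C (the target temperature is an assumption, not a figure from the article):

```python
# Depth needed to reach a target temperature at the quoted 25-30 degC/km gradient.
surface_c, target_c = 15.0, 150.0        # target temperature is assumed
for gradient_c_per_km in (25.0, 30.0):   # gradient range from the text
    depth_km = (target_c - surface_c) / gradient_c_per_km
    print(f"{depth_km:.1f} km at {gradient_c_per_km:.0f} degC/km")
# Prints ~5.4 and ~4.5 km: "several kilometres", consistent with the text.
```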

In ground that is hot but dry, or where water pressure is inadequate, injected fluid can stimulate production. Developers bore two holes into a candidate site, and fracture the rock between them with explosives or high-pressure water. Then they pump water or liquefied carbon dioxide down one borehole, and it comes up the other borehole as a gas.[15] This approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. Much greater potential may be available from this approach than from conventional tapping of natural aquifers.[15]

Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW depending on the scale of investments.[3] This does not include non-electric heat recovered by co-generation, geothermal heat pumps and other direct use. A 2006 report by the Massachusetts Institute of Technology (MIT) that included the potential of enhanced geothermal systems estimated that investing US$1 billion in research and development over 15 years would allow the creation of 100 GW of electrical generating capacity by 2050 in the United States alone.[15] The MIT report estimated that over 200×10⁹ TJ (200 ZJ; 5.6×10⁷ TWh) would be extractable, with the potential to increase this to over 2,000 ZJ with technology improvements – sufficient to provide all the world's present energy needs for several millennia.[15]

At present, geothermal wells are rarely more than 3 km (1.9 mi) deep.[3] Upper estimates of geothermal resources assume wells as deep as 10 km (6.2 mi). Drilling near this depth is now possible in the petroleum industry, although it is an expensive process. The deepest research well in the world, the Kola Superdeep Borehole (KSDB-3), is 12.261 km (7.619 mi) deep.[21] This record has recently been imitated by commercial oil wells, such as Exxon's Z-12 well in the Chayvo field, Sakhalin.[22] Wells drilled to depths greater than 4 km (2.5 mi) generally incur drilling costs in the tens of millions of dollars.[23] The technological challenges are to drill wide bores at low cost and to break larger volumes of rock.

Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content, but extraction must still be monitored to avoid local depletion.[7] Although geothermal sites are capable of providing heat for many decades, individual wells may cool down or run out of water. The three oldest sites, at Larderello, Wairakei, and the Geysers, have all reduced production from their peaks. It is not clear whether these stations extracted energy faster than it was replenished from greater depths, or whether the aquifers supplying them are being depleted. If production is reduced, and water is reinjected, these wells could theoretically recover their full potential. Such mitigation strategies have already been implemented at some sites. The long-term sustainability of geothermal energy has been demonstrated at the Larderello field in Italy since 1913, at the Wairakei field in New Zealand since 1958,[24] and at the Geysers field in California since 1960.[25]

Power station types

Dry steam (left), flash steam (centre), and binary cycle (right) power stations.

Geothermal power stations are similar to other steam turbine thermal power stations in that heat from a fuel source (in geothermal's case, the Earth's core) is used to heat water or another working fluid. The working fluid is then used to turn the turbine of a generator, thereby producing electricity. The fluid is then cooled and returned to the heat source.

Dry steam power stations

Dry steam stations are the simplest and oldest design. This type of power station is not found very often, because it requires a resource that produces dry steam, but it is the most efficient, with the simplest facilities.[26] At these sites, there may be liquid water present in the reservoir, but no water is produced to the surface, only steam.[26] Dry steam power directly uses geothermal steam of 150 °C or greater to turn turbines.[3] As the turbine rotates it powers a generator, which produces electricity and feeds it to the grid.[27] The steam is then sent to a condenser, where it turns back into liquid water and is cooled.[28] After the water is cooled it flows down a pipe that conducts the condensate back into deep wells, where it can be reheated and produced again. At The Geysers in California, after the first 30 years of power production, the steam supply had depleted and generation was substantially reduced. To restore some of the former capacity, supplemental water injection was developed during the 1990s and 2000s, including utilization of effluent from nearby municipal sewage treatment facilities.[29]

Flash steam power stations

Flash steam stations pull deep, high-pressure hot water into lower-pressure tanks and use the resulting flashed steam to drive turbines. They require fluid temperatures of at least 180 °C, usually more. This is the most common type of station in operation today. Flash steam plants use geothermal reservoirs of water with temperatures greater than 360 °F (182 °C). The hot water flows up through wells in the ground under its own pressure. As it flows upward, the pressure decreases and some of the hot water is transformed into steam. The steam is then separated from the water and used to power a turbine/generator. Any leftover water and condensed steam may be injected back into the reservoir, making this a potentially sustainable resource.[30][31]

Binary cycle power stations

Binary cycle power stations are the most recent development, and can accept fluid temperatures as low as 57 °C.[14] The moderately hot geothermal water is passed by a secondary fluid with a much lower boiling point than water. This causes the secondary fluid to flash vaporize, which then drives the turbines. This is the most common type of geothermal electricity station being constructed today.[32] Both Organic Rankine and Kalina cycles are used. The thermal efficiency of this type of station is typically about 10–13%.
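One reason those efficiencies stay low is the thermodynamic ceiling set by the resource temperature. A small Python sketch of the Carnot limit, assuming a 25 °C heat-rejection temperature (an illustrative value, not from the article):

    # Carnot limit for a heat engine running between a geothermal resource
    # and an assumed 25 degC environment.
    def carnot_efficiency(t_hot_c, t_cold_c=25.0):
        t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to kelvin
        return 1.0 - t_cold / t_hot

    for t in (57, 100, 150):
        print(f"{t} degC resource -> Carnot limit {carnot_efficiency(t):.1%}")
    # A 57 degC resource is capped below 10% even in theory, and real cycles
    # recover only a fraction of the Carnot limit.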


Fluids drawn from the deep earth carry a mixture of gases, notably carbon dioxide (CO₂), hydrogen sulfide (H₂S), methane (CH₄), ammonia (NH₃), and radon (Rn). If released, these pollutants contribute to global warming, acid rain, radiation, and noxious smells.


https://en.wikipedia.org/wiki/Geothermal_power


A combined cycle power plant is an assembly of heat engines that work in tandem from the same source of heat, converting it into mechanical energy. On land, when used to make electricity the most common type is called a combined cycle gas turbine (CCGT) plant. The same principle is also used for marine propulsion, where it is called a combined gas and steam (COGAS) plant. Combining two or more thermodynamic cycles improves overall efficiency, which reduces fuel costs.

The principle is that after completing its cycle in the first engine, the working fluid (the exhaust) is still hot enough that a second subsequent heat engine can extract energy from the heat in the exhaust. Usually the heat passes through a heat exchanger so that the two engines can use different working fluids.

By generating power from multiple streams of work, the overall efficiency of the system can be increased by 50–60%. That is, from an overall efficiency of say 34% (for a simple cycle), to as much as 64% (for a combined cycle).[1] This is more than 84% of the theoretical efficiency of a Carnot cycle. Heat engines can only use part of the energy from their fuel (usually less than 50%), so in a non-combined cycle heat engine, the remaining heat (i.e., hot exhaust gas) from combustion is wasted.
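The arithmetic behind that jump: because the bottoming cycle feeds on the heat the topping cycle rejects, the efficiencies combine as η = η₁ + η₂(1 − η₁). A Python sketch; the 34%/45% split between the gas and steam cycles is an assumed illustration:

    # Combined-cycle efficiency: the second engine only sees the heat
    # rejected by the first, hence eta = eta1 + eta2 * (1 - eta1).
    def combined_efficiency(eta_top, eta_bottom):
        return eta_top + eta_bottom * (1.0 - eta_top)

    print(f"{combined_efficiency(0.34, 0.45):.1%}")  # ~63.7%, close to the 64% cited above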

https://en.wikipedia.org/wiki/Combined_cycle_power_plant


Instantaneous power in an electric circuit is the rate of flow of energy past a given point of the circuit. In alternating current circuits, energy storage elements such as inductors and capacitors may result in periodic reversals of the direction of energy flow.

The portion of instantaneous power that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as instantaneous active power, and its time average is known as active power or real power.[1]:3 The portion of instantaneous power that results in no net transfer of energy but instead oscillates between the source and load in each cycle due to stored energy, is known as instantaneous reactive power, and its amplitude is the absolute value of reactive power.[2][1]:4

Active, reactive, apparent, and complex power in sinusoidal steady-state

In a simple alternating current (AC) circuit consisting of a source and a linear time-invariant load, both the current and voltage are sinusoidal at the same frequency.[3] If the load is purely resistive, the two quantities reverse their polarity at the same time. At every instant the product of voltage and current is positive or zero, the result being that the direction of energy flow does not reverse. In this case, only active power is transferred.

If the load is purely reactive, then the voltage and current are 90 degrees out of phase. For two quarters of each cycle, the product of voltage and current is positive, but for the other two quarters, the product is negative, indicating that on average, exactly as much energy flows into the load as flows back out. There is no net energy flow over each half cycle. In this case, only reactive power flows: There is no net transfer of energy to the load; however, electrical power does flow along the wires and returns by flowing in reverse along the same wires. The current required for this reactive power flow dissipates energy in the line resistance, even if the ideal load device consumes no energy itself. Practical loads have resistance as well as inductance, or capacitance, so both active and reactive powers will flow to normal loads.

Apparent power is the product of the RMS values of voltage and current. Apparent power is taken into account when designing and operating power systems, because although the current associated with reactive power does no work at the load, it still must be supplied by the power source. Conductors, transformers and generators must be sized to carry the total current, not just the current that does useful work. Failure to provide for the supply of sufficient reactive power in electrical grids can lead to lowered voltage levels and, under certain operating conditions, to the complete collapse of the network or blackout. Another consequence is that adding the apparent power for two loads will not accurately give the total power unless they have the same phase difference between current and voltage (the same power factor).

Conventionally, capacitors are treated as if they generate reactive power, and inductors are treated as if they consume it. If a capacitor and an inductor are placed in parallel, then the currents flowing through the capacitor and the inductor tend to cancel rather than add. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are inserted in a circuit to partially compensate for reactive power 'consumed' ('generated') by the load. Purely capacitive circuits supply reactive power with the current waveform leading the voltage waveform by 90 degrees, while purely inductive circuits absorb reactive power with the current waveform lagging the voltage waveform by 90 degrees. The result of this is that capacitive and inductive circuit elements tend to cancel each other out.[4]

The power triangle: the complex power S is the vector sum of the active power P and the reactive power Q; the apparent power |S| is the magnitude of S; φ is the phase of voltage relative to current.

Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them):

  • Active power,[5] P, or real power:[6] watt (W);
  • Reactive power, Q: volt-ampere reactive (var);
  • Complex power, S: volt-ampere (VA);
  • Apparent power, |S|: the magnitude of complex power S: volt-ampere (VA);
  • Phase of voltage relative to current, φ: the angle of difference (in degrees) between current and voltage; current lagging voltage corresponds to a quadrant I vector, current leading voltage to a quadrant IV vector.

These are all denoted in the adjacent diagram (called a Power Triangle).

In the diagram, P is the active power, Q is the reactive power (in this case positive), S is the complex power and the length of S is the apparent power. Reactive power does not do any work, so it is represented as the imaginary axis of the vector diagram. Active power does do work, so it is the real axis.

The unit for power is the watt (symbol: W). Apparent power is often expressed in volt-amperes (VA) since it is the product of RMS voltage and RMS current. The unit for reactive power is var, which stands for volt-ampere reactive. Since reactive power transfers no net energy to the load, it is sometimes called "wattless" power. It does, however, serve an important function in electrical grids and its lack has been cited as a significant factor in the Northeast Blackout of 2003.[7] Understanding the relationship among these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed using complex numbers, S = P + j Q (where j is the imaginary unit).

Calculations and equations in sinusoidal steady-state

The formula for complex power (units: VA) in phasor form is:

S = V I*,

where V denotes the voltage in phasor form, with the amplitude as RMS, and I denotes the current in phasor form, with the amplitude as RMS. Also by convention, the complex conjugate of I is used, which is denoted I* (or Ī), rather than I itself. This is done because otherwise using the product V I to define S would result in a quantity that depends on the reference angle chosen for V or I, whereas defining S as V I* yields a quantity that does not depend on the reference angle and allows S to be related to P and Q.[8]
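A minimal numeric illustration in Python, using the language's built-in complex numbers; the 230 V / 5 A phasors for a slightly inductive load are made-up values:

    # Complex power S = V * conj(I) for assumed RMS phasors.
    import cmath, math

    V = cmath.rect(230.0, 0.0)                 # 230 V RMS at 0 degrees
    I = cmath.rect(5.0, math.radians(-30.0))   # 5 A RMS, lagging by 30 degrees

    S = V * I.conjugate()   # complex power, VA
    P = S.real              # active power, W
    Q = S.imag              # reactive power, var

    print(f"S = {abs(S):.0f} VA, P = {P:.0f} W, Q = {Q:.0f} var")
    # 1150 VA, ~996 W, ~575 var; Q > 0 for the lagging (inductive) case.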

Other forms of complex power (units in volt-amps, VA) are derived from Z, the load impedance (units in ohms, Ω).

S = |I|² Z = |V|² / Z*.

Consequently, with reference to the power triangle, real power (units in watts, W) is derived as:

P = |S| cos φ.

For a purely resistive load, real power can be simplified to:

P = |V|² / R = |I|² R,

where R denotes the resistance (units in ohms, Ω) of the load.

Reactive power (units in volts-amps-reactive, var) is derived as:

Q = |S| sin φ.

For a purely reactive load, reactive power can be simplified to:

Q = |V|² / X = |I|² X,

where X denotes the reactance (units in ohms, Ω) of the load.

Combining, the complex power (units in volt-amps, VA) is back-derived as

S = P + jQ,

and the apparent power (units in volt-amps, VA) as

|S| = √(P² + Q²).

These are simplified diagrammatically by the power triangle.

Power factor

The ratio of active power to apparent power in a circuit is called the power factor. For two systems transmitting the same amount of active power, the system with the lower power factor will have higher circulating currents due to energy that returns to the source from energy storage in the load. These higher currents produce higher losses and reduce overall transmission efficiency. A lower power factor circuit will have a higher apparent power and higher losses for the same amount of active power. The power factor is 1.0 when the voltage and current are in phase. It is zero when the current leads or lags the voltage by 90 degrees. When the voltage and current are 180 degrees out of phase, the power factor is negative one, and the load is feeding energy into the source (an example would be a home with solar cells on the roof that feed power into the power grid when the sun is shining). Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle of current with respect to voltage. Voltage is designated as the base to which current angle is compared, meaning that current is thought of as either "leading" or "lagging" voltage. Where the waveforms are purely sinusoidal, the power factor is the cosine of the phase angle (φ) between the current and voltage sinusoidal waveforms. Equipment data sheets and nameplates will often abbreviate power factor as "cos φ" for this reason.

Example: The active power is 700 W and the phase angle between voltage and current is 45.6°. The power factor is cos(45.6°) = 0.700. The apparent power is then 700 W / cos(45.6°) = 1000 VA. This example illustrates how power dissipation in an AC circuit is determined.
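The same example, reproduced in Python (the reactive power line is an extra step, computed from the same angle):

    # The worked example above: 700 W of active power at a 45.6 degree angle.
    import math

    P = 700.0                    # active power, W
    phi = math.radians(45.6)     # phase angle between voltage and current

    pf = math.cos(phi)           # power factor
    S = P / pf                   # apparent power, VA
    Q = S * math.sin(phi)        # reactive power, var

    print(f"pf = {pf:.3f}, S = {S:.0f} VA, Q = {Q:.0f} var")
    # pf = 0.700, S = 1000 VA, Q = 715 var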

For instance, a power factor of 0.68 means that only 68 percent of the total current supplied (in magnitude) is actually doing work; the remaining current does no work at the load. 

Reactive power

In a direct current circuit, the power flowing to the load is proportional to the product of the current through the load and the potential drop across the load. Energy flows in one direction from the source to the load. In AC power, the voltage and current both vary approximately sinusoidally. When there is inductance or capacitance in the circuit, the voltage and current waveforms do not line up perfectly. The power flow has two components – one component flows from source to load and can perform work at the load; the other portion, known as "reactive power", is due to the delay between voltage and current, known as phase angle, and cannot do useful work at the load. It can be thought of as current that is arriving at the wrong time (too late or too early). To distinguish reactive power from active power, it is measured in units of "volt-amperes reactive", or var. These units can simplify to watts but are left as var to denote that they represent no actual work output.

Energy stored in capacitive or inductive elements of the network gives rise to reactive power flow. Reactive power flow strongly influences the voltage levels across the network. Voltage levels and reactive power flow must be carefully controlled to allow a power system to be operated within acceptable limits. A technique known as reactive compensation is used to reduce apparent power flow to a load by reducing reactive power supplied from transmission lines and providing it locally. For example, to compensate an inductive load, a shunt capacitor is installed close to the load itself. This allows all reactive power needed by the load to be supplied by the capacitor and not have to be transferred over the transmission lines. This practice saves energy because it reduces the amount of energy that is required to be produced by the utility to do the same amount of work. Additionally, it allows for more efficient transmission line designs using smaller conductors or fewer bundled conductors and optimizing the design of transmission towers.
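A sketch of how such a shunt capacitor might be sized, in Python; the load (100 kW at 0.70 power factor), the 0.95 target, and the 400 V / 50 Hz supply are all assumptions for illustration:

    # Shunt-capacitor sizing for power factor correction (illustrative values).
    import math

    P = 100e3                    # load active power, W (assumed)
    pf_old, pf_new = 0.70, 0.95  # power factor before and after correction
    V, f = 400.0, 50.0           # supply RMS voltage and frequency (assumed)

    Q_old = P * math.tan(math.acos(pf_old))  # reactive demand before, var
    Q_new = P * math.tan(math.acos(pf_new))  # reactive demand after, var
    Q_c = Q_old - Q_new                      # var the capacitor must supply

    C = Q_c / (2 * math.pi * f * V**2)       # from Q_c = 2*pi*f*C*V^2
    print(f"capacitor supplies {Q_c/1e3:.1f} kvar -> C = {C*1e6:.0f} uF")
    # ~69 kvar of compensation, roughly 1370 uF at 400 V / 50 Hz.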

Capacitive vs. inductive loads

Stored energy in the magnetic or electric field of a load device, such as a motor or capacitor, causes an offset between the current and the voltage waveforms. A capacitor is an AC device that stores energy in the form of an electric field. As current is driven through the capacitor, charge build-up causes an opposing voltage to develop across the capacitor. This voltage increases until some maximum dictated by the capacitor structure. In an AC network, the voltage across a capacitor is constantly changing. The capacitor opposes this change, causing the current to lead the voltage in phase. Capacitors are said to "source" reactive power, and thus to cause a leading power factor.

Induction machines are some of the most common types of loads in the electric power system today. These machines use inductors, or large coils of wire, to store energy in the form of a magnetic field. When a voltage is initially placed across the coil, the inductor strongly resists this change in current and magnetic field, which causes a time delay for the current to reach its maximum value. This causes the current to lag behind the voltage in phase. Inductors are said to "sink" reactive power, and thus to cause a lagging power factor. Induction generators can source or sink reactive power, and provide a measure of control to system operators over reactive power flow and thus voltage.[9] Because these devices have opposite effects on the phase angle between voltage and current, they can be used to "cancel out" each other's effects. This usually takes the form of capacitor banks being used to counteract the lagging power factor caused by induction motors.

Reactive power control

Transmission connected generators are generally required to support reactive power flow. For example, on the United Kingdom transmission system, generators are required by the Grid Code Requirements to supply their rated power between the limits of 0.85 power factor lagging and 0.90 power factor leading at the designated terminals. The system operator will perform switching actions to maintain a secure and economical voltage profile while maintaining a reactive power balance equation:

Generator MVArs + System gain + Shunt capacitor MVArs = MVAr demand + Shunt reactor MVArs + System losses.

The ‘system gain’ is an important source of reactive power in the above power balance equation; it is generated by the capacitive nature of the transmission network itself. By making decisive switching actions in the early morning before the demand increases, the system gain can be maximized early on, helping to secure the system for the whole day. To balance the equation some pre-fault reactive generator use will be required. Other sources of reactive power that will also be used include shunt capacitors, shunt reactors, static VAR compensators and voltage control circuits.

Unbalanced sinusoidal polyphase systems

While active power and reactive power are well defined in any system, the definition of apparent power for unbalanced polyphase systems is considered to be one of the most controversial topics in power engineering. Originally, apparent power arose merely as a figure of merit. Major delineations of the concept are attributed to Stanley's Phenomena of Retardation in the Induction Coil (1888) and Steinmetz's Theoretical Elements of Engineering (1915). However, with the development of three phase power distribution, it became clear that the definition of apparent power and the power factor could not be applied to unbalanced polyphase systems. In 1920, a "Special Joint Committee of the AIEE and the National Electric Light Association" met to resolve the issue. They considered two definitions. 

S = |S_a| + |S_b| + |S_c|,

that is, the arithmetic sum of the phase apparent powers; and 

S = |S_a + S_b + S_c|,

that is, the magnitude of total three-phase complex power.

The 1920 committee found no consensus and the topic continued to dominate discussions. In 1930, another committee formed and once again failed to resolve the question. The transcripts of their discussions are the lengthiest and most controversial ever published by the AIEE.[10] Further resolution of this debate did not come until the late 1990s.

A new definition based on symmetrical components theory was proposed in 1993 by Alexander Emanuel for unbalanced linear load supplied with asymmetrical sinusoidal voltages: 

S = √(V_a² + V_b² + V_c²) × √(I_a² + I_b² + I_c²),

that is, the root of squared sums of line voltages multiplied by the root of squared sums of line currents. Here P+ denotes the positive sequence power:

P+ = 3 |V+| |I+| cos φ+, where V+ denotes the positive sequence voltage phasor, I+ denotes the positive sequence current phasor, and φ+ is the angle between them.[10]

Real number formulas

A perfect resistor stores no energy, so current and voltage are in phase. Therefore, there is no reactive power and P = S (using the passive sign convention). Thus, for a perfect resistor:

P = S = V_rms I_rms = V_rms² / R = I_rms² R.

For a perfect capacitor or inductor, there is no net power transfer; so all power is reactive. Therefore, for a perfect capacitor or inductor:

P = 0 and |Q| = |S| = V_rms I_rms = V_rms² / |X| = I_rms² |X|,

where X is the reactance of the capacitor or inductor.

If X is defined as being positive for an inductor and negative for a capacitor, then the modulus signs can be removed from Q and X to give

Q = V_rms² / X = I_rms² X.

Instantaneous power is defined as:

p(t) = v(t) i(t),

where v(t) and i(t) are the time-varying voltage and current waveforms.

This definition is useful because it applies to all waveforms, whether they are sinusoidal or not. This is particularly useful in power electronics, where non-sinusoidal waveforms are common.

In general, engineers are interested in the active power averaged over a period of time, whether it is a low frequency line cycle or a high frequency power converter switching period. The simplest way to get that result is to take the integral of the instantaneous calculation over the desired period:

P = (1/T) ∫₀ᵀ v(t) i(t) dt.

This method of calculating the average power gives the active power regardless of harmonic content of the waveform. In practical applications, this would be done in the digital domain, where the calculation becomes trivial when compared to the use of rms and phase to determine active power:

P = (1/n) Σ v[k] i[k], summed over the n samples in one period.
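For instance, a short Python version of the sampled calculation, using one cycle of made-up sinusoids with a 60-degree lag:

    # Average (active) power as the mean of instantaneous power samples.
    import math

    n = 1000                                        # samples per cycle
    Vp, Ip, phi = 325.0, 10.0, math.radians(60.0)   # assumed peaks and lag

    v = [Vp * math.sin(2 * math.pi * k / n) for k in range(n)]
    i = [Ip * math.sin(2 * math.pi * k / n - phi) for k in range(n)]

    P = sum(vk * ik for vk, ik in zip(v, i)) / n
    print(f"P = {P:.1f} W")  # ~812.5 W, matching (Vp * Ip / 2) * cos(phi)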

Multiple frequency systems

Since an RMS value can be calculated for any waveform, apparent power can be calculated from this. For active power, it would at first appear that it would be necessary to calculate many product terms and average all of them. However, looking at one of these product terms in more detail produces a very interesting result: by the product-to-sum identity, a term of the form V cos(ω₁t) · I cos(ω₂t) expands to (VI/2)[cos((ω₁ − ω₂)t) + cos((ω₁ + ω₂)t)].

However, the time average of a function of the form cos(ωt + k) is zero provided that ω is nonzero. Therefore, the only product terms that have a nonzero average are those where the frequency of voltage and current match. In other words, it is possible to calculate active (average) power by simply treating each frequency separately and adding up the answers. Furthermore, if voltage of the mains supply is assumed to be a single frequency (which it usually is), this shows that harmonic currents are a bad thing. They will increase the RMS current (since there will be non-zero terms added) and therefore apparent power, but they will have no effect on the active power transferred. Hence, harmonic currents will reduce the power factor. Harmonic currents can be reduced by a filter placed at the input of the device. Typically this will consist of either just a capacitor (relying on parasitic resistance and inductance in the supply) or a capacitor-inductor network. An active power factor correction circuit at the input would generally reduce the harmonic currents further and maintain the power factor closer to unity.
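This effect is easy to reproduce numerically. A Python sketch with a clean 230 V supply and an assumed third-harmonic current added to a 5 A fundamental:

    # Harmonic currents raise RMS current and apparent power, not active power.
    import math

    n = 10000
    w = 2 * math.pi   # fundamental at 1 Hz for convenience
    t = [k / n for k in range(n)]

    v = [230 * math.sqrt(2) * math.cos(w * tk) for tk in t]      # clean supply
    i = [5 * math.sqrt(2) * math.cos(w * tk)                     # fundamental
         + 2 * math.sqrt(2) * math.cos(3 * w * tk) for tk in t]  # 3rd harmonic

    P = sum(vk * ik for vk, ik in zip(v, i)) / n
    Vrms = math.sqrt(sum(vk * vk for vk in v) / n)
    Irms = math.sqrt(sum(ik * ik for ik in i) / n)

    print(f"P = {P:.0f} W, S = {Vrms * Irms:.0f} VA, pf = {P / (Vrms * Irms):.3f}")
    # P ~ 1150 W (only the fundamental contributes), S ~ 1239 VA, pf ~ 0.928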


References

  1. IEEE Standard Definitions for the Measurement of Electric Power Quantities Under Sinusoidal, Nonsinusoidal, Balanced, or Unbalanced Conditions. IEEE. 2010. doi:10.1109/IEEESTD.2010.5439063. ISBN 978-0-7381-6058-0.
  2. Thomas, Roland E.; Rosa, Albert J.; Toussaint, Gregory J. (2016). The Analysis and Design of Linear Circuits (8th ed.). Wiley. pp. 812–813. ISBN 978-1-119-23538-5.
  3. Das, J. C. (2015). Power System Harmonics and Passive Filter Design. Wiley, IEEE Press. p. 2. ISBN 978-1-118-86162-2. "To distinguish between linear and nonlinear loads, we may say that linear time-invariant loads are characterized so that an application of a sinusoidal voltage results in a sinusoidal flow of current."
  4. "Importance of Reactive Power for System". 21 March 2011. Archived from the original on 2015-05-12. Retrieved 2015-04-29.
  5. Definition of Active Power in the International Electrotechnical Vocabulary. Archived April 23, 2015, at the Wayback Machine.
  6. IEEE 100: The Authoritative Dictionary of IEEE Standards Terms (7th ed.). ISBN 0-7381-2601-2. p. 23.
  7. "August 14, 2003 Outage – Sequence of Events" (PDF). FERC. 2003-09-12. Archived from the original (PDF) on 2007-10-20. Retrieved 2008-02-18.
  8. Close, Charles M. The Analysis of Linear Circuits. p. 398 (section 8.3).
  9. "Archived copy". Archived from the original on 2015-10-25. Retrieved 2015-04-29.
  10. Emanuel, Alexander (July 1993). "On The Definition of Power Factor and Apparent Power in Unbalanced Polyphase Circuits with Sinusoidal Voltage and Currents". IEEE Transactions on Power Delivery. 8 (3): 841–852. doi:10.1109/61.252612.



https://en.wikipedia.org/wiki/AC_power


Hydroelectricity, or hydroelectric power, is electricity produced from hydropower. In 2015, hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity,[2] and was expected to increase by about 3.1% each year for the next 25 years.

Hydropower is produced in 150 countries, with the Asia-Pacific region generating 33 percent of global hydropower in 2013. China is the largest hydroelectricity producer, with 920 TWh of production in 2013, representing 16.9% of domestic electricity use.

The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity. The hydro station consumes no water, unlike coal or gas plants. The typical cost of electricity from a hydro station larger than 10 megawatts is 3 to 5 US cents per kilowatt hour.[3] With a dam and reservoir it is also a flexible source of electricity, since the amount produced by the station can be varied up or down very rapidly (as little as a few seconds) to adapt to changing energy demands. Once a hydroelectric complex is constructed, the project produces no direct waste, and it generally has a considerably lower output level of greenhouse gases than photovoltaic power plants and certainly fossil fuel powered energy plants (see also Life-cycle greenhouse-gas emissions of energy sources).[4] However, when constructed in lowland rainforest areas, where inundation of a part of the forest is necessary, they can emit substantial amounts of greenhouse gases.

https://en.wikipedia.org/wiki/Hydroelectricity


Osmotic power, salinity gradient power, or blue energy is the energy available from the difference in the salt concentration between seawater and river water. Two practical methods for this are reverse electrodialysis (RED) and pressure retarded osmosis (PRO). Both processes rely on osmosis with membranes. The key waste product is brackish water. This byproduct is the result of natural forces that are being harnessed: the flow of fresh water into seas that are made up of salt water.

In 1954, Pattle[1] suggested that there was an untapped source of power when a river mixes with the sea, in terms of the lost osmotic pressure; however, it was not until the mid-1970s that a practical method of exploiting it using selectively permeable membranes was outlined by Loeb.[2]

The method of generating power by pressure retarded osmosis was invented by Prof. Sidney Loeb in 1973 at the Ben-Gurion University of the Negev, Beersheba, Israel.[3] The idea came to Prof. Loeb, in part, as he observed the Jordan River flowing into the Dead Sea. He wanted to harvest the energy of mixing of the two aqueous solutions (the Jordan River being one and the Dead Sea being the other) that was going to waste in this natural mixing process.[4] In 1977 Prof. Loeb invented a method of producing power by a reverse electrodialysis heat engine.[5]

The technologies have been confirmed in laboratory conditions. They are being developed into commercial use in the Netherlands (RED) and Norway (PRO). The cost of the membrane has been an obstacle. A new, lower cost membrane, based on an electrically modified polyethylene plastic, made it fit for potential commercial use.[6] Other methods have been proposed and are currently under development, among them a method based on electric double-layer capacitor technology[7] and a method based on vapor pressure difference.[8]

https://en.wikipedia.org/wiki/Osmotic_power


Marine current power

Technologies for marine-current-power generation
There are several types of open-flow devices that can be used in marine-current-power applications; many of them are modern descendants of the waterwheel or similar. However, the more technically sophisticated designs, derived from wind-power rotors, are the most likely to achieve enough cost-effectiveness and reliability to be practical in a massive marine-current-power future scenario. Even though there is no generally accepted term for these open-flow hydro-turbines, some sources refer to them as water-current turbines. There are two main types of water current turbines that might be considered: axial-flow horizontal-axis propellers (with either variable or fixed pitch), and cross-flow Darrieus rotors. Both rotor types may be combined with any of the three main methods for supporting water-current turbines: floating moored systems, sea-bed mounted systems, and intermediate systems. Sea-bed-mounted monopile structures constitute the first-generation marine current power systems. They have the advantage of using existing (and reliable) engineering know-how, but they are limited to relatively shallow waters (about 20 to 40 m depth).[3]
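Because these rotors descend from wind turbines, the same kinetic power relation applies, P = ½ρAv³Cp, and it shows why slow ocean currents are attractive. A Python sketch with assumed illustrative values (the swept area and power coefficient are not from the article):

    # Kinetic power intercepted by an open-flow rotor: P = 0.5*rho*A*v^3*Cp.
    rho_water, rho_air = 1025.0, 1.225   # seawater and air density, kg/m^3
    A = 100.0                            # swept area, m^2 (assumed)
    Cp = 0.35                            # power coefficient (assumed)

    def rotor_power_kw(rho, v):
        return 0.5 * rho * A * v**3 * Cp / 1e3

    print(f"2.5 m/s tidal stream: {rotor_power_kw(rho_water, 2.5):.0f} kW")
    print(f"12 m/s wind:          {rotor_power_kw(rho_air, 12.0):.0f} kW")
    # Seawater's ~840x higher density lets a slow current rival a strong wind.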

https://en.wikipedia.org/wiki/Marine_current_power

The Pelamis Wave Energy Converter was a technology that used the motion of ocean surface waves to create electricity. The machine was made up of connected sections which flex and bend as waves pass; it is this motion which is used to generate electricity.

Developed by the now-defunct[1] Scottish company Pelamis Wave Power (formerly Ocean Power Delivery), the Pelamis became the first offshore wave machine to generate electricity into the grid, when it was first connected to the UK grid in 2004.[2] Pelamis Wave Power then went on to build and test five additional Pelamis machines: three first-generation P1 machines, which were tested in a farm off the coast of Portugal in 2009, and two second-generation machines, the Pelamis P2, which were tested off Orkney between 2010 and 2014. The company went into administration in November 2014, with the intellectual property transferred to the Scottish Government body Wave Energy Scotland.[3]

https://en.wikipedia.org/wiki/Pelamis_Wave_Energy_Converter


Ocean Thermal Energy Conversion (OTEC) uses the ocean thermal gradient between cooler deep and warmer shallow or surface seawaters to run a heat engine and produce useful work, usually in the form of electricity. OTEC can operate with a very high capacity factor and so can operate in base load mode.
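The modest size of that gradient caps the achievable efficiency. A minimal Python sketch of the Carnot limit, assuming illustrative temperatures of 25 °C surface water and 5 °C deep water:

    # Carnot limit across a typical OTEC temperature gradient (assumed values).
    def carnot(t_hot_c, t_cold_c):
        return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

    print(f"Carnot limit: {carnot(25.0, 5.0):.1%}")  # ~6.7%; real cycles get less

This few-percent ceiling is why OTEC plants must circulate very large volumes of water to produce useful output.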

The denser cold water masses, formed by ocean surface water interaction with cold atmosphere in quite specific areas of the North Atlantic and the Southern Ocean, sink into the deep sea basins and spread through the entire deep ocean by the thermohaline circulation. Upwelling of cold water from the deep ocean is replenished by the downwelling of cold surface sea water.

Among ocean energy sources, OTEC is one of the continuously available renewable energy resources that could contribute to base-load power supply.[1] The resource potential for OTEC is considered to be much larger than for other ocean energy forms.[2] Up to 88,000 TWh/yr of power could be generated from OTEC without affecting the ocean's thermal structure.[3]

Systems may be either closed-cycle or open-cycle. Closed-cycle OTEC uses working fluids that are typically thought of as refrigerants such as ammonia or R-134a. These fluids have low boiling points, and are therefore suitable for powering the system's generator to generate electricity. The most commonly used heat cycle for OTEC to date is the Rankine cycle, using a low-pressure turbine. Open-cycle engines use vapor from the seawater itself as the working fluid.

OTEC can also supply quantities of cold water as a by-product. This can be used for air conditioning and refrigeration and the nutrient-rich deep ocean water can feed biological technologies. Another by-product is fresh water distilled from the sea.[4]

OTEC theory was first developed in the 1880s and the first bench-sized demonstration model was constructed in 1926. Currently operating pilot-scale OTEC plants are located in Japan, overseen by Saga University, and Makai in Hawaii.[5]

https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversion


Tidal power or tidal energy is harnessed by converting energy from tides into useful forms of power, mainly electricity, using various methods.

Although not yet widely used, tidal energy has the potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed and that economic and environmental costs may be brought down to competitive levels.

Sihwa Lake Tidal Power Station, located in Gyeonggi Province, South Korea, is the world's largest tidal power installation, with a total power output capacity of 254 MW.

https://en.wikipedia.org/wiki/Tidal_power


Wave power is the capture of energy of wind waves to do useful work – for example, electricity generation, water desalination, or pumping water. A machine that exploits wave power is a wave energy converter (WEC).

Wave power is distinct from tidal power, which captures the energy of the current caused by the gravitational pull of the Sun and Moon. Waves and tides are also distinct from ocean currents, which are caused by other forces including breaking waves, wind, the Coriolis effect, cabbeling, and differences in temperature and salinity.

Wave-power generation is not a widely employed commercial technology compared to other established renewable energy sources such as wind power, hydropower and solar power. However, there have been attempts to use this source of energy since at least 1890,[1] mainly due to its high power density. As a comparison, the power density of photovoltaic panels is 1 kW/m² at peak solar insolation, and the power density of the wind is 1 kW/m² at 12 m/s; the average annual power density of the waves at e.g. the San Francisco coast is 25 kW/m².[2]

In 2000 the world's first commercial wave power device, the Islay LIMPET, was installed on the coast of Islay in Scotland and connected to the National Grid.[3] In 2008, the first experimental multi-generator wave farm was opened in Portugal at the Aguçadoura Wave Park.[4]

https://en.wikipedia.org/wiki/Wave_power


Using high voltage allowed an AC system to transmit power over longer distances from more efficient large central generating stations.

https://en.wikipedia.org/wiki/War_of_the_currents


Deformed power is a concept in electrical engineering which characterizes the distortion of the sinusoidal states in an electric network. It was introduced by Constantin Budeanu in 1927.

It is defined by the following formula:

S² = P² + Q² + D²,

where S, P, Q, and D are the apparent, active, reactive and deformed powers.

No deformed (distortion) power arises in linear electrical components such as resistors. It is caused by nonlinear loads, represented for instance by semiconductor devices (rectifiers, thyristors), especially when used to rectify alternating current to direct current. Rectification is needed especially to supply current for electric traction and the electrochemical industry.
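A minimal numeric illustration of Budeanu's decomposition in Python; the measured S, P, and Q values are made up:

    # Deformed power from Budeanu's relation S^2 = P^2 + Q^2 + D^2.
    import math

    S = 1000.0   # apparent power, VA
    P = 800.0    # active power, W
    Q = 350.0    # reactive power, var

    D = math.sqrt(S**2 - P**2 - Q**2)   # deformed (distortion) power, var
    print(f"D = {D:.0f} var")           # ~487 var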

https://en.wikipedia.org/wiki/Deformed_power


Solar power is the conversion of energy from sunlight into electricity, either directly using photovoltaics (PV), indirectly using concentrated solar power, or a combination. Concentrated solar power systems use lenses or mirrors and solar tracking systems to focus a large area of sunlight into a small beam. Photovoltaic cells convert light into an electric current using the photovoltaic effect.[1]

Photovoltaics were initially solely used as a source of electricity for small and medium-sized applications, from the calculator powered by a single solar cell to remote homes powered by an off-grid rooftop PV system. Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and gigawatt-scale photovoltaic power stations are being built. Solar PV is rapidly becoming an inexpensive, low-carbon technology to harness renewable energy from the Sun. The current largest photovoltaic power station in the world is the Pavagada Solar Park, Karnataka, India, with a generation capacity of 2050 MW.[2]

The International Energy Agency projected in 2014 that under its "high renewables" scenario, by 2050, solar photovoltaics and concentrated solar power would contribute about 16 and 11 percent, respectively, of worldwide electricity consumption, and solar would be the world's largest source of electricity. Most solar installations would be in China and India.[3] In 2019, solar power generated 2.7% of the world's electricity, growing over 24% from the previous year.[4] As of October 2020, the unsubsidised levelised cost of electricity for utility-scale solar power is around $36/MWh.[5]

https://en.wikipedia.org/wiki/Solar_power


Wind power or wind energy is the use of wind to provide mechanical power through wind turbines to turn electric generators for electrical power. Wind power is a popular sustainable, renewable energy source that has a much smaller impact on the environment compared to burning fossil fuels.

Wind farms consist of many individual wind turbines, which are connected to the electric power transmission network. Onshore wind is an inexpensive source of electric power, competitive with, or in many places cheaper than, coal or gas plants. Onshore wind farms have a greater visual impact on the landscape than other power stations, as they need to be spread over more land[3][4] and need to be built in rural areas, which can lead to "industrialization of the countryside"[5] and habitat loss.[4] Offshore wind is steadier and stronger than on land and offshore farms have less visual impact, but construction and maintenance costs are significantly higher. Small onshore wind farms can feed some energy into the grid or provide power to isolated off-grid locations.

Wind power is an intermittent energy source, which cannot be dispatched on demand.[3] Locally, it gives variable power, which is consistent from year to year but varies greatly over shorter time scales. Therefore, it must be used with other power sources to give a reliable supply. Power-management techniques such as having dispatchable power sources (often gas-fired power plant or hydroelectric power), excess capacity, geographically distributed turbines, exporting and importing power to neighboring areas, grid storage, reducing demand when wind production is low, and curtailing occasional excess wind power, are used to overcome these problems. As the proportion of wind power in a region increases, more conventional power sources are needed to back it up, and the grid may need to be upgraded.[6][7] Weather forecasting permits the electric-power network to be readied for the predictable variations in production that occur.

In 2019, wind supplied 1430 TWh of electricity, which was 5.3% of worldwide electrical generation,[8] with the global installed wind power capacity reaching more than 651 GW, an increase of 10% over 2018.[9]

https://en.wikipedia.org/wiki/Wind_power


Nuclear power is the use of nuclear reactions to produce electricity. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium in nuclear power plants. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators in some space probes such as Voyager 2. Generating electricity from fusion power remains the focus of international research.

Civilian nuclear power supplied 2,586 terawatt hours (TWh) of electricity in 2019, equivalent to about 10% of global electricity generation, and was the second-largest low-carbon power source after hydroelectricity. As of January 2021, there are 442 civilian fission reactors in the world, with a combined electrical capacity of 392 gigawatts (GW). There are also 53 nuclear power reactors under construction and 98 reactors planned, with a combined capacity of 60 GW and 103 GW, respectively. The United States has the largest fleet of nuclear reactors, generating over 800 TWh of zero-emissions electricity per year with an average capacity factor of 92%. Most reactors under construction are generation III reactors in Asia.

Nuclear power has one of the lowest levels of fatalities per unit of energy generated compared to other energy sources. Coal, petroleum, natural gas and hydroelectricity each have caused more fatalities per unit of energy due to air pollution and accidents. Since its commercialization in the 1970s, nuclear power has prevented about 1.84 million air pollution-related deaths and the emission of about 64 billion tonnes of carbon dioxide equivalent that would have otherwise resulted from the burning of fossil fuels. Accidents in nuclear power plants include the Chernobyl disaster in the Soviet Union in 1986, the Fukushima Daiichi nuclear disaster in Japan in 2011, and the more contained Three Mile Island accident in the United States in 1979.

There is a debate about nuclear power. Proponents, such as the World Nuclear Association and Environmentalists for Nuclear Energy, contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Nuclear power opponents, such as Greenpeace and NIRS, contend that nuclear power poses many threats to people and the environment.

The first light bulbs ever lit by electricity generated by nuclear power at EBR-1 at Argonne National Laboratory-West, December 20, 1951.[2]

In the United States, these research efforts led to the creation of the first man-made nuclear reactor, the Chicago Pile-1, which achieved criticality on December 2, 1942. The reactor's development was part of the Manhattan Project, the Allied effort to create atomic bombs during World War II. It led to the building of larger single-purpose production reactors for the production of weapons-grade plutonium for use in the first nuclear weapons. The United States tested the first nuclear weapon in July 1945, the Trinity test, with the atomic bombings of Hiroshima and Nagasaki taking place one month later.

Despite the military nature of the first nuclear devices, the 1940s and 1950s were characterized by strong optimism for the potential of nuclear power to provide cheap and endless energy.[6] Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100 kW.[7][8] In 1953, American President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the Atomic Energy Act of 1954 which allowed rapid declassification of U.S. reactor technology and encouraged development by the private sector.

The first organization to develop practical nuclear power was the U.S. Navy, with the S1W reactor for the purpose of propelling submarines and aircraft carriers. The first nuclear-powered submarine, USS Nautilus, was put to sea in January 1954.[9][10] The S1W reactor was a pressurized water reactor. This design was chosen because it was simpler, more compact, and easier to operate compared to alternative designs, thus more suitable to be used in submarines. This decision would result in the PWR being the reactor of choice also for power generation, thus having a lasting impact on the civilian electricity market in the years to come.[11]

On June 27, 1954, the Obninsk Nuclear Power Plant in the USSR became the world's first nuclear power plant to generate electricity for a power grid, producing around 5 megawatts of electric power.[12] The world's first commercial nuclear power station, Calder Hall at Windscale, England was connected to the national power grid on 27 August 1956. In common with a number of other generation I reactors, the plant had the dual purpose of producing electricity and plutonium-239, the latter for the nascent nuclear weapons program in Britain.[13]

The first major accident at a nuclear reactor occurred in 1961 at the SL-1, a U.S. Army experimental nuclear power reactor at the Idaho National Laboratory. An uncontrolled chain reaction resulted in a steam explosion which killed the three crew members and caused a meltdown.[14][15] Another serious accident happened in 1968, when one of the two liquid-metal-cooled reactors on board the Soviet submarine K-27 underwent a fuel element failure, with the emission of gaseous fission products into the surrounding air, resulting in 9 crew fatalities and 83 injuries.[16]

Nuclear power plants are thermal power stations that generate electricity by harnessing the thermal energy released from nuclear fission. A fission nuclear power plant is generally composed of a nuclear reactor, in which the nuclear reactions generating heat take place; a cooling system, which removes the heat from inside the reactor; a steam turbine, which transforms the heat into mechanical energy; an electric generator, which transforms the mechanical energy into electrical energy.[55]

When a neutron hits the nucleus of a uranium-235 or plutonium atom, it can split the nucleus into two smaller nuclei. The reaction is called nuclear fission. The fission reaction releases energy and neutrons. The released neutrons can hit other uranium or plutonium nuclei, causing new fission reactions, which release more energy and more neutrons. This is called a chain reaction. In most commercial reactors, the reaction rate is controlled by control rods that absorb excess neutrons. The controllability of nuclear reactors depends on the fact that a small fraction of neutrons resulting from fission are delayed. The time delay between the fission and the release of the neutrons slows down changes in reaction rates and gives time for moving the control rods to adjust the reaction rate.[55][56]
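A toy calculation suggests the scale of that effect; the generation times below are textbook order-of-magnitude assumptions, not design data:

    # Why delayed neutrons matter: power doubling time scales with the mean
    # neutron generation time, n = n0 * k**(t / l).
    import math

    def doubling_time(k, generation_time_s):
        return generation_time_s * math.log(2) / math.log(k)

    k = 1.001            # slightly supercritical (assumed)
    prompt_only = 1e-4   # prompt generation time, s (assumed order of magnitude)
    with_delayed = 0.1   # effective time including delayed neutrons, s (assumed)

    print(f"prompt only:  {doubling_time(k, prompt_only):.2f} s")   # ~0.07 s
    print(f"with delayed: {doubling_time(k, with_delayed):.0f} s")  # ~69 s
    # A fraction of a second vs. about a minute: the latter leaves time to move rods.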

The life cycle of nuclear fuel starts with uranium mining. The uranium ore is then converted into a compact ore concentrate form, known as yellowcake (U3O8), to facilitate transport.[57] Fission reactors generally need uranium-235, a fissile isotope of uranium. The concentration of uranium-235 in natural uranium is very low (about 0.7%). Some reactors can use this natural uranium as fuel, depending on their neutron economy. These reactors generally have graphite or heavy water moderators. For light water reactors, the most common type of reactor, this concentration is too low, and it must be increased by a process called uranium enrichment.[57] In civilian light water reactors, uranium is typically enriched to 3.5–5% uranium-235.[58] The uranium is then generally converted into uranium oxide (UO2), a ceramic, that is then compressively sintered into fuel pellets, a stack of which forms fuel rods of the proper composition and geometry for the particular reactor.[58]
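The enrichment step obeys a simple feed/product mass balance, F·x_f = P·x_p + W·x_w with F = P + W. A Python sketch, assuming a tails assay of 0.25% (the tails figure is an assumption; the 0.7% natural and 3.5–5% product assays are from the text):

    # Natural uranium feed needed per kilogram of enriched product.
    def feed_per_product(x_product, x_feed=0.0072, x_tails=0.0025):
        return (x_product - x_tails) / (x_feed - x_tails)

    for x_p in (0.035, 0.05):
        print(f"{x_p:.1%} product needs {feed_per_product(x_p):.1f} kg of natural U per kg")
    # Roughly 6.9 kg and 10.1 kg of natural uranium per kg of 3.5% and 5% LEU.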

After some time in the reactor, the fuel will have reduced fissile material and increased fission products, until its use becomes impractical.[58] At this point, the spent fuel will be moved to a spent fuel pool which provides cooling for the thermal heat and shielding for ionizing radiation. After several months or years, the spent fuel is radioactively and thermally cool enough to be moved to dry storage casks or reprocessed.[58]

Proportions of the isotopes uranium-238 (blue) and uranium-235 (red) found in natural uranium and in enriched uranium for different applications. Light water reactors use 3-5% enriched uranium, while CANDU reactors work with natural uranium.

Uranium is a fairly common element in the Earth's crust: it is approximately as common as tin or germanium, and is about 40 times more common than silver.[59] Uranium is present in trace concentrations in most rocks, dirt, and ocean water, but is generally economically extracted only where it is present in high concentrations. Uranium mining can be underground, open-pit, or in-situ leach mining. An increasing number of the highest output mines are remote underground operations, such as McArthur River uranium mine, in Canada, which by itself accounts for 13% of global production. As of 2011 the world's known resources of uranium, economically recoverable at the arbitrary price ceiling of US$130/kg, were enough to last for between 70 and 100 years.[60][61][62] In 2007, the OECD estimated 670 years of economically recoverable uranium in total conventional resources and phosphate ores assuming the then-current use rate.[63]

Light water reactors make relatively inefficient use of nuclear fuel, mostly using only the very rare uranium-235 isotope.[64] Nuclear reprocessing can make this waste reusable, and newer reactors also achieve a more efficient use of the available resources than older ones.[64] With a pure fast reactor fuel cycle with a burn-up of all the uranium and actinides (which presently make up the most hazardous substances in nuclear waste), there is an estimated 160,000 years' worth of uranium in total conventional resources and phosphate ore at the price of 60–100 US$/kg.[65]

Unconventional uranium resources also exist. Uranium is naturally present in seawater at a concentration of about 3 micrograms per liter,[66][67][68] with 4.4 billion tons of uranium considered present in seawater at any time.[69] In 2014 it was suggested that it would be economically competitive to produce nuclear fuel from seawater if the process was implemented at large scale.[70] Over geological timescales, uranium extracted on an industrial scale from seawater would be replenished by both river erosion of rocks and the natural process of uranium dissolved from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level.[69] Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy.[71]

Typical composition of uranium dioxide fuel before and after approximately 3 years in the once-through nuclear fuel cycle of a LWR.[72]

The normal operation of nuclear power plants and facilities produce radioactive waste, or nuclear waste. This type of waste is also produced during plant decommissioning. There are two broad categories of nuclear waste: low-level waste and high-level waste.[73] The first has low radioactivity and includes contaminated items such as clothing, which poses limited threat. High-level waste is mainly the spent fuel from nuclear reactors, which is very radioactive and must be cooled and then safely disposed of or reprocessed.[73]

High-level waste

Activity of spent UOx fuel in comparison to the activity of natural uranium ore over time.[74][72]
Dry cask storage vessels storing spent nuclear fuel assemblies

The most important waste stream from nuclear power reactors is spent nuclear fuel, which is considered high-level waste. For LWRs, spent fuel is typically composed of 95% uranium, 4% fission products, and about 1% transuranic actinides (mostly plutonium, neptunium and americium).[75] The plutonium and other transuranics are responsible for the bulk of the long-term radioactivity, whereas the fission products are responsible for the bulk of the short-term radioactivity.[76]

High-level waste requires treatment, management, and isolation from the environment. These operations present considerable challenges due to the extremely long periods these materials remain potentially hazardous to living organisms. This is due to long-lived fission products (LLFP), such as technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years).[77] LLFP dominate the waste stream in terms of radioactivity, after the more intensely radioactive short-lived fission products (SLFPs) have decayed into stable elements, which takes approximately 300 years.[72] After about 500 years, the waste becomes less radioactive than natural uranium ore.[78] Commonly suggested methods to isolate LLFP waste from the biosphere include separation and transmutation,[72] synroc treatments, or deep geological storage.[79][80][81][82]
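The timescales involved follow directly from the half-life rule N = N₀ · 2^(−t/T½). A short Python illustration using the technetium-99 half-life quoted above (the sample times are chosen for illustration):

    # Fraction of a radionuclide remaining after t years.
    def remaining_fraction(t_years, half_life_years):
        return 2.0 ** (-t_years / half_life_years)

    for t in (300, 220_000, 1_000_000):
        frac = remaining_fraction(t, 220_000)
        print(f"after {t:>9,} years: {frac:.1%} of Tc-99 remains")
    # The ~300 years that eliminate the short-lived products barely dent Tc-99.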

Thermal-neutron reactors, which presently constitute the majority of the world fleet, cannot burn up the reactor grade plutonium that is generated during the reactor operation. This limits the life of nuclear fuel to a few years. In some countries, such as the United States, spent fuel is classified in its entirety as a nuclear waste.[83] In other countries, such as France, it is largely reprocessed to produce a partially recycled fuel, known as mixed oxide fuel or MOX. For spent fuel that does not undergo reprocessing, the most concerning isotopes are the medium-lived transuranic elements, which are led by reactor-grade plutonium (half-life 24,000 years).[84] Some proposed reactor designs, such as the Integral Fast Reactor and molten salt reactors, can use as fuel the plutonium and other actinides in spent fuel from light water reactors, thanks to their fast fission spectrum. This offers a potentially more attractive alternative to deep geological disposal.[85][86][87]

The thorium fuel cycle results in similar fission products, though creates a much smaller proportion of transuranic elements from neutron capture events within a reactor. Spent thorium fuel, although more difficult to handle than spent uranium fuel, may present somewhat lower proliferation risks.[88]

Low-level waste

The nuclear industry also produces a large volume of low-level waste, with low radioactivity, in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. Low-level waste can be stored on-site until radiation levels are low enough to be disposed of as ordinary waste, or it can be sent to a low-level waste disposal site.[89]

Waste relative to other types

In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods.[64] Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants.[90] Coal-burning plants, in particular, produce large amounts of toxic and mildly radioactive ash resulting from the concentration of naturally occurring radioactive materials in coal.[91] A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent from radiation from coal plants is 100 times that from the operation of nuclear plants.[92] Although coal ash is much less radioactive than spent nuclear fuel by weight, coal ash is produced in much higher quantities per unit of energy generated. It is also released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials.[93]

Nuclear waste volume is small compared to the energy produced. For example, Yankee Rowe Nuclear Power Station, which generated 44 billion kilowatt-hours of electricity over its service life, stores its complete spent fuel inventory in sixteen casks.[94] It is estimated that producing a lifetime supply of energy for a person at a western standard of living (approximately 3 GWh) would require on the order of the volume of a soda can of low enriched uranium, resulting in a similar volume of spent fuel generated.[95][96][97]
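
The soda-can claim can be sanity-checked with round numbers. The sketch below (Python) assumes a typical LWR discharge burnup of about 45 GWd per tonne of heavy metal and a UO2 density of about 10.9 g/cm3 (both standard textbook values, not from the text above), and treats the 3 GWh as thermal energy; counting only electrical output at ~33% efficiency would roughly triple the volume.

    # Rough fuel volume for ~3 GWh, per the lifetime-supply claim above.
    burnup_J_per_kgHM = 45e9 * 86400 / 1000    # 45 GWd/tHM -> ~3.9e12 J per kg heavy metal (assumed)
    energy_J = 3e9 * 3600                      # 3 GWh -> ~1.1e13 J
    mass_HM_kg = energy_J / burnup_J_per_kgHM  # ~2.8 kg heavy metal
    mass_UO2_kg = mass_HM_kg * 270.0 / 238.0   # scale uranium mass up to UO2
    volume_L = mass_UO2_kg / 10.9              # UO2 density ~10.9 kg/L (assumed)
    print(round(volume_L, 2), "liters")        # ~0.3 L, about one soda can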

Waste disposal

Storage of radioactive waste at WIPP
Nuclear waste flasks generated by the United States during the Cold War are stored underground at the Waste Isolation Pilot Plant (WIPP) in New Mexico. The facility is seen as a potential demonstration for storing spent fuel from civilian reactors.

Following interim storage in a spent fuel pool, the bundles of used fuel rod assemblies of a typical nuclear power station are often stored on site in dry cask storage vessels.[98] Presently, waste is mainly stored at individual reactor sites, and there are over 430 locations around the world where radioactive material continues to accumulate.

Disposal of nuclear waste is often considered the most politically divisive aspect in the lifecycle of a nuclear power facility.[99] The lack of movement of nuclear waste in the 2-billion-year-old natural nuclear fission reactors at Oklo, Gabon, has been cited as "a source of essential information today."[100][101] Experts suggest that centralized underground repositories which are well-managed, guarded, and monitored would be a vast improvement.[99] There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories".[102] With the advent of new technologies, other methods including horizontal drillhole disposal into geologically inactive areas have been proposed.[103][104]

Most waste packaging, small-scale experimental fuel recycling chemistry and radiopharmaceutical refinement is conducted within remote-handled hot cells.

There are no commercial-scale, purpose-built underground high-level waste repositories in operation.[102][105][106] However, in Finland the Onkalo spent nuclear fuel repository of the Olkiluoto Nuclear Power Plant is under construction as of 2015.[107]

Reprocessing

Most thermal-neutron reactors run on a once-through nuclear fuel cycle, mainly due to the low price of fresh uranium. However, many reactors are also fueled with recycled fissionable materials that remain in spent nuclear fuel. The most common fissionable material that is recycled is the reactor-grade plutonium (RGPu) extracted from spent fuel; it is mixed with uranium oxide and fabricated into mixed-oxide or MOX fuel. Because thermal LWRs remain the most common reactor worldwide, this type of recycling is the most common. It is considered to increase the sustainability of the nuclear fuel cycle, reduce the attractiveness of spent fuel to theft, and lower the volume of high-level nuclear waste.[108] Spent MOX fuel cannot generally be recycled for use in thermal-neutron reactors. This issue does not affect fast-neutron reactors, which are therefore preferred in order to achieve the full energy potential of the original uranium.[109][110]

The main constituent of spent fuel from LWRs is slightly enriched uranium. This can be recycled into reprocessed uranium (RepU), which can be used in a fast reactor, used directly as fuel in CANDU reactors, or re-enriched for another cycle through an LWR. Re-enriching of reprocessed uranium is common in France and Russia.[111] Reprocessed uranium is also safer in terms of nuclear proliferation potential.[112][113][114]

Reprocessing has the potential to recover up to 95% of the uranium and plutonium fuel in spent nuclear fuel, as well as reduce long-term radioactivity within the remaining waste. However, reprocessing has been politically controversial because of the potential for nuclear proliferation and varied perceptions of increasing the vulnerability to nuclear terrorism.[109][115] Reprocessing also leads to higher fuel cost compared to the once-through fuel cycle.[109][115] While reprocessing reduces the volume of high-level waste, it does not reduce the fission products that are the primary causes of residual heat generation and radioactivity for the first few centuries outside the reactor. Thus, reprocessed waste still requires an almost identical treatment for the first few hundred years.

Reprocessing of civilian fuel from power reactors is currently done in France, the United Kingdom, Russia, Japan, and India. In the United States, spent nuclear fuel is currently not reprocessed.[111] The La Hague reprocessing facility in France has operated commercially since 1976 and is responsible for half the world's reprocessing as of 2010.[116] It produces MOX fuel from spent fuel derived from several countries. More than 32,000 tonnes of spent fuel had been reprocessed as of 2015, with the majority from France, 17% from Germany, and 9% from Japan.[117]

Breeding

Nuclear fuel assemblies being inspected before entering a pressurized water reactor in the United States.

Breeding is the process of converting non-fissile material into fissile material that can be used as nuclear fuel. The non-fissile material that can be used for this process is called fertile material, and it constitutes the vast majority of current nuclear waste. This breeding process occurs naturally in breeder reactors. As opposed to light water thermal-neutron reactors, which use uranium-235 (0.7% of all natural uranium), fast-neutron breeder reactors use uranium-238 (99.3% of all natural uranium) or thorium. A number of fuel cycles and breeder reactor combinations are considered to be sustainable or renewable sources of energy.[118][119] In 2006 it was estimated that with seawater extraction, there was likely five billion years' worth of uranium resources for use in breeder reactors.[120]

Breeder technology has been used in several reactors, but as of 2006, the high cost of reprocessing fuel safely requires uranium prices of more than US$200/kg before becoming justified economically.[121] Breeder reactors are, however, being developed for their potential to burn up all of the actinides (the most active and dangerous components) in the present inventory of nuclear waste, while also producing power and creating additional quantities of fuel for more reactors via the breeding process.[122][123] As of 2017, there are two breeders producing commercial power, the BN-600 reactor and the BN-800 reactor, both in Russia.[124] The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation.[124] Both China and India are building breeder reactors. The Indian 500 MWe Prototype Fast Breeder Reactor is in the commissioning phase,[125] with plans to build more.[126]

Another alternative to fast-neutron breeders are thermal-neutron breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle.[127] Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics.[127] India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium.[127]

Nuclear decommissioning

Nuclear decommissioning is the process of dismantling a nuclear facility to the point that it no longer requires measures for radiation protection,[128] returning the facility and its parts to a safe enough level to be entrusted for other uses.[129] Due to the presence of radioactive materials, nuclear decommissioning presents technical and economic challenges.[130] The costs of decommissioning are generally spread over the lifetime of a facility and saved in a decommissioning fund.[131]

https://en.wikipedia.org/wiki/Nuclear_power


A thermal power station is a power station in which heat energy is converted to electricity. Typically, water is heated into steam, which is used to drive an electrical generator. After it passes through the turbine the steam is condensed in a steam condenser and recycled to where it was heated. This is known as a Rankine cycle. The greatest variation in the design of thermal power stations is due to the different heat sources: fossil fuel, nuclear energy, solar energy, biofuels, and waste incineration are all used. Certain thermal power stations are also designed to produce heat for industrial purposes, for district heating, or desalination of water, in addition to generating electrical power.

https://en.wikipedia.org/wiki/Thermal_power_station


The Prototype Fast Breeder Reactor (PFBR) is a 500 MWe fast breeder nuclear reactor presently being constructed at the Madras Atomic Power Station in Kalpakkam, India.[2] The Indira Gandhi Centre for Atomic Research (IGCAR) is responsible for the design of this reactor. The facility builds on the decades of experience gained from operating the lower power Fast Breeder Test Reactor (FBTR). Originally planned to be commissioned in 2012, the construction of the reactor suffered from multiple delays. As of August 2020, criticality is planned to be achieved in 2021.[3]

https://en.wikipedia.org/wiki/Prototype_Fast_Breeder_Reactor


THORIUM FUEL

The thorium fuel cycle is a nuclear fuel cycle that uses an isotope of thorium, 232Th, as the fertile material. In the reactor, 232Th is transmuted into the fissile artificial uranium isotope 233U, which is the nuclear fuel. Unlike natural uranium, natural thorium contains only trace amounts of fissile material (such as 231Th), which are insufficient to initiate a nuclear chain reaction. Additional fissile material or another neutron source is necessary to initiate the fuel cycle. In a thorium-fuelled reactor, 232Th absorbs neutrons to produce 233U. This parallels the process in uranium breeder reactors whereby fertile 238U absorbs neutrons to form fissile 239Pu. Depending on the design of the reactor and fuel cycle, the generated 233U either fissions in situ or is chemically separated from the used nuclear fuel and formed into new nuclear fuel.

The thorium fuel cycle has several potential advantages over a uranium fuel cycle, including thorium's greater abundance, superior physical and nuclear properties, reduced plutonium and actinide production,[1] and better resistance to nuclear weapons proliferation when used in a traditional light water reactor,[1][2] though not in a molten salt reactor.[3][4]

Concerns about the limits of worldwide uranium resources motivated initial interest in the thorium fuel cycle.[5] It was envisioned that as uranium reserves were depleted, thorium would supplement uranium as a fertile material. However, for most countries uranium was relatively abundant and research in thorium fuel cycles waned. A notable exception was India's three-stage nuclear power programme.[6] In the twenty-first century thorium's potential for improving proliferation resistance and waste characteristics led to renewed interest in the thorium fuel cycle.[7][8][9]

At Oak Ridge National Laboratory in the 1960s, the Molten-Salt Reactor Experiment used 233U as the fissile fuel in an experiment to demonstrate a part of the Molten Salt Breeder Reactor that was designed to operate on the thorium fuel cycle. Molten salt reactor (MSR) experiments assessed thorium's feasibility, using thorium(IV) fluoride dissolved in a molten salt fluid that eliminated the need to fabricate fuel elements. The MSR program was defunded in 1976 after its patron Alvin Weinberg was fired.[10]

In 1993, Carlo Rubbia proposed the concept of an energy amplifier or "accelerator driven system" (ADS), which he saw as a novel and safe way to produce nuclear energy that exploited existing accelerator technologies. Rubbia's proposal offered the potential to incinerate high-activity nuclear waste and produce energy from natural thorium and depleted uranium.[11][12]

Kirk Sorensen, former NASA scientist and Chief Technologist at Flibe Energy, has been a long-time promoter of thorium fuel cycle and particularly liquid fluoride thorium reactors (LFTRs). He first researched thorium reactors while working at NASA, while evaluating power plant designs suitable for lunar colonies. In 2006 Sorensen started "energyfromthorium.com" to promote and make information available about this technology.[13]

(Fluoride production capabilities generation/geoqty/etc.; oversights anticipated, fundamental diff bet man made and natural)

A 2011 MIT study concluded that although there is little in the way of barriers to a thorium fuel cycle, with current or near term light-water reactor designs there is also little incentive for any significant market penetration to occur. As such they conclude there is little chance of thorium cycles replacing conventional uranium cycles in the current nuclear power market, despite the potential benefits.[14]

Nuclear reactions with thorium

"Thorium is like wet wood […it] needs to be turned into fissile uranium just as wet wood needs to be dried in a furnace."

— Ratan Kumar Sinha, former Chairman of the Atomic Energy Commission of India.[15]

In the thorium cycle, fuel is formed when 232Th captures a neutron (whether in a fast reactor or thermal reactor) to become 233Th. This normally emits an electron and an anti-neutrino (ν̄) by β− decay to become 233Pa. This then emits another electron and anti-neutrino by a second β− decay to become 233U, the fuel:

232Th + n → 233Th → (β−, ~22 min) 233Pa → (β−, ~27 d) 233U
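
The 233Pa hold-up in this chain can be made concrete with the Bateman solution for a two-step decay. A minimal sketch (Python; the 21.8-minute half-life of 233Th is a standard nuclear-data value assumed here, while the ~27-day half-life of 233Pa appears later in this article):

    import math

    def bateman_daughter(t, t_half_1, t_half_2):
        """Atoms of the intermediate (Pa-233) at time t, starting from 1 atom of the parent (Th-233)."""
        l1 = math.log(2) / t_half_1
        l2 = math.log(2) / t_half_2
        return l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

    th233_half_d = 21.8 / (60 * 24)   # 21.8 minutes, in days (assumed standard value)
    pa233_half_d = 27.0               # ~27 days
    for t in (1, 27, 100, 200):       # days after a neutron capture in Th-232
        print(t, "d:", round(bateman_daughter(t, th233_half_d, pa233_half_d), 3), "Pa-233 per Th-233")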

Fission product waste

Nuclear fission produces radioactive fission products which can have half-lives from days to greater than 200,000 years. According to some toxicity studies,[16] the thorium cycle can fully recycle actinide wastes and only emit fission product wastes, and after a few hundred years, the waste from a thorium reactor can be less toxic than the uranium ore that would have been used to produce low enriched uranium fuel for a light water reactor of the same power. Other studies assume some actinide losses and find that actinide wastes dominate thorium cycle waste radioactivity at some future periods.[17]

https://en.wikipedia.org/wiki/Protactinium

Actinide waste[edit]

In a reactor, when a neutron hits a fissile atom (such as certain isotopes of uranium), it either splits the nucleus or is captured and transmutes the atom. In the case of 233U, the transmutations tend to produce useful nuclear fuels rather than transuranic waste. When 233U absorbs a neutron, it either fissions or becomes 234U. The chance of fissioning on absorption of a thermal neutron is about 92%; the capture-to-fission ratio of 233U is therefore about 1:12, which is better than the corresponding capture-to-fission ratios of 235U (about 1:6), or of 239Pu and 241Pu (both about 1:3).[5][18] The result is less transuranic waste than in a reactor using the uranium-plutonium fuel cycle.
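
These capture-to-fission ratios translate directly into the probability that an absorbed thermal neutron produces a heavier nuclide instead of fission. A short sketch (Python, using only the ratios cited above):

    # Probability that a thermal-neutron absorption ends in capture (breeding toward
    # transuranics) rather than fission, from the capture:fission ratios above.
    ratios = {"U-233": 1/12, "U-235": 1/6, "Pu-239": 1/3, "Pu-241": 1/3}
    for nuclide, alpha in ratios.items():
        p_capture = alpha / (1 + alpha)   # capture / (capture + fission)
        print(f"{nuclide}: {p_capture:.1%} of absorptions are captures")
    # U-233: ~7.7%, U-235: ~14.3%, Pu-239/Pu-241: ~25%, hence less transuranic
    # waste in the thorium cycle.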

Transmutations in the thorium fuel cycle (originally a color-coded nuclide chart marking half-lives and fissile nuclides; that formatting does not survive here):
  • Np: 237Np
  • U: 231U, 232U, 233U, 234U, 235U, 236U, 237U
  • Pa: 231Pa, 232Pa, 233Pa, 234Pa
  • Th: 230Th, 231Th, 232Th, 233Th

234U, like most actinides with an even number of neutrons, is not fissile, but neutron capture produces fissile 235U. If the fissile isotope fails to fission on neutron capture, it produces 236U, 237Np, 238Pu, and eventually fissile 239Pu and heavier isotopes of plutonium. The 237Np can be removed and stored as waste or retained and transmuted to plutonium, where more of it fissions, while the remainder becomes 242Pu, then americium and curium, which in turn can be removed as waste or returned to reactors for further transmutation and fission.

However, the 231Pa (half-life 3.27×10^4 years) formed via (n,2n) reactions with 232Th (yielding 231Th, which decays to 231Pa), while not a transuranic waste, is a major contributor to the long-term radiotoxicity of spent nuclear fuel.

Uranium-232 contamination

232U is also formed in this process, via (n,2n) reactions between fast neutrons and 233U, 233Pa, and 232Th:

232Th (n,2n) 231Th → (β−) 231Pa, then 231Pa (n,γ) 232Pa → (β−) 232U
233Pa (n,2n) 232Pa → (β−) 232U
233U (n,2n) 232U

Unlike most even-numbered heavy isotopes, 232U is also a fissile fuel, fissioning just over half the time when it absorbs a thermal neutron.[19] 232U has a relatively short half-life (68.9 years), and some decay products emit high-energy gamma radiation, such as 224Ra, 212Bi and particularly 208Tl. The full decay chain, along with half-lives and relevant gamma energies, was shown in a chart that does not survive here.

[Chart: the 4n decay chain of 232Th, commonly called the "thorium series".]

232U decays to 228Th, where it joins the decay chain of 232Th.

Thorium-cycle fuels produce hard gamma emissions, which damage electronics, limiting their use in bombs. 232U cannot be chemically separated from 233U in used nuclear fuel; however, chemical separation of thorium from uranium removes the decay product 228Th and the radiation from the rest of the decay chain, which gradually builds up again as 228Th reaccumulates. The contamination could also be avoided by using a molten-salt breeder reactor and separating the 233Pa before it decays into 233U.[3] The hard gamma emissions also create a radiological hazard which requires remote handling during reprocessing.

Nuclear fuel

As a fertile material, thorium is similar to 238U, the major component of natural and depleted uranium. The thermal neutron absorption cross section (σa) and resonance integral (the average of neutron cross sections over intermediate neutron energies) for 232Th are about three and one third times the respective values for 238U.

Advantages

The primary physical advantage of thorium fuel is that it uniquely makes possible a breeder reactor that runs with slow neutrons, otherwise known as a thermal breeder reactor.[5] These reactors are often considered simpler than the more traditional fast-neutron breeders. Although the thermal neutron fission cross section (σf) of the resulting 233U is comparable to that of 235U and 239Pu, it has a much lower capture cross section (σγ) than the latter two fissile isotopes, providing fewer non-fissile neutron absorptions and improved neutron economy. The ratio of neutrons released per neutron absorbed (η) in 233U is greater than two over a wide range of energies, including the thermal spectrum. A breeding reactor in the uranium-plutonium cycle needs to use fast neutrons, because in the thermal spectrum one neutron absorbed by 239Pu on average leads to less than two neutrons.
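
The breeding criterion here is simply η > 2: one neutron to continue the chain, one to convert a fertile nucleus, plus margin for losses. A minimal sketch (Python; the illustrative η values are common textbook thermal-spectrum figures and an assumed loss allowance, not data from this article):

    # Thermal-breeding feasibility test: eta must exceed 2 plus parasitic losses.
    eta_thermal = {"U-233": 2.29, "U-235": 2.07, "Pu-239": 1.9}  # illustrative thermal-spectrum values
    losses = 0.2   # assumed allowance for leakage and parasitic absorption

    for fuel, eta in eta_thermal.items():
        verdict = "can breed thermally" if eta > 2 + losses else "cannot breed thermally"
        print(f"{fuel}: eta = {eta} -> {verdict}")
    # Only U-233 clears the bar, which is why thorium uniquely enables a thermal breeder.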

Thorium is estimated to be about three to four times more abundant than uranium in Earth's crust,[20] although present knowledge of reserves is limited. Current demand for thorium has been satisfied as a by-product of rare-earth extraction from monazite sands. Notably, there is very little thorium dissolved in seawater, so seawater extraction is not viable, as it is with uranium. Using breeder reactors, known thorium and uranium resources can both generate world-scale energy for thousands of years.

Thorium-based fuels also display favorable physical and chemical properties that improve reactor and repository performance. Compared to the predominant reactor fuel, uranium dioxide (UO2), thorium dioxide (ThO2) has a higher melting point, higher thermal conductivity, and a lower coefficient of thermal expansion. Thorium dioxide also exhibits greater chemical stability and, unlike uranium dioxide, does not further oxidize.[5]

Because the 233U produced in thorium fuels is significantly contaminated with 232U in proposed power reactor designs, thorium-based used nuclear fuel possesses inherent proliferation resistance. 232U cannot be chemically separated from 233U and has several decay products that emit high-energy gamma radiation. These high-energy photons are a radiological hazard that necessitates remote handling of separated uranium and aids in the passive detection of such materials.

The long-term (on the order of roughly 10^3 to 10^6 years) radiological hazard of conventional uranium-based used nuclear fuel is dominated by plutonium and other minor actinides, after which long-lived fission products become significant contributors again. A single neutron capture in 238U is sufficient to produce transuranic elements, whereas five captures are generally necessary to do so from 232Th. 98–99% of thorium-cycle fuel nuclei would fission at either 233U or 235U, so fewer long-lived transuranics are produced. Because of this, thorium is a potentially attractive alternative to uranium in mixed oxide (MOX) fuels to minimize the generation of transuranics and maximize the destruction of plutonium.[21]
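
The one-capture versus five-captures contrast can be quantified with the capture probabilities implied by the ratios given earlier. The sketch below (Python) assumes, for simplicity, that the even-mass intermediates 234U and 236U always capture rather than fission, so only the per-absorption fission odds of 233U and 235U matter:

    # Chance that a Th-232-bred nucleus climbs all the way to a transuranic (Np-237)
    # instead of fissioning along the way. Assumption: U-234 and U-236 capture with
    # probability ~1, since they are not thermally fissile.
    p_capture_U233 = (1/12) / (1 + 1/12)   # ~0.077, from the 1:12 ratio above
    p_capture_U235 = (1/6) / (1 + 1/6)     # ~0.143, from the 1:6 ratio above
    p_reach_transuranic = p_capture_U233 * 1.0 * p_capture_U235 * 1.0
    print(f"{p_reach_transuranic:.1%}")    # ~1.1%, i.e. 98-99% of nuclei fission first
    # Compare U-238, where a single capture already yields a transuranic (Np-239 -> Pu-239).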

Disadvantages

There are several challenges to the application of thorium as a nuclear fuel, particularly for solid fuel reactors:

In contrast to uranium, naturally occurring thorium is effectively mononuclidic and contains no fissile isotopes; fissile material, generally 233U, 235U or plutonium, must be added to achieve criticality. This, along with the high sintering temperature necessary to make thorium-dioxide fuel, complicates fuel fabrication. Oak Ridge National Laboratory experimented with thorium tetrafluoride as fuel in a molten salt reactor from 1964–1969, which was expected to be easier to process and to separate from contaminants that slow or stop the chain reaction.

In an open fuel cycle (i.e. utilizing 233U in situ), higher burnup is necessary to achieve a favorable neutron economy. Although thorium dioxide performed well at burnups of 170,000 MWd/t and 150,000 MWd/t at Fort St. Vrain Generating Station and AVR, respectively,[5] challenges complicate achieving this in light water reactors (LWR), which compose the vast majority of existing power reactors.

Although in a once-through thorium fuel cycle thorium-based fuels produce far fewer long-lived transuranics than uranium-based fuels, some long-lived actinide products constitute a long-term radiological impact, especially 231Pa and 233U.[16] In a closed cycle, 233U and 231Pa can be reprocessed. 231Pa is also considered an excellent burnable poison absorber in light water reactors.[22]

Another challenge associated with the thorium fuel cycle is the comparatively long interval over which 232Th breeds to 233U. The half-life of 233Pa is about 27 days, an order of magnitude longer than the half-life of 239Np. As a result, substantial 233Pa develops in thorium-based fuels. 233Pa is a significant neutron absorber and, although it eventually breeds into fissile 235U, this requires two more neutron absorptions, which degrades neutron economy and increases the likelihood of transuranic production.

Alternatively, if solid thorium is used in a closed fuel cycle in which 233U is recycled, remote handling is necessary for fuel fabrication because of the high radiation levels resulting from the decay products of 232U. This is also true of recycled thorium because of the presence of 228Th, which is part of the 232U decay sequence. Further, unlike proven uranium fuel recycling technology (e.g. PUREX), recycling technology for thorium (e.g. THOREX) is only under development.

Although the presence of 232U complicates matters, there are public documents showing that 233U has been used once in a nuclear weapon test. The United States tested a composite 233U-plutonium bomb core in the MET (Military Effects Test) blast during Operation Teapot in 1955, though with a much lower yield than expected.[23]

Advocates for liquid core and molten salt reactors such as LFTRs claim that these technologies negate thorium's disadvantages present in solid-fuelled reactors. As only two liquid-core fluoride salt reactors have been built (the ORNL ARE and MSRE) and neither has used thorium, it is hard to validate the exact benefits.[5]

Thorium-fuelled reactors

Thorium fuels have fueled several different reactor types, including light water reactors, heavy water reactors, high temperature gas reactors, sodium-cooled fast reactors, and molten salt reactors.[24]

List of thorium-fueled reactors

From IAEA TECDOC-1450 "Thorium Fuel Cycle – Potential Benefits and Challenges", Table 1: Thorium utilization in different experimental and power reactors.[5] Additionally from Energy Information Administration, "Spent Nuclear Fuel Discharges from U. S. Reactors", Table B4: Dresden 1 Assembly Class.[25]

Name (country): reactor type; power; fuel; operation period

  • Dresden Unit 1 (United States): BWR; 197 MW(e); ThO2 corner rods, UO2 clad in Zircaloy-2 tube; 1960–1978
  • AVR (Germany, West): HTGR, experimental (pebble bed reactor); 15 MW(e); Th + 235U driver fuel, coated fuel particles, oxide & dicarbides; 1967–1988
  • THTR-300 (Germany, West): HTGR, power (pebble type); 300 MW(e); Th + 235U driver fuel, coated fuel particles, oxide & dicarbides; 1985–1989
  • Lingen (Germany, West): BWR irradiation-testing; 60 MW(e); test fuel, (Th,Pu)O2 pellets; 1968–1973
  • Dragon, OECD-Euratom (UK, also Sweden, Norway and Switzerland): HTGR, experimental (pin-in-block design); 20 MWt; Th + 235U driver fuel, coated fuel particles, oxide & dicarbides; 1966–1973
  • Peach Bottom (United States): HTGR, experimental (prismatic block); 40 MW(e); Th + 235U driver fuel, coated fuel particles, oxide & dicarbides; 1966–1972
  • Fort St Vrain (United States): HTGR, power (prismatic block); 330 MW(e); Th + 235U driver fuel, coated fuel particles, dicarbide; 1976–1989
  • MSRE, ORNL (United States): MSR; 7.5 MWt; 233U molten fluorides; 1964–1969
  • BORAX-IV & Elk River Station (United States): BWR (pin assemblies); 2.4 MW(e) and 24 MW(e); Th + 235U driver fuel, oxide pellets; 1963–1968
  • Shippingport (United States): LWBR/PWR (pin assemblies); 100 MW(e); Th + 233U driver fuel, oxide pellets; 1977–1982
  • Indian Point 1 (United States): LWBR/PWR (pin assemblies); 285 MW(e); Th + 233U driver fuel, oxide pellets; 1962–1980
  • SUSPOP/KSTR KEMA (Netherlands): aqueous homogeneous suspension (pin assemblies); 1 MWt; Th + HEU, oxide pellets; 1974–1977
  • NRX & NRU (Canada): MTR (pin assemblies); 20 MW and 200 MW; Th + 235U test fuel; 1947 (NRX) and 1957 (NRU), irradiation-testing of a few fuel elements
  • CIRUS, DHRUVA & KAMINI (India): MTR, thermal; 40 MWt, 100 MWt and 30 kWt (low power, research); Al + 233U driver fuel, 'J' rod of Th & ThO2, 'J' rod of ThO2; 1960–2010 (CIRUS), others in operation
  • KAPS 1 & 2, KGS 1 & 2, RAPS 2, 3 & 4 (India): PHWR (pin assemblies); 220 MW(e); ThO2 pellets (for neutron flux flattening of the initial core after start-up); 1980 (RAPS 2) onward, continuing in all new PHWRs
  • FBTR (India): LMFBR (pin assemblies); 40 MWt; ThO2 blanket; 1985, in operation
  • Petten (Netherlands): High Flux Reactor thorium molten salt experiment; 45 MW(e)?; fuel unspecified; planned for 2024



https://en.wikipedia.org/wiki/Thorium_fuel_cycle


ENERGY AMPLIFIER

In nuclear physics, an energy amplifier is a novel type of nuclear power reactor, a subcritical reactor, in which an energetic particle beam is used to stimulate a reaction, which in turn releases enough energy to power the particle accelerator and leave an energy profit for power generation. The concept has more recently been referred to as an accelerator-driven system (ADS) or accelerator-driven sub-critical reactor.

None have ever been built.

Note. photon proton or hydrogen. scale/quantity/transforms/etc.. subcritical reactor important.

History[edit]

The concept is credited to Italian scientist Carlo Rubbia, a Nobel Prize-winning particle physicist and former director of CERN, Europe's international nuclear physics lab. He published a proposal for a power reactor based on a proton cyclotron accelerator with a beam energy of 800 MeV to 1 GeV, and a target with thorium as fuel and lead as a coolant. Rubbia's scheme also borrows from ideas developed by a group led by nuclear physicist Charles Bowman of the Los Alamos National Laboratory.[1]

Principle and feasibility[edit]

The energy amplifier first uses a particle accelerator (e.g. a linac, synchrotron, cyclotron, or FFAG) to produce a beam of high-energy (relativistic) protons. The beam is directed to smash into the nucleus of a heavy metal target, such as lead, thorium or uranium. Inelastic collisions between the proton beam and the target result in spallation, which produces twenty to thirty neutrons per event.[2] It might be possible to increase the neutron flux through the use of a neutron amplifier, a thin film of fissile material surrounding the spallation source; the use of neutron amplification in CANDU reactors has been proposed. While CANDU is a critical design, many of the concepts can be applied to a sub-critical system.[3][4] Thorium nuclei absorb neutrons, thus breeding fissile uranium-233, an isotope of uranium which is not found in nature. Moderated neutrons produce U-233 fission, releasing energy.
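
The "amplifier" arithmetic follows from subcritical multiplication. A minimal back-of-envelope sketch (Python) under assumed round numbers: 25 spallation neutrons per 1 GeV proton (within the 20–30 range above), keff = 0.98, about 2.5 neutrons and 200 MeV per fission; none of these specific values is fixed by the text.

    # Rough energy gain of an accelerator-driven subcritical core.
    E_proton_MeV = 1000.0   # 1 GeV beam
    n_spall = 25.0          # spallation neutrons per proton (assumed, within 20-30 above)
    k_eff = 0.98            # subcritical multiplication factor (assumed)
    nu = 2.5                # neutrons released per fission (typical value)
    E_fission_MeV = 200.0   # energy per fission (typical value)

    neutrons_per_proton = n_spall / (1 - k_eff)             # geometric-series neutron total
    fissions_per_proton = neutrons_per_proton * k_eff / nu  # rough fissions across all generations
    gain = fissions_per_proton * E_fission_MeV / E_proton_MeV
    print(f"gain ~ {gain:.0f}")  # ~100x: beam energy in, roughly a hundredfold thermal energy out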

This design is entirely plausible with currently available technology, but requires more study before it can be declared both practical and economical.

The OMEGA project ("options making extra gains from actinides and fission products"; Japanese: オメガ計画) is being studied in Japan as one methodology for accelerator-driven systems (ADS).[5]

Richard Garwin and Georges Charpak describe the energy amplifier in detail in their book "Megawatts and Megatons: A Turning Point in the Nuclear Age?" (2001) on pages 153-163.

Earlier, the general concept of the energy amplifier, namely an accelerator-driven sub-critical reactor, was covered in "The Second Nuclear Era" (1985) pages 62–64, by Alvin M. Weinberg and others.

Advantages[edit]

The concept has several potential advantages over conventional nuclear fission reactors:

  • Subcritical design means that the reaction could not run away — if anything went wrong, the reaction would stop and the reactor would cool down. A meltdown could however occur if the ability to cool the core was lost.
  • Thorium is an abundant element — much more so than uranium — reducing strategic and political supply issues and eliminating costly and energy-intensive isotope separation. There is enough thorium to generate energy for at least several thousand years at current consumption rates.[6]
  • The energy amplifier would produce very little plutonium, so the design is believed to be more proliferation-resistant than conventional nuclear power (although the question of uranium-233 as nuclear weapon material must be assessed carefully).
  • The possibility exists of using the reactor to consume plutonium, reducing the world stockpile of the very-long-lived element.
  • Less long-lived radioactive waste is produced; the waste material would decay after 500 years to the radioactive level of coal ash.
  • No new science is required; the technologies to build the energy amplifier have all been demonstrated. Building an energy amplifier requires only engineering effort, not fundamental research (unlike nuclear fusion proposals).
  • Power generation might be economical compared to current nuclear reactor designs if the total fuel cycle and decommissioning costs are considered.
  • The design could work on a relatively small scale, and has the potential to load-follow by modulating the proton beam, making it more suitable for countries without a well-developed power grid system.
  • Inherent safety and safe fuel transport could make the technology more suitable for developing countries as well as in densely populated areas.

Disadvantages[edit]

  • Each reactor needs its own facility (particle accelerator) to generate the high energy proton beam, which is very costly. Apart from linear particle accelerators, which are very expensive, no proton accelerator of sufficient power and energy (> ~12 MW at 1 GeV) has ever been built. Currently, the Spallation Neutron Source utilizes a 1.44 MW proton beam to produce its neutrons, with upgrades envisioned to 5 MW.[7] Its 1.1 billion USD cost included research equipment not needed for a commercial reactor.
  • The fuel material needs to be chosen carefully to avoid unwanted nuclear reactions. This implies a full-scale nuclear reprocessing plant associated with the energy amplifier.[8]


https://en.wikipedia.org/wiki/Energy_amplifier


An accelerator-driven subcritical reactor is a nuclear reactor design formed by coupling a substantially subcritical nuclear reactor core with a high-energy proton or electron accelerator. It could use thorium as a fuel, which is more abundant than uranium.[1]

The neutrons needed for sustaining the fission process would be provided by a particle accelerator producing neutrons by spallation or photo-neutron production. These neutrons activate the thorium, enabling fission without needing to make the reactor critical. One benefit of such reactors is the relatively short half-lives of their waste products. For proton accelerators, the high-energy proton beam impacts a molten lead target inside the core, chipping or "spalling" neutrons from the lead nuclei. These spallation neutrons convert fertile thorium to protactinium-233 and after 27 days into fissile uranium-233 and drive the fission reaction in the uranium.[1]

Thorium reactors can generate power from the plutonium residue left by uranium reactors. Thorium does not require significant refining, unlike uranium, and has a higher neutron yield per neutron absorbed.

https://en.wikipedia.org/wiki/Accelerator-driven_subcritical_reactor


Wednesday, August 11, 2021

08-11-2021-1647 - Nuclear transmutation

Nuclear transmutation is the conversion of one chemical element or an isotope into another chemical element.[1] Because any element (or isotope of one) is defined by its number of protons (and neutrons) in its atoms, i.e. in the atomic nucleus, nuclear transmutation occurs in any process where the number of protons or neutrons in the nucleus is changed.

A transmutation can be achieved either by nuclear reactions (in which an outside particle reacts with a nucleus) or by radioactive decay, where no outside cause is needed.

https://en.wikipedia.org/wiki/Nuclear_transmutation

Muon capture is the capture of a negative muon by a proton, usually resulting in production of a neutron and a neutrino, and sometimes a gamma photon.

https://en.wikipedia.org/wiki/Muon_capture

A breeder reactor is a nuclear reactor that generates more fissile material than it consumes.[1] Breeder reactors achieve this because their neutron economy is high enough to create more fissile fuel than they use, by irradiation of a fertile material, such as uranium-238 or thorium-232, that is loaded into the reactor along with fissile fuel. Breeders were at first found attractive because they made more complete use of uranium fuel than light water reactors, but interest declined after the 1960s as more uranium reserves were found,[2] and new methods of uranium enrichment reduced fuel costs.

https://en.wikipedia.org/wiki/Breeder_reactor

The Spallation Neutron Source (SNS) is an accelerator-based neutron source facility in the U.S. that provides the most intense pulsed neutron beams in the world for scientific research and industrial development.[1] Each year, this facility hosts hundreds of researchers from universities, national laboratories, and industry, who conduct basic and applied research and technology development using neutrons. SNS is part of Oak Ridge National Laboratory, which is managed by UT-Battelle for the United States Department of Energy (DOE). SNS is a DOE Office of Science user facility,[2] and it is open to scientists and researchers from all over the world.


https://en.wikipedia.org/wiki/Spallation_Neutron_Source

Neutron detection is the effective detection of neutrons entering a well-positioned detector. There are two key aspects to effective neutron detection: hardware and software. Detection hardware refers to the kind of neutron detector used (the most common today is the scintillation detector) and to the electronics used in the detection setup. Further, the hardware setup also defines key experimental parameters, such as source-detector distance, solid angle, and detector shielding. Detection software consists of analysis tools that perform tasks such as graphical analysis to measure the number and energies of neutrons striking the detector.
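
As an illustration of the software side, the sketch below (Python with NumPy; the data are wholly synthetic and illustrative) builds the kind of pulse-height histogram from which neutron counts and energies are read off:

    import numpy as np

    # Synthetic detector pulse heights (arbitrary units): a capture peak on a flat background.
    rng = np.random.default_rng(0)
    pulses = np.concatenate([
        rng.normal(loc=480.0, scale=25.0, size=2000),  # neutron events near an assumed peak
        rng.uniform(0.0, 1000.0, size=500),            # uniform background
    ])

    counts, edges = np.histogram(pulses, bins=50, range=(0.0, 1000.0))
    peak_bin = counts.argmax()
    print("peak channel:", (edges[peak_bin] + edges[peak_bin + 1]) / 2)
    print("counts in peak channel:", counts[peak_bin])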

https://en.wikipedia.org/wiki/Neutron_detection


A neutron reflector is any material that reflects neutrons. This refers to elastic scattering rather than to a specular reflection. The material may be graphite, beryllium, steel, tungsten carbide, gold, or other materials. A neutron reflector can make an otherwise subcritical mass of fissile material critical, or increase the amount of nuclear fission that a critical or supercritical mass will undergo. Such an effect was exhibited twice in accidents involving the Demon Core, a subcritical plutonium pit that went critical in two separate fatal incidents when the pit's surface was momentarily surrounded by too much neutron-reflective material.

In a uranium graphite chain reacting pile, the critical size may be considerably reduced by surrounding the pile with a layer of graphite, since such an envelope reflects many neutrons back into the pile.

To obtain a 30-year life span, the SSTAR nuclear reactor design calls for a moveable neutron reflector to be placed over the column of fuel. The reflector's slow downward travel over the column would cause the fuel to be burned from the top of the column to the bottom.

A reflector made of a light material like graphite or beryllium will also serve as a neutron moderator reducing neutron kinetic energy, while a heavy material like lead or lead-bismuth eutectic will have less effect on neutron velocity.

In power reactors, a neutron reflector reduces the non-uniformity of the power distribution in the peripheral fuel assemblies, reduces neutron leakage, and reduces coolant flow bypass of the core. By reducing neutron leakage, the reflector increases the reactivity of the core and reduces the amount of fuel necessary to keep the reactor critical for a long period. In light-water reactors, the neutron reflector is installed for the following purposes:

  • The neutron flux distribution is "flattened", i.e., the ratio of the average flux to the maximum flux is increased. Therefore, reflectors reduce the non-uniformity of the power distribution.
  • By increasing the neutron flux at the edge of the core, there is much better utilization of the peripheral fuel assemblies. This fuel, in the outer regions of the core, now contributes much more to the total power production.
  • The neutron reflector scatters back (or reflects) into the core many neutrons that would otherwise escape. The neutrons reflected back into the core are available for the chain reaction. This means that the minimum critical size of the reactor is reduced. Alternatively, if the core size is maintained, the reflector makes additional reactivity available for higher fuel burnup. The decrease in the critical size of the core is known as the reflector savings (a minimal numerical sketch follows this list).
  • Neutron reflectors reduce neutron leakage, i.e., the neutron fluence on the reactor pressure vessel.
  • Neutron reflectors reduce coolant flow bypass of the core.
  • Neutron reflectors serve as a thermal and radiation shield of the reactor core.
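
The "reflector savings" has a simple one-group reading: the reflected critical radius is the bare critical radius minus a savings term δ. A minimal sketch (Python; the material buckling and δ below are illustrative assumed values, not data from this article):

    import math

    # One-group picture: a bare sphere is critical when its geometric buckling (pi/R)^2
    # equals the material buckling B^2, so R_bare = pi / B. A reflector reduces the
    # required radius by the reflector savings delta: R_reflected = R_bare - delta.
    B = 0.06        # material buckling, 1/cm (assumed illustrative value)
    delta = 7.0     # reflector savings, cm (assumed illustrative value)

    R_bare = math.pi / B
    R_refl = R_bare - delta
    fuel_volume_ratio = (R_refl / R_bare) ** 3
    print(f"bare critical radius: {R_bare:.1f} cm, reflected: {R_refl:.1f} cm")
    print(f"critical fuel volume reduced to {fuel_volume_ratio:.0%} of the bare value")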


https://en.wikipedia.org/wiki/Neutron_reflector

https://en.wikipedia.org/wiki/Tamper_(nuclear_weapon)

https://en.wikipedia.org/wiki/Neutron_supermirror

https://en.wikipedia.org/wiki/Spallation_Neutron_Source

https://en.wikipedia.org/wiki/Thorium_fuel_cycle

https://en.wikipedia.org/wiki/AC_power

https://nikiyaantonbettey.blogspot.com/search?q=Nuclear+transmutation





Frankie Valli and the Four Seasons - Can't Take My Eyes off You


