Blog Archive

Wednesday, September 22, 2021

09-22-2021-1734 - Trans-Atlantic Pipeline 1800s-1900s (N/R, 2021)

Trans-Atlantic Pipeline 1800s-1900s

(N/R, 2021)



The Atlantic Bridge is a flight route from Gander, Newfoundland, Canada to Scotland, with a refueling stop in Iceland. 

During the Second World War, new bombers flew this route. Today, it is seldom used for commercial aviation, since modern jet airliners can fly a direct route from Canada or the United States to Europe without the need for a fueling stop. However, smaller aircraft which do not have the necessary range to make a direct crossing of the ocean still routinely use this route, or may alternatively stop in Greenland, typically via Narsarsuaq and Kulusuk or the Azores for refueling. The most common users of this route are ferry pilots delivering light aeroplanes (often six seats or less) to new owners.

This route is longer overall than the direct route and involves an extra landing and takeoff, which is costly in fuel terms.

https://en.wikipedia.org/wiki/Atlantic_Bridge_(flight_route)

https://en.wikipedia.org/wiki/Transatlantic_flight

Other uses of the seabed

Proper planning of a pipeline route has to factor in a wide range of human activities that make use of the seabed along the proposed route, or that are likely to do so in the future. They include the following:[2][8][12]

  • Other pipelines: If and where the proposed pipeline intersects an existing one, which is not uncommon, a bridging structure may be required at that juncture in order to cross it. This has to be done at a right angle. The juncture should be carefully designed so as to avoid interference between the two structures, either by direct physical contact or due to hydrodynamic effects.

https://en.wikipedia.org/wiki/Submarine_pipeline

In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element.

The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two.[1] When the length is considerably longer than the width and the thickness, the element is called a beam. For example, a closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness of the structure (known as the 'wall') is considerably smaller. A large diameter, but thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending.
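
For beams in particular, the behaviour sketched above is usually quantified with the Euler–Bernoulli flexure formulas; the statement below uses conventional textbook symbols rather than anything defined in this excerpt:

\sigma = \frac{M\,y}{I}, \qquad E I \,\frac{d^{4} w}{d x^{4}} = q(x)

where \sigma is the bending stress at distance y from the neutral axis, M the bending moment, I the second moment of area of the cross-section, E Young's modulus, w(x) the deflection of the beam, and q(x) the distributed load (for the closet-rod example, the weight of the clothes per unit length).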

In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object, such as the bending of rods,[2] the bending of beams,[1] the bending of plates,[3] the bending of shells,[2] and so on.

https://en.wikipedia.org/wiki/Bending


Electromagnetic or magnetic induction is the production of an electromotive force across an electrical conductor in a changing magnetic field.

Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell–Faraday equation, one of the four Maxwell equations in his theory of electromagnetism.
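
In symbols, and using standard notation rather than anything defined here, Faraday's law and its field (Maxwell–Faraday) form are usually written:

\mathcal{E} = -\frac{d\Phi_B}{dt}, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}

where \mathcal{E} is the induced electromotive force, \Phi_B the magnetic flux through the circuit, \mathbf{E} the electric field and \mathbf{B} the magnetic flux density; the minus sign encodes Lenz's law.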

Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators.

https://en.wikipedia.org/wiki/Electromagnetic_induction


A hydrogen turboexpander-generator, or generator-loaded expander for hydrogen gas, is an axial-flow turbine or radial expander for energy recovery, through which high-pressure hydrogen gas is expanded to produce work that is used to drive an electrical generator. It replaces the control valve or regulator where the pressure drops to the appropriate pressure for the low-pressure network. A turboexpander-generator can help recover energy losses and offset electrical requirements and CO2 emissions.[1]
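
A rough feel for how much power such a device can recover comes from the isentropic expansion work of an ideal gas. The sketch below is illustrative only; the flow rate, temperatures, pressures and efficiencies are assumed values, not figures from the cited article.

# Rough estimate of the electrical power recovered by a hydrogen
# turboexpander-generator replacing a pressure-letdown valve.
# Illustrative sketch: all numbers below are assumptions.

CP_H2 = 14300.0    # J/(kg*K), approximate specific heat of hydrogen
GAMMA = 1.41       # approximate heat capacity ratio of hydrogen

def expander_power(m_dot, t_in, p_in, p_out, eta_turbine=0.80, eta_generator=0.95):
    """Electrical power (W) from expanding hydrogen from p_in to p_out (Pa)."""
    # Ideal-gas isentropic outlet temperature
    t_out_ideal = t_in * (p_out / p_in) ** ((GAMMA - 1.0) / GAMMA)
    w_ideal = CP_H2 * (t_in - t_out_ideal)            # ideal specific work, J/kg
    return m_dot * w_ideal * eta_turbine * eta_generator

# Example: 0.1 kg/s let down from 60 bar to 20 bar at 300 K -> roughly 90 kW
print(expander_power(m_dot=0.1, t_in=300.0, p_in=60e5, p_out=20e5) / 1000.0, "kW")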

https://en.wikipedia.org/wiki/Hydrogen_turboexpander-generator


A transatlantic flight is the flight of an aircraft across the Atlantic Ocean from Europe, Africa, South Asia, or the Middle East to North America, Central America, or South America, or vice versa. Such flights have been made by fixed-wing aircraft, airships, balloons, and other aircraft.

Early aircraft engines did not have the reliability needed for the crossing, nor the power to lift the required fuel. There are difficulties navigating over featureless expanses of water for thousands of miles, and the weather, especially in the North Atlantic, is unpredictable. Since the middle of the 20th century, however, transatlantic flight has become routine, for commercial, military, diplomatic, and other purposes. Experimental flights (in balloons, small aircraft, etc.) present challenges for transatlantic fliers.

History

The idea of transatlantic flight came about with the advent of the hot air balloon. The balloons of the period were inflated with coal gas, a moderate lifting medium compared to hydrogen or helium, but with enough lift to use the winds that would later be known as the Jet Stream. In 1859, John Wise built an enormous aerostat named the Atlantic, intending to cross the Atlantic. The flight lasted less than a day, crash-landing in Henderson, New York. Thaddeus S. C. Lowe prepared a massive balloon of 725,000 cubic feet (20,500 m3) called the City of New York to take off from Philadelphia in 1860, but was interrupted by the onset of the American Civil War in 1861. The first successful transatlantic flight in a balloon was the Double Eagle II from Presque Isle, Maine, to Miserey, near Paris in 1978.

First transatlantic flights

Alcock and Brown made the first non-stop transatlantic flight in June 1919. They took off from St John's, Newfoundland, Canada, and landed in Clifden, County Galway, Ireland.

In April 1913 the London newspaper The Daily Mail offered a prize of £10,000[1] (£470,000 in 2021[2]) to

"the aviator who shall first cross the Atlantic in an aeroplane in flight from any point in the United States of America, Canada or Newfoundland and any point in Great Britain or Ireland" in 72 continuous hours.[3]

The competition was suspended with the outbreak of World War I in 1914 but reopened after Armistice was declared in 1918.[3] The war saw tremendous advances in aerial capabilities, and a real possibility of transatlantic flight by aircraft emerged.

Between 8 and 31 May 1919, the Curtiss seaplane NC-4 made a crossing of the Atlantic flying from the U.S. to Newfoundland, then to the Azores, and on to mainland Portugal and finally the United Kingdom. The whole journey took 23 days, with six stops along the way. A trail of 53 "station ships" across the Atlantic gave the aircraft points to navigate by. This flight was not eligible for the Daily Mail prize since it took more than 72 consecutive hours and also because more than one aircraft was used in the attempt.[4]

There were four teams competing for the first non-stop flight across the Atlantic. They were Australian pilot Harry Hawker with observer Kenneth Mackenzie-Grieve in a single-engine Sopwith Atlantic; Frederick Raynham and C. W. F. Morgan in a Martinsyde; the Handley Page Group, led by Mark Kerr; and the Vickers entry John Alcock and Arthur Whitten Brown. Each group had to ship its aircraft to Newfoundland and make a rough field for the takeoff.[5][6]

Hawker and Mackenzie-Grieve made the first attempt on 18 May, but engine failure brought them down in the ocean where they were rescued. Raynham and Morgan also made an attempt on 18 May but crashed on takeoff due to the high fuel load. The Handley Page team was in the final stages of testing its aircraft for the flight in June, but the Vickers group was ready earlier.[5][6]

The first transatlantic flight by rigid airship, and the first return transatlantic flight, was made just a couple of weeks after the transatlantic flight of Alcock and Brown, on 2 July 1919. Major George Herbert Scott of the Royal Air Force flew the airship R34 with his crew and passengers from RAF East Fortune, Scotland to Mineola, New York (on Long Island), covering a distance of about 3,000 miles (4,800 km) in about four and a half days.

Transatlantic routes

Unlike over land, transatlantic flights use standardized aircraft routes called North Atlantic Tracks (NATs). These change daily in position (although altitudes are standardized) to compensate for weather, particularly the jet stream tailwinds and headwinds, which may be substantial at cruising altitudes and have a strong influence on trip duration and fuel economy. Eastbound flights generally operate during night-time hours, while westbound flights generally operate during daytime hours, for passenger convenience. The eastbound flow, as it is called, generally makes European landfall from about 0600UT to 0900UT. The westbound flow generally operates within a 1200–1500UT time slot. Restrictions on how far a given aircraft may be from an airport also play a part in determining its route; in the past, airliners with three or more engines were not restricted, but a twin-engine airliner was required to stay within a certain distance of airports that could accommodate it (since a single engine failure in a four-engine aircraft is less crippling than a single engine failure in a twin). Modern aircraft with two engines flying transatlantic (the most common models used for transatlantic service being the Airbus A330, Boeing 767, Boeing 777, and Boeing 787) have to be ETOPS certified.


https://en.wikipedia.org/wiki/Transatlantic_flight


https://en.wikipedia.org/wiki/Max_Aitken,_1st_Baron_Beaverbrook

https://en.wikipedia.org/wiki/Edward_Wentworth_Beatty


Jet streams are fast-flowing, narrow, meandering air currents in the atmospheres of some planets, including Earth.[1] On Earth, the main jet streams are located near the altitude of the tropopause and are westerly winds (flowing west to east). Jet streams may start, stop, split into two or more parts, combine into one stream, or flow in various directions including opposite to the direction of the remainder of the jet.

Overview

The strongest jet streams are the polar jets, at 9–12 km (30,000–39,000 ft) above sea level, and the higher altitude and somewhat weaker subtropical jets at 10–16 km (33,000–52,000 ft). The Northern Hemisphere and the Southern Hemisphere each have a polar jet and a subtropical jet. The northern hemisphere polar jet flows over the middle to northern latitudes of North America, Europe, and Asia and their intervening oceans, while the southern hemisphere polar jet mostly circles Antarctica, both all year round.

Jet streams are the product of two factors: the atmospheric heating by solar radiation that produces the large-scale Polar, Ferrel, and Hadley circulation cells, and the action of the Coriolis force acting on those moving masses. The Coriolis force is caused by the planet's rotation on its axis. On other planets, internal heat rather than solar heating drives their jet streams. The Polar jet stream forms near the interface of the Polar and Ferrel circulation cells; the subtropical jet forms near the boundary of the Ferrel and Hadley circulation cells.[2]

Other jet streams also exist. During the Northern Hemisphere summer, easterly jets can form in tropical regions, typically where dry air encounters more humid air at high altitudes. Low-level jets also are typical of various regions such as the central United States. There are also jet streams in the thermosphere.

Meteorologists use the location of some of the jet streams as an aid in weather forecasting. The main commercial relevance of the jet streams is in air travel, as flight time can be dramatically affected by flying with or against the flow. Often, airlines work to fly 'with' the jet stream to obtain significant fuel cost and time savings. Dynamic North Atlantic Tracks are one example of how airlines and air traffic control work together to accommodate the jet stream and winds aloft, resulting in the maximum benefit for airlines and other users. Clear-air turbulence, a potential hazard to aircraft passenger safety, is often found in a jet stream's vicinity, but it does not substantially alter flight times.

Discovery

The first indications of this phenomenon came from American professor Elias Loomis in the 1800s, when he proposed a powerful air current in the upper air blowing west to east across the United States as an explanation for the behaviour of major storms.[3] After the 1883 eruption of the Krakatoa volcano, weather watchers tracked and mapped the effects on the sky over several years. They labelled the phenomenon the "equatorial smoke stream".[4][5] In the 1920s, a Japanese meteorologist, Wasaburo Oishi, detected the jet stream from a site near Mount Fuji.[6][7] He tracked pilot balloons, also known as pibals (balloons used to determine upper level winds),[8] as they rose into the atmosphere. Oishi's work largely went unnoticed outside Japan because it was published in Esperanto. American pilot Wiley Post, the first man to fly around the world solo in 1933, is often given some credit for discovery of jet streams. Post invented a pressurized suit that let him fly above 6,200 metres (20,300 ft). In the year before his death, Post made several attempts at a high-altitude transcontinental flight, and noticed that at times his ground speed greatly exceeded his air speed.[9] German meteorologist Heinrich Seilkopf is credited with coining a special term, Strahlströmung (literally "jet current"), for the phenomenon in 1939.[10][11] Many sources credit real understanding of the nature of jet streams to regular and repeated flight-path traversals during World War II. Flyers consistently noticed westerly tailwinds in excess of 160 km/h (100 mph) in flights, for example, from the US to the UK.[12] Similarly in 1944 a team of American meteorologists in Guam, including Reid Bryson, had enough observations to forecast very high west winds that would slow World War II bombers travelling to Japan.[13]

Description

General configuration of the polar and subtropical jet streams
Cross section of the subtropical and polar jet streams by latitude

Polar jet streams are typically located near the 250 hPa (about 1/4 atmosphere) pressure level, or seven to twelve kilometres (23,000 to 39,000 ft) above sea level, while the weaker subtropical jet streams are much higher, between 10 and 16 kilometres (33,000 and 52,000 ft). Jet streams wander laterally and change dramatically in altitude. They form near breaks in the tropopause, at the transitions between the Polar, Ferrel and Hadley circulation cells, whose circulation, with the Coriolis force acting on those moving masses, drives the jet streams. The Polar jets, at lower altitude, and often intruding into mid-latitudes, strongly affect weather and aviation.[14][15] The polar jet stream is most commonly found between latitudes 30° and 60° (closer to 60°), while the subtropical jet streams are located close to latitude 30°. These two jets merge at some locations and times, while at other times they are well separated. The northern Polar jet stream is said to "follow the sun" as it slowly migrates northward as that hemisphere warms, and southward again as it cools.[16][17]
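
As a quick check on those numbers, the isothermal barometric formula with an assumed mean scale height (the value of H below is an assumption, not a figure from the article) places the 250 hPa surface inside the quoted altitude band:

import math

P0 = 1013.25   # hPa, mean sea-level pressure
H = 7.6        # km, assumed mean scale height of the lower atmosphere

def pressure_hpa(z_km):
    # Isothermal barometric formula: p(z) = p0 * exp(-z / H)
    return P0 * math.exp(-z_km / H)

for z in (7, 10, 12):
    print(f"{z} km -> {pressure_hpa(z):.0f} hPa")
# Roughly 400 hPa at 7 km, 270 hPa at 10 km and 210 hPa at 12 km,
# so the 250 hPa level sits near 10-11 km, within the 7-12 km range quoted above.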

The width of a jet stream is typically a few hundred kilometres or miles and its vertical thickness often less than five kilometres (16,000 feet).[18]

Meanders (Rossby Waves) of the Northern Hemisphere's polar jet stream developing (a), (b); then finally detaching a "drop" of cold air (c). Orange: warmer masses of air; pink: jet stream.

Jet streams are typically continuous over long distances, but discontinuities are common.[19] The path of the jet typically has a meandering shape, and these meanders themselves propagate eastward, at lower speeds than that of the actual wind within the flow. Each large meander, or wave, within the jet stream is known as a Rossby wave (planetary wave). Rossby waves are caused by changes in the Coriolis effect with latitude.[citation needed] Shortwave troughs are smaller-scale waves, 1,000 to 4,000 kilometres (600–2,500 mi) long,[20] superimposed on the Rossby waves, which move along through the flow pattern around the large-scale, or longwave, "ridges" and "troughs" within the Rossby waves.[21] Jet streams can split into two when they encounter an upper-level low that diverts a portion of the jet stream under its base, while the remainder of the jet moves by to its north.

The wind speeds are greatest where temperature differences between air masses are greatest, and often exceed 92 km/h (50 kn; 57 mph).[19] Speeds of 400 km/h (220 kn; 250 mph) have been measured.[22]

The jet stream moves from west to east, bringing changes of weather.[23] Meteorologists now understand that the path of jet streams affects cyclonic storm systems at lower levels in the atmosphere, and so knowledge of their course has become an important part of weather forecasting. For example, in 2007 and 2012, Britain experienced severe flooding as a result of the polar jet staying south for the summer.[24][25][26]

Cause

Highly idealised depiction of the global circulation. The upper-level jets tend to flow latitudinally along the cell boundaries.

In general, winds are strongest immediately under the tropopause (except locally, during tornadoes, tropical cyclones, or other anomalous situations). If two air masses of different temperatures or densities meet, the resulting pressure difference caused by the density difference (which ultimately causes wind) is highest within the transition zone. The wind does not flow directly from the hot to the cold area, but is deflected by the Coriolis effect and flows along the boundary of the two air masses.[27]

All these facts are consequences of the thermal wind relation. The balance of forces acting on an atmospheric air parcel in the vertical direction is primarily between the gravitational force acting on the mass of the parcel and the buoyancy force, or the difference in pressure between the top and bottom surfaces of the parcel. Any imbalance between these forces results in the acceleration of the parcel in the imbalance direction: upward if the buoyant force exceeds the weight, and downward if the weight exceeds the buoyancy force. The balance in the vertical direction is referred to as hydrostatic. Beyond the tropics, the dominant forces act in the horizontal direction, and the primary struggle is between the Coriolis force and the pressure gradient force. Balance between these two forces is referred to as geostrophic. Given both hydrostatic and geostrophic balance, one can derive the thermal wind relation: the vertical gradient of the horizontal wind is proportional to the horizontal temperature gradient. If two air masses, one cold and dense to the North and the other hot and less dense to the South, are separated by a vertical boundary and that boundary should be removed, the difference in densities will result in the cold air mass slipping under the hotter and less dense air mass. The Coriolis effect will then cause poleward-moving mass to deviate to the East, while equatorward-moving mass will deviate toward the west. The general trend in the atmosphere is for temperatures to decrease in the poleward direction. As a result, winds develop an eastward component and that component grows with altitude. Therefore, the strong eastward moving jet streams are in part a simple consequence of the fact that the Equator is warmer than the North and South poles.[27]
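
Written out, the thermal wind relation referred to above takes the following form in pressure coordinates (standard meteorological notation, not symbols defined in this excerpt):

\frac{\partial \mathbf{V}_g}{\partial \ln p} = -\frac{R}{f}\,\hat{\mathbf{k}} \times \nabla_p T

where \mathbf{V}_g is the geostrophic wind, p pressure, R the gas constant for dry air, f = 2\Omega \sin\varphi the Coriolis parameter and \nabla_p T the horizontal temperature gradient on a constant-pressure surface: the vertical shear of the horizontal wind is proportional to the horizontal temperature gradient, as the paragraph states.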

Polar jet stream

The thermal wind relation does not explain why the winds are organized into tight jets, rather than distributed more broadly over the hemisphere. One factor that contributes to the creation of a concentrated polar jet is the undercutting of sub-tropical air masses by the more dense polar air masses at the polar front. This causes a sharp north-south pressure (south-north potential vorticity) gradient in the horizontal plane, an effect which is most significant during double Rossby wave breaking events.[28] At high altitudes, lack of friction allows air to respond freely to the steep pressure gradient with low pressure at high altitude over the pole. This results in the formation of planetary wind circulations that experience a strong Coriolis deflection and thus can be considered 'quasi-geostrophic'. The polar front jet stream is closely linked to the frontogenesis process in midlatitudes, as the acceleration/deceleration of the air flow induces areas of low/high pressure respectively, which link to the formation of cyclones and anticyclones along the polar front in a relatively narrow region.[19]

Subtropical jet

A second factor which contributes to a concentrated jet is more applicable to the subtropical jet which forms at the poleward limit of the tropical Hadley cell, and to first order this circulation is symmetric with respect to longitude. Tropical air rises to the tropopause, and moves poleward before sinking; this is the Hadley cell circulation. As it does so it tends to conserve angular momentum, since friction with the ground is slight. Air masses that begin moving poleward are deflected eastward by the Coriolis force (true for either hemisphere), which for poleward moving air implies an increased westerly component of the winds[29] (note that deflection is leftward in the southern hemisphere).
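
The size of this angular-momentum effect can be estimated with the textbook idealisation of a parcel that starts at rest over the Equator and conserves absolute angular momentum as it moves poleward; the short calculation below is that idealisation only, not a figure from the article:

import math

OMEGA = 7.292e-5   # rad/s, Earth's rotation rate
A = 6.371e6        # m, Earth's mean radius

def zonal_wind(lat_deg):
    """Westerly wind (m/s) of a parcel moved from rest at the Equator to lat_deg,
    conserving absolute angular momentum M = (OMEGA*A*cos(lat) + u) * A*cos(lat)."""
    lat = math.radians(lat_deg)
    return OMEGA * A * math.sin(lat) ** 2 / math.cos(lat)

print(zonal_wind(30))   # about 134 m/s of westerly wind at 30 degrees latitude

The observed subtropical jet is considerably weaker than this idealised value, because friction and eddies prevent perfect conservation of angular momentum.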

Other planets

Jupiter's atmosphere has multiple jet streams, caused by the convection cells that form the familiar banded color structure; on Jupiter, these convection cells are driven by internal heating.[22] What controls the number of jet streams in a planetary atmosphere is an active area of research in dynamical meteorology. In models, as one increases the planetary radius, holding all other parameters fixed,[clarification needed] the number of jet streams decreases.[citation needed]

Some effects

Hurricane protection

Hurricane Flossie over Hawaii in 2007. Note the large band of moisture that developed East of Hawaii Island that came from the hurricane.

The subtropical jet stream rounding the base of the mid-oceanic upper trough is thought[30] to be one of the reasons most of the Hawaiian Islands have been resistant to the long list of Hawaii hurricanes that have approached. For example, when Hurricane Flossie (2007) approached and dissipated just before reaching landfall, the U.S. National Oceanic and Atmospheric Administration (NOAA) cited vertical wind shear, as evidenced in the photo.[30]

Uses

On Earth, the northern polar jet stream is the most important one for aviation and weather forecasting, as it is much stronger and at a much lower altitude than the subtropical jet streams and also covers many countries in the Northern Hemisphere, while the southern polar jet stream mostly circles Antarctica and sometimes the southern tip of South America. Thus, the term jet stream in these contexts usually implies the northern polar jet stream.

Aviation

Flights between Tokyo and Los Angeles, using the jet stream eastbound and a great circle route westbound.

The location of the jet stream is extremely important for aviation. Commercial use of the jet stream began on 18 November 1952, when Pan Am flew from Tokyo to Honolulu at an altitude of 7,600 metres (24,900 ft). It cut the trip time by over one-third, from 18 to 11.5 hours.[31] Not only does it cut time off the flight, it also nets fuel savings for the airline industry.[32][33] Within North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jet stream, or increased by more than that amount if it must fly west against it.

Associated with jet streams is a phenomenon known as clear-air turbulence (CAT), caused by vertical and horizontal wind shear associated with the jet streams.[34] The CAT is strongest on the cold air side of the jet,[35] next to and just under the axis of the jet.[36] Clear-air turbulence can cause aircraft to plunge and so presents a passenger safety hazard that has caused fatal accidents, such as the death of one passenger on United Airlines Flight 826.[37][38]

Possible future power generation

Scientists are investigating ways to harness the wind energy within the jet stream. According to one estimate of the potential wind energy in the jet stream, only one percent would be needed to meet the world's current energy needs. The required technology would reportedly take 10–20 years to develop.[39] There are two major but divergent scientific articles about jet stream power. Archer & Caldeira[40] claim that the Earth's jet streams could generate a total power of 1,700 terawatts (TW), and that the climatic impact of harnessing this amount would be negligible. However, Miller, Gans, & Kleidon[41] claim that the jet streams could generate a total power of only 7.5 TW and that the climatic impact would be catastrophic.
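
For orientation on why these numbers are so large, the kinetic power flux through an area facing the wind scales with the cube of wind speed; the density and speeds below are assumed round numbers for jet-stream altitude, not values from the cited studies:

RHO_JET = 0.4              # kg/m^3, rough air density near 10-12 km (assumed)

for v in (30, 50, 70):     # m/s, plausible jet-stream wind speeds (assumed)
    power_density_kw = 0.5 * RHO_JET * v ** 3 / 1000.0   # P/A = 0.5*rho*v^3
    print(f"{v} m/s -> {power_density_kw:.1f} kW per square metre")
# About 5, 25 and 69 kW/m^2, versus roughly 0.6 kW/m^2 for a strong
# 10 m/s surface wind at sea-level density (~1.2 kg/m^3).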

Unpowered aerial attack

Near the end of World War II, from late 1944 until early 1945, the Japanese Fu-Go balloon bomb, a type of fire balloon, was designed as a cheap weapon intended to make use of the jet stream over the Pacific Ocean to reach the west coast of Canada and the United States. They were relatively ineffective as weapons, but they were used in one of the few attacks on North America during World War II, causing six deaths and a small amount of damage.[42] However, the Japanese were world leaders in biological weapons research at this time. The Japanese Imperial Army's Noborito Institute cultivated anthrax and plague Yersinia pestis; furthermore, it produced enough cowpox viruses to infect the entire United States.[43] The deployment of these biological weapons on fire balloons was planned in 1944.[44] Emperor Hirohito did not permit deployment of biological weapons on the basis of a report of President Staff Officer Umezu on 25 October 1944. Consequently, biological warfare using Fu-Go balloons was not implemented.[45]

Dust Bowl

Evidence suggests the jet stream was at least partly responsible for the widespread drought conditions during the 1930s Dust Bowl in the Midwest United States. Normally, the jet stream flows east over the Gulf of Mexico and turns northward pulling up moisture and dumping rain onto the Great Plains. During the Dust Bowl, the jet stream weakened and changed course traveling farther south than normal. This starved the Great Plains and other areas of the Midwest of rainfall, causing extraordinary drought conditions.[58]


https://en.wikipedia.org/wiki/Jet_stream


Notable transatlantic flights and attempts

1910s

Airship America failure
In October 1910, the American journalist Walter Wellman, who had in 1909 attempted to reach the North Pole by balloon, set out for Europe from Atlantic City in the dirigible America. A storm off Cape Cod sent him off course, and then engine failure forced him to ditch halfway between New York and Bermuda. Wellman, his crew of five – and the balloon's cat – were rescued by RMS Trent, a passing British ship. The Atlantic bid failed, but the distance covered, about 1,000 statute miles (1,600 km), was at the time a record for a dirigible.[61]
US Navy warships "strung out like a string of pearls" along the NC's flightpath (3rd leg)
First transatlantic flight
On 8–31 May 1919, the U.S. Navy Curtiss NC-4 flying boat under the command of Albert Read flew 4,526 statute miles (7,284 km) from Rockaway, New York, to Plymouth (England), via, among other stops, Trepassey (Newfoundland), Horta and Ponta Delgada (both Azores) and Lisbon (Portugal) in 53h 58m, spread over 23 days. The crossing from Newfoundland to the European mainland had taken 10 days 22 hours, with the total time in flight of 26h 46m. The longest non-stop leg of the journey, from Trepassey, Newfoundland, to Horta in the Azores, was 1,200 statute miles (1,900 km) and lasted 15h 18m.
Sopwith Atlantic failure
On 18 May 1919, the Australian Harry Hawker, together with navigator Kenneth Mackenzie Grieve, attempted to become the first to achieve a non-stop flight across the Atlantic Ocean. They set off from Mount Pearl, Newfoundland, in the Sopwith Atlantic biplane. After fourteen and a half hours of flight the engine overheated and they were forced to divert towards the shipping lanes: they found a passing freighter, the Danish Mary, established contact and crash-landed ahead of her. Mary's radio was out of order, so that it was not until six days later when the boat reached Scotland that word was received that they were safe. The wheels from the undercarriage, jettisoned soon after takeoff, were later recovered by local fishermen and are now in the Newfoundland Museum in St. John's.[62]
First non-stop transatlantic flight
On 14–15 June 1919, Capt. John Alcock and Lieut. Arthur Whitten Brown of the United Kingdom in a Vickers Vimy bomber, between islands, 1,960 nautical miles (3,630 km), from St. John's, Newfoundland, to Clifden, Ireland, in 16h 12m.
First east-to-west transatlantic flight
On 2 July 1919, Major George Herbert Scott of the Royal Air Force with his crew and passengers flies from RAF East Fortune, Scotland to Mineola, New York (on Long Island) in airship R34, covering a distance of about 3,000 statute miles (4,800 km) in about four and a half days. R34 then made the return trip to England arriving at RNAS Pulham in 75 hours, thus also completing the first double crossing of the Atlantic (east-west-east).

1920s

First flight across the South Atlantic
On 30 March–17 June 1922, Lieutenant Commander Sacadura Cabral and Commander Gago Coutinho of Portugal, using three Fairey IIID floatplanes (Lusitania, Portugal, and Santa Cruz), after two ditchings, with only internal means of navigation (the Coutinho-invented sextant with artificial horizon), from Lisbon, Portugal, to Rio de Janeiro, Brazil.[63]
First non-stop aircraft flight between European and American mainlands
In October 1924, the Zeppelin LZ-126 (later known as ZR-3 USS Los Angeles), flew from Germany to New Jersey with a crew commanded by Dr. Hugo Eckener, covering a distance of about 4,000 statute miles (6,400 km).[64]
First night-time flight across the Atlantic
On the night of 16–17 April 1927, the Portuguese aviators Sarmento de Beires, Jorge de Castilho and Manuel Gouveia, flew from the Bijagós islands, Portuguese Guinea to Fernando de Noronha island, Brazil in the Dornier Wal flying boat Argos.
First flight across the South Atlantic made by a non-European crew
On 28 April 1927, Brazilian João Ribeiro de Barros, with the assistance of João Negrão (co-pilot), Newton Braga (navigator), and Vasco Cinquini (mechanic), crossed the Atlantic in the hydroplane Jahú. The four aviators flew from Genoa, in Italy, to Santo Amaro (São Paulo), making stops in Spain, Gibraltar, Cape Verde, and Fernando de Noronha, in Brazilian territory.
Disappearance of L'Oiseau Blanc
On 8–9 May 1927, Charles Nungesser and François Coli attempted to cross the Atlantic from Paris to the US in a Levasseur PL-8 biplane L'Oiseau Blanc ("The White Bird"), but were lost.
First solo transatlantic flight and first non-stop fixed-wing aircraft flight between America and mainland Europe
On 20–21 May 1927, Charles A. Lindbergh flew his Ryan monoplane (named Spirit of St. Louis), 3,600 nautical miles (6,700 km), from Roosevelt Field, New York, to Paris–Le Bourget Airport, in 33½ hours.
First transatlantic air passenger
On 4–6 June 1927, the first transatlantic air passenger was Charles A. Levine. He was carried as a passenger by Clarence D. Chamberlin from Roosevelt Field, New York, to Eisleben, Germany, in a Wright-powered Bellanca.
First non-stop air crossing of the South Atlantic
On 14–15 October 1927, Dieudonne Costes and Joseph Le Brix, flying a Breguet 19, flew from Senegal to Brazil.
First non-stop fixed-wing aircraft westbound flight over the North Atlantic
On 12–13 April 1928, Ehrenfried Günther Freiherr von Hünefeld and Capt. Hermann Köhl of Germany and Comdr. James Fitzmaurice of Ireland, flew a Junkers W33 monoplane (named Bremen), 2,070 statute miles (3,330 km), from Baldonnell near Dublin, Ireland, to Labrador, in 36½ hours.[65]
First crossing of the Atlantic by a woman
On 17–18 June 1928, Amelia Earhart was a passenger on an aircraft piloted by Wilmer Stultz. Since most of the flight was on instruments for which Earhart had no training, she did not pilot the aircraft. Interviewed after landing, she said, "Stultz did all the flying – had to. I was just baggage, like a sack of potatoes. Maybe someday I'll try it alone."
Notable flight (around the world)
On 1–8 August 1929, in making the circumnavigation, Dr Hugo Eckener piloted the LZ 127 Graf Zeppelin across the Atlantic three times: from Germany 4,391 statute miles (7,067 km) east to west in four days from 1 August; return 4,391 statute miles (7,067 km) west to east in two days from 8 August; after completing the circumnavigation to Lakehurst, a final 4,391 statute miles (7,067 km) west to east landing 4 September, making three crossings in 34 days.[66]

1930s

First scheduled transatlantic passenger flights
From 1931 onwards, LZ 127 Graf Zeppelin operated the world's first scheduled transatlantic passenger flights, mainly between Germany and Brazil (64 such round trips overall), sometimes stopping in Spain, Miami, London, and Berlin.
First nonstop east-to-west fixed-wing aircraft flight between European and American mainlands
On 1–2 September 1930, Dieudonne Costes and Maurice Bellonte flew a Breguet 19 Super Bidon biplane (named Point d'Interrogation, Question Mark), 6,200 km from Paris to New York City.
First non-stop flight to exceed 5,000 miles distance
On 28–30 July 1931, Russell Norton Boardman and John Louis Polando flew a Bellanca Special J-300 high-wing monoplane named the Cape Cod from New York City's Floyd Bennett Field to Istanbul in 49 hours 20 minutes, completely crossing the North Atlantic and much of the Mediterranean Sea and establishing a straight-line distance record of 5,011.8 miles (8,065.7 km).[67][68]
First solo crossing of the South Atlantic
27–28 November 1931. Bert Hinkler flew from Canada to New York, then via the West Indies, Venezuela, Guiana, Brazil and the South Atlantic to Great Britain in a de Havilland Puss Moth.[69]
First solo crossing of the Atlantic by a woman
On 20 May 1932, Amelia Earhart set off from Harbour Grace, Newfoundland, intending to fly to Paris in her single engine Lockheed Vega 5b to emulate Charles Lindbergh's solo flight. After encountering storms and a burnt exhaust pipe, Earhart landed in a pasture at Culmore, north of Derry, Northern Ireland, ending a flight lasting 14h 56m.
First solo westbound crossing of the Atlantic
On 18–19 August 1932, Jim Mollison, flying a de Havilland Puss Moth, flew from Dublin to New Brunswick.
Lightest (empty weight) aircraft that crossed the Atlantic
On 7–8 May 1933, Stanisław Skarżyński made a solo flight across the South Atlantic, covering 3,582 kilometres (2,226 mi), in an RWD-5bis with an empty weight below 450 kilograms (990 lb). If the total takeoff weight is considered instead (as in FAI records), a longer-distance Atlantic crossing was made by the distance world record holder in that class (1,000–1,750 kg), a Piper PA-24 Comanche.
Mass flight
Notable mass transatlantic flight: On 1–15 July 1933, Gen. Italo Balbo of Italy led 24 Savoia-Marchetti S.55 seaplanes 6,100 statute miles (9,800 km), in a flight from Orbetello, Italy, to the Century of Progress International Exposition in Chicago, Illinois, in 47h 52m. The flight made six intermediate stops. Previously, Balbo had led a flight of 12 flying boats from Rome to Rio de Janeiro, Brazil, in December 1930 – January 1931, taking nearly a month.
First solo westbound crossing of the Atlantic by a woman and first person to solo westbound from England
On 4–5 September 1936, Beryl Markham, flying a Percival Vega Gull from Abingdon (then in Berkshire, now Oxfordshire), intended to fly to New York, but was forced down at Cape Breton Island, Nova Scotia, due to icing of fuel tank vents.
First transatlantic passenger service on heavier-than air aircraft
On 24 June 1939, Pan American inaugurated transatlantic passenger service between New York and Marseilles, France, using Boeing 314 flying boats. On 8 July 1939, a service began between New York and Southampton as well. A single fare was US$375.00 (US$6,909.00 in 2019 dollars). Scheduled landplane flights started in October 1945.

1940s

First transatlantic flight of non-rigid airships
On 1 June 1944, two K class blimps from Blimp Squadron 14 of the United States Navy (USN) completed the first transatlantic crossing by non-rigid airships.[70] On 28 May 1944, the two K-ships (K-123 and K-130) left South Weymouth, Massachusetts, and flew approximately 16 hours to Naval Station Argentia, Newfoundland. From Argentia, the blimps flew approximately 22 hours to Lajes Field on Terceira Island in the Azores. The final leg of the first transatlantic crossing was about a 20-hour flight from the Azores to Craw Field in Port Lyautey (Kenitra), French Morocco.[71]
First jet aircraft to cross the Atlantic Ocean
On 14 July 1948, six de Havilland Vampire F3s of No. 54 Squadron RAF, commanded by Wing Commander D S Wilson-MacDonald, DSO, DFC, flew via Stornoway, Iceland, and Labrador to Montreal on the first leg of a goodwill tour of the U.S. and Canada.

1950s

First jet aircraft to make a non-stop transatlantic flight
On 21 February 1951, an RAF English Electric Canberra B Mk 2 (serial number WD932) flown by Squadron Leader A Callard of the Aeroplane & Armament Experimental Establishment flew from Aldergrove, Northern Ireland, to Gander, Newfoundland. The flight covered almost 1,800 nautical miles (3,300 km) in 4h 37m. The aircraft was being flown to the U.S. to act as a pattern aircraft for the Martin B-57 Canberra.
First jet aircraft transatlantic passenger service
On 4 October 1958, British Overseas Airways Corporation (BOAC) flew the first jet airliner service using the de Havilland Comet, when G-APDC initiated the first transatlantic Comet 4 service and the first scheduled transatlantic passenger jet service in history, flying from London to New York with a stopover at Gander.


https://en.wikipedia.org/wiki/Transatlantic_flight


North Atlantic Tracks, officially titled the North Atlantic Organised Track System (NAT-OTS), is a structured set of transatlantic flight routes that stretch from the northeast of North America to western Europe across the Atlantic Ocean, within the North Atlantic airspace region. They ensure that aircraft are separated over the ocean, where there is little radar coverage. These heavily travelled routes are used by aircraft flying between North America and Europe, operating between the altitudes of 29,000 and 41,000 ft (8,800 and 12,500 m) inclusive. Entrance to and movement along these tracks are controlled by special oceanic control centres to maintain separation between aircraft. The primary purpose of these routes is to allow air traffic control to effectively separate the aircraft. Because of the volume of NAT traffic, allowing aircraft to choose their own co-ordinates would make the ATC task far more complex. The tracks are aligned so as to minimize headwinds and maximize tailwinds on the aircraft, which improves efficiency by reducing fuel burn and flight time. To make such efficiencies possible, the routes are created twice daily to take account of the shifting winds aloft and the principal traffic flow: eastward in the North American evening and westward twelve hours later.

North Atlantic Tracks for the westbound crossing of February 24, 2017, with the new RLAT Tracks shown in blue

History

The first implementation of an organised track system across the North Atlantic was in fact for commercial shipping, dating back to 1898 when the North Atlantic Track Agreement was signed. After World War II, increasing commercial airline traffic across the North Atlantic led to difficulties for ATC in separating aircraft effectively, and so in 1961 the first occasional use of NAT Tracks was made. In 1965, the publication of NAT Tracks became a daily feature, allowing controllers to force traffic onto fixed track structures in order to effectively separate the aircraft by time, altitude, and latitude.[1] In 1966, the two agencies at Shannon and Prestwick merged to become Shanwick, with responsibility out to 30°W longitude; according to the official document "From 1st April, 1966, such a communications service between such aircraft and the said air traffic control centres as has before that date been provided by the radio stations at Ballygirreen in Ireland and Birdlip in the United Kingdom will be provided between such aircraft and the said air traffic control centre at Prestwick or such other air traffic control centre in the United Kingdom as may from time to time be nominated".[2]

Other historical dates include:

  • 1977 – MNPS Introduced
  • 1981 – Longitudinal separation reduced to 10 minutes
  • 1996 – GPS approved for navigation on NAT; OMEGA withdrawn
  • 1997 – RVSM introduced on the NAT
  • 2006 – CPDLC overtakes HF as primary comms method
  • 2011 – Longitudinal separation reduced to 5 minutes
  • 2015 – RLAT introduced[1]

Route planning

The specific routing of the tracks is dictated based on a number of factors, the most important being the jetstream—aircraft going from North America to Europe experience tailwinds caused by the jetstream. Tracks to Europe use the jetstream to their advantage by routing along the strongest tailwinds. Because of the difference in ground speed caused by the jetstream, westbound flights tend to be longer in duration than their eastbound counterparts. North Atlantic Tracks are published by Shanwick Centre (EGGX) and Gander Centre (CZQX), in consultation with other adjacent air traffic control agencies and airlines.[citation needed]

The day prior to the tracks being published, airlines that fly the North Atlantic regularly send a preferred route message (PRM) to Gander and Shanwick. This allows the ATC agency to know what the route preferences are of the bulk of the North Atlantic traffic.

Provision of North Atlantic Track air traffic control

Air traffic controllers responsible for the Shanwick flight information region (FIR) are based at the Shanwick Oceanic Control Centre at Prestwick Centre in Ayrshire, Scotland. Air traffic controllers responsible for the Gander FIR are based at the Gander Oceanic Control Centre in Gander, Newfoundland and Labrador, Canada.[citation needed]

Flight planning

Western Boundary – Gander
Eastern Boundary – Shanwick

Using a NAT Track is not mandatory, even when tracks are active in the direction an aircraft is flying. However, a less-than-optimum altitude assignment, or a reroute, is then likely to occur. Therefore, most operators choose to file a flight plan on a NAT Track. The correct method is to file a flight plan with an Oceanic Entry Point (OEP), then the name of the NAT Track, e.g. "NAT A" for NAT Track Alpha, and the Oceanic Exit Point (OXP).[citation needed]

A typical routing would be: DCT KONAN UL607 EVRIN DCT MALOT/M081F350 DCT 53N020W 52N030W NATA JOOPY/N0462F360 N276C TUSKY DCT PLYMM. Oceanic boundary points for the NAT Tracks are along the FIR boundary of Gander on the west side, and Shanwick on the east side.[citation needed]
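
To illustrate the coordinate convention used in such routings, the toy parser below pulls the "53N020W"-style oceanic waypoints out of a route string; it is a parsing sketch, not a flight-planning tool:

import re

route = ("DCT KONAN UL607 EVRIN DCT MALOT/M081F350 DCT 53N020W 52N030W "
         "NATA JOOPY/N0462F360 N276C TUSKY DCT PLYMM")

# Two-digit latitude, N/S, three-digit longitude, E/W, e.g. "53N020W"
pattern = re.compile(r"\b(\d{2})([NS])(\d{3})([EW])\b")

for lat, ns, lon, ew in pattern.findall(route):
    lat_deg = int(lat) * (1 if ns == "N" else -1)
    lon_deg = int(lon) * (1 if ew == "E" else -1)
    print(f"lat {lat_deg}, lon {lon_deg}")
# -> lat 53, lon -20  and  lat 52, lon -30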

While the routes change daily, they maintain a series of entrance and exit waypoints which link into the airspace system of North America and Europe. Each route is uniquely identified by a letter of the alphabet. Westbound tracks (valid from 11:30 UTC to 19:00 UTC at 30W) are indicated by the letters A,B,C,D etc. (as far as M if necessary, omitting I), where A is the northernmost track, and eastbound tracks (valid from 01:00 UTC to 08:00 UTC at 30W) are indicated by the letters Z,Y,X,W etc. (as far as N if necessary, omitting O), where Z is the southernmost track. Waypoints on the route are identified by named waypoints (or "fixes") and by the crossing of degrees of latitude and longitude (such as "54/40", indicating 54°N latitude, 40°W longitude).[citation needed]
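
That lettering scheme is simple enough to reproduce directly; the two helper functions below just restate the A-to-M and Z-to-N conventions described above (the function names themselves are made up for illustration):

import string

def westbound_track_letters():
    """Westbound NAT letters, northernmost first: A..M, skipping I."""
    return [c for c in string.ascii_uppercase[:13] if c != "I"]

def eastbound_track_letters():
    """Eastbound NAT letters, southernmost first: Z..N, skipping O."""
    return [c for c in reversed(string.ascii_uppercase[13:]) if c != "O"]

print(westbound_track_letters())   # ['A', 'B', ..., 'M'] with no 'I'
print(eastbound_track_letters())   # ['Z', 'Y', ..., 'N'] with no 'O'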

A ‘random route’ must have a waypoint every 10 degrees of longitude. Aircraft can also join an outer track halfway along.[citation needed]

Since 2017, aircraft can plan any flight level in the NAT HLA (high level airspace), with no need to follow ICAO standard cruising levels.[3]

Flying the routes

Prior to departure, airline flight dispatchers/flight operations officers will determine the best track based on destination, aircraft weight, aircraft type, prevailing winds and air traffic control route charges. The aircraft will then contact the Oceanic Center controller before entering oceanic airspace and request the track giving the estimated time of arrival at the entry point. The Oceanic Controllers then calculate the required separation distances between aircraft and issue clearances to the pilots. It may be that the track is not available at that altitude or time so an alternate track or altitude will be assigned. Aircraft cannot change assigned course or altitude without permission.

Contingency plans exist within the North Atlantic Track system to account for any operational issues that occur. For example, if an aircraft can no longer maintain the speed or altitude it was assigned, the aircraft can move off the track route and fly parallel to its track, but well away from other aircraft. Also, pilots on North Atlantic Tracks are required to inform air traffic control of any deviations in altitude or speed necessitated by avoiding weather, such as thunderstorms or turbulence.

Despite advances in navigation technology, such as GPS and LNAV, errors can and do occur. While typically not dangerous, such errors can bring two aircraft into violation of separation requirements. On a busy day, aircraft are spaced approximately 10 minutes apart. With the introduction of TCAS, aircraft traveling along these tracks can monitor the relative position of other aircraft, thereby increasing the safety of all track users.

Since there is little radar coverage in the middle of the Atlantic, aircraft must report in as they cross various waypoints along each track, their anticipated crossing time of the next waypoint, and the waypoint after that. These reports enable the Oceanic Controllers to maintain separation between aircraft. These reports can be made to dispatchers via a satellite communications link (CPDLC) or via high frequency (HF) radios. In the case of HF reports, each aircraft operates using SELCAL (selective calling). The use of SELCAL allows an aircraft crew to be notified of incoming communications even when the aircraft's radio has been muted. Thus, crew members need not devote their attention to continuous radio listening. If the aircraft is equipped with automatic dependent surveillance, (ADS-C & ADS-B), voice position reports on HF are no longer necessary, as automatic reports are downlinked to the Oceanic Control Centre. In this case, a SELCAL check only has to be performed when entering the oceanic area and with any change in radio frequency to ensure a working backup system for the event of a datalink failure.

Maximizing traffic capacity

Increased aircraft density can be achieved by allowing closer vertical spacing of aircraft through participation in the RVSM program.[citation needed]

Additionally from 10 June 2004 the strategic lateral offset procedure (SLOP) was introduced to the North Atlantic airspace to reduce the risk of mid-air collision by spreading out aircraft laterally. It reduces the risk of collision for non-normal events such as operational altitude deviation errors and turbulence induced altitude deviations. In essence, the procedure demands that aircraft in North Atlantic airspace fly track centreline or one or two nautical mile offsets to the right of centreline only. However, the choice is left up to the pilot.[citation needed]

On 12 November 2015, a new procedure allowing for reduced lateral separation minima (RLAT) was introduced. RLAT reduces the standard distance between NAT tracks from 60 to 30 nautical miles (69 to 35 mi; 111 to 56 km), or from one whole degree of latitude to a half degree. This allows more traffic to operate on the most efficient routes, reducing fuel cost. The first RLAT tracks were published in December 2015.

The tracks reverse direction twice daily. In the daylight, all traffic on the tracks operates in a westbound flow. At night, the tracks flow eastbound towards Europe. This is done to accommodate traditional airline schedules, with departures from North America to Europe scheduled for departure in the evening thereby allowing passengers to arrive at their destination in the morning. Westbound departures typically leave Europe between early morning to late afternoon and arrive in North America from early afternoon to late evening. In this manner, a single aircraft can be efficiently utilized by flying to Europe at night and to North America in the day. The tracks are updated daily and their position may alternate on the basis of a variety of variable factors, but predominantly due to weather systems.[citation needed]

The FAA, Nav Canada, NATS and the JAA publish a NOTAM daily with the routes and flight levels to be used in each direction of travel.[citation needed] The current tracks are available online.

Space-based ADS-B

At the end of March 2019, Nav Canada and the UK's National Air Traffic Services (NATS) activated the Aireon space-based ADS-B system, in which position data are relayed every few seconds to air traffic control centers by Iridium satellites orbiting 450 nmi (830 km) high. Aircraft separation can be lowered from 40 nmi (74 km) longitudinally to 14–17 nmi (26–31 km), while lateral separations will be reduced from 23 to 19 nmi (43 to 35 km) in October and to 15 nmi (28 km) in November 2020. In the three following months, 31,700 flights could fly at their optimum speeds, saving up to 400–650 kg (880–1,430 lb) of fuel per crossing. Capacity is increased as NATS expects 16% more flights by 2025, while predicting that 10% of traffic will use the Organized Track System in the coming years, down from 38% today[when?].[4]

In 2018, 500,000 flights went through; annual fuel savings are expected around 38,800 t (85,500,000 lb), and may improve later.[5]

Concorde

Concorde did not travel on the North Atlantic Tracks as it flew between 45,000 and 60,000 ft (14,000 and 18,000 m), a much higher altitude than subsonic airliners. The weather variations at these altitudes were so minor that Concorde followed the same track each day. These fixed tracks were known as 'Track Sierra Mike' (SM) and 'Track Sierra Oscar' (SO) for westbound flights and 'Track Sierra November' (SN) for eastbounds. An additional route, 'Track Sierra Papa' (SP), was used for seasonal British Airways flights from London Heathrow to/from Barbados.[citation needed]


https://en.wikipedia.org/wiki/North_Atlantic_Tracks


Pages in category "Airline routes"

The following 17 pages are in this category, out of 17 total. This list may not reflect recent changes.



https://en.wikipedia.org/wiki/Category:Airline_routes

Longer-term climatic changes

Climate scientists have hypothesized that the jet stream will gradually weaken as a result of global warming. Trends such as Arctic sea ice decline, reduced snow cover, evapotranspiration patterns, and other weather anomalies have caused the Arctic to heat up faster than other parts of the globe (polar amplification). This in turn reduces the temperature gradient that drives jet stream winds, which may eventually cause the jet stream to become weaker and more variable in its course.[59][60][61][62][63][64][65] As a consequence, extreme winter weather is expected to become more frequent. With a weaker jet stream, the polar vortex has a higher probability of leaking out of the polar area and bringing extremely cold weather to the middle-latitude regions.

Since 2007, and particularly in 2012 and early 2013, the jet stream has been at an abnormally low latitude across the UK, lying closer to the English Channel, around 50°N rather than its more usual north of Scotland latitude of around 60°N.[failed verification] However, between 1979 and 2001, the average position of the jet stream moved northward at a rate of 2.01 kilometres (1.25 mi) per year across the Northern Hemisphere. Across North America, this type of change could lead to drier conditions across the southern tier of the United States and more frequent and more intense tropical cyclones in the tropics. A similar slow poleward drift was found when studying the Southern Hemisphere jet stream over the same time frame.[66]

Other upper-level jets

Polar night jet

The polar-night jet stream forms mainly during the winter months, when the nights are much longer (hence polar nights), in their respective hemispheres at around 60° latitude. The polar night jet moves at a greater height (about 24,000 metres (80,000 ft)) than it does during the summer.[67] During these dark months the air high over the poles becomes much colder than the air over the Equator. This difference in temperature gives rise to extreme air pressure differences in the stratosphere, which, when combined with the Coriolis effect, create the polar night jets that race eastward at an altitude of about 48 kilometres (30 mi).[68] The polar vortex is circled by the polar night jet. The warmer air can only move along the edge of the polar vortex, but not enter it. Within the vortex, the cold polar air becomes increasingly cold, with neither warmer air from lower latitudes nor energy from the Sun entering during the polar night.[69]

Low-level jets

There are wind maxima at lower levels of the atmosphere that are also referred to as jets.

Barrier jet

A barrier jet in the low levels forms just upstream of mountain chains, with the mountains forcing the jet to be oriented parallel to the mountains. The mountain barrier increases the strength of the low level wind by 45 percent.[70] In the North American Great Plains a southerly low-level jet helps fuel overnight thunderstorm activity during the warm season, normally in the form of mesoscale convective systems which form during the overnight hours.[71] A similar phenomenon develops across Australia, which pulls moisture poleward from the Coral Sea towards cut-off lows which form mainly across southwestern portions of the continent.[72]

Coastal jet

Coastal low-level jets are related to a sharp contrast between high temperatures over land and lower temperatures over the sea, and play an important role in coastal weather, giving rise to strong coast-parallel winds.[73][74][75] Most coastal jets are associated with the oceanic high-pressure systems and thermal low over land.[76][77] These jets are mainly located along cold eastern boundary marine currents, in upwelling regions offshore California, Peru-Chile, Benguela, Portugal, Canary and West Australia, and offshore Yemen and Oman.[78][79][80]

Valley exit jet

A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of the valley and its adjacent plain. These winds frequently reach a maximum of 20 m/s (72 km/h; 45 mph) at a height of 40–200 m (130–660 ft) above the ground. Surface winds below the jet may sway vegetation, but are significantly weaker.

They are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. Deep valleys that terminate abruptly at a plain are more impacted by these factors than are those that gradually become shallower as downvalley distance increases.[81]

Africa

The mid-level African easterly jet occurs during the Northern Hemisphere summer between 10°N and 20°N above West Africa, and the nocturnal poleward low-level jet occurs in the Great Plains of east and South Africa.[82] The low-level easterly African jet stream is considered to play a crucial role in the southwest monsoon of Africa,[83] and helps form the tropical waves which move across the tropical Atlantic and eastern Pacific oceans during the warm season.[84] The formation of the thermal low over northern Africa leads to a low-level westerly jet stream from June into October.[85]

See also[edit]

https://en.wikipedia.org/wiki/Jet_stream


Evapotranspiration (ET) is the sum of water evaporation and transpiration from a surface area to the atmosphere. Evaporation accounts for the movement of water to the air from sources such as the soil, canopy interception, and water bodies. Transpiration accounts for the movement of water within a plant and the subsequent exit of water as vapor through stomata in its leaves in vascular plants and phyllids in non-vascular plants. A plant that contributes to evapotranspiration is called an evapotranspirator.[1] Evapotranspiration is an important part of the water cycle

Potential evapotranspiration (PET) is a representation of the environmental demand for evapotranspiration and represents the evapotranspiration rate of a short green crop (grass), completely shading the ground, of uniform height and with adequate water status in the soil profile. It is a reflection of the energy available to evaporate water, and of the wind available to transport the water vapor from the ground up into the lower atmosphere. Often a value for the potential evapotranspiration is calculated at a nearby climatic station on a reference surface, conventionally short grass. This value is called the reference evapotranspiration (ET0). Actual evapotranspiration is said to equal potential evapotranspiration when there is ample water. Some US states utilize a full cover alfalfa reference crop that is 0.5 m in height, rather than the short green grass reference, due to the higher value of ET from the alfalfa reference.[2]

https://en.wikipedia.org/wiki/Evapotranspiration


A circumpolar vortex, or simply polar vortex, is a large region of cold, rotating air that encircles both of Earth's polar regions. Polar vortices also exist on other rotating, low-obliquity planetary bodies.[1] The term polar vortex can be used to describe two distinct phenomena; the stratospheric polar vortex, and the tropospheric polar vortex. The stratospheric and tropospheric polar vortices both rotate in the direction of the Earth's spin, but they are distinct phenomena that have different sizes, structures, seasonal cycles, and impacts on weather.

The stratospheric polar vortex is an area of high-speed, cyclonically rotating winds around 15 km to 50 km high, poleward of 50°, and is strongest in winter. It forms in Autumn when Arctic or Antarctic temperatures cool rapidly as the polar night begins. The increased temperature difference between the pole and the tropics causes strong winds and the Coriolis effect causes the vortex to spin up. The stratospheric polar vortex breaks down in Spring as the polar night ends. A sudden stratospheric warming (SSW) is an event that occurs when the stratospheric vortex breaks down during winter, and can have significant impacts on surface weather.[citation needed]

The tropospheric polar vortex is often defined as the area poleward of the tropospheric jet stream. The equatorward edge is around 40° to 50°, and it extends from the surface up to around 10 km to 15 km. Its yearly cycle differs from the stratospheric vortex because the tropospheric vortex exists all year, but is similar to the stratospheric vortex since it is also strongest in winter when the polar regions are coldest.

The tropospheric polar vortex was first described as early as 1853.[2] The stratospheric vortex's SSWs were discovered in 1952 with radiosonde observations at altitudes higher than 20 km.[3] The tropospheric polar vortex was mentioned frequently in the news and weather media in the cold North American winter of 2013–2014, popularizing the term as an explanation of very cold temperatures.[4] The tropospheric vortex increased in public visibility in 2021 as a result of extreme frigid temperatures in the central United States, with some sources linking its effects to climate change.[5]

Ozone depletion occurs within the polar vortices – particularly over the Southern Hemisphere – reaching a maximum depletion in the spring.

Identification[edit]

The bases of the two polar vortices are located in the middle and upper troposphere and extend into the stratosphere. Beneath that lies a large mass of cold, dense Arctic air. The interface between the cold dry air mass of the pole and the warm moist air mass farther south defines the location of the polar front. The polar front is centered roughly at 60° latitude. A polar vortex strengthens in the winter and weakens in the summer because of its dependence on the temperature difference between the equator and the poles.[14]

Polar cyclones are low-pressure zones embedded within the polar air masses, and exist year-round. The stratospheric polar vortex develops at latitudes above the subtropical jet stream.[15] Horizontally, most polar vortices have a radius of less than 1,000 kilometres (620 mi).[16] Since polar vortices exist from the stratosphere downward into the mid-troposphere,[6] a variety of heights/pressure levels are used to mark its position. The 50 hPa pressure surface is most often used to identify its stratospheric location.[17] At the level of the tropopause, the extent of closed contours of potential temperature can be used to determine its strength. Others have used levels down to the 500 hPa pressure level (about 5,460 metres (17,910 ft) above sea level during the winter) to identify the polar vortex.[18]

https://en.wikipedia.org/wiki/Polar_vortex


The Arctic oscillation (AO) or Northern Annular Mode/Northern Hemisphere Annular Mode (NAM) is a weather phenomenon over the Arctic, north of 20 degrees latitude. It is an important mode of climate variability for the Northern Hemisphere. The southern hemisphere analogue is called the Antarctic oscillation or Southern Annular Mode (SAM). The index varies over time with no particular periodicity, and is characterized by non-seasonal sea-level pressure anomalies of one sign in the Arctic, balanced by anomalies of opposite sign centered at about 37–45° N.[1]

The North Atlantic oscillation (NAO) is a close relative of the Arctic oscillation. There is debate over whether one or the other is more fundamentally representative of the atmosphere's dynamics. The NAO may be identified in a more physically meaningful way, which may carry more impact on measurable effects of changes in the atmosphere.[2]

Positive and negative phases of the Arctic Oscillation

The Arctic oscillation index is defined using the daily or monthly 1000 hPa geopotential height anomalies from latitudes 20° N to 90° N. The anomalies are projected onto the Arctic oscillation loading pattern,[5] which is defined as the first empirical orthogonal function (EOF) of monthly mean 1000 hPa geopotential height during the 1979-2000 period. The time series is then normalized with the monthly mean index's standard deviation.
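
To make the projection step concrete, the following short Python sketch (an illustration only; the synthetic anomaly array, grid size and variable names are assumptions, not part of any operational AO product) computes a loading pattern as the first EOF of monthly 1000 hPa height anomalies via an SVD, projects an anomaly field onto it, and normalizes by the standard deviation of the monthly index:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for monthly 1000 hPa geopotential height anomalies,
    # shape (n_months, n_gridpoints); real data would cover 20N-90N, 1979-2000.
    monthly_anom = rng.standard_normal((264, 500))

    # Loading pattern = first EOF = leading right-singular vector of the anomaly matrix.
    _, _, vt = np.linalg.svd(monthly_anom, full_matrices=False)
    loading_pattern = vt[0]

    # Standard deviation of the monthly index, used for normalization.
    norm = (monthly_anom @ loading_pattern).std()

    def ao_index(anomaly_field):
        # Project a daily or monthly anomaly field onto the loading pattern and normalize.
        return anomaly_field @ loading_pattern / norm

    print(ao_index(rng.standard_normal(500)))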

https://en.wikipedia.org/wiki/Arctic_oscillation


Geopotential height or geopotential altitude is a vertical coordinate referenced to Earth's mean sea level, an adjustment to geometric height (altitude above mean sea level) that accounts for the variation of gravity with latitude and altitude. Thus, it can be considered a "gravity-adjusted height".

Definition[edit]

At an elevation of h, the geopotential is defined as:

Φ(h) = ∫_{0}^{h} g(φ, z) dz

where g is the acceleration due to gravity, φ is latitude, and z is the geometric elevation. Thus geopotential is the gravitational potential energy per unit mass at that elevation h.[1]

The geopotential height is:

Z(h) = Φ(h) / g₀

which normalizes the geopotential to g₀ = 9.80665 m/s², the standard gravity at mean sea level.[citation needed][1]
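
As a small numerical sketch of these definitions (the latitude-dependent gravity model below is a rough assumption chosen only for illustration, not a geodetic standard), the geopotential can be integrated over geometric height and then divided by g₀:

    import numpy as np

    G0 = 9.80665  # standard gravity at mean sea level, m/s^2

    def gravity(lat_deg, z):
        # Rough illustrative model of g(latitude, height).
        g_surface = 9.780327 * (1.0 + 0.0053024 * np.sin(np.radians(lat_deg)) ** 2)
        return g_surface * (1.0 - 2.0 * z / 6.371e6)  # crude decrease with height

    def geopotential_height(h, lat_deg, n=1001):
        # Z = (1/g0) * integral of g(lat, z) dz from 0 to h (trapezoidal rule).
        z = np.linspace(0.0, h, n)
        g = gravity(lat_deg, z)
        phi = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))  # geopotential, m^2/s^2
        return phi / G0

    print(geopotential_height(5600.0, 45.0))  # close to, but slightly below, 5600 m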

Usage[edit]

Geopotential height analysis on the North American Mesoscale Model (NAM) at 500 hPa.

Geophysical sciences such as meteorology often prefer to express the horizontal pressure gradient force as the gradient of geopotential along a constant-pressure surface, because then it has the properties of a conservative force. For example, the primitive equations which weather forecast models solve use hydrostatic pressure as a vertical coordinate, and express the slopes of those pressure surfaces in terms of geopotential height. 

A plot of geopotential height for a single pressure level in the atmosphere shows the troughs and ridges (highs and lows) which are typically seen on upper air charts. The geopotential thickness between pressure levels – difference of the 850 hPa and 1000 hPa geopotential heights for example – is proportional to the mean virtual temperature in that layer. Geopotential height contours can be used to calculate the geostrophic wind, which is faster where the contours are more closely spaced and blows tangential to them.[citation needed]
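
The last statement can be sketched numerically. The snippet below (an illustrative finite-difference example; the constant Coriolis parameter and the idealized height field are assumptions) evaluates the geostrophic wind u_g = −(g₀/f)·∂Z/∂y and v_g = (g₀/f)·∂Z/∂x from a gridded geopotential height field:

    import numpy as np

    G0 = 9.80665          # m/s^2
    F_CORIOLIS = 1.0e-4   # mid-latitude Coriolis parameter, 1/s (assumed constant)

    def geostrophic_wind(Z, dx, dy):
        # Z: 2-D geopotential height field (m) on a uniform grid (dx, dy in metres).
        dZdy, dZdx = np.gradient(Z, dy, dx)   # axis 0 is y, axis 1 is x
        u_g = -(G0 / F_CORIOLIS) * dZdy
        v_g = (G0 / F_CORIOLIS) * dZdx
        return u_g, v_g

    # Idealized 500 hPa surface: heights fall by 600 m over 4000 km toward the pole.
    y = np.linspace(0.0, 4.0e6, 41)
    x = np.linspace(0.0, 4.0e6, 41)
    Z = 5700.0 - 600.0 * (y[:, None] / y[-1]) + 0.0 * x[None, :]
    u_g, v_g = geostrophic_wind(Z, x[1] - x[0], y[1] - y[0])
    print(round(u_g.mean(), 1))  # ~14.7 m/s westerly flow along the height contours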

The National Weather Service defines geopotential height as:

"...roughly the height above sea level of a pressure level. For example, if a station reports that the 500 mb [i.e. millibar] height at its location is 5600 m, it means that the level of the atmosphere over that station at which the atmospheric pressure is 500 mb is 5600 meters above sea level. This is an estimated height based on temperature and pressure data."[2]

See also[edit]

https://en.wikipedia.org/wiki/Geopotential_height


The primitive equations are a set of nonlinear partial differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models. They consist of three main sets of balance equations:

  1. continuity equation: Representing the conservation of mass.
  2. Conservation of momentum: Consisting of a form of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere under the assumption that vertical motion is much smaller than horizontal motion (hydrostasis) and that the fluid layer depth is small compared to the radius of the sphere
  3. thermal energy equation: Relating the overall temperature of the system to heat sources and sinks

The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined.

In general, nearly all forms of the primitive equations relate the five variables u, v, ω, T, W, and their evolution over space and time.

The equations were first written down by Vilhelm Bjerknes.[1]

https://en.wikipedia.org/wiki/Primitive_equations


Hydrostatic pressure[edit]

In a fluid at rest, all frictional and inertial stresses vanish and the state of stress of the system is called hydrostatic. When this condition of V = 0 is applied to the Navier–Stokes equations, the gradient of pressure becomes a function of body forces only. For a barotropic fluid in a conservative force field like a gravitational force field, the pressure exerted by a fluid at equilibrium becomes a function of force exerted by gravity.

The hydrostatic pressure can be determined from a control volume analysis of an infinitesimally small cube of fluid. Since pressure is defined as the force exerted on a test area (p = F/A, with p: pressure, F: force normal to area A, A: area), and the only force acting on any such small cube of fluid is the weight of the fluid column above it, hydrostatic pressure can be calculated according to the following formula:

p(z) − p(z₀) = (1/A) ∫_{z₀}^{z} ∫_{A} ρ(z′)·g(z′) dA dz′ = ∫_{z₀}^{z} ρ(z′)·g(z′) dz′

where:

  • p is the hydrostatic pressure (Pa),
  • ρ is the fluid density (kg/m3),
  • g is gravitational acceleration (m/s2),
  • A is the test area (m2),
  • z is the height (parallel to the direction of gravity) of the test area (m),
  • z0 is the height of the zero reference point of the pressure (m).

For water and other liquids, this integral can be simplified significantly for many practical applications, based on the following two assumptions: Since many liquids can be considered incompressible, a reasonably good estimation can be made from assuming a constant density throughout the liquid. (The same assumption cannot be made within a gaseous environment.) Also, since the height h of the fluid column between z and z0 is often reasonably small compared to the radius of the Earth, one can neglect the variation of g. Under these circumstances, the integral is simplified into the formula:

p = ρ·g·h

where h is the height z − z0 of the liquid column between the test volume and the zero reference point of the pressure. This formula is often called Stevin's law.[4][5] Note that this reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms, with the constant ρ_liquid below the surface and ρ(z′)_above it. For example, the absolute pressure compared to vacuum is:

p = ρ·g·H + p_atm

where H is the total height of the liquid column above the test area to the surface, and p_atm is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism.
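
A brief numerical sketch of the two simplified results above (the density, gravity and depth values are assumptions chosen for illustration):

    RHO_WATER = 1000.0   # kg/m^3, assumed constant (incompressible liquid)
    G = 9.81             # m/s^2
    P_ATM = 101_325.0    # Pa, pressure of the air column above the surface

    def gauge_pressure(h):
        # Stevin's law: p = rho * g * h for a liquid column of height h (m).
        return RHO_WATER * G * h

    def absolute_pressure(depth):
        # Absolute pressure (relative to vacuum) at a depth below the surface.
        return P_ATM + gauge_pressure(depth)

    print(gauge_pressure(10.0))      # ~98 kPa: 10 m of water is roughly one atmosphere
    print(absolute_pressure(10.0))   # ~199 kPa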

Hydrostatic pressure has been used in the preservation of foods in a process called pascalization.[6]

https://en.wikipedia.org/wiki/Hydrostatics#Hydrostatic_pressure


Pascalization, bridgmanization, high pressure processing (HPP)[1] or high hydrostatic pressure (HHP) processing[2] is a method of preserving and sterilizing food, in which a product is processed under very high pressure, leading to the inactivation of certain microorganisms and enzymes in the food.[3] HPP has a limited effect on covalent bonds within the food product, thus maintaining both the sensory and nutritional aspects of the product.[4] The technique was named after Blaise Pascal, a French scientist of the 17th century whose work included detailing the effects of pressure on fluids. During pascalization, more than 50,000 pounds per square inch (340 MPa, 3.4 kbar) may be applied for around fifteen minutes, leading to the inactivation of yeast, mold, and bacteria.[5][6] Pascalization is also known as bridgmanization,[7] named for physicist Percy Williams Bridgman.[8]

https://en.wikipedia.org/wiki/Pascalization


Vertical pressure variation is the variation in pressure as a function of elevation. Depending on the fluid in question and the context being referred to, it may also vary significantly in dimensions perpendicular to elevation as well, and these variations have relevance in the context of pressure gradient force and its effects. However, the vertical variation is especially significant, as it results from the pull of gravity on the fluid; namely, for the same given fluid, a decrease in elevation within it corresponds to a taller column of fluid weighing down on that point.

Basic formula[edit]

A relatively simple version [1] of the vertical fluid pressure variation is simply that the pressure difference between two elevations is the product of elevation change, gravity, and density. The equation is as follows:

ΔP = ρ·g·Δh

where

P is pressure,
ρ is density,
g is acceleration of gravity, and
h is height.

The delta symbol indicates a change in a given variable. Since g is negative, an increase in height will correspond to a decrease in pressure, which fits with the previously mentioned reasoning about the weight of a column of fluid.

When density and gravity are approximately constant (that is, for relatively small changes in height), simply multiplying height difference, gravity, and density will yield a good approximation of pressure difference. Where different fluids are layered on top of one another, the total pressure difference would be obtained by adding the two pressure differences; the first being from point 1 to the boundary, the second being from the boundary to point 2; which would just involve substituting the ρ and Δh values for each fluid and taking the sum of the results. If the density of the fluid varies with height, mathematical integration would be required.

Whether or not density and gravity can be reasonably approximated as constant depends on the level of accuracy needed, but also on the length scale of height difference, as gravity and density also decrease with higher elevation. For density in particular, the fluid in question is also relevant; seawater, for example, is considered an incompressible fluid; its density can vary with height, but much less significantly than that of air. Thus water's density can be more reasonably approximated as constant than that of air, and given the same height difference, the pressure differences in water are approximately equal at any height.

Hydrostatic paradox[edit]

Diagram illustrating the hydrostatic paradox

The barometric formula depends only on the height of the fluid chamber, and not on its width or length. Given a large enough height, any pressure may be attained. This feature of hydrostatics has been called the hydrostatic paradox. As expressed by W. H. Besant,[2]

Any quantity of liquid, however small, may be made to support any weight, however large.

The Dutch scientist Simon Stevin was the first to explain the paradox mathematically.[3] In 1916 Richard Glazebrook mentioned the hydrostatic paradox as he described an arrangement he attributed to Pascal: a heavy weight W rests on a board with area A resting on a fluid bladder connected to a vertical tube with cross-sectional area α. Pouring water of weight w down the tube will eventually raise the heavy weight. Balance of forces leads to the equation

W = (A/α)·w

Glazebrook says, "By making the area of the board considerable and that of the tube small, a large weight W can be supported by a small weight w of water. This fact is sometimes described as the hydrostatic paradox."[4]
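
A tiny worked example of this balance (the areas and the water weight below are assumed numbers, not from the source):

    # Pascal's arrangement: board of area A on a bladder fed by a narrow tube of area alpha.
    A = 1.0          # m^2, area of the board under the heavy weight
    alpha = 1.0e-4   # m^2, cross-section of the tube (1 cm^2)
    w = 1.0          # N, weight of water poured down the tube

    # The water column exerts pressure w/alpha, which acts over the whole area A.
    W = (A / alpha) * w
    print(W)         # 10000.0 N supported by 1 N of water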

Demonstrations of the hydrostatic paradox are used in teaching the phenomenon.[5][6]

In the context of Earth's atmosphere[edit]

If one is to analyze the vertical pressure variation of the atmosphere of Earth, the length scale is very significant (troposphere alone being several kilometres tall; thermosphere being several hundred kilometres) and the involved fluid (air) is compressible. Gravity can still be reasonably approximated as constant, because length scales on the order of kilometres are still small in comparison to Earth's radius, which is on average about 6371 km,[7] and gravity is a function of distance from Earth's core.[8]

Density, on the other hand, varies more significantly with height. It follows from the ideal gas law that

ρ = m·P / (k·T)

where

m is average mass per air molecule,
P is pressure at a given point,
k is the Boltzmann constant,
T is the temperature in kelvins.

Put more simply, air density depends on air pressure. Given that air pressure also depends on air density, it would be easy to get the impression that this was a circular definition, but it is simply interdependency of different variables. This then yields a more accurate formula, of the form

P_h = P₀ · exp(−m·g·h / (k·T))

where

Ph is the pressure at height h,
P0 is the pressure at reference point 0 (typically referring to sea level),
m is the mass per air molecule,
g is the acceleration due to gravity,
h is height from reference point 0,
k is the Boltzmann constant,
T is the temperature in kelvins.

Therefore, instead of pressure being a linear function of height as one might expect from the more simple formula given in the "basic formula" section, it is more accurately represented as an exponential function of height.
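
The contrast between the linear and exponential descriptions can be checked numerically; in the sketch below the molecular mass, temperature and sea-level values are standard figures used as assumptions:

    import math

    M_AIR = 4.81e-26     # kg, average mass per air molecule (~28.97 g/mol / Avogadro)
    K_B = 1.380649e-23   # J/K, Boltzmann constant
    G = 9.80665          # m/s^2
    T = 288.15           # K, held constant (isothermal approximation)
    P0 = 101_325.0       # Pa, sea-level pressure

    def pressure_exponential(h):
        # P(h) = P0 * exp(-m*g*h / (k*T)), the constant-temperature formula.
        return P0 * math.exp(-M_AIR * G * h / (K_B * T))

    def pressure_linear(h, rho0=1.225):
        # Basic formula with constant density; reasonable only for small h.
        return P0 - rho0 * G * h

    for h in (100.0, 1000.0, 5000.0):
        print(h, round(pressure_exponential(h)), round(pressure_linear(h)))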

Note that in this simplification, the temperature is treated as constant, even though temperature also varies with height. However, the temperature variation within the lower layers of the atmosphere (troposphere, stratosphere) is only in the dozens of degrees, as opposed to their thermodynamic temperature, which is in the hundreds, so the temperature variation is reasonably small and is thus ignored. For smaller height differences, including those from top to bottom of even the tallest of buildings (like the CN Tower) or for mountains of comparable size, the temperature variation will easily be within single digits. (See also lapse rate.)

An alternative derivation, shown by the Portland State Aerospace Society,[9] is used to give height as a function of pressure instead. This may seem counter-intuitive, as pressure results from height rather than vice versa, but such a formula can be useful in finding height based on pressure difference when one knows the latter and not the former. Different formulas are presented for different kinds of approximations; for comparison with the previous formula, the first referenced from the article will be the one applying the same constant-temperature approximation; in which case:

z = (R·T / g) · ln(P₀ / P)

where (with values used in the article)

z is the elevation in meters,
R is the specific gas constant = 287.053 J/(kg K)
T is the absolute temperature in kelvins = 288.15 K at sea level,
g is the acceleration due to gravity = 9.80665 m/s2 at sea level,
P is the pressure at a given point at elevation z in Pascals, and
P0 is pressure at the reference point = 101,325 Pa at sea level.

A more general formula derived in the same article accounts for a linear change in temperature as a function of height (lapse rate), and reduces to the above when the temperature is constant:

z = (T₀ / L) · ((P / P₀)^(−L·R/g) − 1)

where

L is the atmospheric lapse rate (change in temperature divided by distance) = −6.5×10−3 K/m, and
T0 is the temperature at the same reference point for which P = P0

and the other quantities are the same as those above. This is the recommended formula to use.
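
Both inverted forms can be written out directly; the short sketch below uses the constants quoted above and a few sample pressures (the sample pressures are assumptions chosen for illustration):

    import math

    R = 287.053      # J/(kg*K), specific gas constant for air
    G = 9.80665      # m/s^2
    T0 = 288.15      # K, sea-level temperature
    P0 = 101_325.0   # Pa, sea-level pressure
    L = -6.5e-3      # K/m, atmospheric lapse rate

    def altitude_isothermal(p):
        # z = (R*T0/g) * ln(P0/p), constant-temperature approximation.
        return (R * T0 / G) * math.log(P0 / p)

    def altitude_with_lapse_rate(p):
        # z = (T0/L) * ((p/P0)**(-L*R/g) - 1); tends to the isothermal form as L -> 0.
        return (T0 / L) * ((p / P0) ** (-L * R / G) - 1.0)

    for p in (90_000.0, 70_000.0, 50_000.0):
        print(p, round(altitude_isothermal(p)), round(altitude_with_lapse_rate(p)))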

See also[edit]



https://en.wikipedia.org/wiki/Vertical_pressure_variation


The pressure-gradient force is the force that results when there is a difference in pressure across a surface. In general, a pressure is a force per unit area, across a surface. A difference in pressure across a surface then implies a difference in force, which can result in an acceleration according to Newton's second law of motion, if there is no additional force to balance it. The resulting force is always directed from the region of higher-pressure to the region of lower-pressure. When a fluid is in an equilibrium state (i.e. there are no net forces, and no acceleration), the system is referred to as being in hydrostatic equilibrium. In the case of atmospheres, the pressure-gradient force is balanced by the gravitational force, maintaining hydrostatic equilibrium. In Earth's atmosphere, for example, air pressure decreases at altitudes above Earth's surface, thus providing a pressure-gradient force which counteracts the force of gravity on the atmosphere.

Formalism[edit]

Consider a cubic parcel of fluid with a density ρ, a height dz, and a surface area A. The mass of the parcel can be expressed as m = ρ·A·dz. Using Newton's second law, F = m·a, we can then examine a pressure difference dP (assumed to be only in the z-direction) to find the resulting force, F = −dP·A = ρ·A·dz·a.

The acceleration resulting from the pressure gradient is then,

a = −(1/ρ)·(dP/dz).

The effects of the pressure gradient are usually expressed in this way, in terms of an acceleration, instead of in terms of a force. We can express the acceleration more precisely, for a general pressure P as,

a⃗ = −(1/ρ)·∇P.

The direction of the resulting force (acceleration) is thus in the opposite direction of the most rapid increase of pressure.
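
The general expression can be evaluated on a gridded pressure field with finite differences; in the sketch below the pressure field and the air density are assumed, idealized values:

    import numpy as np

    def pressure_gradient_acceleration(p, rho, dx, dy):
        # a = -(1/rho) * grad(p) for a 2-D pressure field p (Pa) on a uniform grid.
        dpdy, dpdx = np.gradient(p, dy, dx)   # axis 0 is y, axis 1 is x
        return -dpdx / rho, -dpdy / rho       # (a_x, a_y) in m/s^2

    # Idealized field: pressure falls by 4 hPa over 1000 km in the x-direction.
    x = np.linspace(0.0, 1.0e6, 101)
    y = np.linspace(0.0, 1.0e6, 101)
    p = 101_325.0 - 400.0 * (x[None, :] / x[-1]) + 0.0 * y[:, None]
    a_x, a_y = pressure_gradient_acceleration(p, rho=1.225, dx=x[1] - x[0], dy=y[1] - y[0])
    print(a_x.mean())   # ~3.3e-4 m/s^2, pointing from high toward low pressure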


https://en.wikipedia.org/wiki/Pressure-gradient_force


https://en.wikipedia.org/wiki/Geopotential_height

https://en.wikipedia.org/wiki/Vertical_pressure_variation

https://en.wikipedia.org/wiki/Henry_Cavendish

https://en.wikipedia.org/wiki/Antoine_Lavoisier


The atmosphere of Earth, commonly known as air, is the layer of gases retained by Earth's gravity that surrounds the planet and forms its planetary atmosphere. The atmosphere of Earth protects life on Earth by creating pressure allowing for liquid water to exist on the Earth's surface, absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night (the diurnal temperature variation).

By mole fraction (i.e., by number of molecules), dry air contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases.[8] Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere. Air composition, temperature, and atmospheric pressure vary with altitude. Within the atmosphere, air suitable for use in photosynthesis by terrestrial plants and breathing of terrestrial animals is found only in Earth's troposphere.[citation needed]

Earth's early atmosphere consisted of gases in the solar nebula, primarily hydrogen. The atmosphere changed significantly over time, affected by many factors such as volcanism, life, and weathering. Recently, human activity has also contributed to atmospheric changes, such as global warming, ozone depletion and acid deposition.

The atmosphere has a mass of about 5.15×1018 kg,[9] three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km (62 mi) or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km (75 mi). Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition.

The study of Earth's atmosphere and its processes is called atmospheric science (aerology), and includes multiple subfields, such as climatology and atmospheric physics. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann.[10] The study of historic atmosphere is called paleoclimatology.

https://en.wikipedia.org/wiki/Atmosphere_of_Earth#Composition


https://en.wikipedia.org/wiki/Jet_stream


Stabilisation[edit]

Several methods are used to stabilise and protect submarine pipelines and their components. These may be used alone or in combinations.[34]

Trenching and burial[edit]

Simplified drawing showing a typical jetting system for trenching below a submarine pipeline that is lying on the seafloor.

A submarine pipeline may be laid inside a trench as a means of safeguarding it against fishing gear (e.g. anchors) and trawling activity.[35][36] This may also be required in shore approaches to protect the pipeline against currents and wave action (as it crosses the surf zone). Trenching can be done prior to pipeline lay (pre-lay trenching), or afterward by seabed removal from below the pipeline (post-lay trenching). In the latter case, the trenching device rides on top of, or straddles, the pipeline.[35][36] Several systems are used to dig trenches in the seabed for submarine pipelines:

  • Jetting: This is a post-lay trenching procedure whereby the soil is removed from beneath the pipeline by using powerful pumps to blow water on each side of it.[37][38]
  • Mechanical cutting: This system uses chains or cutter disks to dig through and remove harder soils, including boulders,[39] from below the pipeline.
  • Plowing: The plowing principle, which was initially used for pre-lay trenching, has evolved into sophisticated systems that are smaller and lighter, for faster and safer operation.
  • Dredging/excavation: In shallower water, the soil can be removed with a dredger or an excavator prior to laying the pipeline. This can be done in a number of ways, notably with a ′′cutter-suction′′ system, with the use of buckets or with a backhoe.[35]

″A buried pipe is far better protected than a pipe in an open trench.″[40] Burial is commonly done by covering the structure with rocks quarried from a nearby shoreline; alternatively, the soil excavated from the seabed during trenching can be used as backfill. A significant drawback to burial is the difficulty in locating a leak should one arise, and in carrying out the ensuing repair operations.[41]

https://en.wikipedia.org/wiki/Submarine_pipeline


Noctilucent clouds, or night shining clouds, are tenuous cloud-like phenomena in the upper atmosphere of Earth. They consist of ice crystals and are only visible during astronomical twilight. Noctilucent roughly means "night shining" in Latin. They are most often observed during the summer months from latitudes between ±50° and ±70°. Too faint to be seen in daylight, they are visible only when the observer and the lower layers of the atmosphere are in Earth's shadow, but while these very high clouds are still in sunlight. Recent studies suggest that increased atmospheric methane emissions produce additional water vapor once the methane molecules reach the mesosphere – creating, or reinforcing existing noctilucent clouds.[1]

They are the highest clouds in Earth's atmosphere, located in the mesosphere at altitudes of around 76 to 85 km (249,000 to 279,000 ft).

https://en.wikipedia.org/wiki/Noctilucent_cloud


Aerogel is a synthetic porous ultralight material derived from a gel, in which the liquid component for the gel has been replaced with a gas without significant collapse of the gel structure.[4] The result is a solid with extremely low density[5] and extremely low thermal conductivity. Nicknames include frozen smoke,[6] solid smoke, solid air, solid cloud, and blue smoke, owing to its translucent nature and the way light scatters in the material. Silica aerogels feel like fragile expanded polystyrene to the touch, while some polymer-based aerogels feel like rigid foams. Aerogels can be made from a variety of chemical compounds.[7]

Aerogel was first created by Samuel Stephens Kistler in 1931,[8] as a result of a bet[9] with Charles Learned over who could replace the liquid in "jellies" with gas without causing shrinkage.[10][11]

Aerogels are produced by extracting the liquid component of a gel through supercritical drying or freeze-drying. This allows the liquid to be slowly dried off without causing the solid matrix in the gel to collapse from capillary action, as would happen with conventional evaporation. The first aerogels were produced from silica gels. Kistler's later work involved aerogels based on alumina, chromia and tin dioxide. Carbon aerogels were first developed in the late 1980s.[12]

https://en.wikipedia.org/wiki/Aerogel


Supercritical drying, also known as critical point drying, is a process to remove liquid in a precise and controlled way.[1] It is useful in the production of microelectromechanical systems (MEMS), the drying of spices, the production of aerogel, the decaffeination of coffee and in the preparation of biological specimens for scanning electron microscopy.

Phase diagram[edit]

As the substance in a liquid body crosses the boundary from liquid to gas (see green arrow in phase diagram), the liquid changes into gas at a finite rate, while the amount of liquid decreases. When this happens within a heterogeneous environment, surface tension in the liquid body pulls against any solid structures the liquid might be in contact with. Delicate structures such as cell walls, the dendrites in silica gel, and the tiny machinery of microelectromechanical devices, tend to be broken apart by this surface tension as the liquid–gas–solid junction moves by.

To avoid this, the sample can be brought via two possible alternate paths from the liquid phase to the gas phase without crossing the liquid–gas boundary on the phase diagram. In freeze-drying, this means going around to the left (low temperature, low pressure; blue arrow). However, some structures are disrupted even by the solid–gas boundary. Supercritical drying, on the other hand, goes around the line to the right, on the high-temperature, high-pressure side (red arrow). This route from liquid to gas does not cross any phase boundary, instead passing through the supercritical region, where the distinction between gas and liquid ceases to apply. Densities of the liquid phase and vapor phase become equal at critical point of drying.

Fluids suitable for supercritical drying include carbon dioxide (critical point 304.25 K at 7.39 MPa or 31.1 °C at 1072 psi) and freon (≈300 K at 3.5–4 MPa or 25–30 °C at 500–600 psi). Nitrous oxide has similar physical behavior to carbon dioxide, but is a powerful oxidizer in its supercritical state. Supercritical water is inconvenient due to possible heat damage to a sample at its critical point temperature (647 K, 374 °C) and corrosiveness of water at such high temperatures and pressures (22.064 MPa, 3,212 psi).

In most such processes, acetone is first used to wash away all water, exploiting the complete miscibility of these two fluids. The acetone is then washed away with high pressure liquid carbon dioxide, the industry standard now that freon is unavailable. The liquid carbon dioxide is then heated until its temperature goes beyond the critical point, at which time the pressure can be gradually released, allowing the gas to escape and leaving a dried product.

https://en.wikipedia.org/wiki/Supercritical_drying


In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides.

https://en.wikipedia.org/wiki/Sol–gel_process


A launch loop, or Lofstrom loop, is a proposed system for launching objects into orbit using a moving cable-like system situated inside a sheath attached to the Earth at two ends and suspended above the atmosphere in the middle. The design concept was published by Keith Lofstrom and describes an active structure maglev cable transport system that would be around 2,000 km (1,240 mi) long and maintained at an altitude of up to 80 km (50 mi). A launch loop would be held up at this altitude by the momentum of a belt that circulates around the structure. This circulation, in effect, transfers the weight of the structure onto a pair of magnetic bearings, one at each end, which support it.

Launch loops are intended to achieve non-rocket spacelaunch of vehicles weighing 5 metric tons by electromagnetically accelerating them so that they are projected into Earth orbit or even beyond. This would be achieved by the flat part of the cable which forms an acceleration track above the atmosphere.[1]

The system is designed to be suitable for launching humans for space tourism, space exploration and space colonization, and provides a relatively low 3g acceleration.[2]

Launch loop accelerator section (return cable not shown).

https://en.wikipedia.org/wiki/Launch_loop


The chain fountain phenomenon, also known as the self-siphoning beads, Newton's beads or the Mould effect, is a counterintuitive physical phenomenon observed with a chain placed inside a jar, when one end of the chain is pulled from the jar and is allowed to fall to the floor beneath under the influence of gravity. This process establishes a self-sustaining flow of the chain which rises over the edge and goes down to the floor or ground beneath it, as if being sucked out of the jar by an invisible siphon. For chains with small adjacent beads, the arch can ascend into the air over and above the edge of the jar with a noticeable gap.[1]

Snapshot of chain fountain process

https://en.wikipedia.org/wiki/Chain_fountain


Cable transport is a broad class of transport modes that have cables. They transport passengers and goods, often in vehicles called cable cars. The cable may be driven or passive, and items may be moved by pulling, sliding, sailing, or by drives within the object being moved on cableways. The use of pulleys and balancing of loads moving up and down are common elements of cable transport. They are often used in mountainous areas where cable haulage can overcome large differences in elevation.

https://en.wikipedia.org/wiki/Cable_transport


Electromagnetic suspension (EMS) is the magnetic levitation of an object achieved by constantly altering the strength of a magnetic field produced by electromagnets using a feedback loop. In most cases the levitation effect is mostly due to permanent magnets as they don't have any power dissipation, with electromagnets only used to stabilize the effect.

According to Earnshaw's Theorem a paramagnetically magnetised body cannot rest in stable equilibrium when placed in any combination of gravitational and magnetostatic fields. In these kinds of fields an unstable equilibrium condition exists. Although static fields cannot give stability, EMS works by continually altering the current sent to electromagnets to change the strength of the magnetic field and allows a stable levitation to occur. In EMS a feedback loop which continuously adjusts one or more electromagnets to correct the object's motion is used to cancel the instability.

Many systems use magnetic attraction pulling upwards against gravity for these kinds of systems as this gives some inherent lateral stability, but some use a combination of magnetic attraction and magnetic repulsion to push upwards.

Magnetic levitation technology is important because it reduces energy consumption, largely obviating friction. It also avoids wear and has very low maintenance requirements. The application of magnetic levitation is most commonly known for its role in Maglev trains.

https://en.wikipedia.org/wiki/Electromagnetic_suspension


Magnetic levitation (maglev) or magnetic suspension is a method by which an object is suspended with no support other than magnetic fields. Magnetic force is used to counteract the effects of the gravitational force and any other forces.

The two primary issues involved in magnetic levitation are lifting forces: providing an upward force sufficient to counteract gravity, and stability: ensuring that the system does not spontaneously slide or flip into a configuration where the lift is neutralized.

Magnetic levitation is used for maglev trains, contactless melting, magnetic bearings and for product display purposes.

An example of magnetic pseudo-levitation with a mechanical support (wooden rod) providing stability.

Relative motion between conductors and magnets[edit]

If one moves a base made of a very good electrical conductor such as copper, aluminium or silver close to a magnet, an (eddy) current will be induced in the conductor that will oppose the changes in the field and create an opposite field that will repel the magnet (Lenz's law). At a sufficiently high rate of movement, a suspended magnet will levitate on the metal, or vice versa with suspended metal. Litz wire made of wire thinner than the skin depth for the frequencies seen by the metal works much more efficiently than solid conductors. Figure 8 coils can be used to keep something aligned.[7]

An especially technologically interesting case of this comes when one uses a Halbach array instead of a single pole permanent magnet, as this almost doubles the field strength, which in turn almost doubles the strength of the eddy currents. The net effect is to more than triple the lift force. Using two opposed Halbach arrays increases the field even further.[8]

Halbach arrays are also well-suited to magnetic levitation and stabilisation of gyroscopes and electric motor and generator spindles.

Oscillating electromagnetic fields[edit]

Aluminium foil floating above the induction cooktop thanks to eddy currents induced in it.

A conductor can be levitated above an electromagnet (or vice versa) with an alternating current flowing through it. This causes any regular conductor to behave like a diamagnet, due to the eddy currents generated in the conductor.[9][10] Since the eddy currents create their own fields which oppose the magnetic field, the conductive object is repelled from the electromagnet, and most of the field lines of the magnetic field will no longer penetrate the conductive object.

This effect requires non-ferromagnetic but highly conductive materials like aluminium or copper, as the ferromagnetic ones are also strongly attracted to the electromagnet (although at high frequencies the field can still be expelled) and tend to have a higher resistivity giving lower eddy currents. Again, litz wire gives the best results.

The effect can be used for stunts such as levitating a telephone book by concealing an aluminium plate within it.

At high frequencies (a few tens of kilohertz or so) and kilowatt powers small quantities of metals can be levitated and melted using levitation melting without the risk of the metal being contaminated by the crucible.[11]

One source of oscillating magnetic field that is used is the linear induction motor. This can be used to levitate as well as provide propulsion.

Diamagnetically stabilized levitation[edit]

Permanent magnet stably levitated between fingertips

Earnshaw's theorem does not apply to diamagnets. These behave in the opposite manner to normal magnets owing to their relative permeability of μr < 1 (i.e. negative magnetic susceptibility). Diamagnetic levitation can be inherently stable.

A permanent magnet can be stably suspended by various configurations of strong permanent magnets and strong diamagnets. When using superconducting magnets, the levitation of a permanent magnet can even be stabilized by the small diamagnetism of water in human fingers.[12]

Diamagnetic levitation[edit]

Diamagnetic levitation of pyrolytic carbon

Diamagnetism is the property of an object which causes it to create a magnetic field in opposition to an externally applied magnetic field, thus causing the material to be repelled by magnetic fields. Diamagnetic materials cause lines of magnetic flux to curve away from the material. Specifically, an external magnetic field alters the orbital velocity of electrons around their nuclei, thus changing the magnetic dipole moment.

According to Lenz's law, this opposes the external field. Diamagnets are materials with a magnetic permeability less than μ0 (a relative permeability less than 1). Consequently, diamagnetism is a form of magnetism that is only exhibited by a substance in the presence of an externally applied magnetic field. It is generally quite a weak effect in most materials, although superconductors exhibit a strong effect.

Direct diamagnetic levitation[edit]

A live frog levitates inside a 32 mm diameter vertical bore of a Bitter solenoid in a magnetic field of about 16 teslas

A substance that is diamagnetic repels a magnetic field. All materials have diamagnetic properties, but the effect is very weak, and is usually overcome by the object's paramagnetic or ferromagnetic properties, which act in the opposite manner. Any material in which the diamagnetic component is stronger will be repelled by a magnet.

Diamagnetic levitation can be used to levitate very light pieces of pyrolytic graphite or bismuth above a moderately strong permanent magnet. As water is predominantly diamagnetic, this technique has been used to levitate water droplets and even live animals, such as a grasshopper, a frog and a mouse.[13] However, the magnetic fields required for this are very high, typically in the range of 16 teslas, and therefore create significant problems if ferromagnetic materials are nearby. Operation of the electromagnet used in the frog levitation experiment required 4 MW (4,000,000 watts) of power.[13]: 5

The minimum criterion for diamagnetic levitation is B·(dB/dz) = μ₀·ρ·g / χ, where:

  • χ is the volume magnetic susceptibility of the material,
  • ρ is the density of the material,
  • g is the local gravitational acceleration,
  • μ₀ is the permeability of free space,
  • B is the magnetic field, and
  • dB/dz is the rate of change of the field along the vertical axis.

Assuming ideal conditions along the z-direction of a solenoid magnet:
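
As a rough numerical sketch of that criterion (the susceptibility and density figures below are approximate textbook values used as assumptions, not values taken from the source), the required field-gradient product B·dB/dz can be estimated as:

    import math

    MU_0 = 4.0e-7 * math.pi   # permeability of free space, T*m/A
    G = 9.81                  # m/s^2

    def required_B_dBdz(chi, rho):
        # Field-gradient product B*dB/dz (T^2/m) needed to balance gravity
        # for a diamagnet with volume susceptibility chi and density rho.
        return MU_0 * rho * G / abs(chi)

    # Approximate material properties (illustrative assumptions):
    print(round(required_B_dBdz(chi=-9.0e-6, rho=1000.0)))   # water: ~1370 T^2/m
    print(round(required_B_dBdz(chi=-4.0e-4, rho=2200.0)))   # pyrolytic graphite: ~68 T^2/m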

Superconductors[edit]

Superconductors may be considered perfect diamagnets, and completely expel magnetic fields due to the Meissner effect when the superconductivity initially forms; thus superconducting levitation can be considered a particular instance of diamagnetic levitation. In a type-II superconductor, the levitation of the magnet is further stabilized due to flux pinning within the superconductor; this tends to stop the superconductor from moving with respect to the magnetic field, even if the levitated system is inverted.

These principles are exploited by EDS (Electrodynamic Suspension), superconducting bearings, flywheels, etc.

A very strong magnetic field is required to levitate a train. The JR–Maglev trains have superconducting magnetic coils, but the JR–Maglev levitation is not due to the Meissner effect.

Rotational stabilization[edit]

A Levitron branded top demonstrates spin-stabilized magnetic levitation

A magnet or properly assembled array of magnets can be stably levitated against gravity when gyroscopically stabilized by spinning it in a toroidal field created by a base ring of magnet(s). However, this only works while the rate of precession is between both upper and lower critical thresholds—the region of stability is quite narrow both spatially and in the required rate of precession.

The first discovery of this phenomenon was by Roy M. Harrigan, a Vermont inventor who patented a levitation device in 1983 based upon it.[14] Several devices using rotational stabilization (such as the popular Levitron branded levitating top toy) have been developed citing this patent. Non-commercial devices have been created for university research laboratories, generally using magnets too powerful for safe public interaction.

Strong focusing[edit]

Earnshaw's theorem strictly only applies to static fields. Alternating magnetic fields, even purely alternating attractive fields,[15] can induce stability and confine a trajectory through a magnetic field to give a levitation effect.

This is used in particle accelerators to confine and lift charged particles, and has been proposed for maglev trains as well.[15]

Uses[edit]

Known uses of magnetic levitation include maglev trains, contactless melting, magnetic bearings and product display purposes. Moreover, magnetic levitation has recently been explored in the field of microrobotics.

Maglev transportation[edit]

Maglev, or magnetic levitation, is a system of transportation that suspends, guides and propels vehicles, predominantly trains, using magnetic levitation from a very large number of magnets for lift and propulsion. This method has the potential to be faster, quieter and smoother than wheeled mass transit systems. The technology has the potential to exceed 6,400 km/h (4,000 mi/h) if deployed in an evacuated tunnel.[16] If not deployed in an evacuated tube the power needed for levitation is usually not a particularly large percentage and most of the power needed is used to overcome air drag, as with any other high speed train. Some maglev Hyperloop prototype vehicles are being developed as part of the Hyperloop pod competition in 2015–2016, and are expected to make initial test runs in an evacuated tube later in 2016.[17]

The highest recorded speed of a maglev train is 603 kilometers per hour (374.69 mph), achieved in Japan on 21 April 2015; 28.2 km/h faster than the conventional TGV speed record. Maglev trains exist and are planned across the world. Notable projects in Asia include Central Japan Railway Company's superconducting maglev train and Shanghai's maglev train, the oldest commercial maglev still in operation. Elsewhere, various projects have been considered across Europe and Northeast Maglev aims to overhaul North America's Northeast Corridor with JR Central's SCMaglev technology.

Magnetic bearings[edit]

Levitation melting[edit]

Electromagnetic levitation (EML), patented by Muck in 1923,[18] is one of the oldest levitation techniques used for containerless experiments.[19] The technique enables the levitation of an object using electromagnets. A typical EML coil has reversed winding of upper and lower sections energized by a radio frequency power supply.

https://en.wikipedia.org/wiki/Magnetic_levitation


Maglev (from magnetic levitation) is a system of train transportation that uses two sets of magnets: one set to repel and push the train up off the track, and another set to move the elevated train ahead, taking advantage of the lack of friction. Along certain "medium-range" routes (usually 320 to 640 km (200 to 400 mi)), maglev can compete favourably with high-speed rail and airplanes.

With maglev technology, the train travels along a guideway of magnets which control the train's stability and speed. While the propulsion and levitation require no moving parts, the bogies can move in relation to the main body of the vehicle and some technologies require support by retractable wheels at speeds under 150 kilometres per hour (93 mph). This compares with electric multiple units that may have several dozen parts per bogie. Maglev trains can therefore in some cases be quieter and smoother than conventional trains and have the potential for much higher speeds.[1]

Maglev vehicles have set several speed records, and maglev trains can accelerate and decelerate much faster than conventional trains; the only practical limitation is the safety and comfort of the passengers, although wind resistance at very high speeds can cause running costs that are four to five times that of conventional high-speed rail (such as the Tokaido Shinkansen).[2] The power needed for levitation is typically not a large percentage of the overall energy consumption of a high-speed maglev system.[3] Overcoming drag, which makes all land transport more energy intensive at higher speeds, takes the most energy. Vactrain technology has been proposed as a means to overcome this limitation. Maglev systems have been much more expensive to construct than conventional train systems, although the simpler construction of maglev vehicles makes them cheaper to manufacture and maintain.[citation needed]

The Shanghai maglev train, also known as the Shanghai Transrapid, has a top speed of 430 km/h (270 mph). The line is the fastest operational high-speed maglev train, designed to connect Shanghai Pudong International Airport and the outskirts of central Pudong, Shanghai. It covers a distance of 30.5 km (19 mi) in just over 8 minutes. For the first time, the launch generated wide public interest and media attention, propelling the popularity of the mode of transportation.[4] Despite over a century of research and development, maglev transport systems are now operational in just three countries (Japan, South Korea and China).[citation needed] The incremental benefits of maglev technology have often been considered hard to justify against cost and risk, especially where there is an existing or proposed conventional high-speed train line with spare passenger carrying capacity, as in high-speed rail in Europe, the High Speed 2 in the UK and Shinkansen in Japan.

Technology[edit]

In the public imagination, "maglev" often evokes the concept of an elevated monorail track with a linear motor. Maglev systems may be monorail or dual rail—the SCMaglev MLX01 for instance uses a trench-like track—and not all monorail trains are maglevs. Some railway transport systems incorporate linear motors but use electromagnetism only for propulsion, without levitating the vehicle. Such trains have wheels and are not maglevs.[note 3] Maglev tracks, monorail or not, can also be constructed at grade or underground in tunnels. Conversely, non-maglev tracks, monorail or not, can be elevated or underground too. Some maglev trains do incorporate wheels and function like linear motor-propelled wheeled vehicles at slower speeds but levitate at higher speeds. This is typically the case with electrodynamic suspension maglev trains. Aerodynamic factors may also play a role in the levitation of such trains.

MLX01 Maglev train Superconducting magnet bogie

The two main types of maglev technology are:

  • Electromagnetic suspension (EMS), electronically controlled electromagnets in the train attract it to a magnetically conductive (usually steel) track.
  • Electrodynamic suspension (EDS) uses superconducting electromagnets or strong permanent magnets that create a magnetic field, which induces currents in nearby metallic conductors when there is relative movement, which pushes and pulls the train towards the designed levitation position on the guide way.

Electromagnetic suspension (EMS)[edit]

Electromagnetic suspension (EMS) is used to levitate the Transrapid on the track, so that the train can be faster than wheeled mass transit systems.[59][60]

In electromagnetic suspension (EMS) systems, the train levitates above a steel rail while electromagnets, attached to the train, are oriented toward the rail from below. The system is typically arranged on a series of C-shaped arms, with the upper portion of the arm attached to the vehicle, and the lower inside edge containing the magnets. The rail is situated inside the C, between the upper and lower edges.

Magnetic attraction varies inversely with the square of distance, so minor changes in distance between the magnets and the rail produce greatly varying forces. These changes in force are dynamically unstable—a slight divergence from the optimum position tends to grow, requiring sophisticated feedback systems to maintain a constant distance from the track, (approximately 15 mm [0.59 in]).[61][62]

The major advantage to suspended maglev systems is that they work at all speeds, unlike electrodynamic systems, which only work at a minimum speed of about 30 km/h (19 mph). This eliminates the need for a separate low-speed suspension system, and can simplify track layout. On the downside, the dynamic instability demands fine track tolerances, which can offset this advantage. Eric Laithwaite was concerned that to meet required tolerances, the gap between magnets and rail would have to be increased to the point where the magnets would be unreasonably large.[63] In practice, this problem was addressed through improved feedback systems, which support the required tolerances.
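
The need for continuous correction can be illustrated with a toy one-dimensional simulation (all parameters below are made-up illustrative values, not data for any real vehicle): the attractive force grows as the gap shrinks, so without feedback the magnet either snaps to the rail or drops away, while a simple proportional-derivative adjustment of the coil current holds the gap near a 15 mm setpoint.

    # Toy 1-D model of an EMS levitation gap with PD feedback on the coil current.
    M = 1.0          # kg, levitated mass per magnet (assumed)
    G = 9.81         # m/s^2
    K_MAG = 1.0e-4   # N*m^2/A^2, force constant: F = K_MAG * (i / gap)**2 (assumed)
    GAP_SET = 0.015  # m, target air gap (~15 mm, as quoted above)
    I0 = GAP_SET * (M * G / K_MAG) ** 0.5   # current that balances gravity at the setpoint
    KP, KD = 4000.0, 120.0                  # feedback gains (assumed)
    dt = 1.0e-4                             # s, time step

    gap, vel = 0.016, 0.0                   # start 1 mm wider than the setpoint
    for _ in range(20000):                  # simulate 2 seconds
        i = max(I0 + KP * (gap - GAP_SET) + KD * vel, 0.0)  # more current when the gap widens
        force_up = K_MAG * (i / gap) ** 2   # attraction toward the rail above
        acc = G - force_up / M              # positive acceleration widens the gap
        vel += acc * dt
        gap += vel * dt

    print(round(gap * 1000.0, 2), "mm")     # stays close to 15 mm with feedback active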

Electrodynamic suspension (EDS)[edit]

The Japanese SCMaglev's EDS suspension is powered by the magnetic fields induced on either side of the vehicle by the passage of the vehicle's superconducting magnets.
EDS Maglev propulsion via propulsion coils

In electrodynamic suspension (EDS), both the guideway and the train exert a magnetic field, and the train is levitated by the repulsive and attractive force between these magnetic fields.[64] In some configurations, the train can be levitated only by repulsive force. In the early stages of maglev development at the Miyazaki test track, a purely repulsive system was used instead of the later repulsive and attractive EDS system.[65] The magnetic field is produced either by superconducting magnets (as in JR–Maglev) or by an array of permanent magnets (as in Inductrack). The repulsive and attractive force in the track is created by an induced magnetic field in wires or other conducting strips in the track.

A major advantage of EDS maglev systems is that they are dynamically stable—changes in distance between the track and the magnets creates strong forces to return the system to its original position.[63] In addition, the attractive force varies in the opposite manner, providing the same adjustment effects. No active feedback control is needed.

However, at slow speeds, the current induced in these coils and the resultant magnetic flux is not large enough to levitate the train. For this reason, the train must have wheels or some other form of landing gear to support the train until it reaches take-off speed. Since a train may stop at any location, due to equipment problems for instance, the entire track must be able to support both low- and high-speed operation.

Another downside is that the EDS system naturally creates a field in the track in front and to the rear of the lift magnets, which acts against the magnets and creates magnetic drag. This is generally only a concern at low speeds, and is one of the reasons why JR abandoned a purely repulsive system and adopted the sidewall levitation system.[65] At higher speeds other modes of drag dominate.[63]

The drag force can be used to the electrodynamic system's advantage, however, as it creates a varying force in the rails that can be used as a reactionary system to drive the train, without the need for a separate reaction plate, as in most linear motor systems. Laithwaite led development of such "traverse-flux" systems at his Imperial College laboratory.[63] Alternatively, propulsion coils on the guideway are used to exert a force on the magnets in the train and make the train move forward. The propulsion coils that exert a force on the train are effectively a linear motor: an alternating current through the coils generates a continuously varying magnetic field that moves forward along the track. The frequency of the alternating current is synchronized to match the speed of the train. The offset between the field exerted by magnets on the train and the applied field creates a force moving the train forward.
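
The synchronization described here follows the usual linear-motor relation v = 2·τ·f, where τ is the pole pitch of the propulsion coils and f the supply frequency; the pitch and target speed in the sketch below are assumed figures for illustration.

    def synchronous_speed(pole_pitch_m, frequency_hz):
        # Speed of the travelling magnetic field: v = 2 * pole pitch * frequency.
        return 2.0 * pole_pitch_m * frequency_hz

    def required_frequency(speed_m_s, pole_pitch_m):
        # Supply frequency needed for the field to keep pace with the train.
        return speed_m_s / (2.0 * pole_pitch_m)

    v_target = 500.0 / 3.6                               # 500 km/h in m/s
    print(synchronous_speed(0.25, 100.0))                # 50.0 m/s (180 km/h) at 100 Hz
    print(round(required_frequency(v_target, 0.25), 1))  # ~277.8 Hz for 500 km/h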

Tracks[edit]

The term "maglev" refers not only to the vehicles, but to the railway system as well, specifically designed for magnetic levitation and propulsion. All operational implementations of maglev technology make minimal use of wheeled train technology and are not compatible with conventional rail tracks. Because they cannot share existing infrastructure, maglev systems must be designed as standalone systems. The SPM maglev system is inter-operable with steel rail tracks and would permit maglev vehicles and conventional trains to operate on the same tracks.[63] MAN in Germany also designed a maglev system that worked with conventional rails, but it was never fully developed.[citation needed]

Evaluation[edit]

Each implementation of the magnetic levitation principle for train-type travel involves advantages and disadvantages.


EMS (Electromagnetic suspension)[66][67]
  • Pros: Magnetic fields inside and outside the vehicle are lower than with EDS; proven, commercially available technology; high speeds (500 km/h or 310 mph); no wheels or secondary propulsion system needed.
  • Cons: The separation between the vehicle and the guideway must be constantly monitored and corrected due to the unstable nature of electromagnetic attraction; the system's inherent instability and the required constant corrections by outside systems may induce vibration.

EDS (Electrodynamic suspension)[68][69]
  • Pros: Onboard magnets and a large margin between rail and train enable the highest recorded speeds (603 km/h or 375 mph) and heavy load capacity; has demonstrated successful operation using high-temperature superconductors in its onboard magnets, cooled with inexpensive liquid nitrogen.[citation needed]
  • Cons: Strong magnetic fields on the train would make the train unsafe for passengers with pacemakers or with magnetic data storage media such as hard drives and credit cards, necessitating the use of magnetic shielding; limitations on guideway inductivity limit maximum speed;[citation needed] vehicle must be wheeled for travel at low speeds.

Inductrack system (Permanent magnet passive suspension)[70][71]
  • Pros: Failsafe suspension: no power is required to activate the magnets; the magnetic field is localized below the car; can generate enough force at low speeds (around 5 km/h or 3.1 mph) for levitation; in the event of a power failure, cars stop safely; Halbach arrays of permanent magnets may prove more cost-effective than electromagnets.
  • Cons: Requires either wheels or track segments that move when the vehicle is stopped. Under development as of 2008; no commercial version or full-scale prototype exists.

Neither Inductrack nor the superconducting EDS is able to levitate vehicles at a standstill, although Inductrack provides levitation at a much lower speed; wheels are required for these systems. EMS systems are wheel-free.

The German Transrapid, Japanese HSST (Linimo), and Korean Rotem EMS maglevs levitate at a standstill, with electricity extracted from the guideway using power rails for the latter two, and wirelessly for the Transrapid. If guideway power is lost on the move, the Transrapid is still able to generate levitation down to a speed of 10 km/h (6.2 mph),[citation needed] using the power from onboard batteries. This is not the case with the HSST and Rotem systems.

Propulsion[edit]

EMS systems such as HSST/Linimo can provide both levitation and propulsion using an onboard linear motor. But EDS systems and some EMS systems such as Transrapid levitate but do not propel. Such systems need some other technology for propulsion. A linear motor (propulsion coils) mounted in the track is one solution. Over long distances coil costs could be prohibitive.

Stability[edit]

Earnshaw's theorem shows that no combination of static magnets can be in a stable equilibrium.[72] Therefore a dynamic (time varying) magnetic field is required to achieve stabilization. EMS systems rely on active electronic stabilization that constantly measures the bearing distance and adjusts the electromagnet current accordingly. EDS systems rely on changing magnetic fields to create currents, which can give passive stability.
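As a rough illustration of that active stabilisation (a toy model only; the mass, force constant and controller gains below are assumed, not taken from any real maglev design), a simple proportional-derivative loop can hold an otherwise unstable electromagnet at a fixed air gap:

# Toy 1-D EMS model: an electromagnet hangs a vehicle below a rail, with
# attraction ~ C*(i/gap)^2 pulling up and gravity pulling down.  A PD loop
# measures the air gap and adjusts coil current, supplying the time-varying
# field that Earnshaw's theorem says a static arrangement cannot provide.
m, g0, C = 1000.0, 9.81, 9.81e-4     # vehicle mass [kg], gravity, force constant (assumed)
gap_ref = 0.010                      # target air gap [m]
i0 = gap_ref * (m * g0 / C) ** 0.5   # coil current that balances gravity at gap_ref
kp, kd = 2.0e4, 1.5e3                # proportional / derivative gains (assumed)
gap, vel, dt = 0.011, 0.0, 1e-4      # start 1 mm low; 0.1 ms time step

for _ in range(20000):               # simulate 2 seconds
    err = gap - gap_ref
    i = max(0.0, i0 + kp * err + kd * vel)   # larger gap -> more current
    force = C * (i / gap) ** 2               # magnetic attraction (upward)
    acc = g0 - force / m                     # positive acceleration widens the gap
    vel += acc * dt
    gap += vel * dt

print(f"gap after 2 s: {gap * 1000:.2f} mm")  # settles near the 10 mm set-point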

Because maglev vehicles essentially fly, stabilisation of pitch, roll and yaw is required. In addition to rotation, surge (forward and backward motions), sway (sideways motion) or heave (up and down motions) can be problematic.

Superconducting magnets on a train above a track made out of a permanent magnet lock the train into its lateral position. It can move linearly along the track, but not off the track. This is due to the Meissner effect and flux pinning.

Guidance system[edit]

Some systems use Null Current systems (also sometimes called Null Flux systems).[64][73] These use a coil that is wound so that it enters two opposing, alternating fields, so that the average flux in the loop is zero. When the vehicle is in the straight-ahead position, no current flows, but any movement off-line creates a flux that generates a field that naturally pushes/pulls it back into line.

Proposed technology enhancements[edit]

Evacuated tubes[edit]

Some systems (notably the Swissmetro system) propose the use of vactrains—maglev train technology used in evacuated (airless) tubes, which removes air drag. This has the potential to increase speed and efficiency greatly, as most of the energy for conventional maglev trains is lost to aerodynamic drag.[74]

One potential risk for passengers of trains operating in evacuated tubes is cabin depressurization, unless tunnel safety monitoring systems can repressurize the tube in the event of a train malfunction or accident; however, since trains are likely to operate at or near the Earth's surface, emergency restoration of ambient pressure should be straightforward. The RAND Corporation has depicted a vacuum tube train that could, in theory, cross the Atlantic or the USA in around 21 minutes.[75]
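As a rough illustration of why evacuation matters (all figures below are assumed for illustration), aerodynamic drag power, P = 0.5·ρ·Cd·A·v³, falls in direct proportion to the residual air density in the tube:

# Illustrative scaling only (assumed values): drag power is proportional to air density.
Cd, A, v = 0.25, 10.0, 150.0                   # drag coefficient, frontal area [m^2], speed [m/s] (~540 km/h)
for label, rho in (("open air       ", 1.225), ("1/1000 atm tube", 0.001225)):   # densities in kg/m^3
    P = 0.5 * rho * Cd * A * v ** 3            # aerodynamic drag power [W]
    print(f"{label}: {P / 1e3:,.0f} kW of drag power")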

Rail-Maglev Hybrid[edit]

The Polish startup Nevomo (previously Hyper Poland) is developing a system for modifying existing railway tracks into a maglev system, on which conventional wheel-rail trains, as well as maglev vehicles, can travel.[76] Vehicles on this so-called 'magrail' system will be able to reach speeds of up to 300 km/h at significantly lower infrastructure costs than stand-alone maglev lines. Similar to proposed Vactrain systems, magrail is designed to allow a later-stage upgrade with a vacuum cover which will enable vehicles to reach speeds of up to 600 km/h due to reduced air pressure, making the system similar to a hyperloop, but without the necessity for dedicated infrastructure corridors.[77]

Energy use[edit]

Energy for maglev trains is used to accelerate the train, and may be regained when the train slows down via regenerative braking. Energy is also used to levitate and stabilise the train's movement. Most of the energy is needed to overcome air drag. Some energy is used for air conditioning, heating, lighting and other miscellany.

At low speeds the percentage of power used for levitation can be significant, consuming up to 15% more power than a subway or light rail service.[78] For short distances the energy used for acceleration might be considerable.

The force used to overcome air drag increases with the square of the velocity and hence dominates at high speed. The energy needed per unit distance increases with the square of the velocity while the journey time decreases linearly, so power increases with the cube of the velocity. For example, 2.37 times as much power is needed to travel at 400 km/h (250 mph) as at 300 km/h (190 mph), while drag increases to 1.77 times the original force.[79]
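A quick check of those figures, using only the quadratic and cubic scaling just described:

# Drag grows with v^2 and the power needed to overcome it with v^3.
v1, v2 = 300.0, 400.0                                            # km/h
print(f"drag ratio  (400 vs 300 km/h): {(v2 / v1) ** 2:.2f}")    # ~1.78
print(f"power ratio (400 vs 300 km/h): {(v2 / v1) ** 3:.2f}")    # ~2.37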

Aircraft take advantage of lower air pressure and lower temperatures by cruising at altitude to reduce energy consumption but, unlike trains, need to carry fuel on board. This has led to the suggestion of conveying maglev vehicles through partially evacuated tubes.

Comparison with conventional trains[edit]

Maglev transport is non-contact and electric powered. It relies less or not at all on the wheels, bearings and axles common to wheeled rail systems.[80]

  • Speed: Maglev allows higher top speeds than conventional rail, but experimental wheel-based high-speed trains have demonstrated similar speeds.
  • Maintenance: Maglev trains currently in operation have demonstrated the need for minimal guideway maintenance. Vehicle maintenance is also minimal (based on hours of operation, rather than on speed or distance traveled). Traditional rail is subject to mechanical wear and tear that increases rapidly with speed, also increasing maintenance.[80] For example: the wearing down of brakes and overhead wire wear have caused problems for the Fastech 360 rail Shinkansen. Maglev would eliminate these issues.
  • Weather: Maglev trains are little affected by snow, ice, severe cold, rain or high winds. However, they have not operated in the wide range of conditions that traditional friction-based rail systems have operated. Maglev vehicles accelerate and decelerate faster than mechanical systems regardless of the slickness of the guideway or the slope of the grade because they are non-contact systems.[80]
  • Track: Maglev trains are not compatible with conventional track, and therefore require custom infrastructure for their entire route. By contrast conventional high-speed trains such as the TGV are able to run, albeit at reduced speeds, on existing rail infrastructure, thus reducing expenditure where new infrastructure would be particularly expensive (such as the final approaches to city terminals), or on extensions where traffic does not justify new infrastructure. John Harding, former chief maglev scientist at the Federal Railroad Administration, claimed that separate maglev infrastructure more than pays for itself with higher levels of all-weather operational availability and nominal maintenance costs. These claims have yet to be proven in an intense operational setting and they do not consider the increased maglev construction costs.
  • Efficiency: Conventional rail is probably more efficient at lower speeds. But due to the lack of physical contact between the track and the vehicle, maglev trains experience no rolling resistance, leaving only air resistance and electromagnetic drag, potentially improving power efficiency.[81] Some systems, however, such as the Central Japan Railway Company SCMaglev use rubber tires at low speeds, reducing efficiency gains.[citation needed]
  • Weight: The electromagnets in many EMS and EDS designs require between 1 and 2 kilowatts per ton.[82] The use of superconductor magnets can reduce the electromagnets' energy consumption. A 50-ton Transrapid maglev vehicle can lift an additional 20 tons, for a total of 70 tons, which consumes 70–140 kW (94–188 hp).[citation needed] Most energy use for the TRI is for propulsion and overcoming air resistance at speeds over 100 mph (160 km/h).[citation needed]
  • Weight loading: High-speed rail requires more support and construction for its concentrated wheel loading. Maglev cars are lighter and distribute weight more evenly.[83]
  • Noise: Because the major source of noise of a maglev train comes from displaced air rather than from wheels touching rails, maglev trains produce less noise than a conventional train at equivalent speeds. However, the psychoacoustic profile of the maglev may reduce this benefit: a study concluded that maglev noise should be rated like road traffic, while conventional trains experience a 5–10 dB "bonus", as they are found less annoying at the same loudness level.[84][85][86]
  • Magnet reliability: Superconducting magnets are generally used to generate the powerful magnetic fields to levitate and propel the trains. These magnets must be kept below their critical temperatures (this ranges from 4.2 K to 77 K, depending on the material). New alloys and manufacturing techniques in superconductors and cooling systems have helped address this issue.
  • Control systems: No signalling systems are needed for high-speed rail, because such systems are computer controlled. Human operators cannot react fast enough to manage high-speed trains. High-speed systems require dedicated rights of way and are usually elevated. Two maglev system microwave towers are in constant contact with trains. There is no need for train whistles or horns, either.
  • Terrain: Maglevs are able to ascend higher grades, offering more routing flexibility and reduced tunneling.[87] However, their high speed and greater need for control make it difficult for a maglev to merge with complex terrain, such as a curved hill. Traditional trains, on the other hand, are able to curve alongside a mountain top or meander through a forest.

Comparison with aircraft[edit]

Differences between airplane and maglev travel:

  • Efficiency: For maglev systems the lift-to-drag ratio can exceed that of aircraft (for example Inductrack can approach 200:1 at high speed, far higher than any aircraft). This can make maglevs more efficient per kilometer. However, at high cruising speeds, aerodynamic drag is much larger than lift-induced drag. Jets take advantage of low air density at high altitudes to significantly reduce air drag. Hence despite their lift-to-drag ratio disadvantage, they can travel more efficiently at high speeds than maglev trains that operate at sea level.[citation needed]
  • Routing: Maglevs offer competitive journey times for distances of 800 km (500 mi) or less. Additionally, maglevs can easily serve intermediate destinations.
  • Availability: Maglevs are little affected by weather.[citation needed]
  • Travel time: Maglevs do not face the extended security protocols faced by air travelers nor is time consumed for taxiing, or for queuing for take-off and landing.[citation needed]

https://en.wikipedia.org/wiki/Maglev


A Halbach array is a special arrangement of permanent magnets that augments the magnetic field on one side of the array while cancelling the field to near zero on the other side.[1][2] This is achieved by having a spatially rotating pattern of magnetisation.

The rotating pattern of permanent magnets (on the front face; on the left, up, right, down) can be continued indefinitely and have the same effect. The effect of this arrangement is roughly similar to many horseshoe magnets placed adjacent to each other, with similar poles touching.
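A small numerical sketch of this one-sided flux (an idealisation: the magnets are treated as point dipoles, and the pitch, dipole moment and measurement distances below are assumed) sums the fields of a row of dipoles whose moments rotate 90 degrees from one magnet to the next, then compares the field strength on the two sides of the array:

import numpy as np

# Approximate a linear Halbach array with a row of point dipoles whose moments
# rotate +x, +z, -x, -z, ... along the row; the summed field is much stronger
# on one side of the array than on the other.
MU0 = 4e-7 * np.pi

def dipole_B(m, r):
    """Field of a point dipole with moment m at displacement r (SI units)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(m, rhat) - m) / rn ** 3

spacing, m0, n = 0.01, 1.0, 20                    # 1 cm pitch, 1 A*m^2 per magnet, 20 magnets (assumed)
angles = np.arange(n) * np.pi / 2                 # 90-degree rotation per magnet
positions = [np.array([k * spacing, 0.0, 0.0]) for k in range(n)]
moments = [m0 * np.array([np.cos(a), 0.0, np.sin(a)]) for a in angles]

center = (n - 1) * spacing / 2
for z in (+0.010, -0.010):                        # 1 cm above and 1 cm below the array
    point = np.array([center, 0.0, z])
    B = sum(dipole_B(m, point - p) for m, p in zip(moments, positions))
    print(f"z = {z:+.3f} m  ->  |B| = {np.linalg.norm(B):.4f} T")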

The principle was first invented by James (Jim) M. Winey of Magnepan in 1970, for the ideal case of continuously rotating magnetization, induced by a one-sided stripe-shaped coil.[3]

The effect was also discovered by John C. Mallinson in 1973, and these "one-sided flux" structures were initially described by him as a "curiosity", although at the time he recognized from this discovery the potential for significant improvements in magnetic tape technology.[4]

Physicist Klaus Halbach, while at the Lawrence Berkeley National Laboratory during the 1980s, independently invented the Halbach array to focus particle accelerator beams.[5]

https://en.wikipedia.org/wiki/Halbach_array


The Electromagnetic Aircraft Launch System (EMALS) is a type of aircraft launching system developed by General Atomics for the United States Navy. The system launches carrier-based aircraft by means of a catapult employing a linear induction motor rather than the conventional steam piston. EMALS was first installed on the United States Navy's Gerald R. Ford-class aircraft carrier USS Gerald R. Ford.

Its main advantage is that it accelerates aircraft more smoothly, putting less stress on their airframes. Compared to steam catapults, the EMALS also weighs less, is expected to cost less and require less maintenance, and can launch both heavier and lighter aircraft than a steam piston-driven system. It also reduces the carrier's requirement of fresh water, thus reducing the demand for energy-intensive desalination.

China is reportedly developing a similar system which is expected to be used on China's Type 003 aircraft carriers.[1][2]

https://en.wikipedia.org/wiki/Electromagnetic_Aircraft_Launch_System


A homopolar motor is a direct current electric motor with two magnetic poles, the conductors of which always cut unidirectional lines of magnetic flux by rotating a conductor around a fixed axis so that the conductor is at right angles to a static magnetic field. The resulting force being continuous in one direction, the homopolar motor needs no commutator but still requires slip rings.[1] The name homopolar indicates that the electrical polarity of the conductor and the magnetic field poles do not change (i.e., that it does not require commutation).

Electromagnetic rotation experiment of Faraday, ca. 1821[2]

The homopolar motor was the first electrical motor to be built. Its operation was demonstrated by Michael Faraday in 1821 at the Royal Institution in London.[3][4]

In 1821, soon after the Danish physicist and chemist Hans Christian Ørsted discovered the phenomenon of electromagnetism, Humphry Davy and British scientist William Hyde Wollaston tried, but failed, to design an electric motor.[5] Faraday, having been challenged by Davy as a joke[citation needed], went on to build two devices to produce what he called "electromagnetic rotation". One of these, now known as the homopolar motor, caused a continuous circular motion that was engendered by the circular magnetic force around a wire that extended into a pool of mercury in which a magnet was placed. The wire would then rotate around the magnet if supplied with current from a chemical battery. These experiments and inventions formed the foundation of modern electromagnetic technology. In his excitement, Faraday published the results. This strained his relationship with his mentor Davy, whose jealousy of Faraday's achievement led to Faraday being assigned to other activities, which consequently prevented his involvement in electromagnetic research for several years.[6][7]

B. G. Lamme described in 1913 a homopolar machine rated 2,000 kW, 260 V, 7,700 A and 1,200 rpm with 16 slip rings operating at a peripheral velocity of 67 m/s. A unipolar generator rated 1,125 kW, 7.5 V, 150,000 A, 514 rpm built in 1934 was installed in a U.S. steel mill for pipe welding purposes.[8]

https://en.wikipedia.org/wiki/Homopolar_motor


A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder) with an electrical polarity that depends on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage.[1] They are unusual in that they can source tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Also, the homopolar generator is unique in that no other rotary electric machine can produce DC without using rectifiers or commutators.[2]

Homopolar generators underwent a renaissance in the 1950s as a source of pulsed power storage. These devices used heavy disks as a form of flywheel to store mechanical energy that could be quickly dumped into an experimental apparatus. An early example of this sort of device was built by Sir Mark Oliphant at the Research School of Physical Sciences and Engineering, Australian National University. It stored up to 500 megajoules of energy[4] and was used as an extremely high-current source for synchrotron experimentation from 1962 until it was disassembled in 1986. Oliphant's construction was capable of supplying currents of up to 2 megaamperes (MA).

Similar devices of even larger size are designed and built by Parker Kinetic Designs (formerly OIME Research & Development) of Austin. They have produced devices for a variety of roles, from powering railguns to linear motors (for space launches) to a variety of weapons designs. Industrial designs of 10 MJ were introduced for a variety of roles, including electrical welding.[5]

Description and operation[edit]

Disc-type generator[edit]

Basic Faraday disc generator

This device consists of a conducting flywheel rotating in a magnetic field with one electrical contact near the axis and the other near the periphery. It has been used for generating very high currents at low voltages in applications such as welding, electrolysis and railgun research. In pulsed energy applications, the angular momentum of the rotor is used to accumulate energy over a long period and then release it in a short time.

In contrast to other types of generators, the output voltage never changes polarity. The charge separation results from the Lorentz force on the free charges in the disk. The motion is azimuthal and the field is axial, so the electromotive force is radial. The electrical contacts are usually made through a "brush" or slip ring, which results in large losses at the low voltages generated. Some of these losses can be reduced by using mercury or other easily liquefied metal or alloy (gallium, NaK) as the "brush", to provide essentially uninterrupted electrical contact.
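As a rough worked example (the field strength, radius and speed below are assumed), integrating that radial Lorentz force from the axle to the rim gives the open-circuit EMF of a Faraday disc, V = 0.5·B·ω·R²:

import math

# Assumed demonstration-scale numbers for a Faraday disc.
B = 1.0                                  # axial magnetic field [T] (assumed)
R = 0.10                                 # disc radius [m] (assumed)
omega = 3000.0 * 2.0 * math.pi / 60.0    # 3000 RPM converted to rad/s
print(f"open-circuit EMF ~ {0.5 * B * omega * R ** 2:.2f} V")   # about 1.6 V: a few volts, as noted above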

A recent suggested modification is to use a plasma contact supplied by a negative resistance neon streamer touching the edge of the disk or drum, using specialized low work function carbon in vertical strips. This would have the advantage of very low resistance within a current range possibly up to thousands of amps without the liquid metal contact.[citation needed]

If the magnetic field is provided by a permanent magnet, the generator works regardless of whether the magnet is fixed to the stator or rotates with the disc. Before the discovery of the electron and the Lorentz force law, the phenomenon was inexplicable and was known as the Faraday paradox.

https://en.wikipedia.org/wiki/Homopolar_generator


A flywheel is a mechanical device which uses the conservation of angular momentum to store rotational energy; a form of kinetic energy proportional to the product of its moment of inertia and the square of its rotational speed. In particular, if we assume the flywheel's moment of inertia to be constant (i.e., a flywheel with fixed mass and second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed.

Since a flywheel serves to store mechanical energy for later use, it is natural to consider it as a kinetic energy analogue of an electrical inductor. Once suitably abstracted, this shared principle of energy storage is described in the generalized concept of an accumulator. As with other types of accumulators, a flywheel inherently smoothes sufficiently small deviations in the power output of a system, thereby effectively playing the role of a low-pass filter with respect to the mechanical velocity (angular, or otherwise) of the system. More precisely, a flywheel's stored energy will donate a surge in power output upon a drop in power input and will conversely absorb any excess power input (system-generated power) in the form of rotational energy.

Common uses of a flywheel include:

  • Smoothing the power output of an energy source. For example, flywheels are used in reciprocating engines because the active torque from the individual pistons is intermittent.
  • Energy storage systems
  • Delivering energy at rates beyond the ability of an energy source. This is achieved by collecting energy in a flywheel over time and then releasing it quickly, at rates that exceed the abilities of the energy source.
  • Controlling the orientation of a mechanical system, as with a gyroscope or reaction wheel

Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a maximum revolution rate of a few thousand RPM.[1] High energy density flywheels can be made of carbon fiber composites and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM (1 kHz).[2]
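A back-of-envelope sketch (the masses and dimensions below are assumed) of the stored energy E = 0.5·I·ω² for a solid-disc flywheel, with I = 0.5·m·r², comparing the two speed regimes just mentioned:

import math

# Stored energy of a solid disc spinning about its axis (assumed dimensions).
def flywheel_energy_joules(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m ** 2       # solid disc: I = 0.5*m*r^2
    omega = rpm * 2.0 * math.pi / 60.0            # convert RPM to rad/s
    return 0.5 * inertia * omega ** 2

steel = flywheel_energy_joules(100.0, 0.30, 5_000)     # heavy steel disc (assumed)
carbon = flywheel_energy_joules(10.0, 0.15, 60_000)    # small carbon-fibre rotor (assumed)
print(f"steel disc   @  5,000 RPM: {steel / 1e6:.2f} MJ")
print(f"carbon rotor @ 60,000 RPM: {carbon / 1e6:.2f} MJ")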

https://en.wikipedia.org/wiki/Flywheel


A gyroscope (from Ancient Greek γῦρος gûros, "circle" and σκοπέω skopéō, "to look") is a device used for measuring or maintaining orientation and angular velocity.[1][2] It is a spinning wheel or disc in which the axis of rotation (spin axis) is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum.

Gyroscopes based on other operating principles also exist, such as the microchip-packaged MEMS gyroscopes found in electronic devices (sometimes called gyrometers), solid-state ring lasers, fibre optic gyroscopes, and the extremely sensitive quantum gyroscope.[3]

Applications of gyroscopes include inertial navigation systems, such as in the Hubble Telescope, or inside the steel hull of a submerged submarine. Due to their precision, gyroscopes are also used in gyrotheodolites to maintain direction in tunnel mining.[4] Gyroscopes can be used to construct gyrocompasses, which complement or replace magnetic compasses (in ships, aircraft and spacecraft, vehicles in general), to assist in stability (bicycles, motorcycles, and ships) or be used as part of an inertial guidance system.

MEMS gyroscopes are popular in some consumer electronics, such as smartphones.

https://en.wikipedia.org/wiki/Gyroscope


In surveying, a gyrotheodolite (also: surveying gyro) is an instrument composed of a gyrocompass mounted to a theodolite. It is used to determine the orientation of true north. It is the main instrument for orientation in mine surveying[1] and in tunnel engineering, where astronomical star sights are not visible and GPS does not work.

https://en.wikipedia.org/wiki/Gyrotheodolite


Electrodynamic tethers (EDTs) are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy.[1] Electric potential is generated across a conductive tether by its motion through a planet's magnetic field.

A number of missions have demonstrated electrodynamic tethers in space, most notably the TSS-1, TSS-1R, and Plasma Motor Generator (PMG) experiments.

https://en.wikipedia.org/wiki/Electrodynamic_tether


PMG[edit]

A follow-on experiment, the Plasma Motor Generator (PMG), used the SEDS deployer to deploy a 500-m tether to demonstrate electrodynamic tether operation.[21][22]

The PMG was planned to test the ability of a Hollow Cathode Assembly (HCA) to provide a low-impedance bipolar electric current between a spacecraft and the ionosphere. In addition, other expectations were to show that the mission configuration could function as an orbit-boosting motor as well as a generator, by converting orbital energy into electricity. The tether was a 500 m length of insulated 18 gauge copper wire.[21]
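For a sense of scale (the orbital speed and field strength below are assumed round numbers, not mission data), the motional EMF along a tether of this length, with the velocity, geomagnetic field and tether mutually perpendicular, is roughly V = v·B·L:

# Order-of-magnitude estimate of the motional EMF across a 500 m conductive tether.
v = 7_500.0     # orbital speed in low Earth orbit [m/s] (assumed)
B = 3.0e-5      # geomagnetic field at orbital altitude [T] (assumed)
L = 500.0       # tether length [m]
print(f"motional EMF ~ {v * B * L:.0f} V")   # roughly 110 V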

The mission was launched on 26 June 1993, as the secondary payload on a Delta II rocket. The total experiment lasted approximately seven hours. In that time, the results demonstrated that the current was fully reversible, and that the system was therefore capable of both power-generation and orbit-boosting modes. The hollow cathode was able to provide a low-power way of connecting the current to and from the ambient plasma. This means that the HC demonstrated its electron collection and emission capabilities.[23]

https://en.wikipedia.org/wiki/Space_tether_missions#PMG


Mu-metal is a nickel–iron soft ferromagnetic alloy with very high permeability, which is used for shielding sensitive electronic equipment against static or low-frequency magnetic fields. It has several compositions. One such composition is approximately 77% nickel, 16% iron, 5% copper, and 2% chromium or molybdenum.[1][2] More recently, mu-metal is considered to be ASTM A753 Alloy 4 and is composed of approximately 80% nickel, 5% molybdenum, small amounts of various other elements such as silicon, and the remaining 12 to 15% iron.[3] The name came from the Greek letter mu (μ), which represents permeability in physics and engineering formulae. A number of different proprietary formulations of the alloy are sold under trade names such as MuMETAL, Mumetall, and Mumetal2.

Mu-metal typically has relative permeability values of 80,000–100,000 compared to several thousand for ordinary steel. It is a "soft" ferromagnetic material; it has low magnetic anisotropy and magnetostriction,[1] giving it a low coercivity so that it saturates at low magnetic fields. This gives it low hysteresis losses when used in AC magnetic circuits. Other high-permeability nickel–iron alloys such as permalloy have similar magnetic properties; mu-metal's advantage is that it is more ductile, malleable and workable, allowing it to be easily formed into the thin sheets needed for magnetic shields.[1]
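For a sense of what such permeability buys, a commonly quoted thin-wall rule of thumb (assumed here, not taken from the article) puts the transverse shielding factor of a long cylindrical shield at roughly S ≈ μr·t/D:

# Rough rule-of-thumb estimate (assumed approximation): attenuation of a
# transverse field by a long, thin-walled cylindrical mu-metal shield.
mu_r = 80_000        # relative permeability (lower end of the range quoted above)
t = 0.001            # wall thickness: 1 mm (assumed)
D = 0.10             # shield diameter: 10 cm (assumed)
print(f"transverse shielding factor ~ {mu_r * t / D:.0f}x")   # ~800x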

Mu-metal objects require heat treatment after they are in final form—annealing in a magnetic field in hydrogen atmosphere, which increases the magnetic permeability about 40 times.[4] The annealing alters the material's crystal structure, aligning the grains and removing some impurities, especially carbon, which obstruct the free motion of the magnetic domain boundaries. Bending or mechanical shock after annealing may disrupt the material's grain alignment, leading to a drop in the permeability of the affected areas, which can be restored by repeating the hydrogen annealing step.

https://en.wikipedia.org/wiki/Mu-metal


A nuclear power plant (sometimes abbreviated as NPP)[1] is a thermal power station in which the heat source is a nuclear reactor. As is typical of thermal power stations, heat is used to generate steam that drives a steam turbine connected to a generator that produces electricity. As of 2018, the International Atomic Energy Agency reported there were 450 nuclear power reactors in operation in 30 countries around the world.[2][3]

Nuclear plants are usually considered to be base load stations since fuel is a small part of the cost of production[4] and because they cannot be easily or quickly dispatched. Their operations, maintenance, and fuel costs are at the low end of the spectrum, making them suitable as base-load power suppliers. However, the cost of proper long term radioactive waste storage is uncertain.

https://en.wikipedia.org/wiki/Nuclear_power_plant


A nuclear thermal rocket (NTR) is a type of thermal rocket where the heat from a nuclear reaction, often nuclear fission, replaces the chemical energy of the propellants in a chemical rocket. In an NTR, a working fluid, usually liquid hydrogen, is heated to a high temperature in a nuclear reactor and then expands through a rocket nozzle to create thrust. The external nuclear heat source theoretically allows a higher effective exhaust velocity and is expected to double or triple payload capacity compared to chemical propellants that store energy internally.

NTRs have been proposed as a spacecraft propulsion technology, with the earliest ground tests occurring in 1955. The United States maintained an NTR development program through 1973, when it was shut down to focus on Space Shuttle development. Although more than ten reactors of varying power output have been built and tested, as of 2021, no nuclear thermal rocket has flown.[1]

Whereas all early applications for nuclear thermal rocket propulsion used fission processes, research in the 2010s has moved to fusion approaches. The Direct Fusion Drive project at the Princeton Plasma Physics Laboratory is one such example, although "energy positive fusion has remained elusive". In 2019, the U.S. Congress approved US$125 million in development funding for nuclear thermal propulsion rockets.[1]

https://en.wikipedia.org/wiki/Nuclear_thermal_rocket


Principle of Operation[edit]

Nuclear-powered thermal rockets are more effective than chemical thermal rockets, primarily because they can use low-molecular-mass propellant such as hydrogen.

As thermal rockets, nuclear thermal rockets work almost exactly like chemical rockets: a heat source releases thermal energy into a gaseous propellant inside the body of the engine, and a nozzle at one end acts as a very simple heat engine: it allows the propellant to expand away from the vehicle, carrying momentum with it and converting thermal energy to coherent kinetic energy. The specific impulse (Isp) of the engine is set by the speed of the exhaust stream. That in turn varies as the square root of the kinetic energy loaded on each unit mass of propellant. The kinetic energy per molecule of propellant is determined by the temperature of the heat source (whether it be a nuclear reactor or a chemical reaction). At any particular temperature, lightweight propellant molecules carry just as much kinetic energy as heavier propellant molecules, and therefore have more kinetic energy per unit mass. This makes low-molecular-mass propellants more effective than high-molecular-mass propellants.

Because chemical rockets and nuclear rockets are made from refractory solid materials, they are both limited to operate below ~3000 °C by the strength characteristics of high-temperature metals. Chemical rockets use the most readily available propellant: the waste products of the chemical reactions that produce their heat energy. Most liquid-fueled chemical rockets use either hydrogen or hydrocarbon combustion, and the propellant is therefore mainly water (molecular mass 18) and/or carbon dioxide (molecular mass 44). Nuclear thermal rockets using gaseous hydrogen propellant (molecular mass 2) therefore have a theoretical maximum Isp that is 3x-4.5x greater than that of chemical rockets.
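A quick sanity check of that ratio (idealised: equal chamber temperature, with exhaust velocity taken as proportional to √(T/M)):

import math

# Ratio of exhaust velocities at equal temperature depends only on sqrt of molecular mass.
M_H2, M_H2O, M_CO2 = 2.0, 18.0, 44.0   # molecular masses [g/mol]
print(f"hydrogen vs. water exhaust: x{math.sqrt(M_H2O / M_H2):.1f}")   # ~3.0
print(f"hydrogen vs. CO2 exhaust:   x{math.sqrt(M_CO2 / M_H2):.1f}")   # ~4.7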

History[edit]

As early as 1944, Stanisław Ulam and Frederic de Hoffmann contemplated the idea of controlling the power of the nuclear explosions to launch space vehicles.[2] After World War II, the U.S. military started the development of intercontinental ballistic missiles (ICBM) based on the German V-2 rocket designs. Some large rockets were designed to carry nuclear warheads with nuclear-powered propulsion engines.[2] As early as 1946, secret reports were prepared for the U.S. Air Force, as part of the NEPA project, by North American Aviation and Douglas Aircraft Company's Project Rand.[3] These groundbreaking reports identified a reactor engine in which a working fluid of low molecular weight is heated using a nuclear reactor as the most promising form of nuclear propulsion, but identified many technical issues that needed to be resolved.[4][5][6][7][8][9][10][11]

In January 1947, not aware of this classified research, engineers of the Applied Physics Laboratory published their research on nuclear power propulsion and their report was eventually classified.[12][2][13] In May 1947, American-educated Chinese scientist Hsue-Shen Tsien presented his research on "thermal jets" powered by a porous graphite-moderated nuclear reactor at the Nuclear Science and Engineering Seminars LIV organised by the Massachusetts Institute of Technology.[14][13]

In 1948 and 1949, physicist Leslie Shepherd and rocket scientist Val Cleaver produced a series of groundbreaking scientific papers that considered how nuclear technology might be applied to interplanetary travel. The papers examined both nuclear-thermal and nuclear-electric propulsion.[15][16][17][18]

Nuclear fuel types[edit]

A nuclear thermal rocket can be categorized by the type of reactor, ranging from a relatively simple solid reactor up to the much more difficult to construct but theoretically more efficient gas core reactor. As with all thermal rocket designs, the specific impulse produced is proportional to the square root of the temperature to which the working fluid (reaction mass) is heated. To extract maximum efficiency, the temperature must be as high as possible. For a given design, the temperature that can be attained is typically determined by the materials chosen for reactor structures, the nuclear fuel, and the fuel cladding. Erosion is also a concern, especially the loss of fuel and associated releases of radioactivity.[citation needed]

Solid core[edit]

NERVA solid-core design

Solid core nuclear reactors have been fueled by compounds of uranium that exist in solid phase under the conditions encountered and undergo nuclear fission to release energy. Flight reactors must be lightweight and capable of tolerating extremely high temperatures, as the only coolant available is the working fluid/propellant.[1] A nuclear solid core engine is the simplest design to construct and is the concept used on all tested NTRs.[citation needed]

A solid core reactor's performance is ultimately limited by the material properties, including melting point, of the materials used in the nuclear fuel and reactor pressure vessel. Nuclear reactions can create much higher temperatures than most materials can typically withstand, meaning that much of the potential of the reactor cannot be realized. Additionally, with cooling provided by the propellant alone, all of the decay heat remaining after reactor shutdown must be radiated to space, a slow process that will expose the fuel rods to extreme temperature stress. During operation, temperatures at the fuel rod surfaces range from the 22 K of admitted propellant up to 3000 K at the exhaust end. Taking place over the 1.3 m length of a fuel rod, this is certain to cause cracking of the cladding if the coefficients of expansion are not precisely matched in all the components of the reactor.[citation needed]

Using hydrogen as a propellant, a solid core design would typically deliver specific impulses (Isp) on the order of 850 to 1000 seconds, which is about twice that of liquid hydrogen-oxygen designs such as the Space Shuttle main engine. Other propellants have also been proposed, such as ammonia, water or LOX, but these propellants would provide reduced exhaust velocity and performance at a marginally reduced fuel cost. Yet another mark in favor of hydrogen is that at low pressures it begins to dissociate at about 1500 K, and at high pressures around 3000 K. This lowers the mass of the exhaust species, increasing Isp.

Early publications were doubtful of space applications for nuclear engines. In 1947, a complete nuclear reactor was so heavy that solid core nuclear thermal engines would be entirely unable[19] to achieve a thrust-to-weight ratio of 1:1, which is needed to overcome the gravity of the Earth at launch. Over the next twenty-five years U.S. nuclear thermal rocket designs eventually reached thrust-to-weight ratios of approximately 7:1. This is still a much lower thrust-to-weight ratio than what is achievable with chemical rockets, which have thrust-to-weight ratios on the order of 70:1. Combined with the large tanks necessary for liquid hydrogen storage, this means that solid core nuclear thermal engines are best suited for use in orbit outside Earth's gravity well, not to mention avoiding the radioactive contamination that would result from atmospheric use[1] (if an "open cycle" design was used, as opposed to a lower-performance "closed cycle" design where no radioactive material was allowed to escape with the rocket fuel.[20])

One way to increase the working temperature of the reactor is to change the nuclear fuel elements. This is the basis of the particle-bed reactor, which is fueled by a number of (typically spherical) elements which "float" inside the hydrogen working fluid. Spinning the entire engine could prevent the fuel element from being ejected out the nozzle. This design is thought to be capable of increasing the specific impulse to about 1000 seconds (9.8 kN·s/kg) at the cost of increased complexity. Such a design could share design elements with a pebble-bed reactor, several of which are currently generating electricity.[citation needed] From 1987 through 1991, the Strategic Defense Initiative (SDI) Office funded Project Timberwind, a non-rotating nuclear thermal rocket based on particle bed technology. The project was canceled before testing.[citation needed]

Pulsed nuclear thermal rocket[edit]

Pulsed nuclear thermal rocket unit cell concept for Isp amplification. In this cell, hydrogen propellant is heated by the continuous intense neutron pulses in the propellant channels. At the same time, the unwanted energy from the fission fragments is removed by a solitary cooling channel with lithium or other liquid metal.

The pulsed nuclear thermal rocket (not to be confused with nuclear pulse propulsion, which is a hypothetical method of spacecraft propulsion that uses nuclear explosions for thrust) is a type of solid nuclear thermal rocket for thrust and specific impulse (Isp) amplification.[21] In this concept, the conventional solid fission NTR can operate in a stationary as well as in a pulsed mode, much like a TRIGA reactor. Because the residence time of the propellant in the chamber is short, an important amplification in energy is attainable by pulsing the nuclear core, which can increase the thrust via increasing the propellant mass flow. However, the most interesting feature is the capability to obtain very high propellant temperatures (higher than the fuel) and then high amplification of exhaust velocity. This is because, in contrast with the conventional stationary solid NTR, propellant is heated by the intense neutron flux from the pulsation, which is directly transported from the fuel to the propellant as kinetic energy. By pulsing the core it is possible to obtain a propellant hotter than the fuel. However, and in clear contrast with classical nuclear thermal rockets (including liquid and gas nuclear rockets), the thermal energy from the decay of fission daughters is unwanted.[citation needed]

Very high instantaneous propellant temperatures are hypothetically attainable by pulsing the solid nuclear core, only limited by the rapid radiative cooling after pulsation.[citation needed]

Liquid core[edit]

Liquid core nuclear engines are fueled by compounds of fissionable elements in liquid phase. A liquid-core engine is proposed to operate at temperatures above the melting point of solid nuclear fuel and cladding, with the maximum operating temperature of the engine instead being determined by the reactor pressure vessel and neutron reflector material. The higher operating temperatures would be expected to deliver specific impulse performance on the order of 1300 to 1500 seconds (12.8-14.8 kN·s/kg).[citation needed]

A liquid-core reactor would be extremely difficult to build with current technology. One major issue is that the reaction time of the nuclear fuel is much longer than the heating time of the working fluid. If the nuclear fuel and working fluid are not physically separated, this means that the fuel must be trapped inside the engine while the working fluid is allowed to easily exit through the nozzle. One possible solution is to rotate the fuel/fluid mixture at very high speeds to force the higher density fuel to the outside, but this would expose the reactor pressure vessel to the maximum operating temperature while adding mass, complexity, and moving parts.[citation needed]

An alternative liquid-core design is the nuclear salt-water rocket. In this design, water is the working fluid and also serves as the neutron moderator. The nuclear fuel is not retained which drastically simplifies the design. However, the rocket would discharge massive quantities of extremely radioactive waste and could only be safely operated well outside the atmosphere of Earth and perhaps even entirely outside the magnetosphere of Earth.[citation needed]

Gas core[edit]

Nuclear gas core closed cycle rocket engine diagram, nuclear "light bulb"
Nuclear gas core open cycle rocket engine diagram

The final fission classification is the gas-core engine. This is a modification to the liquid-core design which uses rapid circulation of the fluid to create a toroidal pocket of gaseous uranium fuel in the middle of the reactor, surrounded by hydrogen. In this case the fuel does not touch the reactor wall at all, so temperatures could reach several tens of thousands of degrees, which would allow specific impulses of 3000 to 5000 seconds (30 to 50 kN·s/kg). In this basic design, the "open cycle", the losses of nuclear fuel would be difficult to control, which has led to studies of the "closed cycle" or nuclear lightbulb engine, where the gaseous nuclear fuel is contained in a super-high-temperature quartz container, over which the hydrogen flows. The closed-cycle engine actually has much more in common with the solid-core design, but this time is limited by the critical temperature of quartz instead of the fuel and cladding. Although less efficient than the open-cycle design, the closed-cycle design is expected to deliver a specific impulse of about 1500–2000 seconds (15-20 kN·s/kg).[citation needed]

Solid core fission designs in practice[edit]

The KIWI A prime nuclear thermal rocket engine

Soviet Union and Russia[edit]

The Soviet RD-0410 went through a series of tests at the nuclear test site near Semipalatinsk.[22][23]

In October 2018, Russia's Keldysh Research Center confirmed a successful ground test of waste heat radiators for a nuclear space engine, as well as previous tests of fuel rods and ion engines.[citation needed]

United States[edit]

A United States Department of Energy video about nuclear thermal rockets.

Development of solid core NTRs started in 1955 under the Atomic Energy Commission (AEC) as Project Rover, and ran to 1973.[1] Work on a suitable reactor was conducted at Los Alamos National Laboratory and Area 25 (Nevada National Security Site) in the Nevada Test Site. Four basic designs came from this project: KIWI, Phoebus, Pewee and the Nuclear Furnace. Twenty individual engines were tested, with a total of over 17 hours of engine run time.[24]

When NASA was formed in 1958, it was given authority over all non-nuclear aspects of the Rover program. In order to enable cooperation with the AEC and keep classified information compartmentalized, the Space Nuclear Propulsion Office (SNPO) was formed at the same time. The 1961 NERVA program was intended to lead to the entry of nuclear thermal rocket engines into space exploration. Unlike the AEC work, which was intended to study the reactor design itself, NERVA's goal was to produce a real engine that could be deployed on space missions. The 334 kN (75,000 lbf) thrust baseline NERVA design was based on the KIWI B4 series.[citation needed]

Tested engines included Kiwi, Phoebus, NRX/EST, NRX/XE, Pewee, Pewee 2 and the Nuclear Furnace. Progressively higher power densities culminated in the Pewee.[24] Tests of the improved Pewee 2 design were cancelled in 1970 in favor of the lower-cost Nuclear Furnace (NF-1), and the U.S. nuclear rocket program officially ended in spring of 1973. During this program, the NERVA accumulated over 2 hours of run time, including 28 minutes at full power.[1] The SNPO considered NERVA to be the last technology development reactor required to proceed to flight prototypes.[citation needed]

A number of other solid-core engines have also been studied to some degree. The Small Nuclear Rocket Engine, or SNRE, was designed at the Los Alamos National Laboratory (LANL) for upper stage use, both on uncrewed launchers and on the Space Shuttle. It featured a split nozzle that could be rotated to the side, allowing it to take up less room in the Shuttle cargo bay. The design provided 73 kN of thrust and operated at a specific impulse of 875 seconds (8.58 kN·s/kg), and it was planned to increase this to 975 seconds, achieving a mass fraction of about 0.74, compared with 0.86 for the Space Shuttle main engine (SSME), one of the best conventional engines.[citation needed]

A related design that saw some work, but never made it to the prototype stage, was Dumbo. Dumbo was similar to KIWI/NERVA in concept, but used more advanced construction techniques to lower the weight of the reactor. The Dumbo reactor consisted of several large barrel-like tubes which were in turn constructed of stacked plates of corrugated material. The corrugations were lined up so that the resulting stack had channels running from the inside to the outside. Some of these channels were filled with uranium fuel, others with a moderator, and some were left open as a gas channel. Hydrogen was pumped into the middle of the tube and was heated by the fuel as it traveled through the channels, working its way to the outside. The resulting system was lighter than a conventional design for any particular amount of fuel.[citation needed]

Between 1987 and 1991, an advanced engine design was studied under Project Timberwind, under the Strategic Defense Initiative, which was later expanded into a larger design in the Space Thermal Nuclear Propulsion (STNP) program. Advances in high-temperature metals, computer modelling and nuclear engineering in general resulted in dramatically improved performance. While the NERVA engine was projected to weigh about 6803 kg, the final STNP offered just over 1/3 the thrust from an engine of only 1650 kg by improving the Isp to between 930 and 1000 seconds.[citation needed]

Test firings[edit]

A KIWI engine being destructively tested.

KIWI was the first to be fired, starting in July 1959 with KIWI 1. The reactor was not intended for flight and named after the flightless bird, Kiwi. The core was simply a stack of uncoated uranium oxide plates onto which the hydrogen was dumped. A thermal output of 70 MW at an exhaust temperature of 2683 K was generated. Two additional tests of the basic concept, A1 and A3, added coatings to the plates to test fuel rod concepts.[citation needed]

The KIWI B series were fueled by tiny uranium dioxide (UO2) spheres embedded in a low-boron graphite matrix and coated with niobium carbide. Nineteen holes ran the length of the bundles, through which the liquid hydrogen flowed. On the initial firings, immense heat and vibration cracked the fuel bundles. The graphite materials used in the reactor's construction were resistant to high temperatures but eroded under the stream of superheated hydrogen, a reducing agent. The fuel species was later switched to uranium carbide, with the last engine run in 1964. The fuel bundle erosion and cracking problems were improved but never completely solved, despite promising materials work at the Argonne National Laboratory.[citation needed]

NERVA NRX (Nuclear Rocket Experimental) started testing in September 1964. The final engine in this series was the XE, designed with flight-representative hardware and fired into a low-pressure chamber to simulate a vacuum. SNPO fired NERVA NRX/XE twenty-eight times in March 1968. The series all generated 1100 MW, and many of the tests concluded only when the test stand ran out of hydrogen propellant. NERVA NRX/XE produced the baseline 334 kN (75,000 lbf) thrust that Marshall Space Flight Center required in Mars mission plans. The last NRX firing lost 17 kg (38 lb) of nuclear fuel in 2 hours of testing, which was judged sufficient for space missions by SNPO.[citation needed]

Building on the KIWI series, the Phoebus series were much larger reactors. The first 1A test in June 1965 ran for over 10 minutes at 1090 MW and an exhaust temperature of 2370 K. The B run in February 1967 improved this to 1500 MW for 30 minutes. The final 2A test in June 1968 ran for over 12 minutes at 4000 MW, at the time the most powerful nuclear reactor ever built.[citation needed]

A smaller version of KIWI, the Pewee, was also built. It was fired several times at 500 MW in order to test coatings made of zirconium carbide (instead of niobium carbide), and Pewee also increased the power density of the system. A water-cooled system known as NF-1 (for Nuclear Furnace) used Pewee 2's fuel elements for future materials testing, showing a further factor-of-3 reduction in fuel corrosion. Pewee 2 was never tested on the stand, and became the basis for current NTR designs being researched at NASA's Glenn Research Center and Marshall Space Flight Center.[citation needed]

The NERVA/Rover project was eventually cancelled in 1972 with the general wind-down of NASA in the post-Apollo era. Without a human mission to Mars, the need for a nuclear thermal rocket is unclear. Another problem would be public concerns about safety and radioactive contamination.

Kiwi-TNT destructive test[edit]

In January 1965, the U.S. Rover program intentionally modified a Kiwi reactor (KIWI-TNT) to go prompt critical, resulting in an immediate destruction of the reactor pressure vessel, nozzle, and fuel assemblies. Intended to simulate a worst-case scenario of a fall from altitude into the ocean, such as might occur in a booster failure after launch, the resulting release of radiation would have caused fatalities out to 183 m (600 ft) and injuries out to 610 m (2,000 ft). The reactor was positioned on a railroad car in the Jackass Flats area of the Nevada Test Site.[25]

United Kingdom[edit]

As of January 2012, the propulsion group for Project Icarus was studying an NTR propulsion system.[26]


https://en.wikipedia.org/wiki/Nuclear_thermal_rocket



Vacuum systems using a vacuum pump (left) and a venturi (right)

Variations[edit]

Some more expensive heading indicators are "slaved" to a magnetic sensor, called a flux gate. The flux gate continuously senses the Earth's magnetic field, and a servo mechanism constantly corrects the heading indicator.[1] These "slaved gyros" reduce pilot workload by eliminating the need for manual realignment every ten to fifteen minutes.

The prediction of drift, in degrees per hour, is as follows:

  • Earth rate: 15 × sin(operating latitude); sign in the Northern hemisphere − (causing an under-read), in the Southern hemisphere + (causing an over-read).
  • Latitude nut: 15 × sin(latitude of setting); sign +.
  • Transport wander, East: east ground speed component (or sin(track angle) × ground speed, or change in longitude / flight time in hours) × 1/60 × tan(operating latitude); sign +.
  • Transport wander, West: west ground speed component (or sin(track angle) × ground speed, or change in longitude / flight time in hours) × 1/60 × tan(operating latitude); sign +.
  • Real/random wander: as given in the aircraft operating manual; sign as given.

Although it is possible to predict the drift, there will be minor variations from this basic model, accounted for by gimbal error (operating the aircraft away from the local horizontal), among others. A common source of error here is the improper setting of the latitude nut (to the opposite hemisphere, for example). The table, however, allows one to gauge whether an indicator is behaving as expected, and as such is compared with the realignment corrections made with reference to the magnetic compass. Transport wander is an undesirable consequence of apparent drift.
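A short worked example of the Earth-rate term from the table above (the latitude is assumed):

import math

# Earth-rate drift of an uncorrected heading indicator: 15 * sin(latitude) deg/hr.
latitude = 52.0                                     # operating latitude [deg N] (assumed)
earth_rate = 15.0 * math.sin(math.radians(latitude))
print(f"Earth-rate drift at {latitude:.0f} deg N: {earth_rate:.1f} deg/hr")   # ~11.8 deg/hr (an under-read)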

See also[edit]


https://en.wikipedia.org/wiki/Heading_indicator
https://en.wikipedia.org/wiki/Compass
https://en.wikipedia.org/wiki/V_speeds
https://en.wikipedia.org/wiki/Variometer

https://en.wikipedia.org/wiki/Bending
https://en.wikipedia.org/wiki/Plasma_globe
https://en.wikipedia.org/wiki/Capillary_breakup_rheometry
https://en.wikipedia.org/wiki/Fibre-reinforced_plastic_tanks_and_vessels

https://en.wikipedia.org/wiki/Fluid_thread_breakup
https://en.wikipedia.org/wiki/Mercury-vapor_lamp
https://en.wikipedia.org/wiki/Gas-discharge_lamp
https://en.wikipedia.org/wiki/Uranium_glass

https://en.wikipedia.org/wiki/Glass
https://en.wikipedia.org/wiki/Mirror

Contemporary technologies[edit]

Currently mirrors are often produced by the wet deposition of silver, or sometimes nickel or chromium (the latter used most often in automotive mirrors) via electroplating directly onto the glass substrate.[26]

Glass mirrors for optical instruments are usually produced by vacuum deposition methods. These techniques can be traced to observations in the 1920s and 1930s that metal was being ejected from electrodes in gas discharge lamps and condensed on the glass walls forming a mirror-like coating. The phenomenon, called sputtering, was developed into an industrial metal-coating method with the development of semiconductor technology in the 1970s.

A similar phenomenon had been observed with incandescent light bulbs: the metal in the hot filament would slowly sublimate and condense on the bulb's walls. This phenomenon was developed into the method of evaporation coating by Pohl and Pringsheim in 1912.  John D. Strong used evaporation coating to make the first aluminum-coated telescope mirrors in the 1930s.[27] The first dielectric mirror was created in 1937 by Auwarter using evaporated rhodium.[15]

The metal coating of glass mirrors is usually protected from abrasion and corrosion by a layer of paint applied over it. Mirrors for optical instruments often have the metal layer on the front face, so that the light does not have to cross the glass twice. In these mirrors, the metal may be protected by a thin transparent coating of a non-metallic (dielectric) material. The first metallic mirror to be enhanced with a dielectric coating of silicon dioxide was created by Hass in 1937. In 1939 at the Schott Glass company, Walter Geffcken invented the first dielectric mirrors to use multilayer coatings.[15]

Burning mirrors[edit]

The Greeks in Classical Antiquity were familiar with the use of mirrors to concentrate light. Parabolic mirrors were described and studied by the mathematician Diocles in his work On Burning Mirrors.[28] Ptolemy conducted a number of experiments with curved polished iron mirrors,[2]: p.64  and discussed plane, convex spherical, and concave spherical mirrors in his Optics.[29]

Parabolic mirrors were also described by the Caliphate mathematician Ibn Sahl in the tenth century.[30] The scholar Ibn al-Haytham discussed concave and convex mirrors in both cylindrical and spherical geometries,[31] carried out a number of experiments with mirrors, and solved the problem of finding the point on a convex mirror at which a ray coming from one point is reflected to another point.[32]

Types of mirrors[edit]

A curved mirror at the Universum museum in Mexico City. The image splits between the convex and concave curves.
A large convex mirror. Distortions in the image increase with the viewing distance.

Mirrors can be classified in many ways; including by shape, support and reflective materials, manufacturing methods, and intended application.

By shape[edit]

Typical mirror shapes are planar, convex, and concave.

The surface of curved mirrors is often a part of a sphere. Mirrors that are meant to precisely concentrate parallel rays of light into a point are usually made in the shape of a paraboloid of revolution instead; they are used in telescopes (from radio waves to X-rays), in antennas to communicate with broadcast satellites, and in solar furnaces. A segmented mirror, consisting of multiple flat or curved mirrors, properly placed and oriented, may be used instead.

Mirrors that are intended to concentrate sunlight onto a long pipe may be shaped as a circular cylinder or a parabolic cylinder.[citation needed]

By structural material[edit]

The most common structural material for mirrors is glass, due to its transparency, ease of fabrication, rigidity, hardness, and ability to take a smooth finish.

Back-silvered mirrors[edit]

The most common mirrors consist of a plate of transparent glass, with a thin reflective layer on the back (the side opposite to the incident and reflected light) backed by a coating that protects that layer against abrasion, tarnishing, and corrosion. The glass is usually soda-lime glass, but lead glass may be used for decorative effects, and other transparent materials may be used for specific applications.[citation needed]

A plate of transparent plastic may be used instead of glass, for lighter weight or impact resistance. Alternatively, a flexible transparent plastic film may be bonded to the front and/or back surface of the mirror, to prevent injuries in case the mirror is broken. Lettering or decorative designs may be printed on the front face of the glass, or formed on the reflective layer. The front surface may have an anti-reflection coating.[citation needed]

Front-silvered mirrors[edit]

Mirrors which are reflective on the front surface (the same side as the incident and reflected light) may be made of any rigid material.[33] The supporting material does not necessarily need to be transparent, but telescope mirrors often use glass anyway. Often a protective transparent coating is added on top of the reflecting layer, to protect it against abrasion, tarnishing, and corrosion, or to absorb certain wavelengths.[citation needed]

Flexible mirrors[edit]

Thin flexible plastic mirrors are sometimes used for safety, since they cannot shatter or produce sharp flakes. Their flatness is achieved by stretching them on a rigid frame. These usually consist of a layer of evaporated aluminum between two thin layers of transparent plastic.[citation needed]

By reflective material[edit]

A dielectric mirror-stack works on the principle of thin-film interference. Each layer has a different refractive index, allowing each interface to produce a small amount of reflection. When the thickness of the layers is proportional to the chosen wavelength, the multiple reflections constructively interfere. Stacks may consist of a few to hundreds of individual coats.
A hot mirror used in a camera to reduce red eye

In common mirrors, the reflective layer is usually some metal like silver, tin, nickel, or chromium, deposited by a wet process; or aluminum,[26][34] deposited by sputtering or evaporation in vacuum. The reflective layer may also be made of one or more layers of transparent materials with suitable indices of refraction.
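As a rough numeric illustration of such dielectric layers, a common design rule makes each layer a quarter of the design wavelength thick optically, d = λ/(4n). The wavelength and refractive indices below are typical assumed values, not figures from the article.

# Quarter-wave layer thickness d = wavelength / (4 * n) for a dielectric stack.
design_wavelength_nm = 550  # green light (illustrative choice)
for material, n in [("high-index layer (e.g. TiO2, n ~ 2.4)", 2.4),
                    ("low-index layer (e.g. SiO2, n ~ 1.46)", 1.46)]:
    d = design_wavelength_nm / (4 * n)
    print(f"{material}: ~{d:.0f} nm per layer")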

The structural material may be a metal, in which case the reflecting layer may be just the surface of the same. Metal concave dishes are often used to reflect infrared light (such as in space heaters) or microwaves (as in satellite TV antennas). Liquid metal telescopes use a surface of liquid metal such as mercury.

Mirrors that reflect only part of the light, while transmitting some of the rest, can be made with very thin metal layers or suitable combinations of dielectric layers. They are typically used as beamsplitters. A dichroic mirror, in particular, has a surface that reflects certain wavelengths of light, while letting other wavelengths pass through. A cold mirror is a dichroic mirror that efficiently reflects the entire visible light spectrum while transmitting infrared wavelengths. A hot mirror is the opposite: it reflects infrared light while transmitting visible light. Dichroic mirrors are often used as filters to remove undesired components of the light in cameras and measuring instruments.

In X-ray telescopes, the X-rays reflect off a highly precise metal surface at almost grazing angles, and only a small fraction of the rays are reflected.[35] In flying relativistic mirrors conceived for X-ray lasers, the reflecting surface is a spherical shockwave (wake wave) created in a low-density plasma by a very intense laser-pulse, and moving at an extremely high velocity.[36]

Nonlinear optical mirrors[edit]

A phase-conjugating mirror uses nonlinear optics to reverse the phase difference between incident beams. Such mirrors may be used, for example, for coherent beam combination. Useful applications include self-guiding of laser beams and correction of atmospheric distortions in imaging systems.[37][38][39]

Physical principles[edit]

A mirror reflects light waves to the observer, preserving the wave's curvature and divergence, to form an image when focused through the lens of the eye. The angle of the impinging wave, as it traverses the mirror's surface, matches the angle of the reflected wave.

When a sufficiently narrow beam of light is reflected at a point of a surface, the surface's normal direction n will be the bisector of the angle formed by the two beams at that point. That is, the direction vector di towards the incident beam's source, the normal vector n, and the direction vector dr of the reflected beam will be coplanar, and the angle between di and n will be equal to the angle of incidence between n and dr, but of opposite sign.[40]

This property can be explained by the physics of an electromagnetic plane wave that is incident to a flat surface that is electrically conductive or where the speed of light changes abruptly, as between two materials with different indices of refraction.
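The same rule can be stated as a vector identity: the reflected direction equals the incident direction with its component along the surface normal reversed. A minimal sketch, with vector names of our own choosing:

import numpy as np

def reflect(d, n):
    """Reflect a direction d off a surface with normal n: r = d - 2(d·n)n."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

# A ray travelling down-right hits a horizontal mirror (normal pointing up)
incident = np.array([1.0, -1.0]) / np.sqrt(2)
print(reflect(incident, np.array([0.0, 1.0])))  # -> [0.707..., 0.707...]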

  • When parallel beams of light are reflected on a plane surface, the reflected rays will be parallel too.
  • If the reflecting surface is concave, the reflected beams will be convergent, at least to some extent and for some distance from the surface.
  • A convex mirror, on the other hand, will reflect parallel rays towards divergent directions.

More specifically, a concave parabolic mirror (whose surface is a part of a paraboloid of revolution) will reflect rays that are parallel to its axis into rays that pass through its focus. Conversely, a parabolic concave mirror will reflect any ray that comes from its focus towards a direction parallel to its axis. If a concave mirror surface is a part of a prolate ellipsoid, it will reflect any ray coming from one focus toward the other focus.[40]

A convex parabolic mirror, on the other hand, will reflect rays that are parallel to its axis into rays that seem to emanate from the focus of the surface, behind the mirror. Conversely, it will reflect incoming rays that converge toward that point into rays that are parallel to the axis. A convex mirror that is part of a prolate ellipsoid will reflect rays that converge towards one focus into divergent rays that seem to emanate from the other focus.[40]

Spherical mirrors do not reflect parallel rays to rays that converge to or diverge from a single point, or vice versa, due to spherical aberration. However, a spherical mirror whose diameter is sufficiently small compared to the sphere's radius will behave very similarly to a parabolic mirror whose axis goes through the mirror's center and the center of that sphere; so that spherical mirrors can substitute for parabolic ones in many applications.[40]
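A quick way to see how good that substitution is: compare the surface heights (sag) of a sphere and of the paraboloid with the same paraxial focal length over a small aperture. The radius and aperture below are illustrative values only.

import numpy as np

R = 2.0                      # sphere radius = 2 x focal length (illustrative)
r = np.linspace(0, 0.1, 5)   # aperture radii small compared with R, in metres
sag_sphere = R - np.sqrt(R**2 - r**2)   # height of the spherical surface
sag_parab = r**2 / (2 * R)              # height of the matching paraboloid
print(np.max(np.abs(sag_sphere - sag_parab)))  # a couple of micrometres at most here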

A similar aberration occurs with parabolic mirrors when the incident rays are parallel among themselves but not parallel to the mirror's axis, or are divergent from a point that is not the focus – as when trying to form an image of an object that is near the mirror or spans a wide angle as seen from it. However, this aberration can be sufficiently small if the object is sufficiently far from the mirror and spans a sufficiently small angle around its axis.[40]

https://en.wikipedia.org/wiki/Mirror


A waveguide is a structure that guides waves, such as electromagnetic waves or sound, with minimal loss of energy by restricting the transmission of energy to one direction. Without the physical constraint of a waveguide, wave intensities decrease according to the inverse square law as they expand into three dimensional space.

There are different types of waveguides for different types of waves. The original and most common[1] meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves.  Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining.

The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves, which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances.[2] Any cross-sectional shape of waveguide can support EM waves, but irregular shapes are difficult to analyse; commonly used waveguides are rectangular or circular in shape.
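As an example of this size–cutoff relationship, the lowest (TE10) mode of a rectangular metal waveguide cuts off at f = c/(2a), where a is the broad-wall width. The WR-90 dimension used below is a standard X-band size, included here only for illustration.

c = 299_792_458.0            # speed of light, m/s

def te10_cutoff_hz(broad_wall_m):
    """Cutoff frequency of the dominant TE10 mode: f_c = c / (2a)."""
    return c / (2 * broad_wall_m)

# WR-90 waveguide: broad wall a = 22.86 mm -> cutoff near 6.6 GHz
print(te10_cutoff_hz(0.02286) / 1e9, "GHz")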

https://en.wikipedia.org/wiki/Waveguide


The SOFAR channel (short for Sound Fixing and Ranging channel), or deep sound channel (DSC),[1] is a horizontal layer of water in the ocean at which depth the speed of sound is at its minimum. The SOFAR channel acts as a waveguide for sound, and low frequency sound waves within the channel may travel thousands of miles before dissipating. An example was reception of coded signals generated by the Navy chartered ocean surveillance vessel Cory Chouest off Heard Island, located in the southern Indian Ocean (between Africa, Australia and Antarctica), by hydrophones in portions of all five major ocean basins and as distant as the North Atlantic and North Pacific.[2][3][4][note 1]

This phenomenon is an important factor in ocean surveillance.[5][6][7] The deep sound channel was discovered and described independently by Maurice Ewing and J. Lamar Worzel at Columbia University and Leonid Brekhovskikh at the Lebedev Physics Institute in the 1940s.[8][9] In testing the concept in 1944, Ewing and Worzel hung a hydrophone from Saluda, a sailing vessel assigned to the Underwater Sound Laboratory, with a second ship setting off explosive charges up to 900 nmi (1,000 mi; 1,700 km) away.[10][11]
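The channelling behaviour follows from the sound-speed minimum at the channel axis. A common idealization is Munk's canonical sound-speed profile; the parameter values below are typical textbook figures, assumed here rather than taken from the article.

import numpy as np

def munk_sound_speed(z, z_axis=1300.0, B=1300.0, c_axis=1500.0, eps=0.00737):
    """Munk's canonical profile: c(z) = c_axis * (1 + eps*(eta + exp(-eta) - 1)),
    with eta = 2*(z - z_axis)/B. The minimum sits at the channel-axis depth."""
    eta = 2.0 * (z - z_axis) / B
    return c_axis * (1.0 + eps * (eta + np.exp(-eta) - 1.0))

depths = np.array([0.0, 500.0, 1300.0, 3000.0, 5000.0])   # metres
print(dict(zip(depths, np.round(munk_sound_speed(depths), 1))))  # minimum near 1300 m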

https://en.wikipedia.org/wiki/SOFAR_channel


In oceanography, a halocline (from Greek hals 'salt' and klinein 'to slope') is a cline, a subtype of chemocline caused by a strong, vertical salinity gradient within a body of water.[1] Because salinity (in concert with temperature) affects the density of seawater, it can play a role in its vertical stratification. Increasing salinity by one kg/m3 results in an increase of seawater density of around 0.7 kg/m3.
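A back-of-the-envelope check of that density sensitivity, using an assumed typical haline contraction coefficient and an illustrative salinity step across the halocline:

# Approximate density change across a halocline from the salinity step alone.
beta = 0.76e-3          # haline contraction coefficient, ~per (g/kg); assumed typical value
rho0 = 1025.0           # reference seawater density, kg/m^3
delta_salinity = 5.0    # salinity jump across the halocline, g/kg (illustrative)
print(rho0 * beta * delta_salinity, "kg/m^3 denser below the halocline")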

Other types of clines[edit]

https://en.wikipedia.org/wiki/Halocline


Osmotic power, salinity gradient power or blue energy is the energy available from the difference in the salt concentration between seawater and river water. Two practical methods for this are reverse electrodialysis (RED) and pressure retarded osmosis (PRO). Both processes rely on osmosis with membranes. The key waste product is brackish water. This byproduct is the result of natural forces that are being harnessed: the flow of fresh water into seas that are made up of salt water.

In 1954, Pattle[1] suggested that there was an untapped source of power when a river mixes with the sea, in terms of the lost osmotic pressure; however, it was not until the mid-1970s that a practical method of exploiting it, using selectively permeable membranes, was outlined by Loeb.[2]

The method of generating power by pressure retarded osmosis was invented by Prof. Sidney Loeb in 1973 at the Ben-Gurion University of the Negev, Beersheba, Israel.[3] The idea came to Prof. Loeb, in part, as he observed the Jordan River flowing into the Dead Sea. He wanted to harvest the energy of mixing of the two aqueous solutions (the Jordan River being one and the Dead Sea being the other) that was going to waste in this natural mixing process.[4] In 1977 Prof. Loeb invented a method of producing power by a reverse electrodialysis heat engine.[5]

The technologies have been confirmed in laboratory conditions. They are being developed into commercial use in the Netherlands (RED) and Norway (PRO). The cost of the membrane has been an obstacle. A new, lower cost membrane, based on an electrically modified polyethylene plastic, made it fit for potential commercial use.[6] Other methods have been proposed and are currently under development. Among them are a method based on electric double-layer capacitor technology[7] and a method based on vapor pressure differences.[8]
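To get a feel for the energy scale being harnessed, a van 't Hoff estimate of the osmotic pressure difference between typical seawater and fresh water can be sketched as follows; the concentration and temperature are assumed round numbers, not figures from the article.

# Van 't Hoff estimate of the osmotic pressure difference, seawater vs fresh water.
R = 8.314          # gas constant, J/(mol K)
T = 293.0          # temperature, K (~20 degC, assumed)
c_salt = 0.6e3     # dissolved NaCl, mol/m^3 (roughly 35 g/kg seawater, assumed)
i = 2              # van 't Hoff factor for fully dissociated NaCl
pressure_pa = i * c_salt * R * T
print(pressure_pa / 1e5, "bar")   # roughly 29 bar, i.e. over 200 m of water head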

https://en.wikipedia.org/wiki/Osmotic_power


In physics, the plane wave expansion expresses a plane wave as a linear combination of spherical waves:

e^{i\mathbf{k}\cdot\mathbf{r}} = \sum_{\ell=0}^{\infty} (2\ell+1)\, i^{\ell}\, j_\ell(kr)\, P_\ell(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}),

where i is the imaginary unit, k is the wave vector of magnitude k, r is the position vector of magnitude r, j_ℓ are the spherical Bessel functions of the first kind, and P_ℓ are the Legendre polynomials.

In the special case where k is aligned with the z-axis,

e^{ikr\cos\theta} = \sum_{\ell=0}^{\infty} (2\ell+1)\, i^{\ell}\, j_\ell(kr)\, P_\ell(\cos\theta),

where θ is the spherical polar angle of r.
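A quick numerical check of the axis-aligned form, truncating the sum at a finite number of terms (the cutoff and test values below are arbitrary):

import numpy as np
from scipy.special import spherical_jn, eval_legendre

# Check e^{i k r cos(theta)} against the truncated partial-wave sum.
kr = 5.0
theta = 0.7
lmax = 40

lhs = np.exp(1j * kr * np.cos(theta))
ells = np.arange(lmax + 1)
rhs = np.sum((2 * ells + 1) * (1j ** ells)
             * spherical_jn(ells, kr)
             * eval_legendre(ells, np.cos(theta)))

print(lhs, rhs)  # the two values agree to high precision for lmax well above kr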

https://en.wikipedia.org/wiki/Plane_wave_expansion


A transatlantic tunnel is a theoretical tunnel that would span the Atlantic Ocean between North America and Europe, possibly for such purposes as mass transit. Some proposals envision technologically advanced trains reaching speeds of 500 to 8,000 kilometres per hour (310 to 4,970 mph).[1] Most conceptions of the tunnel envision it between the United States and the United Kingdom ‒ or more specifically between New York City and London.

Advantages compared to air travel could include increased speed and the use of electricity instead of scarce oil-based fuel, considering a future long after peak oil.

The main barriers to constructing such a tunnel are cost, first estimated at $88–175 billion and later revised to $1–20 trillion, as well as the limits of current materials science.[2] Existing major tunnels, such as the Channel Tunnel, the Seikan Tunnel and the Gotthard Base Tunnel, despite using less expensive technology than any yet proposed for the transatlantic tunnel, may struggle financially.[citation needed]

Vactrain[edit]

A 1960s proposal envisioned a 3,100-mile-long (5,000 km) near-vacuum tube with vactrains, a theoretical type of maglev train, which could travel at speeds up to 5,000 miles per hour (8,000 km/h). At this speed, the travel time between New York City and London would be less than one hour. Another modern variation, intended to reduce costs, is a submerged floating tunnel about 160 feet (49 m) below the ocean surface, in order to avoid ships, bad weather, and the high pressure associated with a much deeper tunnel near the sea bed. It would consist of 54,000 prefabricated sections held in place by 100,000 tethering cables. Each section would consist of a layer of foam sandwiched between concentric steel tubes, and the tunnel would also have reduced air pressure.[1]

Jet propulsion[edit]

Ideas proposing rocket, jet, scramjet, and air-pressurized tunnels for train transportation have also been put forward. In the proposal described in an Extreme Engineering episode, trains would take 18 minutes to reach top speed, and 18 minutes at the end to come to a halt. During the deceleration phase, the resulting 0.2 g acceleration would lead to an unpleasant feeling of tilting downward, and it was proposed that the seats would individually rotate to face backwards at the midpoint of the journey, in order to make the deceleration more pleasant.[1]
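A quick consistency check of those figures, assuming constant acceleration up to the 5,000 mph top speed quoted above and a roughly 5,000 km tube length:

# Constant-acceleration sanity check for the figures in the Extreme Engineering proposal.
top_speed = 5000 * 0.44704           # 5,000 mph in m/s (~2,235 m/s)
t_accel = 18 * 60                    # 18 minutes in seconds
a = top_speed / t_accel
print(a, "m/s^2 ~", a / 9.81, "g")   # about 2.1 m/s^2, i.e. roughly 0.2 g

# Distance covered while speeding up (and again while slowing down)
d_accel = 0.5 * a * t_accel**2
d_total = 5000e3                     # ~5,000 km tube length
t_cruise = (d_total - 2 * d_accel) / top_speed
print((2 * t_accel + t_cruise) / 60, "minutes door to door")  # just under an hour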

History[edit]

Early interest[edit]

Suggestions for such a structure go back to Michel Verne, son of Jules Verne, who wrote about it in 1888 in a story entitled Un Express de l'avenir (An Express of the Future). This story was published in English in Strand Magazine in 1895, where it was incorrectly attributed to Jules Verne,[3] a mistake frequently repeated today.[4] 1913 saw the publication of the novel Der Tunnel by German author Bernhard Kellermann. It inspired four films of the same name: one in 1915 by William Wauer, and separate German, French, and British versions released in 1933 and 1935. The German and French versions were directed by Curtis Bernhardt, and the British one was written in part by science fiction writer Curt Siodmak. Perhaps suggesting contemporary interest in the topic, an original poster for the American release of the British version (renamed Transatlantic Tunnel) was, in 2006, estimated for auction at $2,000–3,000.[5]

Modern research[edit]

Robert H. Goddard, the father of rocketry,[6][7] was issued two of his 214 patents for the idea.[4] Arthur C. Clarke mentioned intercontinental tunnels in his 1946 short story Rescue Party and again in his 1956 novel The City and the Stars. Harry Harrison's 1975 novel Tunnel Through the Deeps (also published as A Transatlantic Tunnel, Hurrah!) describes a vacuum/maglev system on the ocean floor.[8] The April 2004 issue of Popular Science suggests that a transatlantic tunnel is more feasible than previously thought, and without major engineering challenges. It compares it favorably with laying transatlantic pipes and cables, but with a cost of 88 to 175 billion dollars.[2] In 2003, the Discovery Channel's show Extreme Engineering aired a program, titled "Transatlantic Tunnel",[1] which discussed the proposed tunnel concept in detail.

https://en.wikipedia.org/wiki/Transatlantic_tunnel

Equipment and types of freeze dryers[edit]

Unloading trays of freeze-dried material from a small cabinet-type freeze-dryer

There are many types of freeze-dryers available; however, they usually contain a few essential components: a vacuum chamber,[2] shelves, a process condenser, a shelf-fluid system, a refrigeration system, a vacuum system, and a control system.

Function of essential components[edit]

Chamber[edit]

The chamber is highly polished and internally insulated. It is manufactured from stainless steel and contains multiple shelves for holding the product.[citation needed] A hydraulic or electric motor ensures that the door is vacuum-tight when closed.

Process condenser[edit]

The process condenser consists of refrigerated coils or plates that can be external or internal to the chamber.[31] During the drying process, the condenser traps water. For increased efficiency, the condenser temperature should be about 20 °C (36 °F) lower than the product temperature during primary drying,[31] and the condenser should have a defrosting mechanism to ensure that the maximum amount of water vapor in the air is condensed.

Shelf fluid[edit]

The amount of heat energy needed during the primary and secondary drying phases is regulated by an external heat exchanger.[31] Usually, silicone oil is circulated through the system with a pump.

Refrigeration system[edit]

This system works to cool shelves and the process condenser by using compressors or liquid nitrogen, which will supply energy necessary for the product to freeze.[31]

Vacuum system[edit]

During the drying process, the vacuum system applies a vacuum of 50–100 microbar to remove the solvent.[31] A two-stage rotary vacuum pump is used; if the chamber is large, multiple pumps are needed. This system compresses non-condensable gases through the condenser.

Control system[edit]

Finally, the control system sets controlled values for shelf temperature, pressure, and time, which depend on the product and/or the process.[32][33] The freeze-dryer can run for a few hours or days depending on the product.[31]
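As a sketch of what such a set of controlled values might look like in software, the step names and setpoints below are entirely hypothetical and not taken from any particular freeze-dryer:

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    shelf_temp_c: float      # shelf-fluid temperature setpoint
    pressure_ubar: float     # chamber pressure setpoint
    hours: float             # hold time

# Hypothetical cycle: freeze at ambient pressure, then dry under vacuum.
recipe = [
    Step("freeze",           -45.0, 1_013_000.0, 4.0),
    Step("primary drying",   -20.0,        80.0, 24.0),
    Step("secondary drying",  25.0,        50.0, 6.0),
]

for step in recipe:
    print(f"{step.name}: shelf {step.shelf_temp_c} degC, "
          f"{step.pressure_ubar} microbar, hold {step.hours} h")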

Contact freeze dryers[edit]

Contact freeze dryers use contact (conduction) of the food with the heating element to supply the sublimation energy. This type of freeze dryer is a basic model that is simple to set up for sample analysis. One of the major ways contact freeze dryers heat is with shelf-like platforms contacting the samples. The shelves play a major role as they behave like heat exchangers at different times of the freeze-drying process. They are connected to a silicone oil system that will remove heat energy during freezing and provide energy during drying times.[31]

Additionally, the shelf-fluid system works to provide specific temperatures to the shelves during drying by pumping a fluid (usually silicone oil) at low pressure. The downside to this type of freeze dryer is that the heat is only transferred from the heating element to the side of the sample immediately touching the heater. This problem can be minimized by maximizing the surface area of the sample touching the heating element by using a ribbed tray, slightly compressing the sample between two solid heated plates above and below, or compressing with a heated mesh from above and below.[2]

Radiant freeze dryers[edit]

Radiant freeze dryers use infrared radiation to heat the sample in the tray. This type of heating allows for simple flat trays to be used as an infrared source can be located above the flat trays to radiate downwards onto the product. Infrared radiation heating allows for a very uniform heating of the surface of the product, but has very little capacity for penetration so it is used mostly with very shallow trays and homogeneous sample matrices.[2]

Microwave-assisted freeze dryers[edit]

Microwave-assisted freeze dryers utilize microwaves to allow for deeper penetration into the sample, expediting the sublimation and heating processes in freeze-drying. This method can be very complicated to set up and run, as the microwaves can create an electrical field capable of causing gases in the sample chamber to become plasma. This plasma could potentially burn the sample, so maintaining a microwave strength appropriate for the vacuum levels is imperative. The rate of sublimation in a product can affect the microwave impedance, so the microwave power must be adjusted accordingly.[2]


https://en.wikipedia.org/wiki/Freeze-drying#Equipment_and_types_of_freeze_dryers


The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the universe, also known as "relic radiation".[1] The CMB is faint cosmic background radiation filling all space. It is an important source of data on the early universe because it is the oldest electromagnetic radiation in the universe, dating to the epoch of recombination. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background noise, or glow, almost isotropic, that is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson[2][3] was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize in Physics.

CMB is landmark evidence of the Big Bang origin of the universe. When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with an opaque fog of hydrogen plasma. As the universe expanded the plasma grew cooler and the radiation filling it expanded to longer wavelengths. When the temperature had dropped enough, protons and electrons combined to form neutral hydrogen atoms. Unlike the plasma, these newly conceived atoms could not scatter the thermal radiation by Thomson scattering, and so the universe became transparent.[4] Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term relic radiation. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.
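One way to see why the glow is strongest in the microwave band: applying Wien's displacement law to the present-day CMB temperature of about 2.725 K (a standard measured value, not quoted in the text above) places the spectral peak near a millimetre wavelength.

# Wien's displacement law: peak wavelength = b / T.
b = 2.897771955e-3      # Wien's displacement constant, m*K
T_cmb = 2.725           # present-day CMB temperature, K (standard value, assumed here)
print(b / T_cmb * 1e3, "mm")   # ~1.06 mm, i.e. microwave/far-infrared territory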

https://en.wikipedia.org/wiki/Cosmic_microwave_background

