Energy price
The following articles relate to the price of energy:
https://en.wikipedia.org/wiki/Energy_price
Mine
- Miner, a person engaged in mining or digging
- Mining, extraction of mineral resources from the ground through a mine
Grammar
- Mine, a first-person English possessive pronoun
Military
- Anti-tank mine, a land mine made for use against armored vehicles
- Antipersonnel mine, a land mine designed for use against people rather than vehicles
- Bangalore mine, colloquial name for the Bangalore torpedo, a man-portable explosive device for clearing a path through wire obstacles and land mines
- Cluster bomb, an aerial bomb which releases many small submunitions, which often act as mines
- Land mine, explosive mines placed under or on the ground
- Mining (military), digging under a fortified military position to penetrate its defenses
- Naval mine, or sea mine, a mine at sea, either floating or on the sea bed, often dropped via parachute from aircraft, or otherwise laid by surface ships or submarines
- Parachute mine, an air-dropped "sea mine" falling gently under a parachute, used as a high-capacity cheaply-cased large bomb against ground targets
Places
- The Mine, Queensland, a locality in the Rockhampton Region, Australia
- Mine, Saga, a Japanese town
- Mine, Yamaguchi, a Japanese city
- Mine District, Yamaguchi, a former district in the area of the city
People
Given name
- Mine Ercan (born 1978), Turkish women's wheelchair basketball player
- Mine Guri, Albanian communist politician
- Miné Okubo (1912–2001), American artist and writer
Nickname
- Mine Boy, nickname of Alex Levinsky (1910–1990), NHL hockey player
Surname
- Kazuki Mine (born 1993), Japanese football player
- George Ralph Mines (1886–1914), English cardiac electrophysiologist
Arts, entertainment, and media
Films
- Mine (1985 film), a Turkish film
- Mine (2009 film), an American documentary film
- Mine (2016 film), an Italian-American film
- Abandoned Mine or The Mine, a 2013 horror film
- The Mine (1978 film), Turkish film
Literature
- Mine (novel), a 1990 novel by Robert R. McCammon
- The Mine (novel), 2012 novel by Arnab Ray
Music
Albums
- Mine (Kim Jaejoong EP), 2013
- Mine (Dolly Parton album), 1973
- Mine (Phoebe Ryan EP), 2015
- Mine (Li Yuchun album), 2007
- Mines (album), a 2010 album by indie rock band Menomena
- Mine!, a 1994 album released by musical duo Trout Fishing in America
Songs
- "Mine" (The 1975 song), 2018
- "Mine" (Bazzi song), 2017
- "Mine" (Beyoncé song), 2013
- "Mine" (Kelly Clarkson song), 2023
- "Mine" (Alice Glass song), 2018
- "Mine" (Taylor Swift song), 2010
- "Mine", a song from the 1933 Broadway musical Let 'Em Eat Cake
- "Mine", a song by Bebe Rexha from the album Expectations
- "Mine", a song by Dolly Parton from In the Good Old Days (When Times Were Bad)
- "Mine", a song by Everything but the Girl from Everything but the Girl
- "Mine", a song by Ghinzu from Blow
- "Mine", a song by Jason Webley from Only Just Beginning
- "Mine", a song by Krezip from Days Like This
- "Mine", a song by Mustasch from Mustasch
- "Mine", a song by Savage Garden from Savage Garden
- "Mine", a song by Sepultura from Roots
- "Mine", a song by Taproot from Welcome
- "Mine", a song by Christina Perri from lovestrong
- "Mine", a song by Disturbed from The Lost Children
- "Mine", a song by M.I from the album The Chairman
- "#Mine", a song by Lil' Kim from Lil Kim Season
- "Mine, Mine, Mine", a song from the soundtrack for the 1995 Disney film Pocahontas
Television
- Mine (2021 TV series), a South Korean television series
Organizations and enterprises
- MINE, a design office in San Francisco, US, of which Christopher Simmons is principal creative director
- Colorado School of Mines or "Mines", a university in Golden, Colorado, US
- Mine's, a Japanese auto tuning company
- South Dakota School of Mines and Technology, a university in Rapid City, South Dakota, US
Science and technology
- MINE (chemotherapy), a chemotherapy regimen
- Mine or star mine, a type of fireworks
- MinE, a bacterial protein
- Data mining, the computational process of discovering patterns in large data sets
- Leaf mine, a tunnel or chamber left inside a leaf by a mining insect larva
- Mina (unit), or mine, an ancient Greek unit of mass
https://en.wikipedia.org/wiki/Mine
Transmutation of species and transformism are unproven 18th- and 19th-century evolutionary ideas about the change of one species into another that preceded Charles Darwin's theory of natural selection.[1] Transformisme was the French term used by Jean-Baptiste Lamarck in 1809 for his theory; other 18th- and 19th-century proponents of pre-Darwinian evolutionary ideas included Denis Diderot, Étienne Geoffroy Saint-Hilaire, Erasmus Darwin, Robert Grant, and Robert Chambers, the anonymous author of the book Vestiges of the Natural History of Creation. Opposition in the scientific community to these early theories of evolution, led by influential scientists such as the anatomists Georges Cuvier and Richard Owen and the geologist Charles Lyell, was intense. The debate over them was an important stage in the history of evolutionary thought and influenced the subsequent reaction to Darwin's theory.
Terminology
Transmutation was one of the names commonly used for evolutionary ideas in the 19th century before Charles Darwin published On the Origin of Species (1859). Transmutation had previously been used as a term in alchemy to describe the transformation of base metals into gold. Other names for evolutionary ideas used in this period include the development hypothesis (one of the terms used by Darwin) and the theory of regular gradation, used by William Chilton in periodicals such as The Oracle of Reason.[2] Transformation was used about as often as transmutation in this context. These early 19th-century evolutionary ideas played an important role in the history of evolutionary thought.
The proto-evolutionary thinkers of the 18th and early 19th centuries had to invent terms to label their ideas, and Joseph Gottlieb Kölreuter was the first to use the term "transmutation" to refer to species that had undergone biological changes through hybridization.[3]
The terminology did not settle down until some time after the publication of the Origin of Species. The word "evolved" in a modern sense was first used in 1826, in an anonymous paper published in Robert Jameson's journal. "Evolution" itself was a relative latecomer: it appears in Herbert Spencer's Social Statics of 1851,[a] and in at least one earlier example, but was not in general use until about 1865–70.
Historical development
In the 10th and 11th centuries, Ibn Miskawayh's Al-Fawz al-Kabir (الفوز الأكبر), and the Brethren of Purity's Encyclopedia of the Brethren of Purity (رسائل إخوان الصفا) developed ideas about changes in biological species.[5] In 1993, Muhammad Hamidullah described the ideas in lectures:
[These books] state that God first created matter and invested it with energy for development. Matter, therefore, adopted the form of vapour which assumed the shape of water in due time. The next stage of development was mineral life. Different kinds of stones developed in course of time. Their highest form being mirjan (coral). It is a stone which has in it branches like those of a tree. After mineral life evolves vegetation. The evolution of vegetation culminates with a tree which bears the qualities of an animal. This is the date-palm. It has male and female genders. It does not wither if all its branches are chopped but it dies when the head is cut off. The date-palm is therefore considered the highest among the trees and resembles the lowest among animals. Then is born the lowest of animals. It evolves into an ape. This is not the statement of Darwin. This is what Ibn Maskawayh states and this is precisely what is written in the Epistles of Ikhwan al-Safa. The Muslim thinkers state that ape then evolved into a lower kind of a barbarian man. He then became a superior human being. Man becomes a saint, a prophet. He evolves into a higher stage and becomes an angel. The one higher to angels is indeed none but God. Everything begins from Him and everything returns to Him.[6]
In the 14th century, Ibn Khaldun further developed these ideas. According to some commentators,[who?] statements in his 1377 work, the Muqaddimah anticipate the biological theory of evolution.[7]
Robert Hooke proposed in a speech to the Royal Society in the late 17th century that species vary, change, and especially become extinct. His “Discourse of Earthquakes” was based on comparisons made between fossils, especially the modern pearly nautilus and the curled shells of ammonites.[8]
In the 18th century, Jacques-Antoine des Bureaux claimed a “genealogical ascent of species”. He argued that through crossbreeding and hybridization in reproduction, “progressive organization” occurred, allowing organisms to change and more complex species to develop.[9]
Simultaneously, Retif de la Bretonne wrote La decouverte australe par un homme-volant (1781) and La philosophie de monsieur Nicolas (1796), which encapsulated his view that more complex species, such as mankind, had developed step by step from "less perfect" animals. De la Bretonne believed that living forms undergo constant change, but he took a very different approach from Diderot's: chance and blind combinations of atoms were not, in de la Bretonne's opinion, the cause of transmutation. He argued instead that all species had developed from more primitive organisms and that nature aimed to reach perfection.[9]
Denis Diderot, chief editor of the Encyclopedia, spent his time poring over scientific theories attempting to explain rock strata and the diversity of fossils. Geological and fossil evidence was presented to him as contributions to Encyclopedia articles, chief among them "Mammoth," "Fossil," and "Ivory Fossil," all of which noted the existence of mammoth bones in Siberia. As a result of this geological and fossil evidence, Diderot believed that species were mutable. In particular, he argued that organisms metamorphosed over millennia, resulting in species changes. In Diderot's theory of transformationism, random chance plays a large role in allowing species to change, develop and become extinct, as well as in having new species form. Specifically, Diderot believed that given randomness and unlimited time, all possible scenarios would manifest themselves. He proposed that this randomness was behind the development of new traits in offspring and, as a result, the development and extinction of species.[8][10]
Diderot drew from Leonardo da Vinci’s comparison of the leg structure of a human and a horse as proof of the interconnectivity of species. He saw this experiment as demonstrating that nature could continually try out new variations. Additionally, Diderot argued that organic molecules and organic matter possessed an inherent consciousness, which allowed the smallest particles of organic matter to organize into fibers, then a network, and then organs. The idea that organic molecules have consciousness was derived from both Maupertuis and Lucretian texts. Overall, Diderot’s musings all fit together as a “composite transformist philosophy,” one dependent on the randomness inherent to nature as a transformist mechanism.[8][10]
Erasmus Darwin developed a theory of universal transformation. His major works, The Botanic Garden (1792), Zoonomia (1794–96), and The Temple of Nature all touched on the transformation of organic creatures. In both The Botanic Garden and The Temple of Nature, Darwin used poetry to describe his ideas regarding species. In Zoonomia, however, Erasmus clearly articulates (as a more scientific text) his beliefs about the connections between organic life. He notes particularly that some plants and animals have “useless appendages,” which have gradually changed from their original, useful states. Additionally, Darwin relied on cosmological transformation as a crucial aspect of his theory of transformation, making a connection between William Herschel’s approach to natural historical cosmology and the changing aspects of plants and animals.[10][11][12]
Erasmus believed that life had one origin, a common ancestor, which he referred to as the "filament" of life. He used his understanding of chemical transmutation to justify the spontaneous generation of this filament. His geological study of Derbyshire and the seashells and fossils which he found there helped him to come to the conclusion that complex life had developed from more primitive forms (Laniel-Musitelli). Erasmus was an early proponent of what we now refer to as "adaptations," albeit through a different transformist mechanism – he argued that sexual reproduction could pass on acquired traits through the father's contribution to the embryon. These changes, he believed, were mainly driven by the three great needs of life: lust, food, and security. Erasmus proposed that these acquired changes gradually altered the physical makeup of organisms as a result of the desires of plants and animals. Notably, he describes insects developing from plants, a grand example of one species transforming into another.[10][11][12]
Erasmus Darwin relied on Lucretian philosophy to form a theory of universal change. He proposed that both organic and inorganic matter changed throughout the course of the universe, and that plants and animals could pass on acquired traits to their progeny. His view of universal transformation placed time as a driving force in the universe’s journey towards improvement. In addition, Erasmus believed that nature had some amount of agency in this inheritance. Darwin spun his own story of how nature began to develop from the ocean, and then slowly became more diverse and more complex. His transmutation theory relied heavily on the needs which drove animal competition, as well as the results of this contest between both animals and plants.[10][11][12]
Charles Darwin acknowledged his grandfather's contribution to the field of transmutation in his synopsis of Erasmus' life, The Life of Erasmus Darwin. Darwin collaborated with Ernst Krause, writing a foreword to Krause's Erasmus Darwin und Seine Stellung in Der Geschichte Der Descendenz-Theorie, which translates as Erasmus Darwin and His Place in the History of the Descent Theory. Krause explains Erasmus' motivations for arguing for the theory of descent, including his connection with and correspondence with Rousseau, which may have influenced how he saw the world.[13]
Jean-Baptiste Lamarck proposed a hypothesis on the transmutation of species in Philosophie Zoologique (1809). Lamarck did not believe that all living things shared a common ancestor. Rather he believed that simple forms of life were created continuously by spontaneous generation. He also believed that an innate life force, which he sometimes described as a nervous fluid, drove species to become more complex over time, advancing up a linear ladder of complexity that was related to the great chain of being. Lamarck also recognized that species were adapted to their environment. He explained this observation by saying that the same nervous fluid driving increasing complexity, also caused the organs of an animal (or a plant) to change based on the use or disuse of that organ, just as muscles are affected by exercise. He argued that these changes would be inherited by the next generation and produce slow adaptation to the environment. It was this secondary mechanism of adaptation through the inheritance of acquired characteristics that became closely associated with his name and would influence discussions of evolution into the 20th century.[14][15]
The German Abraham Gottlob Werner believed in geological transformism; specifically, Werner argued that the Earth undergoes irreversible and continuous change. The Edinburgh school, a radical British school of comparative anatomy, fostered considerable debate around natural history.[16][17] The school, which included the surgeon Robert Knox and the anatomist Robert Grant, was closely in touch with Lamarck's school of French transformationism, which included scientists such as Étienne Geoffroy Saint-Hilaire. Grant developed Lamarck's and Erasmus Darwin's ideas of transmutation and evolutionism, investigating homology to prove common descent. As a young student, Charles Darwin joined Grant in investigations of the life cycle of marine animals. He also studied geology under professor Robert Jameson, whose journal published an anonymous paper in 1826 praising "Mr. Lamarck" for explaining how the higher animals had "evolved" from the "simplest worms" – this was the first use of the word "evolved" in a modern sense. Jameson was a Wernerian, which allowed him to consider transformation theories and foster interest in transformism among his students.[16][17] Jameson's course closed with lectures on the "Origin of the Species of Animals".[18][19]
The computing pioneer Charles Babbage published his unofficial Ninth Bridgewater Treatise in 1837, putting forward the thesis that God had the omnipotence and foresight to create as a divine legislator, making laws (or programs) which then produced species at the appropriate times, rather than continually interfering with ad hoc miracles each time a new species was required. In 1844 the Scottish publisher Robert Chambers anonymously published an influential and extremely controversial book of popular science entitled Vestiges of the Natural History of Creation. This book proposed an evolutionary scenario for the origins of the solar system and life on earth. It claimed that the fossil record showed an ascent of animals with current animals being branches off a main line that leads progressively to humanity. It implied that the transmutations led to the unfolding of a preordained orthogenetic plan woven into the laws that governed the universe. In this sense it was less completely materialistic than the ideas of radicals like Robert Grant, but its implication that humans were just the last step in the ascent of animal life incensed many conservative thinkers. Both conservatives like Adam Sedgwick, and radical materialists like Thomas Henry Huxley, who disliked Chambers' implications of preordained progress, were able to find scientific inaccuracies in the book that they could disparage. Darwin himself openly deplored the author's "poverty of intellect", and dismissed it as a "literary curiosity." However, the high profile of the public debate over Vestiges, with its depiction of evolution as a progressive process, and its popular success, would greatly influence the perception of Darwin's theory a decade later.[20][21][22] It also influenced some younger naturalists, including Alfred Russel Wallace, to take an interest in the idea of transmutation.[23]
Opposition to transmutation
Ideas about the transmutation of species were strongly associated with the radical materialism of the Enlightenment and were greeted with hostility by more conservative thinkers. Cuvier attacked the ideas of Lamarck and Geoffroy Saint-Hilaire, agreeing with Aristotle that species were immutable. Cuvier believed that the individual parts of an animal were too closely correlated with one another to allow for one part of the anatomy to change in isolation from the others, and argued that the fossil record showed patterns of catastrophic extinctions followed by re-population, rather than gradual change over time. He also noted that drawings of animals and animal mummies from Egypt, which were thousands of years old, showed no signs of change when compared with modern animals. The strength of Cuvier's arguments and his reputation as a leading scientist helped keep transmutational ideas out of the scientific mainstream for decades.[24]
In Britain, where the philosophy of natural theology remained influential, William Paley wrote the book Natural Theology with its famous watchmaker analogy, at least in part as a response to the transmutational ideas of Erasmus Darwin.[25] Geologists influenced by natural theology, such as Buckland and Sedgwick, made a regular practice of attacking the evolutionary ideas of Lamarck and Grant, and Sedgwick wrote a famously harsh review of The Vestiges of the Natural History of Creation.[26][27] Although the geologist Charles Lyell opposed scriptural geology, he also believed in the immutability of species, and in his Principles of Geology (1830–1833) he criticized and dismissed Lamarck's theories of development. Instead, he advocated a form of progressive creation, in which each species had its "centre of creation" and was designed for this particular habitat, but would go extinct when this habitat changed.[19]
Another source of opposition to transmutation was a school of naturalists who were influenced by the German philosophers and naturalists associated with idealism, such as Goethe, Hegel and Lorenz Oken. Idealists such as Louis Agassiz and Richard Owen believed that each species was fixed and unchangeable because it represented an idea in the mind of the creator. They believed that relationships between species could be discerned from developmental patterns in embryology, as well as in the fossil record, but that these relationships represented an underlying pattern of divine thought, with progressive creation leading to increasing complexity and culminating in humanity. Owen developed the idea of "archetypes" in the divine mind that would produce a sequence of species related by anatomical homologies, such as vertebrate limbs. Owen was concerned by the political implications of the ideas of transmutationists like Robert Grant, and he led a public campaign by conservatives that successfully marginalized Grant in the scientific community. In his famous 1841 paper, which coined the term dinosaur for the giant reptiles discovered by Buckland and Gideon Mantell, Owen argued that these reptiles contradicted the transmutational ideas of Lamarck because they were more sophisticated than the reptiles of the modern world. Darwin would make good use of the homologies analyzed by Owen in his own theory, but the harsh treatment of Grant, along with the controversy surrounding Vestiges, would be factors in his decision to ensure that his theory was fully supported by facts and arguments before publishing his ideas.[19][28][29]
Notes
- There are three examples of the word 'evolution' in Social Statics, but none in the sense that is used today in biology.[4]
References
- (van Wyhe 2007, pp. 181–182)
Bibliography
- Bowler, Peter J. (2003). Evolution:The History of an Idea. University of California Press. ISBN 0-520-23693-9.
- Desmond, Adrian; Moore, James (1994). Darwin: The Life of a Tormented Evolutionist. W. W. Norton & Company. ISBN 0-393-31150-3.
- Bowler, Peter J.; Morus, Iwan Rhys (2005). Making Modern Science. The University of Chicago Press. ISBN 0-226-06861-7.
- Larson, Edward J. (2004). Evolution:The Remarkable History of Scientific Theory. Modern Library. ISBN 0-679-64288-9.
- Secord, James A. (2000). Victorian Sensation: The Extraordinary Publication, Reception, and Secret Authorship of the Vestiges of the Natural History of Creation. Chicago: University of Chicago Press.
- van Wyhe, John (27 March 2007). "Mind the gap: Did Darwin avoid publishing his theory for many years?" (PDF). Notes and Records of the Royal Society. 61 (2): 177–205. doi:10.1098/rsnr.2006.0171. S2CID 202574857.
https://en.wikipedia.org/wiki/Transmutation_of_species
A transborder agglomeration is an urban agglomeration or conurbation that extends into multiple sovereign states and/or dependent territories. It includes city-states that agglomerate with their neighbouring countries.
List of transborder agglomerations
Africa
Main municipalities | Countries | Total population |
---|---|---|
Kinshasa–Brazzaville | Democratic Republic of the Congo–Republic of the Congo | 19,547,463 |
Bukavu–Cyangugu | Democratic Republic of the Congo–Rwanda | ~1.2 million[1][2] |
Goma–Gisenyi | Democratic Republic of the Congo–Rwanda | 756,323 |
Lomé–Aflao | Togo–Ghana | 1,544,206 |
N'djamena–Kousséri | Chad–Cameroon | 1,694,819 |
Asia
Main municipalities | Countries | Total population |
---|---|---|
Astara–Astara | Iran–Azerbaijan | 100,000 |
Basra–Khorramshahr–Abadan | Iraq–Iran | 2,000,000 |
Blagoveshchensk–Heihe | Russia–China | 400,000 |
Dandong–Sinuiju | China–North Korea | 1,000,000 |
Dhahran–Jubail–Manama | Saudi Arabia–Bahrain | 2,000,000 |
Fergana–Kyzyl-Kiya | Uzbekistan–Kyrgyzstan | 300,000 |
Sijori Growth Triangle (Singapore–Johor Bahru–Batam–Bintan) | Singapore–Malaysia–Indonesia | 9,000,000 |
Greater Bay Area (Guangzhou–Dongguan–Shenzhen–Hong Kong–Macau) | China–Hong Kong–Macau | 71,200,000 |
Kara-Suu–Qorasuv | Kyrgyzstan–Uzbekistan | 40,000 |
Lahore–Amritsar | Pakistan–India | 12,000,000 |
Mukdahan–Savannakhet | Thailand–Laos | 200,000 |
Padang Besar–Padang Besar | Malaysia–Thailand | 20,000 |
Tashkent–Saryagash | Uzbekistan–Kazakhstan | 3,000,000 [3] |
Vientiane–Nong Khai | Laos–Thailand | 900,000 |
Europe
North America
Main municipalities | Countries | Total population |
---|---|---|
Paso Canoas | Costa Rica–Panama | |
Metro Vancouver–Fraser Valley–Bellingham | Canada–United States | 3,050,000 |
Detroit–Windsor | United States–Canada | 5,976,595 |
Buffalo–Niagara Region | United States–Canada | 1,614,790 |
Port Huron–Sarnia–Point Edward | United States–Canada | 105,776 |
Sault Ste. Marie–Sault Ste. Marie | United States–Canada | 93,944 |
San Diego–Tijuana | United States–Mexico | 5,105,769[6] |
Mexicali–Calexico | Mexico–United States | 1,143,000 |
San Luis Río Colorado–San Luis | Mexico–United States | 227,000 |
Nogales–Nogales | United States–Mexico | 240,000 |
El Paso–Juárez | United States–Mexico | 2,500,000[7] |
Laredo–Nuevo Laredo | United States–Mexico | 775,481[8] |
Reynosa–McAllen | Mexico–United States | 1,500,000[9] |
Brownsville–Matamoros | United States–Mexico | 1,136,995[10] |
South America
Main municipalities | Countries | Total population |
---|---|---|
Saint-Laurent-du-Maroni–Albina | French Guiana (France)–Suriname | |
Saint Georges–Oiapoque | French Guiana (France)–Brazil | 32,000 |
Santana do Livramento–Rivera | Brazil–Uruguay | 140,000 |
Chuí–Chuy | Brazil–Uruguay | 15,592[11][12] |
Corumbá–Puerto Suárez | Brazil–Bolivia | 124,000 |
Leticia–Tabatinga–Santa Rosa | Colombia–Brazil–Peru | 107,000 |
Ponta Porã–Pedro Juan Caballero | Brazil–Paraguay | 209,000 |
Foz do Iguaçu–Ciudad del Este–Puerto Iguazu | Brazil–Paraguay–Argentina | 800,000 |
Bernardo de Irigoyen–Barracão–Dionísio Cerqueira | Argentina–Brazil | 36,000 |
Uruguaiana–Paso de los Libres | Brazil–Argentina | 170,000 |
Cúcuta–San Antonio del Táchira | Colombia–Venezuela | 700,000 [13] |
References
- http://www.lapatilla.com/site/2016/08/15/cucuta-y-san-antonio-dos-ciudades-unidas-pero-separadas-por-cierre-frontera/ "...which joins both cities into a conurbation; the 650,000 inhabitants of Cúcuta and the 50,000 of San Antonio knew that difficult times were beginning."
https://en.wikipedia.org/wiki/Transborder_agglomeration
Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.
The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record and the TLS handshake protocols.
The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions.[1]
TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999, and the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Navigator web browser.
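The two-layer structure described above (a handshake protocol that negotiates keys and certificates, then a record protocol that carries application data) maps directly onto Python's standard-library ssl module. The sketch below is illustrative, not a hardened client; example.org and the fetch_banner helper are placeholders, and the helper is deliberately not called so nothing touches the network.

```python
import ssl
import socket

# create_default_context() enables certificate verification and hostname
# checking, and will negotiate the highest TLS version both peers support.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL / early TLS

def fetch_version(host: str = "example.org", port: int = 443) -> str:
    """Run the TLS handshake and return the negotiated protocol version."""
    with socket.create_connection((host, port)) as raw_sock:
        # wrap_socket performs the handshake protocol; application data
        # then flows through the record protocol on the wrapped socket.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g. "TLSv1.3"

# Offline inspection of the configured context:
print(context.check_hostname)                     # True
print(context.verify_mode is ssl.CERT_REQUIRED)   # True
```

Setting minimum_version explicitly is a common hardening step: it rejects the deprecated SSL versions mentioned above even if a peer offers them.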
https://en.wikipedia.org/wiki/Transport_Layer_Security
The Dynamic Host Configuration Protocol version 6 (DHCPv6) is a network protocol for configuring Internet Protocol version 6 (IPv6) hosts with IP addresses, IP prefixes, default route, local segment MTU, and other configuration data required to operate in an IPv6 network. It is not just the IPv6 equivalent of the Dynamic Host Configuration Protocol for IPv4.
IPv6 hosts may automatically generate IP addresses internally using stateless address autoconfiguration (SLAAC), or they may be assigned configuration data with DHCPv6.
IPv6 hosts that use stateless autoconfiguration may require information other than an IP address or route. DHCPv6 can be used to acquire this information, even though it is not being used to configure IP addresses. DHCPv6 is not necessary for configuring hosts with the addresses of Domain Name System (DNS) servers, because they can be configured using Neighbor Discovery Protocol, which is also the mechanism for stateless autoconfiguration.[1]
Many IPv6 routers, such as routers for residential networks, must be configured automatically with no operator intervention. Such routers require not only an IPv6 address for use in communicating with upstream routers, but also an IPv6 prefix for use in configuring devices on the downstream side of the router. DHCPv6 prefix delegation provides a mechanism for configuring such routers.
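The stateless case described above, where a SLAAC host uses DHCPv6 only to fetch options such as DNS servers, uses the Information-request message. The following is a hedged sketch of that message's wire format per RFC 8415 (message type 11, a 3-byte transaction ID, then options; option 6 is the Option Request Option and option 23 requests DNS recursive name servers, per RFC 3646). It only builds the bytes and does not send anything.

```python
import os
import struct

MSG_INFORMATION_REQUEST = 11   # DHCPv6 message type (RFC 8415)
OPTION_ORO = 6                 # Option Request Option
OPTION_DNS_SERVERS = 23        # DNS Recursive Name Server option (RFC 3646)

def build_information_request(transaction_id: bytes) -> bytes:
    """msg-type (1 byte) + transaction-id (3 bytes) + options (TLV encoded)."""
    assert len(transaction_id) == 3
    # ORO payload: a list of 16-bit option codes the client is asking for.
    oro_payload = struct.pack("!H", OPTION_DNS_SERVERS)
    oro = struct.pack("!HH", OPTION_ORO, len(oro_payload)) + oro_payload
    return bytes([MSG_INFORMATION_REQUEST]) + transaction_id + oro

packet = build_information_request(os.urandom(3))
# A real client would send this from UDP port 546 to the
# All_DHCP_Relay_Agents_and_Servers multicast address ff02::1:2, port 547.
print(len(packet))  # 1 + 3 + 4 + 2 = 10 bytes
```

Because no address is being requested, the exchange is a simple two-message Information-request/Reply, in contrast to the four-message exchange used for stateful address assignment.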
https://en.wikipedia.org/wiki/DHCPv6
Transfer genes or tra genes (also transfer operons or tra operons) are genes necessary for the non-sexual transfer of genetic material in both gram-positive and gram-negative bacteria. The tra locus includes the pilin gene and regulatory genes, which together form pili on the cell surface: polymeric proteins that can attach themselves to the surface of F− bacteria and initiate conjugation. The existence of the tra region of a plasmid genome was first discovered in 1979 by David H. Figurski and Donald R. Helinski.[1] In the course of their work, Figurski and Helinski also discovered a second key fact about the tra region – that it can act in trans to the mobilization marker which it affects.[1]
This finding suggested that there were two basic aspects necessary for a plasmid to move from one cell to another:
- An origin of transfer – A plasmid with no origin of transfer is non-mobilizable.[2]
- The transfer genes – Though a functioning set of tra genes is necessary for plasmid transfer, they may be located in a variety of places including the plasmid in question, another plasmid in the same host cell, or even in the bacterial genome.[3]
The tra genes encode proteins which are useful for the propagation of the plasmid from the host cell to a compatible donor cell, or for maintenance of the plasmid. Not all transfer operons are the same: some genes are found only in a few species or a single genus of bacteria, while others (such as traL) are found in very similar forms in many bacterial species. Many of the transfer systems are incompatible. For example, oriT and bom are two origins of transfer which interact with different sets of transfer genes. A plasmid with a mob site (like many found in Rhodococcus species) cannot be transferred via transfer genes which normally interact with the oriT site (which is common in E. coli).[3]
The roles of some tra-gene encoded proteins:[4]

| Role | Genes |
|---|---|
| Pili assembly and production | traA, traB, traC, traE, traF, traG, traH, traK, traL, traQ, traU, traV, traW |
| Inner membrane proteins | traB, traE, traG, traL, traP |
| Periplasmic proteins | traC, traF, traH, traK, traU, traW |
| DNA transfer | traC, traD, traI, traM, traY |
| Surface exclusion proteins | traS, traT |
| Mating pair stabilization | traN, traG |
Each of the individual genes in the tra operon codes for a different protein product. These products may perform a number of tasks including interaction with one another to perform mating pair functions and regulation of different regions of the tra operon itself,[5] or conjugative DNA metabolism and surface exclusion.[4] Also, note that some proteins perform multiple functions or are associated closely with proteins which have non-similar functions.
References
- Grohmann E, Muth G, Espinosa M (2003). "Conjugative Plasmid Transfer in Gram-Positive Bacteria". Microbiology and Molecular Biology Reviews. 67 (2): 277–301. doi:10.1128/MMBR.67.2.277-301.2003. PMC 156469. PMID 12794193.
https://en.wikipedia.org/wiki/Transfer_gene
Pridnestrovian Moldavian Republic
| Anthem | "Мы славим тебя, Приднестровье" (My slavim tebya, Pridnestrovie; "We Sing the Praises of Transnistria")[2] |
| Status | Unrecognised state |
| Capital and largest city | Tiraspol, 46°50′25″N 29°38′36″E |
| Interethnic language | Russian[3][4][5] |
| Government | Unitary presidential republic |
| • President | Vadim Krasnoselsky |
| • Prime Minister | Aleksandr Rozenberg |
| • Chairman of the Supreme Council | Alexander Korshunov |
| Legislature | Supreme Council |
| Independence from the SSR of Moldova declared | 2 September 1990 |
| Independence from the Soviet Union declared | 25 August 1991 |
| Succeeded the Pridnestrovian Moldavian Soviet Socialist Republic | 5 November 1991[6] |
| Transnistria War | 2 March – 1 July 1992 |
| Area | 4,163 km² (1,607 sq mi); 2.35% water |
| Population | 360,938 (31 December 2022 Moldovan estimate);[7] 475,373 (2015 census);[8] density 73.5/km² (190.4/sq mi) |
| GDP (nominal, 2021) | $1.201 billion total;[9] $2,584 per capita |
| Currency | Transnistrian ruble (PRB) |
| Time zone | UTC+2 (EET); summer (DST) UTC+3 (EEST) |
| Calling code | +373[a] |
Transnistria, officially the Pridnestrovian Moldavian Republic (PMR),[c] is an unrecognised breakaway state that is internationally recognised as a part of Moldova. Transnistria controls most of the narrow strip of land between the Dniester river and the Moldovan–Ukrainian border, as well as some land on the river's western bank. Its capital and largest city is Tiraspol. Transnistria has been recognised only by three other unrecognised or partially recognised breakaway states: Abkhazia, Artsakh and South Ossetia.[10] Transnistria is officially designated by the Republic of Moldova as the Administrative-Territorial Units of the Left Bank of the Dniester (Romanian: Unitățile Administrativ-Teritoriale din stînga Nistrului)[11] or as Stînga Nistrului ("Left Bank of the Dniester").[12][13][14] In March 2022, the Parliamentary Assembly of the Council of Europe adopted a resolution that defines the territory as under military occupation by Russia.[15]
The region's origins can be traced to the Moldavian Autonomous Soviet Socialist Republic, which was formed in 1924 within the Ukrainian SSR. During World War II, the Soviet Union took parts of the Moldavian ASSR, which was dissolved, and of the Kingdom of Romania's Bessarabia to form the Moldavian Soviet Socialist Republic in 1940. The present history of the region dates to 1990, during the dissolution of the Soviet Union, when the Pridnestrovian Moldavian Soviet Socialist Republic was established in hopes that it would remain within the Soviet Union should Moldova seek unification with Romania or independence, the latter occurring in August 1991. Shortly afterwards, a military conflict between the two parties started in March 1992 and concluded with a ceasefire in July that year.
As part of the ceasefire agreement, a three-party (Russia, Moldova, Transnistria) Joint Control Commission supervises the security arrangements in the demilitarised zone, comprising 20 localities on both sides of the river.[citation needed] Although the ceasefire has held, the territory's political status remains unresolved: Transnistria is an unrecognised but de facto independent presidential republic[16] with its own government, parliament, military, police, postal system, currency, and vehicle registration.[17][18][19][20] Its authorities have adopted a constitution, flag, national anthem, and coat of arms. After a 2005 agreement between Moldova and Ukraine, all Transnistrian companies that seek to export goods through the Ukrainian border must be registered with the Moldovan authorities.[21] This agreement was implemented after the European Union Border Assistance Mission to Moldova and Ukraine (EUBAM) took force in 2005.[22] Most Transnistrians have Moldovan citizenship,[23] but many also have Russian, Romanian, or Ukrainian citizenship.[24][25] The main ethnic groups are Russians, Moldovans/Romanians, and Ukrainians.
Transnistria, along with Abkhazia, South Ossetia, and Artsakh, is a post-Soviet "frozen conflict" zone.[26] These four partially recognised or unrecognised states maintain friendly relations with each other and form the Community for Democracy and Rights of Nations.[27][28][29]
https://en.wikipedia.org/wiki/Transnistria
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.
TCP is connection-oriented, and a connection between client and server is established before data can be sent. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP employs network congestion avoidance. However, there are vulnerabilities in TCP, including denial of service, connection hijacking, TCP veto, and reset attack.
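The listen-before-connect ordering is visible in any sockets API. A minimal sketch with Python's `socket` module, using the loopback address and a made-up payload; `connect()` performs the active open and does not return until the kernel has completed the three-way handshake:

```python
# Sketch: passive open (listen) vs. active open (connect) over loopback.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen()                      # passive open: ready for SYNs
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()        # handshake already completed by the kernel
    with conn:
        conn.sendall(conn.recv(1024))   # echo one message back

t = threading.Thread(target=serve_one)
t.start()

cli = socket.create_connection(("127.0.0.1", port))  # active open: SYN, SYN-ACK, ACK
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close(); t.join(); srv.close()
print(reply)   # b'hello'
```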
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
TCP reset attack, also known as a "forged TCP reset" or "spoofed TCP reset", is a way to terminate a TCP connection by sending a forged TCP reset packet. This tampering technique can be used by a firewall or abused by a malicious attacker to interrupt Internet connections.
The Great Firewall of China and Iranian Internet censors are known to use TCP reset attacks to interfere with and block connections,[1] as a major method of carrying out Internet censorship.
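At the wire level, a forged reset is just a TCP segment with the RST flag set and header fields spoofed to match the victim connection. A sketch of that header layout with the standard `struct` module; the ports and sequence number are made up, the checksum is left zero for brevity, and a real attack must also guess a sequence number inside the receiver's window:

```python
# Sketch: packing a minimal 20-byte TCP header with the RST flag set,
# as a forged reset would carry. All field values are illustrative.
import struct

src_port, dst_port = 443, 51515   # spoofed to match the targeted connection
seq = 0x12345678                  # must fall inside the receiver's window
ack = 0
offset_flags = (5 << 12) | 0x004  # data offset 5 words; RST bit (0x04) set
window, checksum, urgent = 0, 0, 0

rst_header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                         offset_flags, window, checksum, urgent)
print(len(rst_header))              # 20
print(rst_header[13] & 0x04 != 0)   # True: RST bit is set in the flags byte
```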
https://en.wikipedia.org/wiki/TCP_reset_attack
A transformer is a passive component that transfers electrical energy from one electrical circuit to another circuit, or multiple circuits. A varying current in any coil of the transformer produces a varying magnetic flux in the transformer's core, which induces a varying electromotive force (EMF) across any other coils wound around the same core. Electrical energy can be transferred between separate coils without a metallic (conductive) connection between the two circuits. Faraday's law of induction, discovered in 1831, describes the induced voltage effect in any coil due to a changing magnetic flux encircled by the coil.
Transformers are used to change AC voltage levels, such transformers being termed step-up or step-down type to increase or decrease voltage level, respectively. Transformers can also be used to provide galvanic isolation between circuits as well as to couple stages of signal-processing circuits. Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electric power.[1] A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume, to units weighing hundreds of tons used to interconnect the power grid.
https://en.wikipedia.org/wiki/Transformer
Principles
Ideal transformer equations
By Faraday's law of induction:

V_P = −N_P dΦ/dt (Eq. 1)

V_S = −N_S dΦ/dt (Eq. 2)

where V is the instantaneous voltage, N is the number of turns in a winding, dΦ/dt is the derivative of the magnetic flux Φ through one turn of the winding over time (t), and subscripts P and S denote primary and secondary.

Combining the ratio of Eq. 1 & Eq. 2 gives the turns ratio:

V_P / V_S = N_P / N_S = a (Eq. 3)

where for a step-up transformer a < 1 and for a step-down transformer a > 1.[3]

By the law of conservation of energy, apparent, real and reactive power are each conserved in the input and output:

S = I_P V_P = I_S V_S (Eq. 4)

where S is apparent power and I is current.

Combining Eq. 3 & Eq. 4 with this endnote[b][4] gives the ideal transformer identity:

V_P / V_S = I_S / I_P = N_P / N_S = √(L_P / L_S) = a (Eq. 5)

where L is winding self-inductance.

By Ohm's law and the ideal transformer identity:

Z_L = V_S / I_S (Eq. 6)

Z′_L = V_P / I_P = a² Z_L (Eq. 7)

where Z_L is the load impedance of the secondary circuit and Z′_L is the apparent load or driving point impedance of the primary circuit, the prime denoting referral to the primary.
Ideal transformer
An ideal transformer is linear, lossless and perfectly coupled. Perfect coupling implies infinitely high core magnetic permeability and winding inductance and zero net magnetomotive force (i.e. ipnp − isns = 0).[3][c]
A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core, which is also encircled by the secondary winding. This varying flux at the secondary winding induces a varying electromotive force or voltage in the secondary winding. This electromagnetic induction phenomenon is the basis of transformer action and, in accordance with Lenz's law, the secondary current so produced creates a flux equal and opposite to that produced by the primary winding.
The windings are wound around a core of infinitely high magnetic permeability so that all of the magnetic flux passes through both the primary and secondary windings. With a voltage source connected to the primary winding and a load connected to the secondary winding, the transformer currents flow in the indicated directions and the core magnetomotive force cancels to zero.
According to Faraday's law, since the same magnetic flux passes through both the primary and secondary windings in an ideal transformer, a voltage is induced in each winding proportional to its number of windings. The transformer winding voltage ratio is equal to the winding turns ratio.[6]
An ideal transformer is a reasonable approximation for a typical commercial transformer, with voltage ratio and winding turns ratio both being inversely proportional to the corresponding current ratio.
The load impedance referred to the primary circuit is equal to the turns ratio squared times the secondary circuit load impedance.[7]
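A short numerical sketch of the ideal-transformer relations, using made-up values for a 240 V to 120 V step-down unit driving a 6 Ω load:

```python
# Sketch: ideal-transformer identities on illustrative numbers.
n_p, n_s = 200, 100          # hypothetical winding turns
a = n_p / n_s                # turns ratio (a > 1: step-down)

v_p = 240.0
v_s = v_p / a                # voltage ratio equals turns ratio
z_load = 6.0                 # secondary load impedance, ohms
i_s = v_s / z_load           # Ohm's law on the secondary
i_p = i_s / a                # current ratio is the inverse of the turns ratio
z_referred = a**2 * z_load   # load referred to the primary

print(v_s, i_s, i_p, z_referred)   # 120.0 20.0 10.0 24.0
print(v_p * i_p == v_s * i_s)      # True: apparent power is conserved
```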
Real transformer
Deviations from ideal transformer
The ideal transformer model neglects the following basic linear aspects of real transformers:
(a) Core losses, collectively called magnetizing current losses, consisting of[8]
- Hysteresis losses due to nonlinear magnetic effects in the transformer core, and
- Eddy current losses due to joule heating in the core that are proportional to the square of the transformer's applied voltage.
(b) Unlike the ideal model, the windings in a real transformer have non-zero resistances and inductances associated with:
- Joule losses due to resistance in the primary and secondary windings[8]
- Leakage flux that escapes from the core and passes through one winding only resulting in primary and secondary reactive impedance.
(c) Similar to an inductor, parasitic capacitance and self-resonance phenomena arise due to the electric field distribution. Three kinds of parasitic capacitance are usually considered and closed-loop equations are provided:[9]
- Capacitance between adjacent turns in any one layer;
- Capacitance between adjacent layers;
- Capacitance between the core and the layer(s) adjacent to the core;
Inclusion of capacitance into the transformer model is complicated, and is rarely attempted; the ‘real’ transformer model's equivalent circuit shown below does not include parasitic capacitance. However, the capacitance effect can be measured by comparing open-circuit inductance, i.e. the inductance of a primary winding when the secondary circuit is open, to a short-circuit inductance when the secondary winding is shorted.
Leakage flux
The ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings.[10] Such flux is termed leakage flux, and results in leakage inductance in series with the mutually coupled transformer windings.[11] Leakage flux results in energy being alternately stored in and discharged from the magnetic fields with each cycle of the power supply. It is not directly a power loss, but results in inferior voltage regulation, causing the secondary voltage not to be directly proportional to the primary voltage, particularly under heavy load.[10] Transformers are therefore normally designed to have very low leakage inductance.
In some applications increased leakage is desired, and long magnetic paths, air gaps, or magnetic bypass shunts may deliberately be introduced in a transformer design to limit the short-circuit current it will supply.[11] Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury- and sodium- vapor lamps and neon signs or for safely handling loads that become periodically short-circuited such as electric arc welders.[8]: 485
Air gaps are also used to keep a transformer from saturating, especially audio-frequency transformers in circuits that have a DC component flowing in the windings.[12] A saturable reactor exploits saturation of the core to control alternating current.
Knowledge of leakage inductance is also useful when transformers are operated in parallel. It can be shown that if the percent impedance [e] and associated winding leakage reactance-to-resistance (X/R) ratio of two transformers were the same, the transformers would share the load power in proportion to their respective ratings. However, the impedance tolerances of commercial transformers are significant. Also, the impedance and X/R ratio of different capacity transformers tends to vary.[14]
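The load-sharing rule above can be sketched numerically: on a common base, each paralleled unit's share is inversely proportional to its per-unit impedance, so units with equal percent impedance split the load in proportion to their ratings. The ratings and %Z values below are illustrative, and the sketch ignores X/R-ratio differences (which the text notes also matter):

```python
# Sketch: load sharing between paralleled transformers, magnitude only.
def load_sharing(ratings_kva, percent_z, total_load_kva):
    # Each unit's weight is rating / %Z (inverse per-unit impedance).
    weights = [s / z for s, z in zip(ratings_kva, percent_z)]
    total = sum(weights)
    return [total_load_kva * w / total for w in weights]

# Equal %Z: load divides in proportion to ratings (500:1000).
print(load_sharing([500, 1000], [5.0, 5.0], 1200))  # [400.0, 800.0]

# Unequal %Z: the lower-impedance unit takes more than its rated share.
print(load_sharing([500, 1000], [4.0, 5.0], 1200))
```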
Equivalent circuit
Referring to the diagram, a practical transformer's physical behavior may be represented by an equivalent circuit model, which can incorporate an ideal transformer.[15]
Winding joule losses and leakage reactances are represented by the following series loop impedances of the model:
- Primary winding: RP, XP
- Secondary winding: RS, XS.
In normal course of circuit equivalence transformation, RS and XS are in practice usually referred to the primary side by multiplying these impedances by the turns ratio squared, (NP/NS)² = a².
Core loss and reactance is represented by the following shunt leg impedances of the model:
- Core or iron losses: RC
- Magnetizing reactance: XM.
RC and XM are collectively termed the magnetizing branch of the model.
Core losses are caused mostly by hysteresis and eddy current effects in the core and are proportional to the square of the core flux for operation at a given frequency.[8] : 142–143 The finite permeability core requires a magnetizing current IM to maintain mutual flux in the core. Magnetizing current is in phase with the flux, the relationship between the two being non-linear due to saturation effects. However, all impedances of the equivalent circuit shown are by definition linear and such non-linearity effects are not typically reflected in transformer equivalent circuits.[8]: 142 With sinusoidal supply, core flux lags the induced EMF by 90°. With open-circuited secondary winding, magnetizing branch current I0 equals transformer no-load current.[15]
The resulting model, though sometimes termed 'exact' equivalent circuit based on linearity assumptions, retains a number of approximations.[15] Analysis may be simplified by assuming that magnetizing branch impedance is relatively high and relocating the branch to the left of the primary impedances. This introduces error but allows combination of primary and referred secondary resistances and reactances by simple summation as two series impedances.
Transformer equivalent circuit impedance and transformer ratio parameters can be derived from the following tests: open-circuit test, short-circuit test, winding resistance test, and transformer ratio test.
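The open-circuit test yields the magnetizing branch (RC, XM) and the short-circuit test yields the combined series winding impedance. A sketch of the standard arithmetic, with made-up test readings taken on the side the test is fed:

```python
# Sketch: equivalent-circuit parameters from open- and short-circuit
# test data. All readings are illustrative, not from the text.
import math

# Open-circuit test (rated voltage, secondary open): magnetizing branch.
v_oc, i_oc, p_oc = 230.0, 0.45, 35.0
r_c = v_oc**2 / p_oc                     # core-loss resistance RC
i_core = v_oc / r_c                      # core-loss component of no-load current
i_mag = math.sqrt(i_oc**2 - i_core**2)   # magnetizing component
x_m = v_oc / i_mag                       # magnetizing reactance XM

# Short-circuit test (reduced voltage, secondary shorted): series branch,
# referred to the fed side.
v_sc, i_sc, p_sc = 12.0, 8.7, 75.0
r_eq = p_sc / i_sc**2                    # combined winding resistance
z_eq = v_sc / i_sc                       # combined series impedance magnitude
x_eq = math.sqrt(z_eq**2 - r_eq**2)      # combined leakage reactance

print(round(r_c, 1), round(x_m, 1), round(r_eq, 3), round(x_eq, 3))
```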
Transformer EMF equation
If the flux in the core is purely sinusoidal, the relationship for either winding between its rms voltage Erms, the supply frequency f, number of turns N, core cross-sectional area A in m2 and peak magnetic flux density Bpeak in Wb/m2 or T (tesla) is given by the universal EMF equation:[8]

E_rms = (2π f N A B_peak) / √2 ≈ 4.44 f N A B_peak
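The universal EMF equation E_rms ≈ 4.44 f N A B_peak is easy to evaluate; the winding turns, core area and flux density below are illustrative values for a small 50 Hz mains transformer:

```python
# Sketch: evaluating the universal EMF equation on made-up values.
import math

f = 50.0          # supply frequency, Hz
n = 410           # turns on the winding (illustrative)
area = 1.8e-3     # core cross-sectional area, m^2 (illustrative)
b_peak = 1.4      # peak flux density, T (typical for silicon steel)

e_rms = 2 * math.pi * f * n * area * b_peak / math.sqrt(2)
print(round(e_rms, 1))   # 229.5, i.e. about a 230 V winding
```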
Polarity
A dot convention is often used in transformer circuit diagrams, nameplates or terminal markings to define the relative polarity of transformer windings. Positively increasing instantaneous current entering the primary winding's ‘dot’ end induces positive polarity voltage exiting the secondary winding's ‘dot’ end. Three-phase transformers used in electric power systems will have a nameplate that indicate the phase relationships between their terminals. This may be in the form of a phasor diagram, or using an alpha-numeric code to show the type of internal connection (wye or delta) for each winding.
Effect of frequency
The EMF of a transformer at a given flux increases with frequency.[8] By operating at higher frequencies, transformers can be physically more compact because a given core is able to transfer more power without reaching saturation and fewer turns are needed to achieve the same impedance. However, properties such as core loss and conductor skin effect also increase with frequency. Aircraft and military equipment employ 400 Hz power supplies which reduce core and winding weight.[16] Conversely, frequencies used for some railway electrification systems were much lower (e.g. 16.7 Hz and 25 Hz) than normal utility frequencies (50–60 Hz) for historical reasons concerned mainly with the limitations of early electric traction motors. Consequently, the transformers used to step-down the high overhead line voltages were much larger and heavier for the same power rating than those required for the higher frequencies.
Operation of a transformer at its designed voltage but at a higher frequency than intended will lead to reduced magnetizing current. At a lower frequency, the magnetizing current will increase. Operation of a large transformer at other than its design frequency may require assessment of voltages, losses, and cooling to establish if safe operation is practical. Transformers may require protective relays to protect the transformer from overvoltage at higher than rated frequency.
One example is in traction transformers used for electric multiple unit and high-speed train service operating across regions with different electrical standards. The converter equipment and traction transformers have to accommodate different input frequencies and voltage (ranging from as high as 50 Hz down to 16.7 Hz and rated up to 25 kV).
At much higher frequencies the transformer core size required drops dramatically: a physically small transformer can handle power levels that would require a massive iron core at mains frequency. The development of switching power semiconductor devices made switch-mode power supplies viable, to generate a high frequency, then change the voltage level with a small transformer.
Transformers for higher frequency applications such as SMPS typically use core materials with much lower hysteresis and eddy-current losses than those for 50/60 Hz. Primary examples are iron-powder and ferrite cores. The lower frequency-dependent losses of these cores often come at the expense of saturation flux density. For instance, ferrite saturation occurs at a substantially lower flux density than laminated iron.
Large power transformers are vulnerable to insulation failure due to transient voltages with high-frequency components, such as caused in switching or by lightning.
Energy losses
Transformer energy losses are dominated by winding and core losses. Transformers' efficiency tends to improve with increasing transformer capacity.[17] The efficiency of typical distribution transformers is between about 98 and 99 percent.[17][18]
As transformer losses vary with load, it is often useful to tabulate no-load loss, full-load loss, half-load loss, and so on. Hysteresis and eddy current losses are constant at all load levels and dominate at no load, while winding loss increases as load increases. The no-load loss can be significant, so that even an idle transformer constitutes a drain on the electrical supply. Designing energy efficient transformers for lower loss requires a larger core, good-quality silicon steel, or even amorphous steel for the core and thicker wire, increasing initial cost. The choice of construction represents a trade-off between initial cost and operating cost.[19]
Transformer losses arise from:
- Winding joule losses
- Current flowing through a winding's conductor causes joule heating due to the resistance of the wire. As frequency increases, skin effect and proximity effect cause the winding's resistance and, hence, losses to increase.
- Core losses
-
- Hysteresis losses
- Each time the magnetic field is reversed, a small amount of energy is lost due to hysteresis within the core, caused by motion of the magnetic domains within the steel. According to Steinmetz's formula, the heat energy due to hysteresis is given by
- W_h ≈ η β_max^1.6 (per cycle)
- and hysteresis loss is thus given by
- P_h ≈ W_h f ≈ η f β_max^1.6
- where f is the frequency, η is the hysteresis coefficient and β_max is the maximum flux density; the empirical exponent varies from about 1.4 to 1.8 but is often given as 1.6 for iron.[19] For more detailed analysis, see Magnetic core and Steinmetz's equation.
- Eddy current losses
- Eddy currents are induced in the conductive metal transformer core by the changing magnetic field, and this current flowing through the resistance of the iron dissipates energy as heat in the core. The eddy current loss is a complex function of the square of supply frequency and inverse square of the material thickness.[19] Eddy current losses can be reduced by making the core of a stack of laminations (thin plates) electrically insulated from each other, rather than a solid block; all transformers operating at low frequencies use laminated or similar cores.
- Magnetostriction related transformer hum
- Magnetic flux in a ferromagnetic material, such as the core, causes it to physically expand and contract slightly with each cycle of the magnetic field, an effect known as magnetostriction, the frictional energy of which produces an audible noise known as mains hum or "transformer hum".[20] This transformer hum is especially objectionable in transformers supplied at power frequencies and in high-frequency flyback transformers associated with television CRTs.
- Stray losses
- Leakage inductance is by itself largely lossless, since energy supplied to its magnetic fields is returned to the supply with the next half-cycle. However, any leakage flux that intercepts nearby conductive materials such as the transformer's support structure will give rise to eddy currents and be converted to heat.[21]
- Radiative
- There are also radiative losses due to the oscillating magnetic field but these are usually small.
- Mechanical vibration and audible noise transmission
- In addition to magnetostriction, the alternating magnetic field causes fluctuating forces between the primary and secondary windings. This energy incites vibration transmission in interconnected metalwork, thus amplifying audible transformer hum.[22]
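The different frequency scaling of the two main core-loss terms above can be sketched directly: Steinmetz hysteresis loss grows linearly with frequency, while eddy-current loss grows with its square. The coefficients are illustrative, not material data:

```python
# Sketch: frequency scaling of hysteresis vs. eddy-current core loss,
# per unit core volume, with made-up coefficients.
def hysteresis_loss(f, b_max, eta=0.025, exponent=1.6):
    # Steinmetz's formula: P_h ~ eta * f * b_max^1.6
    return eta * f * b_max ** exponent

def eddy_loss(f, b_max, k=2.0e-4):
    # Eddy loss scales with the square of frequency (and flux density).
    return k * (f ** 2) * (b_max ** 2)

# Doubling frequency doubles hysteresis loss but quadruples eddy loss:
print(hysteresis_loss(100, 1.4) / hysteresis_loss(50, 1.4))  # 2.0
print(eddy_loss(100, 1.4) / eddy_loss(50, 1.4))              # 4.0
```

This different scaling is one reason high-frequency designs move from laminated steel to low-eddy-loss materials such as ferrite.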
Construction
Cores
Closed-core transformers are constructed in 'core form' or 'shell form'. When windings surround the core, the transformer is core form; when windings are surrounded by the core, the transformer is shell form.[23] Shell form design may be more prevalent than core form design for distribution transformer applications due to the relative ease in stacking the core around winding coils.[23] Core form design tends to, as a general rule, be more economical, and therefore more prevalent, than shell form design for high voltage power transformer applications at the lower end of their voltage and power rating ranges (less than or equal to, nominally, 230 kV or 75 MVA). At higher voltage and power ratings, shell form transformers tend to be more prevalent.[23][24][25] Shell form design tends to be preferred for extra-high voltage and higher MVA applications because, though more labor-intensive to manufacture, shell form transformers are characterized as having inherently better kVA-to-weight ratio, better short-circuit strength characteristics and higher immunity to transit damage.[25]
Laminated steel cores
Transformers for use at power or audio frequencies typically have cores made of high permeability silicon steel.[26] The steel has a permeability many times that of free space and the core thus serves to greatly reduce the magnetizing current and confine the flux to a path which closely couples the windings.[27] Early transformer developers soon realized that cores constructed from solid iron resulted in prohibitive eddy current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires.[28] Later designs constructed the core by stacking layers of thin steel laminations, a principle that has remained in use. Each lamination is insulated from its neighbors by a thin non-conducting layer of insulation.[29] The transformer universal EMF equation can be used to calculate the core cross-sectional area for a preferred level of magnetic flux.[8]
The effect of laminations is to confine eddy currents to highly elliptical paths that enclose little flux, and so reduce their magnitude. Thinner laminations reduce losses,[26] but are more laborious and expensive to construct.[30] Thin laminations are generally used on high-frequency transformers, with some of very thin steel laminations able to operate up to 10 kHz.
One common design of laminated core is made from interleaved stacks of E-shaped steel sheets capped with I-shaped pieces, leading to its name of 'E-I transformer'.[30] Such a design tends to exhibit more losses, but is very economical to manufacture. The cut-core or C-core type is made by winding a steel strip around a rectangular form and then bonding the layers together. It is then cut in two, forming two C shapes, and the core assembled by binding the two C halves together with a steel strap.[30] They have the advantage that the flux is always oriented parallel to the metal grains, reducing reluctance.
A steel core's remanence means that it retains a static magnetic field when power is removed. When power is then reapplied, the residual field will cause a high inrush current until the effect of the remaining magnetism is reduced, usually after a few cycles of the applied AC waveform.[31] Overcurrent protection devices such as fuses must be selected to allow this harmless inrush to pass.
On transformers connected to long, overhead power transmission lines, induced currents due to geomagnetic disturbances during solar storms can cause saturation of the core and operation of transformer protection devices.[32]
Distribution transformers can achieve low no-load losses by using cores made with low-loss high-permeability silicon steel or amorphous (non-crystalline) metal alloy. The higher initial cost of the core material is offset over the life of the transformer by its lower losses at light load.[33]
Solid cores
Powdered iron cores are used in circuits such as switch-mode power supplies that operate above mains frequencies and up to a few tens of kilohertz. These materials combine high magnetic permeability with high bulk electrical resistivity. For frequencies extending beyond the VHF band, cores made from non-conductive magnetic ceramic materials called ferrites are common.[30] Some radio-frequency transformers also have movable cores (sometimes called 'slugs') which allow adjustment of the coupling coefficient (and bandwidth) of tuned radio-frequency circuits.
Toroidal cores
Toroidal transformers are built around a ring-shaped core, which, depending on operating frequency, is made from a long strip of silicon steel or permalloy wound into a coil, powdered iron, or ferrite.[34] A strip construction ensures that the grain boundaries are optimally aligned, improving the transformer's efficiency by reducing the core's reluctance. The closed ring shape eliminates air gaps inherent in the construction of an E-I core.[8] : 485 The cross-section of the ring is usually square or rectangular, but more expensive cores with circular cross-sections are also available. The primary and secondary coils are often wound concentrically to cover the entire surface of the core. This minimizes the length of wire needed and provides screening to minimize the core's magnetic field from generating electromagnetic interference.
Toroidal transformers are more efficient than the cheaper laminated E-I types for a similar power level. Other advantages compared to E-I types, include smaller size (about half), lower weight (about half), less mechanical hum (making them superior in audio amplifiers), lower exterior magnetic field (about one tenth), low off-load losses (making them more efficient in standby circuits), single-bolt mounting, and greater choice of shapes. The main disadvantages are higher cost and limited power capacity (see Classification parameters below). Because of the lack of a residual gap in the magnetic path, toroidal transformers also tend to exhibit higher inrush current, compared to laminated E-I types.
Ferrite toroidal cores are used at higher frequencies, typically between a few tens of kilohertz to hundreds of megahertz, to reduce losses, physical size, and weight of inductive components. A drawback of toroidal transformer construction is the higher labor cost of winding. This is because it is necessary to pass the entire length of a coil winding through the core aperture each time a single turn is added to the coil. As a consequence, toroidal transformers rated more than a few kVA are uncommon. Relatively few toroids are offered with power ratings above 10 kVA, and practically none above 25 kVA. Small distribution transformers may achieve some of the benefits of a toroidal core by splitting it and forcing it open, then inserting a bobbin containing primary and secondary windings.[35]
Air cores
A transformer can be produced by placing the windings near each other, an arrangement termed an "air-core" transformer. An air-core transformer eliminates loss due to hysteresis in the core material.[11] The magnetizing inductance is drastically reduced by the lack of a magnetic core, resulting in large magnetizing currents and losses if used at low frequencies. Air-core transformers are unsuitable for use in power distribution,[11] but are frequently employed in radio-frequency applications.[36] Air cores are also used for resonant transformers such as Tesla coils, where they can achieve reasonably low loss despite the low magnetizing inductance.
Windings
The electrical conductor used for the windings depends upon the application, but in all cases the individual turns must be electrically insulated from each other so that the current travels through every turn. For small transformers, in which currents are low and the potential difference between adjacent turns is small, the coils are often wound from enamelled magnet wire. Larger power transformers may be wound with copper rectangular strip conductors insulated by oil-impregnated paper and blocks of pressboard.[37]
High-frequency transformers operating in the tens to hundreds of kilohertz often have windings made of braided Litz wire to minimize the skin-effect and proximity effect losses.[38] Large power transformers use multiple-stranded conductors as well, since even at low power frequencies non-uniform distribution of current would otherwise exist in high-current windings.[37] Each strand is individually insulated, and the strands are arranged so that at certain points in the winding, or throughout the whole winding, each portion occupies different relative positions in the complete conductor. The transposition equalizes the current flowing in each strand of the conductor, and reduces eddy current losses in the winding itself. The stranded conductor is also more flexible than a solid conductor of similar size, aiding manufacture.[37]
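The benefit of Litz wire and transposed strands follows from the skin effect: at high frequency, current crowds into a surface layer of thickness δ = sqrt(ρ / (π f μ)). The sketch below (illustrative room-temperature constants for copper, not figures from the cited sources) shows why stranding matters at tens to hundreds of kilohertz while solid wire is adequate at power frequency:

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm*m (room temperature)
MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(freq_hz, rho=RHO_CU, mu=MU_0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(rho / (math.pi * freq_hz * mu))

# At 50 Hz the skin depth (~9 mm) exceeds typical wire radii;
# at 100 kHz it shrinks to ~0.2 mm, so thick solid conductors
# would carry current only in a thin shell.
for f in (50, 10e3, 100e3):
    print(f"{f:>8.0f} Hz: skin depth {skin_depth(f) * 1000:.2f} mm")
```

Litz wire sidesteps this by using many strands thinner than the skin depth, woven so each strand takes every radial position along the bundle.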
The windings of signal transformers minimize leakage inductance and stray capacitance to improve high-frequency response. Coils are split into sections, and those sections are interleaved with the sections of the other winding.
Power-frequency transformers may have taps at intermediate points on the winding, usually on the higher voltage winding side, for voltage adjustment. Taps may be manually reconnected, or a manual or automatic switch may be provided for changing taps. Automatic on-load tap changers are used in electric power transmission or distribution, on equipment such as arc furnace transformers, or for automatic voltage regulators for sensitive loads. Audio-frequency transformers, used for the distribution of audio to public address loudspeakers, have taps to allow adjustment of impedance to each speaker. A center-tapped transformer is often used in the output stage of an audio power amplifier in a push-pull circuit. Modulation transformers in AM transmitters are very similar.
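The impedance adjustment that speaker taps provide follows from the ideal-transformer relation Z_primary = (Np/Ns)² · Z_secondary: selecting a tap changes the effective turns ratio and hence the load the amplifier sees. A minimal sketch with illustrative turns ratios, not values from any particular transformer:

```python
def reflected_impedance(turns_ratio, z_secondary):
    """Impedance seen at the primary of an ideal transformer.

    turns_ratio = Np / Ns; an ideal transformer scales impedance
    by the square of the turns ratio.
    """
    return turns_ratio ** 2 * z_secondary

# An 8-ohm speaker seen through taps giving different turns ratios:
for ratio in (5, 10, 20):
    print(f"Np/Ns = {ratio:>2}: primary sees {reflected_impedance(ratio, 8):.0f} ohms")
```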
Cooling
It is a rule of thumb that the life expectancy of electrical insulation is halved for about every 7 °C to 10 °C increase in operating temperature (an instance of the application of the Arrhenius equation).[39]
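As a rough illustration of this rule of thumb (taking an assumed 8 °C halving interval as a midpoint of the quoted 7 °C to 10 °C range, and a hypothetical 30-year base life):

```python
def insulation_life(base_life_years, delta_t_celsius, halving_interval=8.0):
    """Rule-of-thumb insulation life after a sustained temperature rise.

    Life halves for every `halving_interval` degrees C above the rated
    operating temperature. 8 C is an assumed midpoint of the 7-10 C
    range quoted in the text, not a standard value.
    """
    return base_life_years * 2 ** (-delta_t_celsius / halving_interval)

print(insulation_life(30, 0))   # 30.0 years at rated temperature
print(insulation_life(30, 8))   # 15.0 years, one halving interval hotter
print(insulation_life(30, 16))  # 7.5 years, two intervals hotter
```

The exponential form is the practical consequence of the Arrhenius equation mentioned above: reaction rates that degrade the insulation grow roughly exponentially with temperature.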
Small dry-type and liquid-immersed transformers are often self-cooled by natural convection and radiation heat dissipation. As power ratings increase, transformers are often cooled by forced-air cooling, forced-oil cooling, water-cooling, or combinations of these.[40] Large transformers are filled with transformer oil that both cools and insulates the windings.[41] Transformer oil is often a highly refined mineral oil that cools the windings and insulation by circulating within the transformer tank. The mineral oil and paper insulation system has been extensively studied and used for more than 100 years. It is estimated that 50% of power transformers will survive 50 years of use, that the average age of failure of power transformers is about 10 to 15 years, and that about 30% of power transformer failures are due to insulation and overloading failures.[42][43] Prolonged operation at elevated temperature degrades insulating properties of winding insulation and dielectric coolant, which not only shortens transformer life but can ultimately lead to catastrophic transformer failure.[39] With a great body of empirical study as a guide, transformer oil testing including dissolved gas analysis provides valuable maintenance information.
Building regulations in many jurisdictions require indoor liquid-filled transformers to either use dielectric fluids that are less flammable than oil, or be installed in fire-resistant rooms.[17] Air-cooled dry transformers can be more economical where they eliminate the cost of a fire-resistant transformer room.
The tank of liquid-filled transformers often has radiators or fins through which the liquid coolant circulates by natural convection. Some large transformers employ electric fans for forced-air cooling, pumps for forced-liquid cooling, or heat exchangers for water-cooling.[41] An oil-immersed transformer may be equipped with a Buchholz relay, which, depending on the severity of gas accumulation due to internal arcing, is used to either alarm or de-energize the transformer.[31] Oil-immersed transformer installations usually include fire protection measures such as walls, oil containment, and fire-suppression sprinkler systems.
Polychlorinated biphenyls (PCBs) have properties that once favored their use as a dielectric coolant, though concerns over their environmental persistence led to a widespread ban on their use.[44] Today, non-toxic, stable silicone-based oils, or fluorinated hydrocarbons may be used where the expense of a fire-resistant liquid offsets additional building cost for a transformer vault.[17][45] However, the long life span of transformers can mean that the potential for exposure can be high long after banning.[46]
Some transformers are gas-insulated. Their windings are enclosed in sealed, pressurized tanks and often cooled by nitrogen or sulfur hexafluoride gas.[45]
Experimental power transformers in the 500‐to‐1,000 kVA range have been built with liquid nitrogen or helium cooled superconducting windings, which eliminates winding losses without affecting core losses.[47][48]
Insulation
Insulation must be provided between the individual turns of the windings, between the windings, between windings and core, and at the terminals of the winding.
Inter-turn insulation of small transformers may be a layer of insulating varnish on the wire. Layers of paper or polymer film may be inserted between layers of windings, and between primary and secondary windings. A transformer may be coated or dipped in a polymer resin to improve the strength of windings and protect them from moisture or corrosion. The resin may be impregnated into the winding insulation using combinations of vacuum and pressure during the coating process, eliminating all air voids in the winding. In the limit, the entire coil may be placed in a mold, and resin cast around it as a solid block, encapsulating the windings.[49]
Large oil-filled power transformers use windings wrapped with insulating paper, which is impregnated with oil during assembly of the transformer. Oil-filled transformers use highly refined mineral oil to insulate and cool the windings and core. Construction of oil-filled transformers requires that the insulation covering the windings be thoroughly dried of residual moisture before the oil is introduced. Drying may be done by circulating hot air around the core, by circulating externally heated transformer oil, or by vapor-phase drying (VPD) where an evaporated solvent transfers heat by condensation on the coil and core. For small transformers, resistance heating by injection of current into the windings is used.
Bushings
Larger transformers are provided with high-voltage insulated bushings made of polymers or porcelain. A large bushing can be a complex structure since it must provide careful control of the electric field gradient without letting the transformer leak oil.[50]
Classification parameters
Transformers can be classified in many ways, such as the following:
- Power rating: From a fraction of a volt-ampere (VA) to over a thousand MVA.
- Duty of a transformer: Continuous, short-time, intermittent, periodic, varying.
- Frequency range: Power-frequency, audio-frequency, or radio-frequency.
- Voltage class: From a few volts to hundreds of kilovolts.
- Cooling type: Dry or liquid-immersed; self-cooled, forced air-cooled, forced oil-cooled, or water-cooled.
- Application: power supply, impedance matching, output voltage and current stabilizer, pulse, circuit isolation, power distribution, rectifier, arc furnace, amplifier output, etc.
- Basic magnetic form: Core form, shell form, concentric, sandwich.
- Constant-potential transformer descriptor: Step-up, step-down, isolation.
- General winding configuration: By IEC vector group, two-winding combinations of the phase designations delta, wye or star, and zigzag; autotransformer, Scott-T.
- Rectifier phase-shift winding configuration: 2-winding, 6-pulse; 3-winding, 12-pulse; …; n-winding, [n − 1]·6-pulse; polygon; etc.
Applications
Various specific electrical application designs require a variety of transformer types. Although they all share the basic characteristic transformer principles, they are customized in construction or electrical properties for certain installation requirements or circuit conditions.
In electric power transmission, transformers allow transmission of electric power at high voltages, which reduces the loss due to heating of the wires. This allows generating plants to be located economically at a distance from electrical consumers.[51] All but a tiny fraction of the world's electrical power has passed through a series of transformers by the time it reaches the consumer.[21]
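The saving from high-voltage transmission follows from the I²R loss relation: for a fixed power delivered, raising the voltage by a factor k cuts the line current by k and the resistive loss by k². A sketch with illustrative figures (a 10 MW load over a 5 Ω line, unity power factor assumed; these are not figures from the cited sources):

```python
def line_loss_watts(power_w, voltage_v, line_resistance_ohm):
    """I^2 * R loss for delivering `power_w` at `voltage_v` through a line.

    Assumes unity power factor, so line current I = P / V.
    """
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

P, R = 10e6, 5.0  # 10 MW load, 5-ohm line (illustrative)
low = line_loss_watts(P, 11e3, R)    # distribution-level voltage
high = line_loss_watts(P, 132e3, R)  # transmission-level voltage
print(f"at  11 kV: {low / 1e6:.2f} MW lost")
print(f"at 132 kV: {high / 1e3:.1f} kW lost")
print(f"ratio: {low / high:.0f}x")   # (132/11)^2 = 144
```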
In many electronic devices, a transformer is used to convert voltage from the distribution wiring to convenient values for the circuit requirements, either directly at the power line frequency or through a switch mode power supply.
Signal and audio transformers are used to couple stages of amplifiers and to match devices such as microphones and record players to the input of amplifiers. Audio transformers allowed telephone circuits to carry on a two-way conversation over a single pair of wires. A balun transformer converts a signal that is referenced to ground to a signal that has balanced voltages to ground, such as between external cables and internal circuits. Isolation transformers prevent leakage of current into the secondary circuit and are used in medical equipment and at construction sites. Resonant transformers are used for coupling between stages of radio receivers, or in high-voltage Tesla coils.
History
Discovery of induction
Electromagnetic induction, the principle of the operation of the transformer, was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832.[53][54][55][56] Only Faraday furthered his experiments to the point of working out the equation describing the relationship between EMF and magnetic flux now known as Faraday's law of induction:
|E| = |dΦB/dt|, where |E| is the magnitude of the EMF in volts and ΦB is the magnetic flux through the circuit in webers.[57]
Faraday performed early experiments on induction between coils of wire, including winding a pair of coils around an iron ring, thus creating the first toroidal closed-core transformer.[56][58] However he only applied individual pulses of current to his transformer, and never discovered the relation between the turns ratio and EMF in the windings.
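Faraday's law gives the EMF of a winding directly. For a sinusoidal flux Φ(t) = Φpk·sin(2πft) linking N turns, differentiating gives a peak EMF of N·2πf·Φpk. A minimal sketch with illustrative values (not figures from the sources above):

```python
import math

def peak_emf(turns, freq_hz, peak_flux_wb):
    """Peak EMF of an N-turn winding with sinusoidal flux, from
    Faraday's law: e(t) = N * dPhi/dt with Phi(t) = Phi_pk * sin(2*pi*f*t),
    so the peak value is N * 2 * pi * f * Phi_pk."""
    return turns * 2 * math.pi * freq_hz * peak_flux_wb

# Illustrative numbers: 100 turns, 50 Hz, 10 mWb peak flux.
print(f"{peak_emf(100, 50, 0.01):.1f} V peak")  # about 314 V
```

The same relation underlies the turns-ratio behavior noted in the next section: since both windings link the same flux, EMF scales in proportion to the number of turns.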
Induction coils
The first type of transformer to see wide use was the induction coil, invented by Rev. Nicholas Callan of Maynooth College, Ireland in 1836.[56] He was one of the first researchers to realize the more turns the secondary winding has in relation to the primary winding, the larger the induced secondary EMF will be. Induction coils evolved from scientists' and inventors' efforts to get higher voltages from batteries. Since batteries produce direct current (DC) rather than AC, induction coils relied upon vibrating electrical contacts that regularly interrupted the current in the primary to create the flux changes necessary for induction. Between the 1830s and the 1870s, efforts to build better induction coils, mostly by trial and error, slowly revealed the basic principles of transformers.
First alternating current transformers
By the 1870s, efficient generators producing alternating current (AC) were available, and it was found AC could power an induction coil directly, without an interrupter.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system based on a set of induction coils where the primary windings were connected to a source of AC. The secondary windings could be connected to several 'electric candles' (arc lamps) of his own design. The coils Yablochkov employed functioned essentially as transformers.[59]
In 1878, the Ganz factory, Budapest, Hungary, began producing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.[56][60]
Lucien Gaulard and John Dixon Gibbs first exhibited a device with an open iron core called a 'secondary generator' in London in 1882, then sold the idea to the Westinghouse company in the United States.[28] They also exhibited the invention in Turin, Italy in 1884, where it was adopted for an electric lighting system.[61]
Early series circuit transformer distribution
Induction coils with open magnetic circuits are inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil.[61] Efficient, practical transformer designs did not appear until the 1880s, but within a decade, the transformer would be instrumental in the war of the currents, and in seeing AC distribution systems triumph over their DC counterparts, a position in which they have remained dominant ever since.[62]
Closed-core transformers and parallel power distribution
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three Hungarian engineers associated with the Ganz Works, had determined that open-core devices were impracticable, as they were incapable of reliably regulating voltage.[60] In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around an iron wire ring core or surrounded by an iron wire core.[61] The two designs were the first application of the two basic transformer constructions in common use to this day, termed "core form" or "shell form".[63] The Ganz factory had also in the autumn of 1884 made delivery of the world's first five high-efficiency AC transformers, the first of these units having been shipped on September 16, 1884.[64] This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form.[64]
In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see Toroidal cores below). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs.[65] The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1,400 to 2,000 V) than the voltage of utilization loads (100 V initially preferred).[66][67] When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments.[68] In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores.[69]
Transformers today are designed on the principles discovered by the three engineers. They also popularized the word 'transformer' to describe a device for altering the EMF of an electric current[70] although the term had already been in use by 1882.[71][72] In 1886, the ZBD engineers designed, and the Ganz factory supplied electrical equipment for, the world's first power station that used AC generators to power a parallel connected common electrical network, the steam-powered Rome-Cerchi power plant.[73]
Westinghouse improvements
Although George Westinghouse had bought Gaulard and Gibbs' patents in 1885, the Edison Electric Light Company held an option on the US rights for the ZBD transformers, requiring Westinghouse to pursue alternative designs on the same principles. He assigned to William Stanley the task of developing a device for commercial use in the United States.[74] Stanley's first patented design was for induction coils with single cores of soft iron and adjustable gaps to regulate the EMF present in the secondary winding (see image). This design[75] was first used commercially in the US in 1886[76] but Westinghouse was intent on improving the Stanley design to make it (unlike the ZBD type) easy and cheap to produce.[75]
Westinghouse, Stanley and associates soon developed a core that was easier to manufacture, consisting of a stack of thin 'E‑shaped' iron plates insulated by thin sheets of paper or other insulating material. Pre-wound copper coils could then be slid into place, and straight iron plates laid in to create a closed magnetic circuit. Westinghouse obtained a patent for the new low-cost design in 1887.[68]
Other early transformer designs
In 1889, Russian-born engineer Mikhail Dolivo-Dobrovolsky developed the first three-phase transformer at the Allgemeine Elektricitäts-Gesellschaft ('General Electricity Company') in Germany.[77]
In 1891, Nikola Tesla invented the Tesla coil, an air-cored, dual-tuned resonant transformer for producing very high voltages at high frequency.[78]
Audio frequency transformers ("repeating coils") were used by early experimenters in the development of the telephone.[79]
See also
Notes
- Percent impedance is the ratio of the voltage drop in the secondary from no load to full load.[13]
References
- "Telephone". Encyclopædia Britannica. www.britannica.com. Retrieved 2022-07-17.
Bibliography
- Beeman, Donald, ed. (1955). Industrial Power Systems Handbook. McGraw-Hill.
- Calvert, James (2001). "Inside Transformers". University of Denver. Archived from the original on May 9, 2007. Retrieved May 19, 2007.
- Coltman, J. W. (Jan 1988). "The Transformer". Scientific American. 258 (1): 86–95. Bibcode:1988SciAm.258a..86C. doi:10.1038/scientificamerican0188-86. OSTI 6851152.
- Coltman, J. W. (Jan–Feb 2002). "The Transformer [Historical Overview]". IEEE Industry Applications Magazine. 8 (1): 8–15. doi:10.1109/2943.974352. S2CID 18160717.
- Brenner, Egon; Javid, Mansour (1959). "Chapter 18–Circuits with Magnetic Coupling". Analysis of Electric Circuits. McGraw-Hill. pp. 586–622.
- CEGB, (Central Electricity Generating Board) (1982). Modern Power Station Practice. Pergamon. ISBN 978-0-08-016436-6.
- Crosby, D. (1958). "The Ideal Transformer". IRE Transactions on Circuit Theory. 5 (2): 145. doi:10.1109/TCT.1958.1086447.
- Daniels, A. R. (1985). Introduction to Electrical Machines. Macmillan. ISBN 978-0-333-19627-4.
- De Keulenaer, Hans; Chapman, David; Fassbinder, Stefan; McDermott, Mike (2001). The Scope for Energy Saving in the EU through the Use of Energy-Efficient Electricity Distribution Transformers (PDF). 16th International Conference and Exhibition on Electricity Distribution (CIRED 2001). Institution of Engineering and Technology. doi:10.1049/cp:20010853. Retrieved 10 July 2014.
- Del Vecchio, Robert M.; Poulin, Bertrand; Feghali, Pierre T.M.; Shah, Dilipkumar; Ahuja, Rajendra (2002). Transformer Design Principles: With Applications to Core-Form Power Transformers. Boca Raton: CRC Press. ISBN 978-90-5699-703-8.
- Fink, Donald G.; Beatty, H. Wayne, eds. (1978). Standard Handbook for Electrical Engineers (11th ed.). McGraw Hill. ISBN 978-0-07-020974-9.
- Gottlieb, Irving (1998). Practical Transformer Handbook: for Electronics, Radio and Communications Engineers. Elsevier. ISBN 978-0-7506-3992-7.
- Guarnieri, M. (2013). "Who Invented the Transformer?". IEEE Industrial Electronics Magazine. 7 (4): 56–59. doi:10.1109/MIE.2013.2283834. S2CID 27936000.
- Halacsy, A. A.; Von Fuchs, G. H. (April 1961). "Transformer Invented 75 Years Ago". IEEE Transactions of the American Institute of Electrical Engineers. 80 (3): 121–125. doi:10.1109/AIEEPAS.1961.4500994. S2CID 51632693.
- Hameyer, Kay (2004). Electrical Machines I: Basics, Design, Function, Operation (PDF). RWTH Aachen University Institute of Electrical Machines. Archived from the original (PDF) on 2013-02-10.
- Hammond, John Winthrop (1941). Men and Volts: The Story of General Electric. J. B. Lippincott Company. pp. see esp. 106–107, 178, 238.
- Harlow, James (2004). Electric Power Transformer Engineering (PDF). CRC Press. ISBN 0-8493-1704-5.[permanent dead link]
- Hughes, Thomas P. (1993). Networks of Power: Electrification in Western Society, 1880-1930. Baltimore: The Johns Hopkins University Press. p. 96. ISBN 978-0-8018-2873-7. Retrieved Sep 9, 2009.
- Heathcote, Martin (1998). J & P Transformer Book (12th ed.). Newnes. ISBN 978-0-7506-1158-9.
- Hindmarsh, John (1977). Electrical Machines and Their Applications (4th ed.). Exeter: Pergamon. ISBN 978-0-08-030573-8.
- Kothari, D.P.; Nagrath, I.J. (2010). Electric Machines (4th ed.). Tata McGraw-Hill. ISBN 978-0-07-069967-0.
- Kulkarni, S. V.; Khaparde, S. A. (2004). Transformer Engineering: Design and Practice. CRC Press. ISBN 978-0-8247-5653-6.
- McLaren, Peter (1984). Elementary Electric Power and Machines. Ellis Horwood. ISBN 978-0-470-20057-5.
- McLyman, Colonel William (2004). "Chapter 3". Transformer and Inductor Design Handbook. CRC. ISBN 0-8247-5393-3.
- Pansini, Anthony (1999). Electrical Transformers and Power Equipment. CRC Press. ISBN 978-0-88173-311-2.
- Parker, M. R; Ula, S.; Webb, W. E. (2005). "§2.5.5 'Transformers' & §10.1.3 'The Ideal Transformer'". In Whitaker, Jerry C. (ed.). The Electronics Handbook (2nd ed.). Taylor & Francis. pp. 172, 1017. ISBN 0-8493-1889-0.
- Ryan, H. M. (2004). High Voltage Engineering and Testing. CRC Press. ISBN 978-0-85296-775-1.
In electrical engineering, a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve.
Typical seasonal loads of electric utilities in Eastern New England Division in 1919. United States Geological Survey, 1921. Image courtesy of the National Museum of American History from Powering a Generation of Change
https://en.wikipedia.org/wiki/Load_profile
A rectiformer is a rectifier and transformer designed and built as a single entity for converting alternating current into direct current. It is a piece of power-systems equipment rather than an electronics component. Rectiformers are used to supply power to the individual fields of an electrostatic precipitator (ESP). Rectiformers are also used to provide the DC supply for Hall process cells in the aluminium smelting industry.
Rectiformers are commonly found in electrowinning operations, where a direct current is required to convert base metal ions such as copper to a metal at the cathode. The passage of an electric current through a purified copper sulfate solution produces cathode copper. The equation is as follows:
Cu²⁺(aq) + 2 e⁻ → Cu⁰
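The cathode reaction above obeys Faraday's law of electrolysis: the mass deposited is m = M·Q/(n·F), with n = 2 electrons per copper ion. A sketch with illustrative cell figures (the 1000 A, 24-hour example is assumed, not taken from the cited reference):

```python
F = 96485.0   # Faraday constant, C/mol
M_CU = 63.55  # molar mass of copper, g/mol

def copper_deposited_kg(current_a, hours, efficiency=1.0):
    """Mass of copper plated at the cathode, by Faraday's law of
    electrolysis: m = M * Q / (n * F), with n = 2 electrons per Cu2+ ion.
    `efficiency` scales the charge actually spent on deposition."""
    charge_c = current_a * hours * 3600 * efficiency
    return M_CU * charge_c / (2 * F) / 1000  # grams -> kilograms

# Illustrative figures: a 1000 A cell run for 24 hours.
print(f"{copper_deposited_kg(1000, 24):.1f} kg of cathode copper")
```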
Physical Characteristics
Rectiformers may be designed to output voltages from 30 V to over 120 kV DC and can weigh over 400 tons.[1]
See also
References
- Rectiformer details, Aluminium smelting rectiformer project.
External links
https://en.wikipedia.org/wiki/Rectiformer
The Transformation of the Ottoman Empire, also known as the Era of Transformation, constitutes a period in the history of the Ottoman Empire from c. 1550 to c. 1700, spanning roughly from the end of the reign of Suleiman the Magnificent to the Treaty of Karlowitz at the conclusion of the War of the Holy League. This period was characterized by numerous dramatic political, social, and economic changes, which resulted in the empire shifting from an expansionist, patrimonial state into a bureaucratic empire based on an ideology of upholding justice and acting as the protector of Sunni Islam.[1] These changes were in large part prompted by a series of political and economic crises in the late 16th and early 17th centuries,[2][3] resulting from inflation, warfare, and political factionalism.[4] Yet despite these crises the empire remained strong both politically and economically,[5] and continued to adapt to the challenges of a changing world. The 17th century was once characterized as a period of decline for the Ottomans, but since the 1980s historians of the Ottoman Empire have increasingly rejected that characterization, identifying it instead as a period of crisis, adaptation, and transformation.[6]
https://en.wikipedia.org/wiki/Transformation_of_the_Ottoman_Empire
In 20th-century discussions of Karl Marx's economics, the transformation problem is the problem of finding a general rule by which to transform the "values" of commodities (based on their socially necessary labour content, according to his labour theory of value) into the "competitive prices" of the marketplace. This problem was first introduced by Marx in chapter 9 of the draft of volume 3 of Capital, where he also sketched a solution. The essential difficulty was this: given that Marx derived profit, in the form of surplus value, from direct labour inputs, and that the ratio of direct labour input to capital input varied widely between commodities, how could he reconcile this with a tendency toward an average rate of profit on all capital invested among industries, if such a tendency (as predicted by Marx and Ricardo) exists?
https://en.wikipedia.org/wiki/Transformation_problem
Transformative learning, as a theory, says that the process of "perspective transformation" has three dimensions: psychological (changes in understanding of the self), convictional (revision of belief systems), and behavioral (changes in lifestyle).[1]
Transformative learning is the expansion of consciousness through the transformation of basic worldview and specific capacities of the self; transformative learning is facilitated through consciously directed processes such as appreciatively accessing and receiving the symbolic contents of the unconscious and critically analyzing underlying premises.[2]
Perspective transformation, leading to transformative learning, occurs infrequently. Jack Mezirow believes that it usually results from a "disorienting dilemma" which is triggered by a life crisis or major life transition—although it may also result from an accumulation of transformations in meaning schemes over a period of time.[3] Less dramatic predicaments, such as those created by a teacher for pedagogical effect, also promote transformation.[4]
An important part of transformative learning is for individuals to change their frames of reference by critically reflecting on their assumptions and beliefs and consciously making and implementing plans that bring about new ways of defining their worlds. This process is fundamentally rational and analytical.[5][6]
https://en.wikipedia.org/wiki/Transformative_learning
For Boyd, transformation is a "fundamental change in one's personality involving [together] the resolution of a personal dilemma and the expansion of consciousness resulting in greater personality integration".[19] This calls upon extra-rational sources such as symbols, images, and archetypes to assist in creating a personal vision or meaning of what it means to be human.[20]
https://en.wikipedia.org/wiki/Transformative_learning
On the surface, the two views of transformative learning presented here are contradictory. One advocates a rational approach that depends primarily on critical reflection whereas the other relies more on intuition and emotion.
https://en.wikipedia.org/wiki/Transformative_learning
In molecular biology and genetics, transformation is the genetic alteration of a cell resulting from the direct uptake and incorporation of exogenous genetic material from its surroundings through the cell membrane(s). For transformation to take place, the recipient bacterium must be in a state of competence, which might occur in nature as a time-limited response to environmental conditions such as starvation and cell density, and may also be induced in a laboratory.[1]
Transformation is one of three processes that lead to horizontal gene transfer, in which exogenous genetic material passes from one bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium).[1] In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.[1]
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between Gram-positive and Gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers.[1]
"Transformation" may also be used to describe the insertion of new genetic material into nonbacterial cells, including animal and plant cells; however, because "transformation" has a special meaning in relation to animal cells, indicating progression to a cancerous state, the process is usually called "transfection".[2]
History
Transformation in bacteria was first demonstrated in 1928 by the British bacteriologist Frederick Griffith.[3] Griffith was interested in determining whether injections of heat-killed bacteria could be used to vaccinate mice against pneumonia. However, he discovered that a non-virulent strain of Streptococcus pneumoniae could be made virulent after being exposed to heat-killed virulent strains. Griffith hypothesized that some "transforming principle" from the heat-killed strain was responsible for making the harmless strain virulent. In 1944 this "transforming principle" was identified as genetic material by Oswald Avery, Colin MacLeod, and Maclyn McCarty. They isolated DNA from a virulent strain of S. pneumoniae, and using just this DNA they were able to make a harmless strain virulent. They called this uptake and incorporation of DNA by bacteria "transformation" (see the Avery–MacLeod–McCarty experiment).[4] The results of Avery et al.'s experiments were at first received skeptically by the scientific community, and it was not until the development of genetic markers and the discovery of other methods of genetic transfer (conjugation in 1947 and transduction in 1953) by Joshua Lederberg that Avery's experiments were accepted.[5]
It was originally thought that Escherichia coli, a commonly used laboratory organism, was refractory to transformation. However, in 1970, Morton Mandel and Akiko Higa showed that E. coli may be induced to take up DNA from bacteriophage λ without the use of helper phage after treatment with calcium chloride solution.[6] Two years later, in 1972, Stanley Norman Cohen, Annie Chang and Leslie Hsu showed that CaCl2 treatment is also effective for transformation of plasmid DNA.[7] The method of transformation by Mandel and Higa was later improved upon by Douglas Hanahan.[8] The discovery of artificially induced competence in E. coli created an efficient and convenient procedure for transforming bacteria which allows for simpler molecular cloning methods in biotechnology and research, and it is now a routinely used laboratory procedure.
Transformation using electroporation was developed in the late 1980s, increasing the efficiency of in-vitro transformation and increasing the number of bacterial strains that could be transformed.[9] Transformation of animal and plant cells was also investigated with the first transgenic mouse being created by injecting a gene for a rat growth hormone into a mouse embryo in 1982.[10] In 1897 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid.[11] By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants.[12] Not all plant cells are susceptible to infection by A. tumefaciens, so other methods were developed, including electroporation and micro-injection.[13] Particle bombardment was made possible with the invention of the Biolistic Particle Delivery System (gene gun) by John Sanford in the 1980s.[14][15][16]
Definitions
Transformation is one of three forms of horizontal gene transfer that occur in nature among bacteria, in which DNA encoding for a trait passes from one bacterium to another and is integrated into the recipient genome by homologous recombination; the other two are transduction, carried out by means of a bacteriophage, and conjugation, in which a gene is passed through direct contact between bacteria.[1] In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.[1]
Competence refers to a temporary state of being able to take up exogenous DNA from the environment; it may be induced in a laboratory.[1]
Natural transformation appears to be an ancient process inherited from a common prokaryotic ancestor: a beneficial adaptation that promotes recombinational repair of DNA damage, especially damage acquired under stressful conditions, while also generating genetic diversity.[1][17]
Transformation has been studied in medically important Gram-negative bacteria species such as Helicobacter pylori, Legionella pneumophila, Neisseria meningitidis, Neisseria gonorrhoeae, Haemophilus influenzae and Vibrio cholerae.[18] It has also been studied in Gram-negative species found in soil such as Pseudomonas stutzeri, Acinetobacter baylyi, and Gram-negative plant pathogens such as Ralstonia solanacearum and Xylella fastidiosa.[18] Transformation among Gram-positive bacteria has been studied in medically important species such as Streptococcus pneumoniae, Streptococcus mutans, Staphylococcus aureus and Streptococcus sanguinis and in Gram-positive soil bacterium Bacillus subtilis.[17] It has also been reported in at least 30 species of Pseudomonadota distributed in several different classes.[19] The best studied Pseudomonadota with respect to transformation are the medically important human pathogens Neisseria gonorrhoeae, Haemophilus influenzae, and Helicobacter pylori.[17]
Natural competence and transformation
Naturally competent bacteria carry sets of genes that provide the protein machinery to bring DNA across the cell membrane(s). The transport of the exogenous DNA into the cells may require proteins that are involved in the assembly of type IV pili and type II secretion system, as well as DNA translocase complex at the cytoplasmic membrane.[20]
Due to the differences in structure of the cell envelope between Gram-positive and Gram-negative bacteria, there are some differences in the mechanisms of DNA uptake in these cells; however, most of them share common features that involve related proteins. The DNA first binds to the surface of the competent cells on a DNA receptor, and passes through the cytoplasmic membrane via DNA translocase.[21] Only single-stranded DNA may pass through, the other strand being degraded by nucleases in the process. The translocated single-stranded DNA may then be integrated into the bacterial chromosome by a RecA-dependent process. In Gram-negative cells, due to the presence of an extra membrane, the DNA additionally requires a channel formed by secretins on the outer membrane. Pilin may be required for competence, but its role is uncertain.[22] The uptake of DNA is generally not sequence-specific, although in some species the presence of specific DNA uptake sequences may facilitate efficient DNA uptake.[23]
Natural transformation
Natural transformation is a bacterial adaptation for DNA transfer that depends on the expression of numerous bacterial genes whose products appear to be responsible for this process.[20][19] In general, transformation is a complex, energy-requiring developmental process. In order for a bacterium to bind, take up and recombine exogenous DNA into its chromosome, it must become competent, that is, enter a special physiological state. Competence development in Bacillus subtilis requires expression of about 40 genes.[24] The DNA integrated into the host chromosome is usually (but with rare exceptions) derived from another bacterium of the same species, and is thus homologous to the resident chromosome.
In B. subtilis the length of the transferred DNA can be greater than 1271 kb (more than 1 million bases).[25] The transferred DNA is likely double-stranded, and is often more than a third of the total chromosome length of 4215 kb.[26] It appears that about 7–9% of the recipient cells take up an entire chromosome.[27]
The capacity for natural transformation appears to occur in a number of prokaryotes, and thus far 67 prokaryotic species (in seven different phyla) are known to undergo this process.[19]
Competence for transformation is typically induced by high cell density and/or nutritional limitation, conditions associated with the stationary phase of bacterial growth. Transformation in Haemophilus influenzae occurs most efficiently at the end of exponential growth as bacterial growth approaches stationary phase.[28] Transformation in Streptococcus mutans, as well as in many other streptococci, occurs at high cell density and is associated with biofilm formation.[29] Competence in B. subtilis is induced toward the end of logarithmic growth, especially under conditions of amino acid limitation.[30] Similarly, in Micrococcus luteus (a representative of the less well studied Actinomycetota phylum), competence develops during the mid-to-late exponential growth phase and is also triggered by amino acid starvation.[31][32]
By releasing intact host and plasmid DNA, certain bacteriophages are thought to contribute to transformation.[33]
Transformation, as an adaptation for DNA repair
Competence is specifically induced by DNA-damaging conditions. For instance, transformation is induced in Streptococcus pneumoniae by the DNA-damaging agents mitomycin C (a DNA cross-linking agent) and fluoroquinolone (a topoisomerase inhibitor that causes double-strand breaks).[34] In B. subtilis, transformation is increased by UV light, a DNA-damaging agent.[35] In Helicobacter pylori, ciprofloxacin, which interacts with DNA gyrase and introduces double-strand breaks, induces expression of competence genes, thus enhancing the frequency of transformation.[36] Using Legionella pneumophila, Charpentier et al.[37] tested 64 toxic molecules to determine which of these induce competence. Of these, only six, all DNA-damaging agents, caused strong induction. These were mitomycin C (which causes DNA inter-strand crosslinks), norfloxacin, ofloxacin and nalidixic acid (inhibitors of DNA gyrase that cause double-strand breaks[38]), bicyclomycin (which causes single- and double-strand breaks[39]), and hydroxyurea (which induces DNA base oxidation[40]). UV light also induced competence in L. pneumophila. Charpentier et al.[37] suggested that competence for transformation probably evolved as a DNA damage response.
Logarithmically growing bacteria differ from stationary phase bacteria with respect to the number of genome copies present in the cell, and this has implications for the capability to carry out an important DNA repair process. During logarithmic growth, two or more copies of any particular region of the chromosome may be present in a bacterial cell, as cell division is not precisely matched with chromosome replication. The process of homologous recombinational repair (HRR) is a key DNA repair process that is especially effective for repairing double-strand damage, such as double-strand breaks. This process depends on a second homologous chromosome in addition to the damaged chromosome. During logarithmic growth, DNA damage in one chromosome may be repaired by HRR using sequence information from the other homologous chromosome. Once cells approach stationary phase, however, they typically have just one copy of the chromosome, and HRR requires input of homologous template from outside the cell by transformation.[41]
To test whether the adaptive function of transformation is the repair of DNA damage, a series of experiments was carried out using B. subtilis irradiated by UV light as the damaging agent (reviewed by Michod et al.[42] and Bernstein et al.[41]). The results of these experiments indicated that transforming DNA acts to repair potentially lethal DNA damage introduced by UV light in the recipient DNA. The particular process responsible for repair was likely HRR. Transformation in bacteria can be viewed as a primitive sexual process, since it involves interaction of homologous DNA from two individuals to form recombinant DNA that is passed on to succeeding generations. Bacterial transformation in prokaryotes may have been the ancestral process that gave rise to meiotic sexual reproduction in eukaryotes (see Evolution of sexual reproduction; Meiosis).
Methods and mechanisms of transformation in laboratory
Bacterial
Artificial competence can be induced in laboratory procedures that involve making the cell passively permeable to DNA by exposing it to conditions that do not normally occur in nature.[43] Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. Cells that are able to take up the DNA are called competent cells.
It has been found that growth of Gram-negative bacteria in 20 mM Mg2+ reduces the number of protein-to-lipopolysaccharide bonds by increasing the ratio of ionic to covalent bonds, which increases membrane fluidity, facilitating transformation.[44] The role of lipopolysaccharides here is supported by the observation that cells with shorter O-side chains are transformed more effectively – perhaps because of improved DNA accessibility.
The surface of bacteria such as E. coli is negatively charged due to phospholipids and lipopolysaccharides on its cell surface, and the DNA is also negatively charged. One function of the divalent cation therefore would be to shield the charges by coordinating the phosphate groups and other negative charges, thereby allowing a DNA molecule to adhere to the cell surface.
DNA entry into E. coli cells proceeds through channels known as zones of adhesion or Bayer's junctions, with a typical cell carrying as many as 400 such zones. Their role was established when cobalamin (which also uses these channels) was found to competitively inhibit DNA uptake. Another type of channel implicated in DNA uptake consists of poly(HB):polyP:Ca. In this complex, poly(HB) is envisioned to wrap around the DNA (itself a polyphosphate), carried in a shield formed by Ca2+ ions.[44]
It is suggested that exposing the cells to divalent cations under cold conditions may also change or weaken the cell surface structure, making it more permeable to DNA. The heat pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall.
Electroporation is another method of promoting competence. In this method the cells are briefly shocked with an electric field of 10-20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms.
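As a quick back-of-the-envelope check of the field strengths quoted above (the cuvette settings and function name here are illustrative, not from the source), the field strength follows directly from the applied voltage and the electrode gap:

```python
def field_strength_kv_per_cm(voltage_v: float, gap_cm: float) -> float:
    """Electric field strength across an electroporation cuvette, in kV/cm."""
    return voltage_v / 1000.0 / gap_cm

# Hypothetical settings: 2500 V across a 0.2 cm (2 mm) cuvette gap
print(f"{field_strength_kv_per_cm(2500, 0.2):.1f} kV/cm")  # 12.5 kV/cm
```

A 2 mm cuvette at 2500 V thus lands within the 10-20 kV/cm range mentioned above.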
Yeast
Most species of yeast, including Saccharomyces cerevisiae, may be transformed by exogenous DNA in the environment. Several methods have been developed to facilitate this transformation at high frequency in the lab.[45]
- Yeast cells may be treated with enzymes to degrade their cell walls, yielding spheroplasts. These cells are very fragile but take up foreign DNA at a high rate.[46]
- Exposing intact yeast cells to alkali cations such as those of caesium or lithium allows the cells to take up plasmid DNA.[47] Later protocols adapted this transformation method, using lithium acetate, polyethylene glycol, and single-stranded DNA.[48] In these protocols, the single-stranded DNA preferentially binds to the yeast cell wall, preventing plasmid DNA from doing so and leaving it available for transformation.[49]
- Electroporation: Formation of transient holes in the cell membranes using electric shock; this allows DNA to enter as described above for bacteria.[50]
- Enzymatic digestion[51] or agitation with glass beads[52] may also be used to transform yeast cells.
Efficiency – Different yeast genera and species take up foreign DNA with different efficiencies.[53] Also, most transformation protocols have been developed for baker's yeast, S. cerevisiae, and thus may not be optimal for other species. Even within one species, different strains have different transformation efficiencies, sometimes differing by three orders of magnitude. For instance, when S. cerevisiae strains were transformed with 10 μg of plasmid YEp13, the strain DKD-5D-H yielded between 550 and 3115 colonies while strain OS1 yielded fewer than five colonies.[54]
Plants
A number of methods are available to transfer DNA into plant cells. Some vector-mediated methods are:
- Agrobacterium-mediated transformation is the easiest and simplest method of plant transformation. Plant tissue (often leaves) is cut into small pieces, e.g. 10 × 10 mm, and soaked for ten minutes in a fluid containing suspended Agrobacterium. The bacteria attach to many of the plant cells exposed by the cut. The plant cells secrete wound-related phenolic compounds, which in turn act to upregulate the virulence operon of the Agrobacterium. The virulence operon includes many genes that encode proteins forming part of a Type IV secretion system, which exports proteins and DNA (delineated by specific recognition motifs called border sequences and excised as a single strand from the virulence plasmid) from the bacterium into the plant cell through a structure called a pilus. The transferred DNA (called T-DNA) is piloted to the plant cell nucleus by nuclear localization signals present in the Agrobacterium protein VirD2, which is covalently attached to the end of the T-DNA at the right border (RB). Exactly how the T-DNA is integrated into the host plant genomic DNA is an active area of plant biology research. Assuming that a selection marker (such as an antibiotic resistance gene) was included in the T-DNA, the transformed plant tissue can be cultured on selective media to produce shoots. The shoots are then transferred to a different medium to promote root formation. Once roots begin to grow from the transgenic shoot, the plants can be transferred to soil to complete a normal life cycle (make seeds). The seeds from this first plant (called the T1, for first transgenic generation) can be planted on a selective medium (containing an antibiotic) or, if a herbicide resistance gene was used, could alternatively be planted in soil and later treated with herbicide to kill wildtype segregants.
Some plant species, such as Arabidopsis thaliana, can be transformed by dipping the flowers or whole plant into a suspension of Agrobacterium tumefaciens, typically strain C58 (C=Cherry, 58=1958, the year in which this particular strain of A. tumefaciens was isolated from a cherry tree in an orchard at Cornell University in Ithaca, New York). Though many plants remain recalcitrant to transformation by this method, research is ongoing that continues to add to the list of species that have been successfully modified in this manner.
- Viral transformation (transduction): Package the desired genetic material into a suitable plant virus and allow this modified virus to infect the plant. If the genetic material is DNA, it can recombine with the chromosomes to produce transformant cells. However, the genomes of most plant viruses consist of single-stranded RNA, which replicates in the cytoplasm of the infected cell. For such genomes this method is a form of transfection and not true transformation, since the inserted genes never reach the nucleus of the cell and do not integrate into the host genome. The progeny of the infected plants are virus-free and also free of the inserted gene.
Some vector-less methods include:
- Gene gun: Also referred to as particle bombardment, microprojectile bombardment, or biolistics. Particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material will stay in the cells and transform them. This method also allows transformation of plant plastids. The transformation efficiency is lower than in Agrobacterium-mediated transformation, but most plants can be transformed with this method.
- Electroporation: Formation of transient holes in cell membranes using electric pulses of high field strength; this allows DNA to enter as described above for bacteria.[55]
Fungi
There are several methods for producing transgenic fungi, most of them analogous to those used for plants. However, fungi have to be treated differently due to some of their microscopic and biochemical traits:
- A major issue is the dikaryotic state in which parts of some fungi exist; dikaryotic cells contain two haploid nuclei, one from each parent fungus. If only one of these is transformed, which is the rule, the percentage of transformed nuclei decreases after each sporulation.[56]
- Fungal cell walls are quite thick, hindering DNA uptake, so (partial) removal is often required;[57] complete degradation, which is sometimes necessary,[56] yields protoplasts.
- Mycelial fungi consist of filamentous hyphae, which are separated, if at all, by internal cell walls interrupted by pores big enough to enable nutrients and organelles, sometimes even nuclei, to travel through each hypha. As a result, individual cells usually cannot be separated. This is problematic, as neighbouring transformed cells may render untransformed ones immune to selection treatments, e.g. by delivering nutrients or proteins for antibiotic resistance.[56]
- Additionally, growth (and thereby mitosis) of these fungi occurs exclusively at the tips of their hyphae, which can also pose problems.[56]
As stated earlier, a number of the methods used for plant transformation also work in fungi:
- Agrobacterium is capable of infecting not only plants but also fungi; however, unlike plants, fungi do not secrete the phenolic compounds necessary to trigger Agrobacterium, so these have to be added, e.g. in the form of acetosyringone.[56]
- Thanks to the development of an expression system for small RNAs in fungi, the introduction of a CRISPR/Cas9 system into fungal cells became possible.[56] In 2016 the USDA declared that it would not regulate a white button mushroom strain edited with CRISPR/Cas9 to prevent fruit body browning, prompting a broad discussion about placing CRISPR/Cas9-edited crops on the market.[58]
- Physical methods like electroporation, biolistics ("gene gun"), and sonoporation, which uses cavitation of gas bubbles produced by ultrasound to penetrate the cell membrane, are also applicable to fungi.[59]
Animals
Introduction of DNA into animal cells is usually called transfection, and is discussed in the corresponding article.
Practical aspects of transformation in molecular biology
The discovery of artificially induced competence in bacteria allows bacteria such as Escherichia coli to be used as a convenient host for the manipulation of DNA as well as for expressing proteins. Typically, plasmids are used for transformation in E. coli. In order to be stably maintained in the cell, a plasmid DNA molecule must contain an origin of replication, which allows it to be replicated in the cell independently of the replication of the cell's own chromosome.
The efficiency with which a competent culture can take up exogenous DNA and express its genes is known as the transformation efficiency and is measured in colony-forming units (cfu) per μg of DNA used. A transformation efficiency of 1×10⁸ cfu/μg for a small plasmid like pUC19 is roughly equivalent to 1 in 2000 of the plasmid molecules used being transformed.
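The definition above can be turned into a small worked example (all numbers are hypothetical and the function name is illustrative): the efficiency is the colony count scaled by the amount of DNA used and by the fraction of the recovery culture actually spread on the plate.

```python
def transformation_efficiency(colonies: int, dna_ug: float,
                              fraction_plated: float) -> float:
    """Transformation efficiency in cfu per microgram of DNA,
    corrected for the fraction of the outgrowth actually plated."""
    return colonies / (dna_ug * fraction_plated)

# Hypothetical experiment: 100 pg (1e-4 ug) of plasmid used,
# 1/10 of the outgrowth plated, 250 colonies counted.
eff = transformation_efficiency(colonies=250, dna_ug=1e-4, fraction_plated=0.1)
print(f"{eff:.1e} cfu/ug")  # 2.5e+07 cfu/ug
```

Correcting for the plated fraction matters in practice: plating the whole outgrowth without the correction would understate the efficiency tenfold in this example.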
In calcium chloride transformation, the cells are prepared by chilling them in the presence of Ca2+ (in CaCl2 solution), making the cells permeable to plasmid DNA. The cells are incubated on ice with the DNA, and then briefly heat-shocked (e.g., at 42 °C for 30–120 seconds). This method works very well for circular plasmid DNA. Non-commercial preparations should normally give 10⁶ to 10⁷ transformants per microgram of plasmid; a poor preparation will give about 10⁴/μg or less, but a good preparation of competent cells can give up to ~10⁸ colonies per microgram of plasmid.[60] Protocols, however, exist for making supercompetent cells that may yield a transformation efficiency of over 10⁹.[61]
The chemical method, however, usually does not work well for linear DNA, such as fragments of chromosomal DNA, probably because the cell's native exonuclease enzymes rapidly degrade linear DNA. In contrast, cells that are naturally competent are usually transformed more efficiently with linear DNA than with plasmid DNA.

The transformation efficiency using the CaCl2 method decreases with plasmid size, and electroporation therefore may be a more effective method for the uptake of large plasmid DNA.[62] Cells used in electroporation should be prepared first by washing in cold double-distilled water to remove charged particles that may create sparks during the electroporation process.
Selection and screening in plasmid transformation
Because transformation usually produces a mixture of relatively few transformed cells and an abundance of non-transformed cells, a method is necessary to select for the cells that have acquired the plasmid.[63] The plasmid therefore requires a selectable marker such that those cells without the plasmid may be killed or have their growth arrested. Antibiotic resistance is the most commonly used marker for prokaryotes. The transforming plasmid contains a gene that confers resistance to an antibiotic to which the bacteria are otherwise sensitive. The mixture of treated cells is cultured on media that contain the antibiotic so that only transformed cells are able to grow. Another method of selection is the use of certain auxotrophic markers that can compensate for an inability to metabolise certain amino acids, nucleotides, or sugars. This method requires the use of suitably mutated strains that are deficient in the synthesis or utilization of a particular biomolecule, and the transformed cells are cultured in a medium that allows only cells containing the plasmid to grow.
In a cloning experiment, a gene may be inserted into a plasmid used for transformation. However, in such an experiment, not all the plasmids may contain a successfully inserted gene. Additional techniques may therefore be employed to screen for transformed cells that contain the plasmid with the insert. Reporter genes can be used as markers, such as the lacZ gene, which codes for β-galactosidase and is used in blue-white screening. This method of screening relies on the principle of α-complementation, where a fragment of the lacZ gene (lacZα) in the plasmid can complement another mutant lacZ gene (lacZΔM15) in the cell. Both genes by themselves produce non-functional peptides; however, when expressed together, as when a plasmid containing lacZα is transformed into lacZΔM15 cells, they form a functional β-galactosidase. The presence of an active β-galactosidase may be detected when cells are grown on plates containing X-gal, forming characteristic blue colonies. However, the multiple cloning site, where a gene of interest may be ligated into the plasmid vector, is located within the lacZα gene. Successful ligation therefore disrupts the lacZα gene, and no functional β-galactosidase can form, resulting in white colonies. Cells containing a successfully ligated insert can thus be easily distinguished by their white coloration from the unsuccessful blue ones.
Other commonly used reporter genes are green fluorescent protein (GFP), which produces cells that glow green under blue light, and the enzyme luciferase, which catalyzes a reaction with luciferin to emit light. The recombinant DNA may also be detected using other methods such as nucleic acid hybridization with radioactive RNA probe, while cells that expressed the desired protein from the plasmid may also be detected using immunological methods.
References
- Birnboim HC, Doly J (November 1979). "A rapid alkaline extraction procedure for screening recombinant plasmid DNA". Nucleic Acids Research. 7 (6): 1513–23. doi:10.1093/nar/7.6.1513. PMC 342324. PMID 388356.
External links
- Bacterial Transformation (a Flash Animation)
- "Ready, aim, fire!" At the Max Planck Institute for Molecular Plant Physiology in Potsdam-Golm plant cells are 'bombarded' using a particle gun
https://en.wikipedia.org/wiki/Transformation_(genetics)
In United States copyright law, transformative use or transformation is a type of fair use that builds on a copyrighted work in a different manner or for a different purpose from the original, and thus does not infringe its holder's copyright. Transformation is an important issue in deciding whether a use meets the first factor of the fair-use test, and is generally critical for determining whether a use is in fact fair, although no one factor is dispositive.
Transformativeness is a characteristic of such derivative works that makes them transcend, or place in a new light, the underlying works on which they are based. In computer- and Internet-related works, the transformative characteristic of the later work is often that it provides the public with a benefit not previously available to it, which would otherwise remain unavailable. Such transformativeness weighs heavily in a fair use analysis and may excuse what seems a clear copyright infringement from liability.
In United States patent law the term also refers to the test set in In re Bilski: that a patent-eligible invention must "transform a particular article into a different state or thing".
https://en.wikipedia.org/wiki/Transformative_use
Transformation optics is a branch of optics which applies metamaterials to produce spatial variations, derived from coordinate transformations, which can direct chosen bandwidths of electromagnetic radiation. This allows for the construction of new composite artificial devices that probably could not exist without metamaterials and coordinate transformation. Computing power that became available in the late 1990s enabled the calculation of prescribed quantitative values for the permittivity and permeability, the constitutive parameters, which produce localized spatial variations. The aggregate value of all the constitutive parameters produces an effective value, which yields the intended or desired results.
Hence, complex artificial materials, known as metamaterials, are used to produce transformations in optical space.
The mathematics underpinning transformation optics is similar to the equations that describe how gravity warps space and time, in general relativity. However, instead of space and time, these equations show how light can be directed in a chosen manner, analogous to warping space. For example, one potential application is collecting sunlight with novel solar cells by concentrating the light in one area. Hence, a wide array of conventional devices could be markedly enhanced by applying transformation optics.[1][2][3][4][5]
Coordinate transformations
Transformation optics has its beginnings in two research endeavors and their conclusions. They were published on May 25, 2006, in the same issue of the peer-reviewed journal Science. The two papers describe tenable theories on bending or distorting light to electromagnetically conceal an object. Both papers notably map the initial configuration of the electromagnetic fields onto a Cartesian mesh. Twisting the Cartesian mesh, in essence, transforms the coordinates of the electromagnetic fields, which in turn conceal a given object. Hence, with these two papers, transformation optics was born.[5]
Transformation optics rests on the capability to bend light, or electromagnetic waves and energy, in any preferred or desired fashion for a given application. Maxwell's equations do not vary in form when the coordinates are transformed; instead, it is the values of the material parameters that "transform". Transformation optics developed from the capability to choose those parameters in a fabricated material, known as a metamaterial. Hence, since Maxwell's equations retain the same form, it is the successive values of permittivity and permeability that change from point to point. Permittivity and permeability are, in a sense, responses to the electric and magnetic fields of a radiated light source, respectively, among other descriptions. The precise degree of electric and magnetic response can be controlled in a metamaterial, point by point. Since so much control can be maintained over the responses of the material, this leads to an enhanced and highly flexible gradient-index material. The conventionally predetermined refractive index of ordinary materials instead becomes an independent spatial gradient that can be controlled at will. Therefore, transformation optics is a new method for creating novel and unique optical devices.[1][2][6][7]
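The form invariance of Maxwell's equations can be stated concretely. Under a coordinate transformation with Jacobian matrix Λ, the standard result of transformation optics (the notation here is ours) is that the original medium is mapped to an equivalent medium with transformed constitutive parameters:

```latex
\varepsilon'^{\,i'j'} = \frac{\Lambda^{i'}{}_{i}\,\Lambda^{j'}{}_{j}\,\varepsilon^{ij}}{\det\Lambda},
\qquad
\mu'^{\,i'j'} = \frac{\Lambda^{i'}{}_{i}\,\Lambda^{j'}{}_{j}\,\mu^{ij}}{\det\Lambda},
\qquad
\Lambda^{i'}{}_{i} = \frac{\partial x'^{\,i'}}{\partial x^{i}}.
```

A device design thus amounts to choosing a transformation and then realizing the resulting ε′ and μ′, point by point, in a metamaterial.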
Transformation optics can go beyond cloaking (for example, mimicking celestial mechanics) because its control of the trajectory and path of light is highly effective. Transformation optics is a field of optical and materials engineering and science embracing nanophotonics, plasmonics, and optical metamaterials.
Developments
Transformation optics is the foundation for a diverse set of theoretical, numerical, and experimental developments, involving the perspectives of both the physics and engineering communities. These multi-disciplinary perspectives on inquiry and material design develop an understanding of material behaviors, properties, and potential applications for this field.
If a coordinate transformation can be derived or described, a ray of light (in the optical limit) will follow lines of a constant coordinate. There are constraints on the transformations, as listed in the references. In general, however, a particular goal can be accomplished using more than one transformation. The classic cylindrical cloak (first both simulated and demonstrated experimentally) can be created with many transformations. The simplest, and most often used, is a linear coordinate mapping in the radial coordinate. There is significant ongoing research into determining advantages and disadvantages of particular types of transformations, and what attributes are desirable for realistic transformations. One example of this is the broadband carpet cloak: the transformation used was quasi-conformal. Such a transformation can yield a cloak that uses non-extreme values of permittivity and permeability, unlike the classic cylindrical cloak, which required some parameters to vary towards infinity at the inner radius of the cloak.
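As a sketch of the linear radial mapping described above, the material parameters of the classic cylindrical cloak can be written in closed form; the inner and outer radii below are illustrative values, not from any particular experiment:

```python
def cloak_parameters(r, a=1.0, b=2.0):
    """Relative permittivity/permeability components of the classic
    cylindrical cloak with linear radial map r' = a + r*(b - a)/b,
    evaluated at physical radius r, with a < r <= b."""
    eps_r = (r - a) / r                       # radial component
    eps_theta = r / (r - a)                   # azimuthal component
    eps_z = (b / (b - a)) ** 2 * (r - a) / r  # axial component
    return eps_r, eps_theta, eps_z

# The azimuthal component diverges toward the inner radius a,
# illustrating the "extreme values" the text refers to:
for r in (1.9, 1.5, 1.01):
    print(r, cloak_parameters(r))
```

The divergence of the azimuthal component as r approaches a is exactly the extreme-parameter requirement that quasi-conformal transformations (as in the broadband carpet cloak) were designed to avoid.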
General coordinate transformations can be derived which compress or expand space, bend or twist space, or even change the topology (e.g. by mimicking a wormhole). Much current interest involves designing invisibility cloaks, event cloaks, field concentrators, or beam-bending waveguides.
Mimicking celestial mechanics
The interactions of light and matter with spacetime, as predicted by general relativity, can be studied using a new type of artificial optical material that features extraordinary abilities to bend light (which is actually electromagnetic radiation). This research creates a link between the newly emerging field of artificial optical metamaterials and that of celestial mechanics, thus opening a new possibility to investigate astronomical phenomena in a laboratory setting. The recently introduced class of specially designed optical media can mimic the periodic, quasi-periodic and chaotic motions observed in celestial objects subjected to gravitational fields.[8][9][10]
Hence, a new class of metamaterials was introduced, with the nomenclature “continuous-index photon traps” (CIPTs). CIPTs have applications as optical cavities. As such, CIPTs can control, slow and trap light in a manner similar to celestial phenomena such as black holes, strange attractors, and gravitational lenses.[8][9]
A composite of air and the dielectric gallium indium arsenide phosphide (GaInAsP) operated in the infrared spectral range and featured a high refractive index with low absorption.[8][11]
This opens an avenue to investigate light phenomena that imitate orbital motion, strange attractors and chaos in a controlled laboratory environment, by merging the study of optical metamaterials with classical celestial mechanics.[9]
If a metamaterial could be produced that did not have high intrinsic loss and a narrow frequency range of operation, it could be employed as a medium to simulate light motion in a curved spacetime vacuum. Such a proposal has been brought forward, and metamaterials have become prospective media in this type of study. The classical optical-mechanical analogy makes it possible to study light propagation in homogeneous media as an accurate analogy to the motion of massive bodies, and of light, in gravitational potentials. A direct mapping of the celestial phenomena is accomplished by observing photon motion in a controlled laboratory environment. The materials could facilitate the periodic, quasi-periodic and chaotic light motion inherent to celestial objects subjected to complex gravitational fields.[8]
Twisting the optical metamaterial transforms its "space" into new coordinates. Light that travels in real space is curved in the twisted space, as applied in transformation optics. This effect is analogous to starlight that passes near a strong gravitational field and experiences curved spacetime, a gravitational lensing effect. This analogue between classical electromagnetism and general relativity shows the potential of optical metamaterials for studying relativity phenomena such as gravitational lensing.[8][11]
Observations of such celestial phenomena by astronomers can sometimes take a century of waiting. Chaos in dynamic systems is observed in areas as diverse as molecular motion, population dynamics and optics. In particular, a planet around a star can undergo chaotic motion if a perturbation, such as another large planet, is present. However, owing to the large spatial distances between the celestial bodies, and the long periods involved in the study of their dynamics, the direct observation of chaotic planetary motion has been a challenge. The use of the optical-mechanical analogy may enable such studies to be accomplished in a bench-top laboratory setting at any prescribed time.[8][11]
The study also points toward the design of novel optical cavities and photon traps for application in microscopic devices and laser systems.[8]
- For related information see: Chaos theory and General relativity
Producing black holes with metamaterials
Matter propagating in a curved spacetime is similar to electromagnetic wave propagation in a curved space or in an inhomogeneous metamaterial, as stated in the previous section. Hence a black hole can possibly be simulated using electromagnetic fields and metamaterials. In July 2009 a metamaterial structure forming an effective black hole was theorized, and numerical simulations showed highly efficient light absorption.[10][12]
The first experimental demonstration of an electromagnetic black hole, at microwave frequencies, occurred in October 2009. The proposed black hole was composed of non-resonant and resonant metamaterial structures, which can efficiently absorb electromagnetic waves coming from all directions owing to local control of the electromagnetic fields. It was constructed as a thin cylinder, 21.6 centimeters in diameter, comprising 60 concentric rings of metamaterials. This structure created the gradient index of refraction necessary for bending light in this way. However, it was characterized as an artificial, inferior substitute for a real black hole: it absorbed only about 80% of incident radiation in the microwave range, has no internal source of energy, and is simply a light absorber. The light-absorption capability could be beneficial if it could be adapted to technologies such as solar cells. However, the device is limited to the microwave range.[13][14]
Also in 2009, transformation optics was employed to mimic a black hole of Schwarzschild form. Similar properties of the photon sphere were also found numerically for the metamaterial black hole. Several reduced versions of the black-hole systems were proposed for easier implementation.[15]
At MIT, computer simulations by Fung, along with laboratory experiments, are guiding the design of a metamaterial with a multilayer sawtooth structure that slows and absorbs light at 95% efficiency over a wide range of wavelengths and a wide range of incident angles. This gives it an extremely wide window for colors of light.
Multi-dimensional universe
Engineering optical space with metamaterials could be useful to reproduce an accurate laboratory model of the physical multiverse. "This ‘metamaterial landscape’ may include regions in which one or two spatial dimensions are compactified." Metamaterial models appear to be useful for non-trivial models such as 3D de Sitter space with one compactified dimension, 2D de Sitter space with two compactified dimensions, 4D de Sitter dS4, and anti-de Sitter AdS4 spaces.[10][16]
Gradient index lensing
Transformation optics is employed to increase capabilities of gradient index lenses.
Conventional optical limitations
Optical elements (lenses) perform a variety of functions, ranging from image formation, to light projection or light collection. The performance of these systems is frequently limited by their optical elements, which dominate system weight and cost, and force tradeoffs between system parameters such as focal length, field of view (or acceptance angle), resolution, and range.[17]
Conventional lenses are ultimately limited by geometry. The available design parameters are a single index of refraction (n) per lens element and variations in the element surface profile, including continuous surfaces (lens curvature) and/or discontinuous surfaces (diffractive optics). Light rays undergo refraction at the surfaces of each element, but travel in straight lines within the lens. Since the design space of conventional optics is limited to a combination of refractive index and surface structure, correcting for aberrations (for example through the use of achromatic or diffractive optics) leads to large, heavy, complex designs, and/or greater losses, lower image quality, and manufacturing difficulties.[17]
GRIN lenses
Gradient index (GRIN) lenses, as the name implies, are optical elements whose index of refraction varies within the lens. Control of the internal refraction allows light to be steered in curved trajectories through the lens. GRIN optics thus increase the design space to include the entire volume of the optical elements, providing the potential for dramatically reduced size, weight, element count, and assembly cost, as well as opening up new space to trade between performance parameters. However, past efforts to make large-aperture GRIN lenses have had limited success due to restricted refractive index change, poor control over index profiles, and/or severe limitations in lens diameter.[17]
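The curved trajectories inside a GRIN element can be sketched with the paraxial ray equation, y'' ≈ (1/n) ∂n/∂y, integrated step by step. The parabolic profile and coefficients below are illustrative assumptions, not the parameters of any real lens:

```python
def trace_ray(y0, slope0=0.0, n0=1.5, g=0.1, dz=0.001, steps=20000):
    """Integrate the paraxial ray equation y'' = (dn/dy)/n for a
    hypothetical parabolic GRIN profile n(y) = n0 * (1 - (g*y)**2 / 2).
    Returns the ray height sampled along the propagation axis z."""
    y, s = y0, slope0
    path = [y]
    for _ in range(steps):
        n = n0 * (1.0 - (g * y) ** 2 / 2.0)
        dn_dy = -n0 * g * g * y
        s += (dn_dy / n) * dz   # the ray bends toward higher index (the axis)
        y += s * dz
        path.append(y)
    return path

path = trace_ray(y0=1.0)
# Instead of a straight line, the ray oscillates about the optical axis,
# which is how a parabolic GRIN rod focuses light.
```

In such a profile, paraxial rays follow approximately sinusoidal paths with period 2π/g, which is the behavior this numerical sketch reproduces.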
Recent advances
Recent advances in materials science have led to at least one method for developing large (>10 mm) GRIN lenses with three-dimensional gradient indices. There is the possibility of adding expanded deformation capabilities to GRIN lenses: controlled expansion, contraction, and shear (for variable-focus lenses or asymmetric optical variations). These capabilities have been demonstrated. Additionally, recent advances in transformation optics and computational power provide a unique opportunity to design, assemble and fabricate elements in order to advance the utility and availability of GRIN lenses across a wide range of optics-dependent systems, as defined by need. A possible future capability could be to further advance lens-design methods and tools coupled to scaled-up fabrication processes.[17]
Battlefield applications
Transformation optics has potential applications for the battlefield. The versatile properties of metamaterials can be tailored to fit almost any practical need, and transformation optics shows that space for light can be bent in almost any arbitrary way. This is perceived as providing new capabilities to soldiers in the battlefield. For battlefield scenarios, benefits from metamaterials have both short-term and long-term impacts.[18]
For example, quickly determining whether a distant cloud is harmless or an aerosol released by enemy chemical or biological warfare is very difficult. However, with the new metamaterials being developed, the ability may exist to see things smaller than the wavelength of light, something which has yet to be achieved in the far field. Utilizing metamaterials in a new lens may allow soldiers to see pathogens and viruses that are impossible to detect with any current visual device.[18]
Harnessing subwavelength capabilities would then allow for other advancements which reach beyond the battlefield. All kinds of materials could be made by nano-manufacturing and could go into electronic and optical devices, from night-vision goggles to distance sensors to other kinds of sensors. Longer-term views include the possibility of cloaking materials, which would provide "invisibility" by redirecting light around a cylindrical shape.[18]
See also
- Acoustic metamaterials
- Chirality (electromagnetism)
- Metamaterial
- Metamaterial absorber
- Metamaterial antennas
- Metamaterial cloaking
- Negative index metamaterials
- Nonlinear metamaterials
- Photonic metamaterials
- Photonic crystal
- Seismic metamaterials
- Split-ring resonator
- Superlens
- Theories of cloaking
- Tunable metamaterials
References
A recently published theory has suggested that a cloak of invisibility is in principle possible, at least over a narrow frequency band. We describe here the first practical realization of such a cloak.
- Kyzer, Lindy OCPA – Media Relations Division (Aug 21, 2008). "Army research on invisibility not science fiction". U.S. Army. Retrieved 2010-06-04.
Further reading and general references
- Hecht, Jeff; Opto IQ (Oct 1, 2009). "Photonic Frontiers: Metamaterials and Transformation Optics". PennWell Corporation. Retrieved 2011-03-10.
Newest metamaterials promise customized optical properties
- BioScience Technology (Oct 1, 2009). "Artificial black holes made with metamaterials". Advantage Business Media. Retrieved 2011-03-10.
- Pendry, John (2009). "Metamaterials & Transformation Optics" (PDF). Imperial College – The Blackett Laboratory. Retrieved 2011-03-10.
- Forani, Jonathan (Oct 28, 2019). "Canadian-made 'invisibility shield' could hide people, spacecraft". Toronto. Retrieved 2019-03-10.
- Chen, Huanyang; Chan, C. T.; Sheng, Ping (2010). "Transformation optics and metamaterials". Nature Materials. 9 (5): 387–96. Bibcode:2010NatMa...9..387C. doi:10.1038/nmat2743. PMID 20414221.
- Shyroki, Dzmitry M. (2003). "Note on transformation to general curvilinear coordinates for Maxwell's curl equations". arXiv:physics/0307029.
- Ward, A. J.; Pendry, J. B. (1996). "Refraction and geometry in Maxwell's equations". Journal of Modern Optics. 43 (4): 773. Bibcode:1996JMOp...43..773W. CiteSeerX 10.1.1.205.5758. doi:10.1080/09500349608232782.
- Leonhardt, Ulf; Philbin, Thomas G. (2009). Chapter 2 Transformation Optics and the Geometry of Light. Progress in Optics. Vol. 53. p. 69. arXiv:0805.4778. doi:10.1016/S0079-6638(08)00202-3. ISBN 9780444533609. S2CID 15151960.
- Chen, Huanyang (2009). "Transformation optics in orthogonal coordinates". Journal of Optics A. 11 (7): 075102. arXiv:0812.4008. Bibcode:2009JOptA..11g5102C. doi:10.1088/1464-4258/11/7/075102.
- Nicolet, André; Zolla, Frédéric; Geuzaine, Christophe (2010). "Transformation Optics, Generalized Cloaking and Superlenses". IEEE Transactions on Magnetics. 46 (8): 2975. arXiv:1002.1644. Bibcode:2010ITM....46.2975N. doi:10.1109/TMAG.2010.2043073. S2CID 12891014.
- Cai, Wenshan; Vladimir Shalaev (November 2009). Optical Metamaterials: Fundamentals and Applications. New York: Springer-Verlag. pp. Chapter 9. ISBN 978-1-4419-1150-6.
https://en.wikipedia.org/wiki/Transformation_optics
In linguistics, transformational grammar (TG) or transformational-generative grammar (TGG) is part of the theory of generative grammar, especially of natural languages. It considers grammar to be a system of rules that generate exactly those combinations of words that form grammatical sentences in a given language and involves the use of defined operations (called transformations) to produce new sentences from existing ones. The method is commonly associated with American linguist Noam Chomsky.
Generative algebra was first introduced to general linguistics by the structural linguist Louis Hjelmslev[1] although the method was described before him by Albert Sechehaye in 1908.[2] Chomsky adopted the concept of transformations from his teacher Zellig Harris, who followed the American descriptivist separation of semantics from syntax. Hjelmslev's structuralist conception including semantics and pragmatics is incorporated into functional grammar.[3]
https://en.wikipedia.org/wiki/Transformational_grammar
In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping R^n to R^m and x is a column vector with n entries, then T(x) = Ax for some m × n matrix A, called the transformation matrix of T.
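A matrix representation of a linear transformation can be shown in a minimal pure-Python sketch; the 90° rotation matrix used here is a standard textbook example:

```python
def apply(matrix, vec):
    """Apply a linear transformation: multiply a matrix
    (given as a list of rows) by a column vector (a list)."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

rotate_90 = [[0, -1],
             [1, 0]]   # represents T(x, y) = (-y, x), a 90-degree rotation

print(apply(rotate_90, [1, 0]))  # the x unit vector maps to [0, 1]
```

Because the transformation is linear, applying the same matrix to any vector respects sums and scalar multiples, which is exactly what the matrix product encodes.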
Transformation in economics refers to a long-term change in dominant economic activity in terms of prevailing relative engagement or employment of able individuals.
Human economic systems undergo a number of deviations and departures from the "normal" state, trend or development. Among them are Disturbance (short-term disruption, temporary disorder), Perturbation (persistent or repeated divergence, predicament, decline or crisis), Deformation (damage, regime change, loss of self-sustainability, distortion), Transformation (long-term change, restructuring, conversion, new “normal”) and Renewal (rebirth, transmutation, corso-ricorso, renaissance, new beginning).
Transformation is a unidirectional and irreversible change in dominant human economic activity (economic sector). Such change is driven by slower or faster continuous improvement in sector productivity growth rate. Productivity growth itself is fueled by advances in technology, inflow of useful innovations, accumulated practical knowledge and experience, levels of education, viability of institutions, quality of decision making and organized human effort. Individual sector transformations are the outcomes of human socio-economic evolution.
Human economic activity has so far undergone at least two fundamental transformations, as the leading sector has changed:
Beyond industry there is no clear pattern now. Some may argue that service sectors (particularly finance) have eclipsed industry, but the evidence is inconclusive and industrial productivity growth remains the main driver of overall economic growth in most national economies.
This evolution naturally proceeds from securing necessary food, through producing useful things, to providing helpful services, both private and public. Accelerating productivity growth rates speed up the transformations, from millennia, through centuries, to decades in the recent era. It is this acceleration which makes transformation a relevant economic category of today, more fundamental in its impact than any recession, crisis or depression. The evolution of four forms of capital (indicated in Fig. 1) accompanies all economic transformations.
Transformation is quite different from accompanying cyclical recessions and crises, despite the similarity of manifested phenomena (unemployment, technology shifts, socio-political discontent, bankruptcies, etc.). However, the tools and interventions used to combat crisis are clearly ineffective for coping with non-cyclical transformations. The problem is whether we face a mere crisis or a fundamental transformation (globalization→relocalization).
https://en.wikipedia.org/wiki/Transformation_in_economics
Four key forms of capital
Fig. 1 refers to the four transformations through the parallel (and overlapping) evolution of four forms of capital: Natural→Built→Human→Social. These evolved forms of capital present a minimal complex of sustainability and self-sustainability of pre-human and human systems.
Natural capital (N). The nature-produced, renewed and reproduced “resources” of land, water, air, raw materials, biomass and organisms. Natural capital is subject to both renewable and non-renewable depletion, degradation, cultivation, recycling and reuse.
Built capital (B). The man-made physical assets of infrastructures, technologies, buildings and means of transportation. This is the manufactured “hardware” of nations. This national hardware must be continually maintained, renewed and modernized to assure its continued productivity, efficiency and effectiveness.
Human capital (H). The continued investment in people's skills, knowledge, education, health & nutrition, abilities, motivation and effort. This is the “software” and “brainware” of a nation, and the most important form of capital for developing nations.
Social capital (S). The enabling infrastructure of institutions, civic communities, cultural and national cohesion, collective and family values, trust, traditions, respect and the sense of belonging. This is the voluntary, spontaneous “social order” which cannot be engineered, but its self-production (autopoiesis) can be nurtured, supported and cultivated.
Looking back, even the Great Depression of the 1930s was not just a crisis, but a long-term transformation from the pre-war industrial economy to the post-war service economy in the U.S. However, in the early 1980s, the service sector had started slowing down its employment absorption and growth potential, ultimately leading to the jobless economy of 2011.
The unrecognized confluence of crisis and transformation, and the inability to separate them, lies at the core of why the old tools (Keynesianism, monetarism) are not working properly. The tools for adapting successfully to paradigmatic transformation have not been developed. An example of a paradigmatic transformation would be the shift from the geocentric to the heliocentric view of our world. Within both views there can be any number of crises, cyclical failures of old theories and practices and searches for new ones. But there was only one transformation, from geocentric to heliocentric, and there was nothing cyclical about it. It was resisted with all the might of the mighty: remember Galileo and Bruno. While crises are cyclical corrections and adjustments, transformations are evolutionary shifts, or even revolutions (industrial, computer), towards new and different levels of existence.
Sector dynamics
Economic sectors evolve (in terms of employment levels), albeit through fluctuations, in one general direction along the so-called S-curve: they emerge, expand, plateau, contract and exit, just like any self-organizing system or living organism. We are naturally interested in the percentage of the total workforce employed in a given sector. The dynamics of this percentage provides clues to where and when new jobs are generated and old ones abandoned.
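The emerge-expand-plateau-contract-exit pattern can be sketched as the difference of two logistic (S-) curves, one for a sector's emergence and one for its later contraction. All parameters below are purely illustrative, not fitted to any real sector:

```python
import math

def logistic(t, midpoint, rate=0.1):
    """Standard logistic S-curve rising from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def employment_share(t, emerge=50, contract=150, peak=0.4):
    """Illustrative sector employment share over time:
    an emergence S-curve minus a later exit S-curve."""
    return peak * (logistic(t, emerge) - logistic(t, contract))

# Sampling the share shows it rise, plateau near its peak, then contract:
shares = [employment_share(t) for t in range(0, 250, 10)]
```

The same hump shape describes agriculture's and manufacturing's historical employment shares in the paragraphs that follow; only the time scales differ.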
A sector's percentage share of employment evolves in dependence on the sector's productivity growth rate. Agriculture has emerged and virtually disappeared as a source of net employment: today, only about ½ percent of the total workforce is employed in US agriculture, the most productive sector of the economy. Manufacturing emerged, peaked and contracted. Services have emerged and started contracting, all due to incessant, unavoidable and desirable productivity growth rates.
U.S. absolute manufacturing output has more than tripled over the past 60 years. Because of productivity growth, these goods were produced by an ever-decreasing number of people. While from 1980 to 2012 total economic output per hour worked increased 85 percent, in manufacturing output per hour skyrocketed by 189 percent. The number of employees in manufacturing was about a third of the total workforce in 1953, about a fifth in 1980, and about a tenth (12 million) in 2012. This decline is now accelerating because of high-tech automation and robotization.
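The arithmetic behind falling employment follows from hours worked = output / (output per hour). The productivity figures below reuse the rounded 1980-2012 numbers quoted above; the assumption that output doubled in both cases is illustrative only, made to isolate the productivity effect:

```python
def hours_index(output_growth, productivity_growth):
    """Index of hours worked (base period = 1.0) after output and
    output-per-hour have each grown by the given fractional amounts."""
    return (1.0 + output_growth) / (1.0 + productivity_growth)

# Assumed doubling of output in both cases (illustrative), with the
# quoted productivity gains: +85% economy-wide, +189% in manufacturing.
economy = hours_index(output_growth=1.0, productivity_growth=0.85)
manufacturing = hours_index(output_growth=1.0, productivity_growth=1.89)

# Manufacturing hours shrink even as its output doubles, while
# economy-wide hours roughly hold steady.
```

This is why rising manufacturing output and falling manufacturing employment are not a contradiction: productivity growth outpaced output growth.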
A public sector of employment has been emerging: government, welfare and unemployment (GWU), based on tax-financed consumption rather than added-value production, sheltered from market forces, and producing public services. (Observe that the unemployed are temporary “employees” of the government, as long as they receive payments.) Creating employment in the GWU sector is achievable at the expense of the productive sectors, i.e. only at the risk of major debt accumulation, in a non-lasting way and with low added value. Sustaining employment growth in such a sector is severely limited by the growing debt financing.
The Four Basic Sectors
The Four Basic Sectors refers to the current stage of sector evolution, in the sequence of the four transformations undergone, namely agriculture, industry, services and GWU (government, welfare and unemployment).
So, the US is at the transforming cusp, and hundreds of years of sector evolution come to a halt. There are only four essential activities humans can do economically: 1. produce food, 2. manufacture goods, 3. provide services (private and public), and 4. do nothing. This is why the idea of a “basic income”, independent of employment, is being considered.
Self-service, disintermediation and customization
The new transformational paradigm could be defined by the ongoing self-organization of the market economy itself: its rates of self-service, disintermediation and mass customization are increasing and becoming most effective at local and regional levels. Because there is no new productive sector to emerge, the economy seeks to reinstate its new balance through these new modes of doing business. Producers and providers are outsourcing their production and services to customers and to technology. Outsourcing to customers is a natural and necessary self-organizing process, encompassing disintermediation, customer integration and mass customization, all driven by global productivity at the cusp of transformation.
Relocalization
Deglobalization is taking place: supply chains are turning into demand chains, large economies are focusing on their internal markets, and outsourcing is followed by “backsourcing”, the return of activities to the countries and locations of their origin. The original slogan, “Think globally, act locally”, is being re-interpreted as exploiting global information and knowledge in local action, under local conditions and contexts.
While globalization refers to a restructuring of the initially distributed and localized world economy into spatially reorganized processes of production and consumption across national economies and political states on a global scale, in deglobalization, people move towards relocalization: the global experience and knowledge becoming embodied in local communities. So, the corso-ricorso of socio-economic transformation is properly captured by a triad Localization → Globalization → Relocalization.
The trend of deglobalization has become much more significant in recent years. The growth of worldwide GDP now exceeds the overall growth of trade for the first time; foreign investment (cross-border capital flows) has plunged by some 40 percent, to only about 60% of its pre-crisis level.[9] World capital flows include loans and deposits, foreign investments, bonds and equities, all down in their cross-border segment.[10] This means that the rate of globalization has reversed its momentum. Globalizers are still worried about 2014 being the year of irreversible decline. Improvement in the internal growth of the U.S., EU, and Japan does not carry into external trade; these economies are starting to function as zero-sum.
Income inequality and long-term unemployment have led to relocalized experiments with a guaranteed minimum income (an idea traceable to Thomas Paine) for all citizens.[11][12] In Switzerland this guarantee would be $33,000 per year, regardless of whether one works or not.[13] Under the name Generation Basic Income, it is now the subject of a Swiss referendum. Unemployment then frees people to pursue their own creative talents.
With this transformation, an entirely new vocabulary is emerging in economics: in addition to deglobalization and relocalization, we also encounter glocalization (adjustment of products to local culture) and local community restoration (regional self-government and direct democracy). With relocalization, an entire new cycle of societal corso-ricorso is brought forth. Local services, local production and local agriculture, based on distributed energy generation, additive manufacturing and vertical farming, are enhancing individual, community and regional autonomy through self-service, disintermediation and mass customization. Both the requisite technologies and the appropriate business models necessary for relocalization are already in place, forming a vital part of our daily business and life experience. The new transformation is well on its way.
See also
- Economic transformation
- Transformation of culture
- Structural change
- Overconsumption
- Fourth Industrial Revolution
- Digital Revolution
- Sharing economy
- Peer production
- Technological unemployment
Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment.
https://en.wikipedia.org/wiki/Technological_unemployment
The Digital Revolution (also known as the Third Industrial Revolution) is the shift from the mechanical and analogue electronic technologies of the Industrial Revolution towards digital electronics, which began in the latter half of the 20th century with the adoption and proliferation of digital computers and digital record-keeping, and which continues to the present day.[1] Implicitly, the term also refers to the sweeping changes brought about by digital computing and communication technologies during this period. By analogy to the Agricultural Revolution (Neolithic) and the First Industrial Revolution (1770–1840), the Digital Revolution marked the beginning of the Information Age.[2]
Central to this revolution is the mass production and widespread use of digital logic, MOSFETs (MOS transistors), integrated circuit (IC) chips, and their derived technologies, including computers, microprocessors, digital cellular phones, and the Internet.[3] These technological innovations have transformed traditional production and business techniques.[4]
The Third Industrial Revolution is expected to be followed by a Fourth Industrial Revolution.[citation needed]
https://en.wikipedia.org/wiki/Digital_Revolution
In economics, structural change is a shift or change in the basic ways a market or economy functions or operates.[1]
Such change can be caused by such factors as economic development, global shifts in capital and labor, changes in resource availability due to war or natural disaster or discovery or depletion of natural resources, or a change in political system. For example, a subsistence economy may be transformed into a manufacturing economy, or a regulated mixed economy may be liberalized.[2] A current driver of structural change in the world economy is globalization.[3] Structural change is possible because of the dynamic nature of the economic system.[4]
Patterns and changes in sectoral employment drive demand shifts through income elasticity. Shifting demand for both locally sourced goods and for imported products is a fundamental part of development.[5][6] The structural changes that move countries through the development process are often viewed in terms of shifts from primary, to secondary and finally, to tertiary production. Technical progress is seen as crucial in the process of structural change, as it involves the obsolescence of skills and vocations, and permanent changes in spending and production, resulting in structural unemployment.[4][7]
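The link between income elasticity and sectoral demand shifts can be sketched with a constant-elasticity demand: when the income elasticity of a good (say, food, per Engel's law) is below one, its budget share falls as income rises, pulling employment out of the sector that produces it. The elasticity and base-share values below are illustrative assumptions:

```python
def budget_share(income, base_income=1.0, base_share=0.4, elasticity=0.5):
    """Constant-elasticity demand: spending scales as income**elasticity,
    so the budget share is base_share * (income/base_income)**(elasticity - 1),
    which declines with income whenever elasticity < 1."""
    return base_share * (income / base_income) ** (elasticity - 1.0)

# As income multiplies, the share of the low-elasticity good keeps falling:
shares = [budget_share(y) for y in (1, 2, 4, 8)]
```

With elasticity above one (e.g. for services), the same formula gives a rising share, which is the demand side of the primary-to-secondary-to-tertiary shift described above.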
https://en.wikipedia.org/wiki/Structural_change
Articles related to economic globalization, one of the three main dimensions of globalization commonly found in academic literature, the other two being political globalization and cultural globalization, alongside the general term globalization. Economic globalization refers to the widespread international movement of goods, capital, services, technology and information. It is the increasing economic integration and interdependence of national, regional, and local economies across the world through an intensification of cross-border movement of goods, services, technologies and capital. Economic globalization primarily comprises the globalization of production, finance, markets, technology, organizational regimes, institutions, corporations, and labour.
https://en.wikipedia.org/wiki/Category:Economic_globalization
Containerization is a system of intermodal freight transport using intermodal containers (also called shipping containers, or ISO containers).[1] Containerization, also referred to as container stuffing or container loading, is the process of unitization of cargoes in exports. Containerization is the predominant form of unitization of export cargoes, as opposed to other systems such as the barge system or palletization.[2] The containers have standardized dimensions. They can be loaded and unloaded, stacked, transported efficiently over long distances, and transferred from one mode of transport to another—container ships, rail transport flatcars, and semi-trailer trucks—without being opened. The handling system is completely mechanized so that all handling is done with cranes[3] and special forklift trucks. All containers are numbered and tracked using computerized systems.
Containerization originated several centuries ago but was not well developed or widely applied until after World War II, when it dramatically reduced the costs of transport, supported the post-war boom in international trade, and was a major element in globalization. Containerization eliminated manual sorting of most shipments and the need for dockfront warehouses, while displacing many thousands of dock workers who formerly simply handled break bulk cargo. Containerization reduced congestion in ports, significantly shortened shipping time, and reduced losses from damage and theft.[4]
Containers can be made from a wide range of materials such as steel, fibre-reinforced polymer, aluminum or a combination. Containers made from weathering steel are used to minimize maintenance needs.
https://en.wikipedia.org/wiki/Containerization
Conceptual economy is a term describing the contribution of creativity, innovation, and design skills to economic competitiveness, especially in the global context.
https://en.wikipedia.org/wiki/Conceptual_economy
Global imbalances refers to the situation in which some countries hold more assets than others. In theory, when the current account is in balance it has a zero value: inflows and outflows of capital cancel each other out. Hence, if the current account persistently shows deficits over a certain period, it is said to show a disequilibrium. Since, by definition, the current accounts and net foreign assets of all the countries in the world must sum to zero, countries that run persistent deficits become indebted to the surplus nations. In recent years, global imbalances have become a concern throughout the world. The United States, like many other advanced economies, has run long-term deficits, while in Asia and the emerging economies the opposite has occurred.
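The zero-sum accounting identity behind global imbalances can be sketched with stylized, invented figures (the numbers below are illustrative only, not actual data):

```python
# Stylized world of four economies: current-account balances (in $bn).
# By construction one country's deficit is another's surplus, so the
# world total must be zero.
balances = {"US": -450, "Euro area": 120, "China": 250, "Rest of world": 80}

assert sum(balances.values()) == 0  # the global identity

# A persistent deficit accumulates as net foreign debt: running the same
# deficit for several years deepens indebtedness to the surplus countries.
years = 5
us_net_foreign_assets = balances["US"] * years
print(us_net_foreign_assets)  # -2250
```

The sketch makes the point in the text concrete: a country cannot run a deficit in isolation; its counterpart surpluses must appear somewhere else in the world, and persistence turns flows into accumulated stocks of debt.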
https://en.wikipedia.org/wiki/Global_imbalances
In traditional usage, a global public good (or global good) is a public good available on a more-or-less worldwide basis.[1] There are many challenges to the traditional definition, which have far-reaching implications in the age of globalization.
https://en.wikipedia.org/wiki/Global_public_good
The transnational nature of such resources points to another problem with a traditional definition of global public goods. Remedies to problems such as air and water pollution are typically legal remedies, and such laws often exist only in the context of geographically-bounded governmental systems.[8] In the case of global public goods—such as climate change mitigation, financial stability, security, knowledge production, and global public health—either international or supranational legal entities (both public and private) must be created to manage these goods.[9] As different types of global public goods often require different types of legal structures to manage them,[9] this can contribute to a proliferation of non-governmental organizations (NGOs) and intergovernmental organizations (IGOs), such as has been the case in the recent past.
https://en.wikipedia.org/wiki/Global_public_good
Implications
At a time when processes of globalization are encompassing increasingly more cultural and natural resources, the ways in which global public goods are created, designed, and managed have far-reaching implications. Issues of globalization, today, are precisely those that are beyond the policy endeavors of states, reflecting a mismatch between the scope of the problem and the authority of decision-making bodies attempting to address such issues.[10] Many goods that might be public by default would be best designated at the policy level as common goods (global-level common-pool resources or global commons), with appropriate regulation, until such time as levels of knowledge, foresight and governing structures might become available to designate such resources as either private or public goods.
Potable water offers perhaps the clearest example. Water has always been an important and life-sustaining drink to humans and is essential to the survival of all known organisms. Over large parts of the world, humans have inadequate access to potable water and use sources contaminated with disease vectors, pathogens or unacceptable levels of toxins or suspended solids. Drinking or using such water in food preparation leads to widespread waterborne diseases, causing acute and chronic illnesses or death and misery in many countries.[11] While the global water cycle is the subject of advanced scientific study and observation, it is still an incompletely understood process. If availability of water for human consumption is left solely to market forces, those who are most in need of water for subsistence-level survival are also those least likely to be able to purchase it at a market price. Since the water cycle and the natural flows of fresh water resources do not obey the limits of political boundaries, neither can these water resources be managed solely by local- or national-level public authorities. Privatization of such resources can be used as a method of avoiding contentious public policy-making processes, but is likely to produce inequities.[12][13][14] The history of the development of water supply and sanitation in Ecuador and the resulting water conflicts there are an example.[15][16] Thoughtful design of transnational or international water management authorities over such global common-pool resources will play a large part in possible solutions to peak water problems.
Moreover, there are a number of global public goods—or global-level common-pool resources—that are necessary conditions for continuing global trade and transactions.[17] Even if one takes a position that globalization has more negative impacts than positive, the economic interdependence of national-level economies has reached a kind of point of no return in terms of continued global economic stability. Thus, continuing global trade and transactions require global public goods such as widespread peace, international economic stability, functioning supranational trade authorities, stable financial and monetary systems, effective law enforcement, relatively healthy populations of consumers and laborers, etc.[17]
https://en.wikipedia.org/wiki/Global_public_good
Global commons is a term typically used to describe international, supranational, and global resource domains in which common-pool resources are found. Global commons include the earth's shared natural resources, such as the high seas, the atmosphere, outer space and the Antarctic in particular.[1] Cyberspace may also meet the definition of a global commons.
https://en.wikipedia.org/wiki/Global_commons
Definition and usage
"Global commons" is a term typically used to describe international, supranational, and global resource domains in which common-pool resources are found. In economics, common goods are rivalrous and non-excludable, constituting one of the four main types of goods.[2] A common-pool resource, also called a common property resource, is a special case of a common good (or public good) whose size or characteristics makes it costly, but not impossible, to exclude potential users. Examples include both natural or human-made resource domains (e.g., a "fishing hole" or an irrigation system). Unlike global public goods, global common-pool resources face problems of congestion, overuse, or degradation because they are subtractable (which makes them rivalrous).[3]
The term "commons" originates from the term common land in the British Isles.[4] "Commoners' rights" referred to traditional rights held by commoners, such as mowing meadows for hay or grazing livestock on common land held in the open field system of old English common law. Enclosure was the process that ended those traditional rights, converting open fields to private property. Today, many commons still exist in England, Wales, Scotland, and the United States, although their extent is much reduced from the millions of acres that existed until the 17th century.[5] There are still over 7,000 registered commons in England alone.[6]
The term "global commons" is typically used to indicate the earth's shared natural resources, such as the deep oceans, the atmosphere, outer space and the Northern and Southern polar regions, the Antarctic in particular.[7]
According to the World Conservation Strategy, a report on conservation published by the International Union for Conservation of Nature and Natural Resources (IUCN) in collaboration with UNESCO and with the support of the United Nations Environment Programme (UNEP) and the World Wildlife Fund (WWF):
"A commons is a tract of land or water owned or used jointly by the members of a community. The global commons includes those parts of the Earth's surface beyond national jurisdictions — notably the open ocean and the living resources found there — or held in common — notably the atmosphere. The only landmass that may be regarded as part of the global commons is Antarctica ..."[8]
Today, the Internet, World Wide Web and resulting cyberspace are often referred to as global commons.[9] Other usages sometimes include references to open access information of all kinds, including arts and culture, language and science, though these are more formally referred to as the common heritage of mankind.[10]
https://en.wikipedia.org/wiki/Global_commons
Management of the global commons
The key challenge of the global commons is the design of governance structures and management systems capable of addressing the complexity of multiple public and private interests, subject to often unpredictable changes, ranging from the local to the global level.[11] As with global public goods, management of the global commons requires pluralistic legal entities, usually international and supranational, public and private, structured to match the diversity of interests and the type of resource to be managed, and stringent enough with adequate incentives to ensure compliance.[12] Such management systems are necessary to avoid, at the global level, the classic tragedy of the commons, in which common resources become overexploited.[13]
There are several key differences in management of resources in the global commons from those of the commons, in general.[14] There are obvious differences in scale of both the resources and the number of users at the local versus the global level. Also, there are differences in the shared culture and expectations of resource users; more localized commons users tend to be more homogeneous and global users more heterogeneous. This contributes to differences in the possibility and time it takes for new learning about resource usage to occur at the different levels. Moreover, global resource pools are less likely to be relatively stable and the dynamics are less easily understood. Many of the global commons are non-renewable on human time scales. Thus, resource degradation is more likely to be the result of unintended consequences that are unforeseen, not immediately observable, or not easily understood. For example, the carbon dioxide emissions that drive climate change continue to do so for at least a millennium after they enter the atmosphere[15] and species extinctions last forever. Importantly, because there are significant differences in the benefits, costs, and interests at the global level, there are significant differences in externalities between more local resource uses and uses of global-level resources.
Several environmental protocols have been established (see List of international environmental agreements) as a type of international law, "an intergovernmental document intended as legally binding with a primary stated purpose of preventing or managing human impacts on natural resources."[16] International environmental protocols came to feature in environmental governance after trans-boundary environmental problems became widely perceived in the 1960s.[17] Following the Stockholm Intergovernmental Conference in 1972, creation of international environmental agreements proliferated.[18] Due to the barriers already discussed, environmental protocols are not a panacea for global commons issues. Often, they are slow to produce the desired effects, tend to the lowest common denominator, and lack monitoring and enforcement. They also take an incremental approach to solutions where sustainable development principles suggest that environmental concerns should be mainstream political issues.
The global ocean
The global or world ocean, as the interconnected system of the Earth's oceanic (or marine) waters that comprise the bulk of the hydrosphere, is a classic global commons.[19] It is divided into a number of principal oceanic areas that are delimited by the continents and various oceanographic features. In turn, oceanic waters are interspersed by many smaller seas, gulfs, and bays. Further, most freshwater bodies ultimately empty into the ocean and are derived through the Earth's water cycle from ocean waters. The Law of the Sea is a body of public international law governing relationships between nations in respect to navigational rights, mineral rights, and jurisdiction over coastal waters. Maritime law, also called Admiralty law, is a body of both domestic law governing maritime activities and private international law governing the relationships between private entities which operate vessels on the oceans. It deals with matters including marine commerce, marine navigation, shipping, sailors, and the transportation of passengers and goods by sea. However, these bodies of law do little to nothing to protect deep oceans from human threats.
In addition to providing significant means of transportation, a large proportion of all life on Earth exists in its ocean, which contains about 300 times the habitable volume of terrestrial habitats. Specific marine habitats include coral reefs, kelp forests, seagrass meadows, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales) 30 meters (98 feet) in length.
At a fundamental level, marine life helps determine the very nature of our planet. Marine life resources provide food (especially food fish), medicines, and raw materials. It is also becoming understood that the well-being of marine organisms and other organisms are linked in very fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle) and of air (such as Earth's respiration, and movement of energy through ecosystems including the ocean). Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth's climate.[20] Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land.[21]
The United Nations Environment Programme (UNEP) has identified several areas of need in managing the global ocean: strengthen national capacities for action, especially in developing countries; improve fisheries management; reinforce cooperation in semi-enclosed and regional seas; strengthen controls over ocean disposal of hazardous and nuclear wastes; and advance the Law of the Sea. Specific problems identified as in need of attention include rising sea levels; contamination by hazardous chemicals (including oil spills); microbiological contamination; ocean acidification; harmful algal blooms; and over-fishing and other overexploitation.[22] Further, the Pew Charitable Trusts Environmental Initiative program has identified a need for a worldwide system of very large, highly protected marine reserves where fishing and other extractive activities are prohibited.[23]
Atmosphere
The atmosphere is a complex dynamic natural gaseous system that is essential to support life on planet Earth. A primary concern for management of the global atmosphere is air pollution, the introduction into the atmosphere of chemicals, particulates, or biological materials that cause discomfort, disease, or death to humans, damage other living organisms such as food crops, or damage the natural environment or built environment. Stratospheric ozone depletion due to air pollution has long been recognized as a threat to human health as well as to the Earth's ecosystems.
Pollution of breathable air is a central problem in the management of the global commons. Pollutants can be in the form of solid particles, liquid droplets, or gases and may be natural or man-made. Although controversial and limited in scope by methods of enforcement, in several parts of the world the polluter pays principle, which makes the party responsible for producing pollution responsible for paying for the damage done to the natural environment, is accepted. It has strong support in most Organisation for Economic Co-operation and Development (OECD) and European Community (EC) countries. It is also known as extended producer responsibility (EPR). EPR seeks to shift the responsibility for dealing with waste from governments (and thus, taxpayers and society at large) to the entities producing it. In effect, it attempts to internalise the cost of waste disposal into the cost of the product, theoretically resulting in producers improving the waste profile of their products, decreasing waste and increasing possibilities for reuse and recycling.
The 1979 Convention on Long-Range Transboundary Air Pollution, or CLRTAP, is an early international effort to protect against and gradually reduce and prevent air pollution. It is implemented by the European Monitoring and Evaluation Programme (EMEP), directed by the United Nations Economic Commission for Europe (UNECE). The Montreal Protocol on Substances that Deplete the Ozone Layer, or Montreal Protocol (a protocol to the Vienna Convention for the Protection of the Ozone Layer), is an international treaty designed to protect the ozone layer by phasing out the production of numerous substances believed to be responsible for ozone depletion. The treaty was opened for signature on 16 September 1987, and entered into force on 1 January 1989. After more than three decades of work, the Vienna Convention and Montreal Protocol were widely regarded as highly successful, both in achieving ozone reductions and as a pioneering model for management of the global commons.[24]
Global dimming is the gradual reduction in the amount of global direct irradiance at the Earth's surface, which has been observed for several decades after the start of systematic measurements in the 1950s. Global dimming is thought to have been caused by an increase in particulates such as sulfate aerosols in the atmosphere due to human action.[25] It has interfered with the hydrological cycle by reducing evaporation and may have reduced rainfall in some areas. Global dimming also creates a cooling effect that may have partially masked the effect of greenhouse gases on global warming.
Global warming and climate change in general are a major concern of global commons management. The Intergovernmental Panel on Climate Change (IPCC), established in 1988 to develop a scientific consensus, concluded in a series of reports that reducing emissions of greenhouse gases was necessary to prevent catastrophic harm. Meanwhile, a 1992 United Nations Framework Convention on Climate Change (FCCC) pledged to work toward "stabilisation of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic [i.e., human-induced] interference with the climate system" (as of 2019 there were 197 parties to the convention, although not all had ratified it).[26] The 1997 Kyoto Protocol to the FCCC set forth binding obligations on industrialised countries to reduce emissions. These were accepted by many countries but not all, and many failed to meet their obligations. The Protocol expired in 2012 and was followed by the 2015 Paris Agreement in which nations made individual promises of reductions. However, the IPCC concluded in a 2018 report that dangerous climate change was inevitable unless much greater reductions were promised and carried out.
Polar regions
The eight Arctic nations (Canada, Denmark (Greenland and the Faroe Islands), Norway, the United States (Alaska), Sweden, Finland, Iceland, and Russia) are all members of the treaty organization, the Arctic Council, as are organizations representing six indigenous populations. The council operates on a consensus basis, mostly dealing with environmental treaties and not addressing boundary or resource disputes.[27] Currently, the Antarctic Treaty and related agreements, collectively called the Antarctic Treaty System or ATS, regulate international relations with respect to Antarctica, Earth's only continent without a native human population. The treaty, which entered into force in 1961 and currently has 50 signatory nations, sets aside Antarctica as a scientific preserve, establishes freedom of scientific investigation and bans military activity on that continent.[28]
Climate change in the Arctic region is leading to widespread ecosystem restructuring.[29] The distribution of species is changing along with the structure of food webs. Changes in ocean circulation appear responsible for the first exchanges of zooplankton between the North Pacific and North Atlantic regions in perhaps 800,000 years. These changes can allow the transmission of diseases from subarctic animals to Arctic ones, and vice versa, posing an additional threat to species already stressed by habitat loss and other impacts. Where these changes lead is not yet clear, but are likely to have far-reaching impacts on Arctic marine ecosystems.
Climate models generally predict that temperature trends due to global warming will be much smaller in Antarctica than in the Arctic,[30] but ongoing research may show otherwise.[31][32]
Outer space
Management of outer space global commons has been contentious since the successful launch of the Sputnik satellite by the former Soviet Union on 4 October 1957. There is no clear boundary between Earth's atmosphere and space, although there are several standard boundary designations: one that deals with orbital velocity (the Kármán line), one that depends on the velocity of charged particles in space, and some that are determined by human factors such as the height at which human blood begins to boil without a pressurized environment (the Armstrong line).
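The boundary designations mentioned above can be made concrete with a short Newtonian-gravity sketch. The Kármán line is conventionally placed at about 100 km altitude; as a rough illustration (not a formal definition of the line), the speed needed for a circular orbit at that altitude follows from v = sqrt(GM/r):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def circular_orbital_velocity(altitude_m: float) -> float:
    """Speed required for a circular orbit at the given altitude (m/s)."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

v = circular_orbital_velocity(100e3)  # at the ~100 km Kármán line
print(f"{v / 1000:.2f} km/s")  # roughly 7.8 km/s
```

The point of the calculation is that sustaining flight at ~100 km requires near-orbital speed, which is why that altitude is a convenient, if conventional, edge-of-space marker.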
Space policy regarding a country's civilian space program, as well as its policy on both military use and commercial use of outer space, intersects with science policy, since national space programs often perform or fund research in space science, and also with defense policy, for applications such as spy satellites and anti-satellite weapons. It also encompasses government regulation of third-party activities such as commercial communications satellites and private spaceflight[33] as well as the creation and application of space law and space advocacy organizations that exist to support the cause of space exploration.
Scientists have outlined a rationale for governance that regulates the current free externalization of true costs and risks, treating orbital space around the Earth as part of the global commons (an "additional ecosystem" or "part of the human environment") that should be subject to the same concerns and regulations as, for example, the oceans on Earth. A 2022 study concluded that this requires "new policies, rules and regulations at national and international level".[34][35]
Policies
The Outer Space Treaty provides a basic framework for international space law. It covers the legal use of outer space by nation states. The treaty states that outer space is free for all nation states to explore and is not subject to claims of national sovereignty. It also prohibits the deployment of nuclear weapons in outer space. The treaty was passed by the United Nations General Assembly in 1963 and signed in 1967 by the USSR, the United States of America and the United Kingdom. As of mid-2013, the treaty had been ratified by 102 states and signed by an additional 27 states.
Since 1958, outer space has been the subject of multiple resolutions by the United Nations General Assembly. Of these, more than 50 have concerned international co-operation in the peaceful uses of outer space and the prevention of an arms race in space. Four additional space law treaties have been negotiated and drafted by the UN's Committee on the Peaceful Uses of Outer Space. Still, there remain no legal prohibitions against deploying conventional weapons in space, and anti-satellite weapons have been successfully tested by the US, USSR and China. The 1979 Moon Treaty turned the jurisdiction of all heavenly bodies (including the orbits around such bodies) over to the international community. However, this treaty has not been ratified by any nation that currently practices manned spaceflight.
In 1976 eight equatorial states (Ecuador, Colombia, Brazil, Congo, Zaire, Uganda, Kenya, and Indonesia) met in Bogotá, Colombia to make the "Declaration of the First Meeting of Equatorial Countries," also known as "the Bogotá Declaration", a claim to control the segment of the geosynchronous orbital path corresponding to each country. These claims are not internationally accepted.
The International Space Station
The International Space Station programme is a joint project among five participating space agencies: NASA, the Russian Federal Space Agency (RSA), Japan Aerospace Exploration Agency (JAXA), European Space Agency (ESA), and Canadian Space Agency (CSA). National budget constraints led to the merger of three space station projects into the International Space Station. In 1993 the partially built components for a Soviet/Russian space station Mir-2, the proposed American Freedom, and the proposed European Columbus merged into this multinational programme.[36] The ownership and use of the space station is established by intergovernmental treaties and agreements. The ISS is arguably the most expensive single item ever constructed,[37] and may be one of the most significant instances of international cooperation in modern history.
According to the original Memorandum of Understanding between NASA and the RSA, the International Space Station was intended to be a laboratory, observatory and factory in space. It was also planned to provide transportation, maintenance, and act as a staging base for possible future missions to the Moon, Mars and asteroids. In the 2010 United States National Space Policy, it was given additional roles of serving commercial, diplomatic[38] and educational purposes.[39]
Internet
As a global system of computers interconnected by telecommunication technologies consisting of millions of private, public, academic, business, and government resources, it is difficult to argue that the Internet is a global commons. These computing resources are largely privately owned and subject to private property law, although many are government owned and subject to public law. The World Wide Web, as a system of interlinked hypertext documents, either public domain (like Wikipedia itself) or subject to copyright law, is, at best, a mixed good.
The resultant virtual space or cyberspace, however, is often viewed as an electronic global commons that allows for as much or more freedom of expression as any public space. Access to those digital commons and the actual freedom of expression allowed vary widely by geographical area. Management of the electronic global commons presents as many issues as do other commons. In addition to issues related to inequity in access, issues such as net neutrality, Internet censorship, Internet privacy, and electronic surveillance arise.[40] However, the term global commons generally denotes a stateless maneuver space in which no nation or entity can claim preeminence; since all of cyberspace is owned by either public or private entities, cyberspace may not be a true global commons, although it is often perceived as one.
See also
- Environmental economics
- Environmental law
- Free and open-source software
- Goods
- Global public goods
- Human ecology
- Tragedy of the commons
- Wikipedia
References
Such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner.
- Loader, Brian D (2004). The Governance of Cyberspace: Politics, Technology and Global Restructuring. Routledge. ISBN 978-0415147248.
External links
- The Global Environmental Facility
- Share the World's Resources Sustainable Economics to End Global Poverty – the Global Commons in Economic Practice.
Further reading
- Goldman, Michael (1998). Privatizing Nature: Political Struggles for the Global Commons. Rutgers University Press. ISBN 978-0813525549.
- Amstutz, Mark R. (2008). International Ethics: Concepts, Theories, and Cases in Global Politics. Rowman & Littlefield. ISBN 978-0742556041.
- Harrison, Kathryn; Lisa McIntosh Sundstrom (2010). Global Commons, Domestic Decisions: The Comparative Politics of Climate Change. MIT Press. ISBN 978-0262514316.
- Milun, Kathryn (2010). The Political Uncommons: The Cross-cultural Logic of the Global Commons. Ashgate Publishing, Ltd. ISBN 978-0754671398.
- Jasper, Scott (2012). Conflict and Cooperation in the Global Commons: A Comprehensive Approach for International Security. Georgetown University Press. ISBN 978-1589019232.
https://en.wikipedia.org/wiki/Global_commons
The International Union for Conservation of Nature (IUCN) is an international organization working in the field of nature conservation and sustainable use of natural resources.[3] It is involved in data gathering and analysis, research, field projects, advocacy, and education. IUCN's mission is to "influence, encourage and assist societies throughout the world to conserve nature and to ensure that any use of natural resources is equitable and ecologically sustainable".
https://en.wikipedia.org/wiki/International_Union_for_Conservation_of_Nature
Outer space, commonly referred to simply as space, is the expanse that exists beyond Earth and its atmosphere and between celestial bodies. Outer space is not completely empty; it is a near-perfect vacuum[1] containing a low density of particles, predominantly a plasma of hydrogen and helium, as well as electromagnetic radiation, magnetic fields, neutrinos, dust, and cosmic rays. The baseline temperature of outer space, as set by the background radiation from the Big Bang, is 2.7 kelvins (−270 °C; −455 °F).[2]
https://en.wikipedia.org/wiki/Outer_space
The atmosphere of Earth is the layer of gases, known collectively as air, retained by Earth's gravity that surrounds the planet and forms its planetary atmosphere. The atmosphere of Earth creates pressure, absorbs most meteoroids and ultraviolet solar radiation, warms the surface through heat retention (greenhouse effect), allowing life and liquid water to exist on the Earth's surface, and reduces temperature extremes between day and night (the diurnal temperature variation).
https://en.wikipedia.org/wiki/Atmosphere_of_Earth
World ocean
The contemporary concept of the World Ocean was coined in the early 20th century by the Russian oceanographer Yuly Shokalsky to refer to the continuous ocean that covers and encircles most of Earth.[24][25] The global, interconnected body of salt water is sometimes referred to as the world ocean, global ocean or the great ocean.[26][27][28] The concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.[29]
https://en.wikipedia.org/wiki/Ocean#World_ocean
In agriculture, grazing is a method of animal husbandry whereby domestic livestock are allowed outdoors to roam and consume wild vegetation, converting the cellulose in grass and other forages (otherwise indigestible to the human gut) into meat, milk, wool and other animal products, often on land unsuitable for arable farming.
https://en.wikipedia.org/wiki/Grazing
A meadow (/ˈmɛdoʊ/ MED-oh) is an open habitat or field, vegetated by grasses, herbs, and other non-woody plants. Trees or shrubs may sparsely populate meadows, as long as these areas maintain an open character. Meadows can occur naturally under favourable conditions, but are often artificially created from cleared shrub or woodland for the production of hay, fodder, or livestock.[1] Meadow habitats, as a group, are characterized as "semi-natural grasslands", meaning that they are largely composed of species native to the region, with only limited human intervention.
https://en.wikipedia.org/wiki/Meadow
The open-field system was the prevalent agricultural system in much of Europe during the Middle Ages and lasted into the 20th century in Russia, Iran, and Turkey.[1] Each manor or village had two or three large fields, usually several hundred acres each, which were divided into many narrow strips of land. The strips or selions were cultivated by peasants, often called tenants or serfs. The holdings of a manor also included woodland and pasture areas for common usage and fields belonging to the lord of the manor and the religious authorities, usually Roman Catholics in medieval Western Europe. The farmers customarily lived in separate houses in a nucleated village with a much larger manor house and church nearby. The open-field system necessitated co-operation among the residents of the manor.
https://en.wikipedia.org/wiki/Open-field_system
Common land is land owned by a person or collectively by a number of persons, over which other persons have certain common rights, such as to allow their livestock to graze upon it, to collect wood, or to cut turf for fuel.[1]
A person who has a right in, or over, common land jointly with another or others is usually called a commoner.[2]
In Great Britain, common land or former common land is usually referred to as a common; for instance, Clapham Common and Mungrisdale Common. Due to enclosure, the extent of common land is now much reduced from the hundreds of square kilometres that existed until the 17th century, but a considerable amount of common land still exists, particularly in upland areas. There are over 8,000 registered commons in England alone.[3]
https://en.wikipedia.org/wiki/Common_land
In traditional usage, a global public good (or global good) is a public good available on a more-or-less worldwide basis.[1] There are many challenges to the traditional definition, which have far-reaching implications in the age of globalization.
Definition
In traditional usage, a pure global public good is a good that has the three following properties:[2]
- It is non-rivalrous. Consumption of this good by anyone does not reduce the quantity available to other agents.
- It is non-excludable. It is impossible to prevent anyone from consuming that good.
- It is available more-or-less worldwide.
This concept is an extension of American economist Paul Samuelson's classic notion of public goods[3] to the economics of globalization.
The traditional theoretical concept of public goods does not distinguish with regard to the geographical region in which a good may be produced or consumed. However, the term "global public good" has been used to mean a public good which is non-rivalrous and non-excludable throughout the whole world, as opposed to a public good which exists in just one national area. Knowledge has been used as a classic example of a global public good.[4] In some academic literature, it has become associated with the concept of a common heritage of mankind.[5]
https://en.wikipedia.org/wiki/Global_public_good
In economics, goods are items that satisfy human wants[1] and provide utility, for example, to a consumer making a purchase of a satisfying product. A common distinction is made between goods which are transferable, and services, which are not transferable.[2]
A good is an "economic good" if it is useful to people but scarce in relation to its demand so that human effort is required to obtain it.[3] In contrast, free goods, such as air, are naturally in abundant supply and need no conscious effort to obtain them. Private goods are things owned by people, such as televisions, living room furniture, wallets, cellular telephones, almost anything owned or used on a daily basis that is not food-related.
A consumer good or "final good" is any item that is ultimately consumed, rather than used in the production of another good. For example, a microwave oven or a bicycle that is sold to a consumer is a final good or consumer good, but the components that are sold to be used in those goods are intermediate goods. For example, textiles or transistors can be used to make some further goods.
Commercial goods are construed as tangible products that are manufactured and then made available for supply to be used in an industry of commerce. Commercial goods could be tractors, commercial vehicles, mobile structures, airplanes, and even roofing materials. Commercial and personal goods as categories are very broad and cover almost everything a person sees from the time they wake up in their home, on their commute to work to their arrival at the workplace.
Commodities may be used as a synonym for economic goods but often refer to marketable raw materials and primary products.[4]
Although common goods are tangible, certain classes of goods, such as information, only take intangible forms. For example, among other goods an apple is a tangible object, while news belongs to an intangible class of goods and can be perceived only by means of an instrument such as printers or television.
Utility and characteristics of goods
Goods may increase or decrease their utility directly or indirectly and may be described as having marginal utility. Some things are useful but not scarce enough to have monetary value, such as the Earth's atmosphere; these are referred to as 'free goods'.
In normal parlance, "goods" is always a plural word,[5][6] but economists have long termed a single item of goods "a good".
In economics, a bad is the opposite of a good.[7] Ultimately, whether an object is a good or a bad depends on each individual consumer and therefore, not all goods are goods to all people.
Types of goods
The diversity of goods allows them to be classified into different categories based on distinctive characteristics, such as tangibility and (ordinal) relative elasticity. A tangible good like an apple differs from an intangible good like information in that a person cannot physically hold the latter, whereas the former occupies physical space. Intangible goods differ from services in that final (intangible) goods are transferable and can be traded, whereas a service cannot be.
Price elasticity also differentiates types of goods. An elastic good is one for which there is a relatively large change in quantity due to a relatively small change in price, and therefore is likely to be part of a family of substitute goods; for example, as pen prices rise, consumers might buy more pencils instead. An inelastic good is one for which there are few or no substitutes, such as tickets to major sporting events,[citation needed] original works by famous artists,[citation needed] and prescription medicine such as insulin. Complementary goods are generally more inelastic than goods in a family of substitutes. For example, if a rise in the price of beef results in a decrease in the quantity of beef demanded, it is likely that the quantity of hamburger buns demanded will also drop, despite no change in buns' prices. This is because hamburger buns and beef (in Western culture) are complementary goods. It is important to note that goods considered complements or substitutes are relative associations and should not be understood in a vacuum. The degree to which a good is a substitute or a complement depends on its relationship to other goods, rather than an intrinsic characteristic, and can be measured as cross elasticity of demand by employing statistical techniques such as covariance and correlation.
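The cross elasticity of demand mentioned above can be illustrated numerically; the sketch below uses the midpoint (arc) formula, with invented pen-and-pencil figures rather than data from any source:

```python
def cross_price_elasticity(qa_before, qa_after, pb_before, pb_after):
    """Arc cross-price elasticity of good A's demand with respect to
    good B's price, using midpoint percentage changes.

    Positive -> substitutes; negative -> complements; near zero -> unrelated.
    """
    pct_dq = (qa_after - qa_before) / ((qa_before + qa_after) / 2)
    pct_dp = (pb_after - pb_before) / ((pb_before + pb_after) / 2)
    return pct_dq / pct_dp

# Pen prices rise from 2.00 to 2.50 and pencil sales rise from 100 to 115,
# so the elasticity is positive: pencils act as substitutes for pens.
print(round(cross_price_elasticity(100, 115, 2.00, 2.50), 2))
```

A negative result for the same price change (say, hamburger-bun sales falling when beef prices rise) would instead mark the pair as complements.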
Goods classified by exclusivity and competitiveness
Fourfold model of goods
Goods can be classified based on their degree of excludability and rivalry (competitiveness). Because excludability can be measured on a continuous scale, some goods do not fall neatly into one of the four common categories.
There are four types of goods based on the characteristics of rival in consumption and excludability: Public Goods, Private Goods, Common Resources, and Club Goods.[8] These four types plus examples for anti-rivalry appear in the accompanying table.[9]
| | Excludable | Non-excludable |
|---|---|---|
| Rivalrous | Private goods: food, clothing, cars, parking spaces | Common-pool resources: fish stocks, timber, coal, free public transport |
| Non-rivalrous | Club goods: cinemas, private parks, satellite television, public transport | Public goods: free-to-air television, air, national defense, free and open-source software |
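The two binary attributes in the matrix can be turned into a small lookup; a minimal sketch (the function name and boolean encoding are illustrative, not from the article):

```python
def classify(excludable: bool, rivalrous: bool) -> str:
    """Map excludability and rivalry onto the four standard categories."""
    if excludable:
        return "private good" if rivalrous else "club good"
    return "common-pool resource" if rivalrous else "public good"

print(classify(True, True))    # e.g. food, clothing
print(classify(False, True))   # e.g. fish stocks, timber
print(classify(True, False))   # e.g. cinemas, satellite television
print(classify(False, False))  # e.g. national defense, free-to-air TV
```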
Public goods
Goods that are both non-rival and non-excludable are called public goods. Many renewable resources, such as land, are common commodities, though some are counted among public goods. Public goods are non-excludable and non-rivalrous, meaning that individuals cannot be stopped from using them and that anyone can consume the good without hindering the ability of others to do so. Examples in addition to the ones in the matrix are national parks and firework displays. It is generally accepted by mainstream economists that the market mechanism will under-provide public goods, so these goods have to be produced by other means, including government provision. Public goods can also suffer from the free-rider problem.
Private goods
Private goods are excludable goods: other consumers can be prevented from consuming them. They are also rivalrous, because a good in private ownership cannot simultaneously be used by someone else; consuming such a good deprives another consumer of the ability to consume it. Private goods are the most common type of goods. They include most things bought from a store, for example food, clothing, cars and parking spaces. An individual who consumes an apple denies another individual the same apple. A private good is excludable because consumption is offered only to those willing to pay the price.[10]
Common-pool resources
Common-pool resources are rival in consumption and non-excludable. An example is that of fisheries, which harvest fish from a shared common resource pool of fish stock. Fish caught by one group of fishermen are no longer accessible to another group, thus being rivalrous. However, oftentimes, due to an absence of well-defined property rights, it is difficult to restrict access to fishermen who may overfish.[11]
Club goods
Club goods are excludable but non-rivalrous in consumption. That is, not everyone can use the good, but when one individual uses it, they do not reduce the amount available or the ability of others to consume it. Club goods are obtained by joining a specific club or organization; people who are not members are excluded. Examples in addition to the ones in the matrix are cable television, golf courses, and any merchandise provided to club members. A large television service provider already has infrastructure in place that allows new customers to be added without infringing on existing customers' viewing; marginal cost is therefore close to zero, satisfying the criterion for a good to be non-rival. However, access to cable TV services is available only to consumers willing to pay the price, demonstrating the excludability aspect.[12]
Economists devised these categories to describe goods and their impact on consumers. Government is usually responsible for providing public goods and common goods, while enterprises generally produce private and club goods. But the pattern does not fit all goods, as the categories can intermingle.
History of the fourfold model of goods
In 1977, Nobel laureate Elinor Ostrom and her husband Vincent Ostrom proposed additional modifications to the existing classification of goods, so as to identify fundamental differences that affect the incentives facing individuals. Their definitions are presented in the matrix.[13]
Elinor Ostrom proposed additional modifications to the classification of goods to identify fundamental differences that affect the incentives facing individuals[14]
- Replacing the term "rivalry of consumption" with "subtractability of use".
- Conceptualizing subtractability of use and excludability to vary from low to high rather than characterizing them as either present or absent.
- Overtly adding a very important fourth type of good—common-pool resources—that shares the attribute of subtractability with private goods and difficulty of exclusion with public goods. Forests, water systems, fisheries, and the global atmosphere are all common-pool resources of immense importance for the survival of humans on this earth.
- Changing the name of a "club" good to a "toll" good since goods that share these characteristics are provided by small scale public as well as private associations.
Expansion of Fourfold model: Anti-rivalrous
Consumption can be extended to include "anti-rivalrous" consumption, in which a good becomes more valuable as more people use it.
| | Excludable | Non-excludable |
|---|---|---|
| Rivalrous | Private good | Common-pool good |
| Non-rivalrous | Club / toll good | Public good |
| Anti-rivalrous | "Network" good, e.g., data on the internet; a good that improves public health | "Symbiotic" good, e.g., language |
Expansion of Fourfold model: Semi-Excludable
The additional definition matrix shows the four common categories alongside examples of fully excludable, semi-excludable and fully non-excludable goods. Semi-excludable goods are goods or services that mostly succeed in excluding non-paying customers but can still be consumed by some non-payers; examples are movies, books or video games that can easily be pirated and shared for free.
| | Fully excludable | Semi-excludable | Fully non-excludable |
|---|---|---|---|
| Rivalrous | Private goods: food, clothing, cars, parking spaces | Piracy of copyrighted goods such as movies, books, video games | Common-pool resources: fish, timber, coal, free public transport |
| Non-rivalrous | Club goods: cinemas, private parks, television, public transport | Sharing pay-television or streaming subscriptions with more users than are paid for | Public goods: free-to-air television, air, national defense, free and open-source software |
Trading of goods
Goods are capable of being physically delivered to a consumer. Goods that are economic intangibles can only be stored, delivered, and consumed by means of media.
Goods, both tangible and intangible, may involve the transfer of product ownership to the consumer. Services do not normally involve transfer of ownership of the service itself, but may involve transfer of ownership of goods developed or marketed by a service provider in the course of the service. An example is the sale of storage-related goods, which could be storage sheds, storage containers and storage buildings (tangibles) or storage supplies such as boxes, bubble wrap, tape and bags (consumables). Distributing electricity among consumers is likewise a service provided by an electric utility company; this service can only be experienced through the consumption of electrical energy, which is available in a variety of voltages and is, in this case, the economic good produced by the utility. While the service (the distribution of electrical energy) remains entirely in the ownership of the electric service provider, the good (the electric energy itself) is the object of ownership transfer. The consumer becomes an electric-energy owner by purchase and may use it for any lawful purpose, just like any other good.
See also
Notes
- Ostrom, Elinor (2005). Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.
References
- Bannock, Graham et al. (1997). Dictionary of Economics, Penguin Books.
- Milgate, Murray (1987), "goods and commodities," The New Palgrave: A Dictionary of Economics, v. 2, pp. 546–48. Includes historical and contemporary uses of the terms in economics.
- Vuaridel, R. (1968). Une définition des biens économiques. (A definition of economic goods). L'Année sociologique (1940/1948-), 19, 133-170. Stable JStor URL: [1]
External links
- Media related to Goods (economics) at Wikimedia Commons
https://en.wikipedia.org/wiki/Goods
In economics, a public good (also referred to as a social good or collective good)[1] is a good that is both non-excludable and non-rivalrous. For such goods, users cannot be barred from accessing or using them for failing to pay for them. Also, use by one person neither prevents access of other people nor does it reduce availability to others.[1] Therefore, the good can be used simultaneously by more than one person.[2] This is in contrast to a common good, such as wild fish stocks in the ocean, which is non-excludable but rivalrous to a certain degree. If too many fish were harvested, the stocks would deplete, limiting the access of fish for others. A public good must be valuable to more than one user, otherwise, the fact that it can be used simultaneously by more than one person would be economically irrelevant.
Capital goods may be used to produce public goods or services that are "...typically provided on a large scale to many consumers."[3] Unlike other types of economic goods, public goods are described as “non-rivalrous” or “non-exclusive,” and use by one person neither prevents access of other people nor does it reduce availability to others.[1] Similarly, using capital goods to produce public goods may result in the creation of new capital goods. In some cases, public goods or services are considered "...insufficiently profitable to be provided by the private sector.... (and), in the absence of government provision, these goods or services would be produced in relatively small quantities or, perhaps, not at all."[3] Public goods include knowledge,[4] official statistics, national security, common languages,[5] law enforcement, public parks, free roads, and television and radio broadcasts.[6] Additionally, flood control systems, lighthouses, and street lighting are also common social goods. Collective goods that are spread all over the face of the earth may be referred to as global public goods. Such knowledge is not limited to printed literature, but includes media, pictures and videos.[7] For instance, knowledge is widely shared globally. Information about health awareness, environmental issues, and maintaining biodiversity is common knowledge that every individual in society can obtain without necessarily preventing others' access. Likewise, sharing and interpreting contemporary history with a cultural lexicon, particularly regarding protected cultural heritage sites and monuments, is another source of knowledge that people can freely access.
Public goods problems are often closely related to the "free-rider" problem, in which people not paying for the good may continue to access it. Thus, the good may be under-produced, overused or degraded.[8] Public goods may also become subject to restrictions on access and may then be considered to be club goods; exclusion mechanisms include toll roads, congestion pricing, and pay television with an encoded signal that can be decrypted only by paid subscribers.
There is a good deal of debate and literature on how to measure the significance of public goods problems in an economy, and to identify the best remedies.
https://en.wikipedia.org/wiki/Public_good_(economics)
In economics, a common-pool resource (CPR) is a type of good consisting of a natural or human-made resource system (e.g. an irrigation system or fishing grounds), whose size or characteristics makes it costly, but not impossible, to exclude potential beneficiaries from obtaining benefits from its use. Unlike pure public goods, common pool resources face problems of congestion or overuse, because they are subtractable. A common-pool resource typically consists of a core resource (e.g. water or fish), which defines the stock variable, while providing a limited quantity of extractable fringe units, which defines the flow variable. While the core resource is to be protected or nurtured in order to allow for its continuous exploitation, the fringe units can be harvested or consumed.[1]
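The stock-and-flow distinction can be made concrete with a toy harvest model; the sketch below assumes logistic regrowth of the core resource and a fixed annual withdrawal of fringe units (the growth rule and all numbers are invented for illustration):

```python
def simulate_stock(stock, growth_rate, capacity, harvest, years):
    """Track the core resource (the stock variable) under logistic regrowth
    and a fixed annual harvest of fringe units (the flow variable)."""
    path = [stock]
    for _ in range(years):
        regrowth = growth_rate * stock * (1 - stock / capacity)
        stock = max(stock + regrowth - harvest, 0.0)  # stock cannot go negative
        path.append(stock)
    return path

# With growth_rate=0.3 and capacity=1000, the largest sustainable annual
# harvest is growth_rate * capacity / 4 = 75 units.
modest = simulate_stock(500, 0.3, 1000, harvest=50, years=30)  # stock recovers
heavy = simulate_stock(500, 0.3, 1000, harvest=90, years=30)   # stock collapses
print(round(modest[-1]), round(heavy[-1]))
```

Keeping the flow of extracted fringe units below the stock's regrowth preserves the core resource; exceeding it exhausts the pool, which is the congestion/overuse problem the definition describes.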
https://en.wikipedia.org/wiki/Common-pool_resource
Common goods (also called common-pool resources[1]) are defined in economics as goods that are rivalrous and non-excludable. Thus, they constitute one of the four main types based on the criteria:
- whether the consumption of a good by one person precludes its consumption by another person (rivalrousness)
- whether it is possible to prevent people (consumers) who have not paid for it from having access to it (excludability)
As common goods are accessible by everybody, they are at risk of being subject to overexploitation which leads to diminished availability if people act to serve their own self-interests.
Characteristics of common goods
Common-pool resources are sufficiently large that it is difficult, but not impossible, to define recognized users and exclude other users altogether.[2] Based on the criteria, common goods are:
- rivalrous: When one person consumes a good, another person is unable to subsequently consume that good and the overall stock of the good decreases. For example, when a fisherman catches a fish, no other fisherman is able to catch that fish.
- non-excludable: There is no possibility to exclude anybody from consumption of this good.
Common goods can be institutions, facilities, constructions or nature itself, so long as they can be used by all members of society rather than being privately consumed by specific individuals, as private goods are.
For common goods to exist, payment of taxes is needed in most cases: common goods are socially beneficial, and everyone has an interest in satisfying certain basic necessities. As the government is commonly the agent that incurs the expense of creating common goods, the community pays an amount in exchange.
A society requires certain elements in order to succeed in creating common goods. Developed countries normally share such elements: being a democracy with basic rights and freedoms, a transportation system, cultural institutions, police and public safety, a judicial system, an electoral system, public education, clean air and water, a safe and ample food supply, and national defense.
A common problem with common goods today is that their existence affects society as a whole, so everyone must make some sacrifice to create one. Society then has to choose between the interests of a few and the sacrifice of all.
Accomplishing a common good has consistently required a degree of individual sacrifice. Today, the trade-offs and sacrifices necessary for the common good regularly involve paying taxes, accepting personal inconvenience, or giving up certain advantages and cultural beliefs. While occasionally offered voluntarily, these sacrifices and trade-offs are usually incorporated into laws and public policy. Some modern examples of the common good and the sacrifices involved in achieving it are:
- Public infrastructure improvement: the improvement of highways, water, sewer and power lines usually requires the addition or increase of taxes, as well as the use of eminent domain.
- Civil rights and racial equality: although inequality and racial disparities have yet to disappear entirely, vestiges of privilege for a fraction of society have been progressively eliminated by new laws.
- Environmental quality: laws and movements addressing global environmental problems have increased, as a healthy environment benefits the common good rather than only a few.
History
Despite its growing importance in modern society, the concept of the common good was first mentioned more than two thousand years ago in the writings of Plato, Aristotle, and Cicero. Aristotle described the problem with common goods accurately: “What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others.”[1] As early as the second century AD, the Catholic religious tradition defined the common good as “the sum of those conditions of social life which allow social groups and their individual members relatively thorough and ready access to their own fulfilment.”
In later centuries, philosophers, politicians and economists have referred to the concept of common good such as Jean-Jacques Rousseau, in his 1762 book "The Social Contract". The Swiss philosopher, writer, and political theorist argues that in successful societies, the “general will” of the people will always be directed toward achieving the collectively agreed common good. Rousseau contrasts the will of all—the total of the desires of each individual—with the general will—the “one will which is directed towards their common preservation and general well-being.” Rousseau further contends that political authority, in the form of laws, will be viewed as legitimate and enforceable only if it is applied according to the general will of the people and directed toward their common good.
Adam Smith also referred to the common good in The Wealth of Nations: individuals moved by an “invisible hand” to satisfy their own interests thereby serve the common good. He argued that in order to realize common interests, society should shoulder common responsibilities to ensure that the welfare of its most economically disadvantaged class is maintained.
This view was later shared by the American philosopher John Rawls, who in A Theory of Justice holds that the public good is the core of a healthy moral, economic and political system. Rawls defined the common interest as “certain general conditions that are … equally to everyone's advantage.”
In this case, Rawls equates the common interest with the combination of social conditions for the equal sharing of citizenship, such as basic freedom and fair economic opportunities.
Examples
Congested roads - Roads may be considered either public goods or common resources. A road is a public good when there is no congestion, since one person's use of the road does not then affect anyone else's. If the road is congested, however, each additional driver makes the road more crowded and slows passage for others; driving then creates a negative externality and the road becomes a common good.[1]
Clean water and air - Climate stability is among the classic modern examples.[3] Water and air pollution are caused by negative market externalities. Water flows can be tapped beyond sustainability, and air is often used in combustion, whether by motor vehicles, smokers, factories, or wood fires. In the production process these and other resources are changed into finished products such as food, shoes, toys, furniture, cars, houses and televisions.
Fish stocks in international waters - Oceans remain one of the least regulated common resources.[1] When fish are withdrawn from the water without any limits, purely because of their commercial value, living fish stocks are likely to be depleted for later fishermen. This happens because there is no incentive to leave fish for others. The term tragedy of the commons was coined to describe situations in which economic users withdraw resources to secure short-term gains without regard for the long-term consequences. For example, forest exploitation leads to barren lands, and overfishing leads to a reduction of overall fish stocks, both of which eventually result in diminishing periodic yields.
Other natural resources - Other commonly cited examples of renewable resources subject to private exploitation include trees or timber at critical stages, oil, mined metals, crops, and freely accessible grazing land.
Debates about sustainability can be both philosophical and scientific.[4][5][6] However, wise-use advocates consider common goods that are an exploitable form of a renewable resource, such as fish stocks, grazing land, etc., to be sustainable in the following two cases:
- As long as demand for the goods withdrawn from the common good does not exceed a certain level, future yields are not diminished and the common good as such is preserved at a 'sustainable' level.
- If access to the common good is regulated at the community level by restricting exploitation to community members and by imposing limits to the quantity of goods being withdrawn from the common good, the tragedy of the commons may be avoided. Common goods that are sustained through an institutional arrangement of this kind are referred to as common-pool resources.
Tragedy of the commons
The tragedy of the commons is a problem in economics in which every individual has an incentive to use a resource at the expense of everyone else who uses it, with no way of preventing anyone from consuming it. Generally, the resource in question lacks barriers to entry and is demanded in excess of its supply, leading to its depletion.
Example
For example, imagine there are several shepherds, each with their own flock of sheep, who have access to a communal field which they all use for grazing. As the sheep graze unhindered, they deplete the overall stock of grass in the field and there is less for other sheep to consume. The tragedy is that eventually the field will become barren and will be no use to any of the shepherds.[1][7]
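The shepherds' incentive can be put in numbers with a toy payoff model (the capacity and herd sizes below are invented for illustration): each shepherd keeps the full value of their own sheep, while the crowding cost of every extra animal is spread across all users of the field.

```python
def yield_per_sheep(total_sheep, capacity=150):
    """Per-sheep yield from the common field declines as it gets crowded."""
    return max(1.0 - total_sheep / capacity, 0.0)

others = 90  # sheep already grazed by the other shepherds
for own in (5, 10, 15):
    total = own + others
    mine = own * yield_per_sheep(total)        # this shepherd's payoff
    everyone = total * yield_per_sheep(total)  # combined payoff of all shepherds
    print(own, round(mine, 2), round(everyone, 2))
```

Each shepherd's own payoff rises with every sheep added, even though the combined payoff of the group falls; that divergence between private and collective returns is the tragedy.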
Possible solutions
Assigning property rights is one possible solution to the problem, essentially converting what was a common-pool resource into a private good. This would prevent over-consumption, as the owner(s) of the good would have an incentive to regulate their consumption in order to keep the stock of that good at a healthy level.
Another solution is government intervention: the right to use the land can be allocated, the number of sheep in each herd can be regulated, or the externality created by the sheep can be internalized by taxing them.[1]
Collective solutions can also be reached to solve the problem. Before English enclosure laws were enacted, there were agreements in place between lords and rural villagers to overcome this problem. Practices such as seasonal grazing and crop rotation regulated land use. Over-using the land resulted in enforceable sanctions.[8]
Common goods and normal goods
Normal goods are goods that experience an increase in demand as the income of consumers increases. The demand function of a normal good is downward sloping, meaning there is an inverse relationship between price and quantity demanded;[9] in other words, the price elasticity of demand is negative. For a normal good, the substitution effect and the income effect of a price change reinforce each other, so quantity demanded moves in the opposite direction to price. Goods with this property are called ordinary goods, so every normal good is also an ordinary good. (Note that this demand-based classification is distinct from the common goods discussed in this article, which are defined by rivalry and non-excludability.)
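The reinforcing income and substitution effects can be made concrete with a small numerical sketch (illustrative Cobb-Douglas preferences and prices, not from the article):

```python
# Cobb-Douglas utility U = x^0.5 * y^0.5 gives demand x = m / (2 * p_x)
def demand_x(income, price_x):
    return income / (2 * price_x)

m, p0, p1 = 100.0, 1.0, 2.0
x0 = demand_x(m, p0)              # 50.0 units before the price rise
x1 = demand_x(m, p1)              # 25.0 units after

# Slutsky decomposition: compensate income so the old bundle stays affordable
m_comp = m + x0 * (p1 - p0)       # 150.0
x_sub = demand_x(m_comp, p1)      # 37.5
substitution_effect = x_sub - x0  # -12.5
income_effect = x1 - x_sub        # -12.5 (same sign: a normal good)
```

Both effects are negative, so the price rise unambiguously reduces quantity demanded, as the paragraph above describes.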
Other goods
|  | Excludable | Non-excludable |
|---|---|---|
| Rivalrous | Private goods: food, clothing, cars, parking spaces | Common-pool resources: fish stocks, timber, coal, free public transport |
| Non-rivalrous | Club goods: cinemas, private parks, satellite television, public transport | Public goods: free-to-air television, air, national defense, free and open-source software |
In addition to common goods, there are three other kinds of economic goods: public goods, private goods, and club goods. Typical examples of common goods include international fish stocks. Most international fishing areas place no limit on the number of fish that can be caught, so anyone may fish as they like, which makes the good non-excludable. Without restrictions, however, fish stocks may be depleted for fishermen who arrive later, which makes the good rivalrous. Other common goods include water and game animals.
Tragedy of the commons
The tragedy of the commons was first described in 1833 by the Victorian economist William Forster Lloyd, a member of the Royal Society. He offered the example of a hypothetical tract of shared grazing land to which all of the villagers brought their cows, resulting in overgrazing and the depletion of the resource (Lloyd, 1833). Individuals could in theory limit their use in order to avoid depleting the shared resource, but a free-rider problem arises: each person relies on others to reduce their consumption. When everyone instead takes maximum advantage of the system, the result is over-consumption.
See also
- Common-pool resource
- Social goods
- Social trap
- Somebody else's problem
- Stone Soup – a story illustrating the opposite of the tragedy of the commons
- Tragedy of the anticommons
- Tyranny of small decisions
- Game theory
- Public good
References
Citations
- "What are Normal Goods?". Corporate Finance Institute. Retrieved 26 April 2022.
Bibliography
Hardin, Garrett (1968-12-13). "The Tragedy of the Commons". Science. American Association for the Advancement of Science. 162 (3859): 1243–1248. doi:10.1126/science.162.3859.1243. ISSN 0036-8075. PMID 5699198.
Mankiw, N.Gregory (2015). Principles of Economics (7th ed.). USA: Cengage Learning, Inc. ISBN 978-1-285-16587-5.
Ostrom, Elinor; Roy, Gardner; Walker, James (1994). Rules, games, and common-pool resources. United States of America: University of Michigan Press. ISBN 0-472-09546-3.
Tirole, Jean (2017). Economics for the Common Good. USA, New Jersey: Princeton University Press. ISBN 978-0-691-17516-4.
- Jenkins, Megan E.; Simmons, Randy; Lofthouse, Jordan; Edwards, Eric (2022-05-09). "The Environmental Optimism of Elinor Ostrom". The Center for Growth and Opportunity.
https://en.wikipedia.org/wiki/Common_good_(economics)
Digital public goods are public goods in the form of software, data sets, AI models, standards or content that are generally free cultural works and contribute to sustainable national and international digital development.
Use of the term "digital public good" appears as early as April 2017, when Nicholas Gruen wrote Building the Public Goods of the Twenty-First Century, and has gained popularity with the growing recognition of the potential for new technologies to be implemented at a national scale to improve service delivery to citizens.[1] Digital technologies have also been identified by countries, NGOs and private sector entities as a means to achieve the Sustainable Development Goals (SDGs).[1] This translation of public goods onto digital platforms has resulted in the use of the term "digital public goods".
Several international agencies, including UNICEF and UNDP, are exploring DPGs as a possible solution to address the issue of digital inclusion, particularly for children in emerging economies.[2]
Definition
A digital public good is defined by the UN Secretary-General’s Roadmap for Digital Cooperation, as: "open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs."[3]
Most physical resources exist in limited supply. When a resource is removed and used, the supply becomes scarce or depleted. Scarcity can result in competing rivalry for the resource. The nondepletable, nonexclusive, and nonrivalrous nature of digital public goods means the rules and norms for managing them can be different from how physical public goods are managed. Digital public goods can be infinitely stored, copied, and distributed without becoming depleted, and at close to zero cost. Abundance rather than scarcity is an inherent characteristic of digital resources in the digital commons.
Digital public goods share some traits with public goods including non-rivalry and non-excludability.[4]
Usage
This Wikimania submission from 2019 shows how the definition of a public good evolves into a digital public good:
A public good is a good that is both non-excludable (no one can be prevented from consuming this good) and non-rivalrous (the consumption of this good by anyone does not reduce the quantity available to others). Extending this definition to global public goods, they become goods with benefits that extend to all countries, people, and generations and are available across national borders everywhere. Knowledge and information goods embody global public goods when provided for free (otherwise the trait of non-excludability could not be met on the basis of excluding those who cannot pay for those goods). The online world provides a great medium for the provision of global public goods, where they become global digital public goods. Once produced in their digital form, global public goods are essentially costless to replicate and make available to all, under the assumption that users have Internet connectivity to access these goods.[5]
Examples
In sectors ranging from information science and education to finance and healthcare, there are relevant examples of technologies that are likely to be digital public goods as defined above.
One such example is Wikipedia itself. Others include DHIS2, an open source health management system.[6]
Free and open-source software (FOSS) is an example of a digital public good. Since FOSS is licensed to allow it to be shared freely, modified and redistributed, it is available as a digital public good.
Another example of a digital public good is open educational resources, whose licenses allow them to be freely re-used, revised and shared.
Free and open-source software
The original motivation of the free software movement was political in nature: to preserve the freedom of all to study, copy, modify and re-distribute any software or code. Given that the marginal cost of duplicating software is negligible, FOSS becomes a digital public good.
FOSS has allowed greater dissemination of software in society. Since FOSS applications can be customized, users can add local language interfaces (localization), which expands the availability of the digital public good to more people in the country, society or region where that language is spoken.
Open educational resources
Copyright law makes 'all rights reserved' the default, and this applies to digital content as well. The open educational resources (OER) movement has popularized the use of open ('copyleft') licenses such as the Creative Commons licenses, which allow content to be freely re-used, shared, modified and re-distributed. Thus all OER are digital public goods. OER have reduced the costs of accessing learning materials in schools and higher education institutions in many countries of the world. In India, the Ministry of Education has supported the development of the DIKSHA OER portal, where teachers can upload and download materials for their teaching.
OER itself is an output of editing/authoring software applications. The Commonwealth of Learning, a Commonwealth inter-governmental institution, has been popularizing the use of FOSS editors to create OER, and has supported IT for Change in developing the Teachers' toolkit for creating and re-purposing OER using FOSS. Such an approach expands one digital public good (content, or OER) by using another digital public good (FOSS).
Open data
Digital public goods as defined by the UN Secretary-General’s High-level Panel on Digital Cooperation published in The Age of Digital Interdependence includes open data.[7]
Beginning with open data in a machine readable format, startups and enterprises can build applications and services that utilize that data. This can create interoperability at a large scale.
The UNCTAD Digital Economy Report 2019 recommends commissioning the private sector to build the necessary infrastructure for extracting data, which can be stored in a public data fund that is part of the national data commons.[8] Alternative solutions include mandating companies through public procurement contracts to provide data they collect to governments (this is being tested in Barcelona, for example).[9]
Digital Public Goods Alliance
In mid-2019 the UN Secretary-General’s High-level Panel on Digital Cooperation published The Age of Digital Interdependence.[7] The report recommended advancing a global discussion about how stakeholders could work better together to realize the potential of digital technologies for advancing human well-being. Recommendation 1B in that report states "that a broad, multi-stakeholder alliance, involving the UN, create a platform for sharing digital public goods, engaging talent and pooling data sets, in a manner that respects privacy, in areas related to attaining the SDGs".[10]
In response, in late 2019 the Governments of Norway and Sierra Leone, UNICEF and iSPIRT formally initiated the Digital Public Goods Alliance as a follow-up on the High-level Panel.[11]
The subsequent UN Secretary-General’s Roadmap for Digital Cooperation, published in June 2020, mentions the Digital Public Goods Alliance specifically as "a multi-stake-holder initiative responding directly to the lack of a "go to" platform, as highlighted by the Panel in its report."[3] The report further highlights digital public goods as essential to achieving the Sustainable Development Goals in low- and middle-income countries and calls on all stakeholders, including the UN to assist in their development and implementation.[3]
See also
References
https://en.wikipedia.org/wiki/Digital_public_goods
Trade globalization is a type of economic globalization and a measure (economic indicator) of economic integration. On a national scale, it loosely represents the proportion of all production that crosses the boundaries of a country, as well as the number of jobs in that country dependent upon external trade. On a global scale, it represents the proportion of all world production that is used for imports and exports between countries.
- For an individual country, trade globalization is measured as the proportion of that country's total volume of trade to its Gross Domestic Product (GDP):[1]
- For the world as a whole, trade globalization is the share of total world trade in total world production (GDP), where the sums are taken over all countries:[2]
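The two measures listed above can be written out explicitly; the following is a reconstruction consistent with the cited definitions, using X for exports, M for imports, and i to index countries:

```latex
TG_{i} = \frac{X_{i} + M_{i}}{GDP_{i}}, \qquad
TG_{world} = \frac{\sum_{i} X_{i}}{\sum_{i} GDP_{i}}
```

The country-level measure is the familiar trade-openness ratio, while the world-level measure follows Chase-Dunn et al.'s operationalization of total exports as a share of the global product.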
Definition
Preyer and Brös provide a simple operationalization of trade globalization as "the proportion of all world production that crosses international boundaries".[2] Chase-Dunn et al. note that trade globalization is one of the types of economic globalization, and define trade globalization as "the extent to which the long-distance and global exchange of commodities has increased (or decreased) relative to the exchange of commodities within national societies", and precisely operationalize it as "the sum of all international exports as a percentage of the global product, which is the sum of all the national gross domestic products (GDPs)."[3] Erreygers and Vermeire define trade globalization as "the degree of dissimilarity between the actual distribution of bilateral trade flows and their gravity benchmark, determined only by size and distance."[4] They note that trade globalization would be maximized in a situation where only size and distance affected the intensity of bilateral trade flows - in other words, in a situation where neither trade barriers nor other factors would matter.[4]
Salvatore Babones notes that trade globalization is the indicator of a country's level of globalization most commonly used in empirical literature.[5] Data for most countries in the modern era are available from the World Bank World Development Indicators database.[5]
Trend
Chase-Dunn et al. note that there have been cyclical waves of trade globalization, with declines corresponding to wars and economic depressions, and that there has been a steady trend over the centuries for trade globalization to increase.[3] With regards to the modern age, trade globalization increased until 1880, then decreased until 1905, increased again until 1914, decreased during World War I, increased until 1929, decreased until the end of World War II, and has been growing steadily since.[3] They note that the main explanatory factors in this trend are the continued decline in transportation and communication costs, and stability provided by the "hegemonic system" supportive of trade in recent world-systems.[3] Decreases can be explained by wars, and periods of conflict and tension often leading to them, where international actors cannot reach consensus on trade agreements and usually give in to protectionism.[3]
See also
- Global financial system
- International marketing
- International trade
- Internationalization
- List of economic communities
- List of free trade agreements
References
- Babones, Salvatore (15 April 2008). "Studying Globalization: Methodological Issues". In George Ritzer (ed.). The Blackwell Companion to Globalization. John Wiley & Sons. pp. 147–149. ISBN 978-0-470-76642-2. Retrieved 1 February 2013.
External links
- Chase-Dunn, Christopher, and Shoon Lio, "Global Class Formation and the New Global Left in World Historical Perspective"
- Chase-Dunn, Christopher, and Roy Kwon, "Crises and Counter-Movements in World Evolutionary Perspective". Contains graphs of trade globalization for 1860-2008
- Appendix to "Trade Globalization since 1795: waves of integration in the world-system"
https://en.wikipedia.org/wiki/Trade_globalization
Non-violation nullification of benefits (NVNB) claims are a species of dispute settlement in the World Trade Organization arising under World Trade Organization multilateral and bilateral trade agreements.[clarification needed] NVNB claims are controversial in that they are widely perceived to promote the social vices of unpredictability and uncertainty in international trade law.[1] Other commentators have described NVNB claims as potentially inserting corporate competition policy into the World Trade Organization Dispute Settlement Understanding (DSU).[2]
Location of NVNB claims
NVNB claims are directly referred to in Article 26 of the World Trade Organization DSU, Article XXIII of the General Agreement on Tariffs and Trade 1994 (GATT 1994), Article XXIII of the General Agreement on Trade in Services (GATS) and Article 64 of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS).[3]
In GATT jurisprudence, NVNB complaints appear to have originally been designed to counter the capacity of countries to avoid relatively simple obligations and specific tariff concessions in multilateral trade agreements by making ambiguous domestic regulatory arrangements.[4]
NVNB claims provisions also exist in many bilateral trade agreements. In the Australia-United States Free Trade Agreement (AUSFTA), article 21.2(c) provides an NVNB claim:
Except as otherwise provided in this Agreement or as the Parties otherwise agree, the dispute settlement provisions of this Section shall apply with respect to the avoidance or settlement of all disputes between the Parties regarding the interpretation or application of this Agreement or wherever a Party considers that:
(a) a measure of the other Party is inconsistent with its obligations under this Agreement;
(b) the other Party has otherwise failed to carry out its obligations under this Agreement; or
(c) a benefit the Party could reasonably have expected to accrue to it under Chapters Two (National Treatment and Market Access for Goods [including Annex 2C on pharmaceuticals]), Three (Agriculture), Five (Rules of Origin), Ten (Cross-Border Trade in Services), Fifteen (Government Procurement) or Seventeen (Intellectual Property Rights) is being nullified or impaired as a result of a measure that is not inconsistent with this Agreement.
The Australian academic Thomas Alured Faunce has argued that, by expressly applying to Annex 2C on pharmaceuticals, the NVNB claim in article 21.2(c) of the AUSFTA may have facilitated lobbying by United States negotiators around the constructive ambiguity of 'reward of innovation' (through the Medicines Working Group established by Annex 2C of the AUSFTA), influencing Australian legislative changes that affected reference pricing under the Pharmaceutical Benefits Scheme. He maintains that such pressure from NVNB claims is most likely to arise from 'behind doors' lobbying using threats of cross-retaliation (threatening a trade dispute in one trade area to obtain a result in a different sector) when a planned or existing domestic policy is perceived to breach the 'spirit' of the relevant bilateral trade agreement. Formal dispute resolution proceedings may never be initiated, or even be intended to commence, if such lobbying is persuasive.[5] If this hypothesis is correct, it represents a disturbing example of regulatory capture and has worrying implications for democratic sovereignty. The Australian government, however, strenuously denies such claims.
Operation of NVNB claims
Under such NVNB provisions, the full range of dispute resolution mechanisms may be invoked whether or not a breach of any specific provision is alleged or substantiated. The precondition is that a 'reasonably expected' 'benefit' accruing under the relevant trade agreement has been 'nullified or impaired' by a 'measure' applied by a WTO Member. Dispute Resolution Panels have arguably identified five requisite elements of an NVNB claim:
1. That a 'measure' has been applied by a party subsequent to the entry into force of the relevant trade agreement;
2. That a 'benefit' was reasonably expected by the other party as being negotiated in return for some textual agreement;
3. That as a result of the application of the measure that benefit has been 'nullified or impaired';
4. That the nullification or impairment was contrary to the legitimate or reasonable expectations of the complainant at the time of the negotiations; and
5. That such claims will only be used in extremely rare circumstances (for example, proven bad faith during negotiations), due to their capacity to upset the certainty of the international trading order.[1]
Debate about NVNB claims
Article 3.2 of the World Trade Organization DSU requires panels to clarify existing provisions of agreements in accordance with customary rules of interpretation of public international law. This leads to consideration of how the NVNB principle interacts with Article 26 of the Vienna Convention on the Law of Treaties, incorporating the principle of pacta sunt servanda: "Every treaty in force is binding upon the parties to it and must be performed by them in good faith." NVNB claims appear to undermine this fundamental principle of international law by subsequent reinterpretations based on the 'spirit' of the agreement.[1]
Both the United States and the European Economic Community have argued before a GATT 1994 panel that recourse to NVNB claims should remain 'exceptional', or the trading world would be plunged into a state of precariousness and uncertainty.[6] Despite this, the United States has inserted NVNB claims into many bilateral trade agreements.
The Appellate Body in the WTO EC - Asbestos Case agreed with the Panel in the WTO Japan – Film Case,[7] stating that the non-violation nullification or impairment remedy in GATT Article XXIII:1(b): "should be approached with caution and treated as an exceptional concept." The reason for this caution is straightforward. Members negotiate the rules that they agree to follow and only exceptionally would expect to be challenged for actions not in contravention of those rules.[8]
The governments of Canada, the Czech Republic, the European Communities, Hungary and Turkey have stated at the World Trade Organization that "the uncertainty regarding the application of such non-violation complaints needs to be resolved so as to minimize the possibility of unintended interpretation."[9]
Contemporary controversy over NVNB claims and proceedings arises in large part from their potential to allow a WTO Member to threaten a trade dispute if a wide and largely undefined range of domestic regulatory components are not altered, or compensation organised. It may facilitate a WTO dispute settlement process involving deliberate diplomatic 'gaming' of trade 'rules', the breaking of finely balanced textual truces, and dispute panel interpretations that are more an act of ongoing negotiation than of judicial analysis.[10]
WTO Moratorium on NVNB Claims
At the WTO meeting in Hong Kong in December 2005, the United States delegation pushed hard behind the scenes for trade concessions in return for its acquiescence to the moratorium on the use of NVNB provisions under TRIPS.[11] The resultant Ministerial Declaration left the position of NVNB claims under TRIPS extremely uncertain.
45. We take note of the work done by the Council for Trade-Related Aspects of Intellectual Property Rights pursuant to paragraph 11.1 of the Doha Decision on Implementation-Related Issues and Concerns and paragraph 1.h of the Decision adopted by the General Council on 1 August 2004, and direct it to continue its examination of the scope and modalities for complaints of the types provided for under subparagraphs 1(b) and 1(c) of Article XXIII of GATT 1994 and make recommendations to our next Session. It is agreed that, in the meantime, Members will not initiate such complaints under the TRIPS Agreement.
— WTO Ministerial Conference Sixth Session Hong Kong, [12]
Circumscribing NVNB Claims in Trade Law Dispute Resolution Panels
One way of circumscribing NVNB claims so they are compatible with the obligation to act in good faith and with other rules of treaty interpretation under Articles 31 and 32 of the Vienna Convention on the Law of Treaties is to restrict their operation to ensuring 'transparency and openness' in the negotiating process. In consequence, in NVNB disputes the inquiry to be made by a dispute resolution panel is whether the complaining party was induced into error during negotiations by the other treaty party about a fact or situation that the former could not reasonably have foreseen.[1]
See also
References
- WTO Ministerial Conference Sixth Session Hong Kong, 13–18 December 2005. Doha Work Programme. Draft Ministerial Declaration. WT/MIN(05)/W/3/Rev.2 18 December 2005.
https://en.wikipedia.org/wiki/Non-violation_nullification_of_benefits
McWorld is a term referring to the spreading of McDonald's restaurants throughout the world as the result of globalization, and more generally to the effects of international 'McDonaldization' of services and commercialization of goods as an element of globalization as a whole. The name also refers to a 1990s advertising campaign for McDonald's, and to a children's website launched by the firm in 2008.
https://en.wikipedia.org/wiki/McWorld
Freight transport, also referred to as freight forwarding, is the physical process of transporting commodities, merchandise goods and cargo.[1] The term shipping originally referred to transport by sea, but in American English it has been extended to refer to transport by land or air (International English: "carriage") as well. "Logistics", a term borrowed from the military environment, is also used in the same sense.
Modes of shipment
In 2015, 108 trillion tonne-kilometers of freight were transported worldwide, a figure anticipated to grow by 3.4% per year until 2050 (roughly 128 trillion by 2020): 70% by sea, 18% by road, 9% by rail, 2% by inland waterways and less than 0.25% by air.[2]
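As a quick consistency check on those figures, compounding the 2015 total at 3.4% per year reproduces the 2020 figure quoted (a back-of-the-envelope sketch):

```python
def project_tonne_km(base=108e12, rate=0.034, years=5):
    # compound the 2015 figure of 108 trillion tonne-km at 3.4% per year
    return base * (1 + rate) ** years

projected_2020 = round(project_tonne_km() / 1e12)  # about 128 trillion
```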
Ground
Land or "ground" shipping can be made by train or by truck (British English: lorry). Ground transport is typically more affordable than air, but more expensive than sea, especially in developing countries, where inland infrastructure may not be efficient. In air and sea shipments, ground transport is required to take the cargo from its place of origin to the airport or seaport and then to its destination because it is not always possible to establish a production facility near ports due to the limited coastlines of countries.
Ship
Much freight transport is done by cargo ships. An individual nation's fleet and the people that crew it are referred to as its merchant navy or merchant marine. According to a 2018 report from the United Nations Conference on Trade and Development (UNCTAD), merchant shipping (or seaborne trade) carries 80-90% of international trade by volume and 60-70% by value.[3]: 4 On rivers and canals, barges are often used to carry bulk cargo.
Air
Cargo is transported by air in specialized cargo aircraft and in the luggage compartments of passenger aircraft. Air freight is typically the fastest mode for long-distance freight transport, but it is also the most expensive.
Multimodal
Cargo is exchanged between different modes of transportation via transport hubs, also known as transport interchanges or nodes (e.g. train stations, airports, etc.). In multimodal transport, cargo is shipped under a single contract but carried by at least two different modes of transport (e.g. ground and air). The cargo need not be containerized.
Intermodal
Intermodal transport is multimodal transport featuring containerized cargo (in intermodal containers) that is easily transferred between ship, rail, plane and truck.
For example, a shipper works together with both ground and air transportation to ship an item overseas. Intermodal freight transport is used to plan the route and carry out the shipping service from the manufacturer to the door of the recipient.[4][5]
Terms of shipment
The Incoterms (or International Commercial Terms) published by the International Chamber of Commerce (ICC) are accepted by governments, legal authorities, and practitioners worldwide for the interpretation of the most commonly used terms in international trade. Common terms include:
- Free on Board (FOB)
- Cost and Freight (CFR, C&F, CNF)
- Cost, Insurance and Freight (CIF)
The term "best way" generally implies that the shipper will choose the carrier that offers the lowest rate (to the shipper) for the shipment. In some cases, however, other factors, such as better insurance or faster transit time, will cause the shipper to choose an option other than the lowest bidder.
Door-to-door shipping
Door-to-door (DTD or D2D) shipping refers to the domestic or international shipment of cargo from the point of origin (POI) to the destination, generally remaining on the same piece of equipment and avoiding multiple transactions, trans-loading, cross-docking, and interim storage.
International DTD is a service provided by many international shipping companies and may feature intermodal freight transport using containerized cargo. The quoted price of this service includes all shipping, handling, import and customs duties, making it a hassle-free option for customers to import goods from one jurisdiction to another. This is compared to standard shipping, the price of which typically includes only the expenses incurred by the shipping company in transferring the object from one place to another. Customs fees, import taxes and other tariffs may contribute substantially to this base price before the item ever arrives.[6]
See also
- Affreightment
- Automatic Identification System
- Mid-stream operation
- Outline of transport
- Ship transport
- Rail transport
- Transshipment
- Greek shipping
- Chinese shipping
- Environmental issues with shipping
- List of cargo types
- Right of way (shipping)
- Shipping markets
- Full container load (FCL)
- Less than container load (LCL)
References
- "Delta Cargo, Roadie partner to offer door-to-door parcel delivery service in US". Stat Trade Times. October 31, 2019. Retrieved 2019-10-31.
Citations
- "Review of Maritime Transport 2014" (PDF). United Nations Conference on Trade and Development. 2014.
- "Special Chapter: Asia". Review Maritime Transport 2010 Flyer. United Nations Conference on Trade and Development. 2010. Retrieved 9 December 2011.[permanent dead link]
External links
- Schreiber, Zvi 2016: The Year Freight Goes Online. December 2015
- Humplik, Carmen Winds of change in freight transportation supply chain: Platooning technology. July 2017
- Bloomberg.com First Cryptocurrency Freight Deal Takes Russian Wheat to Turkey. January 2018
https://en.wikipedia.org/wiki/Freight_transport
"First globalization" is a phrase used by economists to describe the world's first major period of globalization of trade and finance, which took place between 1870 and 1914. The "second globalization" began in 1944 and ended in 1971. This led to the third era of globalization, which began in 1989 and continues today.[1]
The period from 1870 to 1914 represents the peak of 19th-century globalization. First globalization is known for increasing transfers of commodities, people, capital and labour between and within continents. However, it is not only about the movement of goods or factors of production. First globalization also includes technological transfers and the rise of international cultural and scientific cooperation. The 1876 World Fair in Philadelphia was the first not to take place in Europe. The modern Olympics began in 1896. The first Nobel prizes were awarded in 1901.[2][3][4]
International trade grew for many reasons. Constant technological improvement and increasing usage went hand in hand with a decline in international freight rates. The development of railways lowered transport costs, which resulted in massive migration within Europe and from the Old World to the New World. The gold standard made possible exchange-rate stability and a reduction of uncertainty in trade. Peace between the main powers and the reduction of trade barriers further promoted trade.[2][3][4]
The period 1870–1914 is also known as the laissez-faire era, during which mostly liberal international policies were in place. However, the trade policies of the time lacked reciprocity.[3]
This period saw financial crises comparable to those of the late twentieth and early twenty-first centuries, and the end of the First globalization is associated with the collapse of international trade at the outbreak of World War I.[3]
History
Globalization revolves around technological and social advances, which in turn lead to advances in trade and cultural exchange throughout the world. Some economists claim that globalization first started with the discovery of the Americas by Christopher Columbus. This claim is considered false because the mass discovery of gold and silver in American mines decreased the value of silver and gold in Europe, causing inflation in the Spanish and Portuguese empires.[5] However, the discovery of the Americas and their native populations gave European traders a new source of labor, which also increased trade between the continents. This stage has not been officially deemed the "first era of globalization" because world trade was not yet increasing exponentially: it grew by about 1% per year from 1500 to 1800, which eventually paved the way for the first era of globalization.[6]
Entering the 18th century, world trade started to increase rapidly thanks to new technological breakthroughs. The first such advancement was the steam engine, introduced in the 17th century. This led to major progress in international trade among the economic powers of the world.[7]
The invention of the steamship had a great impact on the first wave of globalization. Before its invention, trade routes were reliant on wind patterns; steamships reduced both shipping time and shipping cost. By 1850, some 129 countries used steamships for trade, and imports and exports moved by steamship among roughly 5,000 cities, making a great impact on the world's global economy.[8]
Trade
Integration during the First globalization period can be demonstrated in many ways. The volume of international flows, the ratio of commodity trade to GDP and the cost of moving goods or factors of production across borders are a few of the measures that show the increasing trade trend between 1870 and 1914. The third measure shows up in international price gaps: for example, the price gap of wheat between Liverpool and Chicago fell from 57.6% to 15.6%, and the price gap of bacon between London and Cincinnati fell from 92.5% to 17.9%.[2]
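The price-gap measure can be made concrete with a short sketch; the computation below uses hypothetical origin and destination prices chosen only to reproduce the percentages quoted in this section:

```python
def price_gap(destination_price, origin_price):
    """Percentage by which the destination-market price exceeds the origin price."""
    return (destination_price - origin_price) / origin_price * 100

# Hypothetical wheat prices (per unit) picked so the gaps match the
# figures in the text: 57.6% around 1870, 15.6% by 1913.
gap_1870 = price_gap(destination_price=1.576, origin_price=1.0)
gap_1913 = price_gap(destination_price=1.156, origin_price=1.0)
print(f"wheat price gap: {gap_1870:.1f}% -> {gap_1913:.1f}%")
```

A shrinking gap of this kind is direct evidence of market integration: once freight and handling cost less, arbitrage keeps prices in the two markets close together.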
Many factors contributed to the growth of international trade. Falling transportation costs, the reduction of trade barriers and the move to free trade in several countries are just a few of them. Europe was a net exporter of manufactures and a net importer of primary products; the New World exchanged food and raw materials for European manufactured goods. This ended up being beneficial for European workers because, in an era when a large portion of income was still spent on food, cheaper transport meant cheaper food and thus higher real wages. However, it was not so beneficial for farmers: only in countries that retained agricultural free trade, like the United Kingdom, were farmers fully exposed to the price and rent reductions that globalization implied. Trade between industrialized economies was the prevalent form of trade before 1914.[2][3][4]
Capital
International capital market integration was impressive during this period. By 1914, foreign assets accounted for nearly 20% of the world's GDP, a figure not reached again until the 1970s. Europe was the main driving force: in 1914, over 87% of total foreign investment belonged to European countries. While economic institutions and policies helped with the expansion of international capital integration, the absence of military conflict between the main lending countries and the reduction in exchange-rate risk and transaction costs due to the gold standard kicked off the trend.[2][3]
Investment went into economies with exploitable natural resources rather than economies with cheap labour. The aim was not to internationalize production but to facilitate access to raw materials that Europe was not able to produce in great quantities. Therefore, international investment was highly concentrated. It mainly went into the construction of railways, land improvement, housing and other social projects that made life more pleasant for workers and were beneficial for European consumers.[2][3]
Migration
Migration was a large part of the First globalization. Migration rates were enormous in European countries like Italy, Greece and Ireland, and migrations were not just transoceanic but within Europe as well. The fact that American and Australian workers earned higher wages than their European counterparts was the main reason for the mass migrations; combined with low travel costs and liberal policies, mass migration was inevitable. Migration had its greatest impact on European workers' living standards: lowering the labour supply pushed up real wages. On the other hand, migration hurt their counterparts overseas, as immigration in the United States lowered unskilled wages. This resulted in tightening restrictions on immigration in the main destination countries.[2]
Technology
In Europe and the Atlantic world, technologies had been circulating relatively freely for a long time by the late 19th century, despite laws forbidding the emigration of skilled workers and the export of machinery. The decline in transport and communication costs helped the diffusion of ideas, new goods and machines, and the diffusion of technologies was also supported by the creation of international scientific and technical organizations. However, science was also seen as a weapon in the struggle between European nations: France and Germany each hoped to tighten their links with allied and neutral countries, especially the United States. Later restrictive policies, aimed at import substitution, resulted in firms setting up production in foreign countries and transforming themselves into multinationals.[2]
Gold standard
The period of the First globalization saw the rise and fall of the gold standard. During the trade boom from 1870 to 1914 one country after another joined the gold standard regime, and gradually the system spread. The gold standard allowed countries to convert their currencies to gold, which reduced exchange-rate risk and transaction costs and assured potential investors that returns were reasonably safe.[2][3][4]
The gold standard was the central pillar of the First globalization. The collapse of global financial integration in the summer of 1914 brought down the gold standard as well; its final collapse came in the 1930s.[2][4]
After 1914
The beginning of World War I is associated with a collapse of global financial integration and a decline in trade. The emergence of new borders and rising levels of protection pushed up trade barriers that continued to rise after the end of the war, while tariffs, quotas and other commercial policy barriers also grew. Global bodies and international conferences tried to normalize trade relations, but governments were unwilling to undo their barriers, and after the Imperial Economic Conference in Ottawa in 1932, international cooperation was no longer even an illusion. Interested parties thought that the restoration of the gold standard was a goal worth pursuing; however, after a brief return between 1925 and 1929 came the collapse of the gold standard in the 1930s, which drove trade volumes even lower.[2][4]
Sources
- "On #Trade, #Globalization, #Development and Steamships". The NEP-HIS Blog. 2014-10-08. Retrieved 2018-04-09.
https://en.wikipedia.org/wiki/First_globalization
Fair trade is a term for an arrangement designed to help producers in developing countries achieve sustainable and equitable trade relationships. The fair trade movement combines the payment of higher prices to exporters with improved social and environmental standards. The movement focuses in particular on commodities, or products that are typically exported from developing countries to developed countries but is also used in domestic markets (e.g., Brazil, the United Kingdom and Bangladesh), most notably for handicrafts, coffee, cocoa, wine, sugar, fruit, flowers and gold.[1][2]
Fair trade labelling organizations commonly use a definition of fair trade developed by FINE, an informal association of four international fair trade networks: Fairtrade Labelling Organizations International, World Fair Trade Organization (WFTO), Network of European Worldshops and European Fair Trade Association (EFTA). Fair trade, by this definition, is a trading partnership based on dialogue, transparency and respect, that seeks greater equity in international trade. Fair trade organizations, backed by consumers, support producers, raise awareness and campaign for changes in the rules and practice of conventional international trade.[3]
There are several recognized fair trade certifiers, including Fairtrade International (formerly called FLO, Fairtrade Labelling Organizations International), IMO, Make Trade Fair, and Eco-Social. Additionally, Fair Trade USA, formerly a licensing agency for the Fairtrade International label, broke from the system and implemented its own fair trade labelling scheme, which expanded the scope of fair trade to include independent smallholders and estates for all crops. In 2008, Fairtrade International certified approximately €3.4 billion worth of products.[4][5]
On 6 June 2008, Wales became the world's first Fair Trade Nation, followed by Scotland in February 2013.[6][7][8] The fair trade movement is popular in the UK, where there are over 500 Fairtrade towns, 118 universities, over 6,000 churches, and over 4,000 UK schools registered in the Fairtrade Schools Scheme.[9] In 2011, more than 1.2 million farmers and workers in more than 60 countries participated in Fairtrade International's fair trade system, which included €65 million in fairtrade premium paid to producers for use in developing their communities.[10]
Some criticisms have been raised about fair trade systems. One 2015 study concluded that producer benefits were close to zero because there was an oversupply of certification, and only a fraction of produce classified as fair trade was actually sold on fair trade markets, just enough to recoup the costs of certification.[11] A study published in the Journal of Economic Perspectives, however, suggests that fair trade does achieve many of its intended goals, although on a comparatively modest scale relative to the size of national economies.[12] Some research indicates that the implementation of certain fair trade standards can cause greater inequalities in markets where these rigid rules are inappropriate for the specific market.[13][14][15] In the fair trade debate there are complaints of failure to enforce the fair trade standards, with producers, cooperatives, importers, and packers profiting by evading them.[16][17][18][19][20] One proposed alternative to fair trade is direct trade, which eliminates the overhead of fair trade certification and allows suppliers to receive prices much closer to the retail value of the end product. Some suppliers use relationships started in a fair trade system to springboard autonomously into direct sales relationships they negotiate themselves, whereas other direct trade systems are supplier-initiated for social responsibility reasons similar to those of fair trade systems.
System
A large number of fair trade and ethical marketing organizations employ a variety of marketing strategies.[21] Most fair trade marketers believe it is necessary to sell the products through supermarkets to get a sufficient volume of trade to affect the developing world.[21] In 2018, nearly 700,000 metric tons of fair-trade bananas were sold worldwide, with the next largest fair-trade commodity being cocoa beans (260,000 tons) then coffee beans (207,000 tons). The biggest product in the market in terms of units was fair-trade flowers, with over 825 million units sold.[22]
To gain a licence to use the FAIRTRADE mark, businesses need to apply for products to be certified by submitting information about their supply chain; they can then have individual products certified depending on how these are sourced.[23] Coffee packers in developed countries pay a fee to the Fairtrade Foundation for the right to use the brand and logo. Packers and retailers can charge as much as they want for the coffee. The coffee has to come from a certified fair trade cooperative, and there is a minimum price when the world market is oversupplied. Additionally, the cooperatives are paid a premium of 10 cents per pound by buyers for community development projects.[24] The cooperatives can, on average, sell only a third of their output as fair trade, because of lack of demand, and sell the rest at world prices.[25][26][27][28][29][30][31] The exporting cooperative can spend the money in several ways. Some goes to meeting the costs of conformity and certification: as cooperatives have to meet fair trade standards on all their produce, they have to recover the costs from a small part of their turnover,[27] sometimes as little as 8%,[30] and may not make any profit. Some meets other costs. Some is spent on social projects such as building schools, health clinics and baseball pitches. Sometimes there is money left over for the farmers. The cooperatives sometimes pay farmers a higher price than private traders do, sometimes less, but there is no clear evidence on which is more common.[32]
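The economics in the paragraph above can be sketched in a few lines. In the sketch below, the output figure, world price and minimum price are illustrative assumptions; only the 10-cents-per-pound premium and the roughly one-third fair trade share come from the text:

```python
def cooperative_revenue(output_lbs, world_price, ft_minimum, ft_share, premium=0.10):
    """Split a cooperative's coffee revenue between fair trade and world-market sales.

    ft_share   -- fraction of output sold on fair trade terms (text: about 1/3)
    premium    -- social premium per pound on fair trade sales (text: $0.10)
    ft_minimum -- fair trade minimum price, which binds only when the world
                  market price falls below it (i.e. when the market is oversupplied)
    """
    ft_lbs = output_lbs * ft_share
    ft_price = max(ft_minimum, world_price) + premium
    world_lbs = output_lbs - ft_lbs
    return {
        "fair_trade": ft_lbs * ft_price,
        "world_market": world_lbs * world_price,
    }

# Hypothetical cooperative: 90,000 lb of output, world price $1.00/lb,
# fair trade minimum $1.40/lb, one third of output sold as fair trade.
r = cooperative_revenue(90_000, world_price=1.00, ft_minimum=1.40, ft_share=1 / 3)
total = r["fair_trade"] + r["world_market"]
```

Under these assumed numbers, roughly $45,000 comes from fair trade sales and $60,000 from world-market sales; the conformity, certification and social-project costs described above must then be covered out of the fair trade portion.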
The marketing system for fair trade and non-fair trade coffee is identical in the consuming and developing countries, using mostly the same importing, packing, distributing, and retailing firms used worldwide. Some independent brands operate a "virtual company", paying importers, packers and distributors, and advertising agencies to handle their brand, for cost reasons.[33] In the producing country, fair trade is marketed only by fair trade cooperatives, while other coffee is marketed by fair trade cooperatives (as uncertified coffee), by other cooperatives and by ordinary traders.[25][26][27][30][28]
To become a certified fair trade producer, the primary cooperative and its member farmers must operate to certain political standards, imposed from Europe. FLO-CERT, the for-profit certification body, inspects and certifies producer organizations in more than 50 countries in Africa, Asia, and Latin America.[34] In the fair trade debate there are many complaints of failure to enforce these standards, with producers, cooperatives, importers, and packers profiting by evading them.[16][18][35][19][20][36][28][37][38][39][40][41]
There remain many fair trade organizations that adhere more or less to the original objectives of fair trade and that market products through alternative channels where possible and through specialist fair trade shops, but they have a small proportion of the total market.[42]
Effect on growers
Fair trade benefits workers in developing countries to varying degrees, considerably in some cases and only marginally in others. The nature of fair trade makes it a global phenomenon; therefore, there are diverse motives for group formation related to fair trade. The social transformation caused by the fair trade movement also varies around the world.[43]
A study of coffee growers in Guatemala illustrates the effect of fair trade practices on growers. In this study, thirty-four farmers were interviewed. Of those thirty-four growers, twenty-two had an understanding of fair trade based on internationally recognized definitions, for example, describing fair trade in market and economic terms or knowing what the social premium is and how their cooperative has used it. Three growers demonstrated a deep understanding of fair trade, showing a knowledge of both fair market principles and how fair trade affects them socially. Nine growers had erroneous or no knowledge of fair trade.[43] The three growers who had a deeper knowledge of the social implications of fair trade all had responsibilities within their cooperatives: one was a manager, one was in charge of the wet mill, and one was his group's treasurer. These farmers showed no common pattern in terms of years of education, age, or years of membership in the cooperative; rather, their answers to the question "Why did you join?" differentiate them from other members and explain their extensive knowledge of fair trade. These farmers cited switching to organic farming, wanting to raise money for social projects, and the training offered as reasons for joining the cooperative, beyond receiving a better price for their coffee.[43]
Many farmers around the world are unaware of fair trade practices that they could be implementing to earn a higher wage. Coffee is one of the most highly traded commodities in the world, yet the farmers who grow it typically earn less than $2 a day.[44] When surveyed, farmers from Cooperativa Agraria Cafetalera Pangoa (CAC Pangoa) in San Martín de Pangoa, Peru, could answer positively that they have heard about fair trade, but were not able to give a detailed description about what fair trade is. They could, however, identify fair trade based on some of its possible benefits to their community. When asked, farmers responded that fair trade has had a positive effect on their lives and communities. They also wanted consumers to know that fair trade is important for supporting their families and their cooperatives.[44]
Some producers also profit from the indirect benefits of fair trade practices. Fair trade cooperatives create a space of solidarity and promote an entrepreneurial spirit among growers. When growers feel like they have control over their own lives within the network of their cooperative, it can be empowering. Operating a profitable business allows growers to think about their future, rather than worrying about how they are going to survive in poverty.[43]
As for farmers' satisfaction with the fair trade system, growers want consumers to know that fair trade has provided important support to their families and their cooperatives. Overall, farmers are satisfied with the current fair trade system, but some, such as the Mazaronquiari group from CAC Pangoa, desire a still higher price for their products in order to live a higher quality of life.[44]
A component of fair trade is the social premium that buyers of fair trade goods pay to the producers or producer-groups of such goods. An important feature of the fair trade social premium is that the producers or producer-groups decide where and how it is spent. These premiums usually go towards socioeconomic development, wherever the producers or producer-groups see fit. Within producer-groups, decisions about how the social premium will be spent are handled democratically, with transparency and participation.[43]
Producers and producer-groups spend this social premium to support socioeconomic development in a variety of ways. One common way is to invest privately in public goods that the government and existing infrastructure fail to provide, including environmental initiatives, public schools, and water projects. At some point, all producer-groups re-invest their social premium back into their farms and businesses. They buy capital, like trucks and machinery, and education for their members, like organic farming education. Thirty-eight percent of producer-groups spend the social premium in its entirety on themselves, but the rest invest in public goods, like paying teachers' salaries, providing a community health care clinic, and improving infrastructure, such as bringing in electricity and bettering roads.[43]
Farmers' organisations that use their social premium for public goods often finance educational scholarships. For example, Costa Rican coffee cooperative Coocafé has supported hundreds of children and youth at school and university through the financing of scholarships from funding from their fair trade social premium. In terms of education, the social premium can be used to build and furnish schools too.[45]
Organizations promoting fair trade
Most fair trade import organizations are members of, or certified by, one of several national or international federations. These federations coordinate, promote, and facilitate the work of fair trade organizations. The following are some of the largest:
- Fairtrade International, created in 1997, is an association of three producer networks and twenty national labeling initiatives that develop fair trade standards, license buyers and label usage, and market the Fairtrade Certification Mark in consuming countries. The Fairtrade International labeling system is the largest and most widely recognized standard-setting and certification body for labelled fair trade. Formerly named Fairtrade Labelling Organizations International, it changed its name to Fairtrade International in 2009, when its producer certification and standard-setting activities were separated into two separate, but connected entities. FLO-CERT, the for-profit side, handles producer certification, inspecting and certifying producer organizations in more than 50 countries in Africa, Asia, and Latin America.[34] Fairtrade International, the nonprofit arm, oversees standards development and licensing organization activity. Only products from certain developing countries are eligible for certification, and for some products such as coffee and cocoa, certification is restricted to cooperatives. Cooperatives and large estates with hired labor may be certified for bananas, tea, and other crops.[46]
- Fair Trade USA[47] is an independent, nonprofit organization that sets standards, certifies, and labels products that promote sustainable livelihoods for farmers and workers and protect the environment. Founded in 1998, Fair Trade USA currently partners with over 1,000 brands, as well as 1.3 million farmers and workers across the globe.[48]
- Global Goods Partners (GGP) is a fair-trade nonprofit organization founded in 2005 that provides support and U.S. market access to women-led cooperatives in the developing world.
- World Fair Trade Organization (formerly the International Fair Trade Association) is a global association created in 1989 of fair trade producer cooperatives and associations, export marketing companies, importers, retailers, national and regional fair trade networks, and fair trade support organizations. In 2004 WFTO launched the FTO Mark which identifies registered fair trade organizations (as opposed to the FLO system, which labels products).
- The Network of European Worldshops (NEWS), created in 1994, is the umbrella network of 15 national worldshop associations in 13 countries in Europe.
- The European Fair Trade Association (EFTA), created in 1990, is a network of European alternative trading organizations that import products from some 400 economically disadvantaged producer groups in Africa, Asia, and Latin America. EFTA's goal is to promote fair trade and to make fair trade importing more efficient and effective. The organization also publishes various yearly publications on the evolution of the fair trade market. EFTA currently has eleven members in nine countries.
In 1998, the four federations listed above joined together as FINE, an informal association whose goal is to harmonize fair trade standards and guidelines, increase the quality and efficiency of fair trade monitoring systems, and advocate fair trade politically.
- Additional certifiers include IMO (Fair for Life, Social and Fair Trade labels), and Eco-Social.
- The Fair Trade Federation (FTF), created in 1994, is an association of Canadian and American fair trade wholesalers, importers, and retailers. The organization links its members to fair trade producer groups while acting as a clearinghouse for information on fair trade and providing resources and networking opportunities to its members. Members self-certify adherence to defined fair trade principles for 100% of their purchasing/business. Those who sell products certifiable by Fairtrade International must be 100% certified by Fairtrade International to join FTF.
Student groups have also been increasingly promoting fair trade products.[49] Although hundreds of independent student organizations are active worldwide, most groups in North America are either affiliated with United Students for Fair Trade (USA), the Canadian Student Fair Trade Network (Canada), or Fair Trade Campaigns[50] (USA), which also houses Fair Trade Universities[49] and Fair Trade Schools.[51]
The involvement of church organizations has been and continues to be an integral part of the fair trade movement:
- Ten Thousand Villages[52] is affiliated with the Mennonite Central Committee[53]
- SERRV[54] is partnered with Catholic Relief Services[55] and Lutheran World Relief[56]
- Village Markets[57] is a Lutheran Fair Trade organization connecting mission sites around the world with churches in the United States[58]
- Catholic Relief Services has their own Fair Trade mission in CRS Fair Trade[59]
History
The first attempts to commercialize fair trade goods in markets in the global north were initiated in the 1940s and 1950s by religious groups and various politically oriented non-governmental organizations (NGOs). Ten Thousand Villages, an NGO within the Mennonite Central Committee (MCC), and SERRV International were the first, in 1946 and 1949 respectively, to develop fair trade supply chains in developing countries.[60] The products, almost exclusively handicrafts ranging from jute goods to cross-stitch work, were mostly sold in churches or fairs. The goods themselves had often no other function than to indicate that a donation had been made.[61]
Solidarity trade
The current fair trade movement was shaped in Europe in the 1960s. Fair trade during that period was often seen as a political gesture against neo-imperialism: radical student movements began targeting multinational corporations, and concerns emerged that traditional business models were fundamentally flawed. The slogan at the time, "Trade not Aid", gained international recognition in 1968 when it was adopted by the United Nations Conference on Trade and Development (UNCTAD) to put the emphasis on the establishment of fair trade relations with the developing world.[62]
1965 saw the creation of the first alternative trading organization (ATO): that year, British NGO Oxfam launched "Helping-by-Selling", a program that sold imported handicrafts in Oxfam stores in the UK and from mail-order catalogues.[63]
By 1968, the Whole Earth Catalog was connecting thousands of specialized merchants, artisans, and scientists directly with consumers who were interested in supporting independent producers, with the goal of bypassing corporate retail and department stores. The Whole Earth Catalog sought to balance the international free market by allowing direct purchasing of goods produced primarily in the U.S. and Canada but also in Central and South America.
In 1969, the first worldshop opened its doors in the Netherlands. It aimed at bringing the principles of fair trade to the retail sector by selling almost exclusively goods produced under fair trade terms in "underdeveloped regions". The first shop was run by volunteers and was so successful that dozens of similar shops soon went into business in the Benelux countries, Germany, and other Western European countries.
Throughout the 1960s and 1970s, segments of the fair trade movement worked to find markets for products from countries that were excluded from the mainstream trading channels for political reasons. Thousands of volunteers sold coffee from Angola and Nicaragua in worldshops, in the back of churches, from their homes, and from stands in public places, using the products as a vehicle to deliver their message: give disadvantaged producers in developing countries a fair chance on the world's market.
Handicrafts vs. agricultural goods
In the early 1980s, alternative trading organizations faced challenges: the novelty of fair trade products began to wear off, demand reached a plateau, and some handicrafts began to look "tired and old fashioned" in the marketplace. The decline of segments of the handicrafts market forced fair trade supporters to rethink their business model and their goals. Moreover, several fair trade supporters were worried by the effect on small farmers of structural reforms in the agricultural sector as well as the fall in commodity prices. Many came to believe it was the movement's responsibility to address the issue and to find remedies for the ongoing crisis in the industry.
In subsequent years, fair trade agricultural commodities played an important role in the growth of many ATOs: successful on the market, they offered a source of income for producers and provided alternative trading organizations a complement to the handicrafts market. The first fair trade agricultural products were tea and coffee, followed by: dried fruits, cocoa, sugar, fruit juices, rice, spices and nuts. While in 1992, a sales value ratio of 80% handcrafts to 20% agricultural goods was the norm, in 2002 handcrafts amounted to 25% of fair trade sales while commodity food was up at 69%.[64]
Rise of labeling initiatives
Sales of fair trade products only took off with the arrival of the first Fairtrade certification initiatives. Although buoyed by growing sales, fair trade had been generally confined to small worldshops scattered across Europe and, to a lesser extent, North America. Some felt that these shops were too disconnected from the rhythm and the lifestyle of contemporary developed societies. The inconvenience of going to them to buy only a product or two was too high even for the most dedicated customers. The only way to increase sale opportunities was to offer fair trade products where consumers normally shop, in large distribution channels.[65] The problem was to find a way to expand distribution without compromising consumer trust in fair trade products and in their origins.
A solution was found in 1988, when the first fair trade certification initiative, Max Havelaar, was created in the Netherlands under the initiative of Nico Roozen, Frans Van Der Hoff, and Dutch development NGO Solidaridad. The independent certification allowed the goods to be sold outside the worldshops and into the mainstream, reaching a larger consumer segment and boosting fair trade sales significantly. The labeling initiative also allowed customers and distributors alike to track the origin of the goods to confirm that the products were really benefiting the producers at the end of the supply chain.
The concept caught on: in ensuing years, similar non-profit Fairtrade labelling organizations were set up in other European countries and North America. In 1997, a process of convergence among "LIs" ("Labeling Initiatives") led to the creation of Fairtrade Labelling Organizations International, an umbrella organization whose mission is to set fair trade standards, support, inspect, and certify disadvantaged producers, and harmonize the fair trade message across the movement.[66]
In 2002, FLO launched an International Fairtrade Certification Mark. The goals were to improve the visibility of the Mark on supermarket shelves, facilitate cross border trade, and simplify procedures for both producers and importers. The certification mark is used in more than 50 countries and on dozens of different products, based on FLO's certification for coffee, tea, rice, bananas, mangoes, cocoa, cotton, sugar, honey, fruit juices, nuts, fresh fruit, quinoa, herbs and spices, wine, footballs, etc.
With ethical labeling, consumers can take moral responsibility for their economic decisions and actions. This supports the notion of fair trade practices as "moral economies".[67] The presence of labeling gives consumers the feeling of "doing the right thing" with a simple purchase.
Labeling practices place the burden of getting certification on the producers in the Global South, furthering inequality between the Global North and the Global South. The process of securing certification is burdensome and expensive. Northern consumers are able to make a simple choice while being spared these burdens and expenses.[68]
Psychology
Consumers of fair trade products usually make the intentional choice to purchase fair trade goods based on attitude, moral norms, perceived behavioral control, and social norms. Including a measure of moral norms improves the prediction of intentions to buy fair trade beyond the basic predictors of attitude and perceived behavioral control.[67]
University students have significantly increased their consumption of fair trade products over recent decades. Women college students have a more favorable attitude than men toward buying fair trade products and feel more morally obligated to do so; women also report stronger intentions to buy fair trade products.[67] Producers organize and strive for fair trade certification for several reasons: religious ties, a desire for social justice or autonomy, political liberalization, or simply the wish to be paid more for their labor and products. Farmers are more likely to identify with organic farming than with fair trade practices, because organic farming is a visible way in which they differ from their neighbors and it shapes how they farm; they place importance on natural growing methods.[44] Fair trade farmers are also more likely to attribute their higher prices to the quality of their products rather than to fair market prices.[43]
Product certification
Fairtrade labelling (usually simply Fairtrade or Fair Trade Certified in the United States) is a certification system that allows consumers to identify goods that meet certain standards. Overseen by a standard-setting body (Fairtrade International) and a certification body (FLO-CERT), the system involves independent auditing of producers and traders to ensure the standards are met. For a product to carry either the International Fairtrade Certification Mark or the Fair Trade Certified Mark, it must come from FLO-CERT inspected and certified producer organizations. The crops must be grown and harvested in accordance with the standards set by FLO International. The supply chain must be monitored by FLO-CERT, to ensure the integrity of the labelled product.
Fairtrade certification purports to guarantee not only fair prices, but also ethical purchasing principles. These principles include adherence to ILO agreements such as those banning child and slave labour, guaranteeing a safe workplace and the right to unionise, adherence to the United Nations charter of human rights, a fair price that covers the cost of production and facilitates social development, and protection of the environment. The Fairtrade certification also attempts to promote long-term business relationships between buyers and sellers, crop pre-financing, and greater transparency throughout the supply chain.
The Fairtrade certification system covers a wide range of products, including bananas, honey, coffee, oranges, cocoa, cotton, dried and fresh fruits and vegetables, juices, nuts and oil seeds, quinoa, rice, spices, sugar, tea, and wine. Companies offering products that meet Fairtrade standards may apply for licences to use one of the Fairtrade Certification Marks for those products. The International Fairtrade Certification Mark was launched in 2002 by FLO and replaced twelve Marks used by the various Fairtrade labelling initiatives. The new Certification Mark is used worldwide, with the exception of the United States, where the Fair Trade Certified Mark is still used to identify Fairtrade goods.
The fair trade industry standards provided by Fairtrade International use the word "producer" in several different senses, often in the same specification document. Sometimes it refers to farmers, sometimes to the primary cooperatives they belong to, to the secondary cooperatives that the primary cooperatives belong to, or to the tertiary cooperatives that the secondary cooperatives may belong to,[69] but "Producer [also] means any entity that has been certified under the Fairtrade International Generic Fairtrade Standard for Small Producer Organizations, Generic Fairtrade Standard for Hired Labour Situations, or Generic Fairtrade Standard for Contract Production."[70] The word is used in all these meanings in key documents.[71] In practice, when price and credit are discussed, "producer" means the exporting organization: "For small producers' organizations, payment must be made directly to the certified small producers' organization",[72] and "In the case of a small producers' organization [e.g. for coffee], Fairtrade Minimum Prices are set at the level of the Producer Organization, not at the level of individual producers (members of the organization)", which means that the "producer" here is halfway up the marketing chain between the farmer and the consumer.[72] The parts of the standards referring to cultivation, environment, pesticides, and child labour treat the farmer as the "producer".
Alternative trading organizations
An alternative trading organization (ATO) is usually a non-governmental organization (NGO) or mission-driven business aligned with the fair trade movement that aims "to contribute to the alleviation of poverty in developing regions of the world by establishing a system of trade that allows marginalized producers in developing regions to gain access to developed markets."[73] ATOs have fair trade at the core of their mission and activities, using it as a development tool to support disadvantaged producers and to reduce poverty[74] and combining their marketing with awareness-raising and campaigning.
ATOs often have roots in political and religious groups, though their secular purpose precludes sectarian identification and evangelical activity. The grassroots political-action agenda of ATOs associates them with progressive political causes active since the 1960s: foremost, a belief in collective action and a commitment to moral principles based on social, economic, and trade justice.
According to EFTA, the defining characteristic of ATOs is equal partnership and respect: partnership between developing-region producers and importers, shops, labelling organizations, and consumers. Alternative trade "humanizes" the trade process, making the producer-consumer chain as short as possible so that consumers become aware of the culture, identity, and conditions in which producers live. All actors are committed to the principle of alternative trade, to the need for advocacy in their working relations, and to the importance of awareness-raising and advocacy work.[73] Examples of such organisations are Ten Thousand Villages, Greenheart Shop, Equal Exchange, and SERRV International in the U.S.; Equal Exchange Trading, Traidcraft, Oxfam Trading, Twin Trading, and Alter Eco in Europe; and Siem Fair Trade Fashion in Australia.
Universities
The concept of a Fair Trade school or Fair Trade university emerged in the United Kingdom, where the Fairtrade Foundation maintains a list of colleges and schools that meet the requirements for the label. To be considered a Fair Trade University, an institution must establish a Fairtrade School Steering Group; adopt and implement a written, school-wide fair trade policy; commit to selling and using Fair Trade products; learn and educate about Fair Trade issues; and promote fair trade not only within the school but throughout the wider community.[43]
A Fair Trade University develops all aspects of fair trade practices in its coursework. In 2007, David Barnhill, Director of the Environmental Studies program at the University of Wisconsin-Oshkosh, began an effort to make it the first Fair Trade University, which received positive reactions from faculty and students. To begin, the university agreed that it would need support from four institutional groups (faculty, staff, support staff, and students) to maximize support and educational efforts. The university endorsed the Earth Charter and created a Campus Sustainability Plan to align with the effort of becoming a Fair Trade University.[75]
The University of Wisconsin-Oshkosh also offers courses in different disciplines that implement fair trade learning. They offer a business course with a trip to Peru to visit coffee farmers, an environmental science class that discusses fair trade as a way for cleaner food systems, an English course that focuses on the Earth Charter and the application of fair trade principles, and several upper-level anthropology courses focused on fair trade.[75]
In 2010, the University of California, San Diego became the second Fair Trade University in the United States. UC San Diego drew on the efforts of the Fairtrade Foundation in the UK but wanted to be more specific about how its declaration as a Fair Trade University would change the way on-campus franchises do business with the university, requiring constant assessment and improvement. For UC San Diego, being a Fair Trade University is a promise between the university and its students of a continual effort to increase the accessibility of fair trade-certified food and drinks and to encourage sustainability in other ways, such as buying from local, organic farmers and decreasing waste.[43]
Fair Trade Universities have been successful because they are a "feel good" movement. Because the movement has an established history, it is not just a fad. It raises awareness about an issue and offers a solution. The solution is an easy one for college students to handle: paying about five cents more for a cup of coffee or tea.[43]
Worldshops
Worldshops, or fair trade shops, are specialized retail outlets that offer and promote fair trade products. Worldshops also typically organize educational fair trade activities and play a role in trade justice and other North-South political campaigns. Worldshops are often not-for-profit organizations run by local volunteer networks. The movement emerged in Europe, and a majority of worldshops are still based on the continent, but worldshops also exist in North America, Australia, and New Zealand.
Worldshops aim to make trade as direct and fair with the trading partners as possible: usually this means a producer in a developing country and consumers in industrialized countries. Worldshops aim to pay producers a fair price that guarantees subsistence and positive social development, and they often cut out intermediaries in the import chain. A web movement that began in the 2000s provides fair trade items at fair prices to consumers; one example is "Fair Trade a Day",[76] on which a different fair trade item is featured each day.
Worldwide
In the mid-2000s, sales of Fair Trade products grew close to 30% each year and in 2004 were worth over US$500 million. In the case of coffee, sales grew nearly 50% per year in certain countries.[77] In 2002, 16,000 tons of Fairtrade coffee were purchased by consumers in 17 countries.[77] "Fair trade coffee is currently produced in 24 countries in Latin America, Africa, and Asia".[77] The 165 FLO associations in Latin America and the Caribbean are located in 14 countries and, as of 2004, together exported over 85% of the world's Fair Trade coffee.[77] There is a North/South divide in fair trade products, with producers in the South and consumers in the North. Discrepancies between the perspectives of producers and consumers prompt disputes about how the purchasing power of consumers may or may not promote the development of southern countries.[78] "Purchasing patterns of fairtrade products have remained strong despite the global economic downturn. In 2008, global sales of fairtrade products exceeded US$3.5 billion."[79]
Africa
Africa's labor market has become an integral part of the global supply chain (GSC) and is expected to attract foreign direct investment (FDI).[80] As the continent closes its infrastructure gap, it increases its exports to the world. Africa's fair trade exports, from places like South Africa, Ghana, Uganda, Tanzania, and Kenya, were valued at US$24 million as of 2009.[81] Between 2004 and 2006, Africa expanded the number of FLO-certified producer groups from 78 to 171, nearly half of which are in Kenya, followed closely by Tanzania and South Africa.[81] The FLO products Africa is known for are tea, cocoa, flowers, and wine.[81] In Africa, both smallholder cooperatives and plantations produce Fair Trade certified tea.[81] Cocoa-producing countries in West Africa often form cooperatives that produce fair trade cocoa, such as Kuapa Kokoo in Ghana.[82] West African countries without strong fair trade industries, including Cameroon, Nigeria, and the Ivory Coast, are subject to deterioration in cocoa quality as they compete with other countries for a profit.[83]
Latin America
Studies in the early 2000s showed that the income, education, and health of coffee producers involved with Fair Trade in Latin America improved relative to producers who were not participating.[84] Brazil, Nicaragua, Peru, and Guatemala, which have the largest populations of coffee producers, devote some of the largest areas of land to coffee production in Latin America and do so in part through Fair Trade.[84]
Latin American countries are also large exporters of fair trade bananas. The Dominican Republic is the largest producer of fair trade bananas, followed by Mexico, Ecuador, and Costa Rica. Producers in the Dominican Republic set up associations rather than cooperatives so that individual farmers can each own their own land, but meet regularly.[82]
Fundación Solidaridad was created in Chile to increase the earnings and social participation of handicraft producers. These goods are marketed locally in Chile and internationally.[83] Fair trade handicraft and jewellery production has risen, aided by North American and European online retailers developing direct relationships to import and sell the products online. The sale of fair trade handicrafts online has aided the development of female artisans in Latin America.[85]
Asia
The Asia Fair Trade Forum aims to increase the competitiveness of fair trade organizations in Asia in the global market. Garment factories in Asian countries including China, Burma and Bangladesh are regularly accused of human rights violations, including the use of child labour.[82] These violations conflict with the principles outlined by fair trade certifiers. In India, Trade Alternative Reform Action (TARA) Projects, formed in the 1970s, worked to increase production capacity, quality standards, and entrance into markets for home-based craftsmen that were previously unattainable due to their lower caste identity.[83] Fairtrade India was established in 2013 in Bangalore.[86]
Australia
The Fair Trade Association of Australia and New Zealand (FTAANZ) supports two systems of fair trade. The first is membership, as the Australia and New Zealand representative, of FLO International, which unites Fairtrade producer and labelling initiatives across Europe, Asia, Latin America, North America, Africa, Australia, and New Zealand. The second is the World Fair Trade Organization (WFTO), of more than 450 worldwide members, of which FTAANZ is one. Fairtrade (one word) refers to FLO-certified commodities and associated products; fair trade (two words) encompasses the wider fair trade movement, including Fairtrade commodities and other artisan craft products.
Commodities
Fair trade commodities are import/export goods that are certified by a fair trade certification organization such as Fair Trade USA or the World Fair Trade Organization. Such organizations are typically overseen by Fairtrade International, which sets international fair trade standards and supports fair trade producers and cooperatives.[87] Sixty percent of the fair trade market consists of food products such as coffee, tea, cocoa, honey, and bananas.[88] Non-food commodities include crafts, textiles, and flowers. Shima Baradaran of Brigham Young University suggests that fair trade techniques could be productively applied to products whose production might otherwise involve child labor.[89] Although fair trade represents only 0.01% of the food and beverage industry in the United States, it has been growing rapidly.[90]
Coffee
Coffee is the most well-established fair trade commodity. Most Fair Trade coffee is Coffea arabica, which is grown at high altitudes. Fair Trade markets emphasize the quality of coffee because they usually appeal to customers who are motivated by taste rather than price. The fair trade movement focused on coffee first because it is a highly traded commodity for most producing countries, and almost half the world's coffee is produced by smallholder farmers.[43] At first, fair trade coffee was sold on a small scale; now multinationals like Starbucks and Nestlé use fair trade coffee.[91]
Internationally recognized Fair Trade coffee standards outlined by FLO are as follows: small producers are grouped in democratic cooperatives or groups; buyers and sellers establish long-term, stable relationships; buyers pay producers at least the minimum Fair Trade price or, when the market price is higher, the market price; and buyers pay a social premium of US$0.20 per pound of coffee to the producers. The minimum Fair Trade price for high-grade, washed Arabica coffee has been set at US$1.40 per pound, or US$1.70 per pound if the coffee is organic.[43]
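The pricing rule above amounts to a simple floor-plus-premium calculation. As a minimal sketch in Python (the function name is illustrative, and the per-pound figures are the ones quoted in the text, which may be outdated):

```python
# Sketch of the FLO coffee pricing rule described above.
# Figures are US$ per pound as quoted in the text and may be outdated.
FAIRTRADE_MINIMUM = 1.40      # washed Arabica minimum price
ORGANIC_DIFFERENTIAL = 0.30   # extra for organic (1.70 - 1.40)
SOCIAL_PREMIUM = 0.20         # paid on top in all cases

def fair_trade_price(market_price: float, organic: bool = False) -> float:
    """Price per pound a buyer pays a certified producer group."""
    floor = FAIRTRADE_MINIMUM + (ORGANIC_DIFFERENTIAL if organic else 0.0)
    # Producers receive the higher of the market price and the minimum,
    # plus the social premium.
    return max(market_price, floor) + SOCIAL_PREMIUM

# Market below the floor: the minimum applies.
print(fair_trade_price(1.10))                 # 1.40 + 0.20 = 1.60
# Market above the (organic) floor: the market price applies.
print(fair_trade_price(2.00, organic=True))   # 2.00 + 0.20 = 2.20
```

The effect of the rule is that producer revenue never falls below the floor even when world coffee prices collapse, while the premium is paid regardless of market conditions.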
Locations
The largest sources of fair trade coffee are Uganda and Tanzania, followed by Latin American countries such as Guatemala and Costa Rica.[88] As of 1999, major importers of fair trade coffee included Germany, the Netherlands, Switzerland, and the United Kingdom. There is a North/South divide between fair trade consumers and producers, and North American countries were not among the top importers of fair trade coffee.[88]
Labour
Starbucks began to purchase more fair trade coffee in 2001 because of charges of labor rights violations in Central American plantations; several competitors, including Nestlé, followed suit.[92] Large corporations that sell non-fair trade coffee take 55% of what consumers pay for coffee, while only 10% goes to the producers. Small growers dominate the production of coffee, especially in Latin American countries such as Peru. Coffee has been the fastest-growing fairly traded commodity, and an increasing number of producers are small farmers who own their own land and work in cooperatives. The incomes of growers of fair trade coffee beans depend on the market value of coffee where it is consumed, so farmers of fair trade coffee do not necessarily live above the poverty line or receive fully fair prices for their commodity.[82]
Unsustainable farming practices can harm plantation owners and laborers. Practices such as heavy use of agrochemicals and unshaded growing are risky, and small growers who take on economic risk by not diversifying their farming could lose money and resources due to fluctuating coffee prices, pest problems, or policy shifts.[93]
The effectiveness of Fairtrade has been questioned: workers on Fairtrade farms have been found to have a lower standard of living than those on similar farms outside the Fairtrade system.[94]
Sustainability
As coffee became one of the most important export crops in regions such as northern Latin America, nature and agriculture were transformed. Increased productivity required technological innovations, and the coffee agroecosystem changed accordingly. In the nineteenth century in Latin America, coffee plantations began replacing sugarcane and subsistence crops. Coffee crops became more intensively managed: they were put into rows and unshaded, meaning forest diversity decreased and Coffea trees were kept shorter. As plant and tree diversity decreased, so did animal diversity. Unshaded plantations allow a higher density of Coffea trees but are less protected from wind and suffer more soil erosion. Technified coffee plantations also use chemicals such as fertilizers, insecticides, and fungicides.[93]
Fair trade certified commodities must adhere to sustainable agro-ecological practices, including reduction of chemical fertilizer use, prevention of erosion, and protection of forests. Coffee plantations are more likely to be fair trade certified if they use traditional farming practices with shading and without chemicals. This protects the biodiversity of the ecosystem and ensures that the land will be usable for farming in the future and not just for short-term planting.[88] In the United States, 85% of fair trade certified coffee is also organic.[83]
Consumer attitudes
Consumers typically have positive attitudes about products that are ethically made. These products may promise fair labor conditions, protection of the environment, and protection of human rights. Fair trade products meet standards like these. Despite positive attitudes toward ethical products such as fair trade commodities, consumers often are not willing to pay higher prices for fair trade coffee. The attitude-behavior gap can help explain why ethical and fair trade products take up less than 1% of the market. Coffee consumers may say they are willing to pay a premium for fair trade coffee, but most consumers are more concerned with the brand, label, and flavor of the coffee. However, socially conscious consumers with a commitment to buying fair trade products are more likely to pay the premium associated with fair trade coffee.[95] When a sufficient number of consumers begin purchasing fair trade, companies will be more likely to carry fair trade products. Safeway Inc. began carrying fair trade coffee after individual consumers dropped off postcards asking for it.[96]
Coffee companies
The following coffee roasters and companies offer fair trade coffee or some roasts that are fair trade certified:
- Anodyne Coffee Roasting Company[97]
- Breve Coffee Company[97]
- Cafedirect[97][98]
- Counter Culture Coffee[99]
- Equal Exchange[97]
- GEPA[100]
- Green Mountain Coffee Roasters[97]
- Just Us![97]
- Peace Coffee[101]
- Pura Vida Coffee[97]
Cocoa
Many countries that export cocoa rely on it as their single export crop. In Africa in particular, governments tax cocoa as their main source of revenue. Cocoa is a permanent crop, which means that it occupies land for long periods of time and does not need to be replanted after each harvest.[102]
Locations
Cocoa is farmed in the tropical regions of West Africa, Southeast Asia, and Latin America. In Latin America, cocoa is produced in Costa Rica, Panama, Peru, Bolivia, and Brazil. Much of the cocoa produced in Latin America is organic and regulated by an internal control system. Bolivia has fair trade cooperatives that secure a fair share of the price for cocoa producers. African cocoa-producing countries include Cameroon, Madagascar, São Tomé and Príncipe, Ghana, Tanzania, Uganda, and Côte d'Ivoire.[102] Côte d'Ivoire exports over a third of the world's cocoa beans.[103] Southeast Asia accounts for about 14% of the world's cocoa production; the major producing countries there are Indonesia, Malaysia, and Papua New Guinea.[104]
Labour
Africa and other developing countries have received low prices for exported commodities such as cocoa, which perpetuated poverty. Fair trade seeks to establish a system of direct trade from developing countries to counteract this unfair system.[103] Most cocoa comes from small family-run farms in West Africa. These farms have little market access and so rely on middlemen to bring their products to market; sometimes middlemen treat farmers unfairly.[105] Farmers may instead join an agricultural cooperative that pays them a fair price for their cocoa.[106] One of the main tenets of fair trade is that farmers receive a fair price, but this does not mean that the higher price paid for fair trade cocoa goes directly to the farmers: much of the money goes to community projects such as water wells rather than to individual farmers. Nevertheless, cooperatives such as fair trade-endorsed Kuapa Kokoo in Ghana are often the only Licensed Buying Companies that will give farmers a fair price and not cheat them or rig sales.[107] Farmers in cooperatives are frequently their own bosses and receive bonuses per bag of cocoa beans. These arrangements are not always assured, however, and fair trade organizations cannot always buy all of the cocoa available to them from cooperatives.[82]
Marketing
Marketing of fair trade cocoa to European consumers often portrays cocoa farmers as dependent on western purchases for their livelihood and well-being. Showing African cocoa producers in this way is problematic because it is reminiscent of the imperialistic view that Africans cannot live happily without the help of westerners. It portrays the balance of power as being in favor of the consumers rather than the producers.[107]
Consumers often are not willing to pay the extra price for fair trade cocoa because they do not know what fair trade is. Activist groups can educate consumers about the unethical aspects of unfair trade and thereby promote demand for fairly traded commodities. Activism and ethical consumption not only promote fair trade but also act against powerful corporations, such as Mars, Incorporated, that refuse to acknowledge the use of forced child labor in the harvesting of their cocoa.[96]
Sustainability
Smallholding farmers frequently lack access not only to markets but also to resources for sustainable cocoa farming practices. Lack of sustainability can be due to pests, diseases that attack cocoa trees, lack of farming supplies, and lack of knowledge about modern farming techniques.[105] One issue for cocoa plantation sustainability is the time it takes a cocoa tree to produce pods; one solution is to change the type of cocoa tree being farmed. In Ghana, a hybrid cocoa tree yields two crops after three years rather than the typical one crop after five years.
Cocoa companies
The following chocolate companies use all or some fair trade cocoa in their chocolate:
Harkin-Engel Protocol
The Harkin-Engel Protocol, also commonly known as the Cocoa Protocol, is an international agreement meant to end some of the world's worst forms of child labor, as well as forced labor in the cocoa industry. It was first negotiated by Senator Tom Harkin and Representative Eliot Engel after they watched a documentary that showed the cocoa industry's widespread issue of child slavery and trafficking. The parties involved agreed to a six-article plan:
- Public statement of the need for and terms of an action plan—The cocoa industry acknowledged the problem of forced child labor and will commit "significant resources" to address the problem.
- Formation of multi-sectoral advisory groups—By 1 October 2001, an advisory group will be formed to research labor practices. By 1 December 2001, industry will form an advisory group and formulate appropriate remedies to address the worst forms of child labor.
- Signed joint statement on child labor to be witnessed at the ILO—By 1 December 2001, a statement must be made recognizing the need to end the worst forms of child labor and identify developmental alternatives for the children removed from labor.
- Memorandum of cooperation—By 1 May 2002, establish a joint action program of research, information exchange, and action to enforce standards to eliminate the worst forms of child labor, and establish a means of monitoring compliance with those standards.
- Establish a joint foundation—By 1 July 2002, industry will form a foundation to oversee efforts to eliminate the worst forms of child labor. It will perform field projects and be a clearinghouse on best practices.
- Building toward credible standards—By 1 July 2005, the industry will develop and implement industry-wide standards of public certification that cocoa has been grown without any of the worst forms of child labor.[117]
Textiles
Fair trade textiles are primarily made from fair trade cotton. By 2015, nearly 75,000 cotton farmers in developing countries had obtained fair trade certification. The minimum price that fair trade buyers pay allows cotton farmers to sustain and improve their livelihoods.[118] Fair trade textiles are frequently grouped with fair trade crafts and goods made by artisans, in contrast to cocoa, coffee, sugar, tea, and honey, which are agricultural commodities.[82]
Locations
India, Pakistan, and West Africa are the primary exporters of fair trade cotton, although many countries grow fair trade cotton.[119][120] Production of Fairtrade cotton was initiated in 2004 in four countries in West and Central Africa (Mali, Senegal, Cameroon, and Burkina Faso).[121] Textiles and clothing are exported from Hong Kong, Thailand, Malaysia, and Indonesia.[82]
Labour
Labour differs between textile production and agricultural commodities because textile production takes place in a factory, not on a farm. Children are a source of cheap labor, and child labor is prevalent in Pakistan, India, and Nepal. Fair trade cooperatives ensure fair and safe labor practices and do not allow child labor.[122] Fair trade textile producers are most often women in developing countries who struggle to meet consumer tastes in North America and Europe. In Nepal, textiles were originally made for household and local use; in the 1990s, women began joining cooperatives and exporting their crafts for profit, and handicrafts became Nepal's largest export. It is often difficult for women to balance textile production, domestic responsibilities, and agricultural work. Cooperatives foster the growth of democratic communities in which women have a voice despite being historically underprivileged.[122] For fair trade textiles and other crafts to succeed in Western markets, World Fair Trade Organizations require a flexible workforce of artisans in need of stable income, links from consumers to artisans, and a market for quality ethnic products.[120]
Making cotton and textiles "fair trade" does not always benefit laborers. Burkina Faso and Mali export the largest amount of cotton in Africa. Although many cotton plantations in these countries attained fair trade certification in the 1990s, participation in fair trade strengthened existing power relations and inequalities that cause poverty in Africa rather than challenging them. Fair trade does not do much for farmers when it does not challenge the system that marginalizes producers. Despite not empowering farmers, the change to fair trade cotton has positive effects including female participation in cultivation.[119]
Textiles and garments are intricate goods that typically require an individual operator for each piece, in contrast to the collective farming of coffee and cocoa beans. Textiles are not a straightforward commodity: for them to be fairly traded, there must be regulation of cotton cultivation, dyeing, stitching, and every other step in the process of textile production.[82] Fair trade textiles are distinct from the sweat-free movement, although the two movements intersect at the worker level.[83]
Forced or unfair labor in textile production is not limited to developing countries. Charges of use of sweatshop labor are endemic in the United States. Immigrant women work long hours and receive less than minimum wage. In the United States, there is more of a stigma against child labor than forced labor in general. Consumers in the United States are willing to suspend the importation of textiles made with child labor in other countries but do not expect American exports to be suspended by other countries, even when produced using forced labor.[123]
Clothing and textile companies
The following companies use fair trade production and/or distribution techniques for clothing and textiles:
Seafood
With increasing media scrutiny of the conditions of fishermen, particularly in Southeast Asia, the lack of transparency and traceability in the seafood industry prompted new fair trade efforts. In 2014, Fair Trade USA created its Capture Fisheries Program that led to the first instance of Fair Trade fish being sold globally in 2015. The program "requires fishermen to source and trade according to standards that protect fundamental human rights, prevent forced and child labor, establish safe working conditions, regulate work hours and benefits, and enable responsible resource management."[132]
Flowers
Fair trade flowers have been recognised as "an important niche product", with Kenya noted as a significant location for their production.[133] The launch of fair trade flower marketing in the UK, led by retailer Tesco, raised some questions as to whether the organisation of flower production in Kenya met the conditions needed for fair trade certification.[134]
Large companies and commodities
Large transnational companies have started to use fair trade commodities in their products. In April 2000, Starbucks began offering fair trade coffee in all of their stores. In 2005, the company promised to purchase ten million pounds of fair trade coffee over the next 18 months. This would account for a quarter of the fair trade coffee purchases in the United States and 3% of Starbucks' total coffee purchases.[96] The company maintains that increasing its fair trade purchases would require an unprofitable reconstruction of the supply chain.[135] Fair trade activists have made gains with other companies: Sara Lee Corporation in 2002 and Procter & Gamble (the maker of Folgers) in 2003 agreed to begin selling a small amount of fair trade coffee. Nestlé, the world's biggest coffee trader, began selling a blend of fair trade coffee in 2005.[96] In 2006, The Hershey Company acquired Dagoba, an organic and fair trade chocolate brand.
Much contention surrounds the issue of fair trade products becoming a part of large companies. Starbucks coffee is still only about 3% fair trade: enough to appease consumers, according to some activists, but not enough to make a real difference to small farmers. The ethics of buying fair trade from a company that is not committed to the cause are questionable; such products make only a small dent in a large company's overall business, even though these companies account for a significant portion of global fair trade sales.[96]
| Business type | Example companies |
|---|---|
| Fair trade organizations (highest engagement) | Equal Exchange, Global Crafts, Ten Thousand Villages |
| Values-driven organizations | The Body Shop, Green Mountain Coffee |
| Pro-active socially responsible businesses | Starbucks, Whole Foods, The Ethical Olive |
| Defensive socially responsible businesses (lowest engagement) | Procter & Gamble |
Luxury commodities
There have been efforts to introduce fair trade practices to the luxury goods industry, particularly for gold and diamonds.
Diamonds and sourcing
In parallel to efforts to commoditize diamonds, some industry players launched campaigns to introduce benefits to mining centers in the developing world. Rapaport Fair Trade was established with the goal "to provide ethical education for jewelry suppliers, buyers, first time or seasoned diamond buyers, social activists, students, and anyone interested in jewelry, trends, and ethical luxury."[136]
The company's founder, Martin Rapaport, as well as Kimberley Process initiators Ian Smillie and Global Witness, are among several industry insiders and observers who have called for greater checks and certification programs, among other measures, to ensure protection for miners and producers in developing countries. Smillie and Global Witness have since withdrawn support for the Kimberley Process. Other concerns in the diamond industry include working conditions in diamond cutting centers and the use of child labor; both concerns have been raised about Surat, India, a major diamond-cutting center.[137]
Gold
Fairtrade certified gold is used in manufacturing processes as well as for jewellery.[138] The Fairtrade Standard for Gold and Associated Precious Metals for Artisanal and Small-Scale Mining covers the requirements for gold products to be identified as "Fairtrade". Silver and platinum are also Fairtrade precious metals.[139]
In February 2011, the United Kingdom's Fairtrade Foundation became the first NGO to begin certifying gold under the fair trade rubric.[140]
Pornography and the sex industry
Fair trade also influences the porn industry. Feminist writers and academics advocate for a pornography industry based on mutual consent and non-exploitative labor conditions for actors and actresses.[141]
Politics
European Union
In 1994, the European Commission prepared the "Memo on alternative trade" in which it declared its support for strengthening fair trade and its intention to establish an EC Working Group on Fair Trade. The same year, the European Parliament adopted the "Resolution on promoting fairness and solidarity in North South trade",[142] voicing its support for fair trade. In 1996, the Economic and Social Committee adopted an "Opinion on the European 'Fair Trade' marking movement". A year later, a resolution adopted by the European Parliament called on the European Commission to support fair trade banana operators, and the European Commission published a survey on "Attitudes of EU consumers to Fair Trade bananas", concluding that Fair Trade bananas would be commercially viable in several EU Member States.[143]
In 1998, the European Parliament adopted the "Resolution on Fair Trade",[144] which was followed by a Commission in 1999 that adopted the "Communication from the Commission to the Council on 'Fair Trade'".[145] In 2000, public institutions in Europe started purchasing Fairtrade Certified coffee and tea, and the Cotonou Agreement made specific reference to the promotion of Fair Trade in article 23(g) and in the Compendium. The European Parliament and Council Directive 2000/36/EC also suggested promoting Fair Trade.[143] In 2001 and 2002, several other EU papers explicitly mentioned fair trade, most notably the 2001 Green Paper on Corporate Social Responsibility and the 2002 Communication on Trade and Development.
In 2004, the European Union adopted "Agricultural Commodity Chains, Dependence and Poverty–A proposal for an EU Action Plan", with a specific reference to the fair trade movement, which has "been setting the trend for a more socio-economically responsible trade."[146] In 2005, in the European Commission communication "Policy Coherence for Development–Accelerating progress towards attaining the Millennium Development Goals",[147] fair trade is mentioned as "a tool for poverty reduction and sustainable development".[143]
On July 6, 2006, the European Parliament unanimously adopted a resolution on fair trade, recognizing the benefits achieved by the fair trade movement, suggesting the development of an EU-wide policy on fair trade, defining criteria that need to be fulfilled under fair trade to protect it from abuse, and calling for greater support for fair trade.[148] "This resolution responds to the impressive growth of Fair Trade, showing the increasing interest of European consumers in responsible purchasing," said Green MEP Frithjof Schmidt during the plenary debate. Peter Mandelson, EU Commissioner for External Trade, responded that the resolution would be well received at the European Commission. "Fair Trade makes the consumers think and therefore it is even more valuable. We need to develop a coherent policy framework and this resolution will help us."[149]
France
In 2005, French National Assembly member Antoine Herth issued the report "40 proposals to sustain the development of Fair Trade". The report was followed the same year by a law establishing a commission to recognize fair trade organisations.[150][143] In parallel to these legislative efforts, in 2006 the French chapter of ISO (AFNOR) adopted a reference document on fair trade after five years of discussion.
Italy
In 2006, Italian lawmakers debated how to introduce a law on fair trade in Parliament. A consultation process involving a wide range of stakeholders was launched in early October.[151] A definition of fair trade was developed, but as of 2022 its adoption was still pending, the effort having stalled during the 2008 Italian political crisis.
Netherlands
The Dutch province of Groningen was sued in 2007 by coffee supplier Douwe Egberts for requiring its coffee suppliers to meet fair trade criteria, most notably the payment of a minimum price and a development premium to producer cooperatives. Douwe Egberts, which sells coffee brands under self-developed ethical criteria, believed the requirements were discriminatory. After several months of discussions and legal challenges, the province of Groningen prevailed. Coen de Ruiter, director of the Max Havelaar Foundation, called the victory a landmark event: "it provides governmental institutions the freedom in their purchasing policy to require suppliers to provide coffee that bears the fair trade criteria, so that a substantial and meaningful contribution is made in the fight against poverty through the daily cup of coffee".[152]
Criticism
While some studies claim fair trade is beneficial and efficient,[153] others have been less favourable. Sometimes the criticism is intrinsic to fair trade; in other cases its effectiveness depends on the broader context, such as a lack of government support or volatile prices on the global market.[154]
Ethical basis
Studies show a significant number of consumers were content to pay higher prices for fair trade products, in the belief that this helps the poor.[155] One ethical criticism of Fairtrade is that this premium over non-Fairtrade products does not reach the producers and is instead collected by businesses or by employees of co-operatives, or is used for unnecessary expenses. Some research finds that the implementation of certain fair trade standards causes greater inequalities in markets where these rigid rules are inappropriate for the specific market.[32][13][15][156][14]
Where does the money go?
Little money may reach the developing countries
The Fairtrade Foundation does not monitor how much retailers charge for fair trade goods, so it is rarely possible to determine how much extra is charged or how much of that premium reaches the producers. In four cases it has been possible to find out. One British café chain was passing on less than one percent of the extra charged to the exporting cooperative;[32] in Finland, Valkila, Haaparanta, and Niemi[157] found that consumers paid much more for Fairtrade, and that only 11.5% reached the exporter. Kilian, Jones, Pratt, and Villalobos[26] talk of U.S. Fairtrade coffee getting US$5 per pound extra at retail, of which the exporter receives only 2%. Mendoza and Bastiaensen[158] calculated that in the UK only 1.6%–18% of the extra charged for one product line reached the farmer. These studies assume that the importers paid the full Fairtrade price, which is not necessarily the case.[159][19][20]
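The pass-through arithmetic in these studies reduces to a simple ratio. The sketch below is illustrative only, using the rough figures reported above (the US$5 retail premium from Kilian et al., of which about 2% reaches the exporter, and the 11.5% Finnish figure from Valkila et al.); the function name is hypothetical, not from any of the cited sources.

```python
# Illustrative pass-through arithmetic using figures reported in the studies
# cited above; the numbers are examples, not new data.

def pass_through_share(retail_premium: float, amount_to_exporter: float) -> float:
    """Fraction of the extra retail price that reaches the exporting country."""
    return amount_to_exporter / retail_premium

# Kilian et al.: roughly US$5/lb extra at retail, with about US$0.10/lb
# (2%) reaching the exporter
us_share = pass_through_share(5.00, 0.10)
print(f"US coffee example: {us_share:.0%} of the premium reaches the exporter")

# Valkila, Haaparanta, and Niemi (Finland): 11.5% of the consumer premium
# reached the exporter
fi_share = 0.115
print(f"Finnish coffee example: {fi_share:.1%}")
```

The point of the calculation is that even a large retail premium says little about producer benefit until the denominator and numerator are both known, which the studies note is rarely the case.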
Less money reaches farmers
The Fairtrade Foundation does not monitor how much of the extra money paid to the exporting cooperatives reaches the farmer. The cooperatives incur costs in reaching fair trade standards, and these are incurred on all production, even if only a small amount is sold at fair trade prices. The most successful cooperatives appear to spend a third of the extra price received on this: some less successful cooperatives spend more than they gain. While this appears to be agreed by proponents and critics of fair trade,[160] there is a dearth of economic studies setting out the actual revenues and what the money was spent on. FLO figures[161] are that 40% of the money reaching the developing world is spent on "business and production", which would include these costs as well as costs incurred by any inefficiency and corruption in the cooperative or the marketing system. The rest is spent on social projects, rather than being passed on to farmers.
Differing anecdotes state farmers are paid more or less by traders than by fair trade cooperatives. Few of these anecdotes address the problems of price reporting in developing world markets,[162] and few appreciate the complexity of the different price packages that may or may not include credit, harvesting, transport, processing, etc. Cooperatives typically average prices over the year, so they pay less than traders at some times, more at others. Bassett (2009)[163] compares prices only where Fairtrade and non-Fairtrade farmers have to sell cotton to the same monopsonistic ginneries which pay low prices. Prices would have to be higher to compensate farmers for the increased costs they incur to produce fair trade. For instance, fair trade encouraged Nicaraguan farmers to switch to organic coffee, which resulted in a higher price per pound, but a lower net income because of higher costs and lower yields.[19][26][164]
Effects of low barriers to entry
A 2015 study concluded that the low barriers to entry in a competitive market such as coffee undermine any effort to deliver higher benefits to producers through fair trade. The authors used data from Central America to establish that producer benefits were close to zero: certification is oversupplied, and only a fraction of produce classified as fair trade is actually sold on fair trade markets, just enough to recoup the costs of certification.[11]
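The study's core logic can be sketched as a simple expected-value model. This is an assumed toy model for illustration, not the paper's actual methodology, and the premium, sale-share, and cost figures are made up:

```python
# Toy model of the certification-oversupply argument: when only a fraction
# of certified output sells at the fair trade price, the expected premium
# per unit shrinks toward the certification cost, leaving the net producer
# benefit near zero.

def net_benefit_per_unit(premium: float, share_sold_fair_trade: float,
                         certification_cost_per_unit: float) -> float:
    """Expected extra revenue per unit minus certification cost per unit."""
    return premium * share_sold_fair_trade - certification_cost_per_unit

# Hypothetical numbers: a $0.20/lb premium, only 30% of output sold on fair
# trade terms, and certification costs of $0.06/lb spread over all output.
print(round(net_benefit_per_unit(0.20, 0.30, 0.06), 9))  # -> 0.0
```

Under free entry, producers keep seeking certification until this net benefit is competed down to roughly zero, which is the equilibrium the study describes.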
Inefficient marketing system
One reason for high prices is that fair trade farmers have to sell through a monopsonist cooperative, which may be inefficient or corrupt; certainly some private traders are more efficient than some cooperatives. Farmers cannot choose the buyer who offers the best price, or switch when their cooperative is going bankrupt,[165] if they wish to retain Fairtrade status. Fairtrade deviates from the free market ideal of some economists: Brink Lindsey calls fair trade a "misguided attempt to make up for market failures" that encourages market inefficiencies and overproduction.[166]
Fair trade harms other farmers
Overproduction argument
Critics argue that fair trade harms non-Fairtrade farmers. Fair trade claims that its farmers are paid higher prices and are given special advice on increasing yields and quality. Economists[32][self-published source][166][167][168] argue that if this is so, Fairtrade farmers will increase production. As the demand for coffee is highly inelastic, a small increase in supply means a large fall in market price, so perhaps a million Fairtrade farmers get a higher price and 24 million others get a substantially lower price. Critics cite the example of farmers in Vietnam being paid a premium over the world market price in the 1980s, planting much coffee, then flooding the world market in the 1990s. The fair trade minimum price means that when the world market price collapses, it is the non-fair trade farmers, particularly the poorest, who have to cut down their coffee trees. This argument is supported by mainstream economists, not just free marketers.[citation needed]
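The inelastic-demand step of this argument can be made concrete with back-of-envelope arithmetic. The elasticity value below is an assumed illustrative figure, not one taken from the cited economists:

```python
# Illustration of the overproduction argument: with inelastic demand, a small
# increase in supply requires a disproportionately large price fall to clear
# the market. The elasticity value is assumed for illustration only.

def price_change_pct(supply_change_pct: float, demand_elasticity: float) -> float:
    """Approximate % price change needed to absorb a % supply change, given
    the price elasticity of demand (negative for ordinary goods)."""
    return supply_change_pct / demand_elasticity

# With an assumed short-run coffee demand elasticity of -0.3, a 2% rise in
# supply implies roughly a 6.7% fall in the market price.
print(round(price_change_pct(2.0, -0.3), 1))  # -> -6.7
```

This is why, in the critics' account, a modest output increase by premium-receiving farmers can translate into a substantially lower price for everyone selling at the world market price.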
Other ethical issues
Secrecy
Under EU law (Directive 2005/29/EC on Unfair Commercial Practices) the criminal offense of unfair trading is committed if (a) "it contains false information and is therefore untruthful or in any way, including overall presentation, deceives or is likely to deceive the average consumer, even if the information is factually correct", (b) "it omits material information that the average consumer needs… and thereby causes or is likely to cause the average consumer to take a transactional decision that he would not have taken otherwise", or (c) "fails to identify the commercial intent of the commercial practice… [which] causes or is likely to cause the average consumer to take a transactional decision that he would not have taken otherwise." Peter Griffiths (2011)[32] points to false claims that fair trade producers get higher prices and to the almost universal failure to disclose the extra price charged for fair trade products, how much of this actually reaches the developing world, what this is spent on in the developing world, how much (if any) reaches farmers, and the harm that fair trade does to non-fair trade farmers. He also points to the failure to disclose when "the primary commercial intent" is to make money for retailers and distributors in rich countries.
Unethical selling techniques
Economist Philip Booth says that the selling techniques used by some sellers and supporters of fair trade are bullying, misleading, and unethical,[169] such as the use of boycott campaigns and other pressure to force sellers to stock a product they think ethically suspect. However, it has also been argued that a more participatory and multi-stakeholder approach to auditing might improve the quality of the process.[170]
Others argue that strategic use of labeling may help embarrass (or encourage) major suppliers into changing their practices. It may bring to light corporate vulnerabilities that activists can exploit, or it may encourage ordinary people to get involved with broader projects of social change.[171]
Failure to monitor standards
There are complaints that fair trade standards are inappropriate and may harm producers, sometimes making them work several months more for little return.[32][172][173][19]
Enforcement of standards by Fairtrade was described as "seriously weak" by Christian Jacquiau.[174] Paola Ghillani, who spent four years as president of Fairtrade Labelling Organizations, agreed that "certain arguments carry some weight".[174] There are many complaints of poor enforcement: labourers on Fairtrade farms in Peru are paid less than the minimum wage;[175] some non-Fairtrade coffee is sold as Fairtrade;[176] "the standards are not very strict in the case of seasonally hired labour in coffee production";[19] and "[s]ome fair trade standards are not strictly enforced".[20][177] In 2006, a Financial Times journalist found that ten out of ten mills visited had sold uncertified coffee to co-operatives as certified, and reported "evidence of at least one coffee association that received an organic, Fair Trade or other certifications despite illegally growing some 20 per cent of its coffee in protected national forest land."[176]
Trade justice and fair trade
Segments of the trade justice movement have criticized fair trade for focusing too much on individual small producer groups without advocating for trade policy changes that would have a larger effect on disadvantaged producers' lives. French author and RFI correspondent Jean-Pierre Boris championed this view in his 2005 book Commerce inéquitable.[178]
Political objections
There have been political criticisms of fair trade from both the left and the right. Some believe the fair trade system is not radical enough. Christian Jacquiau, in his book Les coulisses du commerce équitable, calls for stricter fair trade standards and criticizes the fair trade movement for working within the current system (i.e., partnerships with mass retailers, multinational corporations, etc.) rather than establishing a new, fairer, fully autonomous (i.e., government monopoly) trading system. Jacquiau also supports significantly higher fair trade prices in order to maximize the effect, since most producers only sell a portion of their crop under fair trade terms.[37] Others have argued that the fair trade approach is too rooted in a Northern consumerist view of justice that Southern producers do not participate in setting. "A key issue is therefore to make explicit who possesses the power to define the terms of Fairtrade, that is who possesses the power in order to determine the need of an ethic in the first instance, and subsequently command a particular ethical vision as the truth."[179]
See also
- Alter-globalization
- Anti-globalization movement
- Ethical consumerism
- Fair chain
- Free produce movement
- Kimberley Process Certification Scheme
- Slow movement (culture)
- Product tracing systems, which allow consumers to see the source factory of a product
- Solidarity economy
- South–South cooperation
Bibliography
- Berndt, CE (2007), Is Fair Trade in coffee production fair and useful? Evidence from Costa Rica and Guatemala and implications for policy, Policy Series, Policy Comment, vol. 65, Washington, DC: Mercatus Center, George Mason University.
- Ballet, Jérôme; Carimentrand, Aurélie (April 2010), "Fair Trade and the Depersonalization of Ethics", Journal of Business Ethics, 92 (2 Supplement): 317–30, doi:10.1007/s10551-010-0576-0, S2CID 143666363.
- Brown, Keith R. (2013). Buying into Fair Trade: Culture, Morality, and Consumption. ISBN 978-0814725375.
- Hamel, I (2006), "Fairtrade Firm Accused of Foul Play", Swiss info[permanent dead link].
- Jacquiau, C (2006), Les Coulisses du commerce équitable [Behind the scenes of fair trade] (in French), Paris: Mille et Une Nuits.
- Johnson, George M. How Hope Became An Activist. London: Dixibooks, 2021.
- Kilian, B; Jones, C; Pratt, L; Villalobos, A (2006), "Is Sustainable Agriculture a Viable Strategy to Improve Farm Income in Central America? A Case Study on Coffee", Journal of Business Research, 59 (3): 322–30, doi:10.1016/j.jbusres.2005.09.015.
- Kohler, P (2006), The economics of Fair Trade: for whose benefit? An investigation into the limits of Fair Trade as a development tool and the risk of clean-washing, HEI Working Papers, Geneva: Economics Section, Graduate Institute of International Studies, October.
- Mohan, S (2010), Fair Trade Without the Froth – a dispassionate economic analysis of 'Fair Trade', London: Institute of Economic Affairs.
- Moore, G; Gibbon, J; Slack, R (2006), "The mainstreaming of Fair Trade: a macromarketing perspective" (PDF), Journal of Strategic Marketing, 14 (4): 329–52, doi:10.1080/09652540600947961, S2CID 46523470.
- Riedel, CP; Lopez, FM; Widdows, A; Manji, A; Schneider, M (2005), "Impacts of Fair Trade: trade and market linkages", Proceedings of the 18th International Farming Symposium, 31 October – 3 November, Rome: Food and Agricultural Organisation.
- Weitzman, H (September 8, 2006), "The bitter cost of 'Fair Trade' coffee", The Financial Times.
References
- "Our Story". SERRV. Retrieved 2013-05-01.
- "'God's love is what they pass on' : Fair trade is a mission for a Wittenberg University grad, students and faculty". The Lutheran. 2012-03-29. Archived from the original on 2013-01-16. Retrieved 2013-05-01.
- "What is Fair Trade? | Catholic Relief Services Fair Trade Program". Archived from the original on 2014-07-30. Retrieved 2014-07-14.
- Fairtrade International (FLO) (2011?), "Explanatory Document for the Fairtrade Standard for Small Producer Organizations" Archived 2013-05-02 at the Wayback Machine (PDF)
- Fairtrade Labelling Organizations International e.V. (2011) "Generic Fairtrade Trade Standard" Archived 2013-05-02 at the Wayback Machine (PDF)
- Fairtrade International, (2011) "Explanatory Document for the Fairtrade Trade Standard Archived 2013-05-02 at the Wayback Machine (PDF)
- Fairtrade International (FLO) (2011);"Fairtrade Standard for Coffee for Small Producer Organizations" version: 1 April 2011 (PDF)
- "The 8 best sites to watch ethical, fair trade porn". The Daily Dot. 2017-12-16. Archived from the original on 2018-08-06. Retrieved 2018-08-06.
- Mondin, Alessandra (2014). "Fair-trade porn + niche markets + feminist audience". Porn Studies. 1 (1–2): 189–192. doi:10.1080/23268743.2014.888251.
- Cooper, Thia (2010). "Fair Trade Sex: Reflections on God, Sex, and Economics". Feminist Theology. 19 (2): 194–207. doi:10.1177/0966735010384332. S2CID 144063686.
- Niemi, N. (2010). "Empowering Coffee Traders? The Coffee Value Chain from Nicaraguan Fair Trade Farmers to Finnish Consumers". Journal of Business Ethics. 97 (2): 257–270. doi:10.1007/s10551-010-0508-z. S2CID 146802807.
- Trudel, R.; Cotte, J. (2009). "Does it pay to be good?". MIT Sloan Management Review.
- Arnot, C.; Boxall, P.; Cash, S. (2006). "Do ethical consumers care about price? A revealed preference analysis of Fair Trade coffee purchases". Canadian Journal of Agricultural Economics. 54: 555–565.
- Valkila, J.; Haaparanta, P.; Niemi, N. (2010). "Empowering Coffee Traders? The Coffee Value Chain from Nicaraguan Fair Trade Farmers to Finnish Consumers". Journal of Business Ethics. 97 (257–270): 264. doi:10.1007/s10551-010-0508-z. S2CID 146802807.
- Barrientos, S., Conroy, M.E., & Jones, E. (2007). "Northern Social Movements and Fair Trade." In L. Raynolds, D. D. Murray, & J. Wilkinson, Fair Trade: The Challenges of Transforming Globalization (pp. 51–62). London and New York: Routledge. Quoted by Reed, D (2009). "What do Corporations have to do with Fair Trade? Positive and normative analysis from a value chain perspective". Journal of Business Ethics. 86: 3–26 [21]. doi:10.1007/s10551-008-9757-5. S2CID 55809844.
- Harford, T. (2005). The Undercover Economist.
- Sam Bowman (11 March 2011). "Markets, poverty, and Fair Trade". Adam Smith Institute. Archived from the original on 2012-04-01. Retrieved 2011-09-30.
- Weitzman, H. (2006, September 9). "Ethical-coffee' workers paid below legal minimum." Financial Times.
- Catherine S. Dolan (2008), Research in Economic Anthropology, "Arbitrating risk through moral values: the case of Kenyan fairtrade", volume 28, pp. 271–96.
https://en.wikipedia.org/wiki/Fair_trade
There are wide varieties of economic inequality, most notably income inequality measured using the distribution of income (the amount of money people are paid) and wealth inequality measured using the distribution of wealth (the amount of wealth people own). Besides economic inequality between countries or states, there are important types of economic inequality between different groups of people.[2]
Important types of economic measurements focus on wealth, income, and consumption. There are many methods for measuring economic inequality,[3] the Gini coefficient being a widely used one. Another type of measure is the Inequality-adjusted Human Development Index, which is a statistic composite index that takes inequality into account.[4] Important concepts of equality include equity, equality of outcome, and equality of opportunity.
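The Gini coefficient mentioned above can be computed as half the relative mean absolute difference of a distribution. The sketch below uses that standard definition with made-up illustrative incomes; it is a minimal O(n²) version, not an optimized implementation:

```python
# Minimal Gini coefficient: half the mean absolute difference between all
# pairs of incomes, relative to the mean income. 0 = perfect equality;
# values approach 1 as one person holds everything.

def gini(incomes: list) -> float:
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of |x - y| over all ordered pairs of incomes
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))            # perfect equality -> 0.0
print(round(gini([0, 0, 0, 4]), 2))  # one person holds everything -> 0.75
```

Note that with this pairwise formula the maximum for a finite sample of n people is (n − 1)/n, which is why the extreme four-person example yields 0.75 rather than 1.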
Whereas globalization has reduced global inequality (between nations), it has increased inequality within nations.[5][6][7][8] Income inequality between nations peaked in the 1970s, when world income was distributed bimodally into "rich" and "poor" countries. Since then, income levels across countries have been converging, with most people now living in middle-income countries.[5][9] However, inequality within most nations has risen significantly in the last 30 years, particularly among advanced countries.[5][6][7][8] In this period, close to 90 percent of advanced economies have seen an increase in income inequality, with over 70% recording an increase in their Gini coefficients exceeding two points.[5]
Research has generally linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict.[5][10][11][12] Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income.[5][13] Inequality is at the center stage of economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution.[5] In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits).[5]
Measurements
In 1820, the ratio between the income of the top and bottom 20 percent of the world's population was three to one. By 1991, it was eighty-six to one.[14] A 2011 study titled "Divided we Stand: Why Inequality Keeps Rising" by the Organisation for Economic Co-operation and Development (OECD) sought to explain the causes for this rising inequality by investigating economic inequality in OECD countries; it concluded that the following factors played a role:[15]
- Changes in the structure of households can play an important role. Single-headed households in OECD countries have risen from an average of 15% in the late 1980s to 20% in the mid-2000s, resulting in higher inequality.
- Assortative mating refers to the phenomenon of people marrying people with similar backgrounds, for example doctors marrying other doctors rather than nurses. The OECD found that 40% of couples in which both partners work belonged to the same or neighbouring earnings deciles, compared with 33% some 20 years before.[16]
- In the bottom percentiles, the number of hours worked has decreased.[16]
- The main reason for increasing inequality seems to be the difference between the demand for and supply of skills.[16]
The study made the following conclusions about the level of economic inequality:
- Income inequality in OECD countries is at its highest level for the past half century. The ratio between the bottom 10% and the top 10% has increased from 1:7 to 1:9 in 25 years.[16]
- There are tentative signs of a possible convergence of inequality levels towards a common and higher average level across OECD countries.[16]
- With very few exceptions (France, Japan, and Spain), the wages of the 10% best-paid workers have risen relative to those of the 10% lowest paid.[16]
A 2011 OECD study investigated economic inequality in Argentina, Brazil, China, India, Indonesia, Russia, and South Africa. It concluded that key sources of inequality in these countries include "a large, persistent informal sector, widespread regional divides (e.g. urban-rural), gaps in access to education, and barriers to employment and career progression for women."[16]
A study by the World Institute for Development Economics Research at United Nations University reported that the richest 1% of adults alone owned 40% of global assets in the year 2000. The three richest people in the world possess more financial assets than the lowest 48 nations combined.[17] The combined wealth of the "10 million dollar millionaires" grew to nearly $41 trillion in 2008.[18]
Oxfam's 2021 report on global inequality said that the COVID-19 pandemic has increased economic inequality substantially; the wealthiest people across the globe were impacted the least by the pandemic and their fortunes recovered quickest, with billionaires seeing their wealth increase by $3.9 trillion, while at the same time the number of people living on less than $5.50 a day likely increased by 500 million.[19] The 2022 Oxfam report said that growing economic inequality has been a factor in increased mortality rates during the pandemic, contributing to the deaths of 21,000 people on a daily basis, while the wealth of the world's 10 richest billionaires doubled.[20][21] The 2023 report stated that roughly two thirds of all new wealth went to the top 1% at the same time that extreme poverty has been increasing globally.[22] According to economist Joseph Stiglitz, the pandemic's "most significant outcome" will be rising economic inequality in the United States and between the developed and developing world.[23]
According to PolitiFact, the top 400 richest Americans "have more wealth than half of all Americans combined."[25][26][27][28] According to The New York Times on July 22, 2014, the "richest 1 percent in the United States now own more wealth than the bottom 90 percent".[29] Inherited wealth may help explain why many Americans who have become rich may have had a "substantial head start".[30][31] A 2017 report by the IPS said that three individuals, Jeff Bezos, Bill Gates and Warren Buffett, own as much wealth as the bottom half of the population, or 160 million people, and that the growing disparity between the wealthy and the poor has created a "moral crisis", noting that "we have not witnessed such extreme levels of concentrated wealth and power since the first gilded age a century ago."[32][33] In 2016, the world's billionaires increased their combined global wealth to a record $6 trillion.[34] In 2017, they increased their collective wealth to $8.9 trillion.[35] In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau.[36]
The existing data and estimates suggest a large increase in international (and more generally inter-macroregional) components between 1820 and 1960. It might have slightly decreased since that time at the expense of increasing inequality within countries.[37] The United Nations Development Programme in 2014 asserted that greater investments in social security, jobs, and laws that protect vulnerable populations are necessary to prevent widening income inequality.[38]
There is a significant difference in the measured wealth distribution and the public's understanding of wealth distribution. Michael Norton of the Harvard Business School and Dan Ariely of the Department of Psychology at Duke University found this to be true in their research conducted in 2011. The actual wealth going to the top quintile in 2011 was around 84%, whereas the average amount of wealth that the general public estimated to go to the top quintile was around 58%.[39]
According to a 2020 study, global earnings inequality has decreased substantially since 1970. During the 2000s and 2010s, the share of earnings by the world's poorest half doubled.[40] Two researchers claim that global income inequality is decreasing due to strong economic growth in developing countries.[41] According to a January 2020 report by the United Nations Department of Economic and Social Affairs, economic inequality between states had declined, but intrastate inequality increased for 70% of the world population over the period 1990–2015.[42] In 2015, the OECD reported that income inequality was higher than it had ever been within OECD member nations and was at increased levels in many emerging economies.[43] According to a June 2015 report by the International Monetary Fund (IMF):
Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain.[44]
In October 2017, the IMF warned that inequality within nations, in spite of global inequality falling in recent decades, has risen so sharply that it threatens economic growth and could result in further political polarization. The Fund's Fiscal Monitor report said that "progressive taxation and transfers are key components of efficient fiscal redistribution."[45] In October 2018 Oxfam published a Reducing Inequality Index which measured social spending, tax and workers' rights to show which countries were best at closing the gap between the rich and the poor.[46]
The 2022 World Inequality Report, a four-year research project organized by the economists Lucas Chancel, Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, shows that "the world is marked by a very high level of income inequality and an extreme level of wealth inequality" and that these inequalities "seem to be about as great today as they were at the peak of western imperialism in the early 20th century." According to the report, the bottom half of the population owns 2% of global wealth, while the top 10% owns 76% of it. The top 1% owns 38%.[47][48][49]
Wealth distribution within individual countries
The following table shows information about individual wealth distribution in different countries from a 2018 report by Crédit Suisse.[50] The wealth is calculated by various factors, for instance: liabilities, debts, exchange rates and their expected development, real estate prices, human resources, natural resources and technical advancements, etc.
Income distribution within individual countries
Income inequality is measured by the Gini coefficient, a number between 0 and 1 that is often expressed as a percentage. Here 0 expresses perfect equality, meaning that everyone has the same income, whereas 1 represents perfect inequality, meaning that one person has all the income and everyone else has none. A Gini index value above 50% is considered high; countries including Brazil, Colombia, South Africa, Botswana, and Honduras fall into this category. A Gini index value of 30% or above is considered medium; countries including Vietnam, Mexico, Poland, the United States, Argentina, Russia and Uruguay fall into this category. A Gini index value lower than 30% is considered low; countries including Austria, Germany, Denmark, Norway, Slovenia, Sweden, and Ukraine fall into this category.[51] The low income inequality category (below 30%) includes a wide representation of countries formerly part of the Soviet Union or its satellites, such as Slovakia, the Czech Republic, Ukraine and Hungary.
In 2012, the Gini index for income inequality for the European Union as a whole was only 30.6%.
Income distribution can differ from wealth distribution within each country. Wealth inequality is also measured by the Gini index; there, a higher value signifies greater inequality in the distribution of wealth, with 0 meaning total wealth equality and 1 representing a situation in which one individual owns everything and everyone else owns nothing. For instance, countries such as Denmark, Norway and the Netherlands, all in the low income inequality category (below 30%), also have very high Gini indices for wealth distribution, ranging from 70% up to 90%.
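The Gini definitions above can be made concrete with a small numerical sketch. The income figures below are invented purely for illustration; the function implements the standard pairwise-difference formula for the Gini coefficient:

```python
def gini(incomes):
    """Gini coefficient of a list of incomes.

    0 means perfect equality; the discrete maximum, (n-1)/n, is reached
    when one person has everything (this approaches 1 for large n).
    """
    xs = sorted(incomes)
    n = len(xs)
    # For sorted data (0-based index i), the sum of all pairwise absolute
    # differences equals 2 * sum_i (2i - n + 1) * x_i.
    diff = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return diff / (n * sum(xs))

print(gini([10, 10, 10, 10]))          # 0.0: everyone has the same income
print(round(gini([0, 0, 0, 100]), 2))  # 0.75: one of four people has it all
```

Multiplying the result by 100 gives the percentage form (the "Gini index") used in the country comparisons above.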
Consumption distribution within individual countries
In economics, the consumption distribution or consumption inequality is an alternative to the income distribution or wealth distribution for judging economic inequality, comparing levels of consumption rather than income or wealth.[52] This is an important measure of inequality, as the basic utility of wealth or income lies in its expenditure.[53] People experience inequality directly through consumption, rather than through income or wealth.[54]
Factors proposed to affect economic inequality
There are various reasons for economic inequality within societies, including both global market functions (such as trade, development, and regulation) as well as social factors (including gender, race, and education).[55] Recent growth in overall income inequality, at least within the OECD countries, has been driven mostly by increasing inequality in wages and salaries.[15]
Economist Thomas Piketty argues that widening economic disparity is an inevitable phenomenon of free market capitalism when the rate of return of capital (r) is greater than the rate of growth of the economy (g).[56]
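Piketty's r > g mechanism can be sketched with a toy compounding example. The parameter values are hypothetical, chosen only to show the dynamic: a fortune earning the return on capital outgrows an economy expanding at the growth rate, so the capital owner's share of total wealth rises over time.

```python
# Stylized illustration of r > g (all numbers are invented):
# a 5% return on capital vs. 2% economy-wide growth.
r, g = 0.05, 0.02
wealth, economy = 1.0, 100.0   # the owner starts with 1% of total wealth

for year in range(101):
    if year % 50 == 0:
        # The owner's share of total wealth grows every year that r > g.
        print(f"year {year:3d}: owner's share = {wealth / economy:.1%}")
    wealth *= 1 + r     # fortune compounds at r
    economy *= 1 + g    # total wealth compounds at g
```

Because the ratio grows by the factor (1 + r)/(1 + g) each year, the divergence is exponential: over a century the owner's share rises more than tenfold in this sketch.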
Labour market
In modern market economies, when competition is imperfect, information is unevenly distributed, and opportunities to acquire education and skills are unequal, market failure results. Many such imperfect conditions exist in virtually every market. According to Joseph Stiglitz, this means that there is an enormous potential role for government to correct such market failures.[57]
In the United States, real wages have been flat over the past 40 years for occupations across income and education levels, e.g. auto mechanics, cashiers, doctors, and software engineers.[58] However, stock ownership favors higher income and education levels,[59] resulting in disparate investment income.
Taxes
Another cause is the rate at which income is taxed coupled with the progressivity of the tax system. A progressive tax is a tax by which the tax rate increases as the taxable base amount increases.[60][61][62][63][64] In a progressive tax system, the level of the top tax rate will often have a direct impact on the level of inequality within a society, either increasing it or decreasing it, provided that income does not change as a result of the change in tax regime. Additionally, steeper tax progressivity applied to social spending can result in a more equal distribution of income across the board.[65] Tax credits such as the Earned Income Tax Credit in the US can also decrease income inequality.[66] The difference between the Gini index for an income distribution before taxation and the Gini index after taxation is an indicator for the effects of such taxation.[67]
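The redistributive indicator described here, the difference between the Gini index before and after taxation, can be sketched numerically. The incomes, brackets, and rates below are all invented for illustration; the point is only that a progressive schedule compresses the distribution:

```python
def gini(xs):
    """Gini coefficient: 0 = perfect equality; values near 1 = extreme inequality."""
    xs = sorted(xs)
    n = len(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * sum(xs))

def after_tax(income, threshold=50_000, low=0.20, high=0.40):
    """Hypothetical two-bracket progressive schedule: 20% up to 50k, 40% above."""
    tax = low * min(income, threshold) + high * max(income - threshold, 0)
    return income - tax

pre = [20_000, 35_000, 50_000, 80_000, 300_000]   # invented pre-tax incomes
post = [after_tax(x) for x in pre]
print(f"Gini before tax: {gini(pre):.3f}")
print(f"Gini after tax:  {gini(post):.3f}")  # lower: the schedule is progressive
```

The gap between the two Gini values is exactly the before/after-taxation indicator mentioned above; a flat (proportional) tax would leave the Gini coefficient unchanged.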
Education
An important factor in the creation of inequality is variation in individuals' access to education.[68] Education, especially in an area where there is high demand for workers, creates high wages for those with that education.[69] However, increases in education first increase and then decrease growth as well as income inequality. As a result, those who are unable to afford an education, or who choose not to pursue optional education, generally receive much lower wages. The reasoning is that a lack of education leads directly to lower incomes, and thus lower aggregate saving and investment. Conversely, quality education raises incomes and promotes growth because it helps to unleash the productive potential of the poor.[70]
Access to education was in turn influenced by land inequalities. In the less industrialized parts of 19th century Europe, for example, landowners still held more political power than industrialists. These landowners did not benefit from educating their workers as much as industrialists did, since "educated workers have more incentives to migrate to urban, industrial areas than their less educated counterparts."[71] Consequently, lower incentives to promote education in regions where land inequality was high led to lower levels of numeracy in these regions.[71]
Economic liberalism, deregulation and decline of unions
John Schmitt and Ben Zipperer (2006) of the CEPR point to economic liberalism and the reduction of business regulation along with the decline of union membership as one of the causes of economic inequality. In an analysis of the effects of intensive Anglo-American liberal policies in comparison to continental European liberalism, where unions have remained strong, they concluded "The U.S. economic and social model is associated with substantial levels of social exclusion, including high levels of income inequality, high relative and absolute poverty rates, poor and unequal educational outcomes, poor health outcomes, and high rates of crime and incarceration. At the same time, the available evidence provides little support for the view that U.S.-style labor market flexibility dramatically improves labor-market outcomes. Despite popular prejudices to the contrary, the U.S. economy consistently affords a lower level of economic mobility than all the continental European countries for which data is available."[72]
More recently, the International Monetary Fund has published studies which found that the decline of unionization in many advanced economies and the establishment of neoliberal economics have fueled rising income inequality.[73][74]
Information technology
The growing importance of information technology has been credited with increasing income inequality.[75] Technology has been called "the main driver of the recent increases in inequality" by Erik Brynjolfsson of MIT.[76] In arguing against this explanation, Jonathan Rothwell notes that if technological advancement is measured by high rates of invention, there is a negative correlation between it and inequality. Countries with high invention rates, "as measured by patent applications filed under the Patent Cooperation Treaty", exhibit lower inequality than those with fewer. In one country, the United States, "salaries of engineers and software developers rarely reach" above $390,000/year (the lower limit for the top 1% of earners).[77]
Some researchers, such as Juliet B. Schor, highlight the role of for-profit online sharing-economy platforms as an accelerator of income inequality and call into question their supposed contribution to empowering outsiders of the labour market.[78]
Taking the example of TaskRabbit, a labour service platform, she shows that a large proportion of providers already have a stable full-time job and participate part-time in the platform as an opportunity to increase their income by diversifying their activities outside employment, which tends to restrict the volume of work remaining for the minority of platform workers.
In addition, there is an important phenomenon of labour substitution: manual tasks traditionally performed in the traditional economy sectors by workers without a degree (or with just a college degree) are now performed by workers with a high level of education (in 2013, 70% of TaskRabbit's workforce held a bachelor's degree, 20% a master's degree and 5% a PhD).[79] The development of platforms, which increasingly capture demand for these manual services at the expense of non-platform companies, may therefore mainly benefit skilled workers, who are offered more earning opportunities that can be used as supplemental or transitional work during periods of unemployment.
It has also been proposed that information technologies contribute to "winner take most" market concentration, reducing the need for labor across competing suppliers.[80] Market concentration drives down labor's share of the GDP, increasing the wealth of capital and thereby exacerbating inequality.
Economists have linked automation to increases in economic inequality, as automation raises the returns to wealth and contributes to stagnating wages at the lower end of the wage distribution.[81]
Globalization
Trade liberalization may shift economic inequality from a global to a domestic scale.[83] When rich countries trade with poor countries, the low-skilled workers in the rich countries may see reduced wages as a result of the competition, while low-skilled workers in the poor countries may see increased wages. Trade economist Paul Krugman estimates that trade liberalisation has had a measurable effect on the rising inequality in the United States. He attributes this trend to increased trade with poor countries and the fragmentation of the means of production, resulting in low-skilled jobs becoming more tradeable.[84]
Anthropologist Jason Hickel contends that globalization and "structural adjustment" set off the "race to the bottom", a significant driver of surging global inequality. Another driver Hickel mentions is the debt system which advanced the need for structural adjustment in the first place.[85]
Gender
In many countries, there is a gender pay gap in favor of males in the labor market. Several factors other than discrimination contribute to this gap. On average, women are more likely than men to consider factors other than pay when looking for work, and may be less willing to travel or relocate.[87][88] Thomas Sowell, in his book Knowledge and Decisions, claims that this difference is due to women not taking jobs due to marriage or pregnancy. A U.S. Census report stated that, once other factors are accounted for, there is still a difference in earnings between women and men in the US.[89] A study of three post-Soviet countries, Armenia, Georgia, and Azerbaijan, reveals that gender is one of the driving forces of income inequality, and that being female has a significant negative effect on income when other factors are held equal. The results show a gender pay gap of more than 50% in all three countries.[90] One proposed explanation is that employers tend to avoid hiring women because of possible maternity leave. Another is occupational segregation, whereby women are concentrated in lower-paid positions and sectors, such as social services and education.
Race
There is also a globally recognized disparity in the wealth, income, and economic welfare of people of different races. In many nations, data exists to suggest that members of certain racial demographics experience lower wages, fewer opportunities for career and educational advancement, and intergenerational wealth gaps.[91] Studies have uncovered the emergence of what is called "ethnic capital", by which people belonging to a race that has experienced discrimination are born into a disadvantaged family from the beginning and therefore have less resources and opportunities at their disposal.[92][93] The universal lack of education, technical and cognitive skills, and inheritable wealth within a particular race is often passed down between generations, compounding in effect to make escaping these racialized cycles of poverty increasingly difficult.[93] Additionally, ethnic groups that experience significant disparities are often also minorities, at least in representation though often in number as well, in the nations where they experience the harshest disadvantage. As a result, they are often segregated either by government policy or social stratification, leading to ethnic communities that experience widespread gaps in wealth and aid.[94]
As a general rule, races which have been historically and systematically colonized (typically indigenous ethnicities) continue to experience lower levels of financial stability in the present day. The global South is considered to be particularly victimized by this phenomenon, though the exact socioeconomic manifestations change across different regions.[91]
Westernized Nations
Even in economically developed societies with high levels of modernization such as may be found in Western Europe, North America, and Australia, minority ethnic groups and immigrant populations in particular experience financial discrimination.[citation needed] While the progression of civil rights movements and justice reform has improved access to education and other economic opportunities in politically advanced nations, racial income and wealth disparity still prove significant.[95] In the United States, for example, a survey[when?] of African-American populations shows that they are more likely to drop out of high school and college, are typically employed for fewer hours at lower wages, have lower than average intergenerational wealth, and are more likely to use welfare as young adults than their white counterparts.[96]
Mexican-Americans, while suffering less debilitating socioeconomic factors than black Americans, experience deficiencies in the same areas when compared to whites and have not assimilated financially to the level of stability experienced by white Americans as a whole.[97] These experiences are the effects of the measured disparity due to race in countries like the US, where studies show that in comparison to whites, blacks suffer from drastically lower levels of upward mobility, higher levels of downward mobility, and poverty that is more easily transmitted to offspring as a result of the disadvantage stemming from the era of slavery and post-slavery racism that has been passed through racial generations to the present.[98][99][100] These are lasting financial inequalities that apply in varying magnitudes to most non-white populations in nations such as the US, the UK, France, Spain, Australia, etc.[91]
Latin America
In the countries of the Caribbean, Central America, and South America, many ethnicities continue to deal with the effects of European colonization, and in general nonwhites tend to be noticeably poorer than whites in this region. In many countries with significant populations of indigenous races and those of Afro-descent (such as Mexico, Colombia, Chile, etc.), income levels can be roughly half as high as those experienced by white demographics, and this inequity is accompanied by systematically unequal access to education, career opportunities, and poverty relief. This region of the world, apart from urbanizing areas like Brazil and Costa Rica, continues to be understudied, and often the racial disparity is denied by Latin Americans who consider themselves to be living in post-racial and post-colonial societies far removed from intense social and economic stratification, despite the evidence to the contrary.[101]
Africa
African countries, too, continue to deal with the effects of the Trans-Atlantic Slave Trade, which set back economic development as a whole for blacks of African citizenship more than any other region. The degree to which colonizers stratified their holdings on the continent on the basis of race correlates directly with the magnitude of disparity experienced by nonwhites in the nations that eventually rose from their colonial status. Former French colonies, for example, see much higher rates of income inequality between whites and nonwhites as a result of the rigid hierarchy imposed by the French who lived in Africa at the time.[102] Another example is found in South Africa, which, still reeling from the socioeconomic impacts of Apartheid, experiences some of the highest racial income and wealth inequality in all of Africa.[98] In these and other countries like Nigeria, Zimbabwe, and Sierra Leone, civil reform movements initially led to improved access to financial advancement opportunities, but data[when?] shows that for nonwhites this progress is either stalling or erasing itself in the newest generation of blacks that seek education and improved transgenerational wealth. The economic status of one's parents continues to define and predict the financial futures of African and minority ethnic groups.[103][needs update]
Asia
Asian regions and countries such as China, the Middle East, and Central Asia have been vastly understudied in terms of racial disparity, but even here the effects of Western colonization provide similar results to those found in other parts of the globe.[91] Additionally, cultural and historical practices such as the caste system in India leave their marks as well. While the disparity is greatly improving in the case of India, there still exists social stratification between peoples of lighter and darker skin tones that cumulatively result in income and wealth inequality, manifesting in many of the same poverty traps seen elsewhere.[104]
Economic development
Economist Simon Kuznets argued that levels of economic inequality are in large part the result of stages of development. According to Kuznets, countries with low levels of development have relatively equal distributions of wealth. As a country develops, it acquires more capital, which leads to the owners of this capital having more wealth and income and introducing inequality. Eventually, through various possible redistribution mechanisms such as social welfare programs, more developed countries move back to lower levels of inequality.[105]
Wealth concentration
Wealth concentration is the process by which, under certain conditions, newly created wealth concentrates in the possession of already-wealthy individuals or entities. Accordingly, those who already hold wealth have the means to invest in new sources of creating wealth or to otherwise leverage the accumulation of wealth, and thus they are the beneficiaries of the new wealth. Over time, wealth concentration can significantly contribute to the persistence of inequality within society. Thomas Piketty in his book Capital in the Twenty-First Century argues that the fundamental force for divergence is the usually greater return of capital (r) than economic growth (g), and that larger fortunes generate higher returns.[106]
Rent seeking
Economist Joseph Stiglitz argues that rather than explaining concentrations of wealth and income, market forces should serve as a brake on such concentration, which may better be explained by the non-market force known as "rent-seeking" or unjust enrichment. While the market will bid up compensation for rare and desired skills to reward wealth creation, greater productivity, etc., it will also prevent successful entrepreneurs from earning excess profits by fostering competition to cut prices, profits and large compensation.[107] A better explainer of growing inequality, according to Stiglitz, is the use of political power generated by wealth by certain groups to shape government policies financially beneficial to them. This process, known to economists as rent-seeking, brings income not from creation of wealth but from "grabbing a larger share of the wealth that would otherwise have been produced without their effort".[108]
Finance industry
Jamie Galbraith argues that countries with larger financial sectors have greater inequality, and the link is not an accident.[109][110][why?]
Global warming
A 2019 study published in PNAS found that global warming plays a role in increasing economic inequality between countries, boosting economic growth in developed countries while hampering such growth in developing nations of the Global South. The study says that 25% of the gap between the developed world and the developing world can be attributed to global warming.[111]
A 2020 report by Oxfam and the Stockholm Environment Institute says that the wealthiest 10% of the global population were responsible for more than half of global carbon dioxide emissions from 1990 to 2015, a period during which annual emissions grew by 60%.[112] According to a 2020 report by the UNEP, overconsumption by the rich is a significant driver of the climate crisis, and the wealthiest 1% of the world's population are responsible for more than double the greenhouse gas emissions of the poorest 50% combined. Inger Andersen, in the foreword to the report, said "this elite will need to reduce their footprint by a factor of 30 to stay in line with the Paris Agreement targets."[113] A 2022 report by Oxfam found that the business investments of the wealthiest 125 billionaires emit 393 million metric tonnes of greenhouse gas emissions annually.[114]
Politics
Joseph Stiglitz argues in The Price of Inequality (2012) that economic inequality is inevitable and permanent because it is caused by the great amount of political power the richest have.[57] He wrote, "While there may be underlying economic forces at play, politics have shaped the market, and shaped it in ways that advantage the top at the expense of the rest."
Mitigating factors
Countries with a left-leaning legislature generally have lower levels of inequality.[116][117] Many factors constrain economic inequality; they may be divided into two classes: government sponsored and market driven. The relative merits and effectiveness of each approach are a subject of debate.
Typical government initiatives to reduce economic inequality include:
- Public education: increasing the supply of skilled labor and reducing income inequality due to education differentials.[118]
- Progressive taxation: the rich are taxed proportionally more than the poor, reducing the amount of income inequality in society if the change in taxation does not cause changes in income.[119]
Market forces outside of government intervention that can reduce economic inequality include:
- propensity to spend: with rising wealth and income, a person may spend more. In an extreme example, if one person owned everything, they would immediately need to hire people to maintain their properties, thus reducing the wealth concentration.[120] On the other hand, high-income persons have a higher propensity to save.[121] Robin Maialeh shows that increasing economic wealth decreases the propensity to spend and increases the propensity to invest, which consequently leads to an even greater growth rate for already-rich agents.[122]
Research shows that since 1300, the only periods with significant declines in wealth inequality in Europe were the Black Death and the two World Wars.[123] Historian Walter Scheidel posits that, since the Stone Age, only extreme violence, catastrophes and upheaval in the form of total war, Communist revolution, pestilence and state collapse have significantly reduced inequality.[124][125] He has stated that "only all-out thermonuclear war might fundamentally reset the existing distribution of resources" and that "peaceful policy reform may well prove unequal to the growing challenges ahead."[126][127]
Policy responses intended to mitigate
A 2011 OECD study makes a number of suggestions to its member countries, including:[16]
- Well-targeted income-support policies.
- Facilitation and encouragement of access to employment.
- Better job-related training and education for the low-skilled (on-the-job training) would help to boost their productivity potential and future earnings.
- Better access to formal education.
Progressive taxation reduces absolute income inequality when the higher rates on higher-income individuals are paid and not evaded, and transfer payments and social safety nets result in progressive government spending.[128][129][130] Wage ratio legislation has also been proposed as a means of reducing income inequality. The OECD asserts that public spending is vital in reducing the ever-expanding wealth gap.[131]
Deferred investment programs that increase stock ownership amongst lower income levels can supplement income to compensate wage stagnation.[58]
The economists Emmanuel Saez and Thomas Piketty recommend much higher top marginal tax rates on the wealthy, up to 50 percent, 70 percent or even 90 percent.[132] Ralph Nader, Jeffrey Sachs, the United Front Against Austerity, among others, call for a financial transaction tax (also known as the Robin Hood tax) to bolster the social safety net and the public sector.[133][134]
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage."[135]
General limitations on and taxation of rent-seeking are popular across the political spectrum.[136]
Public policy responses addressing causes and effects of income inequality in the US include: progressive tax incidence adjustments, strengthening social safety net provisions such as Aid to Families with Dependent Children, welfare, the food stamp program, Social Security, Medicare, and Medicaid, organizing community interest groups, increasing and reforming higher education subsidies, increasing infrastructure spending, and placing limits on and taxing rent-seeking.[137]
A 2017 study in the Journal of Political Economy by Daron Acemoglu, James Robinson and Thierry Verdier argues that American "cutthroat" capitalism and inequality gives rise to technology and innovation that more "cuddly" forms of capitalism cannot.[138] As a result, "the diversity of institutions we observe among relatively advanced countries, ranging from greater inequality and risk-taking in the United States to the more egalitarian societies supported by a strong safety net in Scandinavia, rather than reflecting differences in fundamentals between the citizens of these societies, may emerge as a mutually self-reinforcing world equilibrium. If so, in this equilibrium, 'we cannot all be like the Scandinavians,' because Scandinavian capitalism depends in part on the knowledge spillovers created by the more cutthroat American capitalism."[138] A 2012 working paper by the same authors, making similar arguments, was challenged by Lane Kenworthy, who posited that, among other things, the Nordic countries are consistently ranked as some of the world's most innovative countries by the World Economic Forum's Global Competitiveness Index, with Sweden ranking as the most innovative nation, followed by Finland, for 2012–2013; the U.S. ranked sixth.[139]
There are, however, global initiatives such as United Nations Sustainable Development Goal 10, which aims to marshal international efforts to reduce economic inequality considerably by 2030.[140]
Effects
Considerable research has examined the effects of economic inequality on different aspects of society:
- Health: For a long time, higher material living standards led to longer lives, as people could obtain sufficient food, water and warmth. British researchers Richard G. Wilkinson and Kate Pickett have found higher rates of health and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) in countries and states with higher inequality.[141][142] Their research covered 24 developed countries and most U.S. states, and found that health problems in the more equal countries, such as Finland and Japan, are much less common than in states with higher inequality, such as Utah and New Hampshire. Some studies link a surge in "deaths of despair", suicide, drug overdoses and alcohol-related deaths, to widening income inequality.[143][144] Conversely, other research has not found these effects, or has concluded that the research suffered from confounding variables.[145]
- Social goods: British researchers Richard G. Wilkinson and Kate Pickett have found lower rates of social goods (life expectancy by country, educational performance, trust among strangers, women's status, social mobility, even numbers of patents issued) in countries and states with higher inequality.[141][142]
- Social cohesion: Research has shown an inverse link between income inequality and social cohesion. In more equal societies, people are much more likely to trust each other, and measures of social capital (the benefits of goodwill, fellowship, mutual sympathy and social connectedness among the groups that make up a social unit) suggest greater community involvement.
- Crime: Cross-national research shows that homicide rates are consistently lower in societies with less economic inequality.[146] A 2016 study finds that interregional inequality increases terrorism.[147] Other research has argued that inequality has little effect on crime rates.[148][149]
- Welfare: Studies have found evidence that in societies where inequality is lower, population-wide satisfaction and happiness tend to be higher.[150][151][152][153]
- Poverty: A study by Jared Bernstein and Elise Gould suggests that poverty in the United States could have been reduced had economic inequality been lowered over the past few decades.[154][155]
- Debt: Income inequality has been a driving factor in growing household debt,[156][157] as high earners bid up the price of real estate and middle-income earners go deeper into debt trying to maintain what was once a middle-class lifestyle.[158]
- Economic growth: A 2016 meta-analysis found that "the effect of inequality on growth is negative and more pronounced in less developed countries than in rich countries", though the average impact on growth was not significant. The study also found that wealth inequality is more pernicious to growth than income inequality.[13]
- Civic participation: Higher income inequality leads to lower levels of all forms of social, cultural, and civic participation among the less wealthy.[159]
- Political instability: Studies indicate that economic inequality leads to greater political instability, including an increased risk of democratic breakdown[11][160][161][162][163] and civil conflict.[164][12] A significant impact of inequality on civil war probability has been found through anthropometric methods.[165]
- Political party responses: One study finds that economic inequality prompts attempts by left-leaning politicians to pursue redistributive policies while right-leaning politicians seek to repress the redistributive policies.[166]
Perspectives
Fairness vs. equality
According to Christina Starmans et al. (Nature Human Behaviour, 2017), the research literature contains no evidence that people are averse to inequality as such. In all the studies analyzed, subjects preferred fair distributions (inequity aversion) over equal distributions, in both laboratory and real-world situations. In public discussion, researchers may loosely speak of equality rather than fairness when referring to studies where fairness happens to coincide with equality, but in many studies fairness is carefully separated from equality, and the results are unequivocal. Even very young children seem to prefer fairness over equality.[167]
When people were asked what the wealth of each quintile should be in their ideal society, they gave the richest quintile fifty times as much as the poorest. The preference for inequality increases in adolescence, as does the capacity to weigh fortune, effort and ability in the distribution.[167]
A preference for unequal distribution may have evolved in the human species because it enables better cooperation, allowing a person to work with a more productive partner so that both parties benefit. Inequality is also said to help solve the problems of free riders, cheaters and ill-behaving people, although this is heavily debated.[167] Research demonstrates that people usually underestimate the actual level of inequality, which is also much higher than their desired level of inequality.[168]
In some societies, such as the USSR, the distribution led to protests from wealthier landowners.[169] In the present-day U.S., many feel that the distribution is unfair in being too unequal. In both cases, the researchers conclude, the cause is unfairness, not inequality.[167]
Socialist perspectives
Socialists attribute the vast disparities in wealth to the private ownership of the means of production by a class of owners, creating a situation where a small portion of the population lives off unearned property income by virtue of ownership titles in capital equipment, financial assets and corporate stock. By contrast, the vast majority of the population is dependent on income in the form of a wage or salary. In order to rectify this situation, socialists argue that the means of production should be socially owned so that income differentials would be reflective of individual contributions to the social product.[170]
Marxian economics attributes rising inequality to job automation and capital deepening within capitalism. The process of job automation conflicts with the capitalist property form and its attendant system of wage labor. In this analysis, capitalist firms increasingly substitute capital equipment for labor inputs (workers) under competitive pressure to reduce costs and maximize profits. Over the long term, this trend increases the organic composition of capital, meaning that fewer workers are required in proportion to capital inputs, increasing unemployment (the "reserve army of labour"). This process exerts downward pressure on wages. The substitution of capital equipment for labor (mechanization and automation) raises the productivity of each worker, resulting in relatively stagnant wages for the working class amidst rising levels of property income for the capitalist class.[171]
Marxist socialists ultimately predict the emergence of a communist society based on the common ownership of the means of production, where each individual citizen would have free access to the articles of consumption (From each according to his ability, to each according to his need). According to Marxist philosophy, equality in the sense of free access is essential for freeing individuals from dependent relationships, thereby allowing them to transcend alienation.[172]
Meritocracy
Meritocracy favors a society in which an individual's success is a direct function of his merit, or contribution. Economic inequality would then be a natural consequence of the wide range of individual skill, talent and effort in the human population. David Landes stated that the progression of Western economic development that led to the Industrial Revolution was facilitated by men advancing through their own merit rather than because of family or political connections.[173]
Liberal perspectives
Most modern social liberals, including centrist or left-of-center political groups, believe that the capitalist economic system should be fundamentally preserved, but the status quo regarding the income gap must be reformed. Social liberals favor a capitalist system with active Keynesian macroeconomic policies and progressive taxation (to even out differences in income inequality). Research indicates that people who hold liberal beliefs tend to see greater income inequality as morally wrong.[174]
However, contemporary classical liberals and libertarians generally do not take a stance on wealth inequality, but believe in equality under the law regardless of whether it leads to unequal wealth distribution. In 1966, Ludwig von Mises, a prominent figure in the Austrian School of economic thought, explained:
The liberal champions of equality under the law were fully aware of the fact that men are born unequal and that it is precisely their inequality that generates social cooperation and civilization. Equality under the law was in their opinion not designed to correct the inexorable facts of the universe and to make natural inequality disappear. It was, on the contrary, the device to secure for the whole of mankind the maximum of benefits it can derive from it. Henceforth no man-made institutions should prevent a man from attaining that station in which he can best serve his fellow citizens.
Robert Nozick argued that government redistributes wealth by force (usually in the form of taxation), and that the ideal moral society would be one where all individuals are free from force. However, Nozick recognized that some modern economic inequalities were the result of forceful taking of property, and a certain amount of redistribution would be justified to compensate for this force but not because of the inequalities themselves.[175] John Rawls argued in A Theory of Justice[176] that inequalities in the distribution of wealth are only justified when they improve society as a whole, including the poorest members. Rawls does not discuss the full implications of his theory of justice. Some see Rawls's argument as a justification for capitalism since even the poorest members of society theoretically benefit from increased innovations under capitalism; others believe only a strong welfare state can satisfy Rawls's theory of justice.[177]
Classical liberal Milton Friedman believed that if government action is taken in pursuit of economic equality, political freedom will suffer. In a famous quote, he said:
A society that puts equality before freedom will get neither. A society that puts freedom before equality will get a high degree of both.
Economist Tyler Cowen has argued that though income inequality has increased within nations, globally it has fallen over the 20 years leading up to 2014. He argues that though income inequality may make individual nations worse off, overall, the world has improved as global inequality has been reduced.[178]
Social justice arguments
Patrick Diamond and Anthony Giddens (professors of Economics and Sociology, respectively) hold that 'pure meritocracy is incoherent because, without redistribution, one generation's successful individuals would become the next generation's embedded caste, hoarding the wealth they had accumulated'.[179]
They also state that social justice requires redistribution of high incomes and large concentrations of wealth in a way that spreads it more widely, in order to "recognize the contribution made by all sections of the community to building the nation's wealth." (Patrick Diamond and Anthony Giddens, June 27, 2005, New Statesman)[180]
Pope Francis stated in his Evangelii gaudium, that "as long as the problems of the poor are not radically resolved by rejecting the absolute autonomy of markets and financial speculation and by attacking the structural causes of inequality, no solution will be found for the world's problems or, for that matter, to any problems."[181] He later declared that "inequality is the root of social evil."[182]
When income inequality is low, aggregate demand will be relatively high, because more people who want ordinary consumer goods and services will be able to afford them, while the labor force will not be as relatively monopolized by the wealthy.[183]
Effects on social welfare
In most western democracies, the desire to eliminate or reduce economic inequality is generally associated with the political left. One practical argument in favor of reduction is the idea that economic inequality reduces social cohesion and increases social unrest, thereby weakening the society. There is evidence that this is true (see inequity aversion) and it is intuitive, at least for small face-to-face groups of people.[184] Alberto Alesina, Rafael Di Tella, and Robert MacCulloch find that inequality negatively affects happiness in Europe but not in the United States.[185]
It has also been argued that economic inequality invariably translates to political inequality, which further aggravates the problem. Even in cases where an increase in economic inequality makes nobody economically poorer, an increased inequality of resources is disadvantageous, as increased economic inequality can lead to a power shift due to an increased inequality in the ability to participate in democratic processes.[186]
Capabilities approach
The capabilities approach – sometimes called the human development approach – views income inequality and poverty as forms of "capability deprivation".[187] Unlike neoliberalism, which "defines well-being as utility maximization", it treats economic growth and income as a means to an end rather than the end itself.[188] Its goal is to "wid[en] people's choices and the level of their achieved well-being"[189] by increasing functionings (the things a person values doing), capabilities (the freedom to enjoy functionings) and agency (the ability to pursue valued goals).[190]
When a person's capabilities are lowered, they are in some way deprived of the ability to earn as much income as they otherwise would. An old, ill man cannot earn as much as a healthy young man; gender roles and customs may prevent a woman from receiving an education or working outside the home. There may be an epidemic that causes widespread panic, or rampant violence in an area that keeps people from going to work in fear for their lives.[187] As a result, income inequality increases, and it becomes more difficult to reduce the gap without additional aid.
See also
- Accumulation of capital
- Anti-capitalism
- Aporophobia
- Class conflict
- Criticism of capitalism
- Cycle of poverty
- Donor Class
- Economic anxiety
- Economic migrant
- Economic security
- Equal opportunity
- Great Divergence, disproportionate economic advancement of Europe
- Human Development Index
- Income distribution
- Inequality for All
- International inequality
- List of countries by distribution of wealth
- List of countries by income equality
- List of countries by wealth per adult
- Occupy movement
- Paradise Papers
- Poverty reduction
- Public university
- Rent-seeking
- Social inequality
- Spatial inequality
- Tax haven
- Theories of poverty
- Wealth concentration
- Wealth distribution
References
- Deneulin, Séverine; Alkire, Sabina (2009), "The human development and capability approach", in Deneulin, Séverine; Shahani, Lila (eds.), An introduction to the human development and capability approach freedom and agency, Sterling, VA & Ottawa, Ontario: Earthscan International Development Research Centre, pp. 22–48, ISBN 978-1844078066
Further reading
- Books
- Atkinson, Anthony B.; Bourguignon, François (2000). Handbook of income distribution. Amsterdam & New York: Elsevier. ISBN 978-0444816313.
- Atkinson, Anthony B. (2015). Inequality: What Can Be Done? Cambridge, Massachusetts: Harvard University Press. ISBN 0674504763
- Barro, Robert J.; Sala-i-Martin, Xavier (2003) [1995]. Economic growth (2nd ed.). Massachusetts: MIT Press. ISBN 978-0262025539.
- Deneulin, Séverine; Shahani, Lila (2009). An introduction to the human development and capability approach freedom and agency. Sterling, VA & Ottawa, Ontario: Earthscan International Development Research Centre. ISBN 978-1844078066.
- Giddens, Anthony; Diamond, Patrick (2005). The new egalitarianism. Cambridge, UK & Malden, MA: Polity. ISBN 978-0745634319.
- Gilens, Martin (2012). Affluence and influence: Economic inequality and political power in America. Princeton, NJ & New York: Princeton University Press Russell Sage Foundation. ISBN 978-0691162423.
- Gradín, Carlos; Leibbrandt, Murray; Tarp, Finn, eds. (2021). Inequality In The Developing World. WIDER Studies in Development Economics. Oxford University Press. ISBN 978-0198863960.
- Lambert, Peter J. (2001). The distribution and redistribution of income (3rd ed.). Manchester, NY: Manchester University Press Palgrave. ISBN 978-0719057328.
- Lynn, Richard; Vanhanen, Tatu (2002). IQ and the wealth of nations. Westport, Connecticut: Praeger. ISBN 978-0275975104.
- Merino, Noël, ed. (2016). Income inequality. Opposing Viewpoints Series. Farmington Hills, MI: Greenhaven Press. ISBN 978-0737775259.
- Page, Benjamin I.; Jacobs, Lawrence R. (2009). Class war?: What Americans really think about economic inequality. Chicago: University of Chicago Press. ISBN 978-0226644554.
- Ribeiro, Marcelo Byrro (2020). Income Distribution Dynamics of Economic Systems: An Econophysical Approach. Cambridge University Press. ISBN 978-1107092532.
- Salverda, Wiemer; Nolan, Brian; Smeeding, Timothy M. (2009). The Oxford handbook of economic inequality. Oxford & New York: Oxford University Press. ISBN 978-0199231379.
- Schmidtz, David (2006). The elements of justice. Cambridge & New York: Cambridge University Press. ISBN 978-0521539364.
- Sen, Amartya (1999). Development as Freedom. New York: Oxford University Press. ISBN 978-0198297581.
- Sen, Amartya; Foster, James E. (1997). On economic inequality. Radcliffe Lectures. Oxford & New York: Clarendon Press Oxford University Press. ISBN 978-0198281931.
- von Braun, Joachim; Diaz-Bonilla, Eugenio (2008). Globalization of food and agriculture and the poor. New Delhi & Washington, D.C.: Oxford University Press; International Food Policy Research Institute. ISBN 978-0195695281.
- Wilkinson, Richard G. (2005). The impact of inequality: how to make sick societies healthier. London: Routledge. ISBN 978-0415372695.
- Wilkinson, Richard G.; Pickett, Kate (2009). The spirit level: why more equal societies almost always do better. London: Allen Lane. ISBN 978-1846140396.
- Articles
- Ahamed, Liaquat (September 2, 2019). "Widening Gyre: The rise and fall and rise of economic inequality". The New Yorker. pp. 26–29.
[T]here seems to [be] some sort of cap on inequality – a limit to the economic divisions a country can ultimately cope with.
- Alesina, Alberto; Di Tella, Rafael; MacCulloch, Robert (2004). "Inequality and happiness: Are Europeans and Americans different?". Journal of Public Economics. 88 (9–10): 2009–42. CiteSeerX 10.1.1.203.664. doi:10.1016/j.jpubeco.2003.07.006.
- Andersen, Robert (2012). "Support for Democracy in Cross-national Perspective: The Detrimental Effect of Economic Inequality" (PDF). Research in Social Stratification and Mobility. 30 (4): 389–402. doi:10.1016/j.rssm.2012.04.002.
- Andersen, Robert; Fetner, Tina (2008). "Economic Inequality and Intolerance: Attitudes toward Homosexuality in 35 Democracies". American Journal of Political Science. 52 (4): 942–58. doi:10.1111/j.1540-5907.2008.00352.x. hdl:11375/22293. JSTOR 25193859.
- Barro, Robert J. (1991). "Economic Growth in a Cross Section of Countries". The Quarterly Journal of Economics. 106 (2): 407–43. CiteSeerX 10.1.1.312.3126. doi:10.2307/2937943. JSTOR 2937943.
- Barro, Robert J. (2000). "Inequality and Growth in a Panel of Countries". Journal of Economic Growth. 5 (1): 5–32. doi:10.1023/A:1009850119329. S2CID 2089406.
- Cousin, Bruno; Chauvin, Sébastien (2021). "Is there a global super-bourgeoisie?". Sociology Compass. 15 (6): 1–15.
- Cousin, Bruno; Khan, Shamus; Mears, Ashley (2018). "Theoretical and methodological pathways for research on elites". Socio-Economic Review. 16 (2): 225–249.
- Fukuda-Parr, Sakiko (2003). "The Human Development Paradigm: Operationalizing Sen's Ideas on Capabilities". Feminist Economics. 9 (2–3): 301–17. doi:10.1080/1354570022000077980. S2CID 18178004.
- Galor, Oded; Zeira, Joseph (1993). "Income Distribution and Macroeconomics". The Review of Economic Studies. 60 (1): 35–52. CiteSeerX 10.1.1.636.8225. doi:10.2307/2297811. JSTOR 2297811.
- Hatch, Megan E.; Rigby, Elizabeth (2015). "Laboratories of (In)equality? Redistributive Policy and Income Inequality in the American States". Policy Studies Journal. 43 (2): 163–187. doi:10.1111/psj.12094.
- Kaldor, Nicholas (1955). "Alternative Theories of Distribution". The Review of Economic Studies. 23 (2): 83–100. doi:10.2307/2296292. JSTOR 2296292.
- Kenworthy, Lane (2010). "Rising Inequality, Public Policy, and America's Poor". Challenge. 53 (6): 93–109. doi:10.2753/0577-5132530606. JSTOR 27896630. S2CID 154630590.
- Kenworthy, Lane (2017). "Why the Surge in Income Inequality?". Contemporary Sociology. 46 (1): 1–9. doi:10.1177/0094306116681789. S2CID 151979382.
- Lagerlof, Nils-Petter (2005). "Sex, equality, and growth". Canadian Journal of Economics. 38 (3): 807–31. doi:10.1111/j.0008-4085.2005.00303.x. S2CID 154768462.
- Lazzarato, Maurizio (2009). "Neoliberalism in Action: Inequality, Insecurity and the Reconstitution of the Social". Theory, Culture & Society. 26 (6): 109–33. doi:10.1177/0263276409350283. S2CID 145758386.
- Maavak, Mathew (December 2012). "Class warfare, anarchy and the future society" (PDF). Journal of Futures Studies. 17 (2): 15–36. Archived from the original (PDF) on October 19, 2017. Retrieved March 18, 2013.
- García-Peñalosa, Cecilia; Turnovsky, Stephen J. (2007). "Growth, Income Inequality, and Fiscal Policy: What Are the Relevant Trade-offs?". Journal of Money, Credit and Banking. 39 (2–3): 369–94. CiteSeerX 10.1.1.186.2754. doi:10.1111/j.0022-2879.2007.00029.x.
- Pigou, Arthur C. (1932) [1920], "Part I, Chapter VIII: Economic welfare and changes in the distribution of the national dividend (section I.VIII.3)", in Pigou, Arthur C. (ed.), The economics of welfare (4th ed.), London: Macmillan and Co., OCLC 302702.
- Sala-i-Martin, X. (2006). "The World Distribution of Income: Falling Poverty and ... Convergence, Period". The Quarterly Journal of Economics. 121 (2): 351–97. doi:10.1162/qjec.2006.121.2.351. JSTOR 25098796.
- Seguino, Stephanie (2000). "Gender Inequality and Economic Growth: A Cross-Country Analysis". World Development. 28 (7): 1211–30. doi:10.1016/S0305-750X(00)00018-8.
- Smeeding, Timothy M.; Thompson, Jeffrey P. (2011). "Recent Trends in Income Inequality". In Immervoll, Herwig; Peichl, Andreas; Tatsiramos, Konstantinos (eds.). Who Loses in the Downturn? Economic Crisis, Employment and Income Distribution. Research in Labor Economics. Vol. 32. pp. 1–50. doi:10.1108/S0147-9121(2011)0000032004. ISBN 978-0857247490.
- Solow, Robert M. (1956). "A Contribution to the Theory of Economic Growth". The Quarterly Journal of Economics. 70 (1): 65–94. doi:10.2307/1884513. hdl:10338.dmlcz/143862. JSTOR 1884513.
- Stewart, Alexander J.; McCarty, Nolan; Bryson, Joanna J. (2020). "Polarization under rising inequality and economic decline". Science Advances. 6 (50): eabd4201. arXiv:1807.11477. Bibcode:2020SciA....6.4201S. doi:10.1126/sciadv.abd4201. PMC 7732181. PMID 33310855. S2CID 216144890.
- Svizzero, Serge; Tisdell, Clem (2003). "Income inequality between skilled individuals" (PDF). International Journal of Social Economics. 30 (11): 1118–30. doi:10.1108/03068290310497486. S2CID 153963662.
- Vicencio, Eduardo Rivera (2019). "Inequality, Precariousness and Social Costs of Capitalism. In the Era of Corporate Governmentality". International Journal of Critical Accounting (IJCA). 11 (1): 40–70. doi:10.1504/IJCA.2019.10025189. S2CID 211435244.
Historical
- Alfani, Guido, and Matteo Di Tullio. The Lion's Share: Inequality and the Rise of the Fiscal State in Preindustrial Europe. Cambridge: Cambridge University Press, 2019.
- Crayen, Dorothee, and Joerg Baten. "New evidence and new methods to measure human capital inequality before and during the industrial revolution: France and the US in the seventeenth to nineteenth centuries." Economic History Review 63.2 (2010): 452–478. online
- Hickel, Jason (2018). The Divide: Global Inequality from Conquest to Free Markets. W. W. Norton & Company. ISBN 978-0393651362.
- Hoffman, Philip T., et al. "Real inequality in Europe since 1500." Journal of Economic History 62.2 (2002): 322–355. online
- Morrisson, Christian, and Wayne Snyder. "The income inequality of France in historical perspective." European Review of Economic History 4.1 (2000): 59–83. online
- Lindert, Peter H., and Steven Nafziger. "Russian inequality on the eve of revolution." Journal of Economic History 74.3 (2014): 767–798. online
- Nicolini, Esteban A.; Ramos Palencia, Fernando (2016). "Decomposing income inequality in a backward pre-industrial economy: Old Castile (Spain) in the middle of the eighteenth century". Economic History Review. 69 (3): 747–772. doi:10.1111/ehr.12122. S2CID 154988112.
- Piketty, Thomas, and Emmanuel Saez. "The evolution of top incomes: a historical and international perspective." American economic review 96.2 (2006): 200–205. online
- Piketty, Thomas, and Emmanuel Saez. "Income inequality in the United States, 1913–1998." Quarterly journal of economics 118.1 (2003): 1-41. online
- Saito, Osamu. "Growth and inequality in the great and little divergence debate: a Japanese perspective." Economic History Review 68.2 (2015): 399–419. Covers 1600–1868 with comparison to Stuart England and Mughal India.
- Scheidel, Walter (2017). The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century. Princeton: Princeton University Press. ISBN 978-0691165028.
- Stewart, Frances. "Changing perspectives on inequality and development." Studies in Comparative International Development 51.1 (2016): 60–80. covers 1801 to 2016.
- Sutch, Richard. "The One Percent across Two Centuries: A Replication of Thomas Piketty's Data on the Concentration of Wealth in the United States." Social Science History 41.4 (2017): 587–613. Strongly rejects all Piketty's estimates for US inequality before 1910 for both top 1% and top 10%. online
- Van Zanden, Jan Luiten. "Tracing the beginning of the Kuznets curve: Western Europe during the early modern period." Economic History Review 48.4 (1995): 643–664. covers 1400 to 1800.
- Wei, Yehua Dennis. "Geography of inequality in Asia." Geographical Review 107.2 (2017): 263–275. covers 1981 to 2015.
External links
- Bowles, Samuel; Carlin, Wendy (2020). "Inequality as experienced difference: A reformulation of the Gini coefficient". Economics Letters. 186: 108789. doi:10.1016/j.econlet.2019.108789. ISSN 0165-1765.
https://en.wikipedia.org/wiki/Economic_inequality
Downward harmonization is an econo-political term describing the act of adapting the trade laws of a country with an established economy "downward" to those of a country with a developing economy. This "harmonizing" may affect labor laws, human rights laws, minimum wages, industry standards, quality control, anti-terrorism measures, etc.
General information
Downward harmonization refers to the process of reducing regulatory requirements or standards in order to bring them into alignment with the lowest common denominator. This may be done in order to facilitate trade or cooperation between countries or regions with different regulatory regimes, or to reduce the costs or burdens associated with complying with regulations.
Downward harmonization can take many forms, and may involve the reduction of standards for consumer protection, environmental protection, labor standards, or other areas. It may be driven by a desire to lower costs for businesses or to reduce barriers to trade.
Overall, downward harmonization is an important aspect of global trade and economic cooperation, and its impact on regulatory standards, and on the costs and benefits of trade and cooperation, remains an area of ongoing debate and discussion.
References
Kenneth A. Reinert; Ramkishen S. Rajan; Amy Joycelyn Glass; Lewis S. Davis (2 August 2010). The Princeton Encyclopedia of the World Economy. (Two volume set). Princeton University Press. p. 1006. ISBN 1-4008-3040-0.
https://en.wikipedia.org/wiki/Downward_harmonization
Uneconomic growth is economic growth that reflects or creates a decline in the quality of life. The concept is used in human development theory, welfare theory, and ecological economics. It is usually attributed to ecological economist Herman Daly, though other theorists may also be credited for the incipient idea.[1][2] According to Daly, "uneconomic growth occurs when increases in production come at an expense in resources and well-being that is worth more than the items made."[3] The cost, or decline in well-being, associated with extended economic growth is argued to arise as a result of "the social and environmental sacrifices made necessary by that growing encroachment on the eco-system."[4][5]
Types of growth
The rate or type of economic growth may have important consequences for the environment (the climate and natural capital of ecologies). Concerns about possible negative effects of growth on the environment and society have led some to advocate lower levels of growth, from which comes the idea of uneconomic growth and Green parties which argue that economies are part of a global society and a global ecology and cannot outstrip their natural growth without damaging them.
Canadian scientist David Suzuki argued in the 1990s that ecologies can only sustain typically about 1.5–3% new growth per year, and thus any requirement for greater returns from agriculture or forestry will necessarily cannibalize the natural capital of soil or forest. Some think this argument can be applied even to more developed economies.
The role of technology, and Jevons paradox
Mainstream economists would argue that economies are driven by new technology: we have faster computers today than a year ago, for instance, but not necessarily more of them[citation needed]. Growth that relies entirely on exploiting increased knowledge rather than increased resource consumption may thus not qualify as uneconomic growth. This may hold where technology allows the same unit of product to be made with fewer inputs, or with less (or less hazardous) waste per unit; for example, the delivery of movies electronically over the Internet or cable television may reduce the demand for physical video tapes or DVDs. Nonetheless, innovation- or knowledge-driven growth may still not resolve the problem of scale, or rising resource consumption: there may well be more computers overall, owing to greater demand and the replacement of slower machines.
The Jevons paradox is the proposition that technological progress which increases the efficiency with which a resource is used tends to increase, rather than decrease, the rate of consumption of that resource.[6][7] For example, assuming expenditure on necessities and taxes remains the same: (i) energy-saving lightbulbs may lower a household's electricity usage and bills, but this frees up more discretionary income for additional consumption elsewhere (an example of the "rebound effect");[8][9] and (ii) technology (or globalisation) that makes goods cheaper for consumers likewise frees up discretionary income for increased consumptive spending.
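The rebound mechanism described above can be sketched as a toy calculation. All numbers and the function below are hypothetical, chosen only to show how re-spent savings can offset an efficiency gain; they are not drawn from the sources cited:

```python
# Toy illustration of the rebound effect (hypothetical numbers).
def total_energy(lighting_cost, lighting_energy,
                 efficiency_gain, respend_share, energy_intensity):
    """Household energy use after an efficiency improvement.

    efficiency_gain : fraction of lighting energy (and cost) saved
    respend_share   : fraction of the money saved that is re-spent
    energy_intensity: energy embodied per unit of money re-spent
    """
    new_lighting = lighting_energy * (1 - efficiency_gain)   # direct saving
    money_saved = lighting_cost * efficiency_gain            # freed-up income
    rebound = money_saved * respend_share * energy_intensity # clawed back
    return new_lighting + rebound

# A 50% efficiency gain alone would halve lighting energy (100 -> 50),
# but if 80% of the savings is re-spent on goods embodying 0.8 units of
# energy per unit of money, 32 units are "clawed back".
print(total_energy(lighting_cost=100, lighting_energy=100.0,
                   efficiency_gain=0.5, respend_share=0.8,
                   energy_intensity=0.8))  # 82.0, not 50.0
```

In this toy setup, if all savings are re-spent and the energy intensity of the re-spent income exceeds 1, total consumption rises above its original level, the "backfire" case of the paradox.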
On the other hand, new renewable energy and climate change mitigation technology (such as artificial photosynthesis) has been argued to promote a prolonged era of human stewardship over ecosystems known as the Sustainocene. In the Sustainocene, "instead of the cargo-cult ideology of perpetual economic growth through corporate pillage of nature, globalised artificial photosynthesis will facilitate a steady state economy and further technological revolutions such as domestic nano-factories and e-democratic input to local communal and global governance structures. In such a world, humans will no longer feel economically threatened, but rather proud, that their moral growth has allowed them to uphold Rights of Nature."[10]
See also
References
- Thomas Faunce, 'Artificial Photosynthesis Could Extend Rights to Nature', The Conversation, 2 July 2013. https://theconversation.com/artificial-photosynthesis-could-extend-rights-to-nature-15380 (accessed 2 July 2013).
Further reading
- Baker, Linda (May–June 1999). "Real Wealth: The Genuine Progress Indicator Could Provide an Environmental Measure of the Planet's Health". E Magazine: 37–41.
- Cobb, Clifford; Ted Halstead; Jonathan Rowe (October 1995). "If the GDP Is Up, Why Is America Down?". Atlantic Monthly: 59–78.
- Takis Fotopoulos, "The Multidimensional Crisis and Inclusive Democracy", Athens 2005. English online version: [1]
- Rowe, Jonathan; Judith Silverstein (March 1999). "The GDP Myth: Why 'Growth' Isn't Always a Good Thing". Washington Monthly: 17–21.
- Rowe, Jonathan (July–August 1999). "The Growth Consensus Unravels". Dollars & Sense: 15–18, 33.
External links
https://en.wikipedia.org/wiki/Uneconomic_growth
A tax hell is generally used to refer to a country or place with very high rates of taxation.[1][2]
In some definitions, tax hell also means an oppressive or otherwise onerous tax bureaucracy.[3][4][5]
In some cases, effective tax pressure is difficult to measure for comparison.[6]
Countries frequently described as tax hells include socialist and formerly communist states such as Belarus, Venezuela, Argentina, Nicaragua, Bolivia and Haiti, as well as France.[7][8][9]
See also
References
Wisconsin is still a tax hell. Here's why. To begin with, even accepting the findings above — and I have great respect for the Wisconsin Taxpayers Alliance — ranking 15th out of 50 states in tax burden gives us no reason to brag. It still means that 35 states are more competitive than Wisconsin.
Ceteris paribus, they prefer to reside in countries with large welfare programs financed by substantial taxation which we call tax hells for obvious reasons.
Six years ago, June and Ron Speltz got caught by the alternative minimum tax, which triggered a tax bill of more than $260,000 on income they'd never see. Their fight to change the law finally paid off.
Even though the IRS validated that I had done everything correctly, the experience completely changed how I look at buying things I need for my business. I always ask myself: Will this be questioned? I have a heightened sense of the IRS being involved in my business.
Your home-based activity can be a business for tax purposes only if you can show that you are engaged in it to earn a profit, not simply to have fun or pursue a personal interest. If you can't prove a profit motive for the activity, you will be considered a hobbyist and forced to enter tax hell. The IRS has established two tests to determine whether someone has a profit motive. One is a simple mechanical test that looks at whether you have earned a profit in three of the last five years. The other is a more complex test designed to determine whether you act like you want to earn a profit.
Another hellish idea, which serves to justify and legitimise these tax hikes, is that tax pressure in Spain is low compared with other countries in Europe. Something which is only apparently true. Spain's fiscal pressure is 34.6%, while in Germany it's 40%, 48% in France and 43.5% in Italy. But the problem is that while the fiscal pressure in northern European countries is more or less evenly distributed, this is not the case here.
- Goudron, Claude (2022-06-05). "Les paradis fiscaux existent car la France est un enfer fiscal". Contrepoints (in French). Retrieved 2022-11-06.
https://en.wikipedia.org/wiki/Tax_hell
Regulatory competition, also called competitive governance or policy competition, is a phenomenon in law, economics and politics concerning the desire of lawmakers to compete with one another in the kinds of law offered in order to attract businesses or other actors to operate in their jurisdiction. Regulatory competition depends upon the ability of actors such as companies, workers or other kinds of people to move between two or more separate legal systems. Once this is possible, the temptation arises for the people running those different legal systems to compete to offer better terms than their "competitors" to attract investment. Historically, regulatory competition has operated within countries with federal systems of regulation, particularly the United States; since the mid-20th century and the intensification of economic globalisation, however, it has also become an important issue internationally.
One opinion is that regulatory competition in fact creates a "race to the top" in standards, due to the ability of different actors to select the most efficient rules by which to be governed. The main fields of law affected by the phenomenon of regulatory competition are corporate law, labour law, tax and environmental law. Another opinion is that regulatory competition between jurisdictions creates a "race to the bottom" in standards, due to the decreased ability of any jurisdiction to enforce standards without the cost of driving investment abroad.
History
The concept of regulatory competition emerged from the late 19th and early 20th century experience of charter competition among US states to attract corporations to domicile in their jurisdiction. In 1890 New Jersey enacted a liberal corporation charter law, which charged low fees for company registration and lower franchise taxes than other states. Delaware copied the law to attract companies to its own state. The competition ended when Woodrow Wilson, as Governor, tightened New Jersey's laws again through a series of seven statutes.
In the academic literature, the argument that regulatory competition reduces standards overall was advanced by AA Berle and GC Means in The Modern Corporation and Private Property (1932), while the concept received formal recognition by the US Supreme Court in an opinion of Justice Louis Brandeis in the 1933 case Liggett Co. v. Lee.[1] In 1932 Brandeis also coined the term "laboratories of democracy" in New State Ice Co. v. Liebmann,[2] noting that the Federal government was capable of ending such experimentation.
Private law
Corporate law
American corporate law scholars have debated the role of regulatory competition in corporate law for more than a decade. In United States legal academia, corporate law is conventionally said to be the product of a "race" among states to attract incorporations by making their corporate laws attractive to those who choose where to incorporate. Given that it has long been possible to incorporate in one state while doing business primarily in others, US states have rarely been able or willing to use law tied to where a firm is incorporated to regulate or constrain corporations or those who run them. However, US states have long regulated corporations with other laws (e.g., environmental laws, employment laws) that are not tied to where a firm is incorporated but are based on where it does business.[citation needed]
From the "race" to attract incorporations, Delaware has emerged as the winner, at least among publicly traded corporations. The corporate franchise tax accounts for between 15% and 20% of the state's budget.[citation needed]
In Europe, regulatory competition has long been prevented by the real seat doctrine prevailing in private international law of many EU and EEA member countries, which essentially required companies to be incorporated in the state where their main office was located. However, in a series of cases between 1999 and 2003 (Centros Ltd. vs. Erhvervs- og Selskabsstyrelsen, Überseering BV v Nordic Construction Company Baumanagement GmbH, Kamer van Koophandel en Fabrieken voor Amsterdam v Inspire Art Ltd.), the European Court of Justice has forced member states to recognize companies chartered in other member states, which is likely to foster regulatory competition in European company law. For instance, in 2008, Germany adopted new regulations on the GmbH (Limited Liability Company), allowing the incorporation of Limited Liability Companies [UG (haftungsbeschränkt)] without a minimum capital of EUR 25,000 (though 25% of earnings have to be retained until this threshold is reached).
Labour law
Countries may, for instance, seek to attract foreign direct investment by enacting a lower minimum wage than other countries,[3] or by making the labor market more flexible.[4]
- International Transport Workers Federation v Viking Line ABP or The Rosella [2008] IRLR 143 (C-438/05)
Taxation
Environmental law
Legal scholars often cite environmental law as a field in which regulatory competition is particularly likely to produce a “race to the bottom” due to the externalities produced by changes in any individual state's environmental law. Because a state is unlikely to bear all of the costs associated with any environment damage caused by industries in that state, it has an incentive to lower standards below the level that would be desirable if the state were forced to bear all of the costs.[5] One commonly cited example of this effect is clean air laws, as states may be incentivized to lower their standards to attract business, knowing that the effects of the increased pollution will be spread across a wide area, and not simply localized within the state. Furthermore, a reduction in the standards of one state will incentivize other states to similarly lower their standards so as to not lose business.
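The race-to-the-bottom logic described above has the structure of a prisoner's dilemma, which can be sketched as a two-state game. The payoff numbers below are entirely hypothetical and chosen only to illustrate the incentive structure:

```python
# A minimal two-state game (hypothetical payoffs) illustrating the race to
# the bottom. Each state chooses "strict" or "lax" environmental standards;
# lax rules attract investment away from a strict neighbour, while pollution
# costs spill across the border. Payoffs are (state A, state B).
payoffs = {
    ("strict", "strict"): (3, 3),  # clean environment, investment stays put
    ("strict", "lax"):    (0, 4),  # B captures investment; A still bears spillover pollution
    ("lax",    "strict"): (4, 0),
    ("lax",    "lax"):    (1, 1),  # investment split, environment degraded
}

def best_response(opponent_choice, player):
    """Return the strategy maximizing a player's payoff given the other's choice."""
    idx = 0 if player == "A" else 1
    def payoff(own):
        key = (own, opponent_choice) if player == "A" else (opponent_choice, own)
        return payoffs[key][idx]
    return max(("strict", "lax"), key=payoff)

# Whatever the neighbour does, lowering standards pays more, so (lax, lax)
# is the equilibrium even though (strict, strict) leaves both better off.
assert best_response("strict", "A") == "lax"
assert best_response("lax", "A") == "lax"
```

Because pollution costs are externalized across jurisdictions while investment gains are captured locally, each state's dominant strategy is to lower standards, exactly as in the clean air example.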
Public services
Education
Sometimes higher-level governing bodies institute incentives to competition among lower-level governing bodies,[6] an example being the Race to the Top program, designed by the United States Department of Education to spur reforms in state and local district K-12 education.
The German Federal Ministry of Education and Research likewise has initiated a program called InnoRegio to reward innovative practices.
Health
The high degree of politicization of the genetically modified organism issue made it a key battleground for competition for leadership, particularly between the European Commission and the European Council of Ministers. The result has been a protracted battle over agenda setting and issue framing and a cycle of competitive regulatory reinforcement.[7]
Security
The struggle between insurgents and various Afghanistan states for power, control, popular support and legitimacy in the eyes of the public has been described as competitive governance.[8]
While during the Cold War security was provided by centralized institutions such as NATO and the Warsaw Pact, now competing profit-seeking firms provide personal, national, and international security.[9]
Theory
Arnold Kling notes, "In democratic government, people take jurisdictions as given, and they elect leaders. In competitive government, people take leaders as given, and they select jurisdictions."[10] Competitive governance has thus far not produced an ultra-libertarian government; although Zac Gochenour has pointed out the role of potential international migrants' switching costs in hindering consumer choice from creating greater intergovernmental competition, Bryan Caplan has stated that "[t]he bigger problem is that almost all existing governments are either non-profits (the democracies), have short time horizons (the unstable dictatorships), or reasonably worry that if they liberalize they'll lose power (the stable dictatorships)." Indeed, Maria Brouwer has argued that most autocracies prefer stagnation to the vagaries inherent in expansion and other forms of innovation, since the exploration of new possibilities could lead to failure, which would undermine autocratic authority.[11] There has been some question as to whether competitive governance can be revived in Australia.[12]
- Advantages
Brennan and Buchanan (1980) argue that the public sector is a 'Leviathan' which is inherently biased towards extracting money from taxpayers, but that competitive government structures can minimize such exploitation.[13] It has also been argued that a decentralized competitive government structure allows for an experimentation of new public policies without doing too much harm if they fail.[14]
- Disadvantages
An alternative to market-based or competitive governance is civic-based or partnership governance.[15] Alleged disadvantages of competitive governance, compared to collaborative government, include less potential to harness the power of knowledge sharing, cooperation and collaboration within government.[16]
See also
- International economics
- International law
- Charter city
- Corporate haven
- Tax haven
- Seasteading
- Jurisdictional arbitrage
- Indices of economic freedom
- Lists of countries by GDP per capita
Notes
- F Salem; Y Jarrar (2009), Government 2.0? Technology, Trust and Collaboration in the UAE Public Sector (PDF), archived from the original (PDF) on 2011-07-25, retrieved 2010-08-22
References
- General
- RL Revesz, 'Federalism and Regulation: Some Generalizations' in DC Esty and D Geradin, Regulatory Competition and Economic Integration: Comparative Perspectives (New York, OUP 2001) 3-27
- M Carlberg, Policy Competition and Policy Cooperation in a Monetary Union (1990) ISBN 3-540-20914-X
- T Besley (2001). "Political institutions and policy competition". CiteSeerX 10.1.1.142.4669.
- J Brettschneider, Das Herkunftslandprinzip und mögliche Alternativen aus ökonomischer Sicht, Auswirkungen auf und Bedeutung für den Systemwettbewerb (Berlin, Duncker & Humblot 2015) ISBN 3428144635
- Corporate law
- AA Berle and GC Means, The Modern Corporation and Private Property (1932)
- WL Cary, 'Federalism and Corporate Law: Reflections upon Delaware' (1974) 83 Yale Law Journal 663
- E von Halle, Trusts, or, Industrial Combinations and Coalitions in the United States (1896)
- C Grandy, 'New Jersey Chartermongering 1875-1929' (1989) 49(3) The Journal of Economic History 677
- K Kocaoglu, 'A Comparative Bibliography: Regulatory Competition on Corporate Law' (2008) Georgetown University Law Center Working Paper
- CM Yablon, 'The historical race competition for corporate charters and the rise and decline of New Jersey: 1880-1910' (2007) The Journal of Corporation Law
- Labour law
- S Deakin, 'Regulatory Competition after Laval' (2008) 10 Cambridge Yearbook of European Legal Studies 581
External links
https://en.wikipedia.org/wiki/Regulatory_competition
A post-industrial economy is a period of growth within an industrialized economy or nation in which the relative importance of manufacturing declines while that of services, information, and research grows.[1]
Such economies are often marked by a declining manufacturing sector, resulting in de-industrialization, and by a large service sector together with expanding information technology, often leading to an "information age"; information, knowledge, and creativity are the new raw materials of such an economy. The manufacturing side of a post-industrial economy is shifted through outsourcing to less developed nations, which produce the needed goods at lower cost. This pattern is typical of nations that industrialized early, such as the United Kingdom (the first industrialised nation), most of Western Europe, and the United States.
See also
References
- Krahn, Harvey J.; Lowe, Graham S.; Hughes, Karen D. (2008). Work, Industry, and Canadian Society (6th ed.). Toronto, ON: Nelson Education. pp. 26–27. ISBN 9780176501136.
https://en.wikipedia.org/wiki/Post-industrial_economy