Blog Archive

Wednesday, September 22, 2021

09-22-2021-0800 - Instrumental and intrinsic value

Instrumental and intrinsic value

 

Max Weber

The classic names instrumental and intrinsic were coined by sociologist Max Weber, who spent years studying the meanings people assigned to their actions and beliefs. According to Weber, "[s]ocial action, like all action, may be" judged as:[4]: 24–5 

  1. Instrumental rational (zweckrational): action "determined by expectations as to the behavior of objects in the environment of other human beings; these expectations are used as 'conditions' or 'means' for the attainment of the actor's own rationally pursued and calculated ends."
  2. Value-rational (wertrational): action "determined by a conscious belief in the value for its own sake of some ethical, aesthetic, religious, or other form of behavior, independently of its prospects of success."

Weber's original definitions also include a comment showing his doubt that conditionally efficient means can achieve unconditionally legitimate ends:[4]: 399–400 

[T]he more the value to which action is oriented is elevated to the status of an absolute [intrinsic] value, the more "irrational" in this [instrumental] sense the corresponding action is. For the more unconditionally the actor devotes himself to this value for its own sake…the less he is influenced by considerations of the [conditional] consequences of his action.

John Dewey

John Dewey thought that belief in intrinsic value was a mistake. Although the application of instrumental value is easily contaminated, it is the only means humans have to coordinate group behaviour efficiently and legitimately.

Every social transaction has good or bad consequences depending on prevailing conditions, which may or may not be satisfied. Continuous reasoning adjusts institutions to keep them working on the right track as conditions change. Changing conditions demand changing judgments to maintain efficient and legitimate correlation of behavior.[5]

For Dewey, "restoring integration and cooperation between man's beliefs about the world in which he lives and his beliefs about the values [valuations] and purposes that should direct his conduct is the deepest problem of modern life."[6]: 255  Moreover, a "culture which permits science to destroy traditional values [valuations] but which distrusts its power to create new ones is a culture which is destroying itself."[7]

Dewey agreed with Max Weber that people talk as if they apply instrumental and intrinsic criteria. He also agreed with Weber's observation that intrinsic value is problematic in that it ignores the relationship between context and consequences of beliefs and behaviors. Both men questioned how anything valued intrinsically "for its own sake" can have operationally efficient consequences. However, Dewey rejects the common belief—shared by Weber—that supernatural intrinsic value is necessary to show humans what is permanently "right." He argues that both efficient and legitimate qualities must be discovered in daily life:

Man who lives in a world of hazards…has sought to attain [security] in two ways. One of them began with an attempt to propitiate the [intrinsic] powers which environ him and determine his destiny. It expressed itself in supplication, sacrifice, ceremonial rite and magical cult.… The other course is to invent [instrumental] arts and by their means turn the powers of nature to account.…[6]: 3  [F]or over two thousand years, the…most influential and authoritatively orthodox tradition…has been devoted to the problem of a purely cognitive certification (perhaps by revelation, perhaps by intuition, perhaps by reason) of the antecedent immutable reality of truth, beauty, and goodness.… The crisis in contemporary culture, the confusions and conflicts in it, arise from a division of authority. Scientific [instrumental] inquiry seems to tell one thing, and traditional beliefs [intrinsic valuations] about ends and ideals that have authority over conduct tell us something quite different.… As long as the notion persists that knowledge is a disclosure of [intrinsic] reality…prior to and independent of knowing, and that knowing is independent of a purpose to control the quality of experienced objects, the failure of natural science to disclose significant values [valuations] in its objects will come as a shock.[6]: 43–4 

Finding no evidence of "antecedent immutable reality of truth, beauty, and goodness," Dewey argues that both efficient and legitimate goods are discovered in the continuity of human experience:[6]: 114, 172–3, 197 

Dewey's ethics replaces the goal of identifying an ultimate end or supreme principle that can serve as a criterion of ethical evaluation with the goal of identifying a method for improving our value judgments. Dewey argued that ethical inquiry is of a piece with empirical inquiry more generally.… This pragmatic approach requires that we locate the conditions of warrant for our value judgments in human conduct itself, not in any a priori fixed reference point outside of conduct, such as in God's commands, Platonic Forms, pure reason, or "nature," considered as giving humans a fixed telos [intrinsic end].[8]

Philosophers label a "fixed reference point outside of conduct" a "natural kind," and presume it to have eternal existence knowable in itself without being experienced. Natural kinds are intrinsic valuations presumed to be "mind-independent" and "theory-independent."[9]

Dewey grants the existence of "reality" apart from human experience, but denies that it is structured as intrinsically real natural kinds.[6]: 122, 196  Instead, he sees reality as a functional continuity of ways-of-acting, rather than as interaction among pre-structured intrinsic kinds. Humans may intuit static kinds and qualities, but such private experience cannot warrant inferences or valuations about mind-independent reality. Reports or maps of perceptions or intuitions are never equivalent to the territories mapped.[10]

People reason daily about what they ought to do and how they ought to do it. Inductively, they discover sequences of efficient means that achieve consequences. Once an end is reached—a problem solved—reasoning turns to new conditions of means-end relations. Valuations that ignore consequence-determining conditions cannot coordinate behavior to solve real problems; they contaminate rationality.

Value judgments have the form: if one acted in a particular way (or valued this object), then certain consequences would ensue, which would be valued. The difference between an apparent and a real good [means or end], between an unreflectively and a reflectively valued good, is captured by its value [valuation of goodness] not just as immediately experienced in isolation, but in view of its wider consequences and how they are valued.… So viewed, value judgments are tools for discovering how to live a better life, just as scientific hypotheses are tools for uncovering new information about the world.[8]

In brief, Dewey rejects the traditional belief that judging things as good in themselves, apart from existing means-end relations, can be rational. The sole rational criterion is instrumental value. Each valuation is conditional but, cumulatively, all are developmental—and therefore socially-legitimate solutions of problems. Competent instrumental valuations treat the "function of consequences as necessary tests of the validity of propositions, provided these consequences are operationally instituted and are such as to resolve the specific problems evoking the operations."[11][12]: 29–31 

J. Fagg Foster

John Fagg Foster made John Dewey's rejection of intrinsic value more operational by showing that its competent use rejects the legitimacy of utilitarian ends—satisfaction of whatever ends individuals adopt. It requires recognizing developmental sequences of means and ends.[13][14][15]: 40–8 

Utilitarians hold that individual wants cannot be rationally justified; they are intrinsically worthy subjective valuations and cannot be judged instrumentally. This belief supports philosophers who hold that facts ("what is") can serve as instrumental means for achieving ends, but cannot authorize ends ("what ought to be"). This fact-value distinction creates what philosophers label the is-ought problem: wants are intrinsically fact-free, good in themselves; whereas efficient tools are valuation-free, usable for good or bad ends.[15]: 60  In modern North-American culture, this utilitarian belief supports the libertarian assertion that every individual's intrinsic right to satisfy wants makes it illegitimate for anyone—but especially governments—to tell people what they ought to do.[16]

Foster finds that the is-ought problem is a useful place to attack the irrational separation of good means from good ends. He argues that want-satisfaction ("what ought to be") cannot serve as an intrinsic moral compass because 'wants' are themselves consequences of transient conditions.

[T]he things people want are a function of their social experience, and that is carried on through structural institutions that specify their activities and attitudes. Thus the pattern of people's wants takes visible form partly as a result of the pattern of the institutional structure through which they participate in the economic process. As we have seen, to say that an economic problem exists is to say that part of the particular patterns of human relationships has ceased or failed to provide the effective participation of its members. In so saying, we are necessarily in the position of asserting that the instrumental efficiency of the economic process is the criterion of judgment in terms of which, and only in terms of which, we may resolve economic problems.[17]

Since 'wants' are shaped by social conditions, they must be judged instrumentally; they arise in problematic situations when habitual patterns of behavior fail to maintain instrumental correlations.[15]: 27 

Examples

Foster uses homely examples to support his thesis that problematic situations ("what is") contain the means for judging legitimate ends ("what ought to be"). Rational efficient means achieve rational developmental ends. Consider the problem all infants face learning to walk. They spontaneously recognize that walking is more efficient than crawling—an instrumental valuation of a desirable end. They learn to walk by repeatedly moving and balancing, judging the efficiency with which these means achieve their instrumental goal. When they master this new way-of-acting, they experience great satisfaction, but satisfaction is never their end-in-view.[18]

Revised definition of 'instrumental value'

To guard against contamination of instrumental value by judging means and ends independently, Foster revised his definition to embrace both.

Instrumental value is the criterion of judgment which seeks instrumentally-efficient means that "work" to achieve developmentally-continuous ends. This definition stresses the condition that instrumental success is never short term; it must not lead down a dead-end street. The same point is made by the currently popular concern for sustainability—a synonym for instrumental value.[19]

Dewey's and Foster's argument that there is no intrinsic alternative to instrumental value continues to be ignored rather than refuted. Scholars continue to accept the possibility and necessity of knowing "what ought to be" independently of transient conditions that determine actual consequences of every action. Jacques Ellul and Anjan Chakravartty were prominent exponents of the truth and reality of intrinsic value as a constraint on relativistic instrumental value.

Jacques Ellul

Jacques Ellul made scholarly contributions to many fields, but his American reputation grew out of his criticism of the autonomous authority of instrumental value, the criterion that John Dewey and J. Fagg Foster found to be the core of human rationality. He specifically criticized the valuations central to Dewey's and Foster's thesis: evolving instrumental technology.

His principal work, published in 1954 under the French title La technique, tackles the problem that Dewey addressed in 1929: a culture in which the authority of evolving technology destroys traditional valuations without creating legitimate new ones. Both men agree that conditionally-efficient valuations ("what is") become irrational when viewed as unconditionally efficient in themselves ("what ought to be"). However, while Dewey argues that contaminated instrumental valuations can be self-correcting, Ellul concludes that technology has become intrinsically destructive. The only escape from this evil is to restore authority to unconditional sacred valuations:[20]: 143 

Nothing belongs any longer to the realm of the gods or the supernatural. The individual who lives in the technical milieu knows very well that there is nothing spiritual anywhere. But man cannot live without the [intrinsic] sacred. He therefore transfers his sense of the sacred to the very thing which has destroyed its former object: to technique itself.

The English edition of La technique was published in 1964, titled The Technological Society, and quickly entered ongoing disputes in the United States over the responsibility of instrumental value for destructive social consequences. The translator of Technological Society summarizes Ellul's thesis:[21]

Technological Society is a description of the way in which an autonomous [instrumental] technology is in process of taking over the traditional values [intrinsic valuations] of every society without exception, subverting and suppressing those values to produce at last a monolithic world culture in which all non-technological difference and variety is mere appearance.

Ellul opens The Technological Society by asserting that instrumental efficiency is no longer a conditional criterion. It has become autonomous and absolute:[20]: xxxvi 

The term technique, as I use it, does not mean machines, technology, or this or that procedure for attaining an end. In our technological society, technique is the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.

He blames instrumental valuations for destroying intrinsic meanings of human life: "Think of our dehumanized factories, our unsatisfied senses, our working women, our estrangement from nature. Life in such an environment has no meaning."[20]: 4–5  While Weber had labeled the discrediting of intrinsic valuations as disenchantment, Ellul came to label it as "terrorism."[22]: 384, 19  He dates its domination to the 1800s, when centuries-old handicraft techniques were massively eliminated by inhuman industry.

When, in the 19th century, society began to elaborate an exclusively rational technique which acknowledged only considerations of efficiency, it was felt that not only the traditions but the deepest instincts of humankind had been violated.[20]: 73  Culture is necessarily humanistic or it does not exist at all.… [I]t answers questions about the meaning of life, the possibility of reunion with ultimate being, the attempt to overcome human finitude, and all other questions that they have to ask and handle. But technique cannot deal with such things.… Culture exists only if it raises the question of meaning and values [valuations].… Technique is not at all concerned about the meaning of life, and it rejects any relation to values [intrinsic valuations].[22]: 147–8 

Ellul's core accusation is that instrumental efficiency has become absolute, i.e., a good-in-itself;[20]: 83  it wraps societies in a new technological milieu with six intrinsically inhuman characteristics:[3]: 22 

  1. artificiality;
  2. autonomy, "with respect to values [valuations], ideas, and the state;"
  3. self-determinative, independent "of all human intervention;"
  4. "It grows according to a process which is causal but not directed to [good] ends;"
  5. "It is formed by an accumulation of means which have established primacy over ends;"
  6. "All its parts are mutually implicated to such a degree that it is impossible to separate them or to settle any technical problems in isolation."

Criticism

Philosophers Tiles and Oberdiek (1995) find Ellul's characterization of instrumental value inaccurate.[3]: 22–31  They criticize him for anthropomorphizing and demonizing instrumental value. They counter this by examining the moral reasoning of scientists whose work led to nuclear weapons: those scientists demonstrated the capacity of instrumental judgments to provide them with a moral compass to judge nuclear technology; they were morally responsible without intrinsic rules. Tiles and Oberdiek's conclusion coincides with that of Dewey and Foster: instrumental value, when competently applied, is self-correcting and provides humans with a developmental moral compass.

For although we have defended general principles of the moral responsibilities of professional people, it would be foolish and wrongheaded to suggest codified [intrinsic] rules. It would be foolish because concrete cases are more complex and nuanced than any code could capture; it would be wrongheaded because it would suggest that our sense of moral responsibility can be fully captured by a code.[3]: 193  In fact, as we have seen in many instances, technology simply allows us to go on doing stupid things in clever ways. The questions that technology cannot solve, although it will always frame and condition the answers, are "What should we be trying to do? What kind of lives should we, as human beings, be seeking to live? And can this kind of life be pursued without exploiting others?" But until we can at least propose [instrumental] answers to those questions we cannot really begin to do sensible things in the clever ways that technology might permit.[3]: 197 

Semirealism (Anjan Chakravartty)

Anjan Chakravartty came indirectly to question the autonomous authority of instrumental value, viewing it as a foil for the currently dominant philosophical school labeled "scientific realism," with which he identifies. In 2007, he published a work defending the ultimate authority of the intrinsic valuations to which realists are committed. He links the pragmatic instrumental criterion to discredited anti-realist empiricist schools, including logical positivism and instrumentalism.

Chakravartty began his study with rough characterizations of realist and anti-realist valuations of theories. Anti-realists believe "that theories are merely instruments for predicting observable phenomena or systematizing observation reports;" they assert that theories can never report or prescribe truth or reality "in itself." By contrast, scientific realists believe that theories can "correctly describe both observable and unobservable parts of the world."[23]: xi, 10  Well-confirmed theories—"what ought to be" as the end of reasoning—are more than tools; they are maps of intrinsic properties of an unobservable and unconditional territory—"what is" as natural-but-metaphysical real kinds.[23]: xiii, 33, 149 

Chakravartty treats criteria of judgment as ungrounded opinion, but admits that realists apply the instrumental criterion to judge theories that "work."[23]: 25  He restricts that criterion's scope, claiming that every instrumental judgment is inductive, heuristic, and accidental. Later experience might confirm a singular judgment only if it proves to have universal validity, meaning it possesses "detection properties" of natural kinds.[23]: 231  This inference is his fundamental ground for believing in intrinsic value.

He commits modern realists to three metaphysical valuations or intrinsic kinds of knowledge of truth. Competent realists affirm that natural kinds exist in a mind-independent territory possessing 1) meaningful and 2) mappable intrinsic properties.

Ontologically, scientific realism is committed to the existence of a mind-independent world or reality. A realist semantics implies that the theoretical claims [valuations] about this reality have truth values, and should be construed literally.… Finally, the epistemological commitment is to the idea that these theoretical claims give us knowledge of the world. That is, predictively successful (mature, non-ad hoc) theories, taken literally as describing the nature of a mind-independent reality are (approximately) true.[23]: 9 

He labels these intrinsic valuations as semi-realist, meaning they are currently the most accurate theoretical descriptions of mind-independent natural kinds. He finds these carefully qualified statements necessary to replace earlier realist claims of intrinsic reality discredited by advancing instrumental valuations. Science has destroyed for many people the supernatural intrinsic value embraced by Weber and Ellul. But Chakravartty defends intrinsic valuations as necessary elements of all science—belief in unobservable continuities. He advances the thesis of semi-realism, according to which well-tested theories are good maps of natural kinds, as confirmed by their instrumental success; their predictive success means they conform to mind-independent, unconditional reality.

Causal properties are the fulcrum of semirealism. Their [intrinsic] relations compose the concrete structures that are the primary subject matters of a tenable scientific realism. They regularly cohere to form interesting units, and these groupings make up the particulars investigated by the sciences and described by scientific theories.[23]: 119  Scientific theories describe [intrinsic] causal properties, concrete structures, and particulars such as objects, events, and processes. Semirealism maintains that under certain conditions it is reasonable for realists to believe that the best of these descriptions tell us not merely about things that can be experienced with the unaided senses, but also about some of the unobservable things underlying them.[23]: 151 

Chakravartty argues that these semirealist valuations legitimize scientific theorizing about pragmatic kinds. The fact that theoretical kinds are frequently replaced does not mean that mind-independent reality is changing, but simply that theoretical maps are approximating intrinsic reality.

The primary motivation for thinking that there are such things as natural kinds is the idea that carving nature according to its own divisions yields groups of objects that are capable of supporting successful inductive generalizations and prediction. So the story goes, one's recognition of natural categories facilitates these practices, and thus furnishes an excellent explanation for their success.[23]: 151  The moral here is that however realists choose to construct particulars out of instances of properties, they do so on the basis of a belief in the [mind-independent] existence of those properties. That is the bedrock of realism. Property instances lend themselves to different forms of packaging [instrumental valuations], but as a feature of scientific description, this does not compromise realism with respect to the relevant [intrinsic] packages.[23]: 81 

In sum, Chakravartty argues that contingent instrumental valuations are warranted only as they approximate unchanging intrinsic valuations. Scholars continue to perfect their explanations of intrinsic value, even as they deny the developmental continuity of applications of instrumental value.

Abstraction is a process in which only some of the potentially many relevant factors present in [unobservable] reality are represented in a model or description with some aspect of the world, such as the nature or behavior of a specific object or process. ... Pragmatic constraints such as these play a role in shaping how scientific investigations are conducted, and together which and how many potentially relevant factors [intrinsic kinds] are incorporated into models and descriptions during the process of abstraction. The role of pragmatic constraints, however, does not undermine the idea that putative representations of factors composing abstract models can be thought to have counterparts in the [mind-independent] world.[23]: 191 

Realist intrinsic value, as proposed by Chakravartty, is widely endorsed in modern scientific circles, while the supernatural intrinsic value endorsed by Max Weber and Jacques Ellul maintains its popularity throughout the world. Doubters about the reality of instrumental and intrinsic value are few.

https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value

"Instrumental" and "value-rational action" are terms scholars use to identify two kinds of behavior that humans can engage in. Scholars call using means that "work" as tools, instrumental action, and pursuing ends that are "right" as legitimate ends, value-rational action.

These terms were coined by sociologist Max Weber, who observed people attaching subjective meanings to their actions. Acts people treated as conditional means he labeled "instrumentally rational." Acts people treated as unconditional ends he labeled "value-rational." He found everyone acting for both kinds of reasons, but justifying individual acts by one reason or the other.

Here are Weber's original definitions, followed by a comment showing his doubt that ends considered unconditionally right can be achieved by means considered to be conditionally efficient. An action may be:

instrumentally rational (zweckrational), that is, determined by expectations as to the behavior of objects in the environment of other human beings; these expectations are used as "conditions" or "means" for the attainment of the actor's own rationally pursued and calculated ends;

value-rational (wertrational), that is, determined by a conscious belief in the value for its own sake of some ethical, aesthetic, religious, or other form of behavior, independently of its prospects of success;

[1]: 24–5 

... the more the value to which action is oriented is elevated to the status of an absolute [intrinsic] value, the more "irrational" in this [instrumental] sense the corresponding action is. For the more unconditionally the actor devotes himself to this value for its own sake, ... the less he is influenced by considerations of the consequences of his action.[1]: 26, 399–400 

https://en.wikipedia.org/wiki/Instrumental_and_value-rational_action

In social science, disenchantment (German: Entzauberung) is the cultural rationalization and devaluation of religion apparent in modern society. The term was borrowed from Friedrich Schiller by Max Weber to describe the character of a modernized, bureaucratic, secularized Western society.[1] In Western society, according to Weber, scientific understanding is more highly valued than belief, and processes are oriented toward rational goals, as opposed to traditional society, in which "the world remains a great enchanted garden".[2]
https://en.wikipedia.org/wiki/Disenchantment

Secularism is the principle of seeking to conduct human affairs based on secular, naturalistic considerations. It is most commonly defined as the separation of religion from civic affairs and the state, and may be broadened to a similar position concerning the need to remove or minimize the role of religion in any public sphere.[1] The term has a broad range of meanings, and in the most schematic, may encapsulate any stance that promotes the secular in any given context.[2][3] It may connote anticlericalism, atheism, naturalism, or removal of religious symbols from public institutions.[4]

As a philosophy, secularism seeks to interpret life based on principles derived solely from the material world, without recourse to religion. It shifts the focus from religion towards "temporal" and material concerns.[5]

There are distinct traditions of secularism in the West, like the French, Turkish and Anglo-American models, and beyond, as in India,[4] where the emphasis is more on equality before law and state neutrality rather than blanket separation. The purposes and arguments in support of secularism vary widely, ranging from assertions that it is a crucial element of modernization, or that religion and traditional values are backward and divisive, to the claim that it is the only guarantor of free religious exercise.

https://en.wikipedia.org/wiki/Secularism

Secularity, also the secular or secularness (from Latin Saeculum, "worldly" or "of a generation"), is the state of being unrelated or neutral in regards to religion and irreligion. Anything that does not have an explicit reference to religion, either negatively or positively, may be considered secular.[1] The process in which things become secular or more so is named secularization, and any concept or ideology promoting the secular may be termed secularism.
https://en.wikipedia.org/wiki/Secularity

State atheism is the incorporation of positive atheism or non-theism into political regimes.[27] It may also refer to large-scale secularization attempts by governments.[28] It is a form of religion-state relationship that is usually ideologically linked to irreligion and the promotion of irreligion to some extent.[29] State atheism may refer to a government's promotion of anti-clericalism, which opposes religious institutional power and influence in all aspects of public and political life, including the involvement of religion in the everyday life of the citizen.[27][30][31] In some instances, religious symbols and public practices that were once held by religion were replaced with secularized versions.[32] State atheism can also exist in a politically neutral fashion, in which case it is considered as non-secular.[27]

The majority of communist states followed similar policies from 1917 onwards.[9][28][30][33][34][35][36] The Russian Soviet Federative Socialist Republic (1917–1991) and more broadly the Soviet Union (1922–1991) had a long history of state atheism, whereby those seeking social success generally had to profess atheism and to stay away from houses of worship; this trend became especially militant during the middle of the Stalinist era, which lasted from 1929 to 1939. In Eastern Europe, countries like Belarus, Bulgaria, Estonia, Latvia, Russia, and Ukraine experienced strong state atheism policies.[34] East Germany and Czechoslovakia also had similar policies.[28] The Soviet Union attempted to suppress public religious expression over wide areas of its influence, including places such as Central Asia. Either currently or in their past, China,[28][33][36][37] North Korea,[36][37] Vietnam,[38] Cambodia,[9] and Cuba[35] are or were officially atheist.

In contrast, a secular state purports to be officially neutral in matters of religion, supporting neither religion nor irreligion.[27][39][40] In a review of 35 European states in 1980, 5 states were considered 'secular' in the sense of religious neutrality, 9 considered "atheistic", and 21 states considered "religious".[41]

https://en.wikipedia.org/wiki/State_atheism


In philosophy, moral responsibility is the status of morally deserving praise, blame, reward, or punishment for an act or omission in accordance with one's moral obligations.[1][2] Deciding what (if anything) counts as "morally obligatory" is a principal concern of ethics.

Philosophers refer to people who have moral responsibility for an action as moral agents. Agents have the capability to reflect upon their situation, to form intentions about how they will act, and then to carry out that action. The notion of free will has become an important issue in the debate on whether individuals are ever morally responsible for their actions and, if so, in what sense. Incompatibilists regard determinism as at odds with free will, whereas compatibilists think the two can coexist.

Moral responsibility does not necessarily equate to legal responsibility. A person is legally responsible for an event when a legal system is liable to penalise that person for that event. Although it may often be the case that when a person is morally responsible for an act, they are also legally responsible for it, the two states do not always coincide.

https://en.wikipedia.org/wiki/Moral_responsibility


Free will is the capacity of agents to choose between different possible courses of action unimpeded.[1][2]

Free will is closely linked to the concepts of moral responsibility, praise, guilt, sin, and other judgements which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Traditionally, only actions that are freely willed are seen as deserving credit or blame. Whether free will exists, what it is, and the implications of whether it exists or not are some of the longest-running debates of philosophy and religion. Some conceive of free will as the right to act outside of external influences or wishes.

Some conceive free will to be the capacity to make choices undetermined by past events. Determinism suggests that only one course of events is possible, which is inconsistent with a libertarian model of free will.[3] Ancient Greek philosophy identified this issue,[4] which remains a major focus of philosophical debate. The view that conceives free will as incompatible with determinism is called incompatibilism and encompasses both metaphysical libertarianism (the claim that determinism is false and thus free will is at least possible) and hard determinism (the claim that determinism is true and thus free will is not possible). Incompatibilism also encompasses hard incompatibilism, which holds not only determinism but also its negation to be incompatible with free will and thus free will to be impossible whatever the case may be regarding determinism.

In contrast, compatibilists hold that free will is compatible with determinism. Some compatibilists even hold that determinism is necessary for free will, arguing that choice involves preference for one course of action over another, requiring a sense of how choices will turn out.[5][6] Compatibilists thus consider the debate between libertarians and hard determinists over free will vs. determinism a false dilemma.[7] Different compatibilists offer very different definitions of what "free will" means and consequently find different types of constraints to be relevant to the issue. Classical compatibilists considered free will nothing more than freedom of action, considering one free of will simply if, had one counterfactually wanted to do otherwise, one could have done otherwise without physical impediment. Contemporary compatibilists instead identify free will as a psychological capacity, such as to direct one's behavior in a way responsive to reason, and there are still further different conceptions of free will, each with their own concerns, sharing only the common feature of not finding the possibility of determinism a threat to the possibility of free will.[8]

https://en.wikipedia.org/wiki/Free_will


Moral agency is an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for these actions.[1] A moral agent is "a being who is capable of acting with reference to right and wrong."[2]

https://en.wikipedia.org/wiki/Moral_agency


Self-justification describes how, when a person encounters cognitive dissonance, or a situation in which a person's behavior is inconsistent with their beliefs (hypocrisy), that person tends to justify the behavior and deny any negative feedback associated with the behavior.

https://en.wikipedia.org/wiki/Self-justification


In the field of psychology, cognitive dissonance is the perception of contradictory information. Relevant items of information include a person's actions, feelings, ideas, beliefs, and values, and things in the environment. Cognitive dissonance is typically experienced as psychological stress when a person participates in an action that goes against one or more of them.[1] According to this theory, when two actions or ideas are not psychologically consistent with each other, people do all in their power to change them until they become consistent.[1][2] The discomfort is triggered by the person's belief clashing with newly perceived information, wherein they try to find a way to resolve the contradiction and reduce their discomfort.[1][2][3]

In When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World (1956) and A Theory of Cognitive Dissonance (1957), Leon Festinger proposed that human beings strive for internal psychological consistency to function mentally in the real world.[1] A person who experiences internal inconsistency tends to become psychologically uncomfortable and is motivated to reduce the cognitive dissonance.[1][2] They tend to make changes to justify the stressful behavior, either by adding new parts to the cognition causing the psychological dissonance (rationalization) or by avoiding circumstances and contradictory information likely to increase the magnitude of the cognitive dissonance (confirmation bias).[1][2][3]

Coping with the nuances of contradictory ideas or experiences is mentally stressful. It requires energy and effort to sit with those seemingly opposite things that all seem true. Festinger argued that some people would inevitably resolve dissonance by blindly believing whatever they wanted to believe.

https://en.wikipedia.org/wiki/Cognitive_dissonance


Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values.[1] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias cannot be eliminated entirely, but it can be managed, for example, by education and training in critical thinking skills.

Confirmation bias is a broad construct covering a number of explanations. Biased search for information, biased interpretation of this information, and biased memory recall, have been invoked to explain four specific effects: 1) attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence); 2) belief perseverance (when beliefs persist after the evidence for them is shown to be false); 3) the irrational primacy effect (a greater reliance on information encountered early in a series); and 4) illusory correlation (when people falsely perceive an association between two events or situations).

A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives (myside bias, an alternative name for confirmation bias). In general, current explanations for the observed biases reveal the limited human capacity to process the complete set of information available, leading to a failure to investigate in a neutral, scientific way.

Flawed decisions due to confirmation bias have been found in political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning (the gradual accumulation of supportive evidence). Similarly, a police detective may identify a suspect early in an investigation, but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session, and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which display to individuals only information they are likely to agree with, while excluding opposing views.

https://en.wikipedia.org/wiki/Confirmation_bias


In psychology, rationalization or rationalisation is a defense mechanism in which controversial behaviors or feelings are justified and explained in a seemingly rational or logical manner in the absence of a true explanation, and are made consciously tolerable—or even admirable and superior—by plausible means.[1] It is also an informal fallacy of reasoning.[2]

Rationalization happens in two steps:

  1. A decision, action, or judgement is made for a given reason, or no (known) reason at all.
  2. A rationalization is performed, constructing a seemingly good or logical reason, as an attempt to justify the act after the fact (for oneself or others).

Rationalization encourages irrational or unacceptable behavior, motives, or feelings and often involves ad hoc hypothesizing. This process ranges from fully conscious (e.g. to present an external defense against ridicule from others) to mostly unconscious (e.g. to create a block against internal feelings of guilt or shame). People rationalize for various reasons—sometimes when we think we know ourselves better than we do. Rationalization may differentiate the original deterministic explanation of the behavior or feeling in question.[3][4]

Many conclusions individuals come to do not fall under the definition of rationalization as the term is denoted above.

https://en.wikipedia.org/wiki/Rationalization_(psychology)


In science and philosophy, an ad hoc hypothesis is a hypothesis added to a theory in order to save it from being falsified. Often, ad hoc hypothesizing is employed to compensate for anomalies not anticipated by the theory in its unmodified form.

https://en.wikipedia.org/wiki/Ad_hoc_hypothesis


In computer programming, string interpolation (or variable interpolation, variable substitution, or variable expansion) is the process of evaluating a string literal containing one or more placeholders, yielding a result in which the placeholders are replaced with their corresponding values. It is a form of simple template processing[1] or, in formal terms, a form of quasi-quotation (or logic substitution interpretation). String interpolation allows easier and more intuitive string formatting and content-specification compared with string concatenation.[2]

String interpolation is common in many programming languages which make heavy use of string representations of data, such as Apache Groovy, Julia, Kotlin, Perl, PHP, Python, Ruby, Scala, Swift, Tcl, and most Unix shells. Two modes of literal expression are usually offered: one with interpolation enabled, the other without (termed raw string). Placeholders are usually represented by a bare or a named sigil (typically $ or %), e.g. $placeholder or %123. Expansion of the string usually occurs at run time.
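
A minimal Python sketch of the idea described above, contrasting an interpolated literal (an f-string) with a plain template string; the variable names here are invented for illustration:

    # String interpolation: placeholders inside the literal are replaced
    # with the values of the named variables at run time.
    user = "Ada"
    count = 3

    # Interpolated literal (f-string): {user} and {count} are placeholders.
    print(f"{user} has {count} unread messages")      # Ada has 3 unread messages

    # Concatenation achieves the same result, but is clumsier.
    print(user + " has " + str(count) + " unread messages")

    # A non-interpolated literal keeps the placeholder text as-is...
    template = "{user} has {count} unread messages"
    print(template)                                    # {user} has {count} unread messages

    # ...until a substitution is performed explicitly.
    print(template.format(user=user, count=count))     # Ada has 3 unread messages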

https://en.wikipedia.org/wiki/String_interpolation


A time-invariant (TIV) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a "time-varying system".

Mathematically speaking, "time-invariance" of a system is the following property:[4]: p. 50 

Given a system with a time-dependent output function y(t) and a time-dependent input function x(t), the system will be considered time-invariant if a time-delay on the input, x(t + δ), directly equates to a time-delay of the output function, y(t + δ). For example, if time t is "elapsed time", then "time-invariance" implies that the relationship between the input function x(t) and the output function y(t) is constant with respect to time t: if the input x(t) produces the output y(t), then the delayed input x(t − δ) produces the delayed output y(t − δ).

In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output.

In the context of a system schematic, this property can also be stated as follows:

If a system is time-invariant then the system block commutes with an arbitrary delay.

If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopyseismologycircuitssignal processingcontrol theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
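
A small Python sketch of the delay-commutation property just stated, assuming simple discrete-time signals; both example systems and the signal values are invented for illustration:

    # Discrete-time signals as lists indexed by n = 0, 1, 2, ...
    def delay(x, k):
        """Delay a signal by k samples (zero-padded at the start)."""
        return [0] * k + x[:len(x) - k]

    def gain_system(x):
        """y[n] = 2 * x[n]; the rule does not depend on n, so it is time-invariant."""
        return [2 * v for v in x]

    def ramp_system(x):
        """y[n] = n * x[n]; the rule depends directly on n, so it is time-variant."""
        return [n * v for n, v in enumerate(x)]

    x = [1, 2, 3, 4, 5]
    k = 2

    # Time-invariant: delaying the input then applying the system gives the
    # same result as applying the system then delaying the output.
    print(gain_system(delay(x, k)) == delay(gain_system(x), k))   # True

    # Time-variant: the two orders disagree.
    print(ramp_system(delay(x, k)) == delay(ramp_system(x), k))   # False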

https://en.wikipedia.org/wiki/Time-invariant_system


A parameter (from the Ancient Greek παρά, para: "beside", "subsidiary"; and μέτρον, metron: "measure"), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.

Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition.

In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'.[1]
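
As a brief, hedged illustration of the programming and statistical senses mentioned above, the Python sketch below uses invented function names and numbers:

    import statistics

    # In programming, parameters are the named inputs in a function's
    # definition; arguments are the concrete values supplied when calling it.
    def scale(values, factor=1.0):       # 'values' and 'factor' are parameters
        return [v * factor for v in values]

    print(scale([1, 2, 3], factor=2.5))  # [2.5, 5.0, 7.5]  (arguments)

    # In statistics, a parameter is a characteristic of a whole population;
    # a statistic computed from a sample merely estimates it.
    population = [2, 4, 4, 4, 5, 5, 7, 9]
    mu = statistics.mean(population)     # population parameter (the mean)
    sample = population[:4]
    x_bar = statistics.mean(sample)      # sample statistic estimating mu
    print(mu, x_bar)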

https://en.wikipedia.org/wiki/Parameter


Penrose also claims that the technical difficulty of modern physics forces young scientists to rely on the preferences of established researchers, rather than forging new paths of their own.[137] Lee Smolin expresses a slightly different position in his critique, claiming that string theory grew out of a tradition of particle physics which discourages speculation about the foundations of physics, while his preferred approach, loop quantum gravity, encourages more radical thinking. According to Smolin,

String theory is a powerful, well-motivated idea and deserves much of the work that has been devoted to it. If it has so far failed, the principal reason is that its intrinsic flaws are closely tied to its strengths—and, of course, the story is unfinished, since string theory may well turn out to be part of the truth. The real question is not why we have expended so much energy on string theory but why we haven't expended nearly enough on alternative approaches.[138]

Smolin goes on to offer a number of prescriptions for how scientists might encourage a greater diversity of approaches to quantum gravity research.[139]

https://en.wikipedia.org/wiki/String_theory


Life expectancy is a statistical measure of the average time an organism is expected to live, based on the year of its birth, its current age, and other demographic factors including sex. The most commonly used measure is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of an actual birth cohort (all individuals born in a given year) and can be computed only for cohorts born many decades ago so that all their members have died. Period LEB is the mean length of life of a hypothetical cohort[1][2] assumed to be exposed, from birth through death, to the mortality rates observed at a given year.[3]
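
A hedged Python sketch of how a period life expectancy at birth could be computed from age-specific mortality rates via a simplified abridged life table; the rates below are invented for illustration, not real data:

    # Invented probabilities of dying within each 10-year age band (0-9, 10-19, ...).
    band_width = 10
    q = [0.01, 0.005, 0.01, 0.02, 0.04, 0.08, 0.20, 0.45, 0.80, 1.00]

    radix = 100_000          # size of the hypothetical birth cohort
    alive = float(radix)
    person_years = 0.0
    for q_x in q:
        deaths = alive * q_x
        survivors = alive - deaths
        # Survivors live the whole band; those who die are assumed, on average,
        # to live half of it (a standard simplification).
        person_years += (survivors + 0.5 * deaths) * band_width
        alive = survivors

    e0 = person_years / radix   # period life expectancy at birth
    print(round(e0, 1))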

https://en.wikipedia.org/wiki/Life_expectancy


A paradox is a logically self-contradictory statement or a statement that runs contrary to one's expectation.[1][2][3] It is a statement that, despite apparently valid reasoning from true premises, leads to a seemingly self-contradictory or a logically unacceptable conclusion.[4][5] A paradox usually involves contradictory-yet-interrelated elements that exist simultaneously and persist over time.[6][7][8]

https://en.wikipedia.org/wiki/Paradox


A motive is the cause that moves people to induce a certain action.[1] In criminal law, motive in itself is not an element of any given crime; however, the legal system typically allows motive to be proven to make plausible the accused's reasons for committing a crime, at least when those motives may be obscure or hard to identify with. However, a motive is not required to reach a verdict.[2] Motives are also used in other aspects of a specific case, for instance, when police are initially investigating.[2]

The law technically distinguishes between motive and intent. "Intent" in criminal law is synonymous with Mens rea, which means the mental state shows liability which is enforced by law as an element of a crime.[3] "Motive" describes instead the reasons in the accused's background and station in life that are supposed to have induced the crime. Motives are often broken down into three categories; biological, social and personal.[4]

https://en.wikipedia.org/wiki/Motive_(law)


A mirror is an object that reflects an image. Light that bounces off a mirror will show an image of whatever is in front of it, when focused through the lens of the eye or a camera. Mirrors reverse the direction of the image in an equal yet opposite angle from which the light shines upon it. This allows the viewer to see themselves or objects behind them, or even objects that are at an angle from them but out of their field of view, such as around a corner. Natural mirrors have existed since prehistoric times, such as the surface of water, but people have been manufacturing mirrors out of a variety of materials for thousands of years, like stone, metals, and glass. In modern mirrors, metals like silver or aluminum are often used due to their high reflectivity, applied as a thin coating on glass because of its naturally smooth and very hard surface.

https://en.wikipedia.org/wiki/Mirror


Vergence is the angle formed by rays of light that are not perfectly parallel to one another. Rays that move closer to the optical axis as they propagate are said to be converging, while rays that move away from the axis are diverging. These imaginary rays are always perpendicular to the wavefront of the light, thus the vergence of the light is directly related to the radii of curvature of the wavefronts. A convex lens or concave mirror will cause parallel rays to focus, converging toward a point. Beyond that focal point, the rays diverge. Conversely, a concave lens or convex mirror will cause parallel rays to diverge.

Light does not actually consist of imaginary rays and light sources are not single-point sources, thus vergence is typically limited to simple ray modeling of optical systems. In a real system, the vergence is a product of the diameter of a light source, its distance from the optics, and the curvature of the optical surfaces. An increase in curvature causes an increase in vergence and a decrease in focal length, and the image or spot size (waist diameter) will be smaller. Likewise, a decrease in curvature decreases vergence, resulting in a longer focal length and an increase in image or spot diameter. This reciprocal relationship between vergence, focal length, and waist diameter are constant throughout an optical system, and is referred to as the optical invariant. A beam that is expanded to a larger diameter will have a lower degree of divergence, but if condensed to a smaller diameter the divergence will be greater.

The simple ray model fails for some situations, such as for laser light, where Gaussian beam analysis must be used instead.
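
As a hedged numerical illustration of the vergence and focal-length relationship described above, the Python sketch below uses the common convention (not stated in the excerpt) that vergence in diopters is the reciprocal of the distance in meters to the point of convergence, negative for diverging light:

    # Assumed convention: vergence (in diopters) = 1 / distance (in meters)
    # to the point where the rays cross; diverging light is negative.

    # Light diverging from a point source 0.5 m in front of a thin lens:
    v_in = -1.0 / 0.5              # -2.0 D

    # A converging lens of focal length 0.25 m has power P = 1/f = +4.0 D.
    power = 1.0 / 0.25

    # Vergence (thin-lens) equation: output vergence = input vergence + power.
    v_out = v_in + power           # +2.0 D, i.e. converging light
    image_distance = 1.0 / v_out   # rays come to a focus 0.5 m behind the lens

    # A shorter focal length (greater curvature) gives greater output vergence,
    # matching the qualitative relationship described in the excerpt.
    print(v_in, power, v_out, image_distance)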

https://en.wikipedia.org/wiki/Vergence_(optics)


In optics, a Gaussian beam is a beam of electromagnetic radiation with high monochromaticity whose amplitude envelope in the transverse plane is given by a Gaussian function; this also implies a Gaussian intensity (irradiance) profile. This fundamental (or TEM00) transverse Gaussian mode describes the intended output of most (but not all) lasers, as such a beam can be focused into the most concentrated spot. When such a beam is refocused by a lens, the transverse phase dependence is altered; this results in a different Gaussian beam. The electric and magnetic field amplitude profiles along any such circular Gaussian beam (for a given wavelength and polarization) are determined by a single parameter: the so-called waist w0. At any position z relative to the waist (focus) along a beam having a specified w0, the field amplitudes and phases are thereby determined.[1]
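
The excerpt states that the whole beam is fixed by the waist w0. As a hedged Python illustration using the standard textbook relations (which the excerpt does not itself spell out), the beam radius at a distance z from the waist follows from w0 and the wavelength:

    import math

    def beam_radius(z, w0, wavelength):
        """Gaussian beam radius w(z) at distance z from the waist w0.

        Standard relations (assumed, not quoted from the excerpt):
            z_R  = pi * w0**2 / wavelength        (Rayleigh range)
            w(z) = w0 * sqrt(1 + (z / z_R)**2)
        """
        z_r = math.pi * w0 ** 2 / wavelength
        return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

    # Example: a 633 nm He-Ne beam focused to a 0.5 mm waist radius.
    w0 = 0.5e-3           # meters
    wavelength = 633e-9   # meters

    for z in (0.0, 0.5, 1.0, 2.0):   # distances from the waist, in meters
        print(z, beam_radius(z, w0, wavelength))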

https://en.wikipedia.org/wiki/Gaussian_beam


Polarization (also polarisation) is a property applying to transverse waves that specifies the geometrical orientation of the oscillations.[1][2][3][4][5] In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave.[4] A simple example of a polarized transverse wave is vibrations traveling along a taut string; for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves,[6] and transverse sound waves (shear waves) in solids.

https://en.wikipedia.org/wiki/Polarization_(waves)


In solid mechanics, shearing forces are unaligned forces pushing one part of a body in one specific direction, and another part of the body in the opposite direction. When the forces are colinear (aligned into each other), they are called compression forces. An example is a deck of cards being pushed one way on the top, and the other at the bottom, causing the cards to slide. Another example is when wind blows at the side of a peaked roof of a house: the side walls experience a force at their top pushing in the direction of the wind, and their bottom in the opposite direction, from the ground or foundation. William A. Nash defines shear force in terms of planes: "If a plane is passed through a body, a force acting along this plane is called a shear force or shearing force."[1]

https://en.wikipedia.org/wiki/Shear_force


The cantilever method is an approximate method for calculating shear forces and moments developed in beams and columns of a frame or structure due to lateral loads. The applied lateral loads typically include wind loads and earthquake loads, which must be taken into consideration while designing buildings. The assumptions used in this method are that the points of contraflexure (or points of inflection of the moment diagram) in both the vertical and horizontal members are located at the midpoint of the member, and that the direct stresses in the columns are proportional to their distances from the centroidal axis of the frame.[1] The frame is analysed in step-wise (iterative) fashion, and the results can then be described by force diagrams drawn up at the end of the process. The method is quite versatile and can be used to analyse frames of any number of storeys or floors.

The position of the centroidal axis (the center of gravity line for the frame) is determined by using the areas of the end columns and interior columns. The cantilever method is considered as one of the two primary approximate methods (the other being the portal method) for indeterminate structural analysis of frames for lateral loads. Its use is recommended for frames that are taller than they are wide, and therefore behave similar to a beam cantilevered up from the ground.
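
A hedged Python sketch of the method's central assumption as stated above (column axial stresses proportional to distance from the centroidal axis) at a single cut through a hypothetical frame; the geometry, areas, and overturning moment are invented for illustration:

    # Hypothetical one-storey frame: column x-positions (m) and areas (m^2).
    x = [0.0, 4.0, 8.0, 12.0]
    A = [0.09, 0.09, 0.09, 0.09]

    # Overturning moment of the lateral loads above the cut (kN*m), assumed.
    M = 600.0

    # Centroidal axis of the column group.
    x_bar = sum(a * xi for a, xi in zip(A, x)) / sum(A)
    d = [xi - x_bar for xi in x]          # distances from the centroidal axis

    # Cantilever-method assumption: axial stress in each column is proportional
    # to its distance from the centroidal axis, sigma_i = k * d_i, so moment
    # equilibrium about that axis gives k = M / sum(A_i * d_i**2).
    k = M / sum(a * di ** 2 for a, di in zip(A, d))

    axial_forces = [k * di * a for di, a in zip(d, A)]   # kN (tension positive)
    print([round(f, 1) for f in axial_forces])
    # The forces are antisymmetric about the centroidal axis and sum to zero.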

https://en.wikipedia.org/wiki/Cantilever_method


A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
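
A hedged Python sketch of the disease-and-symptom example above, reduced to a two-node network (Disease causes Symptom) with invented probabilities and exact inference by enumeration:

    # Tiny Bayesian network: Disease -> Symptom, with invented probabilities.
    p_disease = 0.01                       # P(Disease = true)
    p_symptom_given = {True: 0.90,         # P(Symptom | Disease = true)
                       False: 0.05}        # P(Symptom | Disease = false)

    def p_disease_given_symptom():
        """P(Disease | Symptom) by enumerating the two joint cases (Bayes' rule)."""
        joint_true = p_disease * p_symptom_given[True]
        joint_false = (1 - p_disease) * p_symptom_given[False]
        return joint_true / (joint_true + joint_false)

    print(round(p_disease_given_symptom(), 3))   # about 0.154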

https://en.wikipedia.org/wiki/Bayesian_network


Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data.[1] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[2] Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.[3]

A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning.[5][6] Some implementations of machine learning use data and neural networks in a way that mimics the working of a biological brain.[7][8] In its application across business problems, machine learning is also referred to as predictive analytics.
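A minimal sketch (not from the source) of the core idea of building a model from "training data" and using it to predict: fit a line y = w*x + b to toy data by gradient descent.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]   # made-up (x, y) pairs
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * grad_w, b - lr * grad_b
print(f"learned w={w:.2f}, b={b:.2f}; prediction for x=5: {w * 5 + b:.2f}")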

https://en.wikipedia.org/wiki/Machine_learning


A cascading failure is a process in a system of interconnected parts in which the failure of one or a few parts can trigger the failure of other parts, and so on. Such a failure may happen in many types of systems, including power transmission, computer networking, finance, transportation systems, organisms, the human body, and ecosystems.

Cascading failures may occur when one part of the system fails. When this happens, other parts must then compensate for the failed component. This in turn overloads these nodes, causing them to fail as well, prompting additional nodes to fail one after another.
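A minimal sketch (not from the source; the loads, capacities, and initial failure are illustrative) of the overload-and-redistribute mechanism described above:

loads      = [0.8, 0.7, 0.9, 0.6]      # initial loads on four nodes
capacities = [1.0, 1.0, 1.0, 1.0]
failed = {2}                           # assume node 2 fails first
changed = True
while changed:
    changed = False
    alive = [i for i in range(len(loads)) if i not in failed]
    extra = sum(loads[i] for i in failed) / max(len(alive), 1)
    for i in alive:
        if loads[i] + extra > capacities[i]:   # compensation overloads the node
            failed.add(i)
            changed = True
print("failed nodes:", sorted(failed))         # the failure cascades to all nodes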

https://en.wikipedia.org/wiki/Cascading_failure


Mount St. Helens (known as  Lawetlat'la to the Indigenous Cowlitz people, and Loowit or Louwala-Clough to the Klickitat) is an active stratovolcano located in Skamania County, Washington[1] in the Pacific Northwest region of the United States. It lies 52 miles (83 km) northeast of Portland, Oregon[2] and 98 miles (158 km) south of Seattle.[3] Mount St. Helens takes its English name from the British diplomat Lord St Helens, a friend of explorer George Vancouver who surveyed the area in the late 18th century.[1] The volcano is part of the Cascade Volcanic Arc, a segment of the Pacific Ring of Fire.

https://en.wikipedia.org/wiki/Mount_St._Helens


Critical race theory (CRT) is a body of legal scholarship and an academic movement of US civil-rights scholars and activists who seek to critically examine the intersection of race and U.S. law and to challenge mainstream American liberal approaches to racial justice.[1][2][3][4] CRT examines social, cultural, and legal issues primarily as they relate to race and racism in the US.[5][6] A tenet of CRT is that racism and disparate racial outcomes are the result of complex, changing, and often subtle social and institutional dynamics, rather than explicit and intentional prejudices of individuals.[7][8]

CRT scholars view race and white supremacy as an intersectional social construct,[7] which serves to uphold the interests of white people[9] at the expense of marginalized communities.[10][11][12] In the field of legal studies, CRT emphasizes that formally colorblind laws can still have racially discriminatory outcomes.[13] A key CRT concept is intersectionality, which emphasizes that race can intersect with other identities (such as gender and class) to produce complex combinations of power and advantage.[14]

Academic critics of CRT argue that it relies on social constructionism, elevates storytelling over evidence and reason, rejects the concepts of truth and merit, and opposes liberalism.[15][16][17] Since 2020, conservative US lawmakers have sought to ban or restrict critical race theory instruction along with other anti‑racism programs.[8][18] Critics of these efforts say the lawmakers have poorly defined or misrepresented the tenets and importance of CRT and that the goal of the laws is to more broadly silence discussions of racism, equality, social justice, and the history of race.[19][20][21]

CRT originated in the mid-1970s in the writings of several American legal scholars, including Derrick Bell, Alan Freeman, Kimberlé Crenshaw, Richard Delgado, Cheryl Harris, Charles R. Lawrence III, Mari Matsuda, and Patricia J. Williams.[1] It emerged as a movement by the 1980s, reworking theories of critical legal studies (CLS) with more focus on race.[1][22] CRT is grounded in critical theory[23] and draws from thinkers such as Antonio Gramsci, Sojourner Truth, Frederick Douglass, and W. E. B. Du Bois, as well as the Black Power, Chicano, and radical feminist movements of the 1960s and 1970s.[1]

https://en.wikipedia.org/wiki/Critical_race_theory


A bone fracture (abbreviated FRX, Fx, or #) is a medical condition in which there is a partial or complete break in the continuity of a bone. In more severe cases, the bone may be broken into several pieces.[1] A bone fracture may be the result of high force impact or stress, or a minimal trauma injury as a result of certain medical conditions that weaken the bones, such as osteoporosis, osteopenia, bone cancer, or osteogenesis imperfecta, where the fracture is then properly termed a pathologic fracture.[2]

https://en.wikipedia.org/wiki/Bone_fracture


An orbital blowout fracture is a traumatic deformity of the orbital floor or medial wall, typically resulting from the impact of a blunt object larger than the orbital aperture, or eye socket. The inferior orbital wall (the floor) is the most likely to collapse, because the bones of the roof and lateral walls are robust, and although the bone forming the medial wall is the thinnest, it is buttressed by the bone separating the ethmoidal air cells. The comparatively thin bone of the orbital floor, which is also the roof of the maxillary sinus, has no such support, so the inferior wall collapses most often. Medial wall blowout fractures are the second most common, while blowout fractures of the superior wall (the roof) and the lateral wall are uncommon and rare, respectively. There are two broad categories of blowout fractures: open door, which are large, displaced and comminuted, and trapdoor, which are linear, hinged, and minimally displaced. They are characterized by double vision, sunken ocular globes, and loss of sensation of the cheek and upper gums due to infraorbital nerve injury.[1]

In pure orbital blowout fractures, the orbital rim (the most anterior bony margin of the orbit) is preserved, while in impure fractures the orbital rim is also injured. With the trapdoor variant, there is a high frequency of extra-ocular muscle entrapment despite minimal signs of external trauma, a phenomenon referred to as a 'white-eyed' orbital blowout fracture.[2] They can occur with other injuries such as transfacial Le Fort fractures or zygomaticomaxillary complex fractures. The most common causes are assault and motor vehicle accidents. In children, the trapdoor subtype is more common.[3]

Surgical intervention may be required to prevent diplopia and enophthalmos. Patients that are not experiencing enophthalmos or diplopia, and that have good extraocular mobility can be closely followed by ophthalmology without surgery.[4]

https://en.wikipedia.org/wiki/Orbital_blowout_fracture


Avascular necrosis (AVN), also called osteonecrosis or bone infarction, is death of bone tissue due to interruption of the blood supply.[1] Early on, there may be no symptoms.[1] Gradually joint pain may develop which may limit the ability to move.[1] Complications may include collapse of the bone or nearby joint surface.[1]

Risk factors include bone fractures, joint dislocations, alcoholism, and the use of high-dose steroids.[1] The condition may also occur without any clear reason.[1] The most commonly affected bone is the femur.[1] Other relatively common sites include the upper arm bone, knee, shoulder, and ankle.[1] Diagnosis is typically by medical imaging such as X-ray, CT scan, or MRI.[1] Rarely, biopsy may be used.[1]

Treatments may include medication, not walking on the affected leg, stretching, and surgery.[1] Most of the time surgery is eventually required and may include core decompression, osteotomy, bone grafts, or joint replacement.[1] About 15,000 cases occur per year in the United States.[4] People 30 to 50 years old are most commonly affected.[3] Males are more commonly affected than females.[4]

https://en.wikipedia.org/wiki/Avascular_necrosis


Agent Blue is one of the "rainbow herbicides" known for its use by the United States during the Vietnam War. It contained a mixture of dimethylarsinic acid (also known as cacodylic acid), its related salt sodium cacodylate, and water. Largely inspired by the British use of herbicides and defoliants during the Malayan Emergency, killing rice was a military strategy from the very start of US military involvement in Vietnam. At first, US soldiers attempted to blow up rice paddies and rice stocks using mortars and hand grenades, but grains of rice were far more durable than they understood and were not easily destroyed. Every grain that survived was a seed, to be collected and planted again.

https://en.wikipedia.org/wiki/Agent_Blue


Phosphorus is a chemical element with the symbol P and atomic number 15. Elemental phosphorus exists in two major forms, white phosphorus and red phosphorus, but because it is highly reactive, phosphorus is never found as a free element on Earth. It has a concentration in the Earth's crust of about one gram per kilogram (compare copper at about 0.06 grams). In minerals, phosphorus generally occurs as phosphate.

Elemental phosphorus was first isolated as white phosphorus in 1669. White phosphorus emits a faint glow when exposed to oxygen – hence the name, taken from Greek mythology, Φωσφόρος meaning "light-bearer" (Latin Lucifer), referring to the "Morning Star", the planet Venus. The term "phosphorescence", meaning glow after illumination, derives from this property of phosphorus, although the word has since been used for a different physical process that produces a glow. The glow of phosphorus is caused by oxidation of the white (but not red) phosphorus — a process now called chemiluminescence. Together with nitrogen, arsenic, antimony, and bismuth, phosphorus is classified as a pnictogen.

Phosphorus is an element essential to sustaining life largely through phosphates, compounds containing the phosphate ion, PO43−. Phosphates are a component of DNA, RNA, ATP, and phospholipids, complex compounds fundamental to cells. Elemental phosphorus was first isolated from human urine, and bone ash was an important early phosphate source. Phosphate mines contain fossils because phosphate is present in the fossilized deposits of animal remains and excreta. Low phosphate levels are an important limit to growth in some aquatic systems. The vast majority of phosphorus compounds mined are consumed as fertilisers. Phosphate is needed to replace the phosphorus that plants remove from the soil, and its annual demand is rising nearly twice as fast as the growth of the human population. Other applications include organophosphorus compounds in detergents, pesticides, and nerve agents.

https://en.wikipedia.org/wiki/Phosphorus


Iodine is a chemical element with the symbol I and atomic number 53. The heaviest of the stable halogens, it exists as a semi-lustrous, non-metallic solid at standard conditions that melts to form a deep violet liquid at 114 degrees Celsius, and boils to a violet gas at 184 degrees Celsius. The element was discovered by the French chemist Bernard Courtois in 1811, and was named two years later by Joseph Louis Gay-Lussac, after the Greek ἰώδης "violet-coloured".

Iodine occurs in many oxidation states, including iodide (I−), iodate (IO3−), and the various periodate anions. It is the least abundant of the stable halogens, being the sixty-first most abundant element. It is the heaviest essential mineral nutrient. Iodine is essential in the synthesis of thyroid hormones.[4] Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities.[5]

The dominant producers of iodine today are Chile and Japan. Iodine and its compounds are primarily used in nutrition. Due to its high atomic number and ease of attachment to organic compounds, it has also found favour as a non-toxic radiocontrast material. Because of the specificity of its uptake by the human body, radioactive isotopes of iodine can also be used to treat thyroid cancer. Iodine is also used as a catalyst in the industrial production of acetic acid and some polymers.

https://en.wikipedia.org/wiki/Iodine


Mercury is a chemical element with the symbol Hg and atomic number 80. It is commonly known as quicksilver and was formerly named hydrargyrum (/haɪˈdrɑːrdʒərəm/ hy-DRAR-jər-əm).[4] A heavy, silvery d-block element, mercury is the only metallic element that is liquid at standard conditions for temperature and pressure; the only other element that is liquid under these conditions is the halogen bromine, though metals such as caesium, gallium, and rubidium melt just above room temperature.

Mercury occurs in deposits throughout the world mostly as cinnabar (mercuric sulfide). The red pigment vermilion is obtained by grinding natural cinnabar or synthetic mercuric sulfide.

Mercury is used in thermometers, barometers, manometers, sphygmomanometers, float valves, mercury switches, mercury relays, fluorescent lamps and other devices, though concerns about the element's toxicity have led to mercury thermometers and sphygmomanometers being largely phased out in clinical environments in favor of alternatives such as alcohol- or galinstan-filled glass thermometers and thermistor- or infrared-based electronic instruments. Likewise, mechanical pressure gauges and electronic strain gauge sensors have replaced mercury sphygmomanometers.

Mercury remains in use in scientific research applications and in amalgam for dental restoration in some locales. It is also used in fluorescent lighting. Electricity passed through mercury vapor in a fluorescent lamp produces short-wave ultraviolet light, which then causes the phosphor in the tube to fluoresce, making visible light.

Mercury poisoning can result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury), by inhalation of mercury vapor, or by ingesting any form of mercury.

https://en.wikipedia.org/wiki/Mercury_(element)


Cyaphide, P≡C−, is the phosphorus analogue of cyanide. It is not known as a discrete salt; however, in silico calculations indicate that the −1 charge in this ion is located mainly on carbon (0.65) rather than on phosphorus.

https://en.wikipedia.org/wiki/Cyaphide


Cyanogen is the chemical compound with the formula (CN)2. It is a colorless, toxic gas with a pungent odor. The molecule is a pseudohalogen. Cyanogen molecules consist of two CN groups – analogous to diatomic halogen molecules, such as Cl2, but far less oxidizing. The two cyano groups are bonded together at their carbon atoms: N≡C−C≡N, although other isomers have been detected.[6] The name is also used for the CN radical,[7] and hence is used for compounds such as cyanogen bromide (NCBr).[8]

Cyanogen is the anhydride of oxamide:

H2NC(O)C(O)NH2 → NCCN + 2 H2O

although oxamide is manufactured from cyanogen by hydrolysis:[9]

NCCN + 2 H2O → H2NC(O)C(O)NH2

https://en.wikipedia.org/wiki/Cyanogen


Iron (/ˈaɪərn/) is a chemical element with symbol Fe (from Latin: ferrum) and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is, by mass, the most common element on Earth, right in front of oxygen (32.1% and 30.1%, respectively), forming much of Earth's outer and inner core. It is the fourth most common element in the Earth's crust.

In its metallic state, iron is rare in the Earth's crust, limited mainly to deposition by meteorites. Iron ores, by contrast, are among the most abundant in the Earth's crust, although extracting usable metal from them requires kilns or furnaces capable of reaching 1,500 °C (2,730 °F) or higher, about 500 °C (900 °F) higher than that required to smelt copper. Humans started to master that process in Eurasia by about 2000 BCE, and the use of iron tools and weapons began to displace copper alloys, in some regions, only around 1200 BCE. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, because of their mechanical properties and low cost.

Pristine and smooth pure iron surfaces are mirror-like silvery-gray. However, iron reacts readily with oxygen and water to give brown to black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing fresh surfaces for corrosion. Although iron readily reacts, high purity iron, called electrolytic iron, has better corrosion resistance.

The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in vertebrate metabolism, respectively oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals.[5]

Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −2 to +7. Iron also forms many coordination compounds; some of them, such as ferrocene, ferrioxalate, and Prussian blue, have substantial industrial, medical, or research applications.

https://en.wikipedia.org/wiki/Iron


Prussian blue (also known as Berlin blue or, in painting, Parisian or Paris blue) is a dark blue pigment produced by oxidation of ferrous ferrocyanide salts. It has the chemical formula Fe4[Fe(CN)6]3 (iron(III) hexacyanoferrate(II)). Turnbull's blue is chemically identical, but is made from different reagents, and its slightly different color stems from different impurities.

Prussian blue was the first modern synthetic pigment. It is prepared as a very fine colloidal dispersion, because the compound is not soluble in water. It contains variable amounts[1] of other ions and its appearance depends sensitively on the size of the colloidal particles. The pigment is used in paints, and it is the traditional "blue" in blueprints and aizuri-e (藍摺り絵), Japanese woodblock prints.

In medicine, orally administered Prussian blue is used as an antidote for certain kinds of heavy metal poisoning, e.g., by thallium(I) and radioactive isotopes of caesium. The therapy exploits the compound's ion-exchange properties and high affinity for certain "soft" metal cations.

It is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system.[2] Prussian blue lent its name to prussic acid (hydrogen cyanide) derived from it. In German, hydrogen cyanide is called Blausäure ("blue acid"). French chemist Joseph Louis Gay-Lussac gave cyanide its name, from the Ancient Greek word κύανος (kyanos, "blue"), because of its Prussian blue color.

https://en.wikipedia.org/wiki/Prussian_blue


https://en.wikipedia.org/wiki/Silver

https://en.wikipedia.org/wiki/Ruthenium

https://en.wikipedia.org/wiki/Periodic_table


https://en.wikipedia.org/wiki/Nucleon

https://en.wikipedia.org/wiki/pressuron

https://en.wikipedia.org/wiki/vertical_pressure

https://en.wikipedia.org/wiki/linear

https://en.wikipedia.org/wiki/scalar

https://en.wikipedia.org/wiki/magnetic_induction

https://en.wikipedia.org/wiki/linear_motor

https://en.wikipedia.org/wiki/spinor

https://en.wikipedia.org/wiki/List_of_obsolete_units_of_measurement


A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. However, some definitions specify that the particles must be dispersed in a liquid,[1] and others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.[2][3]

Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.

Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Italian chemist Francesco Selmi[4] and further investigated since 1861 by Scottish scientist Thomas Graham.[5]

https://en.wikipedia.org/wiki/Colloid


Non-ideal gases

In an ideal gas the particles are assumed to interact only through collisions. The equipartition theorem may also be used to derive the energy and pressure of "non-ideal gases" in which the particles also interact with one another through conservative forces whose potential U(r) depends only on the distance r between the particles.[5] This situation can be described by first restricting attention to a single gas particle, and approximating the rest of the gas by a spherically symmetric distribution. It is then customary to introduce a radial distribution function g(r) such that the probability density of finding another particle at a distance r from the given particle is equal to 4πr2ρg(r), where ρ = N/V is the mean density of the gas.[36] It follows that the mean potential energy associated to the interaction of the given particle with the rest of the gas is

⟨h_pot⟩ = ∫0∞ 4πr2 ρ U(r) g(r) dr.

The total mean potential energy of the gas is therefore ⟨H_pot⟩ = (1/2) N ⟨h_pot⟩, where N is the number of particles in the gas, and the factor 1/2 is needed because summation over all the particles counts each interaction twice. Adding kinetic and potential energies, then applying equipartition, yields the energy equation

H = ⟨H_kin⟩ + ⟨H_pot⟩ = (3/2) N kBT + 2πNρ ∫0∞ r2 U(r) g(r) dr.

A similar argument[5] can be used to derive the pressure equation

3 N kBT = 3 P V + 2πNρ ∫0∞ r3 g(r) (dU/dr) dr.
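A minimal numerical sketch (not from the source): it evaluates the interaction integral above for an illustrative Lennard-Jones pair potential, using the crude dilute-gas approximation g(r) ≈ 1 beyond the repulsive core.

import math

eps, sigma, rho = 1.0, 1.0, 0.01                  # illustrative reduced units
def U(r):                                         # Lennard-Jones pair potential
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

# Midpoint-rule estimate of <h_pot> = integral of 4*pi*r^2 * rho * U(r) * g(r) dr,
# taking g(r) ~ 1 for r between sigma and a large cutoff.
r_min, r_max, n = 1.0, 10.0, 100000
h = (r_max - r_min) / n
h_pot = sum(4 * math.pi * r * r * rho * U(r) * h
            for r in (r_min + (i + 0.5) * h for i in range(n)))
print("mean interaction energy per particle ~", round(h_pot, 4))   # ~ -0.112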

Anharmonic oscillators

An anharmonic oscillator (in contrast to a simple harmonic oscillator) is one in which the potential energy is not quadratic in the extension q (the generalized position, which measures the deviation of the system from equilibrium). Such oscillators provide a complementary point of view on the equipartition theorem.[37][38] Simple examples are provided by potential energy functions of the form

H_pot = C q^s,

where C and s are arbitrary real constants. In these cases, the law of equipartition predicts that

(1/2) kBT = ⟨(1/2) q (dH_pot/dq)⟩ = ⟨(1/2) q · s C q^(s−1)⟩ = ⟨(s/2) C q^s⟩ = (s/2) ⟨H_pot⟩.

Thus, the average potential energy equals kBT/s, not kBT/2 as for the quadratic harmonic oscillator (where s = 2).

More generally, a typical energy function of a one-dimensional system has a Taylor expansion in the extension q:

H_pot = Σn≥2 Cn q^n

for non-negative integers n. There is no n = 1 term, because at the equilibrium point, there is no net force and so the first derivative of the energy is zero. The n = 0 term need not be included, since the energy at the equilibrium position may be set to zero by convention. In this case, the law of equipartition predicts that[37]

kBT = ⟨q (dH_pot/dq)⟩ = Σn≥2 n Cn ⟨q^n⟩.

In contrast to the other examples cited here, the equipartition formula

⟨H_pot⟩ = (1/2) kBT − Σn≥3 ((n − 2)/2) Cn ⟨q^n⟩

does not allow the average potential energy to be written in terms of known constants.
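A minimal numerical check (not from the source) of the prediction above: for H_pot = C|q|^s, the Boltzmann-averaged potential energy should equal kBT/s.

import math

def avg_potential(C, s, kT, qmax=20.0, n=200000):
    # Boltzmann-weighted average of C*|q|**s over q, by a midpoint Riemann sum.
    h = 2 * qmax / n
    num = den = 0.0
    for i in range(n):
        q = -qmax + (i + 0.5) * h
        u = C * abs(q) ** s
        w = math.exp(-u / kT)
        num += u * w * h
        den += w * h
    return num / den

for s in (2, 4, 6):
    print(f"s={s}: <H_pot> = {avg_potential(1.0, s, 1.0):.3f}, kBT/s = {1.0/s:.3f}")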

Brownian motion

Figure 7. Typical Brownian motion of a particle in three dimensions.

The equipartition theorem can be used to derive the Brownian motion of a particle from the Langevin equation.[5] According to that equation, the motion of a particle of mass m with velocity v is governed by Newton's second law

dv/dt = (1/m) Frnd − v/τ,

where Frnd is a random force representing the random collisions of the particle and the surrounding molecules, and where the time constant τ reflects the drag force that opposes the particle's motion through the solution. The drag force is often written Fdrag = −γv; therefore, the time constant τ equals m/γ.

The dot product of this equation with the position vector r, after averaging, yields the equation

⟨r · dv/dt⟩ + (1/τ) ⟨r · v⟩ = 0

for Brownian motion (since the random force Frnd is uncorrelated with the position r). Using the mathematical identities

(d/dt)(r · r) = 2 (r · v)

and

(d2/dt2)(r · r) = 2 (v · v) + 2 r · (dv/dt),

the basic equation for Brownian motion can be transformed into

(d2/dt2)⟨r · r⟩ + (1/τ) (d/dt)⟨r · r⟩ = 2 ⟨v · v⟩ = 6 kBT / m,

where the last equality follows from the equipartition theorem for translational kinetic energy:

⟨(1/2) m v2⟩ = (3/2) kBT.

The above differential equation for ⟨r · r⟩ (with suitable initial conditions) may be solved exactly:

⟨r · r⟩ = (6 kBT τ2 / m) (e^(−t/τ) − 1 + t/τ).

On small time scales, with t << τ, the particle acts as a freely moving particle: by the Taylor series of the exponential function, the squared distance grows approximately quadratically:

⟨r · r⟩ ≈ (3 kBT / m) t2 = ⟨v2⟩ t2.

However, on long time scales, with t >> τ, the exponential and constant terms are negligible, and the squared distance grows only linearly:

⟨r · r⟩ ≈ (6 kBT τ / m) t = (6 kBT / γ) t.

This describes the diffusion of the particle over time. An analogous equation for the rotational diffusion of a rigid molecule can be derived in a similar way.
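A minimal simulation sketch (not from the source): an Euler–Maruyama integration of the Langevin equation above, with the standard fluctuation-dissipation noise amplitude sqrt(2 kB T dt / (m τ)) taken as an assumption, checking that the mean squared displacement grows roughly linearly once t >> τ.

import math, random

kT, m, tau, dt = 1.0, 1.0, 1.0, 0.01       # illustrative units
steps, walkers = 3000, 300                 # total time t = 30 >> tau
msd = 0.0
for _ in range(walkers):
    r = [0.0, 0.0, 0.0]
    v = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for k in range(3):
            noise = random.gauss(0.0, math.sqrt(2 * kT * dt / (m * tau)))
            v[k] += -(v[k] / tau) * dt + noise
            r[k] += v[k] * dt
    msd += sum(x * x for x in r) / walkers
t = steps * dt
print("simulated <r.r> ~", round(msd, 1),
      "  long-time theory 6*kB*T*tau*t/m =", round(6 * kT * tau * t / m, 1))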

Stellar physics

The equipartition theorem and the related virial theorem have long been used as a tool in astrophysics.[39] As examples, the virial theorem may be used to estimate stellar temperatures or the Chandrasekhar limit on the mass of white dwarf stars.[40][41]

The average temperature of a star can be estimated from the equipartition theorem.[42] Since most stars are spherically symmetric, the total gravitational potential energy can be estimated by integration

H_grav = −∫0R (4πr2 G / r) M(r) ρ(r) dr,

where M(r) is the mass within a radius r and ρ(r) is the stellar density at radius r; G represents the gravitational constant and R the total radius of the star. Assuming a constant density throughout the star, this integration yields the formula

H_grav = −(3/5) G M2 / R,

where M is the star's total mass. Hence, the average potential energy of a single particle is

⟨h_grav⟩ = H_grav / N = −(3/5) G M2 / (N R),

where N is the number of particles in the star. Since most stars are composed mainly of ionized hydrogen, N equals roughly M/mp, where mp is the mass of one proton. Application of the equipartition theorem gives an estimate of the star's temperature

⟨r ∂h_grav/∂r⟩ = ⟨−h_grav⟩ = kBT = (3/5) G M mp / R.

Substitution of the mass and radius of the Sun yields an estimated solar temperature of T = 14 million kelvins, very close to its core temperature of 15 million kelvins. However, the Sun is much more complex than assumed by this model—both its temperature and density vary strongly with radius—and such excellent agreement (≈7% relative error) is partly fortuitous.[43]
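A minimal numerical sketch (not from the source) plugging solar values into the estimate above, T ≈ 3 G M mp / (5 kB R):

G, kB, m_p = 6.674e-11, 1.381e-23, 1.673e-27      # SI constants
M_sun, R_sun = 1.989e30, 6.957e8                   # kg, m
T = 3 * G * M_sun * m_p / (5 * kB * R_sun)
print(f"estimated mean solar temperature ~ {T:.2e} K")   # ~1.4e7 K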

Star formation

The same formulae may be applied to determining the conditions for star formation in giant molecular clouds.[44] A local fluctuation in the density of such a cloud can lead to a runaway condition in which the cloud collapses inwards under its own gravity. Such a collapse occurs when the equipartition theorem—or, equivalently, the virial theorem—is no longer valid, i.e., when the gravitational potential energy exceeds twice the kinetic energy

(3/5) G M2 / R > 3 N kBT.

Assuming a constant density ρ for the cloud,

M = (4/3) π R3 ρ,

yields a minimum mass for stellar contraction, the Jeans mass MJ:

MJ2 = (5 kBT / (G mp))3 (3 / (4 π ρ)).

Substituting the values typically observed in such clouds (T = 150 K, ρ = 2×10−16 g/cm3) gives an estimated minimum mass of 17 solar masses, which is consistent with observed star formation. This effect is also known as the Jeans instability, after the British physicist James Hopwood Jeans who published it in 1902.[45]
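A minimal numerical sketch (not from the source) evaluating the Jeans-mass expression above for the quoted cloud parameters (T = 150 K, ρ = 2×10−16 g/cm3):

import math

G, kB, m_p = 6.674e-11, 1.381e-23, 1.673e-27        # SI constants
T, rho = 150.0, 2e-16 * 1e3                          # K; g/cm^3 converted to kg/m^3
M_J = math.sqrt((5 * kB * T / (G * m_p)) ** 3 * 3 / (4 * math.pi * rho))
print(f"Jeans mass ~ {M_J / 1.989e30:.1f} solar masses")   # ~15 with these constants, near the ~17 quoted above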

https://en.wikipedia.org/wiki/Equipartition_theorem#Non-ideal_gases


In differential geometry, a subject of mathematics, a symplectic manifold is a smooth manifold, equipped with a closed nondegenerate differential 2-form ω, called the symplectic form. The study of symplectic manifolds is called symplectic geometry or symplectic topology. Symplectic manifolds arise naturally in abstract formulations of classical mechanics and analytical mechanics as the cotangent bundles of manifolds. For example, in the Hamiltonian formulation of classical mechanics, which provides one of the major motivations for the field, the set of all possible configurations of a system is modeled as a manifold, and this manifold's cotangent bundle describes the phase space of the system.
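As a concrete standard example (added for illustration; it is not in the excerpt above): on R2n with coordinates (q1, …, qn, p1, …, pn), the canonical symplectic form is

ω = dq1 ∧ dp1 + dq2 ∧ dp2 + … + dqn ∧ dpn,

and in local coordinates this is exactly the form carried by the cotangent bundle of a configuration manifold, which is why the phase spaces of Hamiltonian mechanics are symplectic manifolds.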

https://en.wikipedia.org/wiki/Symplectic_manifold


Mohr–Coulomb theory is a mathematical model (see yield surface) describing the response of brittle materials such as concrete, or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials somehow follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength.[1]

In geotechnical engineering it is used to define the shear strength of soils and rocks at different effective stresses.

In structural engineering it is used to determine failure load as well as the angle of fracture of a displacement fracture in concrete and similar materials.  Coulomb's friction hypothesis is used to determine the combination of shear and normal stress that will cause a fracture of the material. Mohr's circle is used to determine which principal stresses will produce this combination of shear and normal stress, and the angle of the plane in which this will occur. According to the principle of normality the stress introduced at failure will be perpendicular to the line describing the fracture condition.

It can be shown that a material failing according to Coulomb's friction hypothesis will show the displacement introduced at failure forming an angle to the line of fracture equal to the angle of friction. This makes the strength of the material determinable by comparing the external mechanical work introduced by the displacement and the external load with the internal mechanical work introduced by the strain and stress at the line of failure. By conservation of energy the sum of these must be zero and this will make it possible to calculate the failure load of the construction.

A common improvement of this model is to combine Coulomb's friction hypothesis with Rankine's principal stress hypothesis to describe a separation fracture.[2] An alternative view derives the Mohr-Coulomb criterion as extension failure.[3]
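The failure envelope itself is not written out in the excerpt, so the following is a hedged minimal sketch of its common form τ = c + σn·tan(φ), using Mohr's circle to find the critical combination of shear and normal stress for a given pair of principal stresses (cohesion c and friction angle φ are illustrative values):

import math

def mohr_coulomb_fails(sigma1, sigma3, c, phi_deg):
    phi = math.radians(phi_deg)
    # Mohr's circle from the extreme principal stresses (compression positive).
    center = (sigma1 + sigma3) / 2.0
    radius = (sigma1 - sigma3) / 2.0
    # The circle touches the envelope tau = c + sigma_n*tan(phi) when
    # radius >= c*cos(phi) + center*sin(phi).
    return radius >= c * math.cos(phi) + center * math.sin(phi)

print(mohr_coulomb_fails(sigma1=300.0, sigma3=50.0, c=20.0, phi_deg=30.0))   # True: failure predicted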

https://en.wikipedia.org/wiki/Mohr–Coulomb_theory


Tresca yield surface

The Tresca yield criterion is taken to be the work of Henri Tresca.[11] It is also known as the maximum shear stress theory (MSST) and the Tresca–Guest[12] (TG) criterion. In terms of the principal stresses the Tresca criterion is expressed as

(1/2) max(|σ1 − σ2|, |σ2 − σ3|, |σ3 − σ1|) = Ssy = Sy / 2,

where Ssy is the yield strength in shear, and Sy is the tensile yield strength.

Figure 1 shows the Tresca–Guest yield surface in the three-dimensional space of principal stresses. It is a six-sided prism of infinite length. This means that the material remains elastic when all three principal stresses are roughly equivalent (a hydrostatic pressure), no matter how much it is compressed or stretched. However, when one of the principal stresses becomes smaller (or larger) than the others the material is subject to shearing. In such situations, if the shear stress reaches the yield limit then the material enters the plastic domain. Figure 2 shows the Tresca–Guest yield surface in two-dimensional stress space; it is a cross section of the prism along the (σ1, σ2) plane (σ3 = 0).

Figure 1: View of Tresca–Guest yield surface in 3D space of principal stresses




von Mises yield surface

The von Mises yield criterion is expressed in the principal stresses as

(σ1 − σ2)2 + (σ2 − σ3)2 + (σ3 − σ1)2 = 2 Sy2,

where Sy is the yield strength in uniaxial tension.

Figure 3 shows the von Mises yield surface in the three-dimensional space of principal stresses. It is a circular cylinder of infinite length with its axis inclined at equal angles to the three principal stresses. Figure 4 shows the von Mises yield surface in two-dimensional space compared with the Tresca–Guest criterion. A cross section of the von Mises cylinder on the plane of σ3 = 0 produces the elliptical shape of the yield surface.

Figure 3: View of Huber–Mises–Hencky yield surface in 3D space of principal stresses
Figure 4: Comparison of Tresca–Guest and Huber–Mises–Hencky criteria in 2D space (σ3 = 0)
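A minimal sketch (not from the source) comparing the two criteria above for a single principal stress state; the stresses and the yield strength Sy are illustrative values:

def tresca_yields(s1, s2, s3, Sy):
    # Maximum shear stress reaches Sy/2, i.e. the largest principal stress
    # difference reaches Sy.
    return max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1)) >= Sy

def von_mises_yields(s1, s2, s3, Sy):
    return (s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2 >= 2 * Sy**2

state = (250.0, 100.0, 0.0)     # MPa, principal stresses
Sy = 240.0                      # MPa, tensile yield strength
print("Tresca predicts yield:   ", tresca_yields(*state, Sy))      # True
print("von Mises predicts yield:", von_mises_yields(*state, Sy))   # False (less conservative here)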

https://en.wikipedia.org/wiki/Yield_surface#Tresca_yield_surface

William John Macquorn Rankine FRSE FRS (/ˈræŋkɪn/; 5 July 1820 – 24 December 1872) was a Scottish mechanical engineer who also contributed to civil engineering, physics and mathematics. He was a founding contributor, with Rudolf Clausius and William Thomson (Lord Kelvin), to the science of thermodynamics, particularly focusing on the first of the three thermodynamic laws. He developed the Rankine scale, an equivalent to the Kelvin scale of temperature, but in degrees Fahrenheit rather than Celsius.

Rankine developed a complete theory of the steam engine and indeed of all heat engines. His manuals of engineering science and practice were used for many decades after their publication in the 1850s and 1860s. He published several hundred papers and notes on science and engineering topics, from 1840 onwards, and his interests were extremely varied, including, in his youth, botanymusic theory and number theory, and, in his mature years, most major branches of science, mathematics and engineering.

He was an enthusiastic amateur singer, pianist and cellist who composed his own humorous songs.

Thermodynamics

Undaunted, Rankine returned to his youthful fascination with the mechanics of the heat engine. Though his theory of circulating streams of elastic vortices whose volumes spontaneously adapted to their environment sounds fanciful to scientists trained on a modern account, by 1849, he had succeeded in finding the relationship between saturated vapour pressure and temperature. The following year, he used his theory to establish relationships between the temperature, pressure and density of gases, and expressions for the latent heat of evaporation of a liquid. He accurately predicted the surprising fact that the apparent specific heat of saturated steam would be negative.[6]

Emboldened by his success, in 1851 he set out to calculate the efficiency of heat engines and used his theory as a basis to deduce the principle that the maximum efficiency possible for any heat engine is a function only of the two temperatures between which it operates. Though a similar result had already been derived by Rudolf Clausius and William Thomson, Rankine claimed that his result rested upon his hypothesis of molecular vortices alone, rather than upon Carnot's theory or some other additional assumption. The work marked the first step on Rankine's journey to develop a more complete theory of heat.

Rankine later recast the results of his molecular theories in terms of a macroscopic account of energy and its transformations. He defined and distinguished between actual energy, which was lost in dynamic processes, and potential energy, by which it was replaced. He assumed the sum of the two energies to be constant, an idea that was already, though only recently, familiar from the law of conservation of energy. From 1854, he made wide use of his thermodynamic function, which he later realised was identical to the entropy of Clausius. By 1855, Rankine had formulated a science of energetics which gave an account of dynamics in terms of energy and its transformations rather than force and motion. The theory was very influential in the 1890s. In 1859 he proposed the Rankine scale of temperature, an absolute or thermodynamic scale whose degree is equal to a Fahrenheit degree.

Energetics offered Rankine an alternative, and rather more mainstream, approach to his science and, from the mid-1850s, he made rather less use of his molecular vortices. Yet he still claimed that Maxwell's work on electromagnetics was effectively an extension of his model. And, in 1864, he contended that the microscopic theories of heat proposed by Clausius and James Clerk Maxwell, based on linear atomic motion, were inadequate. It was only in 1869 that Rankine admitted the success of these rival theories. By that time, his own model of the atom had become almost identical with that of Thomson.

As was his constant aim, especially as a teacher of engineering, he used his own theories to develop a number of practical results and to elucidate their physical principles including:

  • The Rankine–Hugoniot equation for the propagation of shock waves, which governs the behaviour of shock waves normal to the oncoming flow; it is named after the physicist Rankine and the French engineer Pierre Henri Hugoniot;
  • The Rankine cycle, an analysis of an ideal heat engine with a condenser. Like other thermodynamic cycles, the maximum efficiency of the Rankine cycle is given by calculating the maximum efficiency of the Carnot cycle (see the short example after this list);
  • Properties of steam, gases and vapours.
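As referenced in the Rankine cycle item above, here is a minimal numerical sketch (not from the source) of the Carnot bound on cycle efficiency; the boiler and condenser temperatures are illustrative assumptions:

T_hot, T_cold = 823.15, 303.15          # ~550 C boiler, ~30 C condenser (assumed), in kelvin
eta_max = 1.0 - T_cold / T_hot          # the Carnot limit depends only on the two temperatures
print(f"Carnot efficiency limit ~ {eta_max:.1%}")   # ~63%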

The history of rotordynamics is replete with the interplay of theory and practice. Rankine first performed an analysis of a spinning shaft in 1869, but his model was not adequate and he predicted that supercritical speeds could not be attained.

Fatigue studies

Drawing of a fatigue failure in an axle, 1843.

Rankine was one of the first engineers to recognise that fatigue failures of railway axles were caused by the initiation and growth of brittle cracks. In the early 1840s he examined many broken axles, especially after the Versailles train crash of 1842 when a locomotive axle suddenly fractured and led to the death of over 50 passengers. He showed that the axles had failed by progressive growth of a brittle crack from a shoulder or other stress concentration source on the shaft, such as a keyway. He was supported by similar direct analysis of failed axles by Joseph Glynn, where the axles failed by slow growth of a brittle crack in a process now known as metal fatigue. It was likely that the front axle of one of the locomotives involved in the Versailles train crash failed in a similar way. Rankine presented his conclusions in a paper delivered to the Institution of Civil Engineers. His work was ignored, however, by many engineers who persisted in believing that stress could cause "re-crystallisation" of the metal, a myth which has persisted even to recent times. The theory of recrystallisation was quite wrong, and inhibited worthwhile research until the work of William Fairbairn a few years later, which showed the weakening effect of repeated flexure on large beams. Nevertheless, fatigue remained a serious and poorly understood phenomenon, and was the root cause of many accidents on the railways and elsewhere. It is still a serious problem, but at least is much better understood today, and so can be prevented by careful design.

https://en.wikipedia.org/wiki/William_Rankine


The Rankine vortex is a simple mathematical model of a vortex in a viscous fluid. It is named after its discoverer, William John Macquorn Rankine.

The vortices observed in nature are usually modelled with an irrotational (potential or free) vortex. However, in a potential vortex the velocity becomes infinite at the vortex center. In reality, very close to the origin, the motion resembles a solid body rotation. The Rankine vortex model assumes a solid-body rotation inside a cylinder of radius a and a potential vortex outside the cylinder. The radius a is referred to as the vortex-core radius. The velocity components (vr, vθ, vz) of the Rankine vortex, expressed in the cylindrical coordinate system (r, θ, z), are given by[1]

vθ(r) = Γ r / (2π a2) for r ≤ a,  vθ(r) = Γ / (2π r) for r > a,  vr = vz = 0,

where Γ is the circulation strength of the Rankine vortex. Since solid-body rotation is characterized by an azimuthal velocity Ω r, where Ω is the constant angular velocity, one can also use the parameter Ω = Γ / (2π a2) to characterize the vortex.

The vorticity field ω associated with the Rankine vortex is

ωz = 2Ω for r ≤ a,  ωz = 0 for r > a (with ωr = ωθ = 0).

Inside the vortex core, the vorticity is constant and twice the angular velocity, whereas outside the core, the flow is irrotational. In reality, vortex cores are not always exactly circular, nor is the vorticity uniform within the vortex core.
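A minimal sketch (not from the source) evaluating the piecewise velocity profile above for illustrative values of the circulation Γ and core radius a:

import math

def v_theta(r, gamma=1.0, a=0.5):
    if r <= a:
        return gamma * r / (2 * math.pi * a * a)   # solid-body core
    return gamma / (2 * math.pi * r)               # irrotational outer region

for r in (0.1, 0.25, 0.5, 1.0, 2.0):
    print(f"r = {r:4.2f}  v_theta = {v_theta(r):.3f}")   # peaks at r = a, then decays as 1/r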



Animation of a Rankine vortex. Free-floating test particles reveal the velocity and vorticity pattern.
https://en.wikipedia.org/wiki/Rankine_vortex




Ideas of reference and delusions of reference describe the phenomenon of an individual experiencing innocuous events or mere coincidences[1] and believing they have strong personal significance.[2] It is "the notion that everything one perceives in the world relates to one's own destiny", usually in a negative and hostile manner.[3]

In psychiatry, delusions of reference form part of the diagnostic criteria for psychotic illnesses such as schizophrenia,[4] delusional disorder, and bipolar disorder (during the elevated stages of mania), as well as schizotypal personality disorder[5] and even autism during periods of intense stress.[6] To a lesser extent, it can be a hallmark of paranoid personality disorder, as well as body dysmorphic disorder. Such symptoms can also be caused by intoxication, for example with stimulants such as methamphetamine.

https://en.wikipedia.org/wiki/Ideas_and_delusions_of_reference


Hostile attribution bias, or hostile attribution of intent, is the tendency to interpret others' behaviors as having hostile intent, even when the behavior is ambiguous or benign.[1][2][3] For example, a person with high levels of hostile attribution bias might see two people laughing and immediately interpret this behavior as two people laughing about them, even though the behavior was ambiguous and may have been benign.

The term "hostile attribution bias" was first coined in 1980 by Nasby, Hayden, and DePaulo who noticed, along with several other key pioneers in this research area (e.g., Kenneth A. Dodge), that a subgroup of children tend to attribute hostile intent to ambiguous social situations more often than other children.[1][2] Since then, hostile attribution bias has been conceptualized as a bias of social information processing (similar to other attribution biases), including the way individuals perceive, interpret, and select responses to situations.[4][5] While occasional hostile attribution bias is normative (particularly for younger children), researchers have found that individuals who exhibit consistent and high levels of hostile attribution bias across development are much more likely to engage in aggressive behavior (e.g., hitting/fighting, reacting violently, verbal or relational aggression) toward others.[3][6]

In addition, hostile attribution bias is hypothesized to be one important pathway through which other risk factors, such as peer rejection or harsh parenting behavior, lead to aggression. For example, children exposed to peer teasing at school or child abuse at home are much more likely to develop high levels of hostile attribution bias, which then lead them to behave aggressively at school and/or at home. Thus, in addition to partially explaining one way aggression develops, hostile attribution bias also represents a target for the intervention and prevention of aggressive behaviors.[3]

https://en.wikipedia.org/wiki/Hostile_attribution_bias


The United States of America (U.S.A. or USA), commonly known as the United States (U.S. or US) or America, is a country primarily located in North America. It consists of 50 states, a federal district, five major unincorporated territories, 326 Indian reservations, and some minor possessions.[j] At 3.8 million square miles (9.8 million square kilometers), it is the world's third- or fourth-largest country by total area.[d] The United States shares significant land borders with Canada to the north and Mexico to the south, as well as limited maritime borders with the Bahamas, Cuba, and Russia.[22] With a population of more than 331 million people, it is the third most populous country in the world. The national capital is Washington, D.C., and the most populous city is New York City.

https://en.wikipedia.org/wiki/United_States


This timeline of prehistory covers the time from the first appearance of Homo sapiens in Africa 315,000 years ago to the invention of writing and the beginning of history, 5,000 years ago. It thus covers the time from the Middle Paleolithic (Old Stone Age) to the very beginnings of world history.

All dates are approximate and subject to revision based on new discoveries or analyses.

https://en.wikipedia.org/wiki/Timeline_of_prehistory


In psychology, an attribution bias or attributional bias is a cognitive bias that refers to the systematic errors made when people evaluate or try to find reasons for their own and others' behaviors.[1][2][3] People constantly make attributions—judgements and assumptions about why people behave in certain ways. However, attributions do not always accurately reflect reality. Rather than operating as objective perceivers, people are prone to perceptual errors that lead to biased interpretations of their social world.[4][5] Attribution biases are present in everyday life. For example, when a driver cuts someone off, the person who has been cut off is often more likely to attribute blame to the reckless driver's inherent personality traits (e.g., "That driver is rude and incompetent") rather than situational circumstances (e.g., "That driver may have been late to work and was not paying attention"). Additionally, there are many different types of attribution biases, such as the ultimate attribution error, fundamental attribution error, actor-observer bias, and hostile attribution bias. Each of these biases describes a specific tendency that people exhibit when reasoning about the cause of different behaviors.

Since the early work, researchers have continued to examine how and why people exhibit biased interpretations of social information.[2][6] Many different types of attribution biases have been identified, and more recent psychological research on these biases has examined how attribution biases can subsequently affect emotions and behavior.[7][8][9]

https://en.wikipedia.org/wiki/Attribution_bias


Racism is the belief that groups of humans possess different behavioral traits corresponding to physical appearance and can be divided based on the superiority of one race over another.[1][2][3][4] It may also mean prejudice, discrimination, or antagonism directed against other people because they are of a different race or ethnicity.[2][3] Modern variants of racism are often based in social perceptions of biological differences between peoples. These views can take the form of social actions, practices or beliefs, or political systems in which different races are ranked as inherently superior or inferior to each other, based on presumed shared inheritable traits, abilities, or qualities.[2][3][5] There have been attempts to legitimise racist beliefs through scientific means, which have been overwhelmingly shown to be unfounded.

In terms of political systems (e.g., apartheid) that support the expression of prejudice or aversion in discriminatory practices or laws, racist ideology may include associated social aspects such as nativism, xenophobia, otherness, segregation, hierarchical ranking, and supremacism.

https://en.wikipedia.org/wiki/Racism


Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary.

https://en.wikipedia.org/wiki/Order_theory


A hierarchy (from Greek: ἱεραρχία, hierarkhia, 'rule of a high priest', from hierarkhes, 'president of sacred rites') is an arrangement of items (objects, names, values, categories, etc.) that are represented as being "above", "below", or "at the same level as" one another. Hierarchy is an important concept in a wide variety of fields, such as philosophy, architecture, design, mathematics, computer science, organizational theory, systems theory, systematic biology, and the social sciences (especially political philosophy).

A hierarchy can link entities either directly or indirectly, and either vertically or diagonally. The only direct links in a hierarchy, insofar as they are hierarchical, are to one's immediate superior or to one of one's subordinates, although a system that is largely hierarchical can also incorporate alternative hierarchies. Hierarchical links can extend "vertically" upwards or downwards via multiple links in the same direction, following a path. All parts of the hierarchy that are not linked vertically to one another nevertheless can be "horizontally" linked through a path by traveling up the hierarchy to find a common direct or indirect superior, and then down again. This is akin to two co-workers or colleagues; each reports to a common superior, but they have the same relative amount of authority. Organizational forms exist that are both alternative and complementary to hierarchy. Heterarchy is one such form.

https://en.wikipedia.org/wiki/Hierarchy


Hypokeimenon (Greek: ὑποκείμενον), later often material substratum, is a term in metaphysics which literally means the "underlying thing" (Latin: subiectum).

To search for the hypokeimenon is to search for that substance that persists in a thing going through change—its basic essence.

According to Aristotle's definition,[1] the hypokeimenon is something of which other things can be predicated, but which cannot itself be predicated of anything else.

https://en.wikipedia.org/wiki/Hypokeimenon


A metaphor is a figure of speech that, for rhetorical effect, directly refers to one thing by mentioning another.[1] It may provide (or obscure) clarity or identify hidden similarities between two different ideas. Metaphors are often compared with other types of figurative language, such as antithesis, hyperbole, metonymy and simile.[2]

https://en.wikipedia.org/wiki/Metaphor


https://en.wikipedia.org/wiki/Language

https://en.wikipedia.org/wiki/structure

https://en.wikipedia.org/wiki/genetics

https://en.wikipedia.org/wiki/mortality

https://en.wikipedia.org/wiki/capacity


Robert Boyle FRS[5] (/bɔɪl/; 25 January 1627 – 31 December 1691) was an Anglo-Irish[6] natural philosopherchemistphysicist, and inventor. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. He is best known for Boyle's law,[7] which describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.[8] Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his writings in theology.[9][10][11][12]

Robert Boyle was an alchemist;[24] and believing the transmutation of metals to be a possibility, he carried out experiments in the hope of achieving it; and he was instrumental in obtaining the repeal, in 1689, of the statute of Henry IV against multiplying gold and silver.[25][13] With all the important work he accomplished in physics – the enunciation of Boyle's law, the discovery of the part taken by air in the propagation of sound, and investigations on the expansive force of freezing water, on specific gravities and refractive powers, on crystals, on electricity, on colour, on hydrostatics, etc. – chemistry was his peculiar and favourite study. His first book on the subject was The Sceptical Chymist, published in 1661, in which he criticised the "experiments whereby vulgar Spagyrists are wont to endeavour to evince their Salt, Sulphur and Mercury to be the true Principles of Things." For him chemistry was the science of the composition of substances, not merely an adjunct to the arts of the alchemist or the physician.[13]

As a founder of the Royal Society, he was elected a Fellow of the Royal Society (FRS) in 1663.[5] Boyle's law is named in his honour. The Royal Society of Chemistry issues a Robert Boyle Prize for Analytical Science, named in his honour. The Boyle Medal for Scientific Excellence in Ireland, inaugurated in 1899, is awarded jointly by the Royal Dublin Society and The Irish Times.[34] Launched in 2012, the Robert Boyle Summer School, organized by the Waterford Institute of Technology with support from Lismore Castle, is held annually to honour the heritage of Robert Boyle.[35]

Reading in 1657 of Otto von Guericke's air pump, he set himself, with the assistance of Robert Hooke, to devise improvements in its construction, and with the resulting "machina Boyleana" or "Pneumatical Engine", finished in 1659, he began a series of experiments on the properties of air and coined the term factitious airs.[7][13] An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air, and its Effects.[13]
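A minimal sketch of Boyle's law as stated above, P1·V1 = P2·V2 at constant temperature (the numbers are illustrative):

def boyle_new_volume(p1, v1, p2):
    # Inverse proportionality of pressure and volume for a fixed amount of gas.
    return p1 * v1 / p2

# 2.0 L of gas at 100 kPa compressed isothermally to 250 kPa.
print(f"new volume = {boyle_new_volume(100.0, 2.0, 250.0):.2f} L")   # 0.80 L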

https://en.wikipedia.org/wiki/Robert_Boyle


https://en.wikipedia.org/wiki/Galileo_Galilei

https://en.wikipedia.org/wiki/Isaac_Newton

https://en.wikipedia.org/wiki/Robert_Boyle

https://en.wikipedia.org/wiki/Corpuscularianism

https://en.wikipedia.org/wiki/Otto_von_Guericke

https://en.wikipedia.org/wiki/Factitious_airs

https://en.wikipedia.org/wiki/Robert_Hooke


The phlogiston theory is a superseded scientific theory that postulated the existence of a fire-like element called phlogiston (/flɒˈdʒɪstən, floʊ-, -ɒn/)[1][2] contained within combustible bodies and released during combustion. The name comes from the Ancient Greek φλογιστόν phlogistón (burning up), from φλόξ phlóx (flame). The idea was first proposed in 1667 by Johann Joachim Becher and later put together more formally by Georg Ernst Stahl. Phlogiston theory attempted to explain chemical processes of weight increase such as combustion and rusting, now collectively known as oxidation, and was abandoned before the end of the 18th century following experiments by Antoine Lavoisier and others. Phlogiston theory led to experiments which ultimately concluded with the discovery of oxygen.

https://en.wikipedia.org/wiki/Phlogiston_theory

https://en.wikipedia.org/wiki/Model


Franz Schubert's Symphony No. 8 in B minor, D. 759 (sometimes renumbered as Symphony No. 7,[1] in accordance with the revised Deutsch catalogue and the Neue Schubert-Ausgabe[2]), commonly known as the Unfinished Symphony (German: Unvollendete), is a musical composition that Schubert started in 1822 but left with only two movements—though he lived for another six years. A scherzo, nearly completed in piano score but with only two pages orchestrated, also survives.

https://en.wikipedia.org/wiki/Symphony_No._8_(Schubert)


The Éléments de géométrie algébrique ("Elements of Algebraic Geometry") by Alexander Grothendieck (assisted by Jean Dieudonné), or EGA for short, is a rigorous treatise, in French, on algebraic geometry that was published (in eight parts or fascicles) from 1960 through 1967 by the Institut des Hautes Études Scientifiques. In it, Grothendieck established systematic foundations of algebraic geometry, building upon the concept of schemes, which he defined. The work is now considered the foundation stone and basic reference of modern algebraic geometry.

https://en.wikipedia.org/wiki/Éléments_de_géométrie_algébrique


Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory, as a branch of mathematics, is mostly concerned with those that are relevant to mathematics as a whole.

The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradoxCantor's paradox and Burali-Forti paradox) various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied.

Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice.[1] Beside its foundational role, set theory also provides the framework to develop a mathematical theory of infinity, and has various applications in computer science, philosophy and formal semantics. Its foundational appeal, together with its paradoxes, its implications for the concept of infinity and its multiple applications, have made set theory an area of major interest for logicians and philosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals.

https://en.wikipedia.org/wiki/Set_theory


Mortality is the state of being mortal, or susceptible to death; the opposite of immortality.

https://en.wikipedia.org/wiki/Mortality

https://en.wikipedia.org/wiki/Disease#Morbidity


In Greek mythology, Achilles (/əˈkɪliːz/ ə-KIL-eez; Latin: [äˈkʰɪlːʲeːs̠]) or Achilleus (Ancient Greek: Ἀχιλλεύς, romanized: Akhilleús, [a.kʰil.lěu̯s]) was a hero of the Trojan War, the greatest of all the Greek warriors, and is the central character of Homer's Iliad. He was the son of the Nereid Thetis and Peleus, king of Phthia.

Achilles' most notable feat during the Trojan War was the slaying of the Trojan prince Hector outside the gates of Troy. Although the death of Achilles is not presented in the Iliad, other sources concur that he was killed near the end of the Trojan War by Paris, who shot him with an arrow. Later legends (beginning with Statius' unfinished epic Achilleid, written in the 1st century AD) state that Achilles was invulnerable in all of his body except for one heel, because when his mother Thetis dipped him in the river Styx as an infant, she held him by one of his heels. Alluding to these legends, the term "Achilles' heel" has come to mean a point of weakness, especially in someone or something with an otherwise strong constitution. The Achilles tendon is also named after him due to these legends.

https://en.wikipedia.org/wiki/Achilles





Hypokeimenon, mortality, greek, achilles, achilles heel, mortal, immortality, philosophy, logic, value, Instrumental, intrinsic, self, weight, hierarchy, mirror, etc..


