Blog Archive

Tuesday, May 16, 2023

05-15-2023-1952 - Magnetohydrodynamics, plasma physics (link), shocks and discontinuities (magnetohydrodynamics)(link, paragraph/brief/intro/etc.), etc. (draft)

https://en.wikipedia.org/wiki/Long-term_memory

In psychology, the misattribution of memory or source misattribution is the misidentification of the origin of a memory by the person making the memory recall. Misattribution is likely to occur when individuals are unable to monitor and control the influence of their attitudes on their judgments at the time of retrieval.[1] Misattribution is divided into three components: cryptomnesia, false memories, and source confusion. It was originally noted as one of Daniel Schacter's seven sins of memory.[2]

https://en.wikipedia.org/wiki/Misattribution_of_memory

Memory conformity, also known as social contagion of memory,[1] refers to the phenomenon in which memories or information reported by others are incorporated into an individual's own memory. Memory conformity is a memory error due to both social influences and cognitive mechanisms.[2] Social contamination of false memory can be exemplified in prominent situations involving social interactions, such as eyewitness testimony.[3][2][4] Research on memory conformity has revealed that such suggestibility and errors in source monitoring have far-reaching consequences, with important legal and social implications. It is one of many social influences on memory.

https://en.wikipedia.org/wiki/Memory_conformity

Memory

In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, including:

Misattribution of memory

In psychology, the misattribution of memory or source misattribution is the misidentification of the origin of a memory by the person making the memory recall. Misattribution is likely to occur when individuals are unable to monitor and control the influence of their attitudes on their judgments at the time of retrieval.[142] Misattribution is divided into three components: cryptomnesia, false memories, and source confusion. It was originally noted as one of Daniel Schacter's seven sins of memory.[143]

The misattributions include:

  • Cryptomnesia, where a memory is mistaken for novel thought or imagination, because there is no subjective experience of it being a memory.[144]
  • False memory, where imagination is mistaken for a memory.
  • Social cryptomnesia, a failure by people and society in general to remember the origin of a change, in which people know that a change has occurred in society, but forget how this change occurred; that is, the steps that were taken to bring this change about, and who took these steps. This has led to reduced social credit towards the minorities who made major sacrifices that led to the change in societal values.[145]
  • Source confusion, where episodic memories are confused with other information, creating distorted memories.[146]
  • Suggestibility, where ideas suggested by a questioner are mistaken for memory.
  • The Perky effect, where real images can influence imagined images, or be misremembered as imagined rather than real.

Other

  • Availability bias: Greater likelihood of recalling recent, nearby, or otherwise immediately available examples, and the imputation of importance to those examples over others.
  • Bizarreness effect: Bizarre material is better remembered than common material.
  • Boundary extension: Remembering the background of an image as being larger or more expansive than the foreground.[147]
  • Childhood amnesia: The retention of few memories from before the age of four.
  • Choice-supportive bias: The tendency to remember one's choices as better than they actually were.[148]
  • Confirmation bias: The tendency to search for, interpret, or recall information in a way that confirms one's beliefs or hypotheses.
  • Conservatism or regressive bias: Tendency to remember high values and high likelihoods/probabilities/frequencies as lower than they actually were, and low ones as higher than they actually were; based on the evidence, memories are not extreme enough.[149][150]
  • Consistency bias: Incorrectly remembering one's past attitudes and behaviour as resembling present attitudes and behaviour.[151]
  • Continued influence effect: Misinformation continues to influence memory and reasoning about an event, despite the misinformation having been corrected.[152] Cf. the misinformation effect, where the original memory is affected by incorrect information received later.
  • Context effect: That cognition and memory are dependent on context, such that out-of-context memories are more difficult to retrieve than in-context memories (e.g., recall time and accuracy for a work-related memory will be lower at home, and vice versa).
  • Cross-race effect: The tendency for people of one race to have difficulty identifying members of a race other than their own.
  • Egocentric bias: Recalling the past in a self-serving manner, e.g., remembering one's exam grades as being better than they were, or remembering a caught fish as bigger than it really was.
  • Euphoric recall: The tendency of people to remember past experiences in a positive light, while overlooking negative experiences associated with that event.
  • Fading affect bias: A bias in which the emotion associated with unpleasant memories fades more quickly than the emotion associated with positive events.[153]
  • Generation effect (self-generation effect): That self-generated information is remembered best. For instance, people are better able to recall memories of statements that they have generated than similar statements generated by others.
  • Gender differences in eyewitness memory: The tendency for a witness to remember more details about someone of the same gender.
  • Google effect: The tendency to forget information that can be found readily online by using Internet search engines.
  • Hindsight bias ("I-knew-it-all-along" effect): The inclination to see past events as having been predictable.
  • Humor effect: That humorous items are more easily remembered than non-humorous ones, which might be explained by the distinctiveness of humor, the increased cognitive processing time to understand the humor, or the emotional arousal caused by the humor.[154]
  • Illusory correlation: Inaccurately seeing a relationship between two events related by coincidence.[155]
  • Illusory truth effect (illusion-of-truth effect): People are more likely to identify as true statements those they have previously heard (even if they cannot consciously remember having heard them), regardless of the actual validity of the statement. In other words, a person is more likely to believe a familiar statement than an unfamiliar one.
  • Lag effect: The phenomenon whereby learning is greater when studying is spread out over time, as opposed to studying the same amount of time in a single session. See also spacing effect.
  • Leveling and sharpening: Memory distortions introduced by the loss of details in a recollection over time, often concurrent with sharpening or selective recollection of certain details that take on exaggerated significance in relation to the details or aspects of the experience lost through leveling. Both biases may be reinforced over time, and by repeated recollection or re-telling of a memory.[156]
  • Levels-of-processing effect: That different methods of encoding information into memory have different levels of effectiveness.[157]
  • List-length effect: That a smaller percentage of items is remembered from a longer list, but that as the length of the list increases, the absolute number of items remembered increases as well.[158]
  • Memory inhibition: Being shown some items from a list makes it harder to retrieve the other items (e.g., Slamecka, 1968).
  • Misinformation effect: Memory becoming less accurate because of interference from post-event information.[159] Cf. the continued influence effect, where misinformation about an event, despite later being corrected, continues to influence memory about the event.
  • Modality effect: That memory recall is higher for the last items of a list when the list items were received via speech than when they were received through writing.
  • Mood-congruent memory bias (state-dependent memory): The improved recall of information congruent with one's current mood.
  • Negativity bias or negativity effect: Psychological phenomenon by which humans have a greater recall of unpleasant memories compared with positive memories.[160][113] (See also actor-observer bias, group attribution error, positivity effect, and negativity effect.)[126]
  • Next-in-line effect: When taking turns speaking in a group using a predetermined order (e.g., going clockwise around a room, taking numbers, etc.), people tend to have diminished recall for the words of the person who spoke immediately before them.[161]
  • Part-list cueing effect: That being shown some items from a list and later retrieving one item causes it to become harder to retrieve the other items.[162]
  • Peak–end rule: That people seem to perceive not the sum of an experience but the average of how it was at its peak (e.g., pleasant or unpleasant) and how it ended.
  • Persistence: The unwanted recurrence of memories of a traumatic event.
  • Picture superiority effect: The notion that concepts learned by viewing pictures are more easily and frequently recalled than concepts learned by viewing their written-word counterparts.[163][164][165][166][167][168]
  • Placement bias: Tendency to remember ourselves as better than others at tasks at which we rate ourselves above average (also illusory superiority or better-than-average effect)[169] and to remember ourselves as worse than others at tasks at which we rate ourselves below average (also worse-than-average effect).[170]
  • Positivity effect (socioemotional selectivity theory): That older adults favor positive over negative information in their memories. See also euphoric recall.
  • Primacy effect: Where an item at the beginning of a list is more easily recalled. A form of serial position effect. See also recency effect and suffix effect.
  • Processing difficulty effect: That information that takes longer to read and is thought about more (processed with more difficulty) is more easily remembered.[171] See also levels-of-processing effect.
  • Recency effect: A form of serial position effect where an item at the end of a list is easier to recall. This can be disrupted by the suffix effect. See also primacy effect.
  • Reminiscence bump: The recalling of more personal events from adolescence and early adulthood than personal events from other lifetime periods.[172]
  • Repetition blindness: Unexpected difficulty in remembering more than one instance of a visual sequence.
  • Rosy retrospection: The remembering of the past as having been better than it really was.
  • Saying-is-believing effect: Communicating a socially tuned message to an audience can lead to a bias of identifying the tuned message as one's own thoughts.
  • Self-relevance effect: That memories relating to the self are better recalled than similar information relating to others.
  • Serial position effect: That items near the end of a sequence are the easiest to recall, followed by the items at the beginning of a sequence; items in the middle are the least likely to be remembered.[173] See also recency effect, primacy effect, and suffix effect.
  • Spacing effect: That information is better recalled if exposure to it is repeated over a long span of time rather than a short one. See also lag effect.
  • Spotlight effect: The tendency to overestimate the amount that other people notice one's appearance or behavior.
  • Stereotype bias or stereotypical bias: Memory distorted towards stereotypes (e.g., racial or gender).
  • Suffix effect: Diminishment of the recency effect because a sound item is appended to the list that the subject is not required to recall.[174][175] A form of serial position effect. Cf. recency effect and primacy effect.
  • Subadditivity effect: The tendency to estimate that the likelihood of a remembered event is less than the sum of its (more than two) mutually exclusive components.[176]
  • Tachypsychia: When time perceived by the individual either lengthens, making events appear to slow down, or contracts.[177]
  • Telescoping effect: The tendency to displace recent events backwards in time and remote events forward in time, so that recent events appear more remote and remote events more recent.
  • Testing effect: That one more easily recalls information one has read by rewriting it instead of rereading it.[178] Frequent testing of material that has been committed to memory improves memory recall.
  • Tip-of-the-tongue phenomenon: When a subject is able to recall parts of an item, or related information, but is frustratingly unable to recall the whole item. This is thought to be an instance of "blocking" where multiple similar memories are being recalled and interfere with each other.[144]
  • Travis syndrome: Overestimating the significance of the present.[179] It is related to chronological snobbery, with possibly an appeal-to-novelty logical fallacy being part of the bias.
  • Verbatim effect: That the "gist" of what someone has said is better remembered than the verbatim wording.[180] This is because memories are representations, not exact copies.
  • Von Restorff effect: That an item that sticks out is more likely to be remembered than other items.[181]
  • Zeigarnik effect: That uncompleted or interrupted tasks are remembered better than completed ones.

Footnotes


  • Haselton MG, Nettle D, Andrews PW (2005). "The evolution of cognitive bias" (PDF). In Buss DM (ed.). The Handbook of Evolutionary Psychology. Hoboken, NJ: John Wiley & Sons Inc. pp. 724–746.

  • "Cognitive Bias – Association for Psychological Science". www.psychologicalscience.org. Retrieved 2018-10-10.

  • Thomas O (2018-01-19). "Two decades of cognitive bias research in entrepreneurship: What do we know and where do we go from here?". Management Review Quarterly. 68 (2): 107–143. doi:10.1007/s11301-018-0135-9. ISSN 2198-1620. S2CID 148611312.

  • Dougherty MR, Gettys CF, Ogden EE (1999). "MINERVA-DM: A memory processes model for judgments of likelihood" (PDF). Psychological Review. 106 (1): 180–209. doi:10.1037/0033-295x.106.1.180.

  • Hilbert M (March 2012). "Toward a synthesis of cognitive biases: how noisy information processing can bias human decision making". Psychological Bulletin. 138 (2): 211–37. doi:10.1037/a0025940. PMID 22122235.

  • Gigerenzer G (2006). "Bounded and Rational". In Stainton RJ (ed.). Contemporary Debates in Cognitive Science. Blackwell. p. 129. ISBN 978-1405113045.

  • MacCoun RJ (1998). "Biases in the interpretation and use of research results" (PDF). Annual Review of Psychology. 49 (1): 259–287. doi:10.1146/annurev.psych.49.1.259. PMID 15012470.

  • Nickerson RS (1998). "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises" (PDF). Review of General Psychology. 2 (2): 175–220 [198]. doi:10.1037/1089-2680.2.2.175. S2CID 8508954.

  • Dardenne B, Leyens JP (1995). "Confirmation Bias as a Social Skill". Personality and Social Psychology Bulletin. 21 (11): 1229–1239. doi:10.1177/01461672952111011. S2CID 146709087.

  • Alexander WH, Brown JW (June 2010). "Hyperbolically discounted temporal difference learning". Neural Computation. 22 (6): 1511–1527. doi:10.1162/neco.2010.08-09-1080. PMC 3005720. PMID 20100071.

  • Zhang Y, Lewis M, Pellon M, Coleman P (2007). A Preliminary Research on Modeling Cognitive Agents for Social Environments in Multi-Agent Systems (PDF). 2007 AAAI Fall Symposium: Emergent agents and socialities: Social and organizational aspects of intelligence. Association for the Advancement of Artificial Intelligence. pp. 116–123.

  • Iverson GL, Brooks BL, Holdnack JA (2008). "Misdiagnosis of Cognitive Impairment in Forensic Neuropsychology". In Heilbronner RL (ed.). Neuropsychology in the Courtroom: Expert Analysis of Reports and Testimony. New York: Guilford Press. p. 248. ISBN 978-1593856342.

  • Kim M, Daniel JL (2020-01-02). "Common Source Bias, Key Informants, and Survey-Administrative Linked Data for Nonprofit Management Research". Public Performance & Management Review. 43 (1): 232–256. doi:10.1080/15309576.2019.1657915. ISSN 1530-9576. S2CID 203468837. Retrieved 23 June 2021.

  • DuCharme WW (1970). "Response bias explanation of conservative human inference". Journal of Experimental Psychology. 85 (1): 66–74. doi:10.1037/h0029546. hdl:2060/19700009379.

  • Edwards W (1968). "Conservatism in human information processing". In Kleinmuntz B (ed.). Formal representation of human judgment. New York: Wiley. pp. 17–52.

  • "The Psychology Guide: What Does Functional Fixedness Mean?". PsycholoGenie. Retrieved 2018-10-10.

  • Carroll RT. "apophenia". The Skeptic's Dictionary. Retrieved 17 July 2017.

  • Tversky A, Kahneman D (September 1974). "Judgment under Uncertainty: Heuristics and Biases". Science. 185 (4157): 1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. PMID 17835457. S2CID 143452957.

  • Fiedler K (1991). "The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations". Journal of Personality and Social Psychology. 60 (1): 24–36. doi:10.1037/0022-3514.60.1.24.

  • Schwarz N, Bless H, Strack F, Klumpp G, Rittenauer-Schatka H, Simons A (1991). "Ease of Retrieval as Information: Another Look at the Availability Heuristic" (PDF). Journal of Personality and Social Psychology. 61 (2): 195–202. doi:10.1037/0022-3514.61.2.195. Archived from the original (PDF) on 9 February 2014. Retrieved 19 Oct 2014.

  • Coley JD, Tanner KD (2012). "Common origins of diverse misconceptions: cognitive principles and the development of biology thinking". CBE: Life Sciences Education. 11 (3): 209–215. doi:10.1187/cbe.12-06-0074. PMC 3433289. PMID 22949417.

  • "The Real Reason We Dress Pets Like People". LiveScience.com. 3 March 2010. Retrieved 2015-11-16.

  • Harris LT, Fiske ST (January 2011). "Dehumanized Perception: A Psychological Means to Facilitate Atrocities, Torture, and Genocide?". Zeitschrift für Psychologie. 219 (3): 175–181. doi:10.1027/2151-2604/a000065. PMC 3915417. PMID 24511459.

  • Bar-Haim Y, Lamy D, Pergamin L, Bakermans-Kranenburg MJ, van IJzendoorn MH (January 2007). "Threat-related attentional bias in anxious and nonanxious individuals: a meta-analytic study". Psychological Bulletin. 133 (1): 1–24. doi:10.1037/0033-2909.133.1.1. PMID 17201568. S2CID 2861872.

  • Zwicky A (2005-08-07). "Just Between Dr. Language and I". Language Log.

  • Bellows A (March 2006). "The Baader-Meinhof Phenomenon". Damn Interesting. Retrieved 2020-02-16.

  • Kershner K (20 March 2015). "What's the Baader-Meinhof phenomenon?". howstuffworks.com. Retrieved 15 April 2018.

  • "The Baader-Meinhof Phenomenon? Or: The Joy Of Juxtaposition?". twincities.com. St. Paul Pioneer Press. 23 February 2007. Retrieved October 20, 2020. As you might guess, the phenomenon is named after an incident in which I was talking to a friend about the Baader-Meinhof gang (and this was many years after they were in the news). The next day, my friend phoned me and referred me to an article in that day's newspaper in which the Baader-Meinhof gang was mentioned.

  • Norton MI, Mochon D, Ariely D (2011). The "IKEA Effect": When Labor Leads to Love. Harvard Business School.

  • Lebowitz S (2 December 2016). "Harness the power of the 'Ben Franklin Effect' to get someone to like you". Business Insider. Retrieved 2018-10-10.

  • Oswald ME, Grosjean S (2004). "Confirmation Bias". In Pohl RF (ed.). Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 79–96. ISBN 978-1841693514. OCLC 55124398 – via archive.org.

  • Sanna LJ, Schwarz N, Stocker SL (2002). "When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight" (PDF). Journal of Experimental Psychology: Learning, Memory, and Cognition. 28 (3): 497–502. CiteSeerX 10.1.1.387.5964. doi:10.1037/0278-7393.28.3.497. ISSN 0278-7393. PMID 12018501.

  • Jeng M (2006). "A selected history of expectation bias in physics". American Journal of Physics. 74 (7): 578–583. arXiv:physics/0508199. Bibcode:2006AmJPh..74..578J. doi:10.1119/1.2186333. S2CID 119491123.

  • Schacter DL, Gilbert DT, Wegner DM (2011). Psychology (2nd ed.). Macmillan. p. 254. ISBN 978-1429237192.

  • Pronin E, Kugler MB (July 2007). "Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot". Journal of Experimental Social Psychology. 43 (4): 565–578. doi:10.1016/j.jesp.2006.05.011. ISSN 0022-1031.

  • Marks G, Miller N (1987). "Ten years of research on the false-consensus effect: An empirical and theoretical review". Psychological Bulletin. 102 (1): 72–90. doi:10.1037/0033-2909.102.1.72.

  • "False Uniqueness Bias (Social PsychologyY) – IResearchNet". 2016-01-13.

  • "The Barnum Demonstration". psych.fullerton.edu. Retrieved 2018-10-10.

  • Pronin E, Kruger J, Savitsky K, Ross L (October 2001). "You don't know me, but I know you: the illusion of asymmetric insight". Journal of Personality and Social Psychology. 81 (4): 639–656. doi:10.1037/0022-3514.81.4.639. PMID 11642351.

  • Thompson SC (1999). "Illusions of Control: How We Overestimate Our Personal Influence". Current Directions in Psychological Science. 8 (6): 187–190. doi:10.1111/1467-8721.00044. ISSN 0963-7214. JSTOR 20182602. S2CID 145714398.

  • Dierkes M, Antal AB, Child J, Nonaka I (2003). Handbook of Organizational Learning and Knowledge. Oxford University Press. p. 22. ISBN 978-0198295822. Retrieved 9 September 2013.

  • Hoorens V (1993). "Self-enhancement and Superiority Biases in Social Comparison". European Review of Social Psychology. 4 (1): 113–139. doi:10.1080/14792779343000040.

  • Adams PA, Adams JK (December 1960). "Confidence in the recognition and reproduction of words difficult to spell". The American Journal of Psychology. 73 (4): 544–552. doi:10.2307/1419942. JSTOR 1419942. PMID 13681411.

  • Hoffrage U (2004). "Overconfidence". In Pohl R (ed.). Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement and memory. Psychology Press. ISBN 978-1841693514.

  • Sutherland 2007, pp. 172–178

  • Sanna LJ, Schwarz N (July 2004). "Integrating temporal biases: the interplay of focal thoughts and accessibility experiences". Psychological Science. 15 (7): 474–481. doi:10.1111/j.0956-7976.2004.00704.x. PMID 15200632. S2CID 10998751.

  • Baron 1994, pp. 224–228

  • Västfjäll D, Slovic P, Mayorga M, Peters E (18 June 2014). "Compassion fade: affect and charity are greatest for a single child in need". PLOS ONE. 9 (6): e100115. Bibcode:2014PLoSO...9j0115V. doi:10.1371/journal.pone.0100115. PMC 4062481. PMID 24940738.

  • Fisk JE (2004). "Conjunction fallacy". In Pohl RF (ed.). Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 23–42. ISBN 978-1841693514. OCLC 55124398.

  • Fredrickson BL, Kahneman D (1993). "Duration Neglect in Retrospective Evaluations of Affective Episodes". Journal of Personality and Social Psychology. 65 (1): 45–55. Archived 2017-08-08 at the Wayback Machine.

  • Laibson D (1997). "Golden Eggs and Hyperbolic Discounting". Quarterly Journal of Economics. 112 (2): 443–477. CiteSeerX 10.1.1.337.3544. doi:10.1162/003355397555253. S2CID 763839.

  • Baron 1994, p. 353

  • Goddard K, Roudsari A, Wyatt JC (2011). "Automation Bias – A Hidden Issue for Clinical Decision Support System Use". International Perspectives in Health Informatics. Studies in Health Technology and Informatics. Vol. 164. IOS Press. pp. 17–22. doi:10.3233/978-1-60750-709-3-17.

  • Tackling social norms: a game changer for gender inequalities (Gender Social Norms Index). 2020 Human Development Perspectives. United Nations Development Programme. Retrieved 2020-06-10.

  • Bian L, Leslie SJ, Cimpian A (December 2018). "Evidence of bias against girls and women in contexts that emphasize intellectual ability". The American Psychologist. 73 (9): 1139–1153. doi:10.1037/amp0000427. PMID 30525794.

  • Hamilton MC (1991). "Masculine Bias in the Attribution of Personhood: People = Male, Male = People". Psychology of Women Quarterly. 15 (3): 393–402. doi:10.1111/j.1471-6402.1991.tb00415.x. ISSN 0361-6843. S2CID 143533483.

  • Plous 1993, pp. 38–41

  • "Evolution and cognitive biases: the decoy effect". FutureLearn. Retrieved 2018-10-10.

  • "The Default Effect: How to Leverage Bias and Influence Behavior". Influence at Work. 2012-01-11. Retrieved 2018-10-10.

  • Joffe-Walt C (12 May 2009). "Why We Spend Coins Faster Than Bills". All Things Considered.

  • Hsee CK, Zhang J (May 2004). "Distinction bias: misprediction and mischoice due to joint evaluation". Journal of Personality and Social Psychology. 86 (5): 680–695. CiteSeerX 10.1.1.484.9171. doi:10.1037/0022-3514.86.5.680. PMID 15161394.

  • Mike K, Hazzan O (2022). "What Is Common to Transportation and Health in Machine Learning Education? The Domain Neglect Bias". IEEE Transactions on Education: 1–8. doi:10.1109/TE.2022.3218013. ISSN 0018-9359. S2CID 253402007.

  • "Berkson's Paradox | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2018-10-10.

  • Kristal AS, Santos LR, G.I. Joe Phenomena: Understanding the Limits of Metacognitive Awareness on Debiasing (PDF), Harvard Business School

  • Investopedia Staff (2006-10-29). "Gambler's Fallacy/Monte Carlo Fallacy". Investopedia. Retrieved 2018-10-10.

  • Tuccio W (2011-01-01). "Heuristics to Improve Human Factors Performance in Aviation". Journal of Aviation/Aerospace Education & Research. 20 (3). doi:10.15394/jaaer.2011.1640. ISSN 2329-258X.

  • Baron, J. (in preparation). Thinking and Deciding, 4th edition. New York: Cambridge University Press.

  • Baron 1994, p. 372

  • de Meza D, Dawson C (January 24, 2018). "Wishful Thinking, Prudent Behavior: The Evolutionary Origin of Optimism, Loss Aversion and Disappointment Aversion". SSRN 3108432.

  • Dawson C, Johnson SG (8 April 2021). "Dread Aversion and Economic Preferences". SSRN 3822640.

  • (Kahneman, Knetsch & Thaler 1991, p. 193) Richard Thaler coined the term "endowment effect."

  • (Kahneman, Knetsch & Thaler 1991, p. 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."

  • Hardman 2009, p. 137

  • Kahneman, Knetsch & Thaler 1991, p. 193

  • Baron 1994, p. 382

  • Kruger J, Dunning D (December 1999). "Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments". Journal of Personality and Social Psychology. 77 (6): 1121–1134. CiteSeerX 10.1.1.64.2655. doi:10.1037/0022-3514.77.6.1121. PMID 10626367.

  • Van Boven L, Loewenstein G, Dunning D, Nordgren LF (2013). "Changing Places: A Dual Judgment Model of Empathy Gaps in Emotional Perspective Taking" (PDF). In Zanna MP, Olson JM (eds.). Advances in Experimental Social Psychology. Vol. 48. Academic Press. pp. 117–171. doi:10.1016/B978-0-12-407188-9.00003-X. ISBN 978-0124071889. Archived from the original (PDF) on 2016-05-28.

  • Lichtenstein S, Fischhoff B (1977). "Do those who know more also know more about how much they know?". Organizational Behavior and Human Performance. 20 (2): 159–183. doi:10.1016/0030-5073(77)90001-0.

  • Merkle EC (February 2009). "The disutility of the hard-easy effect in choice confidence". Psychonomic Bulletin & Review. 16 (1): 204–213. doi:10.3758/PBR.16.1.204. PMID 19145033.

  • Juslin P, Winman A, Olsson H (April 2000). "Naive empiricism and dogmatism in confidence research: a critical examination of the hard-easy effect". Psychological Review. 107 (2): 384–396. doi:10.1037/0033-295x.107.2.384. PMID 10789203.

  • Waytz A (26 January 2022). "2017 : What scientific term or concept ought to be more widely known?". Edge.org. Retrieved 26 January 2022.

  • Rozenblit L, Keil F (September 2002). "The misunderstood limits of folk science: an illusion of explanatory depth". Cognitive Science. Wiley. 26 (5): 521–562. doi:10.1207/s15516709cog2605_1. PMC 3062901. PMID 21442007.

  • Mills CM, Keil FC (January 2004). "Knowing the limits of one's understanding: the development of an awareness of an illusion of explanatory depth". Journal of Experimental Child Psychology. Elsevier BV. 87 (1): 1–32. doi:10.1016/j.jecp.2003.09.003. PMID 14698687.

  • "Imposter Syndrome | Psychology Today".

  • "Objectivity illusion". APA Dictionary of Psychology. Washington, DC: American Psychological Association. n.d. Retrieved 2022-01-15.

  • Klauer KC, Musch J, Naumer B (October 2000). "On belief bias in syllogistic reasoning". Psychological Review. 107 (4): 852–884. doi:10.1037/0033-295X.107.4.852. PMID 11089409.

  • "Why do we prefer doing something to doing nothing". The Decision Lab. 30 September 2021. Retrieved 30 November 2021.

  • Patt A, Zeckhauser R (July 2000). "Action Bias and Environmental Decisions". Journal of Risk and Uncertainty. 21: 45–72. doi:10.1023/A:1026517309871. S2CID 154662174. Retrieved 30 November 2021.

  • Gupta S (7 April 2021). "People add by default even when subtraction makes more sense". Science News. Retrieved 10 May 2021.

  • Adams GS, Converse BA, Hales AH, Klotz LE (April 2021). "People systematically overlook subtractive changes". Nature. 592 (7853): 258–261. Bibcode:2021Natur.592..258A. doi:10.1038/s41586-021-03380-y. PMID 33828317. S2CID 233185662.

  • Ackerman MS, ed. (2003). Sharing expertise beyond knowledge management (online ed.). Cambridge, Massachusetts: MIT Press. p. 7. ISBN 978-0262011952.

  • Quartz SR, The State Of The World Isn't Nearly As Bad As You Think, Edge Foundation, Inc., retrieved 2016-02-17

  • Quoidbach J, Gilbert DT, Wilson TD (January 2013). "The end of history illusion" (PDF). Science. 339 (6115): 96–98. Bibcode:2013Sci...339...96Q. doi:10.1126/science.1229294. PMID 23288539. S2CID 39240210. Archived from the original (PDF) on 2013-01-13. Young people, middle-aged people, and older people all believed they had changed a lot in the past but would change relatively little in the future.

  • Haring KS, Watanabe K, Velonaki M, Tossell CC, Finomore V (2018). "FFAB-The Form Function Attribution Bias in Human Robot Interaction". IEEE Transactions on Cognitive and Developmental Systems. 10 (4): 843–851. doi:10.1109/TCDS.2018.2851569. S2CID 54459747.

  • Kara-Yakoubian M (2022-07-29). "Psychologists uncover evidence of a fundamental pain bias". PsyPost. Retrieved 2022-11-27.

  • Pohl RF (2004). "Hindsight Bias". In Pohl RF (ed.). Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 363–378. ISBN 978-1841693514. OCLC 55124398.

  • Baron 1994, pp. 258–259

  • Danziger S, Levav J, Avnaim-Pesso L (April 2011). "Extraneous factors in judicial decisions". Proceedings of the National Academy of Sciences of the United States of America. 108 (17): 6889–6892. Bibcode:2011PNAS..108.6889D. doi:10.1073/pnas.1018033108. PMC 3084045. PMID 21482790.

  • Zaman J, De Peuter S, Van Diest I, Van den Bergh O, Vlaeyen JW (November 2016). "Interoceptive cues predicting exteroceptive events". International Journal of Psychophysiology. 109: 100–106. doi:10.1016/j.ijpsycho.2016.09.003. PMID 27616473.

  • Barrett LF, Simmons WK (July 2015). "Interoceptive predictions in the brain". Nature Reviews. Neuroscience. 16 (7): 419–429. doi:10.1038/nrn3950. PMC 4731102. PMID 26016744.

  • Damasio AR (October 1996). "The somatic marker hypothesis and the possible functions of the prefrontal cortex". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 351 (1346): 1413–1420. doi:10.1098/rstb.1996.0125. PMID 8941953. S2CID 1841280.

  • Shafir E, Diamond P, Tversky A (2000). "Money Illusion". In Kahneman D, Tversky A (eds.). Choices, values, and frames. Cambridge University Press. pp. 335–355. ISBN 978-0521627498.

  • Marcatto F, Cosulich A, Ferrante D (2015). "Once bitten, twice shy: Experienced regret and non-adaptive choice switching". PeerJ. 3: e1035. doi:10.7717/peerj.1035. PMC 4476096. PMID 26157618.

  • Bornstein RF, Crave-Lemley C (2004). "Mere exposure effect". In Pohl RF (ed.). Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 215–234. ISBN 978-1841693514. OCLC 55124398.

  • Baron 1994, p. 386

  • Baron 1994, p. 44

  • Hardman 2009, p. 104

  • O'Donoghue T, Rabin M (1999). "Doing it now or later". American Economic Review. 89 (1): 103–124. doi:10.1257/aer.89.1.103. S2CID 5115877.

  • Balas B, Momsen JL (September 2014). Holt EA (ed.). "Attention "blinks" differently for plants and animals". CBE: Life Sciences Education. 13 (3): 437–443. doi:10.1187/cbe.14-05-0080. PMC 4152205. PMID 25185227.

  • Safi R, Browne GJ, Naini AJ (2021). "Mis-spending on information security measures: Theory and experimental evidence". International Journal of Information Management. 57 (102291): 102291. doi:10.1016/j.ijinfomgt.2020.102291. S2CID 232041220.

  • Hsee CK, Hastie R (January 2006). "Decision and experience: why don't we choose what makes us happy?" (PDF). Trends in Cognitive Sciences. 10 (1): 31–37. CiteSeerX 10.1.1.178.7054. doi:10.1016/j.tics.2005.11.007. PMID 16318925. S2CID 12262319. Archived (PDF) from the original on 2015-04-20.

  • Trofimova I (October 1999). "An investigation of how people of different age, sex, and temperament estimate the world". Psychological Reports. 85 (2): 533–552. doi:10.2466/pr0.1999.85.2.533. PMID 10611787. S2CID 8335544.

  • Trofimova I (2014). "Observer bias: an interaction of temperament traits with biases in the semantic perception of lexical material". PLOS ONE. 9 (1): e85677. Bibcode:2014PLoSO...985677T. doi:10.1371/journal.pone.0085677. PMC 3903487. PMID 24475048.

  • Leman PJ, Cinnirella M (2007). "A major event has a major cause: Evidence for the role of heuristics in reasoning about conspiracy theories". Social Psychological Review. 9 (2): 18–28. doi:10.53841/bpsspr.2007.9.2.18. S2CID 245126866.

  • Buckley T. "Why Do Some People Believe in Conspiracy Theories?". Scientific American. Retrieved 26 July 2020.

  • "Use Cognitive Biases to Your Advantage". Institute for Management Consultants, #721, December 19, 2011.

  • Fiedler K, Unkelbach C (2014-10-01). "Regressive Judgment: Implications of a Universal Property of the Empirical World". Current Directions in Psychological Science. 23 (5): 361–367. doi:10.1177/0963721414546330. ISSN 0963-7214. S2CID 146376950.

  • Forsyth DR (2009). Group Dynamics (5th ed.). Cengage Learning. p. 317. ISBN 978-0495599524.

  • "Unconscious Bias". Vanderbilt University. Retrieved 2020-11-09.

  • "Penn Psychologists Believe 'Unit Bias' Determines The Acceptable Amount To Eat". ScienceDaily (November 21, 2005)

  • Talboy A, Schneider S (2022-03-17). "Reference Dependence in Bayesian Reasoning: Value Selection Bias, Congruence Effects, and Response Prompt Sensitivity". Frontiers in Psychology. 13: 729285. doi:10.3389/fpsyg.2022.729285. PMC 8970303. PMID 35369253.

  • Talboy AN, Schneider SL (December 2018). "Focusing on what matters: Restructuring the presentation of Bayesian reasoning problems". Journal of Experimental Psychology. Applied. 24 (4): 440–458. doi:10.1037/xap0000187. PMID 30299128. S2CID 52943395.

  • Milgram S (October 1963). "Behavioral Study of Obedience". Journal of Abnormal Psychology. 67 (4): 371–378. doi:10.1037/h0040525. PMID 14049516. S2CID 18309531.

  • Walker D, Vul E (January 2014). "Hierarchical encoding makes individuals in a group seem more attractive". Psychological Science. 25 (1): 230–235. doi:10.1177/0956797613497969. PMID 24163333. S2CID 16309135.

  • Baron 1994, p. 275

  • Sutherland 2007, pp. 138–139

  • Anderson KB, Graham LM (2007). "Hostile Attribution Bias". Encyclopedia of Social Psychology. Sage Publications, Inc. pp. 446–447. doi:10.4135/9781412956253. ISBN 978-1412916707.

  • Rosset E (2008-09-01). "It's no accident: Our bias for intentional explanations". Cognition. 108 (3): 771–780. doi:10.1016/j.cognition.2008.07.001. ISSN 0010-0277. PMID 18692779. S2CID 16559459.

  • Kokkoris M (2020-01-16). "The Dark Side of Self-Control". Harvard Business Review. Retrieved 17 January 2020.

  • Plous 2006, p. 185

  • Kuran T, Sunstein CR (1998). "Availability Cascades and Risk Regulation". Stanford Law Review. 51 (4): 683–768. doi:10.2307/1229439. JSTOR 1229439. S2CID 3941373.

  • Colman A (2003). Oxford Dictionary of Psychology. New York: Oxford University Press. p. 77. ISBN 978-0192806321.

  • Ciccarelli S, White J (2014). Psychology (4th ed.). Pearson Education, Inc. p. 62. ISBN 978-0205973354.

  • Dalton D, Ortegren M (2011). "Gender differences in ethics research: The importance of controlling for the social desirability response bias". Journal of Business Ethics. 103 (1): 73–93. doi:10.1007/s10551-011-0843-8. S2CID 144155599.

  • McCornack S, Parks M (1986). "Deception Detection and Relationship Development: The Other Side of Trust". Annals of the International Communication Association. 9: 377–389. doi:10.1080/23808985.1986.11678616.

  • Levine T (2014). "Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection". Journal of Language and Social Psychology. 33: 378–392. doi:10.1177/0261927X14535916. S2CID 146916525.

  • Plous 2006, p. 206

  • "Assumed similarity bias". APA Dictionary of Psychology. Washington, DC: American Psychological Association. n.d. Retrieved 2022-01-15.

  • Garcia SM, Song H, Tesser A (November 2010). "Tainted recommendations: The social comparison bias". Organizational Behavior and Human Decision Processes. 113 (2): 97–101. doi:10.1016/j.obhdp.2010.06.002. ISSN 0749-5978.

  • Forsyth DR (2009). Group Dynamics (5th ed.). Pacific Grove, CA: Brooks/Cole.

  • Kruger J (August 1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology. 77 (2): 221–232. doi:10.1037/0022-3514.77.2.221. PMID 10474208.

  • Payne BK, Cheng CM, Govorun O, Stewart BD (September 2005). "An inkblot for attitudes: affect misattribution as implicit measurement". Journal of Personality and Social Psychology. 89 (3): 277–293. CiteSeerX 10.1.1.392.4775. doi:10.1037/0022-3514.89.3.277. PMID 16248714.

  • Schacter DL (2001). The Seven Sins of Memory. New York, NY: Houghton Mifflin Company.

  • Schacter DL (March 1999). "The seven sins of memory. Insights from psychology and cognitive neuroscience". The American Psychologist. 54 (3): 182–203. doi:10.1037/0003-066X.54.3.182. PMID 10199218. S2CID 14882268.

  • Butera F, Levine JM, Vernet J (August 2009). "Influence without credit: How successful minorities respond to social cryptomnesia". Coping with Minority Status. Cambridge University Press. pp. 311–332. doi:10.1017/cbo9780511804465.015. ISBN 978-0511804465.

  • Lieberman DA (2011). Human Learning and Memory. Cambridge University Press. p. 432. ISBN 978-1139502535.

  • McDunn BA, Siddiqui AP, Brown JM (April 2014). "Seeking the boundary of boundary extension". Psychonomic Bulletin & Review. 21 (2): 370–375. doi:10.3758/s13423-013-0494-0. PMID 23921509. S2CID 2876131.

  • Mather M, Shafir E, Johnson MK (March 2000). "Misremembrance of options past: source monitoring and choice" (PDF). Psychological Science. 11 (2): 132–138. doi:10.1111/1467-9280.00228. PMID 11273420. S2CID 2468289. Archived (PDF) from the original on 2009-01-17.

  • Attneave F (August 1953). "Psychological probability as a function of experienced frequency". Journal of Experimental Psychology. 46 (2): 81–86. doi:10.1037/h0057955. PMID 13084849.

  • Fischhoff B, Slovic P, Lichtenstein S (1977). "Knowing with certainty: The appropriateness of extreme confidence". Journal of Experimental Psychology: Human Perception and Performance. 3 (4): 552–564. doi:10.1037/0096-1523.3.4.552. S2CID 54888532.

  • Cacioppo J (2002). Foundations in social neuroscience. Cambridge, MA: MIT Press. pp. 130–132. ISBN 978-0262531955.

  • Cacciatore MA (April 2021). "Misinformation and public opinion of science and health: Approaches, findings, and future directions". Proceedings of the National Academy of Sciences of the United States of America. 118 (15): e1912437117. Bibcode:2021PNAS..11812437C. doi:10.1073/pnas.1912437117. PMC 8053916. PMID 33837143. p. 4: The CIE refers to the tendency for information that is initially presented as true, but later revealed to be false, to continue to affect memory and reasoning

  • Schmidt SR (July 1994). "Effects of humor on sentence memory" (PDF). Journal of Experimental Psychology: Learning, Memory, and Cognition. 20 (4): 953–967. doi:10.1037/0278-7393.20.4.953. PMID 8064254. Archived from the original (PDF) on 2016-03-15. Retrieved 2015-04-19.

  • Schmidt SR (2003). "Life Is Pleasant – and Memory Helps to Keep It That Way!" (PDF). Review of General Psychology. 7 (2): 203–210. doi:10.1037/1089-2680.7.2.203. S2CID 43179740.

  • Fiedler K (1991). "The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations". Journal of Personality and Social Psychology. 60 (1): 24–36. doi:10.1037/0022-3514.60.1.24.

  • Koriat A, Goldsmith M, Pansky A (2000). "Toward a psychology of memory accuracy". Annual Review of Psychology. 51 (1): 481–537. doi:10.1146/annurev.psych.51.1.481. PMID 10751979.

  • Craik & Lockhart, 1972

  • Kinnell A, Dennis S (February 2011). "The list length effect in recognition memory: an analysis of potential confounds". Memory & Cognition. 39 (2): 348–63. doi:10.3758/s13421-010-0007-6. PMID 21264573.

  • Weiten W (2010). Psychology: Themes and Variations. Cengage Learning. p. 338. ISBN 978-0495601975.

  • Haizlip J, May N, Schorling J, Williams A, Plews-Ogan M (September 2012). "Perspective: the negativity bias, medical education, and the culture of academic medicine: why culture change is hard". Academic Medicine. 87 (9): 1205–1209. doi:10.1097/ACM.0b013e3182628f03. PMID 22836850.

  • Weiten W (2007). Psychology: Themes and Variations. Cengage Learning. p. 260. ISBN 978-0495093039.

  • Slamecka NJ (April 1968). "An examination of trace storage in free recall". Journal of Experimental Psychology. 76 (4): 504–513. doi:10.1037/h0025695. PMID 5650563.

  • Shepard RN (1967). "Recognition memory for words, sentences, and pictures". Journal of Learning and Verbal Behavior. 6: 156–163. doi:10.1016/s0022-5371(67)80067-7.

  • McBride DM, Dosher BA (2002). "A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis". Consciousness and Cognition. 11 (3): 423–460. doi:10.1016/s1053-8100(02)00007-7. PMID 12435377. S2CID 2813053.

  • Defetyer MA, Russo R, McPartlin PL (2009). "The picture superiority effect in recognition memory: a developmental study using the response signal procedure". Cognitive Development. 24 (3): 265–273. doi:10.1016/j.cogdev.2009.05.002.

  • Whitehouse AJ, Maybery MT, Durkin K (2006). "The development of the picture-superiority effect". British Journal of Developmental Psychology. 24 (4): 767–773. doi:10.1348/026151005X74153.

  • Ally BA, Gold CA, Budson AE (January 2009). "The picture superiority effect in patients with Alzheimer's disease and mild cognitive impairment". Neuropsychologia. 47 (2): 595–598. doi:10.1016/j.neuropsychologia.2008.10.010. PMC 2763351. PMID 18992266.

  • Curran T, Doyle J (May 2011). "Picture superiority doubly dissociates the ERP correlates of recollection and familiarity". Journal of Cognitive Neuroscience. 23 (5): 1247–1262. doi:10.1162/jocn.2010.21464. PMID 20350169. S2CID 6568038.

  • Kruger J, Dunning D (December 1999). "Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments". Journal of Personality and Social Psychology. 77 (6): 1121–1134. CiteSeerX 10.1.1.64.2655. doi:10.1037/0022-3514.77.6.1121. PMID 10626367.

  • Kruger J (August 1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology. 77 (2): 221–232. doi:10.1037/0022-3514.77.2.221. PMID 10474208.

  • O'Brien EJ, Myers JL (1985). "When comprehension difficulty improves memory for text". Journal of Experimental Psychology: Learning, Memory, and Cognition. 11 (1): 12–21. doi:10.1037/0278-7393.11.1.12. S2CID 199928680.

  • Rubin, Wetzler & Nebes, 1986; Rubin, Rahhal & Poon, 1998

  • Martin GN, Carlson NR, Buskist W (2007). Psychology (3rd ed.). Pearson Education. pp. 309–310. ISBN 978-0273710868.

  • Morton, Crowder & Prussin, 1971

  • Pitt I, Edwards AD (2003). Design of Speech-Based Devices: A Practical Guide. Springer. p. 26. ISBN 978-1852334369.

  • Tversky A, Koehler DJ (1994). "Support theory: A nonextensional representation of subjective probability" (PDF). Psychological Review. 101 (4): 547–567. doi:10.1037/0033-295X.101.4.547. Archived from the original (PDF) on 2017-01-09. Retrieved 2021-12-10.

  • Stetson C, Fiesta MP, Eagleman DM (December 2007). "Does time really slow down during a frightening event?". PLOS ONE. 2 (12): e1295. Bibcode:2007PLoSO...2.1295S. doi:10.1371/journal.pone.0001295. PMC 2110887. PMID 18074019.

  • Goldstein ED (2010-06-21). Cognitive Psychology: Connecting Mind, Research and Everyday Experience. Cengage Learning. p. 231. ISBN 978-1133009122.

  • "Not everyone is in such awe of the internet". Evening Standard. 2011-03-23. Retrieved 28 October 2015.

  • Poppenk, Walia, Joanisse, Danckert, & Köhler, 2006

    1. Von Restorff H (1933). "Über die Wirkung von Bereichsbildungen im Spurenfeld" (The effects of field formation in the trace field). Psychological Research. 18 (1): 299–342. doi:10.1007/bf02409636. S2CID 145479042.


     https://en.wikipedia.org/wiki/List_of_cognitive_biases#Memory_biases

    Imagination inflation is a type of memory distortion that occurs when imagining an event that never happened increases confidence in the memory of the event.[1]

    Several factors have been demonstrated to increase the imagination inflation effect. Imagining a false event increases familiarity and people mistake this familiarity for evidence that they have experienced the event.[2][3] Imagination inflation could also be the result of source confusion or source monitoring errors. When imagining a false event, people generate information about the event that is often stored in their memory. Later, they might remember the content of the memory but not its source and mistakenly attribute the recalled information to a real experience.[2]

    This effect is relevant to the study of memory and cognition, particularly false memory. Imagination inflation often occurs during attempts to retrieve repressed memories (i.e. via recovered memory therapy) and may lead to the development of false or distorted memories.[2] In criminal justice, imagination inflation is tied to false confessions, because police interrogation practices may involve asking suspects to imagine committing or planning the crime in question.[1][4]

    https://en.wikipedia.org/wiki/Imagination_inflation

    Cryptomnesia occurs when a forgotten memory returns without its being recognized as such by the subject, who believes it is something new and original. It is a memory bias whereby a person may falsely recall generating a thought, an idea, a tune, a name, or a joke,[1] not deliberately engaging in plagiarism but rather experiencing a memory as if it were a new inspiration. 

    https://en.wikipedia.org/wiki/Cryptomnesia

    Source amnesia is the inability to remember where, when or how previously learned information has been acquired, while retaining the factual knowledge.[1] This branch of amnesia is associated with the malfunctioning of one's explicit memory. It is likely that the disconnect between having the knowledge and remembering the context in which the knowledge was acquired is due to a dissociation between semantic and episodic memory[2] – an individual retains the semantic knowledge (the fact), but lacks the episodic knowledge to indicate the context in which the knowledge was gained.

    Memory representations reflect the encoding processes during acquisition. Different types of acquisition processes (e.g.: reading, thinking, listening) and different types of events (e.g.: newspaper, thoughts, conversation) will produce mental depictions that perceptually differ from one another in the brain, making it harder to retrieve where information was learned when placed in a different context of retrieval.[3] Source monitoring involves a systematic process of slow and deliberate thought of where information was originally learned. Source monitoring can be improved by using more retrieval cues, discovering and noting relations and extended reasoning.[3] 

    https://en.wikipedia.org/wiki/Source_amnesia

    The ultimate attribution error is a type of attribution error proposed to explain why attributions of outgroup behavior are more negative (i.e., antisocial or undesirable) than attributions of ingroup behavior (see in-group and out-group).[1] The ultimate attribution error itself is described as a cognitive bias in which negative outgroup behavior is more likely to be attributed to factors internal and specific to the actor, such as personality. The second component of the bias is a higher chance of attributing negative ingroup behavior to external factors such as luck or circumstance.[1] This bias is said to reinforce negative stereotypes and prejudice about the outgroup, and favouritism toward the ingroup through positive stereotypes.[2] The theory was later extended to the bias that positive acts performed by ingroup members are more likely seen as a result of their personality, whereas, if an ingroup member behaves negatively (which is assumed to be rare), it is more likely attributed to situational factors.[3]

    In the case of negative attributions of outgroup members' positive behaviours, four categories were proposed: the person with good behavior being an exception to a general rule, luck or special advantages, high levels of motivation, and situational causes. The theory proposes that the ultimate attribution error can result from any combination of these four categories.[2]

    The concept of the ultimate attribution error, and the term itself, was published by Thomas F. Pettigrew in 1979 as an extension of the fundamental attribution error, which was pioneered in 1958.[1][4] Although the theory lacked a strong empirical basis at the time of publication, subsequent research provided some support for it, in that attributions tend to favor ingroup members over outgroup members.[1] The specific categorisation originally proposed had partial empirical support only for the broader categories of motivational and cognitive attribution,[1] which were later used in social identity theory.[5] The term has since broadened into a field of research concerned with intergroup attribution bias, also known as intergroup bias or in-group favoritism, within social psychology.[5][6]

    https://en.wikipedia.org/wiki/Ultimate_attribution_error

    A source-monitoring error is a type of memory error where the source of a memory is incorrectly attributed to some specific recollected experience. For example, individuals may learn about a current event from a friend, but later report having learned about it on the local news, thus reflecting an incorrect source attribution. This error occurs when normal perceptual and reflective processes are disrupted, either by limited encoding of source information or by disruption to the judgment processes used in source-monitoring. Depression, high stress levels and damage to relevant brain areas are examples of factors that can cause such disruption and hence source-monitoring errors.[1] 

    https://en.wikipedia.org/wiki/Source-monitoring_error

    Self-monitoring, a concept introduced in the 1970s by Mark Snyder, describes the extent to which people monitor their self-presentations, expressive behavior, and nonverbal affective displays.[1] Snyder held that human beings generally differ in substantial ways in their abilities and desires to engage in expressive controls (see dramaturgy).[2] Self-monitoring is defined as a personality trait that refers to an ability to regulate behavior to accommodate social situations. People concerned with their expressive self-presentation (see impression management) tend to closely monitor their audience in order to ensure appropriate or desired public appearances.[3] Self-monitors try to understand how individuals and groups will perceive their actions. Some personality types commonly act spontaneously (low self-monitors) and others are more apt to purposely control and consciously adjust their behavior (high self-monitors).[4] Recent studies suggest that a distinction should be made between acquisitive and protective self-monitoring due to their different interactions with metatraits.[5] This differentiates the motive behind self-monitoring behaviours: for the purpose of acquiring appraisal from others (acquisitive) or protecting oneself from social disapproval (protective). 

    https://en.wikipedia.org/wiki/Self-monitoring

    In the field of epidemiology, source attribution refers to a category of methods with the objective of reconstructing the transmission of an infectious disease from a specific source, such as a population, individual, or location. For example, source attribution methods may be used to trace the origin of a new pathogen that recently crossed from another host species into humans, or from one geographic region to another. It may be used to determine the common source of an outbreak of a foodborne infectious disease, such as a contaminated water supply. Finally, source attribution may be used to estimate the probability that an infection was transmitted from one specific individual to another, i.e., "who infected whom".

    Source attribution can play an important role in public health surveillance and management of infectious disease outbreaks. In practice, it tends to be a problem of statistical inference, because transmission events are seldom observed directly and may have occurred in the distant past. Thus, there is an unavoidable level of uncertainty when reconstructing transmission events from residual evidence, such as the spatial distribution of the disease. As a result, source attribution models often employ Bayesian methods that can accommodate substantial uncertainty in model parameters.

    Molecular source attribution is a subfield of source attribution that uses the molecular characteristics of the pathogen — most often its nucleic acid genome — to reconstruct transmission events. Many infectious diseases are routinely detected or characterized through genetic sequencing, which can be faster than culturing isolates in a reference laboratory and can identify specific strains of the pathogen at substantially higher precision than laboratory assays, such as antibody-based assays or drug susceptibility tests. On the other hand, analyzing the genetic (or whole genome) sequence data requires specialized computational methods to fit models of transmission. Consequently, molecular source attribution is a highly interdisciplinary area of molecular epidemiology that incorporates concepts and skills from mathematical statistics and modeling, microbiology, public health and computational biology.

    There are generally two ways that molecular data are used for source attribution. First, infections can be categorized into different "subtypes" that each corresponds to a unique molecular variety, or a cluster of similar varieties. Source attribution can then be inferred from the similarity of subtypes. Individual infections that belong to the same subtype are more likely to be related epidemiologically, including direct source-recipient transmission, because they have not substantially evolved away from their common ancestor. Similarly, we assume the true source population will have frequencies of subtypes that are more similar to the recipient population, relative to other potential sources. Second, molecular (genetic) sequences from different infections can be directly compared to reconstruct a phylogenetic tree, which represents how they are related by common ancestors. The resulting phylogeny can approximate the transmission history, and a variety of methods have been developed to adjust for confounding factors.
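    As a toy illustration of the first (subtype-based) approach, the sketch below scores candidate source populations by how well their subtype frequencies explain the subtypes observed among recipient infections, using a simple multinomial log-likelihood. All population names and frequencies are hypothetical; real molecular source attribution uses far richer Bayesian and phylogenetic models.

```python
from math import log

def subtype_log_likelihood(recipient_counts, source_freqs, eps=1e-6):
    """Log-likelihood that the recipient's subtype counts were drawn
    from a candidate source population's subtype frequencies.
    A small eps avoids log(0) for subtypes absent from the source."""
    return sum(n * log(source_freqs.get(subtype, 0.0) + eps)
               for subtype, n in recipient_counts.items())

def most_likely_source(recipient_counts, candidates):
    """Return the candidate source population whose subtype frequencies
    best explain the recipient's observed subtype counts."""
    return max(candidates,
               key=lambda name: subtype_log_likelihood(recipient_counts,
                                                       candidates[name]))

# Hypothetical data: 10 recipient infections typed into subtypes A and B,
# and two candidate source populations with known subtype frequencies.
recipient = {"A": 8, "B": 2}
candidates = {
    "pop1": {"A": 0.8, "B": 0.2},  # frequencies similar to the recipient
    "pop2": {"A": 0.2, "B": 0.8},
}
```

    Here `most_likely_source(recipient, candidates)` favors "pop1", reflecting the assumption in the text that the true source population has subtype frequencies most similar to the recipient population.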

    Due to the associated stigma and the criminalization of transmission for specific infectious diseases, molecular source attribution at the level of individuals can be a controversial use of data that was originally collected in a healthcare setting, with potentially severe legal consequences for individuals who become identified as putative sources. In these contexts, the development and application of molecular source attribution techniques may involve trade-offs between public health responsibilities and individual rights to data privacy.

    https://en.wikipedia.org/wiki/Source_attribution

    In psychology, an attribution bias or attributional bias is a cognitive bias that refers to the systematic errors made when people evaluate or try to find reasons for their own and others' behaviors.[1][2][3] People constantly make attributions—judgements and assumptions about why people behave in certain ways. However, attributions do not always accurately reflect reality. Rather than operating as objective perceivers, people are prone to perceptual errors that lead to biased interpretations of their social world.[4][5] Attribution biases are present in everyday life. For example, when a driver cuts someone off, the person who has been cut off is often more likely to attribute blame to the reckless driver's inherent personality traits (e.g., "That driver is rude and incompetent") rather than situational circumstances (e.g., "That driver may have been late to work and was not paying attention"). Additionally, there are many different types of attribution biases, such as the ultimate attribution error, fundamental attribution error, actor-observer bias, and hostile attribution bias. Each of these biases describes a specific tendency that people exhibit when reasoning about the cause of different behaviors.

    Since the early work, researchers have continued to examine how and why people exhibit biased interpretations of social information.[2][6] Many different types of attribution biases have been identified, and more recent psychological research on these biases has examined how attribution biases can subsequently affect emotions and behavior.[7][8][9]

    History

    Early influences

    Attribution theory

    Research on attribution biases is founded in attribution theory, which was proposed to explain why and how people create meaning about others' and their own behavior. This theory focuses on identifying how an observer uses information in his/her social environment in order to create a causal explanation for events. Attribution theory also provides explanations for why different people can interpret the same event in different ways and what factors contribute to attribution biases.[10]

    Psychologist Fritz Heider first discussed attributions in his 1958 book, The Psychology of Interpersonal Relations.[1] Heider made several contributions that laid the foundation for further research on attribution theory and attribution biases. He noted that people tend to make distinctions between behaviors that are caused by personal disposition versus environmental or situational conditions. He also predicted that people are more likely to explain others' behavior in terms of dispositional factors (i.e., caused by a given person's personality), while ignoring the surrounding situational demands.

    Correspondent inference theory

    Building on Heider's early work, other psychologists in the 1960s and 1970s extended work on attributions by offering additional related theories. In 1965, social psychologists Edward E. Jones and Keith Davis proposed an explanation for patterns of attribution termed correspondent inference theory.[6] A correspondent inference assumes that a person's behavior reflects a stable disposition or personality characteristic instead of a situational factor. They explained that certain conditions make us more likely to make a correspondent inference about someone's behavior:

    • Intention: People are more likely to make a correspondent inference when they interpret someone's behavior as intentional, rather than unintentional.
    • Social desirability: People are more likely to make a correspondent inference when an actor's behavior is socially undesirable than when it is conventional.
    • Effects of behavior: People are more likely to make a correspondent, or dispositional, inference when someone else's actions yield outcomes that are rare or not yielded by other actions.

    Covariation model

    Soon after Jones and Davis first proposed their correspondent inference theory, Harold Kelley, a social psychologist famous for his work on interdependence theory as well as attribution theory, proposed a covariation model in 1973 to explain the way people make attributions.[2][11] This model helped to explain how people choose to attribute a behavior to an internal disposition versus an environmental factor. Kelley used the term 'covariation' to convey that when making attributions, people have access to information from many observations, across different situations, and at many time points; therefore, people can observe the way a behavior varies under these different conditions and draw conclusions based on that context. He proposed three factors that influence the way individuals explain behavior:

    • Consensus: The extent to which other people behave in the same way. There is high consensus when most people behave consistently with a given action/actor. Low consensus is when not many people behave in this way.
    • Consistency: The extent to which a person usually behaves in a given way. There is high consistency when a person almost always behaves in a specific way. Low consistency is when a person almost never behaves like this.
    • Distinctiveness: The extent to which an actor's behavior in one situation is different from his/her behavior in other situations. There is high distinctiveness when an actor does not behave this way in most situations. Low distinctiveness is when an actor usually behaves in a particular way in most situations.

    Kelley proposed that people are more likely to make dispositional attributions when consensus is low (most other people don't behave in the same way), consistency is high (a person behaves this way across most situations), and distinctiveness is low (a person's behavior is not unique to this situation). Alternatively, situational attributions are more likely reached when consensus is high, consistency is low, and distinctiveness is high.[11] His research helped to reveal the specific mechanisms underlying the process of making attributions. 
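    The two canonical information patterns described above can be sketched as a simple decision rule. This is only an illustrative sketch of the patterns as stated in the text, not a computational model proposed by Kelley; the function name and the 'high'/'low' encoding are hypothetical.

```python
def attribute(consensus, consistency, distinctiveness):
    """Sketch of the covariation patterns described above.

    Each argument is 'high' or 'low'. Only the two canonical
    patterns map to a clear attribution; other combinations are
    returned as 'ambiguous', since the text leaves them open.
    """
    pattern = (consensus, consistency, distinctiveness)
    if pattern == ("low", "high", "low"):
        # Few others behave this way, the person almost always does,
        # and does so across situations -> it's about the person.
        return "dispositional"
    if pattern == ("high", "low", "high"):
        # Most people behave this way, the person rarely does, and
        # only in this situation -> it's about the situation.
        return "situational"
    return "ambiguous"
```

    For example, if a student fails an exam that almost no one else fails (low consensus), usually fails exams (high consistency), and fails across many subjects (low distinctiveness), the rule yields a dispositional attribution.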

    https://en.wikipedia.org/wiki/Attribution_bias

    Amnesia is a deficit in memory caused by brain damage or disease,[1] but it can also be caused temporarily by the use of various sedative and hypnotic drugs. Memory can be wholly or partially lost, depending on the extent of the damage.[2] There are two main types of amnesia: retrograde amnesia and anterograde amnesia. Retrograde amnesia is the inability to retrieve information that was acquired before a particular date, usually the date of an accident or operation.[3] In some cases the memory loss can extend back decades, while in others the person may lose only a few months of memory. Anterograde amnesia is the inability to transfer new information from the short-term store into the long-term store; people with anterograde amnesia cannot remember things for long periods of time. These two types are not mutually exclusive; both can occur simultaneously.[4]

    Case studies also show that amnesia is typically associated with damage to the medial temporal lobe. In addition, specific areas of the hippocampus (the CA1 region) are involved with memory. Research has also shown that when areas of the diencephalon are damaged, amnesia can occur. Recent studies have shown a correlation between deficiency of RbAp48 protein and memory loss. Scientists were able to find that mice with damaged memory have a lower level of RbAp48 protein compared to normal, healthy mice.[5][6] In people with amnesia, the ability to recall immediate information is still retained,[7][8][9] and they may still be able to form new memories. However, a severe reduction in the ability to learn new material and retrieve old information can be observed. People can learn new procedural knowledge. In addition, priming (both perceptual and conceptual) can assist amnesiacs in the learning of fresh non-declarative knowledge.[1] Individuals with amnesia also retain substantial intellectual, linguistic, and social skill despite profound impairments in the ability to recall specific information encountered in prior learning episodes.[10][11][12]

    The term is from Ancient Greek 'forgetfulness'; from ἀ- (a-) 'without', and μνήσις (mnesis) 'memory'. 

    https://en.wikipedia.org/wiki/Amnesia

    A flashbulb memory is a vivid, long-lasting memory about a surprising or shocking event that has happened in the past.[1][2]

    The term "flashbulb memory" suggests the surprise, indiscriminate illumination, detail, and brevity of a photograph; however, flashbulb memories are only somewhat indiscriminate and are far from complete.[2] Evidence has shown that although people are highly confident in their memories, the details of the memories can be forgotten.[3]

    Flashbulb memories are one type of autobiographical memory. Some researchers believe that there is reason to distinguish flashbulb memories from other types of autobiographical memory because they rely on elements of personal importance, consequence, emotion, and surprise.[2][4][5] Others believe that ordinary memories can also be accurate and long-lasting if they are highly distinctive, personally significant,[6][7] or repeatedly rehearsed.[8]

    Flashbulb memories have six characteristic features: place, ongoing activity, informant, own affect, other affect, and aftermath.[2] Arguably, the principal determinants of a flashbulb memory are a high level of surprise, a high level of consequentiality, and perhaps emotional arousal.

    Historical overview

    The term flashbulb memory was coined by Brown and Kulik in 1977.[2] They formed the special-mechanism hypothesis, which argues for the existence of a special biological memory mechanism that, when triggered by an event exceeding critical levels of surprise and consequentiality, creates a permanent record of the details and circumstances surrounding the experience.[2] Brown and Kulik believed that although flashbulb memories are permanent, they are not always accessible from long-term memory.[9] The hypothesis of a special flashbulb-memory mechanism holds that flashbulb memories have special characteristics that are different from those produced by "ordinary" memory mechanisms. The representations created by the special mechanism are detailed, accurate, vivid, and resistant to forgetting.[2] Most of these initial properties of flashbulb memory have been debated since Brown and Kulik first coined the term. Ultimately, over the years, four models of flashbulb memory have emerged to explain the phenomenon: the photographic model, the comprehensive model, the emotional-integrative model, and the importance-driven model; additional studies have been conducted to test the validity of these models.[10]

    Positive vs. negative

    It is possible for both positive and negative events to produce flashbulb memories. When the event is viewed as positive, individuals show higher rates of reliving and sensory imagery, and report more lifelike qualities associated with the event. Individuals view these positive events as central to their identities and life stories, which results in more rehearsal of the event and encodes the memory with more subjective clarity.[11]

    On the other hand, events seen as negative have been shown to elicit more detail-oriented, conservative processing strategies. Negative flashbulb memories are highly unpleasant and can cause a person to avoid reliving the negative event. This avoidance could lead to a reduction in the memory's emotional intensity: the memory stays intact in an individual who experiences a negative flashbulb memory, but with a more toned-down emotional component. Negative flashbulb memories are also seen as having more consequences.[12]

    Flashbulb memories can be produced, but do not need to be, from a positive or negative event. Studies have shown that flashbulb memories may be produced by experiencing a type of brand-related interaction. It was found that brands which are well-differentiated from competitors (for example, Build-A-Bear Workshop versus KB Toys) produced a definitional flashbulb memory, but brands lacking strongly differentiated positioning do not. These "flashbulb brand memories" were viewed very much like conventional flashbulb memories for the features of strength, sharpness, vividness, and intensity.[13]

    Methods

    Research on flashbulb memories generally shares a common method. Typically, researchers conduct studies immediately following a shocking, public event.[8][14] Participants are first tested within a few days of the event, answering questions via survey or interview about the details and circumstances of their personal experience of the event.[8] Groups of participants are then tested a second time, for example six months, a year, or 18 months later.[15] Generally, participants are divided into groups, each tested at a different interval. This method allows researchers to observe the rate of memory decay as well as the accuracy and content of flashbulb memories.

    Accuracy

    Many researchers[who?] feel that flashbulb memories are not accurate enough to be considered their own category of memory. One issue is that flashbulb memories may deteriorate over time, just like everyday memories. It has also been questioned whether flashbulb memories are substantially different from everyday memories. A number of studies suggest that flashbulb memories are not especially accurate, but that they are experienced with great vividness and confidence.[16][17][18] In a study conducted on September 12, 2001, 54 Duke students were tested for their memory of hearing about the terrorist attack and their recall of a recent everyday event. They were then randomly assigned to be tested again either 7, 42, or 224 days after the event. The results showed that the mean number of consistent and inconsistent details recalled did not differ between flashbulb memories and everyday memories, with both declining over time. However, ratings of vividness, recollection, and belief in the accuracy of memory declined only for everyday memories. These findings further support the claim that "flashbulb memories are not special in their accuracy but only in their perceived accuracy."[19]

    Many experimenters question the accuracy of flashbulb memories, attributing errors to rehearsal of the event: errors that are rehearsed through retelling and reliving can become part of the memory. Because the events behind flashbulb memories happen only once, there are no opportunities for repeated exposure or correction, so errors introduced early on are likely to remain. Many individuals see these events as very important and want to "never forget", which may result in overconfidence in the accuracy of the flashbulb memory.[20] The most important factor in creating a flashbulb memory is not what occurs at the exact moment of hearing striking news, but rather what occurs after hearing the news. The role of post-encoding factors such as retelling and reliving is important for understanding the increase in remembrance after the event has taken place.[21]

    Other research focuses on identifying factors that make flashbulb memories more accurate than everyday memories. The importance of an event, the consequences involved, how distinct it is, personal involvement in the event, and proximity have all been documented to increase the accuracy of flashbulb memory recall.[22]

    Stability over time

    It has been argued that flashbulb memories are not very stable over time. A study of flashbulb memories for the Challenger Space Shuttle disaster sampled two independent groups of subjects, one on a date close to the disaster and the other eight months later. Very few subjects had flashbulb memories for the disaster after eight months. Considering only the participants who could recall the source of the news, their ongoing activity, and the place, researchers reported that fewer than 35% had detailed memories.[23] Another study examining participants' memories for the Challenger explosion found that although participants were highly confident about their memories for the event, their memories were not very accurate three years after it had occurred.[24] A third study, on the O. J. Simpson murder case, found that although participants' confidence in their memories remained strong, the accuracy of their memories had declined 15 months after the event and continued to decline at 32 months.[15]

    While the accuracy of flashbulb memories may not be stable over time, confidence of the accuracy of a flashbulb memory appears to be stable over time. A study conducted on the bombing in Iraq and a contrasting ordinary event showed no difference for memory accuracy over a year period; however, participants showed greater confidence when remembering the Iraqi bombing than the ordinary event despite no difference in accuracy.[25] Likewise, when memories for the 9/11 World Trade Center attack were contrasted with everyday memories, researchers found that after one year, there was a high, positive correlation between the initial and subsequent recollection of the 9/11 attack. This indicates very good retention, compared to a lower positive correlation for everyday memories.[26] Participants also showed greater confidence in memory at time of retrieval than time of encoding.

    Relation to autobiographical memory

    Some studies indicate that flashbulb memories are not any more accurate than other types of memories.[27] It has been reported that memories of high school graduation or early emotional experiences can be just as vivid and clear as flashbulb memories. Undergraduates recorded their three most vivid autobiographical memories. Nearly all of the memories produced were rated to be of high personal importance, but low national importance. These memories were rated as having the same level of consequentiality and surprise as memories for events of high national importance. This indicates that flashbulb memories may just be a subset of vivid memories and may be the result of a more general phenomenon.[27]

    When comparing flashbulb memories with "control memories" (non-flashbulb memories), it has been observed that flashbulb memories are encoded incidentally, whereas a non-flashbulb memory can be encoded intentionally. Both types of memory are accompanied by vividness, but for flashbulb memories the vividness was much higher and never decreased, whereas the vividness of control memories did decrease over time.[28]

    Flashbulb memory has always been classified as a type of autobiographical memory, which is memory for one's everyday life events. Emotionally neutral autobiographical events, such as a party or a barbecue, were contrasted with emotionally arousing events that were classified as flashbulb memories. Memory for the neutral autobiographical events was not as accurate as the emotionally arousing events of Princess Diana's death and Mother Teresa's death. Therefore, flashbulb memories were more accurately recalled than everyday autobiographical events.[1] In some cases, consistency of flashbulb memories and everyday memories do not differ, as they both decline over time. Ratings of vividness, recollection and belief in the accuracy of memory, however, have been documented to decline only in everyday memories and not flashbulb memories.[17]

    The latent structure of a flashbulb memory is taxonic, and qualitatively distinct from non-flashbulb memories. It has been suggested that there are "optimal cut points" on flashbulb memory features that can ultimately divide people who can produce them from those who cannot. This follows the idea that flashbulb memories are a recollection of "event-specific sensory-perceptual details" and are much different from other known autobiographical memories. Ordinary memories show a dimensional structure that involves all levels of autobiographical knowledge, whereas flashbulb memories appear to come from a more densely integrated region of autobiographical knowledge. Flashbulb memories and non-flashbulb memories also differ qualitatively and not just quantitatively.[29] Flashbulb memories are considered a form of autobiographical memory but involve the activation of episodic memory, whereas everyday memories are a semantic form of recollection. Being a form of autobiographical recollection, flashbulb memories are deeply determined by the reconstructive processes of memory and, just like any other form of memory, are prone to decay.[30]

    Importance of an event

    Brown and Kulik (1977) emphasized that importance is a critical variable in flashbulb memory formation. In a study conducted by Brown and Kulik, news events were chosen so that some of them would be important to some of their subjects, but not to others. They found that when an event was important to one group, it was associated with a comparatively high incidence of flashbulb memories. The same event, when judged lower on importance by another group, was found to be associated with a lower incidence of flashbulb memory.[2] The retelling or rehearsal of personally important events also increases the accuracy of flashbulb memories. Personally important events tend to be rehearsed more often than non-significant events. A study conducted on flashbulb memories of the Loma Prieta earthquake found that people who discussed and compared their personal stories with others repeatedly had better recall of the event compared to Atlanta subjects who had little reason to talk about how they had heard the news. Therefore, the rehearsal of personally important events can be important in developing accurate flashbulb memories.[16] There has been other evidence that shows that personal importance of an event is a strong predictor of flashbulb memories. A study done on the flashbulb memory of the resignation of the British prime minister, Margaret Thatcher, found that the majority of UK subjects had flashbulb memories nearly one year after her resignation. Their memory reports were characterized by spontaneous, accurate, and full recall of event details. In contrast, a low number of non-UK subjects had flashbulb memories one year after her resignation. Memory reports in this group were characterized by forgetting and reconstructive errors. The flashbulb memories for Margaret Thatcher's resignation were, therefore, primarily associated with the level of importance attached to the event.

    When Princess Diana died, it was a very important and surprising event that affected people across the globe. When looking at accuracy, the importance of the event can be related to how accurate an individual's flashbulb memory is. Reports found that among British participants, no forgetting occurred over the four years since the event. Events that are highly surprising and rated as highly important to an individual may be preserved in memory for a longer period of time and retain the qualities of recent events, compared to the memories of those less affected. If an event has a strong impact on an individual, the memory is retained much longer.[31]

    Consequence

    It was proposed that the intensity of the initial emotional reaction, rather than perceived consequence, is a primary determinant of flashbulb memories. Flashbulb memories of the 1981 assassination attempt on President Reagan were studied, and it was found that participants had accurate flashbulb memories seven months after the shooting. Respondents reported flashbulb memories despite low consequence ratings. This study only evaluated the consequence of learning about a flashbulb event, not how the consequences of being involved with the event affect accuracy; therefore, some people were unsure of the extent of injury, and most could only guess about the eventual outcomes.[32] Two models of flashbulb memory state that the consequences of an event determine the intensity of emotional reactions. The Importance Driven Emotional Reactions Model indicates that personal consequences determine the intensity of emotional reactions, making the consequence of an event a critical variable in the formation and maintenance of a flashbulb memory. These propositions were based on flashbulb memories of the Marmara earthquake.[33] The other model, the Emotional-Integrative model, proposes that both personal importance and consequence determine the intensity of one's emotional state.[34] Overall, the majority of research on flashbulb memories demonstrates that the consequences of an event play a key role in the accuracy of flashbulb memories. The death of Pope John Paul II did not come as a surprise, but flashbulb memories were still found in individuals who were affected. This shows a direct link between emotion and event memory, and emphasizes how attitude can play a key role in determining the importance and consequence of an event. Events high in importance and consequence lead to more vivid and long-lasting flashbulb memories.[35]

    Distinctiveness of an event

    Some experiences are unique and distinctive, while others are familiar, commonplace, or are similar to much that has gone on before. Distinctiveness of an event has been considered to be a main contributor to the accuracy of flashbulb memories.[36] The accounts of flashbulb memory that have been documented as remarkably accurate have been unique and distinctive from everyday memories. It has been found that uniqueness of an event can be the best overall predictor of how well it will be recalled later on. In a study conducted on randomly sampled personal events, subjects were asked to carry beepers that went off randomly. Whenever the beeper sounded, participants recorded where they were, what they were doing, and what they were thinking. Weeks or months later, the participants' memories were tested. The researchers found that recall of action depends strongly on uniqueness.[36] Similar results have been found in studies regarding distinctiveness and flashbulb memories; memories for events that produced flashbulb memories, specifically various terrorist attacks, had high correlations between distinctiveness and personal importance, novelty, and emotionality.[37] It has also been documented that if someone has a distinctive experience during a meaningful event, then accuracy for recall will increase. During the 1989 Loma Prieta earthquake, higher accuracy for the recall of the earthquake was documented in participants who had distinctive experiences during the earthquake, often including a substantial disruption in their activity.[16]

    Personal involvement and proximity

    Santa Cruz's historic Pacific Garden Mall suffered severe damage during the 1989 Loma Prieta earthquake

    It has been documented that people who are involved in a flashbulb event have more accurate recollections than people who were not involved. Those who experienced the Marmara earthquake in Turkey had more accurate recollections of the event than people with no direct experience. In this study, the majority of participants in the victim group recalled more specific details about the earthquake than the group that was not directly affected and had instead received information about it from the news.[33] Another study compared Californians' memories of an earthquake that happened in California with memories of the same earthquake formed by people living in Atlanta. The results indicated that the people who were personally involved with the earthquake had better recall of the event. Californians' recall of the event was much higher than Atlantans', with the exception of those who had relatives in the affected area and thus reported being more personally involved.[16] The death of Pope John Paul II created many flashbulb memories among people who were more religiously involved with the Catholic Church. The more involved someone is with a religion, city, or group, the more importance and consequentiality are reported for an event; more emotions are reported, resulting in more consistent flashbulb memories.[35]

    A study (Sharot et al. 2007) on the September 11, 2001 terrorist attacks demonstrates that proximity plays a part in the accuracy of recall of flashbulb memories. Three years after the terrorist attacks, participants were asked to retrieve memories of 9/11, as well as memories of personally selected control events from 2001. At the time of the attacks, some participants were in the downtown Manhattan region, closer to the World Trade Center, while others were in Midtown, a few miles away. The participants who were closer to downtown recalled more emotionally significant, detailed memories than the Midtown participants. Among the Manhattan participants alone, retrieval of 9/11 memories was accompanied by an enhanced recollective experience, relative to the retrieval of other memorable life events, only in the subset of participants who were, on average, two miles from the World Trade Center (around Washington Square), and not in participants who were, on average, 4.5 miles away (around the Empire State Building). In other words, even considering only participants who were in Manhattan on 9/11, the recollections of those closer to the World Trade Center were more vivid than those of participants farther away; the downtown participants reported seeing, hearing, and even smelling what had happened.[14] Personal involvement in, or proximity to, a national event could explain the greater accuracy of memories, because the event can carry more significant consequences for the people involved, such as the death of a loved one, creating more emotional activation in the brain. This emotional activation has been shown to be involved in the recall of flashbulb memories.

    Source of information

    When looking at the source of knowledge about an event, hearing the news from the media rather than from another person does not cause a difference in reaction, but rather a difference in the type of information encoded into memory. When hearing the news from the media, details about the event itself are better remembered, because facts are processed while experiencing high levels of arousal, whereas when hearing the news from another person, one tends to remember personal responses and circumstances.[38]

    Additionally, the source-monitoring problem contributes to recollection and memory errors in flashbulb memories. Over time, new information is encountered, and this post-event information from other sources may replace or be added to the information already stored in memory.[39] Repeated rehearsal of the news in the media and between individuals makes flashbulb memories more susceptible to misremembering the source of information, leading to less recall of the true details of the event. In a study by Dutch researchers, participants were asked about the crash of an El Al Boeing 747 into apartment buildings in Amsterdam. Ten months after the accident, participants were asked whether they recalled seeing the television film of the moment the plane hit the building. Over 60% of the subjects said they had seen the crash on television, although no television film of the incident existed. Those who said yes were then asked questions about the details of the crash, and most falsely reported that they saw the fire start immediately. This study demonstrates that adults can falsely believe they have witnessed something they have not actually seen themselves but only heard about from the news or other people, and that they can go further and report specific but incorrect details about the event. It is important to note that the error rate in this experiment is higher than usually found in flashbulb experiments, since it used a suggestive question instead of the usual neutral 'flashbulb memory question', and, unlike typical flashbulb memory studies, subjects were not asked how they first learned about the event, which would have prompted critical consideration of the possible original source. Nevertheless, it demonstrates how even flashbulb memories are susceptible to memory distortion due to source-monitoring errors.[39]

    Demographic differences

    Although people of all ages experience flashbulb memories, different demographics and ages can influence the strength and quality of a flashbulb memory.

    Age differences

    In general, younger adults form flashbulb memories more readily than older adults.[40] One study examined age-related differences in flashbulb memories: participants were tested for memory within 14 days of an important event and then retested for memory of the same event 11 months later. Even 11 months after the event occurred, nearly all the younger adults had flashbulb memories, but less than half of the older adults met all the criteria of a flashbulb memory. Younger and older adults also showed different reasons for recalling vivid flashbulb memories. The main predictor of flashbulb memory formation among younger adults was emotional connectedness to the event, whereas older adults relied more on rehearsal of the event.[40] Being emotionally connected was not enough for older adults to create flashbulb memories; they also needed to rehearse the event over the 11 months to remember details. Older adults also had more difficulty remembering the context of the event; they were more likely to forget with whom they spoke and where events took place on a daily basis.[40] If older adults are significantly impacted by the dramatic event, however, they can form flashbulb memories that are just as detailed as those of younger adults. Older adults who were personally impacted by or close to September 11 recalled memories that did not differ in detail from those of younger adults.[41][42] Older adults were found to be more confident in their memories than younger adults with regard to whom they were with, where they were, and their own emotions at the time of hearing the news of 9/11. Older adults remembered a vast majority of events from between the ages of 10 and 30, a period known as the "reminiscence bump". Events from this period occur during a time of identity formation and peak brain function, and tend to be talked about more than events occurring outside it. Flashbulb memories from the "reminiscence bump" are better remembered by older adults than memories of events that occurred more recently.[43]

    Cultural variations

    Generally the factors that influence flashbulb memories are considered to be constant across cultures. Tinti et al. (2009) conducted a study on memories of Pope John Paul II's death amongst Polish, Italian, and Swiss Catholics.[44] The results showed that personal involvement was most important in memory formation, followed by proximity to the event.

    Flashbulb memories differ among cultures in the degree to which certain factors influence their vividness. For example, Asian cultures de-emphasize individuality; therefore, Chinese and Japanese people might not be as affected by the effects of personal involvement on the vividness of flashbulb memories. A study conducted by Kulkofsky, Wang, Conway, Hou, Aydin, Johnson, and Williams (2011) investigated the formation of flashbulb memories in five countries: China, the United Kingdom, the United States, Germany, and Turkey. Overall, participants in the United States and the United Kingdom reported more memories in a five-minute span than participants from Germany, Turkey, and China. This could simply be because different cultures have different memory-search strategies. In terms of flashbulb memories, Chinese participants were less affected by all factors related to personal closeness and involvement with the event. There were also cultural variations in the effects of emotional intensity and surprise.[44]

    Gender

    Some studies in this area have yielded findings indicating that women produce more vivid details of events, and recall more autobiographical events elicited by Senate hearings, than men. One such study had participants fill out questionnaires about flashbulb memories and recollections of autobiographical events pertaining to the Senate hearings that confirmed Clarence Thomas as a Supreme Court Justice (Morse, 1993).[45] The study found that half of the individuals reported vivid memory images associated with the hearings: 64% of women reported images, as opposed to 33% of men. 77% of women reported having had stimulated recall of an autobiographical event, while only 27% of men indicated having experienced such recall. Women were also more likely than men to report additional imagery (24% of women and 6% of men). Although women were more likely than men to report vivid image memories and recall of autobiographical events elicited by the hearings, they did not differ from men in their ratings of these memories, and there was no difference in the average amount of time spent consuming media coverage of the hearings.

    A large body of research was conducted on events taking place during the 9/11 terrorist attacks, although it was not specifically aimed at gender differences. In one study, researchers had participants answer questions to establish a "consistent flashbulb memory," consisting of details such as where the participants were at the time of the attacks and what they were doing. In 2002, 49% of women and 47% of men fulfilled these requirements; by 2003, this had dropped to 46% of women and 44% of men (Conway, 2009).[46] Women thus seemed slightly more likely than men to have a consistent memory for the event in this study, and a longer time since the incident decreased the consistency of the memory. However, a study aimed at finding whether a series of terrorist attacks with common features elicits flashbulb memories found a different pattern of gender effects: men rated the distinctiveness of their flashbulb-producing event higher than women did, and men's memories contained more detail than women's. Women, however, reported higher rates of emotional reactivity.[47]

    Biological reasons for gender differences in flashbulb memory may be explained by amygdala asymmetry. The amygdala is part of the limbic system and is linked with memory and emotion. Memory is enhanced by emotion, and studies have shown that people are more likely to remember a negative event than a neutral or positive one. Investigations into the amygdala revealed that "people who showed strong amygdala activation in response to a set of positive or negative stimuli (relative to other study participants) also showed superior memory for those stimuli (relative to other study participants)".[48] This may explain why flashbulb memory typically involves traumatic events. When viewing emotional content, research has shown that men enhance their memory by activating their right amygdala while women activate the left.[48] Although it is still unclear how lateralization affects memory, the relationship between left-amygdala activation and memory may be stronger than that between right-amygdala activation and memory.[medical citation needed] Generally speaking, studies testing gender differences on episodic memory tasks revealed that "women consistently outperform men on tasks that require remembering items that are verbal in nature or can be verbally labeled" (Herlitz, 2008).[49] Women also seem to "excel on tasks requiring little or no verbal processing, such as recognition of unfamiliar odors or faces" (Herlitz, 2008).[49] Men seem to excel only in memory tasks that require visuospatial processing. Gender differences are also very apparent in research on autobiographical memory. To sum up these gender differences, most literature on memory indicates that:[50]

    Women use a greater quantity and variety of emotion words than men when describing their past experiences ... Women include not only a greater number of references to their own emotional states but also a greater number of references to the emotional states of others. In addition, when asked to recall emotional life experiences, women recall more memories of both positive and negative personal experiences than men.

    — Bloise & Johnson, 2007

    Overall, women seem to have better memory performance than men for both emotional and non-emotional events.[50]

    There are many problems with assessing the gender differences found in research on this topic. The clearest is that it relies heavily on self-reporting of events; inaccurate findings could result from biased questions or from participants misremembering, and there is no way to completely verify the accuracy of the accounts given by the subjects in a study. Additionally, there are many indications that eyewitness memory can be fallible. Emotion does not seem to improve memory performance in situations involving weapons: eyewitnesses remember fewer details about perpetrators when a weapon is involved in an event (Pickel, 2009),[51] a phenomenon known as the weapon focus effect. Further complicating matters is the time frame in which people are surveyed in relation to the event, as many studies survey people well after the events. Thus, there is a validity issue with much of the research into flashbulb memory in general, as well as with any apparent gender differences.

    Improvement

    A number of studies have found that flashbulb memories are formed immediately after a life-changing event happens or when news of the event is relayed.[52] Although additional information about the event can then be researched or learned, the extra information is often lost from memory due to different encoding processes. A more recent study, examining the effects of the media on flashbulb memories of the September 11, 2001 attacks, shows that extra information may help retain vivid flashbulb memories. Although the researchers found that memory for the event decreased over time for all participants, looking at images had a profound effect on participants' memory. Those who said they saw images of the September 11 attacks immediately retained much more vivid images six months later than those who said they saw images only hours after hearing about the attacks; the latter participants failed to encode the images along with the original learning of the event. Thus, it may be the images themselves that led some participants to recall more details of the event. Graphic images may make an individual associate more with the horror and scale of a tragic event and hence produce a more elaborate encoding mechanism.[52] Furthermore, looking at images may help individuals retain vivid flashbulb memories months, and perhaps even years, after an event occurs.

    Controversy: special mechanism hypothesis

    The special-mechanism hypothesis has been the subject of considerable discussion in recent years, with some authors endorsing it and others noting potential problems. This hypothesis divides memory processes into different categories, positing that distinct mechanisms underlie flashbulb memories. Yet many argue that flashbulb memories are simply the product of multiple, unique factors coalescing.[5]

    Supporting evidence

    Data concerning people's recollections of the Reagan assassination attempt provide support for the special-mechanism hypothesis.[32] People had highly accurate accounts of the event and had lost very few details regarding the event several months after it occurred. Additionally, an experiment examining emotional state and word valence found that people are better able to remember irrelevant information when they are in a negative, shocked state.[53] There is also neurological evidence in support of a special mechanism view. Emotionally neutral autobiographical events, such as a party, were compared with two emotionally arousing events: Princess Diana's death, and Mother Teresa's death. Long-term memory for the contextual details of an emotionally neutral autobiographical event was related to medial temporal lobe function and correlated with frontal lobe function, whereas there was no hint of an effect of either medial temporal lobe or frontal lobe function on memory for the two flashbulb events. These results indicate that there might be a special neurobiological mechanism associated with emotionally arousing flashbulb memories.[1]

    Opposing evidence

    Studies have shown that flashbulb memories can result from non-surprising events,[8] such as the first moon landing,[54] and also from non-consequential events. While Brown and Kulik defined flashbulb memories as memories of first learning about a shocking event, they expand their discussion to include personal events in which the memory is of the event itself. Simply asking participants to retrieve vivid, autobiographical memories has been shown to produce memories that contain the six features of flashbulb memories.[27] Therefore, it has been proposed that such memories be viewed as products of ordinary memory mechanisms.[6] Moreover, flashbulb memories have been shown to be susceptible to errors in reconstructive processes, specifically systematic bias.[55] It has been suggested that flashbulb memories are not especially resistant to forgetting.[56][16][17] A number of studies suggest that flashbulb memories are not especially accurate, but that they are experienced with great vividness and confidence.[16][17] Therefore, it is argued that it may be more precise to define flashbulb memories as extremely vivid autobiographical memories. Although they are often memories of learning about a shocking public event, they are not limited to such events, and not all memories of learning about shocking public events produce flashbulb memories.[57]

    Models

    The photographic model

    Brown and Kulik proposed the term flashbulb memory, along with the first model of the process involved in developing what they called flashbulb accounts.[2] The photographic model proposes that for a flashbulb account to occur in the presence of a stimulus event, there must be a high level of surprise, consequentiality, and emotional arousal. Specifically, at the time an individual first hears of an event, the degree of unexpectedness and surprise is the first step in the registration of the event. The next step in the registration of flashbulb accounts is the degree of consequentiality, which in turn triggers a certain level of emotional arousal. Brown and Kulik described consequentiality as the things one imagines might have gone differently had the event not occurred, or the consequences the event had for an individual's life.[2] Further, Brown and Kulik believed that high levels of these variables would also result in frequent rehearsal, either covert ("always on the mind") or overt (e.g., talked about in conversations with others). Rehearsal, which acts as a mediating process in the development of a flashbulb account, creates stronger associations and more elaborate accounts. Therefore, the flashbulb memory becomes more accessible and is vividly remembered for a long period of time.[2]

    Comprehensive model

    Some researchers recognized that previous studies of flashbulb memories were limited by their reliance on small samples of few nationalities, restricting comparisons of memory consistency across different variables. The Comprehensive Model was born out of experimentation similar to Brown and Kulik's, but with a larger participant sample. One major difference between the two models is that the Photographic Model follows more of a step-by-step process in the development of flashbulb accounts, whereas the Comprehensive Model demonstrates an interconnected relationship between the variables. Specifically, knowledge of and interest in the event affect the level of personal importance for the individual, which also affects the individual's level of emotional arousal (affect). Furthermore, knowledge and interest pertaining to the event, as well as the level of importance, contribute to the frequency of rehearsal. Therefore, high levels of knowledge and interest contribute to high levels of personal importance and affect, as well as a high frequency of rehearsal. Finally, affect and rehearsal play major roles in creating associations, thus enabling the individual to remember vivid attributes of the event, such as the people, the place, and the description of the situation.[58]

    Emotional-integrative model

    The Emotional-Integrative Model of flashbulb memories integrates the two previously discussed models, the Photographic Model and the Comprehensive Model.[34] Similar to the Photographic Model, the Emotional-Integrative Model states that the first step toward the registration of a flashbulb memory is an individual's degree of surprise associated with the event. This level of surprise triggers an emotional feeling state, which also results from the combination of the importance (consequentiality) of the event to the individual and the individual's affective attitude. The emotional feeling state directly contributes to the creation of a flashbulb memory. To strengthen the association, thus enabling the individual to vividly remember the event, the emotional feeling state and affective attitude contribute to overt rehearsal (a mediator), which strengthens the memory of the original event and, in turn, determines the formation of a flashbulb memory.[34] According to the Emotional-Integrative Model, flashbulb memories can also be formed for expected events;[59] in this case their formation depends greatly on a strong emotional relationship to the event and on rehearsal of the memory.[59]

    Importance-driven emotional reactions model

    This model emphasizes that personal consequences determine the intensity of emotional reactions.[33] These consequences are, therefore, critical operators in the formation and maintenance of flashbulb memories. The model was based on whether or not traumatic events were experienced during the Marmara earthquake. According to the findings of this study, the memories of the people who experienced the earthquake were preserved as a whole and unchanged over time. Results of the re-test showed that the long-term memories of the victim group were more complete, more durable, and more consistent than those of the comparison group. On the basis of this study, a new model was formed highlighting that consequences play a very large role in the formation of flashbulb memories.[33]

    Compared to traumatic memories

    Flashbulb memories are engendered by highly emotional, surprising events, but they differ from traumatic memories in that they do not generally contain an emotional response. Traumatic memories involve some element of fear or anxiety; while flashbulb memories can include components of negative emotion, these elements are generally absent.

    There are some similarities between traumatic and flashbulb memories. During a traumatic event, high arousal can increase attention to central information, leading to increased vividness and detail; another shared characteristic is that memory for traumatic events is enhanced by emotional stimuli. A further difference between the two is the amount of information about unimportant details that is encoded in the memory of the event: in high-stress situations, arousal dampens memory for peripheral information such as context, location, time, or other less important details.[60] In other words, flashbulb memories are described as acute awareness of where a person was and what they were doing when a significant or traumatic event occurred, and are not characterized by strong emotion, while traumatic memories are accompanied by highly negative emotions such as anxiety, fear, and panic when the related event is recalled.[2]

    Neurological bases

    Amygdala

    [Image: The amygdala, highlighted in red]

    Laboratory studies have related specific neural systems to the influence of emotion on memory. Cross-species investigations have shown that emotional arousal causes neurohormonal changes, which engage the amygdala. The amygdala modulates the encoding, storage, and retrieval of episodic memory.[22][61][62][63][64] These memories are later retrieved with an enhanced recollective experience,[22][65] similar to the recollection of flashbulb memories. The amygdala, therefore, may be important in the encoding and retrieval of memories for emotional public events. Since the role of the amygdala in memory is associated with increased arousal induced by the emotional event,[66] factors that influence arousal should also influence the nature of these memories. The constancy of flashbulb memories over time varies based on the individual factors related to the arousal response, such as emotional engagement[32][15] and personal involvement with the shocking event.[16] The strength of amygdala activation at retrieval has been shown to correlate with an enhanced recollective experience for emotional scenes, even when accuracy is not enhanced.[22] Memory storage is increased by endocrine responses to shocking events; the more shocking an individual finds an event, the more likely it is that a vivid flashbulb memory will develop.

    There has been considerable debate as to whether unique mechanisms are involved in the formation of flashbulb memories, or whether ordinary memory processes are sufficient to account for memories of shocking public events. Sharot et al. found that for individuals who were close to the World Trade Center, the retrieval of 9/11 memories engaged neural systems that are uniquely tied to the influence of emotion on memory. The engagement of these emotional memory circuits is consistent with the unique limbic mechanism that Brown and Kulik[2] suggested. These are the same neural mechanisms, however, engaged during the retrieval of emotional stimuli in the laboratory.[22] The consistency in the pattern of neural responses during the retrieval of emotional scenes presented in the laboratory and flashbulb memories suggests that even though different mechanisms may be involved in flashbulb memories, these mechanisms are not unique to the surprising and consequential nature of the initiating events.

    Evidence indicates the importance of the amygdala in the retrieval of 9/11 events, but only among individuals who personally experienced these events.[22] The amygdala's influence on episodic memory is explicitly tied to physiological arousal.[66] Although simply hearing about shocking public events may result in arousal, the strength of this response likely varies depending on the individual's personal experience with the events.

    Critique of research

    Flashbulb memory research tends to focus on public events that have a negative valence; there is a shortage of studies on personal events such as accidents or trauma. This is due to the nature of the variables needed for flashbulb memory research: the experience of a surprising event is hard to manipulate.[citation needed] It is also very difficult to conduct experiments on flashbulb memories because of the lack of control over the events; in an empirical study, the amount of rehearsal is very difficult to control.

    Some researchers also argue that the effect of rehearsal factors on individual memory is different with respect to the availability of the mass media across different societies.[67]

    References


  • Davidson, Patrick S. R.; Glisky, Elizabeth L. (2002). "Is flashbulb memory a special instance of source memory? Evidence from older adults" (PDF). Memory. 10 (2): 99–111. doi:10.1080/09658210143000227. PMID 11798440. S2CID 10400226.

  • Brown, Roger; Kulik, James (1977). "Flashbulb memories". Cognition. 5 (1): 73–99. doi:10.1016/0010-0277(77)90018-X. S2CID 53195074.

  • Robinson-Riegler, Bridget (2012). Cognitive Psychology. Boston: Allyn & Bacon. pp. 297–299. ISBN 978-0-205-03364-5.

  • Conway, Martin A. (1995). Flashbulb memories (Essays in cognitive psychology). ISBN 978-0863773532.

  • Pillemer, D. B. (1990). "Clarifying flashbulb memory concept: Comment on McCloskey, Wible, and Cohen (1988)". Journal of Experimental Psychology: General. 119 (1): 92–96. doi:10.1037/0096-3445.119.1.92.

  • McCloskey, Michael; Wible, Cynthia G.; Cohen, Neal J. (June 1988). "Is there a special flashbulb-memory mechanism?" (PDF). Journal of Experimental Psychology: General. 117 (2): 171–181. doi:10.1037/0096-3445.117.2.171. Archived from the original (PDF) on 2011-07-20.

  • Weaver, Charles A. (March 1993). "Do you need a "flash" to form a flashbulb memory?". Journal of Experimental Psychology: General. 122: 39–46. doi:10.1037/0096-3445.122.1.39. S2CID 144337190.

  • Neisser, U. (1982). "Snapshots or benchmarks", Memory Observed: Remembering in Natural Contexts, ed. 43–48, San Francisco: Freeman

  • Cohen, N; McCloskey, M.; Wible, C. (1990). "Flashbulb memories and underlying cognitive mechanisms: Reply to Pillemer". Journal of Experimental Psychology. 119: 97–100. doi:10.1037/0096-3445.119.1.97.

  • Er, N. (2003). "A new flashbulb memory model applied to the Marmara earthquake". Applied Cognitive Psychology. 17 (5): 503–517. doi:10.1002/acp.870.

  • Rubin, David C.; Berntsen, Dorthe; Deffler, Samantha A.; Brodar, Kaitlyn (1 January 2019). "Self-narrative focus in autobiographical events: The effect of time, emotion, and individual differences". Memory & Cognition. 47 (1): 63–75. doi:10.3758/s13421-018-0850-4. ISSN 1532-5946. PMC 6353681. PMID 30144002. S2CID 52080284.

  • Bohn, A.; Berntsen, D. (April 2007). "Pleasantness bias in flashbulb memories: Positive and negative flashbulb memories of the fall of the Berlin Wall among East and West Germans" (PDF). Memory & Cognition. 35 (3): 565–577. doi:10.3758/BF03193295. PMID 17691154.

  • Roehm Jr., Harper A.; Roehm, Michelle L. (January 2007). "Can brand encounters inspire flashbulb memories?". Psychology and Marketing. 24 (1): 25–40. doi:10.1002/mar.20151.

  • Sharot T.; Martorella A.; Delgado R.; Phelps A. (2006). "How Personal experience modulates the neural circuitry of memories of September 11". Proceedings of the National Academy of Sciences. 104 (1): 389–394. doi:10.1073/pnas.0609230103. PMC 1713166. PMID 17182739.

  • Schmolck, H.; Buffalo, E. A.; Squire, L. R. (January 2000). "Memory Distortions Develop over Time: Recollections of the O. J. Simpson Trial Verdict After 15 and 32 Months" (PDF). Psychological Science. 11 (1): 39–45. doi:10.1111/1467-9280.00212. PMID 11228841. S2CID 6918395.

  • Neisser, U.; Winograd, E.; Bergman, E. T.; Schreiber, C. A.; Palmer, S. E.; Weldon, M. S. (July 1996). "Remembering the earthquake: direct experience vs. hearing the news". Memory. 4 (4): 337–357. doi:10.1080/096582196388898. PMID 8817459.

  • Talarico, J. M.; Rubin, D. C. (September 2003). "Confidence, not consistency, characterizes flashbulb memories" (PDF). Psychological Science. 14 (5): 455–461. doi:10.1111/1467-9280.02453. hdl:10161/10118. JSTOR 40064167. PMID 12930476. S2CID 14643427.

  • Day, Martin V.; Ross, Michael (2014-04-03). "Predicting confidence in flashbulb memories". Memory. 22 (3): 232–242. doi:10.1080/09658211.2013.778290. ISSN 0965-8211. PMID 23496003. S2CID 31186142.

  • Bohannon, John Neil; Symons, Victoria Louise (1992-10-30), "Flashbulb memories: Confidence, consistency, and quantity", Affect and Accuracy in Recall, Cambridge University Press, pp. 65–92, doi:10.1017/cbo9780511664069.005, ISBN 978-0-521-40188-3

  • Talarico, Jennifer M.; Rubin, David C. (July 2007). "Flashbulb memories are special after all; in phenomenology, not accuracy" (PDF). Applied Cognitive Psychology. 21 (5): 557–578. CiteSeerX 10.1.1.726.6517. doi:10.1002/acp.1293. hdl:10161/10092.

  • Coluccia, Emanuele; Bianco, Carmela; Brandimonte, Maria A. (February 2010). "Autobiographical and event memories for surprising and unsurprising events". Applied Cognitive Psychology. 24 (2): 177–199. doi:10.1002/acp.1549.

  • Sharot, Tali; Delgado, Mauricio R.; Phelps, Elizabeth A. (December 2004). "How emotion enhances the feeling of remembering" (PDF). Nature Neuroscience. 7 (12): 1376–1380. doi:10.1038/nn1353. PMID 15558065. S2CID 1877981.

  • Bohannon III, John Neil (July 1988). "Flashbulb memories for the space shuttle disaster: A tale of two theories". Cognition. 29 (2): 179–196. doi:10.1016/0010-0277(88)90036-4. PMID 3168421. S2CID 41552464.

  • Neisser, U. & Harsch, N. (1992). "Phantom flashbulbs: False recollections of hearing the news about Challenger", Affect and Accuracy in Recall: Studies of flashbulb memories, ed. 9–31, New York: Cambridge University Press

  • Weaver, C. (1993). "Do you need a "flash" to form a flashbulb memory?". Journal of Experimental Psychology. 122: 39–46. doi:10.1037/0096-3445.122.1.39.

  • Davidson, P. S. R.; Cook, S. P.; Glisky, E. L. (June 2006). "Flashbulb memories for September 11th can be preserved in older adults" (PDF). Aging, Neuropsychology, and Cognition. 13 (2): 196–206. doi:10.1080/13825580490904192. PMC 2365738. PMID 16807198.

  • Rubin, David C.; Kozin, Marc (February 1984). "Vivid memories". Cognition. 16 (1): 81–95. doi:10.1016/0010-0277(84)90037-4. PMID 6540650. S2CID 39562420.

  • Kvavilashvili, L.; Mirani, J.; Schlagman, S.; Erskine, J. A. K.; Kornbrot, D. E. (June 2010). "Effects of age on phenomenology and consistency of flashbulb memories of September 11 and a staged control event". Psychology and Aging. 25 (2): 391–404. doi:10.1037/a0017532. hdl:2299/10440. PMID 20545423.

  • Lanciano, T.; Curci, A. (2012). "Type or dimension? A taxometric investigation of flashbulb memories". Memory. 20 (2): 177–188. doi:10.1080/09658211.2011.651088. PMID 22313420. S2CID 24862794.

  • Curci, A.; Lanciano, T. (April 2009). "Features of Autobiographical Memory: Theoretical and Empirical Issues in the Measurement of Flashbulb Memory". The Journal of General Psychology. 136 (2): 129–150. doi:10.3200/GENP.136.2.129-152. PMID 19350832. S2CID 24421614.

  • Kvavilashvili, Lia; Mirani, Jennifer; Schlagman, Simone; Kornbrot, Diana E. (November–December 2003). "Comparing flashbulb memories of September 11 and the death of Princess Diana: Effects of time delays and nationality". Applied Cognitive Psychology. 17 (9): 1017–1031. doi:10.1002/acp.983.

  • Pillemer, David B. (February 1984). "Flashbulb memories of the assassination attempt on President Reagan". Cognition. 16 (1): 63–80. doi:10.1016/0010-0277(84)90036-2. PMID 6540649. S2CID 32368291.

  • Er, Nurhan (July 2003). "A new flashbulb memory model applied to the Marmara earthquake" (PDF). Applied Cognitive Psychology. 17 (5): 503–517. doi:10.1002/acp.870.

  • Finkenauer, C.; Luminet, O.; Gisle, L.; El-Ahmadi, A.; Van Der Linden, M.; Philippot, P. (May 1998). "Flashbulb memories and the underlying mechanisms of their formation: Toward an emotional-integrative model" (PDF). Memory & Cognition. 26 (3): 516–531. doi:10.3758/bf03201160. PMID 9610122.

  • Tinti, Carla; Schmidt, Susanna; Sotgiu, Igor; Testa, Silvia; Curci, Antonietta (February 2009). "The role of importance/consequentiality appraisal in flashbulb memory formation: The case of the death of Pope John Paul II". Applied Cognitive Psychology. 23 (2): 236–253. doi:10.1002/acp.1452. hdl:2318/28583.

  • Brewer, W. (1988) "Memory for randomly sampled autiobiographical events." In U. Neisser & E. Winograd (Eds.), Remembering reconsidered: Ecological and traditional approaches to the study of memory, 21–90. New York: Cambridge University Press

  • Edery-Halpern, G.; Nachson, I. (2004). "Distinctiveness in flashbulb memory: Comparative analysis of five terrorist attacks". Memory. 12 (2): 147–157. doi:10.1080/09658210244000432. PMID 15250180. S2CID 31338900.

  • Bohannon III, John Neil; Gratz, Sami; Cross, Victoria Symons (December 2007). "The effects of affect and input source on flashbulb memories". Applied Cognitive Psychology. 21 (8): 1023–1036. doi:10.1002/acp.1372.

  • "From the archive: 'Crashing memories and the problem of "source monitoring"' by H. F. M. Crombag, W. A. Wagenaar, & P. J. van Koppen (1996). Applied Cognitive Psychology, 10, 95-104 with commentary". Applied Cognitive Psychology. Wiley. 25 (S1): S91–S101. January 2011. doi:10.1002/acp.1779. ISSN 0888-4080.

  • Cohen, G; Conway, M.; Maylor, E. (1993). "Flashbulb memories in older adults". Psychology and Aging. 9 (3): 454–63. doi:10.1037/0882-7974.9.3.454. PMID 7999330.

  • Kvavilashili, L; Mirani, J.; Schlagman, S.; Erskine, J.; Kornbrot, D. (2010). "Effects of age on phenomenology and consistency of flashbulb memories of September 11 and a staged control event". Psychology and Aging. 25 (2): 391–404. doi:10.1037/a0017532. hdl:2299/10440. PMID 20545423.

  • Conway, A.; Skitka, L.; Hemmerich, J.; Kershaw, T. (2009). "Flashbulb memory for 11 September 2001". Applied Cognitive Psychology. 23 (5): 605–23. doi:10.1002/acp.1497.

  • Denver, J. Y.; Lane, S. M.; Cherry, K. E. (2010). "Recent versus remote: Flashbulb memory for 9/11 and self-selected events from the reminiscence bump". The International Journal of Aging & Human Development. 70 (4): 275–297. doi:10.2190/AG.70.4.a. PMID 20649160. S2CID 22519766.

  • Kulkofsky, S; Wang, Q.; Conway, M.; Hou, Y.; Aydin, C.; Johnson, K.; Williams, H. (2011). "Cultural variation in the correlates of flashbulb memories: An investigation in five countries". Memory. 19 (3): 233–240. doi:10.1080/09658211.2010.551132. PMID 21500085. S2CID 14894179.

  • Morse, Claire K.; Woodward, Elizabeth M.; Zweigenhaft, R. L. (August 1993). "Gender Differences in Flashbulb Memories Elicited by the Clarence Thomas Hearings". The Journal of Social Psychology. 133 (4): 453–458. doi:10.1080/00224545.1993.9712169. PMID 8231123.

  • Conway, Andrew R. A.; Skitka, Linda J.; Hemmerich, Joshua A.; Kershaw, Trina C. (July 2009). "Flashbulb memory for 11 September 2001" (PDF). Applied Cognitive Psychology. 23 (5): 605–623. doi:10.1002/acp.1497. Archived from the original (PDF) on 25 April 2012. Retrieved 15 February 2013.

  • Edery‐Halpern, Galit; Nachson, Israel (March 2004). "Distinctiveness in flashbulb memory: Comparative analysis of five terrorist attacks". Memory. 12 (2): 147–157. doi:10.1080/09658210244000432. ISSN 0965-8211. PMID 15250180. S2CID 31338900.

  • Kensinger, Elizabeth A. (August 2007). "Negative Emotion Enhances Memory Accuracy: Behavioral and Neuroimaging Evidence" (PDF). Current Directions in Psychological Science. 16 (4): 213–218. doi:10.1111/j.1467-8721.2007.00506.x. S2CID 16885166.

  • Herlitz, Agneta; Rehnman, Jenny (February 2008). "Sex Differences in Episodic Memory" (PDF). Current Directions in Psychological Science. 17 (1): 52–56. doi:10.1111/j.1467-8721.2008.00547.x. S2CID 145107751. Archived from the original (PDF) on 2012-02-27. Retrieved 2013-02-18.


     https://en.wikipedia.org/wiki/Flashbulb_memory

    Eidetic memory (/aɪˈdɛtɪk/ eye-DET-ik; also known as photographic memory and total recall) is the ability to recall an image from memory with high precision—at least for a brief period of time—after seeing it only once[1] and without using a mnemonic device.[2]

    Although the terms eidetic memory and photographic memory are popularly used interchangeably,[1] they are also distinguished, with eidetic memory referring to the ability to see an object for a few minutes after it is no longer present[3][4] and photographic memory referring to the ability to recall pages of text or numbers, or similar, in great detail.[5][6] When the concepts are distinguished, eidetic memory is reported to occur in a small number of children and is generally not found in adults,[3][7] while true photographic memory has never been demonstrated to exist.[6][8]

    The word eidetic comes from the Greek word εἶδος (pronounced [êːdos], eidos) "visible form".[9] 

    https://en.wikipedia.org/wiki/Eidetic_memory

    Echoic memory is the sensory memory register specific to auditory information (sounds). Once an auditory stimulus is heard, it is stored in memory so that it can be processed and understood.[1] Unlike most visual memory, where a person can choose how long to view the stimulus and can reassess it repeatedly, auditory stimuli are usually transient and cannot be reassessed. Since echoic memories are heard once, they are stored for slightly longer periods of time than iconic memories (visual memories).[2] Auditory stimuli are received by the ear one at a time before they can be processed and understood.

    It can be said that the echoic memory is conceptually like a "holding tank", where a sound is unprocessed (or held back) until the following sound is heard, and only then can it be made meaningful.[3] This particular sensory store is capable of storing large amounts of auditory information that is only retained for a short period of time (3–4 seconds). This echoic sound resonates in the mind and is replayed for this brief amount of time shortly after being heard.[4] Echoic memory encodes only moderately primitive aspects of the stimuli, for example pitch, which specifies localization to the non-association brain regions.[5] 

    https://en.wikipedia.org/wiki/Echoic_memory

    Psychoacoustics is the branch of psychophysics involving the scientific study of sound perception and audiology, that is, how the human auditory system perceives various sounds. More specifically, it is the branch of science studying the psychological responses associated with sound (including noise, speech, and music). Psychoacoustics is an interdisciplinary field of many areas, including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.[1]

    Background

    Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event; in other words, when a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. The outer hair cells (OHC) of a mammalian cochlea give rise to enhanced sensitivity and sharper frequency resolution in the mechanical response of the cochlear partition. These nerve pulses then travel to the brain where they are perceived. Hence, in many problems in acoustics, such as audio processing, it is advantageous to take into account not just the mechanics of the environment, but also the fact that both the ear and the brain are involved in a person's listening experience.

    The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli, so certain differences between waveforms may be imperceptible.[2] Data compression techniques, such as MP3, make use of this fact.[3] In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission and then expanding them for playback.[4] Another effect of the ear's nonlinear response is that sounds that are close in frequency produce phantom beat notes, or intermodulation distortion products.[5]

    The term psychoacoustics also arises in discussions about cognitive psychology and the effects that personal expectations, prejudices, and predispositions may have on listeners' relative evaluations and comparisons of sonic aesthetics and acuity and on listeners' varying determinations about the relative qualities of various musical instruments and performers. The expression that one "hears what one wants (or expects) to hear" may pertain in such discussions.

    Limits of perception

    An equal-loudness contour. Note peak sensitivity around 2–4 kHz, in the middle of the voice frequency band.

    The human ear can nominally hear sounds in the range 20 Hz (0.02 kHz) to 20,000 Hz (20 kHz). The upper limit tends to decrease with age; most adults are unable to hear above 16 kHz. The lowest frequency that has been identified as a musical tone is 12 Hz under ideal laboratory conditions.[6] Tones between 4 and 16 Hz can be perceived via the body's sense of touch.

    Frequency resolution of the ear is about 3.6 Hz within the octave of 1000–2000 Hz. That is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting.[6] However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in the volume of the tone. This amplitude modulation occurs with a frequency equal to the difference in frequencies of the two tones and is known as beating.
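    The beating described above follows from the sum-to-product identity sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2): two close tones are equivalent to a carrier at their mean frequency, amplitude-modulated at half their difference. A quick numerical check (a sketch, with arbitrarily chosen tone frequencies):

    ```python
    import math

    def two_tone(t, f1, f2):
        """Sum of two equal-amplitude sine tones at frequencies f1 and f2."""
        return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

    def beat_form(t, f1, f2):
        """Same signal rewritten as a carrier at the mean frequency,
        modulated by an envelope at half the difference frequency."""
        carrier = math.sin(2 * math.pi * (f1 + f2) / 2 * t)
        envelope = 2 * math.cos(2 * math.pi * (f1 - f2) / 2 * t)
        return envelope * carrier

    # The two forms agree at every instant; the perceived loudness
    # fluctuation (the magnitude of the envelope) repeats at
    # |f1 - f2| = 4 Hz for tones at 440 Hz and 444 Hz.
    for n in range(1000):
        t = n / 8000.0
        assert abs(two_tone(t, 440, 444) - beat_form(t, 440, 444)) < 1e-9
    ```

    The listener hears the slow envelope, not two separate pitches, which is why a 4 Hz difference is perceived as a 4-per-second loudness wobble.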

    The semitone scale used in Western musical notation is not a linear frequency scale but logarithmic. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and Bark scale (these are used in studying perception, but not usually in musical composition), and these are approximately logarithmic in frequency at the high-frequency end, but nearly linear at the low-frequency end.
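    One common analytic form of the mel scale (the O'Shaughnessy variant; several others appear in the literature) makes the near-linear-then-logarithmic behavior concrete:

    ```python
    import math

    def hz_to_mel(f):
        """A common mel-scale formula: mel = 2595 * log10(1 + f/700).
        One of several variants; chosen here for illustration."""
        return 2595.0 * math.log10(1.0 + f / 700.0)

    # Near-linear at the low end: doubling 50 Hz -> 100 Hz
    # roughly doubles the mel value...
    low_ratio = hz_to_mel(100) / hz_to_mel(50)       # close to 2
    # ...but logarithmic at the high end: doubling 5 kHz -> 10 kHz
    # increases the mel value far less than proportionally.
    high_ratio = hz_to_mel(10000) / hz_to_mel(5000)  # well under 2
    ```

    This formula is also calibrated so that 1000 Hz maps to approximately 1000 mels, a conventional anchor point for the scale.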

    The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in the sound pressure and can detect pressure changes from as small as a few micropascals (μPa) to greater than 100 kPa. For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 μPa (or 1.97385×10−10 atm). The lower limit of audibility is therefore defined as 0 dB, but the upper limit is not as clearly defined. The upper limit is more a question of the limit where the ear will be physically harmed or with the potential to cause noise-induced hearing loss.
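    The logarithmic level convention can be sketched in a few lines; the 94 dB SPL figure for 1 Pa is a standard calibration point that follows directly from the 20 μPa reference:

    ```python
    import math

    P_REF = 20e-6  # reference pressure: 20 micropascals

    def pascals_to_db_spl(p):
        """Sound pressure level in dB re 20 uPa."""
        return 20.0 * math.log10(p / P_REF)

    # The reference pressure itself sits at 0 dB SPL by definition,
    # and 1 Pa works out to about 94 dB SPL, which is why acoustic
    # calibrators commonly emit a 94 dB tone.
    ```

    Because each factor of 10 in pressure adds 20 dB, the enormous μPa-to-kPa range collapses to a manageable scale of roughly 0–194 dB.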

    A more rigorous exploration of the lower limits of audibility determines that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for testing tones of various frequencies, a frequency-dependent absolute threshold of hearing (ATH) curve may be derived. Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between 1–5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz.[7]
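    A widely quoted analytic approximation to the ATH curve, due to Terhardt and used in many perceptual coders, captures this frequency dependence; the sketch below assumes that approximation rather than measured data:

    ```python
    import math

    def ath_db(f_hz):
        """Terhardt's analytic approximation of the absolute threshold
        of hearing, in dB SPL (f in Hz; valid roughly 20 Hz - 20 kHz)."""
        khz = f_hz / 1000.0
        return (3.64 * khz ** -0.8
                - 6.5 * math.exp(-0.6 * (khz - 3.3) ** 2)
                + 1e-3 * khz ** 4)

    # The threshold is high at low frequencies, dips below 0 dB SPL
    # near the ear's most sensitive region (a few kHz), then rises
    # steeply again toward 20 kHz.
    ```

    Evaluating it at 100 Hz, 1 kHz, and 3.3 kHz reproduces the dip in sensitivity the text describes: the quietest detectable sound near 3 kHz is tens of decibels below the quietest detectable sound at 100 Hz.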

    The ATH is the lowest of the equal-loudness contours. Equal-loudness contours indicate the sound pressure level (dB SPL), over the range of audible frequencies, that are perceived as being of equal loudness. Equal-loudness contours were first measured by Fletcher and Munson at Bell Labs in 1933 using pure tones reproduced via headphones, and the data they collected are called Fletcher–Munson curves. Because subjective loudness was difficult to measure, the Fletcher–Munson curves were averaged over many subjects.

    Robinson and Dadson refined the process in 1956 to obtain a new set of equal-loudness curves for a frontal sound source measured in an anechoic chamber. The Robinson–Dadson curves were standardized as ISO 226 in 1986. In 2003, ISO 226 was revised using equal-loudness data collected from 12 international studies.

    Sound localization

    Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in loudness, tone and timing between the two ears to allow us to localize sound sources.[8] Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).[9] Humans, like most four-legged animals, are adept at detecting the direction of a sound in the horizontal plane, but less so in the vertical, because the ears are placed symmetrically. Some species of owls have their ears placed asymmetrically and can detect sound in all three planes, an adaptation for hunting small mammals in the dark.[10]

    Masking effects


    Suppose a listener can hear a given acoustical signal under silent conditions. When another sound (a masker) is played at the same time, the signal must be stronger for the listener to hear it: the masker raises the signal's detection threshold. The masker does not need to contain the frequency components of the original signal for masking to occur, and a signal that exceeds this raised threshold can still be heard even when it is weaker than the masker. Masking happens when a signal and a masker are played together (for instance, when one person whispers while another shouts) and the listener cannot hear the weaker signal because it has been masked by the louder one. Masking can also affect a signal before a masker starts (backward masking) or after a masker stops (forward masking); for example, a single sudden loud clap can render inaudible sounds that immediately precede or follow it. The effect of backward masking is weaker than that of forward masking. Masking has been widely studied in psychoacoustical research: by varying the level of the masker and measuring the detection threshold, one can plot a psychophysical tuning curve that reveals similar features. Masking effects are also used in lossy audio encoding, such as MP3.
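    A toy model of simultaneous masking can make the raised-threshold idea concrete. The Hz-to-Bark conversion below is Zwicker's standard critical-band formula, but the 10 dB offset and the 10 dB-per-Bark slope of the spreading function are illustrative placeholders, not calibrated values:

    ```python
    import math

    def hz_to_bark(f):
        """Zwicker's Hz -> Bark (critical band rate) conversion."""
        return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

    def masked_threshold_db(masker_hz, masker_db, probe_hz,
                            offset_db=10.0, slope_db_per_bark=10.0):
        """Toy triangular spreading model: the masked threshold falls
        off linearly (in dB) with Bark distance from the masker.
        Offset and slope are illustrative, not measured, values."""
        dz = abs(hz_to_bark(probe_hz) - hz_to_bark(masker_hz))
        return masker_db - offset_db - slope_db_per_bark * dz

    def is_audible(probe_hz, probe_db, masker_hz, masker_db):
        """A probe tone is heard only if it exceeds the masked threshold."""
        return probe_db > masked_threshold_db(masker_hz, masker_db, probe_hz)

    # A 40 dB tone at 1.1 kHz sits under the threshold raised by an
    # 80 dB masker at 1 kHz (they are a fraction of a Bark apart),
    # while the same 40 dB tone at 4 kHz, many Bark away, is audible.
    ```

    Real coders replace the triangular spreading function with carefully measured, level-dependent curves, but the structure (convert to Bark, spread the masker's energy, compare the signal against the resulting threshold) is the same.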

    Missing fundamental

    When presented with a harmonic series of frequencies in the relationship 2f, 3f, 4f, 5f, etc. (where f is a specific frequency), humans tend to perceive that the pitch is f. An audible example can be found on YouTube.[11]
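    The effect has a simple signal-level explanation: a waveform containing only harmonics 2f, 3f, 4f, 5f still repeats with period 1/f, so a periodicity detector, such as the autocorrelation sketch below (with arbitrarily chosen sample rate and search range), reports the absent fundamental:

    ```python
    import math

    SR = 8000    # sample rate in Hz (arbitrary illustrative choice)
    F0 = 220.0   # the "missing" fundamental
    N = 800      # 0.1 s of signal

    # Build a signal from harmonics 2f..5f only -- no energy at F0 itself.
    x = [sum(math.sin(2 * math.pi * k * F0 * n / SR) for k in (2, 3, 4, 5))
         for n in range(N)]

    def acf(signal, lag):
        """Mean-normalized autocorrelation at a given lag (in samples)."""
        return sum(signal[n] * signal[n + lag]
                   for n in range(len(signal) - lag)) / (len(signal) - lag)

    # The strongest periodicity lands near SR / F0 ~ 36 samples:
    # the waveform repeats at the period of the fundamental that
    # was never included.
    best_lag = max(range(10, 51), key=lambda lag: acf(x, lag))
    estimated_pitch = SR / best_lag
    ```

    Pitch perception is more subtle than plain autocorrelation, but this captures why removing the component at f need not remove the pitch at f.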

    Software

    Perceptual audio coding uses psychoacoustics-based algorithms.

    The psychoacoustic model provides for high quality lossy signal compression by describing which parts of a given digital audio signal can be removed (or aggressively compressed) safely—that is, without significant losses in the (consciously) perceived quality of the sound.

    It can explain how a sharp clap of the hands might seem painfully loud in a quiet library but is hardly noticeable after a car backfires on a busy, urban street. This provides great benefit to the overall compression ratio, and psychoacoustic analysis routinely leads to compressed music files that are one-tenth to one-twelfth the size of high-quality masters, with a loss in quality far less than proportional to the size reduction. Such compression is a feature of nearly all modern lossy audio compression formats. Some of these formats include Dolby Digital (AC-3), MP3, Opus, Ogg Vorbis, AAC, WMA, MPEG-1 Layer II (used for digital audio broadcasting in several countries) and ATRAC, the compression used in MiniDisc and some Walkman models.

    Psychoacoustics is based heavily on human anatomy, especially the ear's limitations in perceiving sound as outlined previously: the finite range of audible frequencies, the frequency-dependent absolute threshold of hearing, and the masking of one sound by another.

    A compression algorithm can assign a lower priority to sounds outside the range of human hearing. By carefully shifting bits away from the unimportant components and toward the important ones, the algorithm ensures that the sounds a listener is most likely to perceive are most accurately represented.
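    The bit-shifting idea can be sketched as a toy per-band allocator, using the rule of thumb that each quantizer bit contributes roughly 6 dB of signal-to-noise ratio; the band levels and thresholds below are invented for illustration:

    ```python
    import math

    DB_PER_BIT = 6.0  # rule of thumb: ~6 dB of SNR per quantizer bit

    def allocate_bits(signal_db, threshold_db):
        """Bits needed to push quantization noise under the masking
        threshold in one band; zero if the band is inaudible anyway."""
        audible_margin = signal_db - threshold_db
        if audible_margin <= 0:
            return 0                      # band is masked: spend nothing
        return math.ceil(audible_margin / DB_PER_BIT)

    # (signal level, masked threshold) per band, in dB -- made-up numbers.
    bands = [(70, 30), (55, 50), (40, 45), (62, 20)]
    bits = [allocate_bits(s, t) for s, t in bands]
    # The third band (signal 40 dB, threshold 45 dB) is fully masked
    # and receives zero bits; loud, unmasked bands receive the most.
    ```

    Real encoders add a global bit budget and iterate, but the core decision is the same: spend bits only where quantization noise would rise above the masking threshold.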

    Music

    Psychoacoustics includes topics and studies that are relevant to music psychology and music therapy. Theorists such as Benjamin Boretz consider some of the results of psychoacoustics to be meaningful only in a musical context.[12]

    Irv Teibel's Environments series LPs (1969–79) are an early example of commercially available sounds released expressly for enhancing psychological abilities.[13]

    Applied psychoacoustics


    Psychoacoustics has long enjoyed a symbiotic relationship with computer science. Internet pioneers J. C. R. Licklider and Bob Taylor both completed graduate-level work in psychoacoustics, while BBN Technologies originally specialized in consulting on acoustics issues before it began building the first packet-switched network.

    Licklider wrote a paper entitled "A duplex theory of pitch perception".[14]

    Psychoacoustics is applied within many fields of software development, where developers map proven and experimental mathematical patterns in digital signal processing. Many audio compression codecs such as MP3 and Opus use a psychoacoustic model to increase compression ratios. The success of conventional audio systems for the reproduction of music in theatres and homes can be attributed to psychoacoustics,[15] and psychoacoustic considerations gave rise to novel audio systems, such as psychoacoustic sound field synthesis.[16] Furthermore, scientists have experimented with limited success in creating new acoustic weapons, which emit frequencies that may impair, harm, or kill.[17] Psychoacoustics is also leveraged in sonification to make multiple independent data dimensions audible and easily interpretable.[18] This enables auditory guidance without the need for spatial audio, and is used in sonification computer games[19] and other applications, such as drone flying and image-guided surgery.[20] It is also applied today within music, where musicians and artists continue to create new auditory experiences by masking unwanted frequencies of instruments, causing other frequencies to be enhanced. Yet another application is in the design of small or lower-quality loudspeakers, which can use the phenomenon of missing fundamentals to give the effect of bass notes at lower frequencies than the loudspeakers are physically able to produce (see references).

    Automobile manufacturers engineer their engines and even doors to have a certain sound.[21]


    References

    Notes


  • Ballou, G (2008). Handbook for Sound Engineers (Fourth ed.). Burlington: Focal Press. p. 43.

  • Christopher J. Plack (2005). The Sense of Hearing. Routledge. ISBN 978-0-8058-4884-7.

  • Lars Ahlzen; Clarence Song (2003). The Sound Blaster Live! Book. No Starch Press. ISBN 978-1-886411-73-9.

  • Rudolf F. Graf (1999). Modern dictionary of electronics. Newnes. ISBN 978-0-7506-9866-5.

  • Jack Katz; Robert F. Burkard & Larry Medwetsky (2002). Handbook of Clinical Audiology. Lippincott Williams & Wilkins. ISBN 978-0-683-30765-8.

  • Olson, Harry F. (1967). Music, Physics and Engineering. Dover Publications. pp. 248–251. ISBN 978-0-486-21769-7.

  • Fastl, Hugo; Zwicker, Eberhard (2006). Psychoacoustics: Facts and Models. Springer. pp. 21–22. ISBN 978-3-540-23159-2.

  • Thompson, Daniel M. Understanding Audio: Getting the Most out of Your Project or Professional Recording Studio. Boston, MA: Berklee, 2005. Print.

  • Roads, Curtis. The Computer Music Tutorial. Cambridge, MA: MIT, 2007. Print.

  • Lewis, D.P. (2007): Owl ears and hearing. Owl Pages [Online]. Available: http://www.owlpages.com/articles.php?section=Owl+Physiology&title=Hearing [2011, April 5]

  • Acoustic, Musical. "Missing Fundamental". YouTube. Archived from the original on 2021-12-20. Retrieved 19 August 2019.

  • Sterne, Jonathan (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham: Duke University Press. ISBN 9780822330134.

  • Cummings, Jim. "Irv Teibel died this week: Creator of 1970s "Environments" LPs". Earth Ear. Retrieved 18 November 2015.

  • Licklider, J. C. R. (January 1951). "A Duplex Theory of Pitch Perception" (PDF). The Journal of the Acoustical Society of America. 23 (1): 147. Bibcode:1951ASAJ...23..147L. doi:10.1121/1.1917296. Archived (PDF) from the original on 2016-09-02.

  • Ziemer, Tim (2020). "Conventional Stereophonic Sound". Psychoacoustic Music Sound Field Synthesis. Current Research in Systematic Musicology. Vol. 7. Cham: Springer. pp. 171–202. doi:10.1007/978-3-030-23033-3_7. ISBN 978-3-030-23033-3. S2CID 201142606.

  • Ziemer, Tim (2020). Psychoacoustic Music Sound Field Synthesis. Current Research in Systematic Musicology. Vol. 7. Cham: Springer. doi:10.1007/978-3-030-23033-3. ISBN 978-3-030-23032-6. ISSN 2196-6974. S2CID 201136171.

  • "Acoustic-Energy Research Hits Sour Note". Archived from the original on 2010-07-19. Retrieved 2010-02-06.

  • Ziemer, Tim; Schultheis, Holger; Black, David; Kikinis, Ron (2018). "Psychoacoustical Interactive Sonification for Short Range Navigation". Acta Acustica United with Acustica. 104 (6): 1075–1093. doi:10.3813/AAA.919273. S2CID 125466508.

  • CURAT. "Games and Training for Minimally Invasive Surgery". CURAT. University of Bremen. Retrieved 15 July 2020.

  • Ziemer, Tim; Nuchprayoon, Nuttawut; Schultheis, Holger (2019). "Psychoacoustic Sonification as User Interface for Human-Machine Interaction". International Journal of Informatics Society. 12 (1). arXiv:1912.08609. doi:10.13140/RG.2.2.14342.11848.

  • Tarmy, James (5 August 2014). "Mercedes Doors Have a Signature Sound: Here's How". Bloomberg Business. Retrieved 10 August 2020.


     https://en.wikipedia.org/wiki/Psychoacoustics#cite_note-1

    An operator controlling The Virtual Interface Environment Workstation (VIEW)[1] at NASA Ames

    Virtual reality (VR) is a simulated experience that employs pose tracking and 3D near-eye displays to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment (particularly video games), education (such as medical or military training) and business (such as virtual meetings). Other distinct types of VR-style technology include augmented reality and mixed reality, sometimes referred to as extended reality or XR, although definitions are currently changing due to the nascence of the industry.[2]

    Currently, standard virtual reality systems use either virtual reality headsets or multi-projected environments to generate some realistic images, sounds and other sensations that simulate a user's physical presence in a virtual environment. A person using virtual reality equipment is able to look around the artificial world, move around in it, and interact with virtual features or items. The effect is commonly created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens. Virtual reality typically incorporates auditory and video feedback, but may also allow other types of sensory and force feedback through haptic technology.

    Etymology

    "Virtual" has had the meaning of "being something in essence or effect, though not actually or in fact" since the mid-1400s.[3] The term "virtual" has been used in the computer sense of "not physically existing but made to appear by software" since 1959.[3]

    In 1938, French avant-garde playwright Antonin Artaud described the illusory nature of characters and objects in the theatre as "la réalité virtuelle" in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double,[4] is the earliest published use of the term "virtual reality". The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s. The term "virtual reality" was first used in a science fiction context in The Judas Mandala, a 1982 novel by Damien Broderick.

    Widespread adoption of the term "virtual reality" in the popular media is attributed to Jaron Lanier, who in the late 1980s designed some of the first business-grade virtual reality hardware under his firm VPL Research, and the 1992 film The Lawnmower Man, which features use of virtual reality systems.[5]

    Forms and methods

    Researchers with the European Space Agency in Darmstadt, Germany, equipped with a VR headset and motion controllers, demonstrating how astronauts might use virtual reality in the future to train to extinguish a fire inside a lunar habitat

    One method by which virtual reality can be realized is simulation-based virtual reality. Driving simulators, for example, give the driver the impression of actually driving a vehicle by predicting vehicular motion caused by driver input and feeding back corresponding visual, motion and audio cues to the driver.

    With avatar image-based virtual reality, people can join the virtual environment in the form of real video as well as an avatar. One can participate in the 3D distributed virtual environment in the form of either a conventional avatar or a real video. Users can select their own type of participation based on the system capability.

    In projector-based virtual reality, modeling of the real environment plays a vital role in various virtual reality applications, such as robot navigation, construction modeling, and airplane simulation. Image-based virtual reality systems have been gaining popularity in computer graphics and computer vision communities. In generating realistic models, it is essential to accurately register acquired 3D data; usually, a camera is used for modeling small objects at a short distance.

    Desktop-based virtual reality involves displaying a 3D virtual world on a regular desktop display without use of any specialized VR positional tracking equipment. Many modern first-person video games can be used as an example, using various triggers, responsive characters, and other such interactive devices to make the user feel as though they are in a virtual world. A common criticism of this form of immersion is that there is no sense of peripheral vision, limiting the user's ability to know what is happening around them.

    An Omni treadmill being used at a VR convention

    A head-mounted display (HMD) more fully immerses the user in a virtual world. A virtual reality headset typically includes two small high-resolution OLED or LCD displays providing separate images to each eye for stereoscopic rendering of the 3D virtual world, a binaural audio system, and real-time positional and rotational head tracking with six degrees of freedom. Options include motion controllers with haptic feedback for physically interacting within the virtual world in an intuitive way with little to no abstraction, and an omnidirectional treadmill that gives more freedom of physical movement, allowing the user to perform locomotive motion in any direction.

    Augmented reality (AR) is a type of virtual reality technology that blends what the user sees in their real surroundings with digital content generated by computer software. The additional software-generated images of the virtual scene typically enhance how the real surroundings look in some way. AR systems layer virtual information over a camera live feed into a headset or smartglasses, or through a mobile device, giving the user the ability to view three-dimensional images.

    Mixed reality (MR) is the merging of the real world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.

    A cyberspace is sometimes defined as a networked virtual reality.[6]

    Simulated reality is a hypothetical virtual reality as truly immersive as the actual reality, enabling an advanced lifelike experience or even virtual eternity.

    History

    View-Master, a stereoscopic visual simulator, was introduced in 1939

    The development of perspective in Renaissance European art and the stereoscope invented by Sir Charles Wheatstone were both precursors to virtual reality.[7][8][9] The first references to the more modern concept of virtual reality came from science fiction.

    20th century

    Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision dubbed the Sensorama in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device. Heilig also developed what he referred to as the "Telesphere Mask" (patented in 1960). The patent application described the device as "a telescopic television apparatus for individual use... The spectator is given a complete sensation of reality, i.e. moving three dimensional images which may be in colour, with 100% peripheral vision, binaural sound, scents and air breezes."[10]

    In 1968, Ivan Sutherland, with the help of his students including Bob Sproull, created what was widely considered to be the first head-mounted display system for use in immersive simulation applications, called The Sword of Damocles. It was primitive both in terms of user interface and visual realism, and the HMD to be worn by the user was so heavy that it had to be suspended from the ceiling, which gave the device a formidable appearance and inspired its name.[11] Technically, the device was an augmented reality device due to optical passthrough. The graphics comprising the virtual environment were simple wire-frame model rooms.

    1970–1990

    The virtual reality industry mainly provided VR devices for medical, flight simulation, automobile industry design, and military training purposes from 1970 to 1990.[12]

    David Em became the first artist to produce navigable virtual worlds at NASA's Jet Propulsion Laboratory (JPL) from 1977 to 1984.[13] The Aspen Movie Map, a crude virtual tour in which users could wander the streets of Aspen in one of three modes (summer, winter, and polygons), was created at MIT in 1978.

    NASA Ames's 1985 VIEW headset

    In 1979, Eric Howlett developed the Large Expanse, Extra Perspective (LEEP) optical system. The combined system created a stereoscopic image with a field of view wide enough to create a convincing sense of space. Users of the system were impressed by the sensation of depth in the scene and the corresponding realism. The original LEEP system was redesigned for NASA's Ames Research Center in 1985 for their first virtual reality installation, the VIEW (Virtual Interactive Environment Workstation),[14] by Scott Fisher. The LEEP system provides the basis for most of the modern virtual reality headsets.[15]

    A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk. Developed circa 1989. Displayed at the Nissho Iwai showroom in Tokyo

    By the late 1980s, the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985. VPL Research developed several VR devices such as the DataGlove, the EyePhone, and the AudioSphere. VPL licensed the DataGlove technology to Mattel, which used it to make the Power Glove, an early affordable VR device.

    Atari, Inc. founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the Atari Shock (the video game crash of 1983). However, employees it had hired, such as Thomas G. Zimmerman,[16] Scott Fisher, Jaron Lanier, Michael Naimark, and Brenda Laurel, continued their research and development on VR-related technologies.

    In 1988, the Cyberspace Project at Autodesk was the first to implement VR on a low-cost personal computer.[17][18] The project leader Eric Gullichsen left in 1990 to found Sense8 Corporation and develop the WorldToolKit virtual reality SDK,[19] which offered the first real-time graphics with texture mapping on a PC, and was widely used throughout industry and academia.[20][21]

    1990–2000

    The 1990s saw the first widespread commercial releases of consumer headsets. In 1992, for instance, Computer Gaming World predicted "affordable VR by 1994".[22]

    In 1991, Sega announced the Sega VR headset for the Mega Drive home console. It used LCD screens in the visor, stereo headphones, and inertial sensors that allowed the system to track and react to the movements of the user's head.[23] In the same year, Virtuality launched and went on to become the first mass-produced, networked, multiplayer VR entertainment system that was released in many countries, including a dedicated VR arcade at Embarcadero Center. Costing up to $73,000 per multi-pod Virtuality system, they featured headsets and exoskeleton gloves that gave one of the first "immersive" VR experiences.[24]

    A CAVE system at IDL's Center for Advanced Energy Studies in 2010

    That same year, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti from the Electronic Visualization Laboratory created the first cubic immersive room, the Cave automatic virtual environment (CAVE). Developed as Cruz-Neira's PhD thesis, it involved a multi-projected environment, similar to the holodeck, allowing people to see their own bodies in relation to others in the room.[25][26] Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals.[27]

    Virtual Fixtures immersive AR system developed in 1992. Picture features Dr. Louis Rosenberg interacting freely in 3D with overlaid virtual objects called 'fixtures'

    In 1992, Nicole Stenger created Angels, the first real-time interactive immersive movie where the interaction was facilitated with a dataglove and high-resolution goggles. That same year, Louis Rosenberg created the virtual fixtures system at the U.S. Air Force's Armstrong Labs using a full upper-body exoskeleton, enabling a physically realistic mixed reality in 3D. The system enabled the overlay of physically real 3D virtual objects registered with a user's direct view of the real world, producing the first true augmented reality experience enabling sight, sound, and touch.[28][29]

    By July 1994, Sega had released the VR-1 motion simulator ride attraction in Joypolis indoor theme parks,[30] as well as the Dennou Senki Net Merc arcade game. Both used an advanced head-mounted display dubbed the "Mega Visor Display" developed in conjunction with Virtuality;[31][32] it was able to track head movement in a 360-degree stereoscopic 3D environment, and in its Net Merc incarnation was powered by the Sega Model 1 arcade system board.[33] Apple released QuickTime VR, which, despite using the term "VR", displayed 360-degree interactive panoramas rather than true virtual reality.

    Nintendo's Virtual Boy console was released in 1995.[34] A group in Seattle created public demonstrations of a "CAVE-like" 270-degree immersive projection room called the Virtual Environment Theater, produced by entrepreneurs Chet Dagit and Bob Jacobson.[35] That same year, Forte released the VFX1, a PC-powered virtual reality headset.

    In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial focus on the development of VR hardware. In its earliest form, the company struggled to produce a commercial version of "The Rig", which was realized in prototype form as a clunky steel contraption with several computer monitors that users could wear on their shoulders. The concept was later adapted into the personal computer-based, 3D virtual world program Second Life.[36]

    21st century

    The 2000s were a period of relative public and investment indifference to commercially available VR technologies.

    In 2001, SAS Cube (SAS3) became the first PC-based cubic room, developed by Z-A Production (Maurice Benayoun, David Nahon), Barco, and Clarté. It was installed in Laval, France. The SAS library gave birth to Virtools VRPack. In 2007, Google introduced Street View, a service that shows panoramic views of an increasing number of worldwide positions such as roads, indoor buildings and rural areas. It also features a stereoscopic 3D mode, introduced in 2010.[37]

    2010–present

    An inside view of the Oculus Rift Crescent Bay prototype headset

    In 2010, Palmer Luckey designed the first prototype of the Oculus Rift. Built on the shell of another virtual reality headset, this prototype was only capable of rotational tracking, but it boasted a 90-degree field of vision, previously unseen in the consumer market. Luckey eliminated the distortion issues arising from the lens used to create the wide field of vision with software that pre-distorted the rendered image in real time. This initial design would later serve as the basis for subsequent designs.[38] In 2012, the Rift was presented for the first time at the E3 video game trade show by John Carmack.[39][40] In 2014, Facebook purchased Oculus VR for what was stated at the time as $2 billion,[41] though it was later revealed that the more accurate figure was $3 billion.[40] This purchase occurred after the first development kits ordered through Oculus' 2012 Kickstarter had shipped in 2013 but before the shipping of their second development kits in 2014.[42] ZeniMax, Carmack's former employer, sued Oculus and Facebook for taking company secrets to Facebook;[40] the verdict was in ZeniMax's favour, and the case was later settled out of court.[43]

    HTC Vive headsets worn at Mobile World Congress 2018

    In 2013, Valve discovered and freely shared the breakthrough of low-persistence displays, which make lag-free and smear-free display of VR content possible.[44] This was adopted by Oculus and was used in all their future headsets. In early 2014, Valve showed off their SteamSight prototype, the precursor to both consumer headsets released in 2016. It shared major features with the consumer headsets, including separate 1K displays per eye, low persistence, positional tracking over a large area, and Fresnel lenses.[45][46] HTC and Valve announced the virtual reality headset HTC Vive and controllers in 2015. The set included tracking technology called Lighthouse, which utilized wall-mounted "base stations" for positional tracking using infrared light.[47][48][49]

    The Project Morpheus (PlayStation VR) headset worn at Gamescom 2015

    In 2014, Sony announced Project Morpheus (its code name for the PlayStation VR), a virtual reality headset for the PlayStation 4 video game console.[50] In 2015, Google announced Cardboard, a do-it-yourself stereoscopic viewer: the user places their smartphone in the cardboard holder, which they wear on their head. Michael Naimark was appointed Google's first-ever 'resident artist' in their new VR division. The Kickstarter campaign for Gloveone, a pair of gloves providing motion tracking and haptic feedback, was successfully funded, with over $150,000 in contributions.[51] Also in 2015, Razer unveiled its open source project OSVR.

    Smartphone-based budget headset Samsung Gear VR in dismantled state

    By 2016, there were at least 230 companies developing VR-related products. Amazon, Apple, Facebook, Google, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays still had a low enough resolution and frame rate that images remained identifiable as virtual.[52]

    In 2016, HTC shipped its first units of the HTC Vive SteamVR headset.[53] This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space.[54] A patent filed by Sony in 2017 showed they were developing a similar location tracking technology to the Vive for PlayStation VR, with the potential for the development of a wireless headset.[55]

    In 2019, Oculus released the Oculus Rift S and a standalone headset, the Oculus Quest. These headsets utilized inside-out tracking, in contrast to the external outside-in tracking seen in previous generations of headsets.[56]

    Later in 2019, Valve released the Valve Index. Notable features include a 130° field of view, off-ear headphones for immersion and comfort, open-handed controllers which allow for individual finger tracking, front facing cameras, and a front expansion slot meant for extensibility.[57]

    In 2020, Oculus released the Oculus Quest 2. Some new features include a sharper screen, reduced price, and increased performance. Facebook (which became Meta a year later) initially required users to log in with a Facebook account in order to use the new headset.[58] In 2021 the Oculus Quest 2 accounted for 80% of all VR headsets sold.[59]

    Robinson R22 Virtual Reality Training Device developed by VRM Switzerland[60]

    In 2021, EASA approved the first Virtual Reality based Flight Simulation Training Device. The device, for rotorcraft pilots, enhances safety by opening up the possibility of practicing risky maneuvers in a virtual environment. This addresses a key risk area in rotorcraft operations,[61] where statistics show that around 20% of accidents occur during training flights.

    In 2023, Sony released the PlayStation VR2, a follow-up to their 2016 headset. The PlayStation VR2 comes with inside-out tracking, higher-resolution displays, controllers with adaptive triggers and haptic feedback, and a wider field of view.[62]

    Technology

    Software

    The Virtual Reality Modelling Language (VRML), first introduced in 1994, was intended for the development of "virtual worlds" without dependency on headsets.[63] The Web3D consortium was subsequently founded in 1997 for the development of industry standards for web-based 3D graphics. The consortium subsequently developed X3D from the VRML framework as an archival, open-source standard for web-based distribution of VR content.[64] WebVR is an experimental JavaScript application programming interface (API) that provides support for various virtual reality devices, such as the HTC Vive, Oculus Rift, Google Cardboard or OSVR, in a web browser.[65]
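    To illustrate the declarative format VRML introduced, a minimal VRML 2.0 world describing a single red box might look like the sketch below (an illustrative fragment, not tied to any particular browser or plug-in):

```vrml
#VRML V2.0 utf8
# One red box, two metres on each side, at the scene origin
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Box { size 2 2 2 }
}
```

    Because the scene is described as data rather than rendering code, the same file could be distributed over the web and displayed by any conforming viewer, which is the property X3D later carried forward as an open standard.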

    Hardware

    A high frame rate and low latency are paramount for the sensation of immersion in virtual reality.

    Modern virtual reality headset displays are based on technology developed for smartphones including: gyroscopes and motion sensors for tracking head, body, and hand positions; small HD screens for stereoscopic displays; and small, lightweight and fast computer processors. These components led to relative affordability for independent VR developers, and led to the 2012 Oculus Rift Kickstarter offering the first independently developed VR headset.[52]

    Independent production of VR images and video has increased alongside the development of affordable omnidirectional cameras, also known as 360-degree cameras or VR cameras, which can record interactive 360-degree photography, although at relatively low resolutions or in highly compressed formats for online streaming of 360-degree video.[66] In contrast, photogrammetry is increasingly used to combine several high-resolution photographs for the creation of detailed 3D objects and environments in VR applications.[67][68]

    To create a feeling of immersion, special output devices are needed to display virtual worlds. Well-known formats include head-mounted displays or the CAVE. In order to convey a spatial impression, two images are generated and displayed from different perspectives (stereo projection). There are different technologies available to bring the respective image to the right eye. A distinction is made between active (e.g. shutter glasses) and passive technologies (e.g. polarizing filters or Infitec).[69]
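    The stereo projection described above can be sketched in a few lines: the scene is rendered twice, once per eye, with the two virtual viewpoints separated horizontally by the interpupillary distance (IPD). The 63 mm IPD value and the helper function below are illustrative assumptions, not taken from any particular VR runtime:

```python
# Minimal sketch of stereo projection: one viewpoint per eye, offset
# horizontally by half the interpupillary distance (IPD) each way.
IPD = 0.063  # metres; an assumed average adult interpupillary distance

def eye_position(head_pos, eye):
    """Offset the head position along its x-axis for the given eye."""
    shift = -IPD / 2.0 if eye == "left" else IPD / 2.0
    x, y, z = head_pos
    return (x + shift, y, z)

head = (0.0, 1.7, 0.0)  # head roughly 1.7 m above the floor
print(eye_position(head, "left"))   # left eye: x shifted by -0.0315 m
print(eye_position(head, "right"))  # right eye: x shifted by +0.0315 m
```

    Rendering the same scene from these two positions and routing each image to the corresponding eye (via shutter glasses, polarizing filters, or the separate displays of a headset) is what produces the spatial impression.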

    To improve the feeling of immersion, wearable devices based on multiple strings can render the haptics of complex geometries in virtual reality. The strings offer fine control of each finger joint to simulate the sensation of touching these virtual geometries.[70]

    Special input devices are required for interaction with the virtual world. Some of the most common input devices are motion controllers and optical tracking sensors. In some cases, wired gloves are used. Controllers typically use optical tracking systems (primarily infrared cameras) for location and navigation, so that the user can move freely without wiring. Some input devices provide force feedback to the hands or other parts of the body, so that users can orient themselves in the three-dimensional world through haptics and carry out realistic simulations. This gives the viewer a sense of direction in the artificial landscape. Additional haptic feedback can be obtained from omnidirectional treadmills (with which walking in virtual space is controlled by real walking movements) and from vibration gloves and suits.

    Virtual reality cameras can be used to create VR photography using 360-degree panorama videos. 360-degree camera shots can be mixed with virtual elements to merge reality and fiction through special effects.[citation needed] VR cameras are available in various formats, with varying numbers of lenses installed in the camera.[71]

    Visual immersion experience

    Display resolution

    Minimal Angle of Resolution (MAR) is the smallest angular separation between two display pixels at which a viewer can still distinguish them as independent pixels. Often measured in arc-seconds, the pixel spacing corresponding to the MAR depends on the viewing distance. For the general public, the MAR is about 30–65 arc-seconds, which defines a spatial resolution when combined with distance. At viewing distances of 1 m and 2 m, regular viewers cannot perceive two pixels as separate if they are less than 0.29 mm apart at 1 m or less than 0.58 mm apart at 2 m.[72]
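    These figures follow from simple trigonometry: the resolvable pixel pitch is the viewing distance times the tangent of the MAR. A short sketch, assuming a 60 arc-second MAR (within the 30–65 arc-second range quoted above):

```python
import math

def min_resolvable_pitch_mm(distance_m, mar_arcsec=60.0):
    """Smallest pixel spacing (in mm) a viewer can resolve at a given
    distance, for a Minimal Angle of Resolution given in arc-seconds."""
    mar_rad = math.radians(mar_arcsec / 3600.0)  # arc-seconds -> radians
    return distance_m * math.tan(mar_rad) * 1000.0  # metres -> mm

print(round(min_resolvable_pitch_mm(1.0), 2))  # 0.29 (mm at 1 m)
print(round(min_resolvable_pitch_mm(2.0), 2))  # 0.58 (mm at 2 m)
```

    The same formula explains why headset displays, held only a few centimetres from the eye, need far finer pixel pitches than a monitor viewed from a metre away.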

    Image latency and display refresh frequency

    Most small-size displays have a refresh rate of 60 Hz, which adds about 15 ms of latency. This is reduced to less than 7 ms if the refresh rate is increased to 120 Hz or 240 Hz and above.[73] Participants generally report that the experience feels more immersive at higher refresh rates; however, higher refresh rates require a more powerful graphics processing unit.
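    As a rough back-of-the-envelope model, the latency added by the display is bounded by one full refresh interval (a frame may have to wait that long before being shown); the cited figures differ slightly because real latency also depends on display persistence and scan-out:

```python
# Worst-case latency added by waiting for the next display refresh.
def max_added_latency_ms(refresh_hz):
    """One full refresh interval, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 120, 240):
    print(hz, "Hz ->", round(max_added_latency_ms(hz), 1), "ms")
```

    At 60 Hz this gives roughly 16.7 ms, close to the ~15 ms figure above, while 240 Hz brings the bound comfortably under 7 ms.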

    Relationship between display and field of view

    Diagram of a participant's theoretical field of view (yellow area)

    In assessing the immersion achieved by a VR device, the field of view (FOV) must be considered in addition to image quality. Each human eye has a horizontal FOV of about 140 degrees, and the vertical FOV is roughly 175 degrees.[74][75] Binocular vision is limited to the 120 degrees horizontally where the right and left visual fields overlap. Overall, the two eyes give a FOV of roughly 300 × 175 degrees,[76] i.e., approximately one third of the full 360-degree sphere.

    Applications

    Virtual reality is most commonly used in entertainment applications such as video games, 3D cinema, amusement park rides including dark rides, and social virtual worlds. Consumer virtual reality headsets were first released by video game companies in the early-to-mid 1990s. Beginning in the 2010s, next-generation commercial tethered headsets were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR), setting off a new wave of application development.[77] 3D cinema has been used for sporting events, pornography, fine art, music videos and short films. Since 2015, roller coasters and theme parks have incorporated virtual reality to match visual effects with haptic feedback.[52] VR not only fits the trend of the digital industry but also enhances films' visual effects, giving audiences more ways to interact.[78]

    In social sciences and psychology, virtual reality offers a cost-effective tool to study and replicate interactions in a controlled environment.[79] It can be used as a form of therapeutic intervention.[80] For instance, there is the case of the virtual reality exposure therapy (VRET), a form of exposure therapy for treating anxiety disorders such as post traumatic stress disorder (PTSD) and phobias.[81][82][83]

    Virtual reality programs are being used in rehabilitation processes with elderly individuals who have been diagnosed with Alzheimer's disease. This gives these patients the opportunity to simulate real experiences that they would not otherwise be able to have due to their current state. Seventeen recent randomized controlled trials have shown that virtual reality applications are effective in treating cognitive deficits in patients with neurological diagnoses.[84] Loss of mobility in elderly patients can lead to a sense of loneliness and depression, and virtual reality can make aging in place a lifeline to an outside world that they cannot easily navigate. Virtual reality also allows exposure therapy to take place in a safe environment.[85]

    In medicine, simulated VR surgical environments were first developed in the 1990s.[86][87][88] Under the supervision of experts, VR can provide effective and repeatable training[89] at a low cost, allowing trainees to recognize and amend errors as they occur.[90]

    Virtual reality has been used in physical rehabilitation since the 2000s. Despite numerous studies, good quality evidence of its efficacy for treating Parkinson's disease, compared to other rehabilitation methods without sophisticated and expensive equipment, is lacking.[91] A 2018 review on the effectiveness of mirror therapy by virtual reality and robotics for any type of pathology reached a similar conclusion.[92] Another study showed the potential for VR to promote mimicry and revealed the difference between neurotypical and autism spectrum disorder individuals in their response to a two-dimensional avatar.[93][94]

    Immersive virtual reality technology with myoelectric and motion tracking control may represent a possible therapy option for treatment-resistant phantom limb pain. Pain scale measurements were taken into account, and an interactive 3D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked VR headset.[95] A systematic search in PubMed and Embase was performed, and the results were pooled in two meta-analyses, which showed a significant result in favor of VRT for balance.[96]

    In the fast-paced and globalised business world, meetings in VR are used to create an environment in which interactions with other people (e.g. colleagues, customers, partners) can feel more natural than a phone call or video chat. In customisable meeting rooms, all parties can join using a VR headset and interact as if they were in the same physical room; presentations, videos, or 3D models (e.g. of products or prototypes) can be uploaded and interacted with.[97] Compared to traditional text-based computer-mediated communication (CMC), avatar-based interaction in a 3D virtual environment leads to higher levels of consensus, satisfaction, and cohesion among group members.[98]

    U.S. Navy medic demonstrating a VR parachute simulator at the Naval Survival Training Institute in 2006

    VR can simulate real workspaces for workplace occupational safety and health purposes, educational purposes, and training purposes. It can be used to provide learners with a virtual environment where they can develop their skills without the real-world consequences of failing. It has been used and studied in primary education,[99] anatomy teaching,[100][101] military,[102][103] astronaut training,[104][105][106] flight simulators,[107] miner training,[108] medical education,[109] geography education,[110] architectural design,[citation needed] driver training[111] and bridge inspection.[112] Immersive VR engineering systems enable engineers to see virtual prototypes prior to the availability of any physical prototypes.[113] Supplementing training with virtual training environments has been claimed to offer avenues of realism in military[114] and healthcare[115] training while minimizing cost.[116] It also has been claimed to reduce military training costs by minimizing the amounts of ammunition expended during training periods.[114] VR can also be used for the healthcare training and education for medical practitioners.[117][118]

    In the engineering field, VR has proved very useful to both engineering educators and students. Once prohibitively expensive, it has become a much more accessible educational tool as overall costs have fallen. Its most significant element is the ability of students to interact with 3D models that respond accurately to real-world behaviour. This added educational tool provides many with the immersion needed to grasp complex topics and apply them.[119] As noted, future architects and engineers benefit greatly from being able to understand spatial relationships and to develop solutions based on real-world future applications.[120]

    The first fine art virtual world was created in the 1970s.[121] As the technology developed, more artistic programs were produced throughout the 1990s, including feature films. When commercially available technology became more widespread, VR festivals began to emerge in the mid-2010s. The first uses of VR in museum settings began in the 1990s, seeing a significant increase in the mid-2010s. Additionally, museums have begun making some of their content virtual reality accessible.[122][123]

    Virtual reality's growing market presents an opportunity and an alternative channel for digital marketing.[124] It is also seen as a new platform for e-commerce, particularly in the bid to challenge traditional "brick and mortar" retailers. However, a 2018 study revealed that the majority of goods are still purchased in physical stores.[125]

    In education, the use of virtual reality has demonstrated the capacity to promote higher-order thinking,[126] student interest and commitment, knowledge acquisition, and mental habits and understanding that are generally useful within an academic context.[127]

    A case has also been made for including virtual reality technology in the context of public libraries. This would give library users access to cutting-edge technology and unique educational experiences.[128] This could include giving users access to virtual, interactive copies of rare texts and artifacts and to tours of famous landmarks and archeological digs (as in the case with the Virtual Ganjali Khan Project).[129]

    Starting in the early 2020s, virtual reality has also been discussed as a technological setting that may support people's grieving process, based on digital recreations of deceased individuals. In 2021, this practice received substantial media attention following a South Korean TV documentary, which invited a grieving mother to interact with a virtual replica of her deceased daughter.[130] Subsequently, scientists have summarized several potential implications of such endeavours, including their potential to facilitate adaptive mourning, but also many ethical challenges.[131][132]

    Growing interest in the metaverse has resulted in organizational efforts to incorporate the many diverse applications of virtual reality into ecosystems like VIVERSE, reportedly offering connectivity between platforms for a wide range of uses.[133]

    Concerts

    In June 2020, Jean Michel Jarre performed in VRChat.[134] In July, Brendan Bradley released the free FutureStages web-based virtual reality venue for live events and concerts throughout the 2020 shutdown.[135] Justin Bieber performed on November 18, 2021, in WaveXR.[136] On December 2, 2021, Non-Player Character performed at The Mugar Omni Theater, with audiences interacting with a live performer both in virtual reality and projected on the IMAX dome screen.[137][138] Meta's Foo Fighters Super Bowl VR concert was performed on Venues.[139] Post Malone performed in Venues starting July 15, 2022.[140] Megan Thee Stallion performed on AmazeVR at AMC Theaters throughout 2022.[141]

    On October 24, 2021, Billie Eilish performed on Oculus Venues. Pop group Imagine Dragons performed on June 15, 2022.

    Concerns and challenges

    Health and safety

    There are many health and safety considerations with virtual reality. A number of unwanted symptoms have been caused by prolonged use of virtual reality,[142] and these may have slowed proliferation of the technology. Most virtual reality systems come with consumer warnings, including: seizures; developmental issues in children; trip-and-fall and collision warnings; discomfort; repetitive stress injury; and interference with medical devices.[143] Some users may experience twitches, seizures or blackouts while using VR headsets, even if they do not have a history of epilepsy and have never had blackouts or seizures before. One in 4,000 people, or 0.025%, may experience these symptoms. Motion sickness, eyestrain, headaches, and discomfort are the most prevalent short-term adverse effects. In addition, because of VR headsets' heavy weight, discomfort may be more likely among children, who are therefore advised against using them.[144] Other problems may occur in physical interactions with one's environment: while wearing VR headsets, people quickly lose awareness of their real-world surroundings and may injure themselves by tripping over or colliding with real-world objects.[145]

    VR headsets may regularly cause eye fatigue, as does all screened technology, because people tend to blink less when watching screens, causing their eyes to become more dried out.[146] There have been some concerns about VR headsets contributing to myopia, but although VR headsets sit close to the eyes, they may not necessarily contribute to nearsightedness if the focal length of the image being displayed is sufficiently far away.[147]

    Virtual reality sickness (also known as cybersickness) occurs when a person's exposure to a virtual environment causes symptoms that are similar to motion sickness symptoms.[148] Women are significantly more affected than men by headset-induced symptoms, at rates of around 77% and 33% respectively.[149][150] The most common symptoms are general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy.[151] For example, Nintendo's Virtual Boy received much criticism for its negative physical effects, including "dizziness, nausea, and headaches".[152] These motion sickness symptoms are caused by a disconnect between what is being seen and what the rest of the body perceives. When the vestibular system, the body's internal balancing system, does not experience the motion that it expects from visual input through the eyes, the user may experience VR sickness. This can also happen if the VR system does not have a high enough frame rate, or if there is a lag between the body's movement and the onscreen visual reaction to it.[153] Because approximately 25–40% of people experience some kind of VR sickness when using VR machines, companies are actively looking for ways to reduce VR sickness.[154]

    Vergence-accommodation conflict (VAC) is one of the main causes of virtual reality sickness.[155]

    In January 2022 The Wall Street Journal found that VR usage could lead to physical injuries including leg, hand, arm and shoulder injuries.[156] VR usage has also been tied to incidents that resulted in neck injuries,[157] and death.[158]

    Children and teenagers in virtual reality

    Children are becoming increasingly aware of VR, with the proportion in the USA who had never heard of it dropping by half from Autumn 2016 (40%) to Spring 2017 (19%).[159]

    A 2022 research report by Piper Sandler revealed that only 26% of U.S. teens own a VR device, 5% use it daily, while 48% of teen headset owners "seldom" use it. Of the teens who don't own a VR headset, 9% plan to buy one. 50% of surveyed teens are unsure about the metaverse or don't have any interest, and don't have any plans to purchase a VR headset.[160]

    Studies show that young children may respond cognitively and behaviorally to immersive VR in ways that differ from adults. VR places users directly into the media content, potentially making the experience very vivid and real for children. For example, children of 6–18 years of age reported higher levels of presence and "realness" of a virtual environment compared with adults 19–65 years of age.[161]

    Studies on VR consumer behavior or its effect on children and a code of ethical conduct involving underage users are especially needed, given the availability of VR porn and violent content. Related research on violence in video games suggests that exposure to media violence may affect attitudes, behavior, and even self-concept. Self-concept is a key indicator of core attitudes and coping abilities, particularly in adolescents.[162] Early studies conducted on observing versus participating in violent VR games suggest that physiological arousal and aggressive thoughts, but not hostile feelings, are higher for participants than for observers of the virtual reality game.[163]

    For children, experiencing VR may further involve simultaneously holding the idea of the virtual world in mind while experiencing the physical world. Excessive use of immersive technology with very salient sensory features may compromise children's ability to maintain the rules of the physical world, particularly when wearing a VR headset that blocks out the location of objects in the physical world. Immersive VR can provide users with multisensory experiences that replicate reality or create scenarios that are impossible or dangerous in the physical world. Observations of 10 children experiencing VR for the first time suggested that children aged 8–12 were more confident exploring VR content in familiar situations, e.g. the children enjoyed playing in the kitchen context of Job Simulator, and enjoyed breaking rules by engaging in activities they are not allowed to do in reality, such as setting things on fire.[159]

    Privacy

    The persistent tracking required by all VR systems makes the technology particularly useful for, and vulnerable to, mass surveillance. The expansion of VR will increase the potential and reduce the costs for information gathering of personal actions, movements and responses.[52] Data from eye tracking sensors, which are projected to become a standard feature in virtual reality headsets,[164][165] may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health conditions.[166]

    Virtual reality in fiction

    See also

    References




    https://en.wikipedia.org/wiki/Virtual_reality

    Virtual Fixtures – first AR system, U.S. Air Force, Wright-Patterson Air Force Base (1992)

    Augmented reality (AR) is an interactive experience that combines the real world and computer-generated content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory.[1] AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects.[2] The overlaid sensory information can be constructive (i.e. additive to the natural environment), or destructive (i.e. masking of the natural environment).[3] This experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment.[3] In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one.[4][5]

    Augmented reality is largely synonymous with mixed reality. There is also overlap in terminology with extended reality and computer-mediated reality.

    The primary value of augmented reality is the manner in which components of the digital world blend into a person's perception of the real world, not as a simple display of data, but through the integration of immersive sensations, which are perceived as natural parts of an environment. The earliest functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Laboratory in 1992.[3][6][7] Commercial augmented reality experiences were first introduced in entertainment and gaming businesses.[8] Subsequently, augmented reality applications have spanned commercial industries such as education, communications, medicine, and entertainment. In education, content may be accessed by scanning or viewing an image with a mobile device or by using markerless AR techniques.[9][10][11]

    Augmented reality is used to enhance natural environments or situations and offers perceptually enriched experiences. With the help of advanced AR technologies (e.g. adding computer vision, incorporating AR cameras into smartphone applications, and object recognition), the information about the user's surrounding real world becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual,[12][13][14][15][16] or real, e.g. seeing other real sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space.[17][18][19] Augmented reality also has great potential in the gathering and sharing of tacit knowledge. Augmentation techniques are typically performed in real time and in semantic contexts with environmental elements. Immersive perceptual information is sometimes combined with supplemental information like scores over a live video feed of a sporting event. This combines the benefits of both augmented reality technology and head-up display (HUD) technology.

    Comparison with virtual reality

    In virtual reality (VR), the user's perception of reality is completely based on virtual information. In augmented reality (AR), the user is provided with additional computer-generated information within the data collected from real life that enhances their perception of reality.[20][21] For example, in architecture, VR can be used to create a walk-through simulation of the inside of a new building, and AR can be used to show a building's structures and systems superimposed on a real-life view. Another example is the use of utility applications. Some AR applications, such as Augment, enable users to place digital objects into real environments, allowing businesses to use augmented reality devices as a way to preview their products in the real world.[22] Similarly, it can also be used to demo what products may look like in an environment for customers, as demonstrated by companies such as Mountain Equipment Co-op or Lowe's, which use augmented reality to allow customers to preview what their products might look like at home through the use of 3D models.[23]

    Augmented reality (AR) differs from virtual reality (VR) in that in AR part of the surrounding environment is 'real' and AR only adds layers of virtual objects to the real environment. In VR, by contrast, the surrounding environment is completely virtual and computer-generated. A demonstration of how AR layers objects onto the real world can be seen with augmented reality games. WallaMe is an augmented reality game application that allows users to hide messages in real environments, using geolocation so that users can hide messages wherever they wish in the world.[24] Such applications have many uses, including in activism and artistic expression.[25]

    Technology

    A man wearing an augmented reality headset

    Hardware

    Hardware components for augmented reality are: a processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements, which often include a camera and microelectromechanical systems (MEMS) sensors such as an accelerometer, GPS, and solid state compass, making them suitable AR platforms.[26][27] There are two technologies used in augmented reality: diffractive waveguides and reflective waveguides.

    Display

    Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems, which are worn on the human body.

    A head-mounted display (HMD) is a display device worn on the head, often attached to a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user's head movements.[28][29][30] HMDs can provide VR users with mobile and collaborative experiences.[31] Specific providers, such as uSens and Gestigon, include gesture controls for full virtual immersion.[32][33]

    Eyeglasses

    AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employs cameras to intercept the real world view and re-display its augmented view through the eyepieces[34] and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces.[35][36][37]

    HUD
    Photograph of a Headset computer
    Headset computer

    A head-up display (HUD) is a transparent display that presents data without requiring users to look away from their usual viewpoints. A precursor technology to augmented reality, heads-up displays were first developed for pilots in the 1950s, projecting simple flight data into their line of sight, thereby enabling them to keep their "heads up" and not look down at the instruments. Near-eye augmented reality devices can be used as portable head-up displays as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying the information.[38][39] This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world.[40]

    Contact lenses

    Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication. The first contact lens display was patented in 1999 by Steve Mann and was intended to work in combination with AR spectacles, but the project was abandoned;[41][42] work on such lenses resumed 11 years later, in 2010–2011.[43][44][45][46] Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real world objects at the same time.[47][48]

    At CES 2013, a company called Innovega also unveiled similar contact lenses that required being combined with AR glasses to work.[49]

    The futuristic short film Sight[50] features contact lens-like augmented reality devices.[51][52]

    Many scientists have been working on contact lenses capable of different technological feats. A patent filed by Samsung describes an AR contact lens which, when complete, would include a built-in camera on the lens itself.[53] The design is intended to control its interface by blinking an eye. It is also intended to be linked with the user's smartphone to review footage and control it separately. When successful, the lens would feature a camera or sensor inside it; it is said that this could be anything from a light sensor to a temperature sensor.

    The first publicly unveiled working prototype of an AR contact lens not requiring the use of glasses in conjunction was developed by Mojo Vision and demonstrated at CES 2020.[54][55][56]

    Virtual retinal display

    A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory under Dr. Thomas A. Furness III.[57] With this technology, a display is scanned directly onto the retina of a viewer's eye. This results in bright images with high resolution and high contrast. The viewer sees what appears to be a conventional display floating in space.[58]

    Several tests were conducted to analyze the safety of the VRD.[57] In one test, patients with partial loss of vision—having either macular degeneration (a disease that degenerates the retina) or keratoconus—were selected to view images using the technology. In the macular degeneration group, five out of eight subjects preferred the VRD images to the cathode-ray tube (CRT) or paper images and thought they were better and brighter and were able to see equal or better resolution levels. The keratoconus patients could all resolve smaller lines in several line tests using the VRD as opposed to their own correction. They also found the VRD images to be easier to view and sharper. As a result of these tests, the virtual retinal display is considered a safe technology.

    Virtual retinal display creates images that can be seen in ambient daylight and ambient room light. The VRD is considered a preferred candidate to use in a surgical display due to its combination of high resolution and high contrast and brightness. Additional tests show high potential for VRD to be used as a display technology for patients that have low vision.

    EyeTap

    The EyeTap (also known as Generation-2 Glass[59]) captures rays of light that would otherwise pass through the center of the lens of the wearer's eye, and substitutes synthetic computer-controlled light for each ray of real light.

    The Generation-4 Glass[59] (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display by way of exact alignment with the eye and resynthesis (in laser light) of rays of light entering the eye.[60]

    Handheld

    A handheld display employs a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers,[61] and later GPS units and MEMS sensors such as digital compasses and six degrees of freedom accelerometer–gyroscope. Today simultaneous localization and mapping (SLAM) markerless trackers such as PTAM (parallel tracking and mapping) are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones. The disadvantages are the physical constraints of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of classically wide-angled mobile phone cameras when compared to the real world as viewed through the eye.[62]

    Games such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.[63]

    Projection mapping

    Projection mapping augments real-world objects and scenes, without the use of special displays such as monitors, head-mounted displays or hand-held devices. Projection mapping makes use of digital projectors to display graphical information onto physical objects. The key difference in projection mapping is that the display is separated from the users of the system. Since the displays are not associated with each user, projection mapping scales naturally up to groups of users, allowing for collocated collaboration between users.

    Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects. This provides the opportunity to enhance the object's appearance with materials of a simple unit—a projector, camera, and sensor.

    Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle.[64] Virtual showcases, which employ beam splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.

    A projection mapping system can display on any number of surfaces in an indoor setting at once. Projection mapping supports both a graphical visualization and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation.[16][65][66][67]

    Tracking

    Modern mobile augmented-reality systems use one or more of the following motion tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, radio-frequency identification (RFID). These technologies offer varying levels of accuracy and precision. These technologies are implemented in the ARKit API by Apple and ARCore API by Google to allow tracking for their respective mobile device platforms.
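
    The fusion of these sensors can be illustrated with a complementary filter, which blends the integrated gyroscope rate (smooth but drifting) with the accelerometer-derived angle (noisy but drift-free). A toy single-axis sketch in Python; the blending factor is an illustrative assumption, not a value used by ARKit or ARCore:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a complementary filter: trust the gyroscope over
    short intervals and the accelerometer over long ones. alpha is an
    illustrative value, not one taken from any real tracking system."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# A stationary device whose gyroscope reports a small constant bias:
# pure integration of the bias would reach 5.0 rad after 1000 steps,
# but the accelerometer term holds the estimate at a small, bounded offset.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=0.0, dt=0.01)
print(round(angle, 3))  # → 0.245
```

    Production trackers fuse many more signals (camera features, GPS, magnetometer) with Kalman-style filters, but the principle of weighting each sensor by the timescale on which it is reliable is the same.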

    Networking

    Mobile augmented reality applications are gaining popularity because of the wide adoption of mobile and especially wearable devices. However, they often rely on computationally intensive computer vision algorithms with extreme latency requirements. To compensate for the lack of computing power, offloading data processing to a distant machine is often desired. Computation offloading introduces new constraints in applications, especially in terms of latency and bandwidth. Although there are a plethora of real-time multimedia transport protocols, there is a need for support from network infrastructure as well.[68]

    Input devices

    Techniques include speech recognition systems that translate a user's spoken words into computer instructions, and gesture recognition systems that interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.[69][70][71][72] Products which are trying to serve as a controller of AR headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.

    Computer

    The computer analyzes the sensed visual and other data to synthesize and position augmentations. Computers are responsible for the graphics that go with augmented reality. Augmented reality uses a computer-generated image which has a striking effect on the way the real world is shown. With the improvement of technology and computers, augmented reality is going to lead to a drastic change in one's perspective of the real world.[73] According to Time, in about 15–20 years it is predicted that augmented reality and virtual reality are going to become the primary use for computer interactions.[74] Computers are improving at a very fast rate, leading to new ways to improve other technology. The more computers progress, the more flexible and common augmented reality will become in society. Computers are the core of augmented reality.[75] The computer receives data from the sensors which determine the relative position of an object's surface. This translates to an input to the computer which then outputs to the users by adding something that would otherwise not be there. The computer comprises memory and a processor.[76] The computer takes the scanned environment, then generates images or a video and puts it on the receiver for the observer to see. The fixed marks on an object's surface are stored in the memory of a computer. The computer also draws on its memory to present images realistically to the onlooker. A well-known example of this is the Pepsi Max AR Bus Shelter.[77]

    Projector

    Projectors can also be used to display AR contents. The projector can throw a virtual object on a projection screen and the viewer can interact with this virtual object. Projection surfaces can be many objects such as walls or glass panes.[78]

    Software and algorithms

    Comparison of augmented reality fiducial markers for computer vision

    A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real world coordinates, independent of the camera and of camera images. That process is called image registration, and uses different methods of computer vision, mostly related to video tracking.[79][80] Many computer vision methods of augmented reality are inherited from visual odometry. An augogram is a computer generated image that is used to create AR. Augography is the science and software practice of making augograms for AR.

    Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and other image processing methods.[81][82] The second stage restores a real world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene. In some of those cases the scene 3D structure should be calculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure from motion methods like bundle adjustment are used. Mathematical methods used in the second stage include: projective (epipolar) geometry, geometric algebra, rotation representation with exponential map, Kalman and particle filters, nonlinear optimization, robust statistics.[citation needed]
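
    The second stage can be shown in miniature: given matched point pairs from the first stage, recover the transform that maps one set onto the other. The sketch below fits a 2D similarity transform in closed form (translation follows from the centroids); a real AR pipeline would solve the full 3D pose, e.g. with PnP and robust estimation, instead:

```python
import math

def fit_similarity(src, dst):
    """Recover the scale and rotation of a 2D similarity transform
    mapping src points onto dst, by closed-form least squares.
    A toy stand-in for the second registration stage."""
    n = len(src)
    mx = sum(p[0] for p in src) / n; my = sum(p[1] for p in src) / n
    ux = sum(p[0] for p in dst) / n; uy = sum(p[1] for p in dst) / n
    a = b = var = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy, dx, dy = sx - mx, sy - my, dx - ux, dy - uy
        a += sx * dx + sy * dy      # cosine component
        b += sx * dy - sy * dx      # sine component
        var += sx * sx + sy * sy    # spread of the source points
    scale = math.hypot(a, b) / var
    theta = math.atan2(b, a)
    return scale, theta

# A unit square seen rotated 90 degrees and doubled in size
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (0, 2), (-2, 2), (-2, 0)]
scale, theta = fit_similarity(src, dst)
print(round(scale, 3), round(math.degrees(theta), 1))  # → 2.0 90.0
```

    With the transform recovered, virtual content defined in the marker's coordinate frame can be drawn at the correct place, scale and orientation in the camera image.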

    In augmented reality, the distinction is made between two distinct modes of tracking, known as marker and markerless. Markers are visual cues which trigger the display of the virtual information.[83] A piece of paper with some distinct geometries can be used. The camera recognizes the geometries by identifying specific points in the drawing. Markerless tracking, also called instant tracking, does not use markers. Instead, the user positions the object in the camera view preferably in a horizontal plane. It uses sensors in mobile devices to accurately detect the real-world environment, such as the locations of walls and points of intersection.[84]
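
    Marker recognition can be sketched as matching a detected bit grid against known patterns under all four rotations, since the camera may see the marker in any orientation. A toy Python example; real fiducial systems such as ArUco also use parity bits and error correction, which this sketch omits:

```python
def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def decode_marker(grid, known_ids):
    """Match a detected bit grid against known marker patterns,
    trying all four orientations."""
    for _ in range(4):
        key = tuple(tuple(row) for row in grid)
        if key in known_ids:
            return known_ids[key]
        grid = rotate(grid)
    return None  # no known marker matches this grid

# A hypothetical 3x3 marker pattern registered under ID 7
marker_7 = ((1, 0, 0), (1, 1, 0), (0, 1, 1))
known = {marker_7: 7}

# The same marker seen upside down (rotated 180 degrees)
seen = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
print(decode_marker(seen, known))  # → 7
```

    The decoded ID then selects which virtual content to display, while the marker's corner positions in the image feed the pose-estimation stage described above.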

    Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC),[85] which consists of Extensible Markup Language (XML) grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
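
    As a rough illustration of the idea, the sketch below parses a simplified, ARML-flavored XML fragment with Python's standard library. The element and attribute names here are invented for illustration; the real ARML 2.0 standard uses OGC namespaces and a richer Feature/Anchor/VisualAsset structure:

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative fragment: a named feature anchored at a
# geographic position, with a visual asset to render there.
doc = """
<arml>
  <feature id="statue">
    <name>Town Statue</name>
    <anchor lat="51.5007" lon="-0.1246"/>
    <asset href="statue_label.png"/>
  </feature>
</arml>
"""

root = ET.fromstring(doc)
for feature in root.iter("feature"):
    anchor = feature.find("anchor")
    print(feature.get("id"),
          float(anchor.get("lat")),
          float(anchor.get("lon")))  # → statue 51.5007 -0.1246
```

    An AR browser consuming such a document would resolve each anchor to a real-world position and render the referenced asset there, with ECMAScript bindings exposing the same objects for dynamic manipulation.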

    To enable rapid development of augmented reality applications, software development applications such as Lens Studio from Snapchat and Spark AR from Facebook have been launched, and software development kits (SDKs) from Apple and Google have emerged.[86][87]

    Development

    The implementation of augmented reality in consumer products requires considering the design of the applications and the related constraints of the technology platform. Since AR systems rely heavily on the immersion of the user and the interaction between the user and the system, design can facilitate the adoption of virtuality. For most augmented reality systems, a similar design guideline can be followed. The following lists some considerations for designing augmented reality applications:

    Environmental/context design

    Context Design focuses on the end-user's physical surrounding, spatial space, and accessibility that may play a role when using the AR system. Designers should be aware of the possible physical scenarios the end-user may be in such as:

    • Public, in which the users use their whole body to interact with the software
    • Personal, in which the user uses a smartphone in a public space
    • Intimate, in which the user is sitting with a desktop and is not really moving
    • Private, in which the user has on a wearable.[88]

    By evaluating each physical scenario, potential safety hazards can be avoided and changes can be made to greater improve the end-user's immersion. UX designers will have to define user journeys for the relevant physical scenarios and define how the interface reacts to each.

    Another aspect of context design involves the design of the system's functionality and its ability to accommodate user preferences.[89][90] While accessibility tools are common in basic application design, some consideration should be made when designing time-limited prompts (to prevent unintentional operations), audio cues and overall engagement time. It is important to note that in some situations, the application's functionality may hinder the user's ability. For example, applications that are used for driving should reduce the amount of user interaction and use audio cues instead.

    Interaction design

    Interaction design in augmented reality technology centers on the user's engagement with the end product to improve the overall user experience and enjoyment. The purpose of interaction design is to avoid alienating or confusing the user by organizing the information presented. Since user interaction relies on the user's input, designers must make system controls easier to understand and accessible. A common technique to improve usability for augmented reality applications is to discover the frequently accessed areas of the device's touch display and design the application to match those areas of control.[91] It is also important to structure the user journey maps and the flow of information presented, which reduces the system's overall cognitive load and greatly improves the learning curve of the application.[92]

    In interaction design, it is important for developers to utilize augmented reality technology that complements the system's function or purpose.[93] For instance, the utilization of exciting AR filters and the design of the unique sharing platform in Snapchat enables users to augment their in-app social interactions. In other applications that require users to understand the focus and intent, designers can employ a reticle or raycast from the device.[89]

    Visual design

    In general, visual design is the appearance of the developing application that engages the user. To improve the graphic interface elements and user interaction, developers may use visual cues to inform the user what elements of UI are designed to interact with and how to interact with them. Since navigating in an AR application may appear difficult and seem frustrating, visual cue design can make interactions seem more natural.[88]

    In some augmented reality applications that use a 2D device as an interactive surface, the 2D control environment does not translate well in 3D space making users hesitant to explore their surroundings. To solve this issue, designers should apply visual cues to assist and encourage users to explore their surroundings.

    It is important to note the two main objects in AR when developing AR applications: 3D volumetric objects that are manipulated and realistically interact with light and shadow; and animated media imagery such as images and videos, which are mostly traditional 2D media rendered in a new context for augmented reality.[88] When virtual objects are projected onto a real environment, it is challenging for augmented reality application designers to ensure a perfectly seamless integration relative to the real-world environment, especially with 2D objects. As such, designers can add weight to objects, use depth maps, and choose different material properties that highlight the object's presence in the real world. Another visual design technique is using different lighting or casting shadows to improve overall depth judgment. For instance, a common lighting technique is simply placing a light source overhead at the 12 o'clock position to create shadows on virtual objects.[88]

    Possible applications

    Augmented reality has been explored for many applications, from gaming and entertainment to medicine, education and business.[94] Example application areas described below include archaeology, architecture, commerce and education. Some of the earliest cited examples range from augmented reality used to support surgery by providing virtual overlays to guide medical practitioners, to AR content for astronomy and welding.[7][95]

    Archaeology

    AR has been used to aid archaeological research. By augmenting archaeological features onto the modern landscape, AR allows archaeologists to formulate possible site configurations from extant structures.[96] Computer generated models of ruins, buildings, landscapes or even ancient people have been recycled into early archaeological AR applications.[97][98][99] For example, implementing a system like VITA (Visual Interaction Tool for Archaeology) will allow users to imagine and investigate instant excavation results without leaving their home. Each user can collaborate by mutually "navigating, searching, and viewing data". Hrvoje Benko, a researcher in the computer science department at Columbia University, points out that these particular systems and others like them can provide "3D panoramic images and 3D models of the site itself at different excavation stages" all the while organizing much of the data in a collaborative way that is easy to use. Collaborative AR systems supply multimodal interactions that combine the real world with virtual images of both environments.[100]

    Architecture

    AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.[101][102][103]

    With continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices.[104] Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials.[105] Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real-time alerts, and 3D mapping.

    Following the Christchurch earthquake, the University of Canterbury released CityViewAR,[106] which enabled city planners and engineers to visualize buildings that had been destroyed.[107] This not only provided planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the resulting devastation, as entire buildings had been demolished.

    Urban design and planning

    AR systems are being used as collaborative tools for design and planning in the built environment. For example, AR can be used to create augmented reality maps, buildings and data feeds projected onto tabletops for collaborative viewing by built environment professionals.[108] Outdoor AR promises that designs and plans can be superimposed on the real world, redefining the remit of these professions to bring in-situ design into their process. Design options can be articulated on site, and appear closer to reality than traditional desktop mechanisms such as 2D maps and 3D models.

    The concept of smart city also utilizes ICT systems including AR to present information to citizens, enhance operational efficiency, and ultimately improve the quality of public services.[109] Some urban developers have started to take actions by installing intelligent systems for waste collection, monitoring public security through AR monitoring technologies, and improving tourism through interactive technologies.[109]

    Education

    In educational settings, AR has been used to complement a standard curriculum. Text, graphics, video, and audio may be superimposed into a student's real-time environment. Textbooks, flashcards and other educational reading material may contain embedded "markers" or triggers that, when scanned by an AR device, produce supplementary information to the student rendered in a multimedia format.[110][111][112] The 2015 Virtual, Augmented and Mixed Reality: 7th International Conference mentioned Google Glass as an example of augmented reality that can replace the physical classroom.[113] First, AR technologies help learners engage in authentic exploration in the real world, and virtual objects such as texts, videos, and pictures are supplementary elements for learners to conduct investigations of the real-world surroundings.[114]

    As AR evolves, students can participate interactively and interact with knowledge more authentically. Instead of remaining passive recipients, students can become active learners, able to interact with their learning environment. Computer-generated simulations of historical events allow students to explore and learn details of each significant area of the event site.[115]

    In higher education, Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry.[116] Chemistry AR apps allow students to visualize and interact with the spatial structure of a molecule using a marker object held in the hand.[117] Others have used HP Reveal, a free app, to create AR notecards for studying organic chemistry mechanisms or to create virtual demonstrations of how to use laboratory instrumentation.[118] Anatomy students can visualize different systems of the human body in three dimensions.[119] Using AR as a tool to learn anatomical structures has been shown to increase the learner knowledge and provide intrinsic benefits, such as increased engagement and learner immersion.[120][121]

    Industrial manufacturing

    AR is used to substitute paper manuals with digital instructions which are overlaid on the manufacturing operator's field of view, reducing mental effort required to operate.[122] AR makes machine maintenance efficient because it gives operators direct access to a machine's maintenance history.[123] Virtual manuals help manufacturers adapt to rapidly-changing product designs, as digital instructions are more easily edited and distributed compared to physical manuals.[122]

    Digital instructions increase operator safety by removing the need for operators to look at a screen or manual away from the working area, which can be hazardous. Instead, the instructions are overlaid on the working area.[124] The use of AR can increase operators' feeling of safety when working near high-load industrial machinery by giving operators additional information on a machine's status and safety functions, as well as hazardous areas of the workspace.[124][125]

    Commerce

    Illustration of an AR-Icon image
    The AR-Icon can be used as a marker on print as well as on online media. It signals the viewer that digital content is behind it. The content can be viewed with a smartphone or tablet

    AR is used to integrate print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled device using image recognition, activate a video version of the promotional material. A major difference between augmented reality and straightforward image recognition is that one can overlay multiple media at the same time in the view screen, such as social media share buttons, the in-page video even audio and 3D objects. Traditional print-only publications are using augmented reality to connect different types of media.[126][127][128][129][130]

    AR can enhance product previews such as allowing a customer to view what's inside a product's packaging without opening it.[131] AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use.[132]

    By 2010, virtual dressing rooms had been developed for e-commerce.[133]

    In 2012, a mint used AR techniques to market a commemorative coin for Aruba. The coin itself was used as an AR trigger, and when held in front of an AR-enabled device it revealed additional objects and layers of information that were not visible without the device.[134][135]

    In 2018, Apple announced USDZ AR file support for iPhones and iPads with iOS12. Apple has created an AR QuickLook Gallery that allows masses to experience augmented reality on their own Apple device.[136]

    In 2018, Shopify, the Canadian e-commerce company, announced AR Quick Look integration. Their merchants will be able to upload 3D models of their products and their users will be able to tap on the models inside the Safari browser on their iOS devices to view them in their real-world environments.[137]

    In 2018, Twinkl released a free AR classroom application. Pupils can see how York looked over 1,900 years ago.[138] Twinkl launched the first ever multi-player AR game, Little Red[139] and has over 100 free AR educational models.[140]

    Augmented reality is becoming more frequently used for online advertising. Retailers offer the ability to upload a picture on their website and "try on" various clothes which are overlaid on the picture. Even further, companies such as Bodymetrics install dressing booths in department stores that offer full-body scanning. These booths render a 3-D model of the user, allowing the consumers to view different outfits on themselves without the need of physically changing clothes.[141] For example, JC Penney and Bloomingdale's use "virtual dressing rooms" that allow customers to see themselves in clothes without trying them on.[142] Another store that uses AR to market clothing to its customers is Neiman Marcus.[143] Neiman Marcus offers consumers the ability to see their outfits in a 360-degree view with their "memory mirror".[143] Makeup stores like L'Oreal, Sephora, Charlotte Tilbury, and Rimmel also have apps that utilize AR.[144] These apps allow consumers to see how the makeup will look on them.[144] According to Greg Jones, director of AR and VR at Google, augmented reality is going to "reconnect physical and digital retail".[144]

    AR technology is also used by furniture retailers such as IKEA, Houzz, and Wayfair.[144][142] These retailers offer apps that allow consumers to view their products in their home prior to purchasing anything.[144] In 2017, Ikea announced the Ikea Place app. It contains a catalogue of over 2,000 products—nearly the company's full collection of sofas, armchairs, coffee tables, and storage units which one can place anywhere in a room with their phone.[145] The app made it possible to have 3D and true-to-scale models of furniture in the customer's living space. IKEA realized that their customers are not shopping in stores as often or making direct purchases anymore.[146][147] Shopify's acquisition of Primer, an AR app, aims to push small and medium-sized sellers towards interactive AR shopping with easy to use AR integration and user experience for both merchants and consumers.[148] AR helps the retail industry reduce operating costs. Merchants upload product information to the AR system, and consumers can use mobile terminals to search and generate 3D maps.[149]

    Literature

    An example of an AR code containing a QR code

    The first description of AR as it is known today was in Virtual Light, the 1994 novel by William Gibson. In 2011, AR was blended with poetry by ni ka from Sekai Camera in Tokyo, Japan. The prose of these AR poems comes from Paul Celan's Die Niemandsrose, expressing the aftermath of the 2011 Tōhoku earthquake and tsunami.[150]

    Visual art

    10.000 Moving Cities, Marc Lee, Augmented Reality Multiplayer Game, Art Installation[151]

    AR applied in the visual arts allows objects or places to trigger artistic multidimensional experiences and interpretations of reality.

    The Australian new media artist Jeffrey Shaw pioneered Augmented Reality in three artworks: Viewpoint in 1975, Virtual Sculptures in 1987 and The Golden Calf in 1993.[152][153] He continues to explore new permutations of AR in numerous recent works.

    Augmented reality can aid in the progression of visual art in museums by allowing museum visitors to view artwork in galleries in a multidimensional way through their phone screens.[154] The Museum of Modern Art in New York has created an exhibit in its art museum showcasing AR features that viewers can see using an app on their smartphone.[155] The museum developed its own app, called MoMAR Gallery, which museum guests can download and use in the specialized augmented reality gallery to view the museum's paintings in a different way.[156] This allows individuals to see hidden aspects and information about the paintings, and to have an interactive technological experience with artwork as well.

    AR technology was also used in Nancy Baker Cahill's "Margin of Error" and "Revolutions,"[157] the two public art pieces she created for the 2019 Desert X exhibition.[158]

    AR technology aided the development of eye tracking technology to translate a disabled person's eye movements into drawings on a screen.[159]

    AR technology can also be used to place objects in the user's environment. The Danish artist Olafur Eliasson has placed objects like burning suns, extraterrestrial rocks, and rare animals into users' environments.[160] Martin & Muñoz started using augmented reality technology in 2020 to create and place virtual works, based on their snow globes, in their exhibitions and in users' environments. Their first AR work was presented at the Cervantes Institute in New York in early 2022.[161]

    Fitness

    AR hardware and software for use in fitness includes smart glasses made for biking and running, with performance analytics and map navigation projected onto the user's field of vision,[162] and boxing, martial arts, and tennis, where users remain aware of their physical environment for safety.[163] Fitness-related games and software include Pokémon Go and Jurassic World Alive.[164]

    Human–computer interaction

    Human–computer interaction (HCI) is an interdisciplinary area of computing that deals with the design and implementation of systems that interact with people. Researchers in HCI come from a number of disciplines, including computer science, engineering, design, human factors, and social science, with a shared goal of solving problems in the design and use of technology so that it can be used more easily, effectively, efficiently, safely, and with satisfaction.[165]

    Education

    Primary school children learn easily from interactive experiences. As an example, astronomical constellations and the movements of objects in the solar system were rendered in 3D, overlaid in the direction the device was held, and expanded with supplemental video information. Paper-based science book illustrations could seem to come alive as video without requiring the child to navigate to web-based materials.

    In 2013, a project was launched on Kickstarter to teach about electronics with an educational toy that allowed children to scan their circuit with an iPad and see the electric current flowing around.[166] While some educational apps were available for AR by 2016, it was not broadly used. Apps that leverage augmented reality to aid learning included SkyView for studying astronomy,[167] AR Circuits for building simple electric circuits,[168] and SketchAr for drawing.[169]

    AR would also be a way for parents and teachers to achieve their goals for modern education, which might include providing more individualized and flexible learning, making closer connections between what is taught at school and the real world, and helping students to become more engaged in their own learning.

    Emergency management/search and rescue

    Augmented reality systems are used in public safety situations, from super storms to suspects at large.

    As early as 2009, two articles from Emergency Management discussed AR technology for emergency management. The first was "Augmented Reality—Emerging Technology for Emergency Management" by Gerald Baron.[170] According to Adam Crow: "Technologies like augmented reality (ex: Google Glass) and the growing expectation of the public will continue to force professional emergency managers to radically shift when, where, and how technology is deployed before, during, and after disasters."[171]

    Another early example was a search aircraft looking for a lost hiker in rugged mountain terrain. Augmented reality systems provided aerial camera operators with a geographic awareness of forest road names and locations blended with the camera video. The camera operator was better able to search for the hiker knowing the geographic context of the camera image. Once located, the operator could more efficiently direct rescuers to the hiker's location because the geographic position and reference landmarks were clearly labeled.[172]

    Social interaction

    AR can be used to facilitate social interaction. An augmented reality social network framework called Talk2Me enables people to disseminate information and view others' advertised information in an augmented reality way. The timely and dynamic information sharing and viewing functionalities of Talk2Me help users initiate conversations and make friends with people in physical proximity.[173] However, an AR headset can degrade the quality of an interaction between two people when only one of them is wearing a headset, if the device becomes a distraction.[174]

    Augmented reality also gives users the ability to practice different forms of social interactions with other people in a safe, risk-free environment. Hannes Kauffman, Associate Professor for virtual reality at TU Vienna, says: "In collaborative augmented reality multiple users may access a shared space populated by virtual objects, while remaining grounded in the real world. This technique is particularly powerful for educational purposes when users are collocated and can use natural means of communication (speech, gestures, etc.), but can also be mixed successfully with immersive VR or remote collaboration."[This quote needs a citation] Kauffman cites education as a potential use of this technology.

    Video games

    An AR mobile game using a trigger image as fiducial marker

    The gaming industry embraced AR technology. A number of games were developed for prepared indoor environments, such as AR air hockey, Titans of Space, collaborative combat against virtual enemies, and AR-enhanced pool table games.[175][176][177]

    In 2010, Ogmento became the first AR gaming startup to receive VC funding. The company went on to produce early location-based AR games for titles like Paranormal Activity: Sanctuary, NBA: King of the Court, and Halo: King of the Hill. The company's computer vision technology was eventually repackaged and sold to Apple, becoming a major contribution to ARKit.[178]

    Augmented reality allowed video game players to experience digital game play in a real-world environment. Niantic released the augmented reality mobile game Pokémon Go.[179] Disney has partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges that works with a Lenovo Mirage AR headset, a tracking sensor and a Lightsaber controller, scheduled to launch in December 2017.[180]

    Augmented reality gaming (ARG) is also used to market film and television entertainment properties. On 16 March 2011, BitTorrent promoted an open licensed version of the feature film Zenith in the United States. Users who downloaded the BitTorrent client software were also encouraged to download and share Part One of three parts of the film. On 4 May 2011, Part Two of the film was made available on VODO. The episodic release of the film, supplemented by an ARG transmedia marketing campaign, created a viral effect and over a million users downloaded the movie.[181][182][183][184]

    Industrial design

    AR allows industrial designers to experience a product's design and operation before completion. Volkswagen has used AR for comparing calculated and actual crash test imagery.[185] AR has been used to visualize and modify car body structure and engine layout. It has also been used to compare digital mock-ups with physical mock-ups to find discrepancies between them.[186][187]

    Healthcare planning, practice and education

    One of the first applications of augmented reality was in healthcare, particularly to support the planning, practice, and training of surgical procedures. As far back as 1992, enhancing human performance during surgery was a formally stated objective when building the first augmented reality systems at U.S. Air Force laboratories.[3] Since 2005, a device called a near-infrared vein finder, which films subcutaneous veins and processes and projects the image of the veins onto the skin, has been used to locate veins.[188][189] AR provides surgeons with patient monitoring data in the style of a fighter pilot's heads-up display, and allows patient imaging records, including functional videos, to be accessed and overlaid. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes,[190] visualizing the position of a tumor in the video of an endoscope,[191] or radiation exposure risks from X-ray imaging devices.[192][193] AR can enhance viewing a fetus inside a mother's womb.[194] Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels.[195] AR has been used for cockroach phobia treatment[196] and to reduce the fear of spiders.[197] Patients wearing augmented reality glasses can be reminded to take medications.[198]

    Augmented reality can be very helpful in the medical field.[199] It can provide crucial information to a doctor or surgeon without requiring them to take their eyes off the patient. On 30 April 2015, Microsoft announced the Microsoft HoloLens, its first attempt at augmented reality. The HoloLens has advanced through the years and is capable of projecting holograms for near-infrared fluorescence based image-guided surgery.[200] As augmented reality advances, it finds increasing applications in healthcare.

    Augmented reality and similar computer-based utilities are also being used to train medical professionals.[201][202] In healthcare, AR can be used to provide guidance during diagnostic and therapeutic interventions, such as surgery. Magee et al.,[203] for instance, describe the use of augmented reality for medical training in simulating ultrasound-guided needle placement. A study by Akçayır, Akçayır, Pektaş, and Ocak (2016) revealed that AR technology both improves university students' laboratory skills and helps them to build positive attitudes relating to physics laboratory work.[204] Augmented reality has also begun seeing adoption in neurosurgery, a field that requires heavy amounts of imaging before procedures.[205]

    Visualizations of big data sets

    Gautam Siwach et al. explored methods of visualizing and processing big data sets in augmented and virtual reality, implementing statistical methods and modeling techniques on big data in the metaverse using machine learning algorithms and artificial intelligence.[206]

    Spatial immersion and interaction

    Augmented reality applications, running on handheld devices utilized as virtual reality headsets, can also digitize human presence in space and provide a computer generated model of them, in a virtual space where they can interact and perform various actions. Such capabilities are demonstrated by Project Anywhere, developed by a postgraduate student at ETH Zurich, which was dubbed as an "out-of-body experience".[207][208][209]

    Flight training

    Building on decades of perceptual-motor research in experimental psychology, researchers at the Aviation Research Laboratory of the University of Illinois at Urbana–Champaign used augmented reality in the form of a flight path in the sky to teach flight students how to land an airplane using a flight simulator. An adaptive augmented schedule in which students were shown the augmentation only when they departed from the flight path proved to be a more effective training intervention than a constant schedule.[210][211] Flight students taught to land in the simulator with the adaptive augmentation learned to land a light aircraft more quickly than students with the same amount of landing training in the simulator but with constant augmentation or without any augmentation.[210]
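    The adaptive schedule described above reduces to a simple gating rule: render the guidance only when the student's deviation from the desired path exceeds a tolerance. A minimal sketch, with an illustrative threshold and parameter names (not taken from the study):

```python
def show_augmentation(lateral_error_m: float, vertical_error_m: float,
                      tolerance_m: float = 15.0) -> bool:
    """Adaptive schedule: display the flight-path cue only when the
    student has strayed from the desired approach path, and hide it
    again once they are back within tolerance, so they do not come
    to rely on the augmentation."""
    return abs(lateral_error_m) > tolerance_m or abs(vertical_error_m) > tolerance_m
```

    Under a constant schedule this function would simply always return True; the adaptive variant was the one that transferred better to unaugmented landings.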

    Military

    Augmented reality system for soldier ARC4 (U.S. Army 2017)

    An early application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at the Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell WorldView System", the authors describe the use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify satellites, and also to identify and catalog potentially dangerous space debris.[212]

    Starting in 2003, the US Army integrated the SmartCam3D augmented reality system into the Shadow Unmanned Aerial System to aid sensor operators using telescopic cameras to locate people or points of interest. The system combined fixed geographic information, including street names, points of interest, airports, and railroads, with live video from the camera system. The system offered a "picture in picture" mode that allowed it to show a synthetic view of the area surrounding the camera's field of view. This helped solve a problem in which the field of view is so narrow that it excludes important context, as if "looking through a soda straw". The system displayed real-time friend/foe/neutral location markers blended with live video, providing the operator with improved situational awareness.
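    The core of such a geographic overlay is projecting a known latitude/longitude into camera pixel coordinates. A minimal sketch, assuming a level camera, a flat-earth approximation valid over short ranges, and illustrative parameter names (this is not the actual SmartCam3D code):

```python
import math

def geo_to_screen(cam_lat, cam_lon, cam_alt, cam_heading_deg,
                  pt_lat, pt_lon, pt_alt,
                  fov_deg=60.0, width_px=1280, height_px=720):
    """Project a geographic point into camera pixel coordinates.

    Uses an equirectangular (flat-earth) approximation, adequate for
    the short ranges of an aerial search camera, and assumes the
    camera is level and pointed along cam_heading_deg.
    Returns None when the point is behind the camera.
    """
    # Local east/north/down offsets in metres.
    m_per_deg = 111_320.0
    north = (pt_lat - cam_lat) * m_per_deg
    east = (pt_lon - cam_lon) * m_per_deg * math.cos(math.radians(cam_lat))
    down = cam_alt - pt_alt

    # Rotate into the camera frame (x right, y down, z forward).
    h = math.radians(cam_heading_deg)
    forward = north * math.cos(h) + east * math.sin(h)
    right = -north * math.sin(h) + east * math.cos(h)
    if forward <= 0:
        return None  # behind the camera

    # Pinhole projection around the image centre.
    f = (width_px / 2) / math.tan(math.radians(fov_deg) / 2)
    x = width_px / 2 + f * right / forward
    y = height_px / 2 + f * down / forward
    return x, y
```

    Each labeled feature (a street name, a point of interest) is run through this projection every frame and drawn at the returned pixel position, which is what keeps the labels glued to the terrain as the camera moves.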

    As of 2010, Korean researchers were looking to implement mine-detecting robots into the military. The proposed design for such a robot includes a tracked mobile platform able to cover uneven ground, including stairs. The robot's mine detection sensor would combine metal detectors and ground-penetrating radar to locate mines or IEDs, with the aim of protecting soldiers' lives.[213]

    Researchers at USAF Research Lab (Calhoun, Draper et al.) found an approximately two-fold increase in the speed at which UAV sensor operators found points of interest using this technology.[214] This ability to maintain geographic awareness quantitatively enhances mission efficiency. The system is in use on the US Army RQ-7 Shadow and the MQ-1C Gray Eagle Unmanned Aerial Systems.

    Circular review system of the company LimpidArmor

    In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.[215] The combination of 360° camera visualization and AR can be used on board combat vehicles and tanks as a circular review system.

    AR can be an effective tool for virtually mapping out the 3D topologies of munition storages in the terrain, with the choice of the munitions combination in stacks and distances between them with a visualization of risk areas.[216][unreliable source?] The scope of AR applications also includes visualization of data from embedded munitions monitoring sensors.[216]

    Navigation

    LandForm video map overlay marking runways, road, and buildings during 1999 helicopter flight test

    The NASA X-38 was flown using a hybrid synthetic vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software which was useful for times of limited visibility, including an instance when the video camera window frosted over leaving astronauts to rely on the map overlays.[217] The LandForm software was also test flown at the Army Yuma Proving Ground in 1999. In the photo at right one can see the map markers indicating runways, air traffic control tower, taxiways, and hangars overlaid on the video.[218]

    AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating destination, directions, weather, terrain, road conditions, and traffic information, as well as alerts to potential hazards in the vehicle's path.[219][220][221] Since 2012, the Swiss company WayRay has been developing holographic AR navigation systems that use holographic optical elements to project all route-related information, including directions, important notifications, and points of interest, right into the drivers' line of sight and far ahead of the vehicle.[222][223] Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.[224]

    Workplace

    Augmented reality may have a positive impact on work collaboration as people may be inclined to interact more actively with their learning environment. It may also encourage tacit knowledge renewal which makes firms more competitive. AR was used to facilitate collaboration among distributed team members via conferences with local and virtual participants. AR tasks included brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces and distributed control rooms.[225][226][227]

    In industrial environments, augmented reality is proving to have a substantial impact, with more and more use cases emerging across all aspects of the product lifecycle, from product design and new product introduction (NPI) to manufacturing, service and maintenance, and material handling and distribution. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic performing maintenance on the system.[228][229] Assembly lines have benefited from the use of AR: in addition to Boeing, BMW and Volkswagen are known for incorporating the technology into assembly lines for monitoring process improvements.[230][231][232] Big machines are difficult to maintain because of their multiple layers and structures; AR lets people look through the machine as if with an X-ray, pointing them to the problem right away.[233]

    As AR technology has evolved and second and third generation AR devices have come to market, the impact of AR in enterprise continues to flourish. In the Harvard Business Review, Magid Abraham and Marco Annunziata discuss how AR devices are now being used to "boost workers' productivity on an array of tasks the first time they're used, even without prior training".[234] They contend that "these technologies increase productivity by making workers more skilled and efficient, and thus have the potential to yield both more economic growth and better jobs".[234]

    Broadcast and live events

    Weather visualizations were the first application of augmented reality in television. It has become common in weathercasting to display full-motion video captured in real time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospatial model, these animated visualizations constituted the first true application of AR to TV.

    AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance[235] and snooker ball trajectories.[79][236]
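    The first-down line works because the graphic is composited only over pixels that match the field's color, so players standing on the line occlude it. A toy sketch of that keying step, where the color test is a crude stand-in for a calibrated broadcast chroma-key:

```python
def looks_like_grass(px):
    """Crude field-color test: green channel dominant and bright."""
    r, g, b = px
    return g > 100 and g > r and g > b

def composite_first_down_line(frame, line_mask, is_field_color):
    """Paint the virtual yellow line only where the mask says the line
    passes AND the underlying pixel looks like field turf, so players
    and officials naturally occlude the graphic."""
    out = []
    for pixel_row, mask_row in zip(frame, line_mask):
        new_row = []
        for px, on_line in zip(pixel_row, mask_row):
            new_row.append((255, 255, 0) if on_line and is_field_color(px) else px)
        out.append(new_row)
    return out
```

    Broadcast systems add camera tracking so the line mask itself follows pans and zooms; the occlusion logic, however, is essentially this per-pixel test.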

    AR has been used to enhance concert and theater performances. For example, artists allow listeners to augment their listening experience by adding their performance to that of other bands/groups of users.[237][238][239]

    Tourism and sightseeing

    Travelers may use AR to access real-time informational displays regarding a location, its features, and comments or content provided by previous visitors. Advanced AR applications include simulations of historical events, places, and objects rendered into the landscape.[240][241][242]

    AR applications linked to geographic locations present location information by audio, announcing features of interest at a particular site as they become visible to the user.[243][244][245]

    Translation

    AR systems such as Word Lens can interpret foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words in a foreign language can be translated and displayed in a user's view as printed subtitles.[246][247][248]
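    Such a system chains three steps: detect text regions, translate the recognized strings, and redraw the translations inside the original bounding boxes. The sketch below stubs the recognition and translation stages with a lookup table; real systems use OCR and machine translation, and all names here are illustrative:

```python
def translate_overlay(detections, dictionary):
    """Map each detected text region to an overlay carrying the
    translated string and the original bounding box, so the renderer
    can paint the translation over the source text in place."""
    overlays = []
    for region in detections:
        # Fall back to the original text when no translation is known,
        # so the overlay never blanks out a sign.
        translated = dictionary.get(region["text"].lower(), region["text"])
        overlays.append({"bbox": region["bbox"], "text": translated})
    return overlays

# One detected sign, translated Spanish -> English.
signs = [{"bbox": (10, 20, 120, 48), "text": "Salida"}]
overlays = translate_overlay(signs, {"salida": "Exit"})
```

    Keeping the bounding box with each translation is what lets the renderer erase the source text and draw the replacement at the same spot in the camera view.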

    Music

    It has been suggested that augmented reality may be used in new methods of music production, mixing, control and visualization.[249][250][251][252]

    A tool for 3D music creation in clubs that, in addition to regular sound mixing features, allows the DJ to play dozens of sound samples, placed anywhere in 3D space, has been conceptualized.[253]

    Leeds College of Music teams have developed an AR app that can be used with Audient desks and allow students to use their smartphone or tablet to put layers of information or interactivity on top of an Audient mixing desk.[254]

    ARmony is a software package that makes use of augmented reality to help people to learn an instrument.[255]

    In a proof-of-concept project, Ian Sterling, an interaction design student at California College of the Arts, and software engineer Swaroop Pal demonstrated a HoloLens app whose primary purpose is to provide a 3D spatial UI for cross-platform devices—the Android Music Player app and an Arduino-controlled fan and light—and also allow interaction using gaze and gesture control.[256][257][258][259]

    AR Mixer is an app that allows one to select and mix between songs by manipulating objects—such as changing the orientation of a bottle or can.[260]

    In a video, Uriel Yehezkel demonstrates using the Leap Motion controller and GECO MIDI to control Ableton Live with hand gestures and states that by this method he was able to control more than 10 parameters simultaneously with both hands and take full control over the construction of the song, emotion and energy.[261][262][better source needed]

    A novel musical instrument that allows novices to play electronic musical compositions, interactively remixing and modulating their elements, by manipulating simple physical objects has been proposed.[263]

    A system has been suggested that uses explicit gestures and implicit dance moves to control the visual augmentations of a live music performance, enabling more dynamic and spontaneous performances and—in combination with indirect augmented reality—a more intense interaction between artist and audience.[264]

    Research by members of the CRIStAL at the University of Lille makes use of augmented reality to enrich musical performance. The ControllAR project allows musicians to augment their MIDI control surfaces with the remixed graphical user interfaces of music software.[265] The Rouages project proposes to augment digital musical instruments to reveal their mechanisms to the audience and thus improve the perceived liveness.[266] Reflets is a novel augmented reality display dedicated to musical performances where the audience acts as a 3D display by revealing virtual content on stage, which can also be used for 3D musical interaction and collaboration.[267]

    Snapchat

    Snapchat users have access to augmented reality in the company's instant messaging app through use of camera filters. In September 2017, Snapchat updated its app to include a camera filter that allowed users to render an animated, cartoon version of themselves called a "Bitmoji". These animated avatars are projected into the real world through the camera, and can be photographed or video recorded.[268] In the same month, Snapchat also announced a new feature called "Sky Filters", which makes use of augmented reality to alter the look of the sky in a picture, much as users can apply the app's filters to other pictures. Users can choose from sky filters such as a starry night, stormy clouds, a sunset, or a rainbow.[269]

    Concerns

    Reality modifications

    In a paper titled "Death by Pokémon GO", researchers at Purdue University's Krannert School of Management claim the game caused "a disproportionate increase in vehicular crashes and associated vehicular damage, personal injuries, and fatalities in the vicinity of locations, called PokéStops, where users can play the game while driving."[270] Using data from one municipality, the paper extrapolates what that might mean nationwide and concludes that "the increase in crashes attributable to the introduction of Pokémon GO is 145,632 with an associated increase in the number of injuries of 29,370 and an associated increase in the number of fatalities of 256 over the period of 6 July 2016, through 30 November 2016." The authors estimated the cost of those crashes and fatalities at between $2 billion and $7.3 billion for the same period.

    Furthermore, more than one in three surveyed advanced Internet users would like to edit out disturbing elements around them, such as garbage or graffiti.[271] They would even like to modify their surroundings by erasing street signs, billboard ads, and uninteresting shopping windows, so AR is as much a threat to companies as it is an opportunity. Although this could be a nightmare for the many brands that fail to capture consumer imaginations, it also creates the risk that wearers of augmented reality glasses may become unaware of surrounding dangers. Consumers want to use augmented reality glasses to change their surroundings into something that reflects their own personal opinions; around two in five want to change the way their surroundings look and even how people appear to them.[citation needed]

    Beyond the privacy issues described below, overload and over-reliance are the biggest dangers of AR. For the development of new AR-related products, this implies that the user interface should follow certain guidelines so as not to overload the user with information, while also preventing the user from over-relying on the AR system such that important cues from the environment are missed.[16] This is called the virtually-augmented key.[16] Once the key is ignored, people might no longer desire the real world.

    Privacy concerns

    The concept of modern augmented reality depends on the ability of the device to record and analyze the environment in real time. Because of this, there are potential legal concerns over privacy. While the First Amendment to the United States Constitution allows for such recording in the name of public interest, the constant recording of an AR device makes it difficult to do so without also recording outside of the public domain. Legal complications would be found in areas where a right to a certain amount of privacy is expected or where copyrighted media are displayed.

    In terms of individual privacy, AR creates easy access to information that one should not readily possess about a given person, for instance through facial recognition technology. If AR automatically passed the user information about the persons they see, anything from their social media activity to their criminal record and marital status could be displayed.[272]

    The Code of Ethics on Human Augmentation, which was originally introduced by Steve Mann in 2004 and further refined with Ray Kurzweil and Marvin Minsky in 2013, was ultimately ratified at the virtual reality Toronto conference on 25 June 2017.[273][274][275][276]

    Property law

    The interaction of location-bound augmented reality with property law is largely undefined.[277][278] Several models have been analysed for how this interaction may be resolved in a common law context: an extension of real property rights to also cover augmentations on or near the property with a strong notion of trespassing, forbidding augmentations unless allowed by the owner; an 'open range' system, where augmentations are allowed unless forbidden by the owner; and a 'freedom to roam' system, where real property owners have no control over non-disruptive augmentations.[279]

    One issue experienced during the Pokémon Go craze was the game's players disturbing owners of private property while visiting nearby location-bound augmentations, which may have been on the properties or the properties may have been en route. The terms of service of Pokémon Go explicitly disclaim responsibility for players' actions, which may limit (but may not totally extinguish) the liability of its producer, Niantic, in the event of a player trespassing while playing the game: by Niantic's argument, the player is the one committing the trespass, while Niantic has merely engaged in permissible free speech. A theory advanced in lawsuits brought against Niantic is that their placement of game elements in places that will lead to trespass or an exceptionally large flux of visitors can constitute nuisance, despite each individual trespass or visit only being tenuously caused by Niantic.[280][281][282]

    Another claim raised against Niantic is that the placement of profitable game elements on land without permission of the land's owners is unjust enrichment.[283] More hypothetically, a property may be augmented with advertising or disagreeable content against its owner's wishes.[284] Under American law, these situations are unlikely to be seen as a violation of real property rights by courts without an expansion of those rights to include augmented reality (similarly to how English common law came to recognise air rights).[283]

    An article in the Michigan Telecommunications and Technology Law Review argues that there are three bases for this extension, each grounded in a different understanding of property. The personality theory of property, outlined by Margaret Radin, is claimed to support extending property rights due to the intimate connection between personhood and ownership of property; however, her viewpoint is not universally shared by legal theorists.[285] Under the utilitarian theory of property, the benefits from avoiding the harms to real property owners caused by augmentations and the tragedy of the commons, and the reduction in transaction costs by making discovery of ownership easy, were assessed as justifying recognising real property rights as covering location-bound augmentations, though there does remain the possibility of a tragedy of the anticommons, from having to negotiate with property owners, slowing innovation.[286] Finally, following the identification of 'property as the law of things' supported by Thomas Merrill and Henry E. Smith, location-based augmentation is naturally identified as a 'thing', and, while the non-rivalrous and ephemeral nature of digital objects presents difficulties for the excludability prong of the definition, the article argues that this is not insurmountable.[287]

    Some attempts at legislative regulation have been made in the United States. Milwaukee County, Wisconsin attempted to regulate augmented reality games played in its parks, requiring prior issuance of a permit,[288] but this was criticised on free speech grounds by a federal judge;[289] and Illinois considered mandating a notice and take down procedure for location-bound augmentations.[290]

    An article for the Iowa Law Review observed that dealing with many local permitting processes would be arduous for a large-scale service,[291] and, while the proposed Illinois mechanism could be made workable,[292] it was reactive and required property owners to potentially continually deal with new augmented reality services; instead, a national-level geofencing registry, analogous to a do-not-call list, was proposed as the most desirable form of regulation to efficiently balance the interests of both providers of augmented reality services and real property owners.[293] An article in the Vanderbilt Journal of Entertainment and Technology Law, however, analyses a monolithic do-not-locate registry as an insufficiently flexible tool, either permitting unwanted augmentations or foreclosing useful applications of augmented reality.[294] Instead, it argues that an 'open range' model, where augmentations are permitted by default but property owners may restrict them on a case-by-case basis (and with noncompliance treated as a form of trespass), will produce the socially-best outcome.[295]

    Notable researchers

    • Ivan Sutherland invented the first VR head-mounted display at Harvard University.
    • Steve Mann formulated an earlier concept of mediated reality in the 1970s and 1980s, using cameras, processors, and display systems to modify visual reality to help people see better (dynamic range management), building computerized welding helmets, as well as "augmediated reality" vision systems for use in everyday life. He is also an adviser to Meta.[296]
    • Ronald Azuma is a scientist and author of works on AR.
    • Dieter Schmalstieg and Daniel Wagner developed a marker tracking system for mobile phones and PDAs in 2009.[297]
    • Jeri Ellsworth headed a research effort for Valve on augmented reality (AR), later taking that research to her own start-up CastAR. The company, founded in 2013, eventually shuttered. She then founded Tilt Five, another AR start-up based on the same technology, with the purpose of creating a device for digital board games.[298]

    History

    • 1901: L. Frank Baum, an author, first mentions the idea of an electronic display/spectacles that overlays data onto real life (in this case 'people'). It is named a 'character marker'.[299]
    • 1957–62: Morton Heilig, a cinematographer, creates and patents a simulator called Sensorama with visuals, sound, vibration, and smell.
    • 1968: Ivan Sutherland invents the head-mounted display and positions it as a window into a virtual world.[300]
    • 1975: Myron Krueger creates Videoplace to allow users to interact with virtual objects.
    • 1980: The research by Gavan Lintern of the University of Illinois is the first published work to show the value of a heads-up display for teaching real-world flight skills.[210]
    • 1980: Steve Mann creates the first wearable computer, a computer vision system with text and graphical overlays on a photographically mediated scene.[301] See EyeTap. See Heads Up Display.
    • 1981: Dan Reitan geospatially maps multiple weather radar images and space-based and studio cameras to earth maps and abstract symbols for television weather broadcasts, bringing a precursor concept to augmented reality (mixed real/graphical images) to TV.[302]
    • 1986: Within IBM, Ron Feigenblatt describes the most widely experienced form of AR today (viz. "magic window," e.g. smartphone-based Pokémon Go), use of a small, "smart" flat panel display positioned and oriented by hand.[303][304]
    • 1987: Douglas George and Robert Morris create a working prototype of an astronomical telescope-based "heads-up display" system (a precursor concept to augmented reality) which superimposed multi-intensity star and celestial body images, as well as other relevant information, over the actual sky view in the telescope eyepiece.[305]
    • 1990: The term augmented reality is attributed to Thomas P. Caudell, a former Boeing researcher.[306]
    • 1992: Louis Rosenberg developed one of the first functioning AR systems, called Virtual Fixtures, at the United States Air Force Research Laboratory—Armstrong, that demonstrated benefit to human perception.[307]
    • 1992: Steven Feiner, Blair MacIntyre and Doree Seligmann present an early paper on an AR system prototype, KARMA, at the Graphics Interface conference.
    • 1993: CMOS active-pixel sensor, a type of metal–oxide–semiconductor (MOS) image sensor, developed at NASA's Jet Propulsion Laboratory.[308] CMOS sensors are later widely used for optical tracking in AR technology.[309]
    • 1993: Mike Abernathy, et al., report the first use of augmented reality in identifying space debris using Rockwell WorldView by overlaying satellite geographic trajectories on live telescope video.[212]
    • 1993: A widely cited version of the paper above is published in Communications of the ACM – Special issue on computer augmented environments, edited by Pierre Wellner, Wendy Mackay, and Rich Gold.[310]
    • 1993: Loral WDL, with sponsorship from STRICOM, performed the first demonstration combining live AR-equipped vehicles and manned simulators. Unpublished paper, J. Barrilleaux, "Experiences and Observations in Applying Augmented Reality to Live Training", 1999.[311]
    • 1994: Julie Martin creates the first 'augmented reality theater production', Dancing in Cyberspace. Funded by the Australia Council for the Arts, it features dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The installation used Silicon Graphics computers and a Polhemus sensing system.
    • 1995: S. Ravela et al. at University of Massachusetts introduce a vision-based system using monocular cameras to track objects (engine blocks) across views for augmented reality.
    • 1996: General Electric develops system for projecting information from 3D CAD models onto real-world instances of those models.[312]
    • 1998: Spatial augmented reality introduced at University of North Carolina at Chapel Hill by Ramesh Raskar, Greg Welch, and Henry Fuchs.[65]
    • 1999: Frank Delgado, Mike Abernathy et al. report successful flight test of LandForm software video map overlay from a helicopter at Army Yuma Proving Ground overlaying video with runways, taxiways, roads and road names.[217][218]
    • 1999: The US Naval Research Laboratory embarks on a decade-long research program called the Battlefield Augmented Reality System (BARS) to prototype some of the early wearable systems for dismounted soldiers operating in urban environments, for situation awareness and training.[313]
    • 1999: NASA X-38 flown using LandForm software video map overlays at Dryden Flight Research Center.[314]
    • 2000: Rockwell International Science Center demonstrates tetherless wearable augmented reality systems receiving analog video and 3-D Audio over radio-frequency wireless channels. The systems incorporate outdoor navigation capabilities, with digital horizon silhouettes from a terrain database overlain in real time on the live outdoor scene, allowing visualization of terrain made invisible by clouds and fog.[315][316]
    • 2003: Sony released the EyeToy colour webcam, their first foray into Augmented Reality on PlayStation 2.[317]
    • 2004: Outdoor helmet-mounted AR system demonstrated by Trimble Navigation and the Human Interface Technology Laboratory (HIT lab).[103]
    • 2006: Outland Research develops an AR media player that overlays virtual content onto a user's view of the real world in synchrony with the music being played, thereby providing an immersive AR entertainment experience.[318][319]
    • 2008: Wikitude AR Travel Guide launches on 20 Oct 2008 with the G1 Android phone.[320]
    • 2009: ARToolkit was ported to Adobe Flash (FLARToolkit) by Saqoosha, bringing augmented reality to the web browser.[321]
    • 2010: Design of mine detection robot for Korean mine field.[213]
    • 2011: Ogmento launches Paranormal Activity: Sanctuary, the first location-based augmented reality game on mobile.[322]
    • 2012: Launch of Lyteshot, an interactive AR gaming platform that utilizes smart glasses for game data.
    • 2015: Microsoft announces Windows Holographic and the HoloLens augmented reality headset. The headset utilizes various sensors and a processing unit to blend high definition "holograms" with the real world.[323]
    • 2016: Niantic released Pokémon Go for iOS and Android in July 2016. The game quickly became one of the most popular smartphone applications and in turn spiked the popularity of augmented reality games.[324]
    • 2017: Magic Leap announces the use of Digital Lightfield technology embedded into the Magic Leap One headset. The Creator Edition headset includes the glasses and a computing pack worn on the belt.[325]
    • 2019: Microsoft announces HoloLens 2 with significant improvements in terms of field of view and ergonomics.[326]

    See also

    References


  • Cipresso, Pietro; Giglioli, Irene Alice Chicchi; Raya, Mariano Alcañiz; Riva, Giuseppe (2018). "The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature". Frontiers in Psychology. 9: 2086. doi:10.3389/fpsyg.2018.02086. PMC 6232426. PMID 30459681.

  • Wu, Hsin-Kai; Lee, Silvia Wen-Yu; Chang, Hsin-Yi; Liang, Jyh-Chong (March 2013). "Current status, opportunities and challenges of augmented reality in education". Computers & Education. 62: 41–49. doi:10.1016/j.compedu.2012.10.024.

  • Rosenberg, Louis B. (1992). "The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator Performance in Remote Environments". Archived from the original on 10 July 2019.

  • Steuer, Jonathan. "Defining Virtual Reality: Dimensions Determining Telepresence". Department of Communication, Stanford University, 15 October 1993. Archived from the original on 24 May 2016. Retrieved 27 November 2018.

  • Introducing Virtual Environments Archived 21 April 2016 at the Wayback Machine National Center for Supercomputing Applications, University of Illinois.

  • Rosenberg, L.B. (1993). "Virtual fixtures: Perceptual tools for telerobotic manipulation". Proceedings of IEEE virtual reality Annual International Symposium. pp. 76–82. doi:10.1109/VRAIS.1993.380795. ISBN 0-7803-1363-1. S2CID 9856738.

  • Dupzyk, Kevin (6 September 2016). "I Saw the Future Through Microsoft's Hololens". Popular Mechanics.

  • Arai, Kohei, ed. (2022), "Augmented Reality: Reflections at Thirty Years", Proceedings of the Future Technologies Conference (FTC) 2021, Volume 1, Lecture Notes in Networks and Systems, Cham: Springer International Publishing, vol. 358, pp. 1–11, doi:10.1007/978-3-030-89906-6_1, ISBN 978-3-030-89905-9, S2CID 239881216

  • Moro, Christian; Birt, James; Stromberga, Zane; Phelps, Charlotte; Clark, Justin; Glasziou, Paul; Scott, Anna Mae (2021). "Virtual and Augmented Reality Enhancements to Medical and Science Student Physiology and Anatomy Test Performance: A Systematic Review and Meta‐Analysis". Anatomical Sciences Education. 14 (3): 368–376. doi:10.1002/ase.2049. ISSN 1935-9772. PMID 33378557. S2CID 229929326.

  • "How to Transform Your Classroom with Augmented Reality - EdSurge News". 2 November 2015.

  • Crabben, Jan van der (16 October 2018). "Why We Need More Tech in History Education". ancient.eu. Archived from the original on 23 October 2018. Retrieved 23 October 2018.

  • Hegde, Naveen (19 March 2023). "What is Augmented Reality". Codegres. Retrieved 19 March 2023.

  • Chen, Brian (25 August 2009). "If You're Not Seeing Data, You're Not Seeing". Wired. Retrieved 18 June 2019.

  • Maxwell, Kerry. "Augmented Reality". macmillandictionary.com. Retrieved 18 June 2019.

  • "Augmented Reality (AR)". augmentedrealityon.com. Archived from the original on 5 April 2012. Retrieved 18 June 2019.

  • Azuma, Ronald (August 1997). "A Survey of Augmented Reality" (PDF). Presence: Teleoperators and Virtual Environments. MIT Press. 6 (4): 355–385. doi:10.1162/pres.1997.6.4.355. S2CID 469744. Retrieved 2 June 2021.

  • "Phenomenal Augmented Reality" (PDF). IEEE Consumer Electronics, Volume 4, No. 4, October 2015, cover + pp. 92–97.

  • Time-frequency perspectives, with applications, in Advances in Machine Vision, Strategies and Applications, World Scientific Series in Computer Science: Volume 32, C Archibald and Emil Petriu, Cover + pp 99–128, 1992.

  • Mann, Steve; Feiner, Steve; Harner, Soren; Ali, Mir Adnan; Janzen, Ryan; Hansen, Jayse; Baldassi, Stefano (15 January 2015). "Wearable Computing, 3D Aug* Reality, Photographic/Videographic Gesture Sensing, and Veillance". Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction - TEI '14. ACM. pp. 497–500. doi:10.1145/2677199.2683590. ISBN 9781450333054. S2CID 12247969.

  • Carmigniani, Julie; Furht, Borko; Anisetti, Marco; Ceravolo, Paolo; Damiani, Ernesto; Ivkovic, Misa (1 January 2011). "Augmented reality technologies, systems and applications". Multimedia Tools and Applications. 51 (1): 341–377. doi:10.1007/s11042-010-0660-6. ISSN 1573-7721. S2CID 4325516.

  • Ma, Minhua; C. Jain, Lakhmi; Anderson, Paul (2014). Virtual, Augmented Reality and Serious Games for Healthcare 1. Springer Publishing. p. 120. ISBN 978-3-642-54816-1.

  • Marvin, Rob (16 August 2016). "Augment Is Bringing the AR Revolution to Business". PC Mag. Retrieved 23 February 2021.

  • Stamp, Jimmy (30 August 2019). "Retail is getting reimagined with augmented reality". The Architect's Newspaper. Archived from the original on 15 November 2019.

  • Mahmood, Ajmal (12 April 2019). "The future is virtual - why AR and VR will live in the cloud". TechRadar. Retrieved 12 December 2019.

  • Aubrey, Dave. "Mural Artists Use Augmented Reality To Highlight Effects Of Climate Change". VRFocus. Retrieved 12 December 2019.

  • Metz, Rachael (2 August 2012). "Augmented Reality Is Finally Getting Real". technologyreview.com. Retrieved 18 June 2019.

  • Marino, Emanuele; Bruno, Fabio; Barbieri, Loris; Lagudi, Antonio (2022). "Benchmarking Built-In Tracking Systems for Indoor AR Applications on Popular Mobile Devices". Sensors. 22 (14): 5382. Bibcode:2022Senso..22.5382M. doi:10.3390/s22145382. PMC 9320911. PMID 35891058.

  • "Fleet Week: Office of Naval Research Technology". eweek.com. 28 May 2012. Retrieved 18 June 2019.

  • Rolland, Jannick; Baillott, Yohan; Goon, Alexei. A Survey of Tracking Technology for Virtual Environments, Center for Research and Education in Optics and Lasers, University of Central Florida.

  • Klepper, Sebastian. "Augmented Reality - Display Systems" (PDF). campar.in.tum.de. Archived from the original (PDF) on 28 January 2013. Retrieved 18 June 2019.

  • Rolland, Jannick P.; Biocca, Frank; Hamza-Lup, Felix; Ha, Yanggang; Martins, Ricardo (October 2005). "Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications". Presence: Teleoperators and Virtual Environments. 14 (5): 528–549. doi:10.1162/105474605774918741. S2CID 5328957.

  • "Gestigon Gesture Tracking – TechCrunch Disrupt". TechCrunch. Retrieved 11 October 2016.

  • Matney, Lucas (29 August 2016). "uSens shows off new tracking sensors that aim to deliver richer experiences for mobile VR". TechCrunch. Retrieved 29 August 2016.

  • Grifatini, Kristina. Augmented Reality Goggles, Technology Review 10 November 2010.

  • Arthur, Charles. UK company's 'augmented reality' glasses could be better than Google's, The Guardian, 10 September 2012.

  • Gannes, Liz. "Google Unveils Project Glass: Wearable Augmented-Reality Glasses". All Things D. Retrieved 4 April 2012.

  • Benedetti, Winda. Xbox leak reveals Kinect 2, augmented reality glasses NBC News. Retrieved 23 August 2012.

  • "Augmented Reality". merriam-webster.com. Archived from the original on 13 September 2015. Retrieved 8 October 2015. an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (such as a smartphone camera) also : the technology used to create augmented reality

  • "Augmented Reality". oxforddictionaries.com. Archived from the original on 25 November 2013. Retrieved 8 October 2015. A technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view.

  • "What is Augmented Reality (AR): Augmented Reality Defined, iPhone Augmented Reality Apps and Games and More". Digital Trends. 3 November 2009. Retrieved 8 October 2015.

  • "Full Page Reload". IEEE Spectrum: Technology, Engineering, and Science News. 10 April 2013. Retrieved 6 May 2020.

  • "Contact lens for the display of information such as text, graphics, or pictures".

  • Greenemeier, Larry. Computerized Contact Lenses Could Enable In-Eye Augmented Reality. Scientific American, 23 November 2011.

  • Yoneda, Yuka. Solar Powered Augmented Contact Lenses Cover Your Eye with 100s of LEDs. inhabitat, 17 March 2010.

  • Rosen, Kenneth (8 December 2012). "Contact Lenses Can Display Your Text Messages". Mashable.com. Mashable.com. Retrieved 13 December 2012.

  • O'Neil, Lauren. "LCD contact lenses could display text messages in your eye". CBC News. Archived from the original on 11 December 2012. Retrieved 12 December 2012.

  • Anthony, Sebastian. US military developing multi-focus augmented reality contact lenses. ExtremeTech, 13 April 2012.

  • Bernstein, Joseph. 2012 Invention Awards: Augmented-Reality Contact Lenses Popular Science, 5 June 2012.

  • Robertson, Adi (10 January 2013). "Innovega combines glasses and contact lenses for an unusual take on augmented reality". The Verge. Retrieved 6 May 2020.

  • Robot Genius (24 July 2012). "Sight". vimeo.com. Retrieved 18 June 2019.

  • Kosner, Anthony Wing (29 July 2012). "Sight: An 8-Minute Augmented Reality Journey That Makes Google Glass Look Tame". Forbes. Retrieved 3 August 2015.

  • O'Dell, J. (27 July 2012). "Beautiful short film shows a frightening future filled with Google Glass-like devices". Retrieved 3 August 2015.

  • "Samsung Just Patented Smart Contact Lenses With a Built-in Camera". sciencealert.com. 7 April 2016. Retrieved 18 June 2019.

  • "Full Page Reload". IEEE Spectrum: Technology, Engineering, and Science News. 16 January 2020. Retrieved 6 May 2020.

  • "Mojo Vision's AR contact lenses are very cool, but many questions remain". TechCrunch. 16 January 2020. Retrieved 6 May 2020.

  • "Mojo Vision is developing AR contact lenses". TechCrunch. Retrieved 6 May 2020.

  • Viirre, E.; Pryor, H.; Nagata, S.; Furness, T. A. (1998). "The virtual retinal display: a new technology for virtual reality and augmented vision in medicine". Studies in Health Technology and Informatics. 50 (Medicine Meets virtual reality): 252–257. doi:10.3233/978-1-60750-894-6-252. ISSN 0926-9630. PMID 10180549.

  • Tidwell, Michael; Johnson, Richard S.; Melville, David; Furness, Thomas A. The Virtual Retinal Display – A Retinal Scanning Imaging System Archived 13 December 2010 at the Wayback Machine, Human Interface Technology Laboratory, University of Washington.

  • "GlassEyes": The Theory of EyeTap Digital Eye Glass, supplemental material for IEEE Technology and Society, Volume 31, Number 3, 2012, pp. 10–14.

  • "Intelligent Image Processing", John Wiley and Sons, 2001, ISBN 0-471-40637-6, 384 p.

  • Marker vs Markerless AR Archived 28 January 2013 at the Wayback Machine, Dartmouth College Library.

  • Feiner, Steve (3 March 2011). "Augmented reality: a long way off?". AR Week. Pocket-lint. Retrieved 3 March 2011.

  • Borge, Ariel (11 July 2016). "The story behind 'Pokémon Go's' impressive mapping". Mashable. Retrieved 13 July 2016.

  • Bimber, Oliver; Encarnação, L. Miguel; Branco, Pedro (2001). "The Extended Virtual Table: An Optical Extension for Table-Like Projection Systems". Presence: Teleoperators and Virtual Environments. 10 (6): 613–631. doi:10.1162/105474601753272862. S2CID 4387072.

  • Ramesh Raskar, Greg Welch, Henry Fuchs Spatially Augmented Reality, First International Workshop on Augmented Reality, Sept 1998.

  • Knight, Will. Augmented reality brings maps to life 19 July 2005.

  • Sung, Dan. Augmented reality in action – maintenance and repair. Pocket-lint, 1 March 2011.

  • Braud, T. "Future Networking Challenges: The Case of Mobile Augmented Reality" (PDF). cse.ust.hk. Retrieved 20 June 2019.

  • Marshall, Gary. Beyond the mouse: how input is evolving – touch, voice and gesture recognition and augmented reality. TechRadar/PC Plus, 23 August 2009.

  • Simonite, Tom. Augmented Reality Meets Gesture Recognition, Technology Review, 15 September 2011.

  • Chaves, Thiago; Figueiredo, Lucas; Da Gama, Alana; de Araujo, Christiano; Teichrieb, Veronica. Human Body Motion and Gestures Recognition Based on Checkpoints. SVR '12 Proceedings of the 2012 14th Symposium on Virtual and Augmented Reality pp. 271–278.

  • Barrie, Peter; Komninos, Andreas; Mandrychenko, Oleksii. A Pervasive Gesture-Driven Augmented Reality Prototype using Wireless Sensor Body Area Networks.

  • Bosnor, Kevin (19 February 2001). "How Augmented Reality Works". howstuffworks.

  • Bajarin, Tim (31 January 2017). "This Technology Could Replace the Keyboard and Mouse". time.com. Retrieved 19 June 2019.

  • Meisner, Jeffrey; Donnelly, Walter P.; Roosen, Richard (6 April 1999). "Augmented reality technology".

  • van Krevelen, D.W.F.; Poelman, R. (2010). A Survey of Augmented Reality Technologies, Applications and Limitations. International Journal of Virtual Reality. pp. 3, 6.

  • Pepsi Max (20 March 2014), Unbelievable Bus Shelter | Pepsi Max. Unbelievable #LiveForNow, retrieved 6 March 2018

  • Jung, Timothy; tom Dieck, M. Claudia, eds. (4 September 2017). Augmented Reality and Virtual Reality: Empowering Human, Place and Business. Cham, Switzerland. ISBN 9783319640273. OCLC 1008871983.

  • Azuma, Ronald; Balliot, Yohan; Behringer, Reinhold; Feiner, Steven; Julier, Simon; MacIntyre, Blair. Recent Advances in Augmented Reality Computers & Graphics, November 2001.

  • Maida, James; Bowen, Charles; Montpool, Andrew; Pace, John. Dynamic registration correction in augmented-reality systems Archived 18 May 2013 at the Wayback Machine, Space Life Sciences, NASA.

  • State, Andrei; Hirota, Gentaro; Chen, David T; Garrett, William; Livingston, Mark. Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking, Department of Computer Science, University of North Carolina at Chapel Hill.

  • Bajura, Michael; Neumann, Ulrich. Dynamic Registration Correction in Augmented-Reality Systems Archived 13 July 2012, University of North Carolina, University of Southern California.

  • "What are augmented reality markers?". anymotion.com. Retrieved 18 June 2019.

  • "Markerless Augmented Reality is here". Marxent | Top Augmented Reality Apps Developer. 9 May 2014. Retrieved 23 January 2018.

  • "ARML 2.0 SWG". Open Geospatial Consortium website. Open Geospatial Consortium. Archived from the original on 12 November 2013. Retrieved 12 November 2013.

  • "Top 5 AR SDKs". Augmented Reality News. Archived from the original on 13 December 2013. Retrieved 15 November 2013.

  • "Top 10 AR SDKs". Augmented World Expo. Archived from the original on 23 November 2013. Retrieved 15 November 2013.

  • Wilson, Tyler (30 January 2018). ""The Principles of Good UX for Augmented Reality – UX Collective." UX Collective". Retrieved 19 June 2019.

  • "Best Practices for Mobile AR Design- Google". blog.google. 13 December 2017.

  • "Human Computer Interaction with Augmented Reality" (PDF). eislab.fim.uni-passau.de. Archived from the original (PDF) on 25 May 2018.

  • "Basic Patterns of Mobile Navigation". theblog.adobe.com. 9 May 2017. Archived from the original on 13 April 2018. Retrieved 12 April 2018.

  • "Principles of Mobile App Design: Engage Users and Drive Conversions". thinkwithgoogle.com. Archived from the original on 13 April 2018.

  • "Inside Out: Interaction Design for Augmented Reality-UXmatters". uxmatters.com.

  • Moro, Christian; Štromberga, Zane; Raikos, Athanasios; Stirling, Allan (2017). "The effectiveness of virtual and augmented reality in health sciences and medical anatomy". Anatomical Sciences Education. 10 (6): 549–559. doi:10.1002/ase.1696. ISSN 1935-9780. PMID 28419750. S2CID 25961448.

  • "Don't be blind on wearable cameras insists AR genius". SlashGear. 20 July 2012. Retrieved 21 October 2018.

  • Stuart Eve (2012). "Augmenting Phenomenology: Using Augmented Reality to Aid Archaeological Phenomenology in the Landscape" (PDF). Journal of Archaeological Method and Theory. 19 (4): 582–600. doi:10.1007/s10816-012-9142-7. S2CID 4988300.

  • Dähne, Patrick; Karigiannis, John N. (2002). Archeoguide: System Architecture of a Mobile Outdoor Augmented Reality System. ISBN 9780769517810. Retrieved 6 January 2010.

  • LBI-ArchPro (5 September 2011). "School of Gladiators discovered at Roman Carnuntum, Austria". Retrieved 29 December 2014.

  • Papagiannakis, George; Schertenleib, Sébastien; O'Kennedy, Brian; Arevalo-Poizat, Marlene; Magnenat-Thalmann, Nadia; Stoddart, Andrew; Thalmann, Daniel (1 February 2005). "Mixing virtual and real scenes in the site of ancient Pompeii". Computer Animation and Virtual Worlds. 16 (1): 11–24. CiteSeerX 10.1.1.64.8781. doi:10.1002/cav.53. ISSN 1546-427X. S2CID 5341917.

  • Benko, H.; Ishak, E.W.; Feiner, S. (2004). "Collaborative Mixed Reality Visualization of an Archaeological Excavation". Third IEEE and ACM International Symposium on Mixed and Augmented Reality. pp. 132–140. doi:10.1109/ISMAR.2004.23. ISBN 0-7695-2191-6. S2CID 10122485.

  • Divecha, Devina. Augmented Reality (AR) used in architecture and design Archived 14 February 2013 at the Wayback Machine. designMENA, 8 September 2011.

  • Architectural dreams in augmented reality. University News, University of Western Australia. 5 March 2012.

  • Outdoor AR. TV One News, 8 March 2004.

  • Churcher, Jason. "Internal accuracy vs external accuracy". Retrieved 7 May 2013.

  • "Augment for Architecture & Construction". Archived from the original on 8 November 2015. Retrieved 12 October 2015.

  • "App gives a view of city as it used to be". Stuff. 10 December 2011. Retrieved 20 May 2018.

  • Lee, Gun (2012). "CityViewAR outdoor AR visualization". Proceedings of the 13th International Conference of the NZ Chapter of the ACM's Special Interest Group on Human-Computer Interaction - CHINZ '12. Chinz '12. ACM. p. 97. doi:10.1145/2379256.2379281. hdl:10092/8693. ISBN 978-1-4503-1474-9. S2CID 34199215.

  • Lock, Oliver (25 February 2020). "HoloCity". doi:10.1145/3359997.3365734.

  • Alzahrani, Nouf M.; Alfouzan, Faisal Abdulaziz (2022). "Augmented Reality (AR) and Cyber-Security for Smart Cities—A Systematic Literature Review". Sensors. 22 (7): 2792. Bibcode:2022Senso..22.2792A. doi:10.3390/s22072792. ISSN 1424-8220. PMC 9002492. PMID 35408406.

  • Groundbreaking Augmented Reality-Based Reading Curriculum Launches, PRweb, 23 October 2011.

  • Stewart-Smith, Hanna. Education with Augmented Reality: AR textbooks released in Japan, ZDnet, 4 April 2012.

  • Augmented reality in education smarter learning.

  • Shumaker, Randall; Lackey, Stephanie (20 July 2015). Virtual, Augmented and Mixed Reality: 7th International Conference, VAMR 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015, Proceedings. Springer. ISBN 9783319210674.

  • Wu, Hsin-Kai; Lee, Silvia Wen-Yu; Chang, Hsin-Yi; Liang, Jyh-Chong (March 2013). "Current status, opportunities and challenges of augmented reality in education". Computers & Education. 62: 41–49. doi:10.1016/j.compedu.2012.10.024.

  • Lubrecht, Anna. Augmented Reality for Education Archived 5 September 2012 at the Wayback Machine The Digital Union, The Ohio State University 24 April 2012.

  • "Augmented reality, an evolution of the application of mobile devices" (PDF). Archived from the original (PDF) on 17 April 2015. Retrieved 19 June 2014.


     https://en.wikipedia.org/wiki/Augmented_reality

    Liquid Image Corporation was a Winnipeg-based company that manufactured head-mounted displays. The company was formed in 1992 by Tony Havelka,[1] David Collette[2] and Shannon O'Brien.[3] Liquid Image was started in Winnipeg, MB in response to the emergence of a market for virtual reality technology. Funding was provided by a group of local angel investors, and the first office was in Tony Havelka's attic.

    https://en.wikipedia.org/wiki/Liquid_Image

    A flash is a device used in photography that produces a brief burst of light (typically lasting 1/1000 to 1/200 of a second) at a color temperature of about 5500 K to help illuminate a scene. A major purpose of a flash is to illuminate a dark scene. Other uses are capturing quickly moving objects or changing the quality of light. Flash refers either to the flash of light itself or to the electronic flash unit discharging the light. Most current flash units are electronic, having evolved from single-use flashbulbs and flammable powders. Modern cameras often activate flash units automatically.

    Flash units are commonly built directly into a camera. Some cameras allow separate flash units to be mounted via a standardized accessory mount bracket (a hot shoe). In professional studio equipment, flashes may be large, standalone units, or studio strobes, powered by special battery packs or connected to mains power. They are either synchronized with the camera using a flash synchronization cable or radio signal, or are light-triggered, meaning that only one flash unit needs to be synchronized with the camera, which in turn triggers the other units, called slaves.

    https://en.wikipedia.org/wiki/Flash_(photography)

    Types

    Flash-lamp/Flash powder

    Demonstration of a magnesium flash powder lamp from 1909

    Studies of magnesium by Bunsen and Roscoe in 1859 showed that burning this metal produced a light with similar qualities to daylight. The potential application to photography inspired Edward Sonstadt to investigate methods of manufacturing magnesium so that it would burn reliably for this use. He applied for patents in 1862 and by 1864 had started the Manchester Magnesium Company with Edward Mellor. With the help of engineer William Mather, who was also a director of the company, they produced flat magnesium ribbon, which was said to burn more consistently and completely so giving better illumination than round wire. It also had the benefit of being a simpler and cheaper process than making round wire.[1] Mather was also credited with the invention of a holder for the ribbon, which formed a lamp to burn it in.[2] A variety of magnesium ribbon holders were produced by other manufacturers, such as the Pistol Flashmeter,[3] which incorporated an inscribed ruler that allowed the photographer to use the correct length of ribbon for the exposure they needed. The packaging also implies that the magnesium ribbon was not necessarily broken off before being ignited.

    Vintage AHA smokeless flash powder lamp kit, Germany

    Flash powder, a mixture of magnesium powder and potassium chlorate introduced by its German inventors Adolf Miethe and Johannes Gaedicke in 1887, was an alternative to magnesium ribbon. A measured amount was put into a pan or trough and ignited by hand, producing a brief brilliant flash of light, along with the smoke and noise that might be expected from such an explosive event. This could be a life-threatening activity, especially if the flash powder was damp.[4] An electrically triggered flash lamp was invented by Joshua Lionel Cowen in 1899. His patent describes a device for igniting photographers' flash powder by using dry cell batteries to heat a wire fuse. Variations and alternatives were touted from time to time and a few found a measure of success, especially for amateur use. In 1905, one French photographer was using intense non-explosive flashes produced by a special mechanized carbon arc lamp to photograph subjects in his studio,[5] but more portable and less expensive devices prevailed. On through the 1920s, flash photography normally meant a professional photographer sprinkling powder into the trough of a T-shaped flash lamp, holding it aloft, then triggering a brief and (usually) harmless bit of pyrotechnics.

    Flashbulbs

    Ernst Leitz Wetzlar flash from 1950s
    Flashbulbs have ranged in size from the diminutive AG-1 to the massive No. 75.
    Kodak Brownie Hawkeye with "Kodalite Flasholder" and Sylvania P25 blue-dot daylight-type flashbulb
    The AG-1 flashbulb, introduced in 1958, used wires protruding from its base as electrical contacts; this eliminated the need for a separate metal base.

    The use of flash powder in an open lamp was replaced by flashbulbs; magnesium filaments were contained in bulbs filled with oxygen gas, and electrically ignited by a contact in the camera shutter.[6] Manufactured flashbulbs were first produced commercially in Germany in 1929.[7] Such a bulb could only be used once, and was too hot to handle immediately after use, but the confinement of what would otherwise have amounted to a small explosion was an important advance. A later innovation was the coating of flashbulbs with a plastic film to maintain bulb integrity in the event of the glass shattering during the flash. A blue plastic film was introduced as an option to match the spectral quality of the flash to daylight-balanced colour film. Subsequently, the magnesium was replaced by zirconium, which produced a brighter flash.

    There was a significant delay after ignition for a flashbulb to reach full brightness, and the bulb burned for a relatively long time, compared to shutter speeds required to stop motion and not display camera shake. Slower shutter speeds (typically from 1/10 to 1/50 of a second) were initially used on cameras to ensure proper synchronization and to make use of all the bulb's light output. Cameras with flash sync triggered the flashbulb a fraction of a second before opening the shutter to allow it to reach full brightness, allowing faster shutter speeds. A flashbulb widely used during the 1960s was the Press 25, the 25-millimetre (1 in) flashbulb often used by newspapermen in period movies, usually attached to a press camera or a twin-lens reflex camera. Its peak light output was around a million lumens. Other flashbulbs in common use were the M-series, M-2, M-3 etc., which had a small ("miniature") metal bayonet base fused to the glass bulb. The largest flashbulb ever produced was the GE Mazda No. 75, being over eight inches long with a girth of 14 inches, initially developed for nighttime aerial photography during World War II.[8]

    The all-glass PF1 bulb was introduced in 1954.[9] Eliminating the metal base and the multiple manufacturing steps needed to attach it to the glass bulb cut the cost substantially compared to the larger M series bulbs. The design required a fibre ring around the base to hold the contact wires against the side of the glass base. An adapter was available allowing the bulb to fit into flash guns made for bayonet-capped bulbs. The PF1 (along with the M2) had a faster ignition time (less delay between shutter contact and peak output), so it could be used with X synch below 1/30 of a second—while most bulbs require a shutter speed of 1/15 on X synch to keep the shutter open long enough for the bulb to ignite and burn. A smaller version which was not as bright but did not require the fibre ring, the AG-1, was introduced in 1958; it was cheaper, and rapidly supplanted the PF1.

    Flashcubes, Magicubes and Flipflash

    Flashcube fitted to a Kodak Instamatic camera, showing both unused (left) and used (right) bulbs
    Undersides of Flashcube (left) and Magicube (right) cartridges
    "Flip flash" type cartridge

    In 1965 Eastman Kodak of Rochester, New York replaced the individual flashbulb technology used on early Instamatic cameras with the Flashcube developed by Sylvania Electric Products.[10][11]

    A flashcube was a module with four expendable flashbulbs, each mounted at 90° from the others in its own reflector. For use it was mounted atop the camera with an electrical connection to the shutter release and a battery inside the camera. After each flash exposure, the film advance mechanism also rotated the flashcube 90° to a fresh bulb. This arrangement allowed the user to take four images in rapid succession before inserting a new flashcube.

    The later Magicube (or X-Cube) by General Electric retained the four-bulb format, but did not require electrical power. It was not interchangeable with the original Flashcube. Each bulb in a Magicube was set off by releasing one of four cocked wire springs within the cube. The spring struck a primer tube at the base of the bulb, which contained a fulminate, which in turn ignited shredded zirconium foil in the flash. A Magicube could also be fired using a key or paper clip to trip the spring manually. X-cube was an alternate name for Magicubes, indicating the appearance of the camera's socket.

    Other common flashbulb-based devices were the Flashbar and Flipflash, which provided ten flashes from a single unit. The bulbs in a Flipflash were set in a vertical array, putting a distance between the bulb and the lens, eliminating red eye. The Flipflash name derived from the fact that once half the flashbulbs had been used, the unit had to be flipped over and re-inserted to use the remaining bulbs. In many Flipflash cameras, the bulbs were ignited by electrical currents produced when a piezoelectric crystal was struck mechanically by a spring-loaded striker, which was cocked each time the film was advanced.

    Electronic flash

    The electronic flash tube was introduced by Harold Eugene Edgerton in 1931.[12] The electronic flash reaches full brightness almost instantaneously, and is of very short duration. Edgerton took advantage of the short duration to make several iconic photographs, such as one of a bullet bursting through an apple. The large photographic company Kodak was initially reluctant to take up the idea.[13] Electronic flash, often called "strobe" in the US following Edgerton's use of the technique for stroboscopy, came into some use in the late 1950s, although flashbulbs remained dominant in amateur photography until the mid 1970s. Early units were expensive, and often large and heavy; the power unit was separate from the flash head and was powered by a large lead-acid battery carried with a shoulder strap. Towards the end of the 1960s electronic flashguns of similar size to conventional bulb guns became available; the price, although it had dropped, was still high. The electronic flash system eventually superseded bulb guns as prices came down. Already in the early 1970s, amateur electronic flashes were available for less than $100.

    A typical electronic flash unit contains electronic circuitry that charges a high-capacitance capacitor to several hundred volts. When the flash is triggered by the shutter's flash synchronization contact, the capacitor is discharged rapidly through a permanent flash tube, producing a flash that typically lasts less than 1/1000 of a second, shorter than the shutter speeds in use, and that reaches full brightness before the shutter has started to close. This allows easy synchronization of the maximum shutter opening with full flash brightness, unlike flashbulbs, which were slower to reach full brightness and burned for a longer time, typically 1/30 of a second.

    The built-in flash of a SLR camera, Pentax MZ-30, firing

    A single electronic flash unit is often mounted on a camera's accessory shoe or a bracket; many inexpensive cameras have an electronic flash unit built in. For more sophisticated and longer-range lighting several synchronised flash units at different positions may be used.

    Two professional xenon tube flashes

    Ring flashes that fit to a camera's lens can be used for shadow free portrait and macro photography; some lenses have built-in ring-flash.[14]

    In a photographic studio, more powerful and flexible studio flash systems are used. They usually contain a modeling light, an incandescent light bulb close to the flash tube; the continuous illumination of the modeling light lets the photographer visualize the effect of the flash. A system may comprise multiple synchronised flashes for multi-source lighting.

    The strength of a flash device is often indicated in terms of a guide number designed to simplify exposure setting. The energy released by larger studio flash units, such as monolights, is indicated in watt-seconds.
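The guide number relation can be sketched numerically: at a given ISO, guide number = f-number × subject distance, so the required aperture (or maximum flash range) follows directly. A minimal sketch; the GN-36 unit below is hypothetical:

```python
# Guide number (GN) relates flash power to exposure: GN = f-number * distance.
# Given a flash's GN (metres, ISO 100), the two useful rearrangements are:

def required_f_number(guide_number: float, distance_m: float) -> float:
    """f-number that correctly exposes a subject at distance_m."""
    return guide_number / distance_m

def max_distance(guide_number: float, f_number: float) -> float:
    """Farthest subject distance correctly exposed at a given aperture."""
    return guide_number / f_number

# A hypothetical hot-shoe flash with GN 36 (metres, ISO 100):
print(required_f_number(36, 4.5))  # 8.0 -> f/8 for a subject 4.5 m away
print(max_distance(36, 2.8))       # ~12.9 m range wide open at f/2.8
```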

    Canon names its electronic flash units Speedlite, and Nikon uses Speedlight; these terms are frequently used as generic terms for electronic flash units designed to be mounted on, and triggered by, a camera hot shoe.

    High speed flash

    An air-gap flash is a high-voltage device that discharges a flash of light with an exceptionally short duration, often much less than one microsecond. These are commonly used by scientists or engineers for examining extremely fast-moving objects or reactions, famous for producing images of bullets tearing through light bulbs and balloons (see Harold Eugene Edgerton). An example of a process by which to create a high speed flash is the exploding wire method.

    A photo of a Smith & Wesson Model 686 firing, taken with a high speed air-gap flash. The photo was taken in a darkened room, with camera's shutter open and the flash was triggered by the sound of the shot using a microphone.

    Multi-flash

    A camera that implements multiple flashes can be used to find depth edges or create stylized images. Such a camera has been developed by researchers at the Mitsubishi Electric Research Laboratories (MERL). Successive flashing of strategically placed flash mechanisms results in shadows along the depths of the scene. This information can be manipulated to suppress or enhance details or capture the intricate geometric features of a scene (even those hidden from the eye), to create a non-photorealistic image form. Such images could be useful in technical or medical imaging.[15]

    Flash intensity

    Unlike flashbulbs, the intensity of an electronic flash can be adjusted on some units. To do this, smaller flash units typically vary the capacitor discharge time, whereas larger (e.g., higher power, studio) units typically vary the capacitor charge. Color temperature can change as a result of varying the capacitor charge, making color correction necessary. Constant-color-temperature flash can be achieved by using appropriate circuitry.[16]

    Flash intensity is typically measured in stops or in fractions (1, 1/2, 1/4, 1/8 etc.). Some monolights display an "EV Number", so that a photographer can know the difference in brightness between different flash units with different watt-second ratings. EV10.0 is defined as 6400 watt-seconds, and EV9.0 is one stop lower, i.e. 3200 watt-seconds.[17]
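The EV scale above maps to watt-seconds logarithmically: one EV step is one stop, i.e. a factor of two in energy. A small sketch built on the stated definition EV 10.0 = 6400 watt-seconds:

```python
import math

# EV number for monolights: EV 10.0 is defined as 6400 watt-seconds, and
# each whole EV step is one stop (a factor of two in energy).

def ev_from_watt_seconds(ws: float) -> float:
    return 10.0 + math.log2(ws / 6400.0)

def watt_seconds_from_ev(ev: float) -> float:
    return 6400.0 * 2.0 ** (ev - 10.0)

print(ev_from_watt_seconds(6400))  # 10.0
print(ev_from_watt_seconds(3200))  # 9.0, one stop lower
print(watt_seconds_from_ev(8.0))   # 1600.0
```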

    Flash duration

    Flash duration is commonly described by two numbers that are expressed in fractions of a second:

    • t.1 is the length of time the light intensity is above 0.1 (10%) of the peak intensity
    • t.5 is the length of time the light intensity is above 0.5 (50%) of the peak intensity

    For example, a single flash event might have a t.5 value of 1/1200 and t.1 of 1/450. These values determine the ability of a flash to "freeze" moving subjects in applications such as sports photography.

    In cases where intensity is controlled by capacitor discharge time, t.5 and t.1 decrease with decreasing intensity. Conversely, in cases where intensity is controlled by capacitor charge, t.5 and t.1 increase with decreasing intensity due to the non-linearity of the capacitor's discharge curve.
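A simplified model of the discharge-controlled case, assuming a purely exponential intensity decay (real flash tubes deviate from this), shows how t.5 and t.1 relate:

```python
import math

# Simplified model: treat flash intensity as a pure exponential capacitor
# discharge, I(t) = I0 * exp(-t / tau). Then t.x, the time the intensity
# stays above fraction x of peak, is tau * ln(1/x).

def t_fraction(tau_s: float, fraction: float) -> float:
    """Time above `fraction` of peak intensity, exponential-decay model."""
    return tau_s * math.log(1.0 / fraction)

tau = 1 / 4000            # hypothetical time constant, seconds
t5 = t_fraction(tau, 0.5)  # about 1/5800 s
t1 = t_fraction(tau, 0.1)  # about 1/1700 s
print(t1 / t5)  # ~3.32 regardless of tau: ln(10) / ln(2)
```

Under this model t.1 is always about 3.3 times t.5, which is why the two numbers quoted for a real flash (e.g. t.5 of 1/1200 and t.1 of 1/450) differ by a similar ratio.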

    Flash LED used in phones

    Flash LED with charge pump integrated circuit

    High-current flash LEDs are used as flash sources in camera phones, although they are less bright than xenon flash tubes. Unlike xenon tubes, LEDs require only a low voltage. They are more energy-efficient, and very small. The LED flash can also be used for illumination of video recordings or as an autofocus assist lamp in low-light photography; it can also be used as a general-purpose non-photographic light source.

    Focal-plane-shutter synchronization

    Electronic flash units have shutter speed limits with focal-plane shutters. Focal-plane shutters expose using two curtains that cross the sensor. The first one opens and the second curtain follows it after a delay equal to the nominal shutter speed. A typical modern focal-plane shutter on a full-frame or smaller sensor camera takes about 1/400 s to 1/300 s to cross the sensor, so at exposure times shorter than this only part of the sensor is uncovered at any one time.

    The time available to fire a single flash which uniformly illuminates the image recorded on the sensor is the exposure time minus the shutter travel time. Equivalently, the minimum possible exposure time is the shutter travel time plus the flash duration (plus any delays in triggering the flash).

    For example, a Nikon D850 has a shutter travel time of about 2.4ms.[18] A full-power flash from a modern built-in or hot shoe mounted electronic flash has a typical duration of about 1ms, or a little less, so the minimum possible exposure time for even exposure across the sensor with a full-power flash is about 2.4ms + 1.0 ms = 3.4ms, corresponding to a shutter speed of about 1/290 s. However some time is required to trigger the flash. At the maximum (standard) D850 X-sync shutter speed of 1/250 s, the exposure time is 1/250 s = 4.0ms, so about 4.0ms - 2.4ms = 1.6ms are available to trigger and fire the flash, and with a 1ms flash duration, 1.6ms - 1.0ms = 0.6ms are available to trigger the flash in this Nikon D850 example.

    Mid- to high-end Nikon DSLRs with a maximum shutter speed of 1/8000 s (roughly D7000 or D800 and above) have an unusual menu-selectable feature which increases the maximum X-Sync speed to 1/320 s = 3.1ms with some electronic flashes. At 1/320 s only 3.1ms - 2.4ms = 0.7ms are available to trigger and fire the flash while achieving a uniform flash exposure, so the maximum flash duration, and therefore maximum flash output, must be, and is, reduced.
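The timing arithmetic in the two examples above can be restated as a small budget calculation (the 2.4 ms travel and 1.0 ms flash figures are taken from the D850 example):

```python
# Flash-timing budget: the time left to trigger and fire the flash is the
# exposure time minus the shutter travel time minus the flash duration.

def flash_budget_ms(shutter_speed_s: float, travel_ms: float,
                    flash_ms: float) -> float:
    """Milliseconds left to trigger the flash after travel and burn time."""
    exposure_ms = shutter_speed_s * 1000.0
    return exposure_ms - travel_ms - flash_ms

# Nikon D850 example: 1/250 s X-sync, 2.4 ms travel, 1.0 ms full-power flash
print(flash_budget_ms(1/250, 2.4, 1.0))  # ~0.6 ms left to trigger
# At the extended 1/320 s sync the budget goes negative: the full 1.0 ms
# flash no longer fits, so flash duration (and output) must be reduced.
print(flash_budget_ms(1/320, 2.4, 1.0))
```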

    Contemporary (2018) focal-plane shutter cameras with full-frame or smaller sensors typically have maximum normal X-sync speeds of 1/200 s or 1/250 s. Some cameras are limited to 1/160 s. X-sync speeds for medium format cameras when using focal-plane shutters are somewhat slower, e.g. 1/125 s,[19] because of the greater shutter travel time required for a wider, heavier, shutter that travels farther across a larger sensor.

    In the past, slow-burning single-use flash bulbs allowed the use of focal-plane shutters at maximum speed, because they produced continuous light for the time taken for the exposing slit to cross the film gate. Such bulbs cannot be used on modern cameras, because the bulb must be fired *before* the first shutter curtain begins to move (M-sync); the X-sync used for electronic flash normally fires only when the first shutter curtain reaches the end of its travel.

    High-end flash units address this problem by offering a mode, typically called FP sync or HSS (High Speed Sync), which fires the flash tube multiple times during the time the slit traverses the sensor. Such units require communication with the camera and are thus dedicated to a particular camera make. The multiple flashes result in a significant decrease in guide number, since each flash is only a part of the total flash power, yet only that part illuminates any particular part of the sensor. In general, if s is the shutter speed and t is the shutter traverse time, the guide number is multiplied by the square root of s / t. For example, if the guide number is 100, the shutter traverse time is 5 ms (a shutter speed of 1/200 s), and the shutter speed is set to 1/2000 s (0.5 ms), the guide number is multiplied by the square root of 0.5 / 5, i.e. divided by about 3.16, so the resultant guide number at this speed would be about 32.

    Current (2010) flash units frequently have much lower guide numbers in HSS mode than in normal modes, even at speeds below the shutter traverse time. For example, the Mecablitz 58 AF-1 digital flash unit has a guide number of 58 in normal operation, but only 20 in HSS mode, even at low speeds.
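Since guide number scales with the square root of usable light energy, and HSS leaves only the fraction s/t of the output on any one part of the sensor, the reduction can be sketched as:

```python
import math

# HSS guide-number reduction: only the fraction s/t of the total flash
# output illuminates any one part of the sensor, and guide number scales
# with the square root of energy, so GN_hss = GN * sqrt(s / t).

def hss_guide_number(gn: float, shutter_speed_s: float,
                     traverse_time_s: float) -> float:
    return gn * math.sqrt(shutter_speed_s / traverse_time_s)

# Example from the text: GN 100, 5 ms traverse, shutter set to 1/2000 s
print(round(hss_guide_number(100, 1/2000, 0.005)))  # 32
```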

    Technique

    Image exposed without additional lighting (left) and with fill flash (right)
    Lighting produced by direct flash (left) and bounced flash (right)

    As well as dedicated studio use, flash may be used as the main light source where ambient light is inadequate, or as a supplementary source in more complex lighting situations. Basic flash lighting produces a hard, frontal light unless modified in some way.[20] Several techniques are used to soften light from the flash or provide other effects.

    Softboxes, diffusers that cover the flash lamp, scatter direct light and reduce its harshness. Reflectors, including umbrellas, flat-white backgrounds, drapes and reflector cards are commonly used for this purpose (even with small hand-held flash units). Bounce flash is a related technique in which flash is directed onto a reflective surface, for example a white ceiling or a flash umbrella, which then reflects light onto the subject. It can be used as fill-flash or, if used indoors, as ambient lighting for the whole scene. Bouncing creates softer, less artificial-looking illumination than direct flash, often reducing overall contrast and expanding shadow and highlight detail, and typically requires more flash power than direct lighting.[20] Part of the bounced light can be also aimed directly on the subject by "bounce cards" attached to the flash unit which increase the efficiency of the flash and illuminate shadows cast by light coming from the ceiling. It's also possible to use one's own palm for that purpose, resulting in warmer tones on the picture, as well as eliminating the need to carry additional accessories.

    Fill flash or "fill-in flash" describes flash used to supplement ambient light in order to illuminate a subject close to the camera that would otherwise be in shade relative to the rest of the scene. The flash unit is set to expose the subject correctly at a given aperture, while shutter speed is calculated to correctly expose for the background or ambient light at that aperture setting. Secondary or slave flash units may be synchronized to the master unit to provide light from additional directions. The slave units are electrically triggered by the light from the master flash. Many small flashes and studio monolights have optical slaves built in. Wireless radio transmitters, such as PocketWizards, allow the receiver unit to be around a corner, or at a distance too far to trigger using an optical sync.

    To strobe, some high end units can be set to flash a specified number of times at a specified frequency. This allows action to be frozen multiple times in a single exposure.[21]

    Colored gels can also be used to change the color of the flash. Correction gels are commonly used, so that the light of the flash is the same as tungsten lights (using a CTO gel) or fluorescent lights.

    Open flash, free flash or manually-triggered flash refers to modes in which the photographer manually triggers the flash unit to fire independently of the shutter.[22]

    Drawbacks

    The distance limitation as seen when taking picture of the wooden floor
    Flash
    The same picture taken with incandescent ambient light, using a longer exposure and a higher ISO speed setting. The distance is no longer restricted, but the colors are unnatural because of a lack of color temperature compensation, and the picture may suffer from more grain or noise.
    No flash
    Using a flash in a museum is mostly prohibited.

    Using on-camera flash gives a very harsh light, which results in a loss of shadows in the image, because the only light source is in practically the same place as the camera. Balancing the flash power and ambient lighting, or using off-camera flash, can help overcome these issues. Using an umbrella or softbox (the flash must be off-camera for this) produces softer shadows.

    A typical problem with cameras using built-in flash units is the low intensity of the flash; the level of light produced will often not suffice for good pictures at distances of over 3 metres (10 ft) or so. Dark, murky pictures with excessive image noise or "grain" will result. In order to get good flash pictures with simple cameras, it is important not to exceed the recommended distance for flash pictures. Larger flashes, especially studio units and monoblocks, have sufficient power for larger distances, even through an umbrella, and can even be used against sunlight at short distances. Cameras which automatically flash in low light conditions often do not take into account the distance to the subject, causing them to fire even when the subject is several tens of metres away and unaffected by the flash. In crowds at sports matches, concerts and so on, the stands or the auditorium can be a constant sea of flashes, resulting in distraction to the performers or players and providing absolutely no benefit to the photographers.

    The "red-eye effect" is another problem with on camera and ring flash units. Since the retina of the human eye reflects red light straight back in the direction it came from, pictures taken from straight in front of a face often exhibit this effect. It can be somewhat reduced by using the "red eye reduction" found on many cameras (a pre-flash that makes the subject's irises contract). However, very good results can be obtained only with a flash unit that is separated from the camera, sufficiently far from the optical axis, or by using bounce flash, where the flash head is angled to bounce light off a wall, ceiling or reflector.

    On some cameras the flash exposure measuring logic fires a pre-flash very quickly before the real flash. In some camera/people combinations this will lead to shut eyes in every picture taken. The blink response time seems to be around 1/10 of a second. If the exposure flash is fired at approximately this interval after the TTL measuring flash, people will be squinting or have their eyes shut. One solution may be the FEL (flash exposure lock) offered on some more expensive cameras, which allows the photographer to fire the measuring flash at some earlier time, long (many seconds) before taking the real picture. Unfortunately many camera manufacturers do not make the TTL pre-flash interval configurable.

    Flash distracts people, limiting the number of pictures that can be taken without irritating them. Photographing with flash may not be permitted in some museums even after purchasing a permit for taking pictures. Flash equipment may take some time to set up, and like any grip equipment, may need to be carefully secured, especially if hanging overhead, so it does not fall on anyone. A small breeze can easily topple a flash with an umbrella on a lightstand if it is not tied down or sandbagged. Larger equipment (e.g., monoblocks) will need a supply of AC power.

    Gallery

    See also

    References


  • McNeil, Ian (2002). An Encyclopaedia of the History of Technology. Routledge. pp. 113–114. ISBN 978-1-134-98165-6. Archived from the original on 2018-05-02.

  • Chapman, James Gardiner (1934). Manchester and Photography. Manchester: Palatine Press. pp. 17–18.

  • Fisher, Maurice. "History of Flash and Ilford Flashguns". www.photomemorabilia.co.uk.

  • Jayon, Bill. "Dangers in the Dark". Archived from the original on May 4, 2015. Retrieved 25 July 2014.

  • "Taking instantaneous photographs by electric light". Popular Mechanics. Hearst Magazines. 7 (2): 233. February 1905.

  • Solbert, Oscar N.; Newhall, Beaumont; Card, James G., eds. (November 1953). "The First Flash Bulb" (PDF). Image, Journal of Photography of George Eastman House. 2 (6): 34. Archived from the original (PDF) on 14 July 2014. Retrieved 26 June 2014.

  • Wightman, Dr. Eugene P. "Photoflash 62 Years Ago" (PDF). Image, Journal of Photography of George Eastman House. IV (7): 49–50. Archived from the original (PDF) on 9 August 2014. Retrieved 4 August 2014.

  • Anderson, Christopher. "Photoflash bulbs". Darklight Imagery. Archived from the original on 28 August 2014. Retrieved 23 October 2014. The largest flashbulb, the mammoth GE Mazda Type 75, was initially developed to be used as a source of light for night time aerial photography during world war II. The Mazda 75 measured over eight inches long and had a girth of over four inches.

  • "flashbulbs.com - philips - page 6". www.flashbulbs.com. Archived from the original on 2 May 2018. Retrieved 2 May 2018.

  • "Kodak Unveils 8 'Flashcube' Camera Types", Democrat and Chronicle (Rochester NY), July 9, 1965, pC-1

  • "Flashcube, Cameras Introduced", Chicago Tribune, July 10, 1965, p2-5

  • Ivan Tolmachev (19 January 2011). "A Brief History of Photographic Flash". Archived from the original on 25 February 2018. Retrieved 24 February 2018.

  • Stephen Dowling (23 July 2014). "Harold Edgerton: The man who froze time". BBC. Archived from the original on 30 January 2018. Retrieved 24 February 2018.

  • For example, the Nikon Medical Nikkor Lens Archived 2015-07-29 at the Wayback Machine

  • Nicholls, Kyle. "Non-photorealistic Camera". Photo.net. Archived from the original on 25 January 2012. Retrieved 28 December 2011.

  • "Studio Flash Explained: Flash Duration". Paul C. Buff, Inc. Retrieved 19 November 2022.

  • "Einstein – User Manual/Operation Instructions" (PDF). Paul C. Buff, Inc. p. 13. Archived from the original (PDF) on 1 July 2013. Retrieved 5 July 2013.

  • "How fast is the Nikon 850 electronic shutter?". Jim Kasson. Retrieved 4 December 2018.

  • "Fujifilm GFX 50R Specifications". Fujifilm. Retrieved 4 December 2018.

  • Langford, Michael (2000). Basic Photography (7th ed.). Focal Press/Butterworth Heinemann. p. 117. ISBN 978-0-240-51592-2.

  • "Strobe Tips". Addendum. June 12, 2010.

  • George, Chris (2008). Mastering Digital Flash Photography: The Complete Reference Guide. Lark Books. pp. 102–. ISBN 9781600592096. Archived from the original on 2018-05-02.


    https://en.wikipedia.org/wiki/Flash_(photography)


    In photography, flash synchronization or flash sync is the synchronization of the firing of a photographic flash with the opening of the shutter that admits light to the photographic film or electronic image sensor.

    PC-socket

    In cameras with mechanical (clockwork) shutters synchronization is supported by an electrical contact within the shutter mechanism, which closes the circuit at the appropriate moment in the shutter opening process. In electronic digital cameras, the mechanism is usually a programmable electronic timing circuit, which may, in some cameras, take input from a mechanical shutter contact. The flash is connected electrically to the camera either by a cable with a standardised coaxial PC (for Prontor/Compur) 3.5 mm (1/8") connector[1] (as defined in ISO 519[2]), or via contacts in an accessory mount (hot shoe) bracket.

    Faster shutter speeds are often better when there is significant ambient illumination: flash can then be used to fill in backlit subjects without motion blur, or to increase depth of field by using a small aperture. In another creative use, the photographer of a moving subject may deliberately combine a slow shutter speed with flash exposure in order to record motion blur of the ambient-lit regions of the image superimposed on the flash-lit regions.

    https://en.wikipedia.org/wiki/Flash_synchronization


    Fill flash is a photographic technique used to brighten deep shadow areas, typically outdoors on sunny days, though the technique is useful any time the background is significantly brighter than the subject of the photograph, particularly in backlit subjects. To use fill flash, the aperture and shutter speed are adjusted to correctly expose the background, and the flash is fired to lighten the foreground.

    Most point and shoot cameras include a fill flash mode that forces the flash to fire, even in bright light.

    Depending on the distance to the subject, using the full power of the flash may greatly overexpose the subject, especially at close range. Certain cameras allow the level of flash to be manually adjusted (e.g. to 1/3, 1/2, or 1/8 power) so that both the foreground and background are correctly exposed, or provide automatic flash exposure compensation.
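One way to pick a manual fill power setting can be sketched under the simple guide-number model (required GN = aperture × distance; flash energy, and hence power fraction, goes as the square of guide number). All numbers below are hypothetical:

```python
# Manual fill-flash power, simple guide-number model:
# required GN = f-number * distance, and since GN scales with the square
# root of energy, the power fraction is (required GN / full-power GN) ** 2.

def fill_power_fraction(full_gn: float, f_number: float,
                        distance_m: float) -> float:
    """Fraction of full power needed to expose the subject correctly."""
    needed_gn = f_number * distance_m
    return (needed_gn / full_gn) ** 2

# Hypothetical GN-32 flash, subject 2 m away, f/8 chosen for the background:
print(fill_power_fraction(32, 8, 2))  # 0.25 -> choose the 1/4 power setting
```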

    https://en.wikipedia.org/wiki/Fill_flash

    Exposure compensation is a technique for adjusting the exposure indicated by a photographic exposure meter, in consideration of factors that may cause the indicated exposure to result in a less-than-optimal image. Factors considered may include unusual lighting distribution, variations within a camera system, filters, non-standard processing, or intended underexposure or overexposure. Cinematographers may also apply exposure compensation for changes in shutter angle or film speed (as exposure index), among other factors.

    Many digital cameras have a display setting, and possibly a physical dial, whereby the photographer can set the camera to either overexpose or underexpose the subject by up to three f-stops in 1/3-stop intervals. Each number on the scale (1, 2, 3) represents one f-stop; decreasing the exposure by one f-stop halves the amount of light reaching the sensor. The dots between the numbers represent 1/3 of an f-stop.[1] 
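    The stop arithmetic above is a simple power of two: each full stop of compensation doubles or halves the light, and the dial moves in 1/3-stop steps. A minimal sketch:

```python
# Each stop of exposure compensation doubles (+) or halves (-) the light
# reaching the sensor; the dial typically moves in 1/3-stop steps.

def exposure_factor(ev):
    """Relative exposure for a compensation of `ev` stops (+1 EV doubles it)."""
    return 2.0 ** ev

for ev in (-1.0, -1/3, 0.0, 1/3, 1.0, 3.0):
    print(f"{ev:+.2f} EV -> x{exposure_factor(ev):.3f}")
```

    For instance, +3 EV admits eight times as much light as the metered exposure, while −1 EV admits half.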

    https://en.wikipedia.org/wiki/Exposure_compensation

    Autobracketing is a feature of some more advanced cameras, whether film or digital cameras, particularly single-lens reflex cameras, where the camera will take several successive shots (often three) with slightly different settings. The images may be automatically combined, for example into one high-dynamic-range image, or they may be stored separately so the best-looking pictures can be picked later from the batch. When the photographer achieves the same result by changing the camera settings between each shot, this is simply called bracketing.  

    https://en.wikipedia.org/wiki/Autobracketing#AEB

    A disassembled USB flash drive. The chip on the left is flash memory. The controller is on the right.

    Flash memory is an electronic non-volatile computer memory storage medium that can be electrically erased and reprogrammed. The two main types of flash memory, NOR flash and NAND flash, are named for the NOR and NAND logic gates. Both use the same cell design, consisting of floating gate MOSFETs. They differ at the circuit level depending on whether the state of the bit line or word lines is pulled high or low: in NAND flash, the relationship between the bit line and the word lines resembles a NAND gate; in NOR flash, it resembles a NOR gate.

    Flash memory, a type of floating-gate memory, was invented at Toshiba in 1980 and is based on EEPROM technology. Toshiba began marketing flash memory in 1987.[1] EPROMs had to be erased completely before they could be rewritten. NAND flash memory, however, may be erased, written, and read in blocks (or pages), which generally are much smaller than the entire device. NOR flash memory allows a single machine word to be written – to an erased location – or read independently. A flash memory device typically consists of one or more flash memory chips (each holding many flash memory cells), along with a separate flash memory controller chip.

    The NAND type is found mainly in memory cards, USB flash drives, solid-state drives (those produced since 2009), feature phones, smartphones, and similar products, for general storage and transfer of data. NAND or NOR flash memory is also often used to store configuration data in digital products, a task previously made possible by EEPROM or battery-powered static RAM. A key disadvantage of flash memory is that it can endure only a relatively small number of write cycles in a specific block.[2]

    Flash memory[3] is used in computers, PDAs, digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific instrumentation, industrial robotics, and medical electronics. Flash memory has fast read access time, but it is not as fast as static RAM or ROM. In portable devices, it is preferred to use flash memory because of its mechanical shock resistance since mechanical drives are more prone to mechanical damage.[4]

    Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over non-flash EEPROM when writing large amounts of data. As of 2019, flash memory cost much less than byte-programmable EEPROM and had become the dominant memory type wherever a system required a significant amount of non-volatile solid-state storage. EEPROMs, however, are still used in applications that require only small amounts of storage, as in serial presence detect.[5][6]

    Flash memory packages can use die stacking with through-silicon vias and several dozen layers of 3D TLC NAND cells (per die) simultaneously to achieve capacities of up to 1 tebibyte per package using 16 stacked dies and an integrated flash controller as a separate die inside the package.[7][8][9][10]

    History

    Background

    The origins of flash memory can be traced back to the development of the floating-gate MOSFET (FGMOS), also known as the floating-gate transistor.[11][12] The original MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, was invented by Egyptian engineer Mohamed M. Atalla and Korean engineer Dawon Kahng at Bell Labs in 1959.[13] Kahng went on to develop a variation, the floating-gate MOSFET, with Taiwanese-American engineer Simon Min Sze at Bell Labs in 1967.[14] They proposed that it could be used as floating-gate memory cells for storing a form of programmable read-only memory (PROM) that is both non-volatile and re-programmable.[14]

    Early types of floating-gate memory included EPROM (erasable PROM) and EEPROM (electrically erasable PROM) in the 1970s.[14] However, early floating-gate memory required engineers to build a memory cell for each bit of data, which proved to be cumbersome,[15] slow,[16] and expensive, restricting floating-gate memory to niche applications in the 1970s, such as military equipment and the earliest experimental mobile phones.[11]

    Invention and commercialization

    Fujio Masuoka, while working for Toshiba, proposed a new type of floating-gate memory that allowed entire sections of memory to be erased quickly and easily, by applying a voltage to a single wire connected to a group of cells.[11] This led to Masuoka's invention of flash memory at Toshiba in 1980.[15][17][18] According to Toshiba, the name "flash" was suggested by Masuoka's colleague, Shōji Ariizumi, because the erasure process of the memory contents reminded him of the flash of a camera.[19] Masuoka and colleagues presented the invention of NOR flash in 1984,[20][21] and then NAND flash at the IEEE 1987 International Electron Devices Meeting (IEDM) held in San Francisco.[22]

    Toshiba commercially launched NAND flash memory in 1987.[1][14] Intel Corporation introduced the first commercial NOR type flash chip in 1988.[23] NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location. This makes it a suitable replacement for older read-only memory (ROM) chips, which are used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance may be from as little as 100 erase cycles for an on-chip flash memory,[24] to a more typical 10,000 or 100,000 erase cycles, up to 1,000,000 erase cycles.[25] NOR-based flash was the basis of early flash-based removable media; CompactFlash was originally based on it, though later cards moved to less expensive NAND flash.

    NAND flash has reduced erase and write times, and requires less chip area per cell, thus allowing greater storage density and lower cost per bit than NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits. This makes NAND flash unsuitable as a drop-in replacement for program ROM, since most microprocessors and microcontrollers require byte-level random access. In this regard, NAND flash is similar to other secondary data storage devices, such as hard disks and optical media, and is thus highly suitable for use in mass-storage devices, such as memory cards and solid-state drives (SSD). Flash memory cards and SSDs store data using multiple NAND flash memory chips.

    The first NAND-based removable memory card format was SmartMedia, released in 1995. Many others followed, including MultiMediaCard, Secure Digital, Memory Stick, and xD-Picture Card.

    Later developments

    A new generation of memory card formats, including RS-MMC, miniSD and microSD, feature extremely small form factors. For example, the microSD card has an area of just over 1.5 cm2, with a thickness of less than 1 mm.

    NAND flash has achieved significant levels of memory density as a result of several major technologies that were commercialized during the late 2000s to early 2010s.[26]

    Multi-level cell (MLC) technology stores more than one bit in each memory cell. NEC demonstrated the technology in 1998, with an 80 Mb flash memory chip storing 2 bits per cell.[27] STMicroelectronics also demonstrated MLC in 2000, with a 64 MB NOR flash memory chip.[28] In 2009, Toshiba and SanDisk introduced NAND flash chips with QLC technology storing 4 bits per cell and holding a capacity of 64 Gbit.[29][30] Samsung Electronics introduced triple-level cell (TLC) technology storing 3 bits per cell, and began mass-producing NAND chips with TLC technology in 2010.[31]

    Charge trap flash

    Charge trap flash (CTF) technology replaces the polysilicon floating gate, which is sandwiched between a blocking gate oxide above and a tunneling oxide below it, with an electrically insulating silicon nitride layer; the silicon nitride layer traps electrons. In theory, CTF is less prone to electron leakage, providing improved data retention.[32][33][34][35][36][37]

    Because CTF replaces the polysilicon with an electrically insulating nitride, it allows for smaller cells and higher endurance (lower degradation or wear). However, electrons can become trapped and accumulate in the nitride, leading to degradation. Leakage is exacerbated at high temperatures, since electrons become more excited as temperature increases. CTF technology nevertheless still uses a tunneling oxide and a blocking layer, which remain the weak points of the technology, since they can still be damaged in the usual ways (the tunnel oxide can be degraded by extremely high electric fields, and the blocking layer by anode hot hole injection (AHHI)).[38][39]

    Degradation or wear of the oxides is the reason why flash memory has limited endurance, and data retention goes down (the potential for data loss increases) with increasing degradation, since the oxides lose their electrically insulating characteristics as they degrade. The oxides must insulate against electrons to prevent them from leaking which would cause data loss.

    In 1991, NEC researchers including N. Kodama, K. Oyama and Hiroki Shirai described a type of flash memory with a charge trap method.[40] In 1998, Boaz Eitan of Saifun Semiconductors (later acquired by Spansion) patented a flash memory technology named NROM that took advantage of a charge trapping layer to replace the conventional floating gate used in conventional flash memory designs.[41] In 2000, an Advanced Micro Devices (AMD) research team led by Richard M. Fastow, Egyptian engineer Khaled Z. Ahmed and Jordanian engineer Sameer Haddad (who later joined Spansion) demonstrated a charge-trapping mechanism for NOR flash memory cells.[42] CTF was later commercialized by AMD and Fujitsu in 2002.[43] 3D V-NAND (vertical NAND) technology stacks NAND flash memory cells vertically within a chip using 3D charge trap flash (CTF) technology. 3D V-NAND technology was first announced by Toshiba in 2007,[44] and the first device, with 24 layers, was first commercialized by Samsung Electronics in 2013.[45][46]

    3D integrated circuit technology

    3D integrated circuit (3D IC) technology stacks integrated circuit (IC) chips vertically into a single 3D IC chip package.[26] Toshiba introduced 3D IC technology to NAND flash memory in April 2007, when they debuted a 16 GB eMMC compliant (product number THGAM0G7D8DBAI6, often abbreviated THGAM on consumer websites) embedded NAND flash memory chip, which was manufactured with eight stacked 2 GB NAND flash chips.[47] In September 2007, Hynix Semiconductor (now SK Hynix) introduced 24-layer 3D IC technology, with a 16 GB flash memory chip that was manufactured with 24 stacked NAND flash chips using a wafer bonding process.[48] Toshiba also used an eight-layer 3D IC for their 32 GB THGBM flash chip in 2008.[49] In 2010, Toshiba used a 16-layer 3D IC for their 128 GB THGBM2 flash chip, which was manufactured with 16 stacked 8 GB chips.[50] In the 2010s, 3D ICs came into widespread commercial use for NAND flash memory in mobile devices.[26]

    As of August 2017, microSD cards with a capacity up to 400 GB (400 billion bytes) are available.[51][52] The same year, Samsung combined 3D IC chip stacking with its 3D V-NAND and TLC technologies to manufacture its 512 GB KLUFG8R1EM flash memory chip with eight stacked 64-layer V-NAND chips.[53] In 2019, Samsung produced a 1024 GB flash chip, with eight stacked 96-layer V-NAND chips and with QLC technology.[54][55]

    Principles of operation

    A flash memory cell

    Flash memory stores information in an array of memory cells made from floating-gate transistors. In single-level cell (SLC) devices, each cell stores only one bit of information. Multi-level cell (MLC) devices, including triple-level cell (TLC) devices, can store more than one bit per cell.

    The floating gate may be conductive (typically polysilicon in most kinds of flash memory) or non-conductive (as in SONOS flash memory).[56]

    Floating-gate MOSFET

    In flash memory, each memory cell resembles a standard metal–oxide–semiconductor field-effect transistor (MOSFET) except that the transistor has two gates instead of one. The cells can be seen as an electrical switch in which current flows between two terminals (source and drain) and is controlled by a floating gate (FG) and a control gate (CG). The CG is similar to the gate in other MOS transistors, but below it lies the FG, insulated all around by an oxide layer. The FG is interposed between the CG and the MOSFET channel. Because the FG is electrically isolated by its insulating layer, electrons placed on it are trapped. When the FG is charged with electrons, this charge screens the electric field from the CG, thus increasing the threshold voltage (VT) of the cell. This means that the VT of the cell can be changed between the uncharged FG threshold voltage (VT1) and the higher charged FG threshold voltage (VT2) by changing the FG charge.

    To read a value from the cell, an intermediate voltage (VI) between VT1 and VT2 is applied to the CG. If the channel conducts at VI, the FG must be uncharged (if it were charged, there would be no conduction, because VI is less than VT2). If the channel does not conduct at VI, the FG must be charged. The binary value of the cell is sensed by determining whether there is current flowing through the transistor when VI is asserted on the CG. In a multi-level cell device, which stores more than one bit per cell, the amount of current flow is sensed (rather than simply its presence or absence) in order to determine more precisely the level of charge on the FG.
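    The read procedure above can be captured in a toy model. The voltage values below are assumed purely for illustration; real devices differ, but the logic is the same: apply VI between VT1 and VT2 and see whether the channel conducts.

```python
# Toy model (illustrative values, not real device parameters) of reading
# an SLC flash cell: the stored bit is inferred by applying a read
# voltage VI between the two possible threshold voltages VT1 and VT2.

VT1 = 1.0   # threshold voltage with the floating gate uncharged
VT2 = 4.0   # threshold voltage with the floating gate charged
VI  = 2.5   # read voltage applied to the control gate

def read_cell(fg_charged):
    # The channel conducts only if the applied voltage exceeds the
    # cell's current threshold voltage.
    threshold = VT2 if fg_charged else VT1
    conducts = VI > threshold
    # By flash convention the erased (uncharged) state reads as 1.
    return 1 if conducts else 0

print(read_cell(fg_charged=False))  # -> 1 (channel conducts at VI)
print(read_cell(fg_charged=True))   # -> 0 (VI is below the charged threshold)
```

    A multi-level cell generalises this by comparing against several read voltages and sensing the amount of current rather than its mere presence.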

    Floating-gate MOSFETs are so named because an electrically insulating tunnel oxide layer lies between the floating gate and the silicon, so the gate "floats" above the silicon. The oxide keeps the electrons confined to the floating gate. Degradation or wear (and the limited endurance of floating-gate flash memory) occurs because of the extremely high electric field (10 million volts per centimeter) experienced by the oxide. Such high field strengths can break atomic bonds over time in the relatively thin oxide, gradually degrading its electrically insulating properties and allowing electrons to become trapped in, and leak through, the oxide from the floating gate. Since the quantity of electrons on the floating gate is what represents the different charge levels (each assigned to a different combination of bits in MLC flash), this leakage increases the likelihood of data loss; data retention therefore goes down, and the risk of data loss rises, with increasing degradation.[57][58][36][59][60]

    The silicon oxide in a cell degrades with every erase operation. Degradation increases the amount of negative charge in the cell over time, due to electrons trapped in the oxide, and negates some of the control-gate voltage; over time this also makes erasing the cell slower, so to maintain the performance and reliability of the NAND chip the cell must eventually be retired from use. Endurance also decreases with the number of bits in a cell. With more bits per cell, the number of possible states (each represented by a different voltage level) increases, and the cell becomes more sensitive to the voltages used for programming. Voltages may be adjusted to compensate for degradation of the silicon oxide, but as the number of bits increases, the number of possible states also increases, leaving less space between the voltage levels that define each state, so the cell is less tolerant of such adjustments.[61]

    Fowler–Nordheim tunneling

    The process of moving electrons from the control gate and into the floating gate is called Fowler–Nordheim tunneling, and it fundamentally changes the characteristics of the cell by increasing the MOSFET's threshold voltage. This, in turn, changes the drain-source current that flows through the transistor for a given gate voltage, which is ultimately used to encode a binary value. The Fowler-Nordheim tunneling effect is reversible, so electrons can be added to or removed from the floating gate, processes traditionally known as writing and erasing.[62]

    Internal charge pumps

    Despite the need for relatively high programming and erasing voltages, virtually all flash chips today require only a single supply voltage and produce the high voltages that are required using on-chip charge pumps.

    Over half the energy used by a 1.8 V NAND flash chip is lost in the charge pump itself. Since boost converters are inherently more efficient than charge pumps, researchers developing low-power SSDs have proposed returning to the dual Vcc/Vpp supply voltages used on all early flash chips, driving the high Vpp voltage for all flash chips in an SSD with a single shared external boost converter.[63][64][65][66][67][68][69][70]

    In spacecraft and other high-radiation environments, the on-chip charge pump is the first part of the flash chip to fail, although flash memories will continue to work – in read-only mode – at much higher radiation levels.[71]

    NOR flash

    NOR flash memory wiring and structure on silicon

    In NOR flash, each cell has one end connected directly to ground, and the other end connected directly to a bit line. This arrangement is called "NOR flash" because it acts like a NOR gate: when one of the word lines (connected to the cell's CG) is brought high, the corresponding storage transistor acts to pull the output bit line low. NOR flash continues to be the technology of choice for embedded applications requiring a discrete non-volatile memory device.[citation needed] The low read latencies characteristic of NOR devices allow for both direct code execution and data storage in a single memory product.[72]

    Programming

    Programming a NOR memory cell (setting it to logical 0), via hot-electron injection
    Erasing a NOR memory cell (setting it to logical 1), via quantum tunneling

    A single-level NOR flash cell in its default state is logically equivalent to a binary "1" value, because current will flow through the channel under application of an appropriate voltage to the control gate, so that the bitline voltage is pulled down. A NOR flash cell can be programmed, or set to a binary "0" value, by the following procedure:

    • an elevated on-voltage (typically >5 V) is applied to the CG
    • the channel is now turned on, so electrons can flow from the source to the drain (assuming an NMOS transistor)
    • the source-drain current is sufficiently high to cause some high energy electrons to jump through the insulating layer onto the FG, via a process called hot-electron injection.

    Erasing

    To erase a NOR flash cell (resetting it to the "1" state), a large voltage of the opposite polarity is applied between the CG and source terminal, pulling the electrons off the FG through quantum tunneling. Modern NOR flash memory chips are divided into erase segments (often called blocks or sectors). The erase operation can be performed only on a block-wise basis; all the cells in an erase segment must be erased together. Programming of NOR cells, however, generally can be performed one byte or word at a time.

    NAND flash memory wiring and structure on silicon

    NAND flash

    NAND flash also uses floating-gate transistors, but they are connected in a way that resembles a NAND gate: several transistors are connected in series, and the bit line is pulled low only if all the word lines are pulled high (above the transistors' VT). These groups are then connected via some additional transistors to a NOR-style bit line array in the same way that single transistors are linked in NOR flash.

    Compared to NOR flash, replacing single transistors with serial-linked groups adds an extra level of addressing. Whereas NOR flash might address memory by page then word, NAND flash might address it by page, word and bit. Bit-level addressing suits bit-serial applications (such as hard disk emulation), which access only one bit at a time. Execute-in-place applications, on the other hand, require every bit in a word to be accessed simultaneously. This requires word-level addressing. In any case, both bit and word addressing modes are possible with either NOR or NAND flash.

    To read data, first the desired group is selected (in the same way that a single transistor is selected from a NOR array). Next, most of the word lines are pulled up above VT2, while one of them is pulled up to VI. The series group will conduct (and pull the bit line low) if the selected bit has not been programmed.

    Despite the additional transistors, the reduction in ground wires and bit lines allows a denser layout and greater storage capacity per chip. (The ground wires and bit lines are actually much wider than the lines in the diagrams.) In addition, NAND flash is typically permitted to contain a certain number of faults (NOR flash, as is used for a BIOS ROM, is expected to be fault-free). Manufacturers try to maximize the amount of usable storage by shrinking the size of the transistors or cells; however, the industry can avoid this and achieve higher storage densities per die by using 3D NAND, which stacks cells on top of each other.

    NAND Flash cells are read by analysing their response to various voltages.[59]

    Writing and erasing

    NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB storage devices known as USB flash drives, as well as most memory card formats and solid-state drives available today.

    The hierarchical structure of NAND flash starts at a cell level which establishes strings, then pages, blocks, planes and ultimately a die. A string is a series of connected NAND cells in which the source of one cell is connected to the drain of the next one. Depending on the NAND technology, a string typically consists of 32 to 128 NAND cells. Strings are organised into pages which are then organised into blocks in which each string is connected to a separate line called a bitline. All cells with the same position in the string are connected through the control gates by a wordline. A plane contains a certain number of blocks that are connected through the same bitline. A flash die consists of one or more planes, and the peripheral circuitry that is needed to perform all the read, write, and erase operations.
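    The hierarchy above (cells → strings → pages → blocks → planes → die) multiplies out to the die capacity. The figures below are assumed for illustration, not taken from any specific product:

```python
# Back-of-the-envelope sketch of the NAND hierarchy; all figures are
# assumed, not from a specific product. Capacity is the product of the
# sizes at each level.

page_size_bytes  = 16 * 1024   # data bytes per page
pages_per_block  = 256
blocks_per_plane = 1024
planes_per_die   = 2

die_bytes = page_size_bytes * pages_per_block * blocks_per_plane * planes_per_die
print(f"die capacity ~= {die_bytes / 2**30:.0f} GiB")  # -> 8 GiB
```

    Real dies also reserve spare bytes per page for error-correcting codes and metadata, which this sketch omits.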

    The architecture of NAND flash means that data can be read and programmed (written) in pages, typically between 4 KiB and 16 KiB in size, but can only be erased at the level of entire blocks consisting of multiple pages. When a block is erased, all the cells are logically set to 1. Data can only be programmed in one pass to a page in a block that was erased. Any cells that have been set to 0 by programming can only be reset to 1 by erasing the entire block. This means that before new data can be programmed into a page that already contains data, the current contents of the page plus the new data must be copied to a new, erased page. If a suitable erased page is available, the data can be written to it immediately. If no erased page is available, a block must be erased before copying the data to a page in that block. The old page is then marked as invalid and is available for erasing and reuse.[73]
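    The program/erase rule described above can be modelled in a few lines. This is a simplified illustration, not any real controller's API: pages are programmed individually, but a page can only be programmed once between erases, and erasure is block-wide.

```python
# Minimal model (illustrative, not a real controller API) of the NAND
# rule: pages are programmed individually, but only into erased pages,
# and erasure happens a whole block at a time.

PAGES_PER_BLOCK = 4

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased

    def erase(self):
        # Erasure is block-wide: every page returns to the erased state.
        self.pages = [None] * PAGES_PER_BLOCK

    def program(self, page_no, data):
        if self.pages[page_no] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[page_no] = data

blk = Block()
blk.program(0, b"hello")
try:
    blk.program(0, b"world")   # in-place overwrite is not allowed
except ValueError as e:
    print(e)
blk.erase()                    # block-wide erase
blk.program(0, b"world")       # now the page can be programmed again
print(blk.pages[0])            # -> b'world'
```

    A real flash translation layer avoids the erase on the write path by copying still-valid pages to a fresh block and erasing the old block later, exactly as the paragraph above describes.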

    Vertical NAND

    3D NAND continues scaling beyond 2D.

    Vertical NAND (V-NAND) or 3D NAND memory stacks memory cells vertically and uses a charge trap flash architecture. The vertical layers allow larger areal bit densities without requiring smaller individual cells.[74] It is also sold under the trademark BiCS Flash, which is a trademark of Kioxia Corporation (former Toshiba Memory Corporation). 3D NAND was first announced by Toshiba in 2007.[44] V-NAND was first commercially manufactured by Samsung Electronics in 2013.[45][46][75][76]

    Structure

    V-NAND uses a charge trap flash geometry (which was commercially introduced in 2002 by AMD and Fujitsu)[43] that stores charge on an embedded silicon nitride film. Such a film is more robust against point defects and can be made thicker to hold larger numbers of electrons. V-NAND wraps a planar charge trap cell into a cylindrical form.[74] As of 2020, 3D NAND Flash memories by Micron and Intel instead use floating gates, however, Micron 128 layer and above 3D NAND memories use a conventional charge trap structure, due to the dissolution of the partnership between Micron and Intel. Charge trap 3D NAND Flash is thinner than floating gate 3D NAND. In floating gate 3D NAND, the memory cells are completely separated from one another, whereas in charge trap 3D NAND, vertical groups of memory cells share the same silicon nitride material.[77]

    An individual memory cell is made up of one planar polysilicon layer containing a hole filled by multiple concentric vertical cylinders. The hole's polysilicon surface acts as the gate electrode. The outermost silicon dioxide cylinder acts as the gate dielectric, enclosing a silicon nitride cylinder that stores charge, in turn enclosing a silicon dioxide cylinder as the tunnel dielectric that surrounds a central rod of conducting polysilicon which acts as the conducting channel.[74]

    Memory cells in different vertical layers do not interfere with each other, as the charges cannot move vertically through the silicon nitride storage medium, and the electric fields associated with the gates are closely confined within each layer. The vertical collection is electrically identical to the serial-linked groups in which conventional NAND flash memory is configured.[74]

    Construction

    Growth of a group of V-NAND cells begins with an alternating stack of conducting (doped) polysilicon layers and insulating silicon dioxide layers.[74]

    The next step is to form a cylindrical hole through these layers. In practice, a 128 Gbit V-NAND chip with 24 layers of memory cells requires about 2.9 billion such holes. Next, the hole's inner surface receives multiple coatings, first silicon dioxide, then silicon nitride, then a second layer of silicon dioxide. Finally, the hole is filled with conducting (doped) polysilicon.[74]

    Performance

    As of 2013, V-NAND flash architecture allows read and write operations twice as fast as conventional NAND and can last up to 10 times as long, while consuming 50 percent less power. They offer comparable physical bit density using 10-nm lithography but may be able to increase bit density by up to two orders of magnitude, given V-NAND's use of up to several hundred layers.[74] As of 2020, V-NAND chips with 160 layers are under development by Samsung.[78]

    Cost

    Minimum bit cost of 3D NAND from non-vertical sidewall. The top opening widens with more layers, counteracting the increase in bit density.

    The wafer cost of a 3D NAND is comparable with scaled down (32 nm or less) planar NAND flash.[79] With planar NAND scaling stopping at 16 nm, however, 3D NAND can continue the cost-per-bit reduction, starting at about 16 layers. Because the sidewall of the hole etched through the layers is not perfectly vertical, even a slight deviation leads to a minimum bit cost, i.e., a minimum equivalent design rule (or maximum density), for a given number of layers; this minimum-bit-cost layer number decreases for smaller hole diameters.[80]

    Limitations

    Block erasure

    One limitation of flash memory is that it can be erased only a block at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations but does not offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the over-written values. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Essentially, erasure sets all bits to 1, and programming can only clear bits to 0.[81] Some file systems designed for flash devices make use of this rewrite capability, for example Yaffs1, to represent sector metadata. Other flash file systems, such as YAFFS2, never make use of this "rewrite" capability—they do a lot of extra work to meet a "write once rule".
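    The nibble example above can be checked in code: programming only clears bits (1 → 0), so a rewrite without an erase is legal exactly when the new value's 0 bits are a superset of the old value's 0 bits. A minimal sketch:

```python
# Programming can only clear bits (1 -> 0); erasing sets a whole block
# back to all 1s. So a value may be rewritten without an erase only if
# the new value's 0 bits are a superset of the old value's 0 bits.

def can_rewrite(old, new):
    """True if `new` can be programmed over `old` without an erase."""
    return (old & new) == new   # new must not turn any 0 back into 1

value = 0b1111                   # freshly erased nibble
for new in (0b1110, 0b1010, 0b0010, 0b0000):
    assert can_rewrite(value, new)
    value = new
print(can_rewrite(0b0010, 0b0110))  # -> False: would require a 0 -> 1 change
```

    This is the property flash file systems such as Yaffs1 exploit to update sector metadata in place without paying for a block erase.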

    Although data structures in flash memory cannot be updated in completely general ways, the ability to clear individual bits allows entries to be "removed" by marking them as invalid. This technique may need to be modified for multi-level cell devices, where one memory cell holds more than one bit.

    Common flash devices such as USB flash drives and memory cards provide only a block-level interface, or flash translation layer (FTL), which writes to a different cell each time in order to wear-level the device. This prevents incremental writing within a block; however, it helps keep the device from being prematurely worn out by intensive write patterns.

    Data retention

    45nm NOR flash memory example of data retention varying with temperatures

    Data stored in flash cells is steadily lost due to electron detrapping. The rate of loss increases exponentially with absolute temperature. For example, for a 45 nm NOR flash at 1,000 hours, the threshold-voltage (Vt) loss at 25 °C is about half that at 90 °C.[82]

    Memory wear

    Another limitation is that flash memory has a finite number of program/erase cycles (typically written as P/E cycles).[83][84] Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated for 1,000,000 P/E cycles on 17 December 2008.[85] The higher P/E cycle ratings of industrial SSDs reflect their endurance and make them more reliable for industrial use.

    The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND devices), or to all blocks (as in NOR). This effect is mitigated in some chip firmware or file system drivers by counting the writes and dynamically remapping blocks in order to spread write operations between sectors; this technique is called wear leveling. Another approach is to perform write verification and remapping to spare sectors in case of write failure, a technique called bad block management (BBM). For portable consumer devices, these wear out management techniques typically extend the life of the flash memory beyond the life of the device itself, and some data loss may be acceptable in these applications. For high-reliability data storage, however, it is not advisable to use flash memory that would have to go through a large number of programming cycles. This limitation is meaningless for 'read-only' applications such as thin clients and routers, which are programmed only once or at most a few times during their lifetimes.
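    Dynamic wear leveling, as described above, can be sketched in a few lines. This is an assumed, simplified design, not a specific product's algorithm: each incoming write is simply directed to the physical block with the lowest erase count.

```python
# Simplified sketch of dynamic wear leveling (an assumed design, not a
# specific product): each logical write is directed to the physical
# block with the lowest erase count, spreading wear evenly.

class WearLeveler:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks

    def pick_block(self):
        # Choose the least-worn block for the next write.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def write(self):
        blk = self.pick_block()
        self.erase_counts[blk] += 1  # each rewrite costs one erase cycle
        return blk

wl = WearLeveler(4)
for _ in range(8):
    wl.write()
print(wl.erase_counts)  # -> [2, 2, 2, 2]: wear spread evenly
```

    Real firmware additionally tracks which blocks hold static data and occasionally relocates it, so that cold blocks do not escape the rotation.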

    In December 2012, Taiwanese engineers from Macronix revealed their intention to announce at the 2012 IEEE International Electron Devices Meeting that they had figured out how to improve NAND flash storage read/write cycles from 10,000 to 100 million cycles using a "self-healing" process that used a flash chip with "onboard heaters that could anneal small groups of memory cells."[86] The built-in thermal annealing was to replace the usual erase cycle with a local high temperature process that not only erased the stored charge, but also repaired the electron-induced stress in the chip, giving write cycles of at least 100 million.[87] The result was to be a chip that could be erased and rewritten over and over, even when it should theoretically break down. As promising as Macronix's breakthrough might have been for the mobile industry, however, there were no plans for a commercial product featuring this capability to be released any time in the near future.[88]

    Read disturb

    The method used to read NAND flash memory can cause nearby cells in the same memory block to change over time (become programmed). This is known as read disturb. The threshold number of reads is generally in the hundreds of thousands of reads between intervening erase operations. If reading continually from one cell, that cell will not fail but rather one of the surrounding cells on a subsequent read. To avoid the read disturb problem the flash controller will typically count the total number of reads to a block since the last erase. When the count exceeds a target limit, the affected block is copied over to a new block, erased, then released to the block pool. The original block is as good as new after the erase. If the flash controller does not intervene in time, however, a read disturb error will occur with possible data loss if the errors are too numerous to correct with an error-correcting code.[89][90][91]
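    The counting-and-migration policy described above can be sketched as follows; the threshold, class, and function names are invented for illustration:

```python
# Illustrative read-disturb management: count reads to a block since its
# last erase, and migrate the data to a fresh block before the count
# exceeds a limit. Real thresholds are device-specific.
READ_DISTURB_LIMIT = 100_000   # hypothetical threshold

class Block:
    def __init__(self):
        self.reads_since_erase = 0
        self.data = b""

def read_page(block, free_blocks):
    block.reads_since_erase += 1
    if block.reads_since_erase >= READ_DISTURB_LIMIT:
        # Copy data to a fresh block, then erase the old one so it can
        # rejoin the free pool "as good as new".
        fresh = free_blocks.pop()
        fresh.data = block.data
        block.data = b""
        block.reads_since_erase = 0      # erase resets the counter
        free_blocks.append(block)
        return fresh
    return block

block = Block()
block.data = b"payload"
spare = [Block()]
for _ in range(READ_DISTURB_LIMIT):
    block = read_page(block, spare)
print(block.data)  # the data survives, migrated to a fresh block
```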

    X-ray effects

    Most flash ICs come in ball grid array (BGA) packages, and even the ones that do not are often mounted on a PCB next to other BGA packages. After PCB assembly, boards with BGA packages are often X-rayed to check whether the balls are making proper connections to their pads, or whether the BGA needs rework. These X-rays can erase programmed bits in a flash chip (convert programmed "0" bits into erased "1" bits). Erased bits ("1" bits) are not affected by X-rays.[92][93]

    Some manufacturers are now making X-ray proof SD[94] and USB[95] memory devices.

    Low-level access

    The low-level interface to flash memory chips differs from that of other memory types such as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero) and random access via externally accessible address buses.

    NOR memory has an external address bus for reading and programming. For NOR memory, reading and programming are random-access, and unlocking and erasing are block-wise. For NAND memory, reading and programming are page-wise, and unlocking and erasing are block-wise.

    NOR memories

    NOR flash by Intel

    Reading from NOR flash is similar to reading from random-access memory, provided the address and data bus are mapped correctly. Because of this, most microprocessors can use NOR flash memory as execute in place (XIP) memory, meaning that programs stored in NOR flash can be executed directly from the NOR flash without needing to be copied into RAM first. NOR flash may be programmed in a random-access manner similar to reading. Programming changes bits from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64, 128, or 256 KiB.
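    Because programming can only clear bits (one to zero), a program operation behaves like a bitwise AND with the existing cell contents; only an erase restores bits to one. A minimal sketch:

```python
# NOR flash bit semantics: programming can only change 1 -> 0, so a
# program operation is effectively a bitwise AND with the current cell
# contents; only a block erase restores bits to 1.

def program(cell, value):
    # Bits already 0 stay 0 regardless of the new value.
    return cell & value

def erase():
    return 0xFF   # an erased byte reads as all ones

cell = erase()                     # 0b11111111
cell = program(cell, 0b10101010)
cell = program(cell, 0b11001100)   # can clear more bits, never set them
print(bin(cell))                   # 0b10001000: AND of both values
```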

    Bad block management is a relatively new feature in NOR chips. In older NOR devices not supporting bad block management, the software or device driver controlling the memory chip must correct for blocks that wear out, or the device will cease to work reliably.

    The specific commands used to lock, unlock, program, or erase NOR memories differ for each manufacturer. To avoid needing unique driver software for every device made, special Common Flash Memory Interface (CFI) commands allow the device to identify itself and its critical operating parameters.

    Besides its use as random-access ROM, NOR flash can also be used as a storage device, by taking advantage of random-access programming. Some devices offer read-while-write functionality so that code continues to execute even while a program or erase operation is occurring in the background. For sequential data writes, NOR flash chips typically have slow write speeds, compared with NAND flash.

    Typical NOR flash does not need an error correcting code.[96]

    NAND memories

    NAND flash architecture was introduced by Toshiba in 1989.[97] These memories are accessed much like block devices, such as hard disks. Each block consists of a number of pages. The pages are typically 512,[98] 2,048 or 4,096 bytes in size. Associated with each page are a few bytes (typically 1/32 of the data size) that can be used for storage of an error correcting code (ECC) checksum.

    Typical block sizes include:

    • 32 pages of 512+16 bytes each for a block size (effective) of 16 KiB
    • 64 pages of 2,048+64 bytes each for a block size of 128 KiB[99]
    • 64 pages of 4,096+128 bytes each for a block size of 256 KiB[100]
    • 128 pages of 4,096+128 bytes each for a block size of 512 KiB.

    While reading and programming is performed on a page basis, erasure can only be performed on a block basis.[101]
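    The geometry arithmetic behind the block sizes listed above is straightforward; a small sketch (function name invented for illustration):

```python
# NAND block geometry: pages per block x (data + spare) bytes per page.
# The "effective" size counts only the user-visible data area; the spare
# area typically holds ECC and bookkeeping bytes.

def block_geometry(pages, data_bytes, spare_bytes):
    effective = pages * data_bytes            # user-visible capacity
    raw = pages * (data_bytes + spare_bytes)  # including spare area
    return effective, raw

eff, raw = block_geometry(64, 2048, 64)
print(eff // 1024)   # 128 (KiB), matching the 64 x (2,048+64) row above
```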

    NAND devices also require bad block management by the device driver software or by a separate controller chip. Some SD cards, for example, include controller circuitry to perform bad block management and wear leveling. When a logical block is accessed by high-level software, it is mapped to a physical block by the device driver or controller. A number of blocks on the flash chip may be set aside for storing mapping tables to deal with bad blocks, or the system may simply check each block at power-up to create a bad block map in RAM. The overall memory capacity gradually shrinks as more blocks are marked as bad.

    NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation. A typical ECC will correct a one-bit error in each 2048 bits (256 bytes) using 22 bits of ECC, or a one-bit error in each 4096 bits (512 bytes) using 24 bits of ECC.[102] If the ECC cannot correct the error during read, it may still detect the error. When doing erase or program operations, the device can detect blocks that fail to program or erase and mark them bad. The data is then written to a different, good block, and the bad block map is updated.

    Hamming codes are the most commonly used ECC for SLC NAND flash. Reed-Solomon codes and BCH codes (Bose-Chaudhuri-Hocquenghem codes) are commonly used ECC for MLC NAND flash. Some MLC NAND flash chips internally generate the appropriate BCH error correction codes.[96]
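    The 22- and 24-bit figures above are consistent with one classic NAND Hamming construction, which stores a parity bit and its complement for each address bit of the data, giving 2 × log2(data bits) parity bits in total. This is a sketch of that arithmetic, not a full ECC implementation:

```python
# Parity-bit count for the classic SLC NAND Hamming scheme: one parity
# bit plus its complement per address bit of the protected data.
import math

def nand_hamming_ecc_bits(data_bytes):
    data_bits = data_bytes * 8
    return 2 * int(math.log2(data_bits))

print(nand_hamming_ecc_bits(256))  # 22 bits for 2048 data bits
print(nand_hamming_ecc_bits(512))  # 24 bits for 4096 data bits
```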

    Most NAND devices are shipped from the factory with some bad blocks. These are typically marked according to a specified bad block marking strategy. By allowing some bad blocks, manufacturers achieve far higher yields than would be possible if all blocks had to be verified to be good. This significantly reduces NAND flash costs and only slightly decreases the storage capacity of the parts.

    When executing software from NAND memories, virtual memory strategies are often used: memory contents must first be paged or copied into memory-mapped RAM and executed there (leading to the common combination of NAND + RAM). A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a non-volatile data storage area.

    NAND sacrifices the random-access and execute-in-place advantages of NOR. NAND is best suited to systems requiring high capacity data storage. It offers higher densities, larger capacities, and lower cost. It has faster erases, sequential writes, and sequential reads.

    Standardization

    A group called the Open NAND Flash Interface Working Group (ONFI) has developed a standardized low-level interface for NAND flash chips. This allows interoperability between conforming NAND devices from different vendors. The ONFI specification version 1.0[103] was released on 28 December 2006. It specifies:

    • A standard physical interface (pinout) for NAND flash in TSOP-48, WSOP-48, LGA-52, and BGA-63 packages
    • A standard command set for reading, writing, and erasing NAND flash chips
    • A mechanism for self-identification (comparable to the serial presence detection feature of SDRAM memory modules)

    The ONFI group is supported by major NAND flash manufacturers, including Hynix, Intel, Micron Technology, and Numonyx, as well as by major manufacturers of devices incorporating NAND flash chips.[104]

    Two major flash device manufacturers, Toshiba and Samsung, have chosen to use an interface of their own design known as Toggle Mode (and now Toggle). This interface is not pin-to-pin compatible with the ONFI specification. The result is that a product designed for one vendor's devices may not be able to use another vendor's devices.[105]

    A group of vendors, including Intel, Dell, and Microsoft, formed a Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group.[106] The goal of the group is to provide standard software and hardware programming interfaces for nonvolatile memory subsystems, including the "flash cache" device connected to the PCI Express bus.

    Distinction between NOR and NAND flash

    NOR and NAND flash differ in two important ways:

    • The connections of the individual memory cells are different.[107]
    • The interface provided for reading and writing the memory is different; NOR allows random access as it can be either byte-addressable or word-addressable, with words being for example 32 bits long,[108][109][110] while NAND allows only page access.[111]

    NOR and NAND flash get their names from the structure of the interconnections between memory cells. In NOR flash, cells are connected in parallel to the bit lines, allowing cells to be read and programmed individually.[112] The parallel connection of cells resembles the parallel connection of transistors in a CMOS NOR gate.[113] In NAND flash, cells are connected in series,[112] resembling a CMOS NAND gate. The series connections consume less space than parallel ones, reducing the cost of NAND flash.[112] By itself, however, the series connection does not prevent NAND cells from being read and programmed individually.

    Each NOR flash cell is larger than a NAND flash cell (10 F² vs 4 F², even on exactly the same semiconductor fabrication process, with identically sized transistors and contacts) because NOR flash cells require a separate metal contact for each cell.[114]

    Because of the series connection and removal of wordline contacts, a large grid of NAND flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells[115] (assuming the same CMOS process resolution, for example, 130 nm, 90 nm, or 65 nm). NAND flash's designers realized that the area of a NAND chip, and thus the cost, could be further reduced by removing the external address and data bus circuitry. Instead, external devices could communicate with NAND flash via sequential-accessed command and data registers, which would internally retrieve and output the necessary data. This design choice made random-access of NAND flash memory impossible, but the goal of NAND flash was to replace mechanical hard disks, not to replace ROMs.

    Attribute | NAND | NOR
    Main application | File storage | Code execution
    Storage capacity | High | Low
    Cost per bit | Low |
    Active power | Low |
    Standby power | | Low
    Write speed | Fast |
    Read speed | | Fast
    Execute in place (XIP) | No | Yes
    Reliability | | High

    Write endurance

    The write endurance of SLC floating-gate NOR flash is typically equal to or greater than that of NAND flash, while MLC NOR and NAND flash have similar endurance capabilities. Examples of endurance cycle ratings listed in datasheets for NAND and NOR flash, as well as in storage devices using flash memory, are provided.[116]

    Type of flash memory | Endurance rating (erases per block) | Example(s) of flash memory or storage device
    SLC NAND | 100,000 | Samsung OneNAND KFW4G16Q2M, Toshiba SLC NAND flash chips,[117][118][119][120][121] Transcend SD500, Fujitsu S26361-F3298
    MLC NAND | 5,000–10,000 for medium-capacity; 1,000–3,000 for high-capacity[122] | Samsung K9G8G08U0M (example for medium-capacity applications), Memblaze PBlaze4,[123] ADATA SU900, Mushkin Reactor
    TLC NAND | 1,000 | Samsung SSD 840
    QLC NAND | unknown | SanDisk X4 NAND flash SD cards[124][125][126][127]
    3D SLC NAND | 100,000 | Samsung Z-NAND[128]
    3D MLC NAND | 6,000–40,000 | Samsung SSD 850 PRO, Samsung SSD 845DC PRO,[129][130] Samsung 860 PRO
    3D TLC NAND | 1,000–3,000 | Samsung SSD 850 EVO, Samsung SSD 845DC EVO, Crucial MX300,[131][132][133] Memblaze PBlaze5 900, PBlaze5 700, PBlaze5 910/916, PBlaze5 510/516,[134][135][136][137] ADATA SX8200 PRO (also sold under the "XPG Gammix" branding, model S11 PRO)
    3D QLC NAND | 100–1,000 | Samsung SSD 860 QVO SATA, Intel SSD 660p, Samsung SSD 980 QVO NVMe, Micron 5210 ION, Samsung SSD BM991 NVMe[138][139][140][141][142][143][144][145]
    3D PLC NAND | unknown | In development by SK Hynix (formerly Intel)[146] and Kioxia (formerly Toshiba Memory)[122]
    SLC (floating-gate) NOR | 100,000–1,000,000 | Numonyx M58BW (rated 100,000 erases per block); Spansion S29CD016J (rated 1,000,000 erases per block)
    MLC (floating-gate) NOR | 100,000 | Numonyx J3 flash

    However, by applying certain algorithms and design paradigms such as wear leveling and memory over-provisioning, the endurance of a storage system can be tuned to serve specific requirements.[147]

    In order to compute the longevity of NAND flash, one must account for the size of the memory chip, the type of memory (e.g. SLC/MLC/TLC), and the use pattern. Industrial NAND devices are in demand due to their capacity, longer endurance, and reliability in sensitive environments.

    3D NAND performance may degrade as layers are added.[128]

    As the number of bits per cell increases, the performance of NAND flash may degrade: random read times rise to about 100 μs for TLC NAND, four times the time required by SLC NAND and twice that of MLC NAND.[61]

    Flash file systems

    Because of the particular characteristics of flash memory, it is best used with either a controller to perform wear leveling and error correction or specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is the following: when the flash store is to be updated, the file system will write a new copy of the changed data to a fresh block, remap the file pointers, then erase the old block later when it has time.
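    That copy-on-write update cycle can be sketched in a few lines; this is an illustrative model with invented names (`FlashStore`, `update`), not any particular flash file system:

```python
# Minimal sketch of the copy-on-write idea behind flash file systems:
# an update writes a new copy to a fresh block and remaps the pointer;
# the stale block is queued and erased later, "when there's time".

class FlashStore:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks
        self.free = list(range(num_blocks))
        self.pointers = {}        # file name -> block index
        self.pending_erase = []   # stale blocks awaiting erasure

    def update(self, name, data):
        fresh = self.free.pop()
        self.blocks[fresh] = data
        old = self.pointers.get(name)
        if old is not None:
            self.pending_erase.append(old)   # erase is deferred
        self.pointers[name] = fresh

    def garbage_collect(self):
        while self.pending_erase:
            b = self.pending_erase.pop()
            self.blocks[b] = None
            self.free.append(b)

fs = FlashStore(4)
fs.update("config", b"v1")
fs.update("config", b"v2")       # old copy is not erased in place
print(fs.blocks[fs.pointers["config"]])  # b'v2'
```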

    In practice, flash file systems are used only for memory technology devices (MTDs), which are embedded flash memories that do not have a controller. Removable flash memory cards, SSDs, eMMC/eUFS chips and USB flash drives have built-in controllers to perform wear leveling and error correction so use of a specific flash file system may not add benefit.

    Capacity

    Multiple chips are often arrayed or die-stacked to achieve higher capacities[148] for use in consumer electronic devices such as multimedia players or GPS units. The capacity scaling (increase) of flash chips used to follow Moore's law, because they are manufactured with many of the same integrated-circuit techniques and equipment. Since the introduction of 3D NAND, scaling is no longer necessarily tied to Moore's law, since ever-smaller transistors (cells) are no longer used.

    Consumer flash storage devices typically are advertised with usable sizes expressed as a small integer power of two (2, 4, 8, etc.) and a conventional designation of megabytes (MB) or gigabytes (GB); e.g., 512 MB or 8 GB. This includes SSDs marketed as hard drive replacements, which, like traditional hard drives, use decimal prefixes.[149] Thus, an SSD marked as "64 GB" is at least 64 × 1000³ bytes. Most users will have slightly less capacity than this available for their files, due to the space taken by file system metadata and because some operating systems report capacity using binary prefixes (GiB), which makes the same number of bytes appear as a smaller figure.
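    The arithmetic behind that decimal-versus-binary difference is simple to illustrate:

```python
# A "64 GB" drive holds 64 x 1000^3 bytes; an operating system that
# reports capacity in GiB (1024^3 bytes) shows a smaller number for the
# very same drive.
ADVERTISED_GB = 64
capacity_bytes = ADVERTISED_GB * 1000**3
reported_gib = capacity_bytes / 1024**3
print(round(reported_gib, 1))  # 59.6
```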

    The flash memory chips inside them are sized in strict binary multiples, but the actual total capacity of the chips is not usable at the drive interface. It is considerably larger than the advertised capacity in order to allow for distribution of writes (wear leveling), for sparing, for error correction codes, and for other metadata needed by the device's internal firmware.

    In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 GB of data using multi-level cell (MLC) technology, capable of storing two bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world's first 2 GB chip.[150]

    In March 2006, Samsung announced flash hard drives with a capacity of 4 GB, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8 GB chip produced using a 40 nm manufacturing process.[151] In January 2008, SanDisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards.[152][153]

    More recent flash drives (as of 2012) have much greater capacities, holding 64, 128, and 256 GB.[154]

    A joint development at Intel and Micron will allow the production of 32-layer 3.5 terabyte (TB) NAND flash sticks and 10 TB standard-sized SSDs. The device includes 5 packages of 16 × 48 GB TLC dies, using a floating-gate cell design.[155]

    Flash chips continue to be manufactured with capacities under or around 1 MB (e.g. for BIOS-ROMs and embedded applications).

    In July 2016, Samsung announced the 4 TB Samsung 850 EVO, which utilizes their 256 Gbit 48-layer TLC 3D V-NAND.[156] In August 2016, Samsung announced a 32 TB 2.5-inch SAS SSD based on their 512 Gbit 64-layer TLC 3D V-NAND. Further, Samsung expects to unveil SSDs with up to 100 TB of storage by 2020.[157]

    Transfer rates

    Flash memory devices are typically much faster at reading than writing.[158] Performance also depends on the quality of the storage controller, which becomes more critical when devices are partially full.[158] Even when the only change to manufacturing is a die-shrink, the absence of an appropriate controller can result in degraded speeds.[159]

    Applications

    Serial flash

    Serial Flash: Silicon Storage Tech SST25VF080B

    Serial flash is a small, low-power flash memory that provides only serial access to the data; rather than addressing individual bytes, the user reads or writes large contiguous groups of bytes in the address space serially. Serial Peripheral Interface Bus (SPI) is a typical protocol for accessing the device. When incorporated into an embedded system, serial flash requires fewer wires on the PCB than parallel flash memories, since it transmits and receives data one bit at a time. This may permit a reduction in board space, power consumption, and total system cost.
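    A typical SPI NOR read transaction sends a single command byte followed by a 24-bit address, after which the device streams data bytes out. The sketch below builds such a frame using the common JEDEC-style 0x03 READ opcode; the function name is invented, and any specific part's datasheet governs its actual command set:

```python
# Sketch of a typical SPI NOR flash read frame: opcode 0x03 (the common
# "READ" command) followed by a 24-bit address, most significant byte
# first; the device then clocks out the requested data bytes.

def build_read_command(address, length):
    frame = bytes([
        0x03,                    # READ opcode
        (address >> 16) & 0xFF,  # address bytes, MSB first
        (address >> 8) & 0xFF,
        address & 0xFF,
    ])
    return frame, length         # then clock out `length` data bytes

frame, n = build_read_command(0x012345, 16)
print(frame.hex())  # 03012345
```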

    There are several reasons why a serial device, with fewer external pins than a parallel device, can significantly reduce overall cost:

    • Many ASICs are pad-limited, meaning that the size of the die is constrained by the number of wire bond pads, rather than the complexity and number of gates used for the device logic. Eliminating bond pads thus permits a more compact integrated circuit, on a smaller die; this increases the number of dies that may be fabricated on a wafer, and thus reduces the cost per die.
    • Reducing the number of external pins also reduces assembly and packaging costs. A serial device may be packaged in a smaller and simpler package than a parallel device.
    • Smaller and lower pin-count packages occupy less PCB area.
    • Lower pin-count devices simplify PCB routing.

    There are two major SPI flash types. The first type is characterized by small pages and one or more internal SRAM page buffers allowing a complete page to be read to the buffer, partially modified, and then written back (for example, the Atmel AT45 DataFlash or the Micron Technology Page Erase NOR Flash). The second type has larger sectors where the smallest sectors typically found in this type of SPI flash are 4 kB, but they can be as large as 64 kB. Since this type of SPI flash lacks an internal SRAM buffer, the complete page must be read out and modified before being written back, making it slow to manage. However, the second type is cheaper than the first and is therefore a good choice when the application is code shadowing.
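    For the second (buffer-less) type, modifying even a few bytes requires the read/erase/write-back cycle described above. A sketch with a simulated 4 KiB sector (all names and the in-memory store are invented for illustration):

```python
# Read-modify-write on a buffer-less SPI flash sector: read the whole
# sector out, patch it in RAM, erase the sector (all bytes -> 0xFF),
# then write the modified copy back.

SECTOR = 4096

def modify_bytes(read_sector, erase_sector, write_sector, offset, new):
    data = bytearray(read_sector())       # 1. read the full sector
    data[offset:offset + len(new)] = new  # 2. patch it in RAM
    erase_sector()                        # 3. erase the sector
    write_sector(bytes(data))             # 4. write the patched copy back

# Simulated sector standing in for the actual device:
store = {"sector": bytes(SECTOR)}
modify_bytes(lambda: store["sector"],
             lambda: store.update(sector=b"\xff" * SECTOR),
             lambda d: store.update(sector=d),
             offset=10, new=b"hi")
print(store["sector"][10:12])  # b'hi'
```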

    The two types are not easily exchangeable, since they do not have the same pinout, and the command sets are incompatible.

    Most FPGAs are based on SRAM configuration cells and require an external configuration device, often a serial flash chip, to reload the configuration bitstream every power cycle.[160]

    Firmware storage

    With the increasing speed of modern CPUs, parallel flash devices are often much slower than the memory bus of the computer they are connected to. By comparison, modern SRAM offers access times below 10 ns, while DDR2 SDRAM offers access times below 20 ns. Because of this, it is often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash into RAM before execution, so that the CPU may access it at full speed. Device firmware may be stored in a serial flash chip, and then copied into SDRAM or SRAM when the device is powered up.[161] Using an external serial flash device rather than on-chip flash removes the need for significant process compromise (a manufacturing process that is good for high-speed logic is generally not good for flash, and vice versa). Once it is decided to read the firmware in as one big block, it is common to add compression to allow a smaller flash chip to be used. Since 2005, many devices have used serial NOR flash in place of parallel NOR flash for firmware storage. Typical applications for serial flash include storing firmware for hard drives, Ethernet network interface adapters, DSL modems, etc.

    Flash memory as a replacement for hard drives

    An Intel mSATA SSD

    One more recent application for flash memory is as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so a solid-state drive (SSD) is attractive when considering speed, noise, power consumption, and reliability. Flash drives are gaining traction as secondary storage in mobile devices; they are also used as substitutes for hard drives in high-performance desktop computers and in some servers with RAID and SAN architectures.

    There remain some aspects of flash-based SSDs that make them unattractive. The cost per gigabyte of flash memory remains significantly higher than that of hard disks.[162] Also, flash memory has a finite number of P/E (program/erase) cycles, but this seems to be under control for now, since warranties on flash-based SSDs are approaching those of current hard drives.[163] In addition, deleted files on SSDs can remain for an indefinite period of time before being overwritten by fresh data; erasure or shred techniques or software that work well on magnetic hard disk drives have no effect on SSDs, compromising security and forensic examination. However, due to the so-called TRIM command employed by most solid-state drives, which marks the logical block addresses occupied by the deleted file as unused to enable garbage collection, data recovery software is not able to restore files deleted from such drives.

    For relational databases or other systems that require ACID transactions, even a modest amount of flash storage can offer vast speedups over arrays of disk drives.[164][165]

    In May 2006, Samsung Electronics announced two flash-memory-based PCs, the Q1-SSD and Q30-SSD, which were expected to become available in June 2006. Both used 32 GB SSDs and were, at least initially, available only in South Korea.[166] The Q1-SSD and Q30-SSD launch was delayed, and they finally shipped in late August 2006.[167]

    The first flash-memory-based PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006; it began shipping in Japan on 3 July 2006 with a 16 GB flash memory drive.[168] In late September 2006, Sony upgraded the flash memory in the Vaio UX90 to 32 GB.[169]

    A solid-state drive was offered as an option with the first MacBook Air introduced in 2008, and from 2010 onwards, all models were shipped with an SSD. Starting in late 2011, as part of Intel's Ultrabook initiative, an increasing number of ultra-thin laptops are being shipped with SSDs standard.

    There are also hybrid techniques such as hybrid drive and ReadyBoost that attempt to combine the advantages of both technologies, using flash as a high-speed non-volatile cache for files on the disk that are often referenced, but rarely modified, such as application and operating system executable files.

    Flash memory as RAM

    As of 2012, there have been attempts to use flash memory as the main computer memory, in place of DRAM.[170]

    Archival or long-term storage

    Floating-gate transistors in the flash storage device hold charge which represents data. This charge gradually leaks over time, leading to an accumulation of logical errors, also known as "bit rot" or "bit fading".[171]

    Data retention

    It is unclear how long data on flash memory will persist under archival conditions (i.e., benign temperature and humidity with infrequent access with or without prophylactic rewrite). Datasheets of Atmel's flash-based "ATmega" microcontrollers typically promise retention times of 20 years at 85 °C (185 °F) and 100 years at 25 °C (77 °F).[172]

    The retention span varies among types and models of flash storage. When supplied with power and idle, the charge of the transistors holding the data is routinely refreshed by the firmware of the flash storage.[171] The ability to retain data varies among flash storage devices due to differences in firmware, data redundancy, and error correction algorithms.[173]

    An article from CMU in 2015 states that "today's flash devices, which do not require flash refresh, have a typical retention age of 1 year at room temperature", and that retention time decreases exponentially with increasing temperature. The phenomenon can be modeled by the Arrhenius equation.[174][175]
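    The Arrhenius relation implies that retention time scales as exp(Ea / kT), so a modest rise in temperature shortens retention dramatically. A sketch with an illustrative (not measured) activation energy:

```python
# Arrhenius-style retention model: t(T) proportional to exp(Ea / (k*T)).
# The activation energy below is a hypothetical illustrative value, not
# a measured device parameter.
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K
ACTIVATION_EV = 1.1         # hypothetical activation energy, eV

def retention_ratio(t_celsius_a, t_celsius_b):
    """How many times longer retention lasts at temperature A than B."""
    ta, tb = t_celsius_a + 273.15, t_celsius_b + 273.15
    return math.exp(ACTIVATION_EV / K_BOLTZMANN_EV * (1 / ta - 1 / tb))

# A ~30 degree C rise shortens retention by well over an order of magnitude.
print(retention_ratio(25, 55) > 10)  # True
```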

    FPGA configuration

    Some FPGAs are based on flash configuration cells that are used directly as (programmable) switches to connect internal elements together, using the same kind of floating-gate transistor as the flash data storage cells in data storage devices.[160]

    Industry

    One source states that, in 2008, the flash memory industry included about US$9.1 billion in production and sales. Other sources put the flash memory market at a size of more than US$20 billion in 2006, accounting for more than eight percent of the overall semiconductor market and more than 34 percent of the total semiconductor memory market.[176] In 2012, the market was estimated at $26.8 billion.[177] It can take up to 10 weeks to produce a flash memory chip.[178]

    Manufacturers

    The following were the largest NAND flash memory manufacturers, as of the first quarter of 2019.[179]

    1. Samsung Electronics – 34.9%
    2. Kioxia – 18.1%
    3. Western Digital Corporation – 14%
    4. Micron Technology – 13.5%
    5. SK Hynix – 10.3%
    6. Intel – 8.7% Note: SK Hynix acquired Intel's NAND business at the end of 2021[180]

    Samsung remains the largest NAND flash memory manufacturer as of first quarter 2022.[181]

    Shipments

    Flash memory shipments (est. manufactured units)
    Year(s) | Discrete flash memory chips | Flash memory data capacity (gigabytes) | Floating-gate MOSFET memory cells (billions)
    1992 | 26,000,000[182] | 3[182] | 24[a]
    1993 | 73,000,000[182] | 17[182] | 139[a]
    1994 | 112,000,000[182] | 25[182] | 203[a]
    1995 | 235,000,000[182] | 38[182] | 300[a]
    1996 | 359,000,000[182] | 140[182] | 1,121[a]
    1997 | 477,200,000+[183] | 317+[183] | 2,533+[a]
    1998 | 762,195,122[184] | 455+[183] | 3,642+[a]
    1999 | 12,800,000,000[185] | 635+[183] | 5,082+[a]
    2000–2004 | 134,217,728,000 (NAND)[186] | 1,073,741,824,000 (NAND)[186] |
    2005–2007 | ? | |
    2008 | 1,226,215,645 (mobile NAND)[187] | |
    2009 | 1,226,215,645+ (mobile NAND) | |
    2010 | 7,280,000,000+[b] | |
    2011 | 8,700,000,000[189] | |
    2012 | 5,151,515,152 (serial)[190] | |
    2013 | ? | |
    2014 | ? | 59,000,000,000[191] | 118,000,000,000+[a]
    2015 | 7,692,307,692 (NAND)[192] | 85,000,000,000[193] | 170,000,000,000+[a]
    2016 | ? | 100,000,000,000[194] | 200,000,000,000+[a]
    2017 | ? | 148,200,000,000[c] | 296,400,000,000+[a]
    2018 | ? | 231,640,000,000[d] | 463,280,000,000+[a]
    2019 | ? | ? | ?
    2020 | ? | ? | ?
    1992–2020 | 45,358,454,134+ memory chips | 758,057,729,630+ gigabytes | 2,321,421,837,044+

    In addition to individual flash memory chips, flash memory is also embedded in microcontroller (MCU) chips and system-on-chip (SoC) devices.[198] Flash memory is embedded in ARM chips,[198] which have sold 150 billion units worldwide as of 2019,[199] and in programmable system-on-chip (PSoC) devices, which have sold 1.1 billion units as of 2012.[200] This adds up to at least 151.1 billion MCU and SoC chips with embedded flash memory, in addition to the 45.4 billion known individual flash chip sales as of 2015, totalling at least 196.5 billion chips containing flash memory.

    Flash scalability

    Due to its relatively simple structure and high demand for higher capacity, NAND flash memory is the most aggressively scaled technology among electronic devices. The heavy competition among the top few manufacturers only adds to the aggressiveness in shrinking the floating-gate MOSFET design rule or process technology node.[90] While the expected shrink timeline is a factor of two every three years per the original version of Moore's law, this was accelerated in the case of NAND flash to a factor of two every two years.

    ITRS or company | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018
    ITRS Flash Roadmap 2011[201] | 32 nm | 22 nm | 20 nm | 18 nm | 16 nm | | | |
    Updated ITRS Flash Roadmap[202] | | | | | | 17 nm | 15 nm | 14 nm |
    Samsung[201][202][203] (Samsung 3D NAND)[202] | 35–20 nm[31] | 27 nm | 21 nm (MLC, TLC) | 19–16 nm; 19–10 nm (MLC, TLC)[204] | 19–10 nm V-NAND (24L) | 16–10 nm V-NAND (32L) | 16–10 nm | 12–10 nm | 12–10 nm
    Micron, Intel[201][202][203] | 34–25 nm | 25 nm | 20 nm (MLC + HKMG) | 20 nm (TLC) | 16 nm | 16 nm 3D NAND | 16 nm 3D NAND | 12 nm 3D NAND | 12 nm 3D NAND
    Toshiba, WD (SanDisk)[201][202][203] | 43–32 nm; 24 nm (Toshiba)[205] | 24 nm | 19 nm (MLC, TLC) | | 15 nm | 15 nm 3D NAND | 15 nm 3D NAND | 12 nm 3D NAND | 12 nm 3D NAND
    SK Hynix[201][202][203] | 46–35 nm | 26 nm | 20 nm (MLC) | 16 nm | 16 nm | 16 nm | 12 nm | 12 nm |

    As the MOSFET feature size of flash memory cells reaches the 15–16 nm minimum limit, further flash density increases will be driven by TLC (3 bits/cell) combined with vertical stacking of NAND memory planes. The decrease in endurance and increase in uncorrectable bit error rates that accompany feature-size shrinking can be compensated for by improved error-correction mechanisms.[206] Even with these advances, it may be impossible to economically scale flash to smaller and smaller dimensions as the number of electrons each cell can hold shrinks. Many promising new technologies (such as FeRAM, MRAM, PMC, PCM, ReRAM, and others) are under investigation and development as possible more scalable replacements for flash.[207]

    Timeline

    Date of introduction | Chip name | Capacity (Mb, Gb, Tb) | Flash type | Cell type | Layers or stacks of layers | Manufacturer(s) | Process | Area | Ref
    1984 | ? | ? | NOR | SLC | 1 | Toshiba | ? | ? | [20]
    1985 | ? | 256 kb | NOR | SLC | 1 | Toshiba | 2,000 nm | ? | [28]
    1987 | ? | ? | NAND | SLC | 1 | Toshiba | ? | ? | [1]
    1989 | ? | 1 Mb | NOR | SLC | 1 | Seeq, Intel | ? | ? | [28]
    1989 | ? | 4 Mb | NAND | SLC | 1 | Toshiba | 1,000 nm | ? |
    1991 | ? | 16 Mb | NOR | SLC | 1 | Mitsubishi | 600 nm | ? | [28]
    1993 | DD28F032SA | 32 Mb | NOR | SLC | 1 | Intel | ? | 280 mm² | [208][209]
    1994 | ? | 64 Mb | NOR | SLC | 1 | NEC | 400 nm | ? | [28]
    1995 | ? | 16 Mb | DINOR | SLC | 1 | Mitsubishi, Hitachi | ? | ? | [28][210]
    1995 | ? | 16 Mb | NAND | SLC | 1 | Toshiba | ? | ? | [211]
    1995 | ? | 32 Mb | NAND | SLC | 1 | Hitachi, Samsung, Toshiba | ? | ? | [28]
    1995 | ? | 34 Mb | Serial | SLC | 1 | SanDisk | ? | ? |
    1996 | ? | 64 Mb | NAND | SLC | 1 | Hitachi, Mitsubishi | 400 nm | ? | [28]
    1996 | ? | 64 Mb | NAND | QLC | 1 | NEC | ? | ? |
    1996 | ? | 128 Mb | NAND | SLC | 1 | Samsung, Hitachi | ? | ? |
    1997 | ? | 32 Mb | NOR | SLC | 1 | Intel, Sharp | 400 nm | ? | [212]
    1997 | ? | 32 Mb | NAND | SLC | 1 | AMD, Fujitsu | 350 nm | ? |
    1999 | ? | 256 Mb | NAND | SLC | 1 | Toshiba | 250 nm | ? | [28]
    1999 | ? | 256 Mb | NAND | MLC | 1 | Hitachi | ? | ? |
    2000 | ? | 32 Mb | NOR | SLC | 1 | Toshiba | 250 nm | ? | [28]
    2000 | ? | 64 Mb | NOR | QLC | 1 | STMicroelectronics | 180 nm | ? |
    2000 | ? | 512 Mb | NAND | SLC | 1 | Toshiba | ? | ? | [213]
    2001 | ? | 512 Mb | NAND | MLC | 1 | Hitachi | ? | ? | [28]
    2001 | ? | 1 Gb | NAND | MLC | 1 | Samsung | ? | ? |
    2001 | ? | 1 Gb | NAND | MLC | 1 | Toshiba, SanDisk | 160 nm | ? | [214]
    2002 | ? | 512 Mb | NROM | MLC | 1 | Saifun | 170 nm | ? | [28]
    2002 | ? | 2 Gb | NAND | SLC | 1 | Samsung, Toshiba | ? | ? | [215][216]
    2003 | ? | 128 Mb | NOR | MLC | 1 | Intel | 130 nm | ? | [28]
    2003 | ? | 1 Gb | NAND | MLC | 1 | Hitachi | ? | ? |
    2004 | ? | 8 Gb | NAND | SLC | 1 | Samsung | 60 nm | ? | [215]
    2005 | ? | 16 Gb | NAND | SLC | 1 | Samsung | 50 nm | ? | [31]
    2006 | ? | 32 Gb | NAND | SLC | 1 | Samsung | 40 nm | ? |
    Apr 2007 | THGAM | 128 Gb | Stacked NAND | SLC | ? | Toshiba | 56 nm | 252 mm² | [47]
    Sep 2007 | ? | 128 Gb | Stacked NAND | SLC | ? | Hynix | ? | ? | [48]
    2008 | THGBM | 256 Gb | Stacked NAND | SLC | ? | Toshiba | 43 nm | 353 mm² | [49]
    2009 | ? | 32 Gb | NAND | TLC | 1 | Toshiba | 32 nm | 113 mm² | [29]
    2009 | ? | 64 Gb | NAND | QLC | 1 | Toshiba, SanDisk | 43 nm | ? | [29][30]
    2010 | ? | 64 Gb | NAND | SLC | 1 | Hynix | 20 nm | ? | [217]
    2010 | ? | 64 Gb | NAND | TLC | 1 | Samsung | 20 nm | ? | [31]
    2010 | THGBM2 | 1 Tb | Stacked NAND | QLC | ? | Toshiba | 32 nm | 374 mm² | [50]
    2011 | KLMCG8GE4A | 512 Gb | Stacked NAND | MLC | ? | Samsung | ? | 192 mm² | [218]
    2013 | ? | ? | NAND | SLC | 1 | SK Hynix | 16 nm | ? | [217]
    2013 | ? | 128 Gb | V-NAND | TLC | ? | Samsung | 10 nm | ? |
    2015 | ? | 256 Gb | V-NAND | TLC | ? | Samsung | ? | ? | [204]
    2017 | eUFS 2.1 | 512 Gb | V-NAND | TLC | 8 of 64 | Samsung | ? | ? | [53]
    2017 | ? | 768 Gb | V-NAND | QLC | ? | Toshiba | ? | ? | [219]
    2017 | KLUFG8R1EM | 4 Tb | Stacked V-NAND | TLC | ? | Samsung | ? | 150 mm² | [53]
    2018 | ? | 1 Tb | V-NAND | QLC | ? | Samsung | ? | ? | [220]
    2018 | ? | 1.33 Tb | V-NAND | QLC | ? | Toshiba | ? | 158 mm² | [221][222]
    2019 | ? | 512 Gb | V-NAND | QLC | ? | Samsung | ? | ? | [54][55]
    2019 | ? | 1 Tb | V-NAND | TLC | ? | SK Hynix | ? | ? | [223]
    2019 | eUFS 2.1 | 1 Tb | Stacked V-NAND[224] | QLC | 16 of 64 | Samsung | ? | 150 mm² | [54][55][225]

    See also

    Notes


  • Single-level cell (1-bit per cell) up until 2009. Multi-level cell (up to 4-bit or half-byte per cell) commercialised in 2009.[29][30]

  • Flash memory chip shipments in 2010:
    • NOR – 3.64 billion[188]
    • NAND – 3.64 billion+ (est.)

  • Flash memory data capacity shipments in 2017:

  • Flash memory data capacity shipments in 2018 (est.):
      • NAND NVM – 140 exabytes[195]
      • SSD – 91.64 exabytes[197]

    References


  • "1987: Toshiba Launches NAND Flash". eWeek. 11 April 2012. Retrieved 20 June 2019.

  • "A Flash Storage Technical and Economic Primer". FlashStorage.com. 30 March 2015. Archived from the original on 20 July 2015.

  • "What is Flash Memory". Bitwarsoft.com. 22 July 2020.

  • "HDD vs SSD: What Does the Future for Storage Hold?". backblaze.com. 6 March 2018. Archived from the original on 22 December 2022.

  • "TN-04-42: Memory Module Serial Presence-Detect Introduction" (PDF). Micron. Retrieved 1 June 2022.

  • "What is serial presence detect (SPD)? - Definition from WhatIs.com". WhatIs.com.

  • Shilov, Anton. "Samsung Starts Production of 1 TB eUFS 2.1 Storage for Smartphones". AnandTech.com.

  • Shilov, Anton. "Samsung Starts Production of 512 GB UFS NAND Flash Memory: 64-Layer V-NAND, 860 MB/s Reads". AnandTech.com.

  • Kim, Chulbum; Cho, Ji-Ho; Jeong, Woopyo; Park, Il-han; Park, Hyun-Wook; Kim, Doo-Hyun; Kang, Daewoon; Lee, Sunghoon; Lee, Ji-Sang; Kim, Wontae; Park, Jiyoon; Ahn, Yang-lo; Lee, Jiyoung; Lee, Jong-Hoon; Kim, Seungbum; Yoon, Hyun-Jun; Yu, Jaedoeg; Choi, Nayoung; Kwon, Yelim; Kim, Nahyun; Jang, Hwajun; Park, Jonghoon; Song, Seunghwan; Park, Yongha; Bang, Jinbae; Hong, Sangki; Jeong, Byunghoon; Kim, Hyun-Jin; Lee, Chunan; et al. (2017). "11.4 a 512Gb 3b/Cell 64-stacked WL 3D V-NAND flash memory". 2017 IEEE International Solid-State Circuits Conference (ISSCC). pp. 202–203. doi:10.1109/ISSCC.2017.7870331. ISBN 978-1-5090-3758-2. S2CID 206998691.

  • "Samsung enables 1TB eUFS 2.1 smartphones - Storage - News - HEXUS.net". m.hexus.net.

  • "Not just a flash in the pan". The Economist. 11 March 2006. Retrieved 10 September 2019.

  • Bez, R.; Pirovano, A. (2019). Advances in Non-Volatile Memory and Storage Technology. Woodhead Publishing. ISBN 9780081025857.

  • "1960 - Metal Oxide Semiconductor (MOS) Transistor Demonstrated". The Silicon Engine. Computer History Museum.

  • "1971: Reusable semiconductor ROM introduced". Computer History Museum. Retrieved 19 June 2019.

  • Fulford, Adel (24 June 2002). "Unsung hero". Forbes. Archived from the original on 3 March 2008. Retrieved 18 March 2008.

  • "How ROM Works". HowStuffWorks. 29 August 2000. Retrieved 10 September 2019.

  • US 4531203, Fujio Masuoka, "Semiconductor memory device and method for manufacturing the same"

  • "NAND Flash Memory: 25 Years of Invention, Development - Data Storage - News & Reviews - eWeek.com". eweek.com. Archived from the original on 28 April 2017.

  • "Toshiba: Inventor of Flash Memory". Toshiba. Retrieved 20 June 2019.

  • Masuoka, F.; Asano, M.; Iwahashi, H.; Komuro, T.; Tanaka, S. (December 1984). "A new flash E2PROM cell using triple polysilicon technology". 1984 International Electron Devices Meeting: 464–467. doi:10.1109/IEDM.1984.190752. S2CID 25967023.

  • Masuoka, F.; Momodomi, M.; Iwata, Y.; Shirota, R. (1987). "New ultra high density EPROM and flash EEPROM with NAND structure cell". Electron Devices Meeting, 1987 International. IEDM 1987. IEEE. pp. 552–555. doi:10.1109/IEDM.1987.191485.

  • Tal, Arie (February 2002). "NAND vs. NOR flash technology: The designer should weigh the options when using flash memory". Archived from the original on 28 July 2010. Retrieved 31 July 2010.

  • "H8S/2357 Group, H8S/2357F-ZTATTM, H8S/2398F-ZTATTM Hardware Manual, Section 19.6.1" (PDF). Renesas. October 2004. Retrieved 23 January 2012. The flash memory can be reprogrammed up to 100 times.[permanent dead link]

  • "AMD DL160 and DL320 Series Flash: New Densities, New Features" (PDF). AMD. July 2003. Archived (PDF) from the original on 24 September 2015. Retrieved 13 November 2014. The devices offer single-power-supply operation (2.7 V to 3.6 V), sector architecture, Embedded Algorithms, high performance, and a 1,000,000 program/erase cycle endurance guarantee.

  • James, Dick (2014). "3D ICs in the real world". 25th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC 2014): 113–119. doi:10.1109/ASMC.2014.6846988. ISBN 978-1-4799-3944-2. S2CID 42565898.

  • "NEC: News Release 97/10/28-01". www.nec.co.jp.

  • "Memory". STOL (Semiconductor Technology Online). Retrieved 25 June 2019.

  • "Toshiba Makes Major Advances in NAND Flash Memory with 3-bit-per-cell 32nm generation and with 4-bit-per-cell 43nm technology". Toshiba. 11 February 2009. Retrieved 21 June 2019.

  • "SanDisk ships world's first memory cards with 64 gigabit X4 NAND flash". SlashGear. 13 October 2009. Retrieved 20 June 2019.

  • "History". Samsung Electronics. Samsung. Retrieved 19 June 2019.

  • "StackPath". www.electronicdesign.com.

  • Ito, T., & Taito, Y. (2017). SONOS Split-Gate eFlash Memory. Embedded Flash Memory for Embedded Systems: Technology, Design for Sub-Systems, and Innovations, 209–244. doi:10.1007/978-3-319-55306-1_7

  • Bez, R., Camerlenghi, E., Modelli, A., & Visconti, A. (2003). Introduction to flash memory. Proceedings of the IEEE, 91(4), 489–502. doi:10.1109/jproc.2003.811702

  • Lee, J.-S. (2011). Review paper: Nano-floating gate memory devices. Electronic Materials Letters, 7(3), 175–183. doi:10.1007/s13391-011-0901-5

  • Aravindan, Avinash (13 November 2018). "Flash 101: Types of NAND Flash".

  • Meena, J., Sze, S., Chand, U., & Tseng, T.-Y. (2014). Overview of emerging nonvolatile memory technologies. Nanoscale Research Letters, 9(1), 526. doi:10.1186/1556-276x-9-526

  • "Charge trap technology advantages for 3D NAND flash drives". SearchStorage.

  • Grossi, A., Zambelli, C., & Olivo, P. (2016). Reliability of 3D NAND Flash Memories. 3D Flash Memories, 29–62. doi:10.1007/978-94-017-7512-0_2

  • Kodama, N.; Oyama, K.; Shirai, H.; Saitoh, K.; Okazawa, T.; Hokari, Y. (December 1991). "A symmetrical side wall (SSW)-DSA cell for a 64 Mbit flash memory". International Electron Devices Meeting 1991 [Technical Digest]: 303–306. doi:10.1109/IEDM.1991.235443. ISBN 0-7803-0243-5. S2CID 111203629.

  • Eitan, Boaz. "US Patent 5,768,192: Non-volatile semiconductor memory cell utilizing asymmetrical charge trapping". US Patent & Trademark Office. Retrieved 22 May 2012.

  • Fastow, Richard M.; Ahmed, Khaled Z.; Haddad, Sameer; et al. (April 2000). "Bake induced charge gain in NOR flash cells". IEEE Electron Device Letters. 21 (4): 184–186. Bibcode:2000IEDL...21..184F. doi:10.1109/55.830976. S2CID 24724751.

  • "Samsung produces first 3D NAND, aims to boost densities, drive lower cost per GB". ExtremeTech. 6 August 2013. Retrieved 4 July 2019.

  • "Toshiba announces new "3D" NAND flash technology". Engadget. 12 June 2007. Retrieved 10 July 2019.

  • "Samsung Introduces World's First 3D V-NAND Based SSD for Enterprise Applications | Samsung | Samsung Semiconductor Global Website". Samsung.com.

  • Clarke, Peter. "Samsung Confirms 24 Layers in 3D NAND". EETimes.

  • "TOSHIBA COMMERCIALIZES INDUSTRY'S HIGHEST CAPACITY EMBEDDED NAND FLASH MEMORY FOR MOBILE CONSUMER PRODUCTS". Toshiba. 17 April 2007. Archived from the original on 23 November 2010. Retrieved 23 November 2010.

  • "Hynix Surprises NAND Chip Industry". The Korea Times. 5 September 2007. Retrieved 8 July 2019.

  • "Toshiba Launches the Largest Density Embedded NAND Flash Memory Devices". Toshiba. 7 August 2008. Retrieved 21 June 2019.

  • "Toshiba Launches Industry's Largest Embedded NAND Flash Memory Modules". Toshiba. 17 June 2010. Retrieved 21 June 2019.

  • SanDisk. "Western Digital Breaks Boundaries with World's Highest-Capacity microSD Card". SanDisk.com. Archived from the original on 1 September 2017. Retrieved 2 September 2017.

  • Bradley, Tony. "Expand Your Mobile Storage With New 400GB microSD Card From SanDisk". Forbes. Archived from the original on 1 September 2017. Retrieved 2 September 2017.

  • Shilov, Anton (5 December 2017). "Samsung Starts Production of 512 GB UFS NAND Flash Memory: 64-Layer V-NAND, 860 MB/s Reads". AnandTech. Retrieved 23 June 2019.

  • Manners, David (30 January 2019). "Samsung makes 1TB flash eUFS module". Electronics Weekly. Retrieved 23 June 2019.

  • Tallis, Billy (17 October 2018). "Samsung Shares SSD Roadmap for QLC NAND And 96-layer 3D NAND". AnandTech. Retrieved 27 June 2019.

  • Basinger, Matt (18 January 2007), PSoC Designer Device Selection Guide (PDF), AN2209, archived from the original (PDF) on 31 October 2009, The PSoC ... utilizes a unique Flash process: SONOS

  • "2.1.1 Flash Memory". www.iue.tuwien.ac.at.

  • "Floating Gate MOS Memory". www.princeton.edu.

  • Shimpi, Anand Lal. "The Intel SSD 710 (200GB) Review". www.anandtech.com.

  • "Flash Memory Reliability, Life & Wear » Electronics Notes".

  • "Understanding TLC NAND".

  • "Solid State bit density, and the Flash Memory Controller". hyperstone.com. 17 April 2018. Retrieved 29 May 2018.

  • Yasufuku, Tadashi; Ishida, Koichi; Miyamoto, Shinji; Nakai, Hiroto; Takamiya, Makoto; Sakurai, Takayasu; Takeuchi, Ken (2009), Proceedings of the 14th ACM/IEEE international symposium on Low power electronics and design - ISLPED '09, pp. 87–92, doi:10.1145/1594233.1594253, ISBN 9781605586847, S2CID 6055676, archived from the original on 5 March 2016 (abstract).

  • Micheloni, Rino; Marelli, Alessia; Eshghi, Kam (2012), Inside Solid State Drives (SSDs), Bibcode:2013issd.book.....M, ISBN 9789400751460, archived from the original on 9 February 2017

  • Micheloni, Rino; Crippa, Luca (2010), Inside NAND Flash Memories, ISBN 9789048194315, archived from the original on 9 February 2017 In particular, pp 515-536: K. Takeuchi. "Low power 3D-integrated SSD"

  • Mozel, Tracey (2009), CMOSET Fall 2009 Circuits and Memories Track Presentation Slides, ISBN 9781927500217, archived from the original on 9 February 2017

  • Tadashi Yasufuku et al., "Inductor and TSV Design of 20-V Boost Converter for Low Power 3D Solid State Drive with NAND Flash Memories" Archived 4 February 2016 at the Wayback Machine. 2010.

  • Hatanaka, T. and Takeuchi, K. "4-times faster rising VPASS (10V), 15% lower power VPGM (20V), wide output voltage range voltage generator system for 4-times faster 3D-integrated solid-state drives". 2011.

  • Takeuchi, K., "Low power 3D-integrated Solid-State Drive (SSD) with adaptive voltage generator". 2010.

  • Ishida, K. et al., "1.8 V Low-Transient-Energy Adaptive Program-Voltage Generator Based on Boost Converter for 3D-Integrated NAND Flash SSD". 2011.

  • A. H. Johnston, "Space Radiation Effects in Advanced Flash Memories" Archived 4 March 2016 at the Wayback Machine. NASA Electronic Parts and Packaging Program (NEPP). 2001. "... internal transistors used for the charge pump and erase/write control have much thicker oxides because of the requirement for high voltage. This causes flash devices to be considerably more sensitive to total dose damage compared to other ULSI technologies. It also implies that write and erase functions will be the first parameters to fail from total dose. ... Flash memories will work at much higher radiation levels in the read mode. ... The charge pumps that are required to generate the high voltage for erasing and writing are usually the most sensitive circuit functions, usually failing below 10 krad(SI)."

  • Zitlaw, Cliff. "The Future of NOR Flash Memory". Memory Designline. UBM Media. Retrieved 3 May 2011.

  • "NAND Flash Controllers - The key to endurance and reliability". hyperstone.com. 7 June 2018. Retrieved 1 June 2022.

  • "Samsung moves into mass production of 3D flash memory". Gizmag.com. 27 August 2013. Archived from the original on 27 August 2013. Retrieved 27 August 2013.

  • "Samsung Electronics Starts Mass Production of Industry First 3-bit 3D V-NAND Flash Memory". news.samsung.com.

  • "Samsung V-NAND technology" (PDF). Samsung Electronics. September 2014. Archived from the original (PDF) on 27 March 2016. Retrieved 27 March 2016.

  • Tallis, Billy. "Micron Announces 176-layer 3D NAND". www.anandtech.com.

  • "Samsung said to be developing industry's first 160-layer NAND flash memory chip". TechSpot.

  • "Toshiba's Cost Model for 3D NAND". www.linkedin.com.

  • "Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash". linkedin.com. Retrieved 1 June 2022.; "Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash". semwiki.com. Retrieved 1 June 2022.

  • "AVR105: Power Efficient High Endurance Parameter Storage in Flash Memory". p. 3

  • Calabrese, Marcello (May 2013). "Accelerated reliability testing of flash memory: Accuracy and issues on a 45nm NOR technology". Proceedings of 2013 International Conference on IC Design & Technology (ICICDT): 37–40. doi:10.1109/ICICDT.2013.6563298. ISBN 978-1-4673-4743-3. S2CID 37127243. Retrieved 22 June 2022.

  • Jonathan Thatcher, Fusion-io; Tom Coughlin, Coughlin Associates; Jim Handy, Objective-Analysis; Neal Ekker, Texas Memory Systems (April 2009). "NAND Flash Solid State Storage for the Enterprise, An In-depth Look at Reliability" (PDF). Solid State Storage Initiative (SSSI) of the Storage Network Industry Association (SNIA). Archived (PDF) from the original on 14 October 2011. Retrieved 6 December 2011.

  • "Difference between SLC, MLC, TLC and 3D NAND in USB flash drives, SSDs and memory cards". Kingston Technology Company.

  • "Micron Collaborates with Sun Microsystems to Extend Lifespan of Flash-Based Storage, Achieves One Million Write Cycles" (Press release). Micron Technology, Inc. 17 December 2008. Archived from the original on 4 March 2016.

  • "Taiwan engineers defeat limits of flash memory". phys.org. Archived from the original on 9 February 2016.

  • "Flash memory made immortal by fiery heat". theregister.co.uk. Archived from the original on 13 September 2017.

  • "Flash memory breakthrough could lead to even more reliable data storage". news.yahoo.com. Archived from the original on 21 December 2012.

  • "TN-29-17 NAND Flash Design and Use Considerations Introduction" (PDF). Micron. April 2010. Archived (PDF) from the original on 12 December 2015. Retrieved 29 July 2011.

  • Kawamatus, Tatsuya. "Technology For Managing NAND Flash" (PDF). Hagiwara sys-com co., LTD. Archived from the original (PDF) on 15 May 2018. Retrieved 15 May 2018.

  • Cooke, Jim (August 2007). "The Inconvenient Truths of NAND Flash Memory" (PDF). Flash Memory Summit 2007. Archived (PDF) from the original on 15 February 2018.

  • Richard Blish. "Dose Minimization During X-ray Inspection of Surface-Mounted Flash ICs" Archived 20 February 2016 at the Wayback Machine. p. 1.

  • Richard Blish. "Impact of X-Ray Inspection on Spansion Flash Memory" Archived 4 March 2016 at the Wayback Machine

  • "SanDisk Extreme PRO SDHC/SDXC UHS-I Memory Card". Archived from the original on 27 January 2016. Retrieved 3 February 2016.

  • "Samsung 32GB USB 3.0 Flash Drive FIT MUF-32BB/AM". Archived from the original on 3 February 2016. Retrieved 3 February 2016.

  • Spansion. "What Types of ECC Should Be Used on Flash Memory?" Archived 4 March 2016 at the Wayback Machine. 2011.

  • "DSstar: TOSHIBA ANNOUNCES 0.13 MICRON 1GB MONOLITHIC NAND". Tgc.com. 23 April 2002. Archived from the original on 27 December 2012. Retrieved 27 August 2013.

  • Kim, Jesung; Kim, John Min; Noh, Sam H.; Min, Sang Lyul; Cho, Yookun (May 2002). "A Space-Efficient Flash Translation Layer for CompactFlash Systems". IEEE Transactions on Consumer Electronics. Vol. 48, no. 2. pp. 366–375. doi:10.1109/TCE.2002.1010143.

  • TN-29-07: Small-Block vs. Large-Block NAND flash Devices Archived 8 June 2013 at the Wayback Machine Explains 512+16 and 2048+64-byte blocks

  • AN10860 LPC313x NAND flash data and bad block management Archived 3 March 2016 at the Wayback Machine Explains 4096+128-byte blocks.

  • Thatcher, Jonathan (18 August 2009). "NAND Flash Solid State Storage Performance and Capability – an In-depth Look" (PDF). SNIA. Archived (PDF) from the original on 7 September 2012. Retrieved 28 August 2012.

  • "Samsung ECC algorithm" (PDF). Samsung. June 2008. Archived (PDF) from the original on 12 October 2008. Retrieved 15 August 2008.

  • "Open NAND Flash Interface Specification" (PDF). Open NAND Flash Interface. 28 December 2006. Archived from the original (PDF) on 27 July 2011. Retrieved 31 July 2010.

  • A list of ONFi members is available at "Membership - ONFi". Archived from the original on 29 August 2009. Retrieved 21 September 2009.

  • "Toshiba Introduces Double Data Rate Toggle Mode NAND in MLC And SLC Configurations". toshiba.com. Archived from the original on 25 December 2015.

  • "Dell, Intel And Microsoft Join Forces To Increase Adoption of NAND-Based Flash Memory in PC Platforms". REDMOND, Wash: Microsoft. 30 May 2007. Archived from the original on 12 August 2014. Retrieved 12 August 2014.

  • Micheloni, Rino; Crippa, Luca; Marelli, Alessia (27 July 2010). "Inside NAND Flash Memories". Springer Science & Business Media – via Google Books.

  • Wright, Edmund (14 May 2014). "The Facts on File Dictionary of Computer Science". Infobase Publishing – via Google Books.

  • Bhattacharyya, Arup (6 July 2017). "Silicon Based Unified Memory Devices and Technology". CRC Press – via Google Books.

  • RAJARAMAN, V.; ADABALA, NEEHARIKA (15 December 2014). "FUNDAMENTALS OF COMPUTERS". PHI Learning Pvt. Ltd. – via Google Books.

  • Aravindan, Avinash (23 July 2018). "Flash 101: NAND Flash vs NOR Flash". Embedded.com. Retrieved 23 December 2020.

  • Micheloni, Rino; Crippa, Luca; Marelli, Alessia (27 July 2010). "Inside NAND Flash Memories". Springer Science & Business Media – via Google Books.

  • Rudan, Massimo; Brunetti, Rossella; Reggiani, Susanna (10 November 2022). "Springer Handbook of Semiconductor Devices". Springer Nature – via Google Books.

  • NAND Flash 101: An Introduction to NAND Flash and How to Design It in to Your Next Product (PDF), Micron, pp. 2–3, TN-29-19, archived from the original (PDF) on 4 June 2016

  • Pavan, Paolo; Bez, Roberto; Olivo, Piero; Zanoni, Enrico (1997). "Flash Memory Cells – An Overview". Proceedings of the IEEE. Vol. 85, no. 8 (published August 1997). pp. 1248–1271. doi:10.1109/5.622505. Retrieved 15 August 2008.

  • "The Fundamentals of Flash Memory Storage". 20 March 2012. Archived from the original on 4 January 2017. Retrieved 3 January 2017.

  • "SLC NAND Flash Memory | TOSHIBA MEMORY | Europe(EMEA)". business.toshiba-memory.com. Archived from the original on 1 January 2019. Retrieved 1 January 2019.

  • "Loading site please wait..." Toshiba.com.

  • "Serial Interface NAND | TOSHIBA MEMORY | Europe(EMEA)". business.toshiba-memory.com. Archived from the original on 1 January 2019. Retrieved 1 January 2019.

  • "BENAND | TOSHIBA MEMORY | Europe(EMEA)". business.toshiba-memory.com. Archived from the original on 1 January 2019. Retrieved 1 January 2019.

  • "SLC NAND Flash Memory | TOSHIBA MEMORY | Europe(EMEA)". business.toshiba-memory.com. Archived from the original on 1 January 2019. Retrieved 1 January 2019.

  • Salter, Jim (28 September 2019). "SSDs are on track to get bigger and cheaper thanks to PLC technology". Ars Technica.

  • "PBlaze4_Memblaze". memblaze.com. Retrieved 28 March 2019.

  • Crothers, Brooke. "SanDisk to begin making 'X4' flash chips". CNET.

  • Crothers, Brooke. "SanDisk ships 'X4' flash chips". CNET.

  • "SanDisk Ships Flash Memory Cards With 64 Gigabit X4 NAND Technology". phys.org.

  • "SanDisk Begins Mass Production of X4 Flash Memory Chips". 17 February 2012.

  • Tallis, Billy. "The Samsung 983 ZET (Z-NAND) SSD Review: How Fast Can Flash Memory Get?". AnandTech.com.

  • Vättö, Kristian. "Testing Samsung 850 Pro Endurance & Measuring V-NAND Die Size". AnandTech. Archived from the original on 26 June 2017. Retrieved 11 June 2017.

  • Vättö, Kristian. "Samsung SSD 845DC EVO/PRO Performance Preview & Exploring IOPS Consistency". AnandTech. p. 3. Archived from the original on 22 October 2016. Retrieved 11 June 2017.

  • Vättö, Kristian. "Samsung SSD 850 EVO (120GB, 250GB, 500GB & 1TB) Review". AnandTech. p. 4. Archived from the original on 31 May 2017. Retrieved 11 June 2017.

  • Vättö, Kristian. "Samsung SSD 845DC EVO/PRO Performance Preview & Exploring IOPS Consistency". AnandTech. p. 2. Archived from the original on 22 October 2016. Retrieved 11 June 2017.

  • Ramseyer, Chris (9 June 2017). "Flash Industry Trends Could Lead Users Back to Spinning Disks". AnandTech. Retrieved 11 June 2017.

  • "PBlaze5 700". memblaze.com. Retrieved 28 March 2019.

  • "PBlaze5 900". memblaze.com. Retrieved 28 March 2019.

  • "PBlaze5 910/916 series NVMe SSD". memblaze.com. Retrieved 26 March 2019.

  • "PBlaze5 510/516 series NVMe™ SSD". memblaze.com. Retrieved 26 March 2019.

  • "QLC NAND - What can we expect from the technology?". 7 November 2018.

  • "Say Hello: Meet the World's First QLC SSD, the Micron 5210 ION". Micron.com.

  • "QLC NAND". Micron.com.[permanent dead link]

  • Tallis, Billy. "The Intel SSD 660p SSD Review: QLC NAND Arrives For Consumer SSDs". AnandTech.com.

  • "SSD endurance myths and legends articles on StorageSearch.com". StorageSearch.com.

  • "Samsung Announces QLC SSDs And Second-Gen Z-NAND". Tom's Hardware. 18 October 2018.

  • "Samsung 860 QVO review: the first QLC SATA SSD, but it can't topple TLC yet". PCGamesN.

  • "Samsung Electronics Starts Mass Production of Industry's First 4-bit Consumer SSD". news.samsung.com.

  • Jin, Hyunjoo; Nellis, Stephen (20 October 2020). "South Korea's SK Hynix to buy Intel's NAND business for $9 billion". Reuters – via www.reuters.com.

  • "NAND Evolution and its Effects on Solid State Drive Useable Life" (PDF). Western Digital. 2009. Archived from the original (PDF) on 12 November 2011. Retrieved 22 April 2012.

  • "Flash vs DRAM follow-up: chip stacking". The Daily Circuit. 22 April 2012. Archived from the original on 24 November 2012. Retrieved 22 April 2012.

  • "Computer data storage unit conversion - non-SI quantity". Archived from the original on 8 May 2015. Retrieved 20 May 2015.

  • Shilov, Anton (12 September 2005). "Samsung Unveils 2GB Flash Memory Chip". X-bit labs. Archived from the original on 24 December 2008. Retrieved 30 November 2008.

  • Gruener, Wolfgang (11 September 2006). "Samsung announces 40 nm Flash, predicts 20 nm devices". TG Daily. Archived from the original on 23 March 2008. Retrieved 30 November 2008.

  • "SanDisk Media Center". sandisk.com. Archived from the original on 19 December 2008.

  • "SanDisk Media Center". sandisk.com. Archived from the original on 19 December 2008.

  • https://www.pcworld.com/article/225370/look_out_for_the_256gb_thumb_drive_and_the_128gb_tablet.html[dead link]; "Kingston outs the first 256GB flash drive". 20 July 2009. Archived from the original on 8 July 2017. Retrieved 28 August 2017. 20 July 2009, Kingston DataTraveler 300 is 256 GB.

  • Borghino, Dario (31 March 2015). "3D flash technology moves forward with 10 TB SSDs and the first 48-layer memory cells". Gizmag. Archived from the original on 18 May 2015. Retrieved 31 March 2015.

  • "Samsung Launches Monster 4TB 850 EVO SSD Priced at $1,499 | Custom PC Review". Custom PC Review. 13 July 2016. Archived from the original on 9 October 2016. Retrieved 8 October 2016.

  • "Samsung Unveils 32TB SSD Leveraging 4th Gen 64-Layer 3D V-NAND | Custom PC Review". Custom PC Review. 11 August 2016. Archived from the original on 9 October 2016. Retrieved 8 October 2016.

  • Master, Neal; Andrews, Mathew; Hick, Jason; Canon, Shane; Wright, Nicholas (2010). "Performance analysis of commodity and enterprise class flash devices" (PDF). IEEE Petascale Data Storage Workshop. Archived (PDF) from the original on 6 May 2016.

  • "DailyTech - Samsung Confirms 32nm Flash Problems, Working on New SSD Controller". dailytech.com. Archived from the original on 4 March 2016. Retrieved 3 October 2009.

  • Clive Maxfield. "Bebop to the Boolean Boogie: An Unconventional Guide to Electronics". p. 232.

  • Many serial flash devices implement a bulk read mode and incorporate an internal address counter, so that it is trivial to configure them to transfer their entire contents to RAM on power-up. When clocked at 50 MHz, for example, a serial flash could transfer a 64 Mbit firmware image in less than two seconds.

  • Lyth0s (17 March 2011). "SSD vs. HDD". elitepcbuilding.com. Archived from the original on 20 August 2011. Retrieved 11 July 2011.

  • "Flash Solid State Disks – Inferior Technology or Closet Superstar?". STORAGEsearch. Archived from the original on 24 December 2008. Retrieved 30 November 2008.

  • Vadim Tkachenko (12 September 2012). "Intel SSD 910 vs HDD RAID in tpcc-mysql benchmark". MySQL Performance Blog.

  • Matsunobu, Yoshinori. "SSD Deployment Strategies for MySQL." Archived 3 March 2016 at the Wayback Machine Sun Microsystems, 15 April 2010.

  • "Samsung Electronics Launches the World's First PCs with NAND Flash-based Solid State Disk". Press Release. Samsung. 24 May 2006. Archived from the original on 20 December 2008. Retrieved 30 November 2008.

  • "Samsung's SSD Notebook". 22 August 2006. Archived from the original on 15 October 2018. Retrieved 15 October 2018.

  • "文庫本サイズのVAIO「type U」 フラッシュメモリー搭載モデル発売" [Paperback-sized VAIO "type U" model with flash memory released]. Sony.jp (in Japanese).

  • "Sony Vaio UX UMPC – now with 32 GB Flash memory | NBnews.info. Laptop and notebook news, reviews, test, specs, price | Каталог ноутбуков, ультрабуков и планшетов, новости, обзоры [Catalog of laptops, ultrabooks and tablets; news and reviews]". Archived from the original on 28 June 2022. Retrieved 7 November 2018.

  • Douglas Perry (2012) Princeton: Replacing RAM with Flash Can Save Massive Power.

  • "Understanding Life Expectancy of Flash Storage". www.ni.com. 23 July 2020. Retrieved 19 December 2020.

  • "8-Bit AVR Microcontroller ATmega32A Datasheet Complete" (PDF). 19 February 2016. p. 18. Archived from the original (PDF) on 9 April 2016. Retrieved 29 May 2016. Reliability Qualification results show that the projected data retention failure rate is much less than 1 PPM over 20 years at 85 °C or 100 years at 25 °C

  • "On Hacking MicroSD Cards « bunnie's blog".

  • "Data Retention in MLC NAND Flash Memory: Characterization, Optimization, and Recovery" (PDF). 27 January 2015. p. 10. Archived (PDF) from the original on 7 October 2016. Retrieved 27 April 2016.

  • "JEDEC SSD Specifications Explained" (PDF). p. 27.

  • Yinug, Christopher Falan (July 2007). "The Rise of the Flash Memory Market: Its Impact on Firm Behavior and Global Semiconductor Trade Patterns" (PDF). Journal of International Commerce and Economics. Archived from the original (PDF) on 29 May 2008. Retrieved 19 April 2008.

  • NAND memory market rockets Archived 8 February 2016 at the Wayback Machine, 17 April 2013, Nermin Hajdarbegovic, TG Daily, retrieved at 18 April 2013

  • "Power outage may have ruined 15 exabytes of WD and Toshiba flash storage". AppleInsider.

  • "NAND Flash manufacturers' market share 2019". Statista. Retrieved 3 July 2019.

  • External links

     https://en.wikipedia.org/wiki/Flash_memory

     https://en.wikipedia.org/wiki/Magnetic_resonance_imaging

    https://en.wikipedia.org/wiki/U-70_(synchrotron)

    https://en.wikipedia.org/wiki/Hard_disk_drive

    https://en.wikipedia.org/wiki/Non-volatile_random-access_memory

    https://en.wikipedia.org/wiki/Degaussing#Permanent_magnet_degausser

    https://en.wikipedia.org/wiki/Large_Hadron_Collider#Initial_lower_magnet_currents

    https://en.wikipedia.org/wiki/Dynamo#Induction_with_permanent_magnets

    https://en.wikipedia.org/wiki/Solid-state_drive

    Faraday disk, the first homopolar generator

    A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder) with an electrical polarity that depends on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage.[1] They are unusual in that they can source tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Also, the homopolar generator is unique in that no other rotary electric machine can produce DC without using rectifiers or commutators.[2]
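For a rough sense of the magnitudes involved: integrating the motional EMF contribution ω·r·B along a radius of the disc gives an open-circuit voltage V = ½·B·ω·R². A minimal sketch of this formula (the field strength, speed, and radius below are illustrative values, not from the source):

```python
import math

def faraday_disc_emf(b_field_t, omega_rad_s, radius_m):
    """Open-circuit EMF of a homopolar (Faraday disc) generator.

    Integrating the motional EMF contribution (omega * r * B) dr
    from the axis (r = 0) to the rim (r = R) gives V = 1/2 * B * omega * R^2.
    """
    return 0.5 * b_field_t * omega_rad_s * radius_m**2

# Small demonstration model: 1 T field, 0.1 m disc spinning at 3000 rpm.
omega = 2 * math.pi * 50  # 3000 rpm = 50 rev/s, in rad/s
print(faraday_disc_emf(1.0, omega, 0.1))  # ~1.57 V: "a few volts", as the text says
```

The quadratic dependence on radius explains why large research machines reach hundreds of volts while tabletop discs produce only a volt or two.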

     https://en.wikipedia.org/wiki/Homopolar_generator

    Magnet "training"

    In certain cases, superconducting magnets designed for very high currents require extensive bedding in, to enable the magnets to function at their full planned currents and fields. This is known as "training" the magnet, and involves a type of material memory effect. One situation where this is required is in particle colliders such as CERN's Large Hadron Collider.[8][9] The magnets of the LHC were planned to run at 8 TeV (2×4 TeV) on its first run and 14 TeV (2×7 TeV) on its second run, but were initially operated at a lower energy of 3.5 TeV and 6.5 TeV per beam respectively. Because of crystallographic defects in the material, the magnets will initially lose their superconducting ability ("quench") at a lower level than their design current. CERN states that this is due to electromagnetic forces causing tiny movements in the magnets, which in turn cause superconductivity to be lost when operating at the high precisions needed for their planned current.[9] By repeatedly running the magnets at a lower current and then slightly increasing it until they quench under controlled conditions, the magnets gradually gain the ability to withstand the higher currents of their design specification, with such defects "shaken" out of them, until they are eventually able to operate reliably at their full planned current without quenching.[9]

     https://en.wikipedia.org/wiki/Superconducting_magnet#Magnet_%22training%22

     

    Magnetic memory may refer to:

    • Magnetic storage, the storage of data on a magnetized medium
    • Magnetic-core memory, an early form of random-access memory
    • Remanence, or residual magnetization, the magnetization left behind in a ferromagnet after an external magnetic field is removed
    • Rock magnetism, the study of the magnetic properties of rocks, sediments and soils

     https://en.wikipedia.org/wiki/Magnetic_memory

     

    Remanence or remanent magnetization or residual magnetism is the magnetization left behind in a ferromagnetic material (such as iron) after an external magnetic field is removed.[1] Colloquially, when a magnet is "magnetized", it has remanence.[2] The remanence of magnetic materials provides the magnetic memory in magnetic storage devices, and is used as a source of information on the past Earth's magnetic field in paleomagnetism. The word remanence is from remanent + -ence, meaning "that which remains".[3]

    The equivalent term residual magnetization is generally used in engineering applications. In transformers, electric motors and generators a large residual magnetization is not desirable (see also electrical steel) as it is an unwanted contamination, for example a magnetization remaining in an electromagnet after the current in the coil is turned off. Where it is unwanted, it can be removed by degaussing.

    Sometimes the term retentivity is used for remanence measured in units of magnetic flux density.[4]

     https://en.wikipedia.org/wiki/Remanence

    Paleomagnetism (occasionally palaeomagnetism[note 1]) is the study of magnetic fields recorded in rocks, sediment, or archeological materials. Geophysicists who specialize in paleomagnetism are called paleomagnetists.

    Certain magnetic minerals in rocks can record the direction and intensity of Earth's magnetic field at the time they formed. This record provides information on the past behavior of the geomagnetic field and the past location of tectonic plates. The record of geomagnetic reversals preserved in volcanic and sedimentary rock sequences (magnetostratigraphy) provides a time-scale that is used as a geochronologic tool.

    Evidence from paleomagnetism led to the revival of the continental drift hypothesis and its transformation into the modern theory of plate tectonics. Apparent polar wander paths provided the first clear geophysical evidence for continental drift, while marine magnetic anomalies did the same for seafloor spreading. Paleomagnetic data continues to extend the history of plate tectonics back in time, constraining the ancient position and movement of continents and continental fragments (terranes).

    The field of paleomagnetism also encompasses equivalent measurements of samples from other Solar System bodies, such as Moon rocks and meteorites, where it is used to investigate the ancient magnetic fields of those bodies and dynamo theory. Paleomagnetism relies on developments in rock magnetism, and overlaps with biomagnetism, magnetic fabrics (used as strain indicators in rocks and soils), and environmental magnetism.

     

     https://en.wikipedia.org/wiki/Paleomagnetism

    Rock magnetism is the study of the magnetic properties of rocks, sediments and soils. The field arose out of the need in paleomagnetism to understand how rocks record the Earth's magnetic field. This remanence is carried by minerals, particularly certain strongly magnetic minerals like magnetite (the main source of magnetism in lodestone). An understanding of remanence helps paleomagnetists to develop methods for measuring the ancient magnetic field and correct for effects like sediment compaction and metamorphism. Rock magnetic methods are used to get a more detailed picture of the source of the distinctive striped pattern in marine magnetic anomalies that provides important information on plate tectonics. They are also used to interpret terrestrial magnetic anomalies in magnetic surveys as well as the strong crustal magnetism on Mars.

    Strongly magnetic minerals have properties that depend on the size, shape, defect structure and concentration of the minerals in a rock. Rock magnetism provides non-destructive methods for analyzing these minerals such as magnetic hysteresis measurements, temperature-dependent remanence measurements, Mössbauer spectroscopy, ferromagnetic resonance and so on. With such methods, rock magnetists can measure the effects of past climate change and human impacts on the mineralogy (see environmental magnetism). In sediments, much of the magnetic remanence is carried by minerals that were created by magnetotactic bacteria, so rock magnetists have made significant contributions to biomagnetism.

    History

    Until the 20th century, the study of the Earth's field (geomagnetism and paleomagnetism) and of magnetic materials (especially ferromagnetism) developed separately.

    Rock magnetism had its start when scientists brought these two fields together in the laboratory.[1] Koenigsberger (1938), Thellier (1938) and Nagata (1943) investigated the origin of remanence in igneous rocks.[1] By heating rocks and archeological materials to high temperatures in a magnetic field, they gave the materials a thermoremanent magnetization (TRM), and they investigated the properties of this magnetization. Thellier developed a series of conditions (the Thellier laws) that, if fulfilled, would allow the intensity of the ancient magnetic field to be determined using the Thellier–Thellier method. In 1949, Louis Néel developed a theory that explained these observations, showed that the Thellier laws were satisfied by certain kinds of single-domain magnets, and introduced the concept of blocking of TRM.[2]

    When paleomagnetic work in the 1950s lent support to the theory of continental drift,[3][4] skeptics were quick to question whether rocks could carry a stable remanence for geological ages.[5] Rock magnetists were able to show that rocks could have more than one component of remanence, some soft (easily removed) and some very stable. To get at the stable part, they took to "cleaning" samples by heating them or exposing them to an alternating field. However, later events, particularly the recognition that many North American rocks had been pervasively remagnetized in the Paleozoic,[6] showed that a single cleaning step was inadequate, and paleomagnetists began to routinely use stepwise demagnetization to strip away the remanence in small bits.

    Fundamentals

    Types of magnetic order

    The contribution of a mineral to the total magnetism of a rock depends strongly on the type of magnetic order or disorder. Magnetically disordered minerals (diamagnets and paramagnets) contribute a weak magnetism and have no remanence. The more important minerals for rock magnetism are the minerals that can be magnetically ordered, at least at some temperatures. These are the ferromagnets, ferrimagnets and certain kinds of antiferromagnets. These minerals have a much stronger response to the field and can have a remanence.

    Diamagnetism

    Diamagnetism is a magnetic response shared by all substances. In response to an applied magnetic field, electrons precess (see Larmor precession), and by Lenz's law they act to shield the interior of a body from the magnetic field. Thus, the moment produced is in the opposite direction to the field and the susceptibility is negative. This effect is weak but independent of temperature. A substance whose only magnetic response is diamagnetism is called a diamagnet.

    Paramagnetism

    Paramagnetism is a weak positive response to a magnetic field due to rotation of electron spins. Paramagnetism occurs in certain kinds of iron-bearing minerals because iron contains unpaired electrons in its shells (see Hund's rules). Some are paramagnetic down to absolute zero and their susceptibility is inversely proportional to the temperature (see Curie's law); others are magnetically ordered below a critical temperature, and the susceptibility increases as the temperature approaches that critical temperature from above (see Curie–Weiss law).
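The two temperature dependences mentioned can be written as Curie's law, χ = C/T, and the Curie–Weiss law, χ = C/(T − Tc), which diverges as T approaches the ordering temperature Tc from above. A minimal sketch (the Curie constant and Tc values are illustrative, not from the source):

```python
def curie_susceptibility(c, t):
    """Curie's law: chi = C / T, for simple paramagnets down to absolute zero."""
    return c / t

def curie_weiss_susceptibility(c, t, t_c):
    """Curie-Weiss law: chi = C / (T - Tc), valid above the ordering temperature Tc."""
    if t <= t_c:
        raise ValueError("Curie-Weiss form applies only above Tc")
    return c / (t - t_c)

# Susceptibility grows as T approaches Tc from above (hypothetical C = 1, Tc = 500 K):
for t in (900.0, 700.0, 600.0):
    print(t, curie_weiss_susceptibility(1.0, t, 500.0))
```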

    Ferromagnetism

    Schematic of parallel spin directions in a ferromagnet.

    Collectively, strongly magnetic materials are often referred to as ferromagnets. However, this magnetism can arise as the result of more than one kind of magnetic order. In the strict sense, ferromagnetism refers to magnetic ordering where neighboring electron spins are aligned by the exchange interaction. The classic ferromagnet is iron. Below a critical temperature called the Curie temperature, ferromagnets have a spontaneous magnetization and there is hysteresis in their response to a changing magnetic field. Most importantly for rock magnetism, they have remanence, so they can record the Earth's field.

    Iron does not occur widely in its pure form. It is usually incorporated into iron oxides, oxyhydroxides and sulfides. In these compounds, the iron atoms are not close enough for direct exchange, so they are coupled by indirect exchange or superexchange. The result is that the crystal lattice is divided into two or more sublattices with different moments.[1]

    Ferrimagnetism

    Schematic of unbalanced antiparallel moments in a ferrimagnet.

    Ferrimagnets have two sublattices with opposing moments. One sublattice has a larger moment, so there is a net unbalance. Magnetite, the most important of the magnetic minerals, is a ferrimagnet. Ferrimagnets often behave like ferromagnets, but the temperature dependence of their spontaneous magnetization can be quite different. Louis Néel identified four types of temperature dependence, one of which involves a reversal of the magnetization. This phenomenon played a role in controversies over marine magnetic anomalies.

    Antiferromagnetism

    Schematic of alternating spin directions in an antiferromagnet.

    Antiferromagnets, like ferrimagnets, have two sublattices with opposing moments, but now the moments are equal in magnitude. If the moments are exactly opposed, the magnet has no remanence. However, the moments can be tilted (spin canting), resulting in a moment nearly at right angles to the moments of the sublattices. Hematite has this kind of magnetism.

    Magnetic mineralogy

    Types of remanence

    Magnetic remanence is often identified with a particular kind of remanence that is obtained after exposing a magnet to a field at room temperature. However, the Earth's field is not large, and this kind of remanence would be weak and easily overwritten by later fields. A central part of rock magnetism is the study of magnetic remanence, both as natural remanent magnetization (NRM) in rocks obtained from the field and remanence induced in the laboratory. Below are listed the important natural remanences and some artificially induced kinds.

    Thermoremanent magnetization (TRM)

    When an igneous rock cools, it acquires a thermoremanent magnetization (TRM) from the Earth's field. TRM can be much larger than it would be if exposed to the same field at room temperature (see isothermal remanence). This remanence can also be very stable, lasting without significant change for millions of years. TRM is the main reason that paleomagnetists are able to deduce the direction and magnitude of the ancient Earth's field.[7]

    If a rock is later re-heated (as a result of burial, for example), part or all of the TRM can be replaced by a new remanence. If it is only part of the remanence, it is known as partial thermoremanent magnetization (pTRM). Because numerous experiments have been done modeling different ways of acquiring remanence, pTRM can have other meanings. For example, it can also be acquired in the laboratory by cooling in zero field to a temperature T1 (below the Curie temperature), applying a magnetic field and cooling to a lower temperature T2, then cooling the rest of the way to room temperature in zero field.

    The standard model for TRM is as follows. When a mineral such as magnetite cools below the Curie temperature, it becomes ferromagnetic but is not immediately capable of carrying a remanence. Instead, it is superparamagnetic, responding reversibly to changes in the magnetic field. For remanence to be possible there must be a strong enough magnetic anisotropy to keep the magnetization near a stable state; otherwise, thermal fluctuations make the magnetic moment wander randomly. As the rock continues to cool, there is a critical temperature at which the magnetic anisotropy becomes large enough to keep the moment from wandering: this temperature is called the blocking temperature and is referred to by the symbol TB. The magnetization remains in the same state as the rock is cooled to room temperature and becomes a thermoremanent magnetization.
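Néel's theory models this blocking with the relaxation time τ = τ0 · exp(K·V / (kB·T)), where K is the anisotropy energy density and V the grain volume: above the blocking temperature, τ is shorter than the observation time and the moment wanders (superparamagnetism); below it, the moment is effectively frozen. A hedged sketch with illustrative numbers (the values of K, τ0, and the grain size are assumptions, not from the source):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(anisotropy_j_m3, volume_m3, temp_k, tau0_s=1e-9):
    """Neel-Arrhenius relaxation time: tau = tau0 * exp(K*V / (kB*T))."""
    return tau0_s * math.exp(anisotropy_j_m3 * volume_m3 / (K_B * temp_k))

# Hypothetical single-domain grain: 30 nm diameter, K = 1e4 J/m^3 (assumed values).
d = 30e-9
volume = math.pi / 6 * d**3
for temp in (600.0, 300.0, 100.0):
    print(temp, neel_relaxation_time(1e4, volume, temp))
# Cooling lengthens tau enormously: the moment "blocks" once tau exceeds the
# observation time, which is the essence of how a cooling rock acquires TRM.
```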

    Chemical (or crystallization) remanent magnetization (CRM)

    Magnetic grains may precipitate from a circulating solution, or be formed during chemical reactions, and may record the direction of the magnetic field at the time of mineral formation. The field is said to be recorded by chemical remanent magnetization (CRM). The mineral recording the field commonly is hematite, another iron oxide. Redbeds, clastic sedimentary rocks (such as sandstones) that are red primarily because of hematite formation during or after sedimentary diagenesis, may have useful CRM signatures, and magnetostratigraphy can be based on such signatures.

    Depositional remanent magnetization (DRM)

    Magnetic grains in sediments may align with the magnetic field during or soon after deposition; this is known as detrital remanent magnetization (DRM). If the magnetization is acquired as the grains are deposited, the result is a depositional detrital remanent magnetization (dDRM); if it is acquired soon after deposition, it is a post-depositional detrital remanent magnetization (pDRM).

    Viscous remanent magnetization

    Viscous remanent magnetization (VRM), also known as viscous magnetization, is remanence that is acquired by ferromagnetic minerals by sitting in a magnetic field for some time. The natural remanent magnetization of an igneous rock can be altered by this process. To remove this component, some form of stepwise demagnetization must be used.[1]

    Applications of rock magnetism

    Notes


  • Dunlop & Özdemir 1997

  • Néel 1949

  • Irving 1956

  • Runcorn 1956

  • For example, Sir Harold Jeffreys, in his influential textbook The Earth, had the following to say about it:

    "When I last did a magnetic experiment (about 1909) we were warned against careless handling of permanent magnets, and the magnetism was liable to change without much carelessness. In studying the magnetism of rocks the specimen has to be broken off with a geological hammer and then carried to the laboratory. It is supposed that in the process its magnetism does not change to any important extent, and though I have often asked how this comes to be the case I have never received any answer." (Jeffreys 1959, p. 371)


  • McCabe & Elmore 1989

  • References

    • Dunlop, David J.; Özdemir, Özden (1997). Rock Magnetism: Fundamentals and Frontiers. Cambridge Univ. Press. ISBN 0-521-32514-5.
    • Hunt, Christopher P.; Moskowitz, Bruce P. (1995). "Magnetic properties of rocks and minerals". In Ahrens, T. J. (ed.). Rock Physics and Phase Relations: A Handbook of Physical Constants. Vol. 3. Washington, DC: American Geophysical Union. pp. 189–204.
    • Irving, E. (1956). "Paleomagnetic and palaeoclimatological aspects of polar wandering". Geofis. Pura. Appl. 33 (1): 23–41. Bibcode:1956GeoPA..33...23I. doi:10.1007/BF02629944. S2CID 129781412.
    • Jeffreys, Sir Harold (1959). The earth: its origin, history, and physical constitution. Cambridge Univ. Press. ISBN 0-521-20648-0.
    • McCabe, C.; Elmore, R. D. (1989). "The occurrence and origin of Late Paleozoic remagnetization in the sedimentary rocks of North America". Reviews of Geophysics. 27 (4): 471–494. Bibcode:1989RvGeo..27..471M. doi:10.1029/RG027i004p00471.
    • Néel, Louis (1949). "Théorie du traînage magnétique des ferromagnétiques en grains fins avec application aux terres cuites". Ann. Géophys. 5: 99–136.
    • Runcorn, S. K. (1956). "Paleomagnetic comparisons between Europe and North America". Proc. Geol. Assoc. Canada. 8: 77–85.
    • Stacey, Frank D.; Banerjee, Subir K. (1974). The Physical Principles of Rock Magnetism. Elsevier. ISBN 0-444-41084-8.

    External links

     

     https://en.wikipedia.org/wiki/Rock_magnetism

     

    The Sun is an MHD system that is not well understood, possibly because of the simplifications of plasma physics required in this framework.

    Magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydromagnetics) is the study of the magnetic properties and behaviour of electrically conducting fluids. Examples of such magnetofluids include plasmas, liquid metals, salt water, and electrolytes.[1] The word magnetohydrodynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén,[2] for which he received the Nobel Prize in Physics in 1970.

    The fundamental concept behind MHD is that magnetic fields can induce currents in a moving conductive fluid, which in turn polarizes the fluid and reciprocally changes the magnetic field itself. The set of equations that describe MHD is a combination of the Navier–Stokes equations of fluid dynamics and Maxwell's equations of electromagnetism. These differential equations must be solved simultaneously, either analytically or numerically.

    History

    The first recorded use of the word magnetohydrodynamics is by Hannes Alfvén in 1942:[3]

    At last some remarks are made about the transfer of momentum from the Sun to the planets, which is fundamental to the theory. The importance of the Magnetohydrodynamic waves in this respect are pointed out.

    The ebbing salty water flowing past London's Waterloo Bridge interacts with the Earth's magnetic field to produce a potential difference between the two river banks. Michael Faraday called this effect "magneto-electric induction" and tried this experiment in 1832 but the current was too small to measure with the equipment at the time,[4] and the river bed contributed to short-circuit the signal. However, by a similar process, the voltage induced by the tide in the English Channel was measured in 1851.[5]

    Faraday carefully avoided the term hydrodynamics in this work.

    Equations

    In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species: the current density J and the center of mass velocity v. In a given fluid, each species s has a number density n_s, mass m_s, electric charge q_s, and a mean velocity u_s. The fluid's total mass density is then ρ = Σ_s m_s n_s, and the motion of the fluid can be described by the current density, expressed as

        J = Σ_s q_s n_s u_s

    and the center of mass velocity, expressed as

        v = (1/ρ) Σ_s m_s n_s u_s
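These definitions (mass density ρ = Σ n_s·m_s, current density J = Σ n_s·q_s·u_s, center-of-mass velocity v = Σ n_s·m_s·u_s / ρ) transcribe directly into code. A minimal one-dimensional sketch, with each species given as a hypothetical (n, m, q, u) tuple in dimensionless units:

```python
def bulk_quantities(species):
    """Given species tuples (n, m, q, u), return (rho, J, v):
    mass density rho = sum n*m, current density J = sum n*q*u,
    and center-of-mass velocity v = (1/rho) * sum n*m*u (1-D for simplicity)."""
    rho = sum(n * m for n, m, q, u in species)
    current = sum(n * q * u for n, m, q, u in species)
    v_cm = sum(n * m * u for n, m, q, u in species) / rho
    return rho, current, v_cm

# Two-species toy fluid (illustrative numbers): light positive species moving
# faster than the heavy negative one, so there is a net current.
rho, j, v = bulk_quantities([(1.0, 1.0, 1.0, 2.0), (1.0, 2.0, -1.0, 1.0)])
print(rho, j, v)  # 3.0 1.0 1.333...
```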

    MHD can be described by a set of equations consisting of a continuity equation, an equation of motion, an equation of state, Ampère's law, Faraday's law, and Ohm's law. As with any fluid description of a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality.

    In the adiabatic limit, that is, the assumption of an isotropic pressure p and isotropic temperature, a fluid with an adiabatic index γ, electrical resistivity η, magnetic field B, and electric field E can be described by the continuity equation

        ∂ρ/∂t + ∇·(ρv) = 0

    the equation of state

        (∂/∂t + v·∇)(p/ρ^γ) = 0

    the equation of motion

        ρ(∂/∂t + v·∇)v = J × B − ∇p

    the low-frequency Ampère's law

        μ₀J = ∇ × B

    Faraday's law

        ∂B/∂t = −∇ × E

    and Ohm's law

        E + v × B = ηJ

    Taking the curl of Ohm's law and using Ampère's law and Faraday's law results in the induction equation,

        ∂B/∂t = ∇ × (v × B) + (η/μ₀)∇²B

    where η/μ₀ is the magnetic diffusivity.

    In the equation of motion, the Lorentz force term J × B can be expanded using Ampère's law and a vector calculus identity to give

        J × B = (B·∇)B/μ₀ − ∇(B²/2μ₀)

    where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force.[6]
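In the static limit (v = 0), the induction equation reduces to a pure diffusion equation for the magnetic field, which can be integrated with a simple explicit finite-difference scheme. A minimal 1-D sketch (the grid, diffusivity, time step, and initial profile are all illustrative choices, not from the source):

```python
import numpy as np

def diffuse_b(b, eta_m, dx, dt, steps):
    """Explicit FTCS integration of dB/dt = eta_m * d2B/dx2,
    with B held at 0 on both ends. Stable for dt <= dx**2 / (2 * eta_m)."""
    b = b.copy()
    for _ in range(steps):
        b[1:-1] += eta_m * dt / dx**2 * (b[2:] - 2 * b[1:-1] + b[:-2])
    return b

# A narrow flux concentration spreads out and its peak decays by diffusion:
b0 = np.zeros(101)
b0[50] = 1.0
b1 = diffuse_b(b0, eta_m=1.0, dx=1.0, dt=0.4, steps=100)
print(b0.max(), b1.max())  # the peak field decays as the profile broadens
```

This is the behavior that ideal MHD neglects: for small magnetic diffusivity (or large system size) the decay is slow enough to ignore over the time scales of interest.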

    Ideal MHD

    In view of the infinite conductivity, every motion (perpendicular to the field) of the liquid in relation to the lines of force is forbidden because it would give infinite eddy currents. Thus the matter of the liquid is "fastened" to the lines of force...

    Hannes Alfvén, 1943[7]

    The simplest form of MHD, ideal MHD, assumes that the resistive term in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration.[6] Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.[8]: 6 
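The magnetic Reynolds number compares induction to diffusion: Rm = U·L/η_m with magnetic diffusivity η_m = 1/(μ0·σ), so Rm = μ0·σ·U·L. A hedged sketch of this estimate (the conductivity, velocity, and length values below are illustrative orders of magnitude, not from the source):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_reynolds(conductivity_s_m, velocity_m_s, length_m):
    """Rm = U * L / eta_m, with magnetic diffusivity eta_m = 1 / (mu0 * sigma).
    Rm >> 1: induction dominates and ideal MHD is a good approximation;
    Rm <~ 1: magnetic diffusion matters and resistivity cannot be ignored."""
    eta_m = 1.0 / (MU_0 * conductivity_s_m)
    return velocity_m_s * length_m / eta_m

# Large, fast, highly conductive systems have huge Rm; small slow ones do not.
print(magnetic_reynolds(1e6, 1e3, 1e6))  # hot-plasma-like scales: Rm >> 1
print(magnetic_reynolds(5.0, 1.0, 1.0))  # seawater-like scales: Rm << 1
```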

    A fundamental concept underlying ideal MHD is the frozen-in flux theorem which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system.[9][8]: 25  The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid—for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field.

    Ideal MHD equations

    In ideal MHD, the resistive term ηJ vanishes in Ohm's law giving the ideal Ohm's law,[6]

        E + v × B = 0

    Similarly, the magnetic diffusion term (η/μ₀)∇²B in the induction equation vanishes giving the ideal induction equation,[8]: 23 

        ∂B/∂t = ∇ × (v × B)

    Applicability of ideal MHD to plasmas

    Ideal MHD is only strictly applicable when:

    1. The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian.
    2. The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest.
    3. The length scales of interest are much longer than the ion skin depth and the Larmor radius perpendicular to the field, and long enough along the field to ignore Landau damping; the time scales of interest are much longer than the ion gyration time (the system is smooth and slowly evolving).

    Importance of resistivity

    In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot—so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds.

    Even in physical systems[10]—which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored—resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 109. The enhanced resistivity is usually the result of the formation of small scale structure like current sheets or fine scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat.

    Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation.

    When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's Law which models the collisional resistivity. Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity.
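The extra resistive term adds a diffusion contribution η∇²B to the induction equation. A minimal 1-D sketch (Python/NumPy; explicit scheme, periodic boundaries, illustrative parameters) showing a sharp current sheet spreading out:

```python
import numpy as np

def diffuse_b(b, eta, dx, dt, steps):
    """Advance dB/dt = eta * d2B/dx2, the diffusive part of the resistive
    induction equation (the ideal advection term is omitted for clarity).
    Explicit scheme: stable only for dt <= dx**2 / (2 * eta)."""
    b = b.copy()
    for _ in range(steps):
        # Second difference with periodic (wrap-around) boundaries
        lap = (np.roll(b, -1) - 2.0 * b + np.roll(b, 1)) / dx**2
        b += dt * eta * lap
    return b

# A step in B is a thin current sheet; resistivity smooths it out.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
b0 = np.where(x < 0.5, 1.0, -1.0)
b1 = diffuse_b(b0, eta=1e-3, dx=x[1] - x[0], dt=1e-3, steps=500)
```

Total flux is conserved by the scheme; only the profile shape changes, mimicking how numerical resistivity smears thin sheets in real MHD codes.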

    Structures in MHD systems

    Schematic view of the different current systems which shape the Earth's magnetosphere

    In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets.[11] These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across).[12] Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind.

    Waves

    The wave modes derived using MHD plasma theory are called magnetohydrodynamic waves or MHD waves. In general there are three MHD wave modes:

    • Pure (or oblique) Alfvén wave
    • Slow MHD wave
    • Fast MHD wave
    [Figure: phase velocity plotted against the angle θ between k and B, for the cases vA > vs and vA < vs.]

    All these waves have constant phase velocities for all frequencies, and hence there is no dispersion. At the limits when the angle between the wave propagation vector k and magnetic field B is either 0° (180°) or 90°, the wave modes are called:[13]

    • Sound wave: longitudinal; k ∥ B; phase velocity is the adiabatic sound speed; no field association; exists in any compressible, nonconducting fluid.
    • Alfvén wave: transverse; k ∥ B; phase velocity is the Alfvén speed; associated with B; also called the shear Alfvén wave, slow Alfvén wave, or torsional Alfvén wave.
    • Magnetosonic wave: longitudinal; k ⊥ B; phase velocity is √(vs² + vA²); associated with B and E; also called the compressional Alfvén wave, fast Alfvén wave, or magnetoacoustic wave.

    The phase velocity depends on the angle between the wave vector k and the magnetic field B. An MHD wave propagating at an arbitrary angle θ with respect to the time-independent or bulk field B0 will satisfy the dispersion relation

    ω² = k² vA² cos²θ,

    where

    vA = B0 / √(μ0 ρ)

    is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives

    ω²/k² = ½(vs² + vA²) ± ½√[(vs² + vA²)² − 4 vs² vA² cos²θ],

    where

    vs = √(γp/ρ)

    is the ideal gas speed of sound. The plus branch corresponds to the fast MHD wave mode and the minus branch corresponds to the slow MHD wave mode.
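The fast, slow, and shear Alfvén branches can be evaluated numerically from these relations. A sketch (Python/NumPy; symbols as in the text, inputs in any consistent velocity units):

```python
import numpy as np

def mhd_phase_speeds(theta, v_a, v_s):
    """Phase speeds of the three MHD modes for a wavevector at angle theta to B0.
    Shear Alfven:  v^2 = v_a^2 cos^2(theta).
    Fast / slow:   v^2 = 0.5*(v_a^2 + v_s^2)
                         +/- 0.5*sqrt((v_a^2 + v_s^2)^2 - 4 v_a^2 v_s^2 cos^2(theta))."""
    ct2 = np.cos(theta) ** 2
    s = v_a**2 + v_s**2
    disc = np.sqrt(s**2 - 4.0 * v_a**2 * v_s**2 * ct2)
    fast = np.sqrt(0.5 * (s + disc))
    slow = np.sqrt(0.5 * (s - disc))
    alfven = np.abs(v_a) * np.sqrt(ct2)
    return alfven, slow, fast
```

At θ = 0 the fast and slow speeds reduce to max(vA, vs) and min(vA, vs); at θ = 90° the fast speed becomes √(vA² + vs²) while the slow and Alfvén branches vanish, matching the limits in the table above.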

    The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present.

    MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (Coronal seismology).

    Extensions

    Resistive
    Resistive MHD describes magnetized fluids with finite electron diffusivity (η ≠ 0). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth-Solar magnetic interactions.
    Extended
    Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor Radii in the particle gyromotion, and electron inertia.
    Two-fluid
    Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. This description is more closely tied to Maxwell's equations as an evolution equation for the electric field exists.
    Hall
    In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory to plasmas.[14] His criticism concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall magnetohydrodynamics (HMHD) retains this term, and Ohm's law takes the form

    E + v × B = (J × B) / (n e),

    where n is the electron number density and e is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid.[15]
    Electron MHD
    Electron magnetohydrodynamics (EMHD) describes plasmas on small scales where electron motion is much faster than ion motion. The main effects are changes in the conservation laws, additional resistivity, and the importance of electron inertia. Many effects of electron MHD are similar to those of two-fluid MHD and Hall MHD. EMHD is especially important for z-pinches, magnetic reconnection, ion thrusters, neutron stars, and plasma switches.
    Collisionless
    MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation.[16]
    Reduced
    By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations.[17]
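A quick way to judge when the Hall extension is needed is to compare the Hall term (J × B)/(n e) against the ideal term v × B, estimating |J| ~ B/(μ0 L) from Ampère's law. A rough sketch (illustrative Python; all input values are assumed orders of magnitude):

```python
import math

E_CHARGE = 1.602e-19      # elementary charge [C]
MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

def hall_to_ideal_ratio(B, n, L, v):
    """Estimate |J x B / (n e)| / |v x B| with |J| ~ B / (mu0 * L).
    Hall physics matters when this approaches unity, i.e. when the
    gradient scale L shrinks toward the ion skin depth."""
    hall = B**2 / (MU0 * L * n * E_CHARGE)
    ideal = v * B
    return hall / ideal

# Thin current sheet (L ~ 1 km) vs. a global scale (L ~ 1000 km), with
# magnetotail-like numbers (assumed): B ~ 10 nT, n ~ 1e7 m^-3, v ~ 100 km/s.
r_thin = hall_to_ideal_ratio(B=1e-8, n=1e7, L=1e3, v=1e5)
r_global = hall_to_ideal_ratio(B=1e-8, n=1e7, L=1e6, v=1e5)
```

The ratio scales as 1/L, which is why Hall (and two-fluid) corrections show up first inside thin current sheets rather than on global scales.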

    Limitations

    Importance of kinetic effects

    Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics it is often qualitatively accurate and is therefore often the first model tried.

    Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas and electron runaway. In the case of ultra-high intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics.

    Applications

    Geophysics

    Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and liquid outer core.[18][19] Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field, and eddies are set up in it due to the Coriolis effect.[20] These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo.[21]

    Based on the MHD equations, Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. The simulation results are in good agreement with the observations as the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex.[22]

    Earthquakes

    Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra low frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California,[23] although a subsequent study indicates that this was little more than a sensor malfunction.[24] On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake.[25] Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes.

    Space Physics

    The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the Solar wind, and coronal mass ejections.

    MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause[26] (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets,[27] and geomagnetically induced currents.[28]

    One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites[29] and infrastructure, thus it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth.

    Astrophysics

    MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets.[30] Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinetic treatment to describe all the phenomena within the system (see Astrophysical plasma).[31][32]

    Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun.

    Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation.

    Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field.[33][34]

    Sensors

    Magnetohydrodynamic sensors are used for precision measurements of angular velocities in inertial navigation systems such as in aerospace engineering. Accuracy improves with the size of the sensor. The sensor is capable of surviving in harsh environments.[35]

    Engineering

    MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others).

    A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields, with no moving parts. The working principle involves electrification of the propellant (gas or water), which can then be accelerated by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical.
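In the idealized crossed-field geometry, the thrust reduces to the Lorentz force F = I·L·B on the current-carrying water column. A toy sketch (the numbers are assumed, roughly Yamato-1-scale, and not taken from any cited design):

```python
def mhd_thrust(current, channel_width, B):
    """Lorentz force F = I * L * B [N] for a current I [A] driven across a
    channel of width L [m] threaded by a perpendicular field B [T].
    Idealized: uniform fields, no losses, force purely along the channel."""
    return current * channel_width * B

# Assumed illustrative numbers: 2 kA across a 0.25 m channel in a 4 T field.
thrust = mhd_thrust(current=2000.0, channel_width=0.25, B=4.0)  # 2000 N
```

The low conductivity of seawater forces large currents (hence the superconducting magnets on Yamato-1) to reach even modest thrust, which is a key reason the approach remains impractical.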

    The first prototype of this kind of propulsion was built and tested in 1965 by Stewart Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system.[36] In the early 1990s, a foundation in Japan (Ship & Ocean Foundation (Minato-ku, Tokyo)) built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h.[37]

    MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties.[38] One major engineering problem was the failure of the wall of the primary-coal combustion chamber due to abrasion.

    In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design.[39]

    MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow.[40][41]

    Industrial MHD problems can be modeled using the open-source software EOF-Library.[42] Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting,[43] and liquid metal stirring by rotating permanent magnets.[44]

    Magnetic drug targeting

    An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field.[45]
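The guiding force on a weakly magnetizable particle comes from the field gradient: F = V (χ/μ0) ∇(B²/2). A minimal 1-D sketch (illustrative Python; the particle size, susceptibility, and field values are assumptions, and the χ ≪ 1 linear form is assumed):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

def magnetophoretic_force(volume, chi, B, dB_dx):
    """1-D magnetic gradient force F = V * (chi / mu0) * B * dB/dx [N] on a
    small particle of volume V [m^3] and susceptibility chi (chi << 1 assumed;
    strongly magnetic particles saturate and need a different model)."""
    return volume * (chi / MU0) * B * dB_dx

# A 100 nm-radius particle (assumed chi = 0.1) in a 0.5 T field with a
# 10 T/m gradient from a nearby permanent magnet:
vol = 4.0 / 3.0 * math.pi * (1e-7) ** 3
f = magnetophoretic_force(vol, chi=0.1, B=0.5, dB_dx=10.0)
```

Because the force scales with the field gradient rather than the field strength alone, magnet placement (which sets ∇B at the target site) matters as much as magnet strength.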

    See also

    References


  • Hunt, Julian C. R. (2016). Some aspects of magnetohydrodynamics (Thesis). University of Cambridge. doi:10.17863/CAM.14141.

  • Alfvén, H (1942). "Existence of electromagnetic-hydrodynamic waves". Nature. 150 (3805): 405–406. Bibcode:1942Natur.150..405A. doi:10.1038/150405d0. S2CID 4072220.

  • Alfvén, H. (1942). "On the cosmogony of the solar system III". Stockholms Observatoriums Annaler. 14: 9.1–9.29. Bibcode:1946StoAn..14....9A.

  • Dynamos in Nature by David P. Stern

  • "Research and development of electromagnetic flowmeter_Kailiu Instrument Co., Ltd".

  • Bellan, Paul Murray (2006). Fundamentals of plasma physics. Cambridge: Cambridge University Press. ISBN 0521528003.

  • Alfvén, Hannes (1943). "On the Existence of Electromagnetic-Hydrodynamic Waves". Arkiv för matematik, astronomi och fysik. 29B(2): 1–7.

  • Priest, Eric; Forbes, Terry (2000). Magnetic Reconnection: MHD Theory and Applications (First ed.). Cambridge University Press. ISBN 0-521-48179-1.

  • Rosenbluth, M. (April 1956). "Stability of the Pinch". OSTI 4329910.

  • Wesson, J.A. (1978). "Hydromagnetic stability of tokamaks". Nuclear Fusion. 18: 87–132. doi:10.1088/0029-5515/18/1/010. S2CID 122227433.

  • Pontin, David I.; Priest, Eric R. (2022). "Magnetic reconnection: MHD theory and modelling". Living Reviews in Solar Physics. 19 (1): 1. Bibcode:2022LRSP...19....1P. doi:10.1007/s41116-022-00032-9. S2CID 248673571.

  • Khabarova, O.; Malandraki, O.; Malova, H.; Kislov, R.; Greco, A.; Bruno, R.; Pezzi, O.; Servidio, S.; Li, Gang; Matthaeus, W.; Le Roux, J.; Engelbrecht, N. E.; Pecora, F.; Zelenyi, L.; Obridko, V.; Kuznetsov, V. (2021). "Current Sheets, Plasmoids and Flux Ropes in the Heliosphere". Space Science Reviews. 217 (3). doi:10.1007/s11214-021-00814-x. S2CID 231592434.

  • MHD waves [Oulu] Archived 2007-08-10 at the Wayback Machine

  • M. J. Lighthill, "Studies on MHD waves and other anisotropic wave motion," Phil. Trans. Roy. Soc., London, vol. 252A, pp. 397–430, 1960.

  • Witalis, E.A. (1986). "Hall Magnetohydrodynamics and Its Applications to Laboratory and Cosmic Plasma". IEEE Transactions on Plasma Science. PS-14 (6): 842–848. Bibcode:1986ITPS...14..842W. doi:10.1109/TPS.1986.4316632. S2CID 31433317.

  • W. Baumjohann and R. A. Treumann, Basic Space Plasma Physics, Imperial College Press, 1997

  • Kruger, S.E.; Hegna, C.C.; Callen, J.D. "Reduced MHD equations for low aspect ratio plasmas" (PDF). University of Wisconsin. Archived from the original (PDF) on 25 September 2015. Retrieved 27 April 2015.

  • "Why Earth's Inner and Outer Cores Rotate in Opposite Directions". Live Science. 19 September 2013.

  • "Earth's contrasting inner core rotation and magnetic field rotation linked". 7 October 2013.

  • "Geodynamo".

  • NOVA | Magnetic Storm | What Drives Earth's Magnetic Field? | PBS

  • Earth's Inconstant Magnetic Field – NASA Science

  • Fraser-Smith, Antony C.; Bernardi, A.; McGill, P. R.; Ladd, M. E.; Helliwell, R. A.; Villard Jr., O. G. (August 1990). "Low-Frequency Magnetic Field Measurements Near the Epicenter of the Ms 7.1 Loma Prieta Earthquake" (PDF). Geophysical Research Letters. 17 (9): 1465–1468. Bibcode:1990GeoRL..17.1465F. doi:10.1029/GL017i009p01465. ISSN 0094-8276. OCLC 1795290. Archived (PDF) from the original on 2022-10-09. Retrieved December 18, 2010.

  • Thomas, J. N.; Love, J. J.; Johnston, M. J. S. (April 2009). "On the reported magnetic precursor of the 1989 Loma Prieta earthquake". Physics of the Earth and Planetary Interiors. 173 (3–4): 207–215. Bibcode:2009PEPI..173..207T. doi:10.1016/j.pepi.2008.11.014.

  • KentuckyFC (December 9, 2010). "Spacecraft Saw ULF Radio Emissions over Haiti before January Quake". Physics arXiv Blog. Cambridge, Massachusetts: TechnologyReview.com. Retrieved December 18, 2010. Athanasiou, M; Anagnostopoulos, G; Iliopoulos, A; Pavlos, G; David, K (2010). "Enhanced ULF radiation observed by DEMETER two months around the strong 2010 Haiti earthquake". Natural Hazards and Earth System Sciences. 11 (4): 1091. arXiv:1012.1533. Bibcode:2011NHESS..11.1091A. doi:10.5194/nhess-11-1091-2011. S2CID 53456663.

  • Mukhopadhyay, Agnit; Jia, Xianzhe; Welling, Daniel T.; Liemohn, Michael W. (2021). "Global Magnetohydrodynamic Simulations: Performance Quantification of Magnetopause Distances and Convection Potential Predictions". Frontiers in Astronomy and Space Sciences. 8. doi:10.3389/fspas.2021.637197. ISSN 2296-987X.

  • Wiltberger, M.; Lyon, J. G.; Goodrich, C. C. (2003-07-01). "Results from the Lyon–Fedder–Mobarry global magnetospheric model for the electrojet challenge". Journal of Atmospheric and Solar-Terrestrial Physics. 65 (11): 1213–1222. doi:10.1016/j.jastp.2003.08.003. ISSN 1364-6826.

  • Welling, Daniel (2019-09-25), Gannon, Jennifer L.; Swidinsky, Andrei; Xu, Zhonghua (eds.), "Magnetohydrodynamic Models of B and Their Use in GIC Estimates", Geophysical Monograph Series (1 ed.), Wiley, pp. 43–65, doi:10.1002/9781119434412.ch3, ISBN 978-1-119-43434-4, retrieved 2023-03-10

  • "What is Space Weather ? - Space Weather". swe.ssa.esa.int. Retrieved 2023-03-10.

  • Kennel, C.F.; Arons, J.; Blandford, R.; Coroniti, F.; Israel, M.; Lanzerotti, L.; Lightman, A. (1985). "Perspectives on Space & Astrophysical Plasma Physics" (PDF). Unstable Current Systems and Plasma Instabilities in Astrophysics. 107: 537–552. Bibcode:1985IAUS..107..537K. doi:10.1007/978-94-009-6520-1_63. ISBN 978-90-277-1887-7. S2CID 117512943. Archived (PDF) from the original on 2022-10-09. Retrieved 2019-07-22.

  • Andersson, Nils; Comer, Gregory L. (2021). "Relativistic fluid dynamics: Physics for many different scales". Living Reviews in Relativity. 24 (1): 3. arXiv:2008.12069. Bibcode:2021LRR....24....3A. doi:10.1007/s41114-021-00031-6. S2CID 235631174.

  • Kunz, Matthew W. (9 November 2020). "Lecture Notes on Introduction to Plasma Astrophysics (Draft)" (PDF). astro.princeton.edu. Archived (PDF) from the original on 2022-10-09.

  • "Solar Activity".

  • Shibata, Kazunari; Magara, Tetsuya (2011). "Solar Flares: Magnetohydrodynamic Processes". Living Reviews in Solar Physics. 8 (1): 6. Bibcode:2011LRSP....8....6S. doi:10.12942/lrsp-2011-6. S2CID 122217405.

  • "Archived copy" (PDF). Archived from the original (PDF) on 2014-08-20. Retrieved 2014-08-19. D.Titterton, J.Weston, Strapdown Inertial Navigation Technology, chapter 4.3.2

  • "Run Silent, Run Electromagnetic". Time. 1966-09-23. Archived from the original on January 14, 2009.

  • Setsuo Takezawa et al. (March 1995) Operation of the Thruster for Superconducting Electromagnetohydrodynamic Propulsion Ship YAMATO 1

  • Partially Ionized Gases Archived 2008-09-05 at the Wayback Machine, M. Mitchner and Charles H. Kruger, Jr., Mechanical Engineering Department, Stanford University. See Ch. 9 "Magnetohydrodynamic (MHD) Power Generation", pp. 214–230.

  • Nguyen, N.T.; Wereley, S. (2006). Fundamentals and Applications of Microfluidics. Artech House.

  • Fujisaki, Keisuke (Oct 2000). "In-mold electromagnetic stirring in continuous casting". Conference Record of the 2000 IEEE Industry Applications Conference. Thirty-Fifth IAS Annual Meeting and World Conference on Industrial Applications of Electrical Energy (Cat. No.00CH37129). Industry Applications Conference. Vol. 4. IEEE. pp. 2591–2598. doi:10.1109/IAS.2000.883188. ISBN 0-7803-6401-5.

  • Kenjeres, S.; Hanjalic, K. (2000). "On the implementation of effects of Lorentz force in turbulence closure models". International Journal of Heat and Fluid Flow. 21 (3): 329–337. doi:10.1016/S0142-727X(00)00017-5.

  • Vencels, Juris; Råback, Peter; Geža, Vadims (2019-01-01). "EOF-Library: Open-source Elmer FEM and OpenFOAM coupler for electromagnetics and fluid dynamics". SoftwareX. 9: 68–72. Bibcode:2019SoftX...9...68V. doi:10.1016/j.softx.2019.01.007. ISSN 2352-7110.

  • Vencels, Juris; Jakovics, Andris; Geza, Vadims (2017). "Simulation of 3D MHD with free surface using Open-Source EOF-Library: levitating liquid metal in an alternating electromagnetic field". Magnetohydrodynamics. 53 (4): 643–652. doi:10.22364/mhd.53.4.5. ISSN 0024-998X.

  • Dzelme, V.; Jakovics, A.; Vencels, J.; Köppen, D.; Baake, E. (2018). "Numerical and experimental study of liquid metal stirring by rotating permanent magnets". IOP Conference Series: Materials Science and Engineering. 424 (1): 012047. Bibcode:2018MS&E..424a2047D. doi:10.1088/1757-899X/424/1/012047. ISSN 1757-899X.

  • Nacev, A.; Beni, C.; Bruno, O.; Shapiro, B. (2011-03-01). "The Behaviors of Ferro-Magnetic Nano-Particles In and Around Blood Vessels under Applied Magnetic Fields". Journal of Magnetism and Magnetic Materials. 323 (6): 651–668. Bibcode:2011JMMM..323..651N. doi:10.1016/j.jmmm.2010.09.008. ISSN 0304-8853. PMC 3029028. PMID 21278859.


    https://en.wikipedia.org/wiki/Magnetohydrodynamics

    https://en.wikipedia.org/wiki/Category:Plasma_physics


    In magnetohydrodynamics (MHD), shocks and discontinuities are transition layers where properties of a plasma change from one equilibrium state to another. The relation between the plasma properties on both sides of a shock or a discontinuity can be obtained from the conservative form of the MHD equations, assuming conservation of mass, momentum, energy, and of ∇ · B = 0.
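These conservation laws become jump ("Rankine–Hugoniot") conditions linking the upstream and downstream states. Two of the simplest can be checked directly (a minimal Python sketch; the full MHD set also constrains tangential momentum, energy, and the tangential electric field):

```python
import math

def mass_flux_conserved(rho1, u1, rho2, u2, tol=1e-9):
    """Mass conservation across the layer: rho1*u1 == rho2*u2, where u is the
    flow speed normal to the front, measured in the front's rest frame."""
    return math.isclose(rho1 * u1, rho2 * u2, rel_tol=tol)

def normal_field_continuous(Bn1, Bn2, tol=1e-9):
    """div(B) = 0 forces the normal magnetic field component to be continuous."""
    return math.isclose(Bn1, Bn2, rel_tol=tol, abs_tol=tol)

# Example: a compression ratio of 4 with a matching velocity drop conserves
# mass flux, as in a strong hydrodynamic shock.
ok = mass_flux_conserved(1.0, 4.0, 4.0, 1.0)
```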

    https://en.wikipedia.org/wiki/Shocks_and_discontinuities_(magnetohydrodynamics)
