Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits".[1] Human error has been cited as a primary cause or contributing factor in disasters and accidents in industries as diverse as nuclear power (e.g., the Three Mile Island accident), aviation, space exploration (e.g., the Space Shuttle Challenger disaster and Space Shuttle Columbia disaster), and medicine. Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.
https://en.wikipedia.org/wiki/Human_error
In cognitive psychology, chunking is a process by which small individual pieces of a set of information are bound together to create a meaningful whole in memory.[1] The chunks into which the information is grouped are meant to improve short-term retention of the material, thus bypassing the limited capacity of working memory and allowing working memory to be more efficient.[2][3][4] A chunk is a collection of basic units that are strongly associated with one another and have been grouped together and stored in a person's memory. These chunks can be retrieved easily due to their coherent grouping.[5] It is believed that individuals create higher-order cognitive representations of the items within the chunk. The items are more easily remembered as a group than as the individual items themselves. These chunks can be highly subjective because they rely on an individual's perceptions and past experiences, which are linked to the information set. The size of chunks generally ranges from two to six items but often differs based on language and culture.[6]
According to Johnson (1970), there are four main concepts associated with the memory process of chunking: chunk, memory code, decode, and recode.[7] The chunk, as defined above, is a sequence of to-be-remembered information that can be composed of adjacent terms. These items or information sets are stored in the same memory code. Recoding is the process by which one learns the code for a chunk, and decoding is the process by which the code is translated back into the information it represents.
The phenomenon of chunking as a memory mechanism is easily observed in the way individuals group numbers and other information in day-to-day life. For example, when recalling a number such as 12101946, if the digits are grouped as 12, 10, and 1946, a mnemonic is created for the number as a month, day, and year: it is stored as December 10, 1946, instead of as a string of digits. Similarly, another illustration of the limited capacity of working memory as suggested by George Miller can be seen in the following example: when recalling a mobile phone number such as 9849523450, we might break it into 98 495 234 50. Thus, instead of remembering 10 separate digits, beyond the putative "seven plus-or-minus two" memory span, we remember four groups of numbers.[8] An entire chunk can also be remembered simply by storing the beginning of the chunk in working memory, with long-term memory recovering the remainder.[4]
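To make the recoding step concrete, here is a minimal C++ sketch that splits a flat digit string into the four groups 98 495 234 50; the digit string and group sizes are taken from the example above, and everything else is illustrative:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Split a flat digit string into chunks of the given sizes,
// mirroring how a reader might recode 9849523450 as 98 495 234 50.
std::vector<std::string> chunk(const std::string& digits,
                               const std::vector<int>& sizes) {
    std::vector<std::string> chunks;
    std::size_t pos = 0;
    for (int size : sizes) {
        chunks.push_back(digits.substr(pos, size));
        pos += size;
    }
    return chunks;
}

int main() {
    // Four groups are far easier to hold in working memory
    // than ten separate digits.
    for (const auto& c : chunk("9849523450", {2, 3, 3, 2}))
        std::cout << c << ' ';
    std::cout << '\n';  // prints: 98 495 234 50
}
```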
Modality effect
A modality effect is present in chunking. That is, the mechanism used to convey the list of items to the individual affects how much "chunking" occurs.
Experimentally, it has been found that auditory presentation results in a larger amount of grouping in the responses of individuals than visual presentation does. Previous literature, such as George Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information (1956) has shown that the probability of recall of information is greater when the chunking strategy is used.[8] As stated above, the grouping of the responses occurs as individuals place them into categories according to their inter-relatedness based on semantic and perceptual properties. Lindley (1966) showed that since the groups produced have meaning to the participant, this strategy makes it easier for an individual to recall and maintain information in memory during studies and testing.[9] Therefore, when "chunking" is used as a strategy, one can expect a higher proportion of correct recalls.
https://en.wikipedia.org/wiki/Chunking_(psychology)
In computer programming, unreachable memory is a block of dynamically allocated memory where the program that allocated the memory no longer has any reachable pointer that refers to it. Similarly, an unreachable object is a dynamically allocated object that has no reachable reference to it. Informally, unreachable memory is dynamic memory that the program cannot reach directly, nor get to by starting at an object it can reach directly, and then following a chain of pointer references.
In dynamic memory allocation implementations that employ a garbage collector, objects are reclaimed after they become unreachable. The garbage collector is able to determine if an object is reachable; any object that is determined to no longer be reachable can be deallocated. Many programming languages (for example, Java, C#, D, Dylan, Julia) use automatic garbage collection.
In contrast, when memory becomes unreachable in dynamic memory allocation implementations that require explicit deallocation, the memory can no longer be explicitly deallocated. Unreachable memory in systems that use manual memory management results in a memory leak.
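As a minimal sketch of how memory becomes unreachable under manual memory management, the C++ fragment below allocates a block and then overwrites the only pointer to it; with no garbage collector, the block is leaked:

```cpp
int main() {
    // 'p' holds the only pointer to the first allocation.
    int* p = new int(42);

    // Overwriting 'p' makes the first block unreachable: no pointer
    // refers to it anymore, so 'delete' can never be applied to it.
    p = new int(7);   // the original int(42) is now leaked

    delete p;         // frees only the second allocation
    // Without a garbage collector, the unreachable block stays
    // allocated until the process exits: a memory leak.
}
```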
Some garbage collectors implement weak references. If an object is reachable only through either weak references or chains of references that include a weak reference, then the object is said to be weakly reachable. The garbage collector can treat a weakly reachable object graph as unreachable and deallocate it. (Conversely, references that prevent an object from being garbage collected are called strong references; a weakly reachable object is unreachable by any chain consisting only of strong references.) Some garbage-collected object-oriented languages, such as Java and Python, feature weak references. The Java package java.lang.ref supports soft, weak and phantom references, resulting in the additional object reachability states softly reachable and phantom reachable.
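The weak references described above are Java and Python features, but C++ offers an analogous mechanism in std::weak_ptr, which does not keep its target alive; a minimal sketch:

```cpp
#include <iostream>
#include <memory>

int main() {
    std::shared_ptr<int> strong = std::make_shared<int>(42);
    std::weak_ptr<int> weak = strong;   // does not keep the object alive

    // While a strong reference exists, lock() yields a usable pointer.
    if (auto p = weak.lock())
        std::cout << "alive: " << *p << '\n';

    strong.reset();  // drop the last strong reference; object is destroyed

    // Only a weak reference remains, so the object has been reclaimed.
    std::cout << std::boolalpha << "expired: " << weak.expired() << '\n';
}
```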
Unreachable memory in languages that do not reclaim it automatically, such as C, is often associated with software aging.
https://en.wikipedia.org/wiki/Unreachable_memory
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.[1][2][3] Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
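A minimal sketch of a discrete-time Markov chain in C++, where each draw depends only on the current state; the two "weather" states and the transition probabilities are invented for illustration:

```cpp
#include <iostream>
#include <random>

int main() {
    // Transition matrix for a hypothetical two-state weather chain:
    // row = current state, column = next state (0 = sunny, 1 = rainy).
    const double P[2][2] = {{0.9, 0.1},
                            {0.5, 0.5}};

    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> u(0.0, 1.0);

    int state = 0;  // start sunny
    for (int step = 0; step < 10; ++step) {
        // The Markov property: the draw depends only on 'state',
        // never on the earlier history of the chain.
        state = (u(rng) < P[state][0]) ? 0 : 1;
        std::cout << (state == 0 ? "sunny " : "rainy ");
    }
    std::cout << '\n';
}
```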
Markov chains have many applications as statistical models of real-world processes,[1][4][5][6] such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics.[7]
Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in Bayesian statistics, thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory and speech processing.[7][8][9]
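As a small illustration of the Markov chain Monte Carlo idea, the random-walk Metropolis sketch below draws approximate samples from an unnormalized standard normal density; the target density and proposal step are arbitrary choices for the example:

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Unnormalized target density: a standard normal.
double target(double x) { return std::exp(-0.5 * x * x); }

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> step(0.0, 1.0);    // proposal noise
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double x = 0.0, sum = 0.0, sumsq = 0.0;
    const int n = 100000;
    for (int i = 0; i < n; ++i) {
        double proposal = x + step(rng);
        // Accept with probability min(1, target(proposal)/target(x));
        // the chain's next state depends only on its current state.
        if (u(rng) < target(proposal) / target(x))
            x = proposal;
        sum += x;
        sumsq += x * x;
    }
    double mean = sum / n;
    // For a standard normal target, mean ~ 0 and variance ~ 1.
    std::cout << "mean ~ " << mean
              << ", variance ~ " << sumsq / n - mean * mean << '\n';
}
```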
The adjectives Markovian and Markov are used to describe something that is related to a Markov process.[1][10][11]
https://en.wikipedia.org/wiki/Markov_chain
Stacks in computing architectures are regions of memory where data is added or removed in a last-in-first-out (LIFO) manner.
In most modern computer systems, each thread has a reserved region of memory referred to as its stack. When a function executes, it may add some of its local state data to the top of the stack; when the function exits, it is responsible for removing that data from the stack. At a minimum, a thread's stack is used to store the return address provided by the caller, allowing return statements to transfer control back to the correct location.
The stack is often used to store variables of fixed length local to the currently active functions. Programmers may further choose to explicitly use the stack to store local data of variable length. If a region of memory lies on the thread's stack, that memory is said to have been allocated on the stack, i.e. stack-based memory allocation (SBMA). This is contrasted with heap-based memory allocation (HBMA). SBMA is often closely coupled with the function call stack.
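A minimal C++ sketch contrasting the two (identifiers are illustrative): the local array is allocated on the thread's stack and reclaimed automatically when the function returns, while the heap block's lifetime is independent of the call stack:

```cpp
#include <memory>

void demo() {
    // Stack-based allocation (SBMA): fixed-size local storage,
    // reclaimed automatically when 'demo' returns, as part of
    // popping this function's frame off the call stack.
    int local[64] = {0};
    local[0] = 1;

    // Heap-based allocation (HBMA): lifetime is independent of the
    // call stack; here a smart pointer releases it at scope exit.
    auto heap = std::make_unique<int[]>(64);
    heap[0] = 1;
}

int main() { demo(); }
```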
https://en.wikipedia.org/wiki/Stack-based_memory_allocation
Chain loading is a method used by computer programs to replace the currently executing program with a new program, using a common data area to pass information from the current program to the new program. It occurs in several areas of computing.
Chain loading is similar to the use of overlays. Unlike overlays, however, chain loading replaces the currently executing program in its entirety. Overlays usually replace only a portion of the running program. Like the use of overlays, the use of chain loading increases the I/O load of an application.
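Outside boot code, a user-space analogue can be sketched with the POSIX execv call, which replaces the currently executing program in its entirety, much as chain loading does; the target program /bin/ls is an arbitrary choice, and the argument vector plays the role of the common data area:

```cpp
#include <cstdio>
#include <unistd.h>

int main() {
    // argv acts as the shared data handed to the replacement program.
    char* const argv[] = {const_cast<char*>("/bin/ls"),
                          const_cast<char*>("-l"), nullptr};

    // On success, execv never returns: the current program's image
    // is replaced wholesale, much like a boot manager chain-loading
    // a target boot sector.
    execv("/bin/ls", argv);

    // Reached only if the replacement failed to load.
    std::perror("execv");
    return 1;
}
```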
Chain loading in boot manager programs
In operating system boot manager programs, chain loading is used to pass control from the boot manager to a boot sector. The target boot sector is loaded in from disk, replacing the in-memory boot sector from which the boot manager itself was bootstrapped, and executed.
https://en.wikipedia.org/wiki/Chain_loading
In computer graphics, a swap chain (also swapchain) is a series of virtual framebuffers used by the graphics card and graphics API for frame rate stabilization, stutter reduction, and several other purposes. Because of these benefits, many graphics APIs require the use of a swap chain. The swap chain usually exists in graphics memory, but it can exist in system memory as well. A swap chain with two buffers is a double buffer.
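A minimal, API-agnostic sketch of a two-buffer swap chain (a double buffer) in C++; the class and its interface are invented for illustration, since real graphics APIs manage these buffers themselves:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Two virtual framebuffers: draw into the back buffer while the
// front buffer is being displayed, then swap the roles.
class SwapChain {
    std::array<std::vector<std::uint32_t>, 2> buffers_;
    int front_ = 0;  // index of the buffer currently on screen
public:
    explicit SwapChain(std::size_t pixels)
        : buffers_{{std::vector<std::uint32_t>(pixels),
                    std::vector<std::uint32_t>(pixels)}} {}

    std::vector<std::uint32_t>& backBuffer() { return buffers_[1 - front_]; }

    // "Present": the finished back buffer becomes the front buffer.
    // Swapping indices instead of copying pixels avoids tearing the
    // image that is currently being scanned out.
    void present() { front_ = 1 - front_; }
};

int main() {
    SwapChain chain(640 * 480);
    chain.backBuffer()[0] = 0xFFFFFFFF;  // render into the back buffer
    chain.present();                     // swap buffers
}
```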
https://en.wikipedia.org/wiki/Swap_chain
Chaining is a type of intervention that aims to create associations between behaviors in a behavior chain.[1] A behavior chain is a sequence of behaviors that happen in a particular order, where the outcome of the previous step in the chain serves as a signal to begin the next step. In terms of behavior analysis, a behavior chain begins with a discriminative stimulus (SD), which sets the occasion for a behavior; the outcome of that behavior serves as a reinforcer for completing the previous step and as another SD to complete the next step. This sequence repeats itself until the last step in the chain is completed and a terminal reinforcer (the outcome of a behavior chain; e.g., for brushing one's teeth, the terminal reinforcer is having clean teeth) is achieved. For example, the chain in brushing one's teeth starts with seeing the toothbrush; this sets the occasion to get the toothpaste, which then leads to putting it on one's brush, brushing the sides and front of one's mouth, spitting out the toothpaste, rinsing one's mouth, and finally putting away one's toothbrush. To outline behavior chains, as done in this example, a task analysis is used.
Chaining is used to teach complex behaviors made of behavior chains that the learner does not currently have in their repertoire. Various steps of the chain may already be in the learner's repertoire, but the steps the learner does not know how to do must fall in the category of can't do rather than won't do (a deficit of skill, not of compliance). There are three types of chaining: forward chaining, backward chaining, and total task chaining (not to be confused with a task analysis).
Forward chaining
Forward chaining is a procedure in which a behavior chain is learned and completed by teaching the steps in chronological order, using prompting and fading. The teacher teaches the first step by presenting the discriminative stimulus to the learner.[2] Once the learner completes the first step in the chain, the teacher prompts them through the remaining steps. Once the learner is consistently completing the first step without prompting, the teacher has them complete the first and second steps, then prompts the learner through the remaining steps, and so on until the learner is able to complete the entire chain independently. Reinforcement is delivered for completion of each taught step, although the learner does not attain the terminal reinforcer (the outcome of the behavior chain) until they are prompted through the remaining steps.
Backward chaining
Backward chaining follows the same process as forward chaining but starts with the last step, using prompting and fading techniques to teach the last step first. The biggest benefit of using a backward chain is that the learner receives the terminal reinforcer (the outcome of the behavior chain) naturally. Backward chaining is therefore the preferred method when teaching skills to individuals with severe delays: they complete the last step and see the direct outcome of the chain immediately, rather than having to be prompted through the remaining steps to receive that reinforcement.
The teacher begins by prompting the learner through the entire chain, teaching the last behavior first. The teacher repeats this until the learner can perform the last step without prompting when the discriminative stimulus is presented. Once the learner can complete the last step consistently, the second-to-last step is taught while the prompts for the other steps continue. The teacher repeats this procedure of teaching the next step while prompting the remaining ones until the learner can perform all the steps without prompting.[2]
References
- Miltenberger, R. G. (2018). Behavior modification: Principles and procedures (6th ed.). Cengage Learning. pp. 207–208. ISBN 978-1-305-10939-1.
- Bancroft, S. L., Weiss, J. S., Libby, M. E., & Ahearn, W. H. (2011). A comparison of procedural variations in teaching behavior chains: Manual guidance, trainer completion, and no completion of untrained steps. Journal of Applied Behavior Analysis, 44(3), 559–569.
- Cooper, J. O., Heron, T. E., & Heward, W. L. (2014). Applied behavior analysis (pp. 434–452). Harlow: Pearson Education.
- Slocum, S. K., & Tiger, J. H. (2011). An assessment of the efficiency of and child preference for forward and backward chaining. Journal of Applied Behavior Analysis, 44(4), 793–805.
https://en.wikipedia.org/wiki/Chaining