Blog Archive

Friday, May 12, 2023

05-12-2023-1448 - Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that a modern day is longer by about 1.7 milliseconds than a century ago,[1] slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased about 2.3 milliseconds per century since the 8th century BCE.[2], photomask, rubylith, autocorrelation, maximum at zero, zeros and infinities, floating point format, HAM, pellicle, 1960, thin film, glue, particle, optics, nitrocellulose, film, magnet, IBM, 1978, trihydrogen cation, ozone, triangle wave, core dump, 'preparation programs for electronic' (joke), compiler, chain reaction, control rod, nuclear reactor core, catalytic converter, mask inspection, nanochannel glass materials, lithography, photoresist, contact lithography, inertial navigation system, timer, envelope, embedded system, graphene, coherency, order, land grid array, pin grid array, flip chip, wire bonding, socket, time constant, static, random access memory, microchip, flake ice, comparator, zero-crossing detection, xor gate, microcontroller, nanoscale, microscale, microscope, shear, rock, sparse matrix, emv, zero-ohm link, nmos logic, inverter (logic gate), pin grid array flip chip, flash memory, magnetic core, state match drive, double connect state match drive, continued consciousness (joke) [e.g. memory computer etc.], processor design, original chip set, life below zero (joke), flipper zero (joke), sign polarity charge infinite limit zero undefined not in system no access unlimited etc., errors, read only, core rope memory, rope memory, 1973, core dump, major error, magnetic core memory, read only memory, magnetic stripe card, strip, hard disk drive, wire recording, bubble memory, thin-film memory, magnetic ink character recognition, zero point energy, aether, drag, etc. (Draft)

https://en.wikipedia.org/wiki/Flipper_Zero

https://en.wikipedia.org/wiki/Life_Below_Zero

https://en.wikipedia.org/wiki/Rake_angle

https://en.wikipedia.org/wiki/ATmega328

https://en.wikipedia.org/wiki/List_of_Super_NES_enhancement_chips

https://en.wikipedia.org/wiki/EMV

https://en.wikipedia.org/wiki/Bank_Zero

https://en.wikipedia.org/wiki/MOS_Technology_6502

https://en.wikipedia.org/wiki/XOR_gate

https://en.wikipedia.org/wiki/Microcontroller

https://en.wikipedia.org/wiki/MuZero

https://en.wikipedia.org/wiki/Google_Tensor

https://en.wikipedia.org/wiki/Pin_grid_array#Flip_chip

https://en.wikipedia.org/wiki/Zero-ohm_link

https://en.wikipedia.org/wiki/CHIPS_and_Science_Act

https://en.wikipedia.org/wiki/Digital_signal_processor

https://en.wikipedia.org/wiki/Microprocessor

https://en.wikipedia.org/wiki/Socket_AM5

https://en.wikipedia.org/wiki/IEEE_754-1985

https://en.wikipedia.org/wiki/Digital_signal_processor

https://en.wikipedia.org/wiki/555_timer_IC

https://en.wikipedia.org/wiki/CHIP_(computer)

https://en.wikipedia.org/wiki/A_Haunted_House

https://en.wikipedia.org/wiki/Simba_Chips

https://en.wikipedia.org/wiki/MOS_Technology_VIC

https://en.wikipedia.org/wiki/RISC-V

https://en.wikipedia.org/wiki/Sparse_matrix

https://en.wikipedia.org/wiki/Mega_Man_Network_Transmission

https://en.wikipedia.org/wiki/ESP32#QFN_packaged_chip_and_module

https://en.wikipedia.org/wiki/ARM_Cortex-M

https://en.wikipedia.org/wiki/Flash_memory

https://en.wikipedia.org/wiki/Original_Chip_Set

https://en.wikipedia.org/wiki/NMOS_logic

https://en.wikipedia.org/wiki/Inverter_(logic_gate)

https://en.wikipedia.org/wiki/Zilog_Z80

https://en.wikipedia.org/wiki/Processor_design

https://en.wikipedia.org/wiki/Broadcom_Inc.

https://en.wikipedia.org/wiki/MOSFET

https://en.wikipedia.org/wiki/Chromatin_immunoprecipitation


https://en.wikipedia.org/wiki/Flash_memory

https://en.wikipedia.org/wiki/Original_Chip_Set

https://en.wikipedia.org/wiki/NMOS_logic

https://en.wikipedia.org/wiki/Inverter_(logic_gate)

https://en.wikipedia.org/wiki/EMV

https://en.wikipedia.org/wiki/Sparse_matrix

https://en.wikipedia.org/wiki/Pin_grid_array#Flip_chip

https://en.wikipedia.org/wiki/Zero-ohm_link

https://en.wikipedia.org/wiki/XOR_gate

https://en.wikipedia.org/wiki/Microcontroller

https://en.wikipedia.org/wiki/Comparator#Zero-crossing_detectors

 

https://en.wikipedia.org/wiki/RC_time_constant

https://en.wikipedia.org/wiki/Static_random-access_memory


https://en.wikipedia.org/wiki/BedZED

https://en.wikipedia.org/wiki/Microchip_Technology

https://en.wikipedia.org/wiki/Intel_8008

https://en.wikipedia.org/wiki/Flake_ice

 https://en.wikipedia.org/wiki/CPU_socket

https://en.wikipedia.org/wiki/LGA_1700


https://en.wikipedia.org/wiki/Land_grid_array

https://en.wikipedia.org/wiki/Pin_grid_array

https://en.wikipedia.org/wiki/Flip_chip

https://en.wikipedia.org/wiki/Wire_bonding

 

In materials science, hardness (antonym: softness) is a measure of the resistance to localized plastic deformation induced by either mechanical indentation or abrasion. In general, different materials differ in their hardness; for example hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin, or wood and common plastics. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can be contrasted with soft matter.

 

Measuring hardness

A Vickers hardness tester

There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another.
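
Because each scale is empirical, conversion between scales is normally done by table lookup and interpolation rather than by a formula. The Python sketch below is a minimal illustration of that lookup-and-interpolate idea; the Vickers-to-Rockwell C pairs in it are rough placeholder values of ours and the function name is our own, so treat it as a sketch only and use a standardized table (e.g., ASTM E140) for real measurements.

from bisect import bisect_left

# Illustrative (approximate) Vickers HV -> Rockwell C HRC pairs, ascending in HV.
# Placeholder values only; use a standardized conversion table for real work.
HV_TO_HRC = [(240, 20.0), (300, 30.0), (500, 49.0), (700, 60.0), (800, 64.0)]

def vickers_to_rockwell_c(hv: float) -> float:
    """Linearly interpolate an HRC value from the HV lookup table above."""
    xs = [hv_point for hv_point, _ in HV_TO_HRC]
    if hv <= xs[0]:
        return HV_TO_HRC[0][1]
    if hv >= xs[-1]:
        return HV_TO_HRC[-1][1]
    i = bisect_left(xs, hv)
    (x0, y0), (x1, y1) = HV_TO_HRC[i - 1], HV_TO_HRC[i]
    return y0 + (y1 - y0) * (hv - x0) / (x1 - x0)

print(round(vickers_to_rockwell_c(600), 1))  # interpolated HRC for HV 600 -> 54.5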

https://en.wikipedia.org/wiki/Hardness

https://en.wikipedia.org/wiki/Hardness_comparison

https://en.wikipedia.org/wiki/Sclerometer

https://en.wikipedia.org/wiki/Scleroscope

https://en.wikipedia.org/wiki/Category:Hardness_instruments

https://en.wikipedia.org/wiki/Schmidt_hammer

https://en.wikipedia.org/wiki/Compressive_strength


https://en.wikipedia.org/wiki/Plasticity_(physics)

https://en.wikipedia.org/wiki/Compressive_strength

https://en.wikipedia.org/wiki/Tension_(physics)

https://en.wikipedia.org/wiki/Fracture

https://en.wikipedia.org/wiki/Universal_testing_machine

https://en.wikipedia.org/wiki/Technical_standard

https://en.wikipedia.org/wiki/Structural_load

https://en.wikipedia.org/wiki/Structural_system

https://en.wikipedia.org/wiki/Strength_of_materials


https://en.wikipedia.org/wiki/Soft_error

https://en.wikipedia.org/wiki/Teradici


https://en.wikipedia.org/wiki/Graphene

https://en.wikipedia.org/wiki/Transistor%E2%80%93transistor_logic


https://en.wikipedia.org/w/index.php?limit=20&offset=420&profile=default&search=zero+chip&title=Special:Search&ns0=1&searchToken=awevw1zs79b9jhlbrds6347ql

https://en.wikipedia.org/wiki/Embedded_system

https://en.wikipedia.org/wiki/Matter_(standard)

https://en.wikipedia.org/wiki/AVR_microcontrollers

https://en.wikipedia.org/wiki/Code-division_multiple_access

https://en.wikipedia.org/wiki/ARM_architecture_family

https://en.wikipedia.org/wiki/Moore%27s_law

https://en.wikipedia.org/wiki/Department_for_Business,_Energy_and_Industrial_Strategy

https://en.wikipedia.org/wiki/Intel_iAPX_432

https://en.wikipedia.org/wiki/7400-series_integrated_circuits

https://en.wikipedia.org/wiki/Graphics_card

https://en.wikipedia.org/wiki/Digital_image_processing

https://en.wikipedia.org/wiki/Minimax#In_zero-sum_games

https://en.wikipedia.org/wiki/Socket_3

https://en.wikipedia.org/wiki/Electronic_circuit

https://en.wikipedia.org/wiki/LGA_1155

https://en.wikipedia.org/wiki/I486

https://en.wikipedia.org/wiki/7400-series_integrated_circuits

https://en.wikipedia.org/wiki/Instruction_set_architecture

https://en.wikipedia.org/wiki/Complementary_code_keying

https://en.wikipedia.org/wiki/Signetics_2650#Peripheral_chips

https://en.wikipedia.org/wiki/IMP-16

https://en.wikipedia.org/wiki/Thin_client

https://en.wikipedia.org/wiki/LGA_1150

https://en.wikipedia.org/wiki/CP_System

https://en.wikipedia.org/wiki/AMD_Lance_Am7990

https://en.wikipedia.org/wiki/Envelope_(music)

https://en.wikipedia.org/wiki/LGA_1151

https://en.wikipedia.org/wiki/Inertial_navigation_system

https://en.wikipedia.org/wiki/Timer

https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures

https://en.wikipedia.org/wiki/Chip_Authentication_Program

https://en.wikipedia.org/wiki/Parallax_Propeller

https://en.wikipedia.org/wiki/Bit_bucket

https://en.wikipedia.org/wiki/Digital_model_railway_control_systems

https://en.wikipedia.org/wiki/ECC_memory

https://en.wikipedia.org/wiki/IMEC#Brain-On-Chip_Research

https://en.wikipedia.org/wiki/Address_decoder

https://en.wikipedia.org/wiki/Real-time_clock

https://en.wikipedia.org/wiki/PIC_microcontrollers

https://en.wikipedia.org/wiki/Argonaut_Games

https://en.wikipedia.org/wiki/C_(programming_language)

https://en.wikipedia.org/wiki/USB_(Communications)

https://en.wikipedia.org/wiki/Secure_cryptoprocessor

https://en.wikipedia.org/wiki/OBD-II_PIDs

https://en.wikipedia.org/wiki/System_on_a_chip

https://en.wikipedia.org/wiki/Ceramic_capacitor

https://en.wikipedia.org/wiki/Casimir_effect

https://en.wikipedia.org/wiki/Pentium_FDIV_bug

https://en.wikipedia.org/wiki/16-bit_computing

https://en.wikipedia.org/wiki/Push-button_telephone

https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)

https://en.wikipedia.org/wiki/GMC_Hummer_EV

https://en.wikipedia.org/wiki/Pulse-width_modulation

https://en.wikipedia.org/wiki/Motorola_6800

https://en.wikipedia.org/wiki/Motorola_68000

https://en.wikipedia.org/wiki/Commodore_128

https://en.wikipedia.org/wiki/Charge-coupled_device

https://en.wikipedia.org/wiki/Call_of_Duty:_Advanced_Warfare

https://en.wikipedia.org/wiki/Motorola_6845

https://en.wikipedia.org/wiki/Schmitt_trigger

https://en.wikipedia.org/wiki/Maze-solving_algorithm

https://en.wikipedia.org/wiki/Steam_Machine_(computer)

https://en.wikipedia.org/wiki/MOS_Technology_6522

https://en.wikipedia.org/wiki/RCA_1802#Support_chips

https://en.wikipedia.org/wiki/Harvard_architecture

https://en.wikipedia.org/wiki/Motorola_6845

https://en.wikipedia.org/wiki/The_Victims%27_Game

https://en.wikipedia.org/wiki/Absolute_Zero_(Bruce_Hornsby_album)

https://en.wikipedia.org/wiki/POKEY

https://en.wikipedia.org/wiki/Grey_parrot

https://en.wikipedia.org/wiki/Intel_8086

https://en.wikipedia.org/wiki/Resistor

https://en.wikipedia.org/wiki/AirPort

https://en.wikipedia.org/wiki/Physical_design_(electronics)

https://en.wikipedia.org/wiki/List_of_open-source_mobile_phones

https://en.wikipedia.org/wiki/Samsung_Electronics

https://en.wikipedia.org/wiki/Single-molecule_real-time_sequencing

https://en.wikipedia.org/wiki/Intel_8253

https://en.wikipedia.org/wiki/Tap_and_die

https://en.wikipedia.org/wiki/Computer_cooling

https://en.wikipedia.org/wiki/Josephson_voltage_standard

https://en.wikipedia.org/wiki/Intel_Core#Core_i3

https://en.wikipedia.org/wiki/Comparison_of_EDA_software

https://en.wikipedia.org/wiki/1-bit_computing

https://en.wikipedia.org/wiki/IPhone

https://en.wikipedia.org/wiki/Card_security_code

https://en.wikipedia.org/wiki/Apple_Inc.

https://en.wikipedia.org/wiki/Intel_Management_Engine#Zero-touch_provisioning

https://en.wikipedia.org/wiki/Small_Form-factor_Pluggable

https://en.wikipedia.org/wiki/Pixel_6a

https://en.wikipedia.org/wiki/List_of_Beavis_and_Butt-Head_episodes

https://en.wikipedia.org/wiki/Fairchild_F8

https://en.wikipedia.org/wiki/Buck_converter

https://en.wikipedia.org/wiki/Scratchpad_memory

https://en.wikipedia.org/wiki/OBD-II_PIDs

https://en.wikipedia.org/wiki/Efficiently_updatable_neural_network

https://en.wikipedia.org/wiki/Hold-And-Modify#Original_Chip_Set_HAM_mode_%28HAM6%29

https://en.wikipedia.org/wiki/Bfloat16_floating-point_format#Zeros_and_infinities

https://en.wikipedia.org/wiki/Bfloat16_floating-point_format

https://en.wikipedia.org/wiki/Autocorrelation#Maximum_at_zero

https://en.wikipedia.org/wiki/Autocorrelation

https://en.wikipedia.org/wiki/Photomask

https://en.wikipedia.org/wiki/Rubylith

https://en.wikipedia.org/wiki/Offset_printing

https://en.wikipedia.org/wiki/Screen_printing

https://en.wikipedia.org/wiki/BoPET

https://en.wikipedia.org/wiki/Imperial_Chemical_Industries

https://en.wikipedia.org/wiki/Project_Echo#Echo_2

https://en.wikipedia.org/wiki/Optical_pattern_generator


History

For IC production in the 1960s and early 1970s, an opaque rubylith film laminated onto a transparent mylar sheet was used. The design of one layer was cut into the rubylith, initially by hand on an illuminated drafting table (later by machine (plotter)) and the unwanted rubylith was peeled off by hand, forming the master image of that layer of the chip. Increasingly complex and thus larger chips required larger and larger rubyliths, eventually even filling the wall of a room. (Eventually this whole process was replaced by the optical pattern generator to produce the master image). At this point the master image could be arrayed into a multi-chip image called a reticle. The reticle was originally a 10X image of the chip. 

https://en.wikipedia.org/wiki/Photomask

Photomask materials changed over time. Initially soda glass was used with silver halide opacity. Later, borosilicate glass and then fused silica were introduced to control expansion, along with chromium, which has better opacity to ultraviolet light. The original pattern generators have since been replaced by electron beam lithography and laser-driven systems, which generate reticles directly from the original computerized design. 

https://en.wikipedia.org/wiki/Photomask

https://en.wikipedia.org/wiki/Contact_lithography

https://en.wikipedia.org/wiki/Photoresist

 

Overview

A simulated photomask. The thicker features are the integrated circuit pattern that is to be printed on the wafer. The thinner features are assists that do not print themselves but help the integrated circuit print better out of focus. The zig-zag appearance of the photomask is the result of the optical proximity correction applied to it to create a better print.

Lithographic photomasks are typically transparent fused silica plates covered with a pattern defined with a chromium (Cr) or Fe2O3 metal absorbing film.[1] Photomasks are used at wavelengths of 365 nm, 248 nm, and 193 nm. Photomasks have also been developed for other forms of radiation such as 157 nm, 13.5 nm (EUV), X-ray, electrons, and ions; but these require entirely new materials for the substrate and the pattern film.[1]

A set of photomasks, each defining a pattern layer in integrated circuit fabrication, is fed into a photolithography stepper or scanner, and individually selected for exposure. In multi-patterning techniques, a photomask would correspond to a subset of the layer pattern.

In photolithography for the mass production of integrated circuit devices, the more correct term is usually photoreticle or simply reticle. In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern. This was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics.[2] As used in steppers and scanners, the reticle commonly contains only one layer of the designed VLSI circuit. (However, some photolithography fabrications utilize reticles with more than one layer placed side by side onto the same mask).

The pattern is projected and shrunk by four or five times onto the wafer surface.[3] To achieve complete wafer coverage, the wafer is repeatedly "stepped" from position to position under the optical column until full exposure is achieved.
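
As a rough illustration of this step-and-repeat coverage, the sketch below counts how many exposure fields tile a wafer. The 300 mm wafer diameter and 26 mm x 33 mm field size are assumptions of ours (a common full-field size), not values given in the text.

import math

# Assumed geometry (ours, not from the text): 300 mm wafer, 26 mm x 33 mm field.
WAFER_DIAMETER_MM = 300.0
FIELD_W_MM, FIELD_H_MM = 26.0, 33.0

def count_exposure_fields() -> int:
    """Count grid fields whose rectangle overlaps the wafer disc."""
    r = WAFER_DIAMETER_MM / 2.0
    nx = math.ceil(WAFER_DIAMETER_MM / FIELD_W_MM)
    ny = math.ceil(WAFER_DIAMETER_MM / FIELD_H_MM)
    start_x = -nx * FIELD_W_MM / 2.0   # centre the field grid on the wafer
    start_y = -ny * FIELD_H_MM / 2.0
    fields = 0
    for i in range(nx):
        for j in range(ny):
            x0, y0 = start_x + i * FIELD_W_MM, start_y + j * FIELD_H_MM
            x1, y1 = x0 + FIELD_W_MM, y0 + FIELD_H_MM
            # Point of the field rectangle closest to the wafer centre.
            cx = max(x0, min(0.0, x1))
            cy = max(y0, min(0.0, y1))
            if cx * cx + cy * cy < r * r:
                fields += 1
    return fields

print(count_exposure_fields())  # number of exposures needed under these assumptions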

Features 150 nm or below in size generally require phase-shifting to enhance the image quality to acceptable values. This can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method is attenuated phase-shifting, and is often considered a weak enhancement, requiring special illumination for the most enhancement, while the latter method is known as alternating-aperture phase-shifting, and is the most popular strong enhancement technique.
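
A toy one-dimensional picture of the alternating-aperture idea: the etched and unetched regions transmit amplitudes of opposite sign, the optics blur the amplitude, and the intensity (amplitude squared) dips to nearly zero at the edge. The Gaussian blur below is only a stand-in for the projection optics, not a lithography model, and the parameter values are arbitrary.

import numpy as np

# 1D cut across a phase edge (arbitrary length units).
x = np.linspace(-2.0, 2.0, 401)

# Alternating-aperture phase mask: amplitude +1 on one side of the etched
# edge and -1 on the other (a 180-degree phase step).
mask_amplitude = np.where(x < 0.0, 1.0, -1.0)

# Crude stand-in for the finite resolution of the projection optics:
# blur the amplitude with a normalized Gaussian kernel.
sigma = 0.3
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()
image_amplitude = np.convolve(mask_amplitude, kernel, mode="same")

# Intensity is the squared amplitude; it dips to nearly zero at the edge,
# which is what exposes as a very dark line on the wafer.
intensity = image_amplitude ** 2
print(f"minimum intensity near the phase edge: {intensity.min():.4f}")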

As leading-edge semiconductor features shrink, photomask features that are 4× larger must inevitably shrink as well. This could pose challenges since the absorber film will need to become thinner, and hence less opaque.[4] A 2005 study by IMEC found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness, using state-of-the-art photolithography tools.[5] One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying solely on phase-shifting for imaging.

The emergence of immersion lithography has a strong impact on photomask requirements. The commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film.[6]

EUV lithography

EUV photomasks work by reflecting light,[7] which is achieved by using multiple alternating layers of molybdenum and silicon.

Mask error enhancement factor (MEEF)

Leading-edge photomask images of the final chip patterns (pre-corrected) are magnified by a factor of four. This magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e., the dimension error on the wafer may be more than 1/4 the dimension error on the mask,[8] and the second is that the mask feature is becoming smaller, and the dimension tolerance is approaching a few nanometers. For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm (5% spec), which translates into 5 nm on the photomask. The variation of electron beam scattering in directly writing the photomask pattern can easily exceed this.[9][10]
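
The error budget in the paragraph above is simple arithmetic, sketched below in Python: at 4x magnification and MEEF = 1 a 1.25 nm wafer tolerance allows 5 nm of mask error, and a MEEF above one shrinks that allowance proportionally. The function names are our own.

def wafer_cd_error(mask_cd_error_nm: float, meef: float, magnification: float = 4.0) -> float:
    """Wafer-level CD error produced by a given mask-level CD error."""
    return meef * mask_cd_error_nm / magnification

def allowed_mask_error(wafer_tolerance_nm: float, meef: float, magnification: float = 4.0) -> float:
    """Largest mask-level CD error that still meets the wafer tolerance."""
    return wafer_tolerance_nm * magnification / meef

wafer_cd_nm = 25.0
mask_cd_nm = wafer_cd_nm * 4.0        # 100 nm mask feature at 4x magnification
wafer_tol_nm = 0.05 * wafer_cd_nm     # 1.25 nm (5% spec)

print(f"mask feature: {mask_cd_nm:.0f} nm")
print(f"allowed mask error at MEEF = 1: {allowed_mask_error(wafer_tol_nm, 1.0):.2f} nm")  # 5.00 nm
print(f"allowed mask error at MEEF = 2: {allowed_mask_error(wafer_tol_nm, 2.0):.2f} nm")  # 2.50 nm
print(f"wafer error from a 5 nm mask error at MEEF = 1.5: {wafer_cd_error(5.0, 1.5):.3f} nm")  # 1.875 nm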

Pellicles

The term "pellicle" is used to mean "film", "thin film", or "membrane." Beginning in the 1960s, thin film stretched on a metal frame, also known as a "pellicle", was used as a beam splitter for optical instruments. It has been used in a number of instruments to split a beam of light without causing an optical path shift due to its small film thickness. In 1978, Shea et al. at IBM patented a process to use the "pellicle" as a dust cover to protect a photomask or reticle. In the context of this entry, "pellicle" means "thin film dust cover to protect a photomask".

Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle – a thin transparent film stretched over a frame that is glued over one side of the photomask. The pellicle is far enough away from the mask patterns so that moderate-to-small sized particles that land on the pellicle will be too far out of focus to print. Although they are designed to keep particles away, pellicles become a part of the imaging system and their optical properties need to be taken into account. Pellicle membranes are made of nitrocellulose and are produced for various transmission wavelengths.[11]
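
A back-of-the-envelope check, under assumed numbers of ours (193 nm immersion optics, NA 1.35, a few millimetres of pellicle stand-off; none of these come from the text), of why a particle sitting on the pellicle is far too defocused to print: its equivalent defocus at the wafer is thousands of times the depth of focus.

# Assumed numbers (ours, for illustration only): ArF immersion optics and a
# typical pellicle stand-off. None of these values come from the text above.
WAVELENGTH_NM = 193.0
NA = 1.35
K2 = 1.0                     # process-dependent depth-of-focus factor
REDUCTION = 4.0              # mask-to-wafer reduction ratio
PELLICLE_STANDOFF_MM = 5.0   # particle-to-mask distance set by the pellicle frame

depth_of_focus_nm = K2 * WAVELENGTH_NM / NA**2

# Axial distances on the mask side scale to the wafer side by the reduction
# ratio squared (longitudinal magnification), so divide the stand-off by 16.
particle_defocus_nm = PELLICLE_STANDOFF_MM * 1e6 / REDUCTION**2

print(f"depth of focus   ~ {depth_of_focus_nm:.0f} nm")
print(f"particle defocus ~ {particle_defocus_nm:.0f} nm "
      f"(~{particle_defocus_nm / depth_of_focus_nm:.0f}x the depth of focus)")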

Pellicle Mounting Machine MLI

Leading commercial photomask manufacturers

The SPIE Annual Conference, Photomask Technology reports the SEMATECH Mask Industry Assessment which includes current industry analysis and the results of their annual photomask manufacturers survey. The following companies are listed in order of their global market share (2009 info):[12]

Major chipmakers such as Intel, Globalfoundries, IBM, NEC, TSMC, UMC, Samsung, and Micron Technology have their own large maskmaking facilities or joint ventures with the abovementioned companies.

The worldwide photomask market was estimated as $3.2 billion in 2012[13] and $3.1 billion in 2013. Almost half of the market was from captive mask shops (in-house mask shops of major chipmakers).[14]

The cost of creating a new mask shop for 180 nm processes was estimated in 2005 at $40 million, and for 130 nm at more than $100 million.[15]

The purchase price of a photomask, in 2006, could range from $250 to $100,000[16] for a single high-end phase-shift mask. As many as 30 masks (of varying price) may be required to form a complete mask set.

See also

References

  • Shubham, Kumar n (2021). Integrated circuit fabrication. Ankaj Gupta. Abingdon, Oxon. ISBN 978-1-000-39644-7. OCLC 1246513110.

  • Rizvi, Syed (2005). "1.3 The Technology History of Masks". Handbook of Photomask Manufacturing Technology. CRC Press. p. 728. ISBN 9781420028782.

  • Lithography experts back higher magnification in photomasks to ease challenges // EETimes 2000

  • Y. Sato et al., Proc. SPIE, vol. 4889, pp. 50-58 (2002).

  • M. Yoshizawa et al., Proc. SPIE, vol. 5853, pp. 243-251 (2005)

  • C. A. Mack et al., Proc. SPIE, vol. 5992, pp. 306-316 (2005)

  • "Archived copy" (PDF). Archived from the original (PDF) on 2017-06-02. Retrieved 2019-06-23.

  • E. Hendrickx et al., Proc. SPIE 7140, 714007 (2008).

  • C-J. Chen et al., Proc. SPIE 5256, 673 (2003).

  • W-H. Cheng and J. Farnsworth, Proc. SPIE 6607, 660724 (2007).

  • Chris A. Mack (November 2007). "Optical behavior of pellicles". Microlithography World. Retrieved 2008-09-13.

  • Hughes, Greg; Henry Yun (2009-10-01). "Mask industry assessment: 2009". Proceedings of SPIE. 7488 (1): 748803-748803-13. doi:10.1117/12.832722. ISSN 0277-786X.

  • Chamness, Lara (May 7, 2013). "Semiconductor Photomask Market: Forecast $3.5 Billion in 2014". SEMI Industry Research and Statistics. Retrieved 6 September 2014.

  • Tracy, Dan; Deborah Geiger (April 14, 2014). "SEMI Reports 2013 Semiconductor Photomask Sales of $3.1 Billion". SEMI. Retrieved 6 September 2014.

  • Weber, Charles M.; Berglund, C. Neil (February 9, 2005). "The Mask Shop's Perspective". An Analysis of the Economics of Photomask Manufacturing Part – 1: The Economic Environment (PDF). ISMT Mask Automation Workshop. p. 6. Archived from the original (PDF) on 2016-03-03. Capital-intensive industry. Investment levels….. – ~$40M for 'conventional' (180-nm node or above) – >$100M for 'advanced' (130-nm node and beyond)

  • Weber, C.M.; Berglund, C.N.; Gabella, P. (13 November 2006). "Mask Cost and Profitability in Photomask Manufacturing: An Empirical Analysis" (PDF). IEEE Transactions on Semiconductor Manufacturing. 19 (4). doi:10.1109/TSM.2006.883577; page 23, table 1.

    https://en.wikipedia.org/wiki/Photomask

    https://en.wikipedia.org/wiki/Nanochannel_glass_materials

    https://en.wikipedia.org/wiki/Carbon_nanotube_computer

    https://en.wikipedia.org/wiki/1-bit_computing

     

    https://en.wikipedia.org/wiki/Photographic_plate

    https://en.wikipedia.org/wiki/Category:Lithography_(microfabrication)

    https://en.wikipedia.org/wiki/Integrated_circuit_layout_design_protection

    https://en.wikipedia.org/wiki/Stepping_level

    https://en.wikipedia.org/wiki/Scale

    https://en.wikipedia.org/wiki/Scale_(analytical_tool)

    https://en.wikipedia.org/wiki/Scale_(map)

    https://en.wikipedia.org/wiki/Weighing_scale

    https://en.wikipedia.org/wiki/Scale_(ratio)


    Spatial scale is a specific application of the term scale for describing or categorizing (e.g. into orders of magnitude) the size of a space (hence spatial), or the extent of it at which a phenomenon or process occurs.[1] 
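
    A small sketch of the "categorizing into orders of magnitude" idea: bin a length in metres by the power of ten it falls into. The example labels are our own.

import math

def order_of_magnitude(length_m: float) -> int:
    """Power of ten that a positive length falls into, e.g. 3.5e-9 m -> -9."""
    if length_m <= 0:
        raise ValueError("length must be positive")
    return math.floor(math.log10(length_m))

# A few lengths spanning the scales touched on in this post.
for name, metres in [("transistor gate (~5 nm)", 5e-9),
                     ("photomask plate (~150 mm)", 0.15),
                     ("Earth's diameter (~12,742 km)", 1.2742e7)]:
    print(f"{name}: 10^{order_of_magnitude(metres)} m")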

    https://en.wikipedia.org/wiki/Spatial_scale

    https://en.wikipedia.org/wiki/Orders_of_magnitude_(length)

     

    https://en.wikipedia.org/wiki/Scale_(music)

    https://en.wikipedia.org/wiki/Fundamental_frequency

    https://en.wikipedia.org/wiki/Zero-based_numbering

    https://en.wikipedia.org/wiki/Ordinal_numeral

    https://en.wikipedia.org/wiki/Cardinal_numeral

    https://en.wikipedia.org/wiki/Order

    https://en.wikipedia.org/wiki/Precedence


    https://en.wikipedia.org/wiki/One-instruction_set_computer

    https://en.wikipedia.org/wiki/Reduced_instruction_set_computer


    https://en.wikipedia.org/wiki/Instruction_pipelining

    https://en.wikipedia.org/wiki/Load%E2%80%93store_architecture

    https://en.wikipedia.org/wiki/Load%E2%80%93store_unit

    https://en.wikipedia.org/wiki/Register%E2%80%93memory_architecture

    https://en.wikipedia.org/wiki/Addressing_mode

    https://en.wikipedia.org/wiki/Assembly_language

    https://en.wikipedia.org/wiki/Compiler


    https://en.wikipedia.org/wiki/The_Preparation_of_Programs_for_an_Electronic_Digital_Computer

    https://en.wikipedia.org/wiki/Core_dump

    https://en.wikipedia.org/wiki/Nuclear_reactor_core

    https://en.wikipedia.org/wiki/Boron

    https://en.wikipedia.org/wiki/Control_rod

    https://en.wikipedia.org/wiki/Neutron

    https://en.wikipedia.org/wiki/Chain_reaction

    https://en.wikipedia.org/wiki/Catalytic_converter

    https://en.wikipedia.org/wiki/Catalytic_cycle

    https://en.wikipedia.org/wiki/Catalysis

    https://en.wikipedia.org/wiki/Cascade

    https://en.wikipedia.org/wiki/Cycle


    https://en.wikipedia.org/wiki/Trihydrogen_cation

    https://en.wikipedia.org/wiki/Triatomic_molecule

    https://en.wikipedia.org/wiki/Triangle_wave

    https://en.wikipedia.org/wiki/Piecewise_linear_function


    https://en.wikipedia.org/wiki/Real-valued_function

    https://en.wikipedia.org/wiki/Sign

    https://en.wikipedia.org/wiki/Space

    https://en.wikipedia.org/wiki/Correspondence


    Trihydrogen cation (infobox summary)
    Space-filling model of the H3+ cation
    IUPAC name: Hydrogenonium
    Identifiers: 3D model (JSmol); ChEBI; ChemSpider: 249
    Chemical formula: H3+
    Molar mass: 3.02 g/mol
    Conjugate base: Dihydrogen
    Related anions: hydride
    Related cations: hydrogen ion, dihydrogen cation, hydrogen ion cluster
    Related compounds: trihydrogen
    Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).

    The trihydrogen cation or protonated molecular hydrogen is a cation (positive ion) with formula H3+, consisting of three hydrogen nuclei (protons) sharing two electrons.

    The trihydrogen cation is one of the most abundant ions in the universe. It is stable in the interstellar medium (ISM) due to the low temperature and low density of interstellar space. The role that H3+ plays in the gas-phase chemistry of the ISM is unparalleled by any other molecular ion.

    The trihydrogen cation is the simplest triatomic molecule, because its two electrons are the only valence electrons in the system. It is also the simplest example of a three-center two-electron bond system. 

    https://en.wikipedia.org/wiki/Trihydrogen_cation

     

    Ozone (/ˈoʊzoʊn/), or trioxygen, is an inorganic molecule with the chemical formula O3. It is a pale blue gas with a distinctively pungent smell. It is an allotrope of oxygen that is much less stable than the diatomic allotrope O2, breaking down in the lower atmosphere to O2 (dioxygen). Ozone is formed from dioxygen by the action of ultraviolet (UV) light and electrical discharges within the Earth's atmosphere. It is present in very low concentrations throughout the latter, with its highest concentration high in the ozone layer of the stratosphere, which absorbs most of the Sun's ultraviolet (UV) radiation.

    Ozone's odour is reminiscent of chlorine, and detectable by many people at concentrations of as little as 0.1 ppm in air. Ozone's O3 structure was determined in 1865. The molecule was later proven to have a bent structure and to be weakly diamagnetic. In standard conditions, ozone is a pale blue gas that condenses at cryogenic temperatures to a dark blue liquid and finally a violet-black solid. Ozone's instability with regard to more common dioxygen is such that both concentrated gas and liquid ozone may decompose explosively at elevated temperatures, physical shock, or fast warming to the boiling point.[5][6] It is therefore used commercially only in low concentrations.

    Ozone is a powerful oxidant (far more so than dioxygen) and has many industrial and consumer applications related to oxidation. This same high oxidizing potential, however, causes ozone to damage mucous and respiratory tissues in animals, and also tissues in plants, above concentrations of about 0.1 ppm. While this makes ozone a potent respiratory hazard and pollutant near ground level, a higher concentration in the ozone layer (from two to eight ppm) is beneficial, preventing damaging UV light from reaching the Earth's surface. 

    Nomenclature

    The trivial name ozone is the most commonly used and preferred IUPAC name. The systematic names 2λ4-trioxidiene and catena-trioxygen, valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively. The name ozone derives from ozein (ὄζειν), the Greek verb for smell, referring to ozone's distinctive smell.

    In appropriate contexts, ozone can be viewed as trioxidane with two hydrogen atoms removed, and as such, trioxidanylidene may be used as a systematic name, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the ozone molecule. In an even more specific context, this can also name the non-radical singlet ground state, whereas the diradical state is named trioxidanediyl.

    Trioxidanediyl (or ozonide) is used, non-systematically, to refer to the substituent group (-OOO-). Care should be taken to avoid confusing the name of the group for the context-specific name for the ozone given above.

     

    https://en.wikipedia.org/wiki/Ozone

    https://en.wikipedia.org/wiki/Ozone_layer

    https://en.wikipedia.org/wiki/Magnetosphere

    https://en.wikipedia.org/wiki/Stratosphere

    https://en.wikipedia.org/wiki/Troposphere

    https://en.wikipedia.org/wiki/Mesosphere

    https://en.wikipedia.org/wiki/Atmosphere_of_Earth

    https://en.wikipedia.org/wiki/Tropopause

    https://en.wikipedia.org/wiki/Polar_regions_of_Earth

    https://en.wikipedia.org/wiki/Geographical_pole

    https://en.wikipedia.org/wiki/Earth%27s_rotation

    https://en.wikipedia.org/wiki/Clockwise


    Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that a modern day is longer by about 1.7 milliseconds than a century ago,[1] slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased about 2.3 milliseconds per century since the 8th century BCE.[2] 
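
    To see why a millisecond-scale slowdown matters for timekeeping, the toy calculation below assumes the quoted 2.3 ms-per-century lengthening is constant and integrates the daily excess over roughly 27 centuries: the accumulated clock offset grows quadratically, into the range of hours. The constant rate is an assumption for illustration only, so the result is not a measured historical ΔT value.

# Toy calculation: assume each day lengthens at a constant 2.3 ms per century
# (the long-term average quoted above) and integrate the daily excess relative
# to a clock that counts fixed 86,400-second days.
RATE_MS_PER_DAY_PER_CENTURY = 2.3
DAYS_PER_CENTURY = 36525  # 365.25 days * 100 years

def accumulated_offset_seconds(centuries: float) -> float:
    """Offset after `centuries`, i.e. the integral 0.5 * rate * t^2, in seconds."""
    excess_ms = 0.5 * RATE_MS_PER_DAY_PER_CENTURY * centuries**2 * DAYS_PER_CENTURY
    return excess_ms / 1000.0

# The 8th century BCE is roughly 27 centuries ago.
print(f"{accumulated_offset_seconds(27) / 3600:.1f} hours")  # ~8.5 hours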

    https://en.wikipedia.org/wiki/Earth%27s_rotation

    https://en.wikipedia.org/wiki/Leap_second

    https://en.wikipedia.org/wiki/Coordinated_Universal_Time

     

    https://en.wikipedia.org/wiki/Opcode

    https://en.wikipedia.org/wiki/1-bit_computing

     

    https://en.wikipedia.org/wiki/Serial_computer

    https://en.wikipedia.org/wiki/Soft_microprocessor

    https://en.wikipedia.org/wiki/Massively_parallel

    https://en.wikipedia.org/wiki/In-memory_processing

    https://en.wikipedia.org/wiki/In-memory_database

    https://en.wikipedia.org/wiki/Grid_computing


    https://en.wikipedia.org/wiki/Grid_computing

    https://en.wikipedia.org/wiki/Network_interface_controller

    https://en.wikipedia.org/wiki/Massively_parallel

    https://en.wikipedia.org/wiki/Ethernet

    https://en.wikipedia.org/wiki/Loose_coupling

    https://en.wikipedia.org/wiki/Computer_network

    https://en.wikipedia.org/wiki/Parallel_computing

    https://en.wikipedia.org/wiki/Computer_cluster

    https://en.wikipedia.org/wiki/Grid_Systems

    https://en.wikipedia.org/wiki/Concurrency_(computer_science)

    https://en.wikipedia.org/wiki/Dial-up_Internet_access

    https://en.wikipedia.org/wiki/Grid_computing#See_also

     

    https://en.wikipedia.org/wiki/Transaction_processing

    https://en.wikipedia.org/wiki/High-throughput_computing

    https://en.wikipedia.org/wiki/Failure


    https://en.wikipedia.org/wiki/Zero-sum_game

    https://en.wikipedia.org/wiki/Prisoner%27s_dilemma

    https://en.wikipedia.org/wiki/Rational_agent

    https://en.wikipedia.org/wiki/Practical_reason

    https://en.wikipedia.org/wiki/Theoretical_philosophy

    https://en.wikipedia.org/wiki/Ethics

    https://en.wikipedia.org/wiki/Aesthetics

    https://en.wikipedia.org/wiki/Logical_truth


    Practical reason is understood by most philosophers as determining a plan of action. Thomistic ethics defines the first principle of practical reason as "good is to be done and pursued, and evil is to be avoided."[1] For Kant, practical reason has a law-abiding quality because the categorical imperative is understood to be binding one to one's duty rather than subjective preferences. Utilitarians tend to see reason as an instrument for the satisfactions of wants and needs.

    In classical philosophical terms, it is very important to distinguish three domains of human activity: theoretical reason, which investigates the truth of contingent events as well as necessary truths; practical reason, which determines whether a prospective course of action is worth pursuing; and productive or technical reason, which attempts to find the best means for a given end. Aristotle viewed philosophical activity as the highest activity of the human being and gave pride of place to metaphysics or wisdom. Since Descartes, practical judgment and reasoning have been treated with less respect because of the demand for greater certainty and an infallible method to justify beliefs. 

    https://en.wikipedia.org/wiki/Practical_reason

     

    https://en.wikipedia.org/wiki/Sentience

     

    https://en.wikipedia.org/wiki/Categorical_imperative

     

    https://en.wikipedia.org/wiki/Hypothetical_imperative

    https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value

     

     


    The categorical imperative (German: kategorischer Imperativ) is the central philosophical concept in the deontological moral philosophy of Immanuel Kant. Introduced in Kant's 1785 Groundwork of the Metaphysics of Morals, it is a way of evaluating motivations for action. It is best known in its original formulation: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."[1]

    According to Kant, sentient beings occupy a special place in creation, and morality can be summed up in an imperative, or ultimate commandment of reason, from which all duties and obligations derive. He defines an imperative as any proposition declaring a certain action (or inaction) to be necessary. Hypothetical imperatives apply to someone who wishes to attain certain ends. For example, "I must drink something to quench my thirst" or "I must study to pass this exam." A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that must be obeyed in all circumstances and is justified as an end in itself.

    https://en.wikipedia.org/wiki/Categorical_imperative

     

    Outline

    Pure practical reason

    The capacity that underlies deciding what is moral is called pure practical reason, which is contrasted with: pure reason, which is the capacity to know without having been shown; and mere practical reason, which allows us to interact with the world in experience.

    Hypothetical imperatives tell us which means best achieve our ends. They do not, however, tell us which ends we should choose. The typical dichotomy in choosing ends is between ends that are right (e.g., helping someone) and those that are good (e.g., enriching oneself). Kant considered the right superior to the good; to him, the latter was morally irrelevant. In Kant's view, a person cannot decide whether conduct is right, or moral, through empirical means. Such judgments must be reached a priori, using pure practical reason.[2]

    What action can be constituted as moral is universally reasoned by the categorical imperative, separate from observable experience. This distinction, that it is imperative that each action is not empirically reasoned by observable experience, has had wide social impact in the legal and political concepts of human rights and equality.[2]

    Possibility

    People see themselves as belonging to both the world of understanding and the world of sense. As a member of the world of understanding, a person's actions would always conform to the autonomy of the will. As a part of the world of sense, he would necessarily fall under the natural law of desires and inclinations. However, since the world of understanding contains the ground of the world of sense, and thus of its laws, his actions ought to conform to the autonomy of the will, and this categorical "ought" represents a synthetic proposition a priori.[3]

    Freedom and autonomy

    Kant viewed the human individual as a rationally self-conscious being with "impure" freedom of choice:

    The faculty of desire in accordance with concepts, in-so-far as the ground determining it to action lies within itself and not in its object, is called a faculty to "do or to refrain from doing as one pleases". Insofar as it is joined with one's consciousness of the ability to bring about its object by one's action it is called choice (Willkür); if it is not joined with this consciousness its act is called a wish. The faculty of desire whose inner determining ground, hence even what pleases it, lies within the subject's reason is called the will (Wille). The will is therefore the faculty of desire considered not so much in relation to action (as choice is) but rather in relation to the ground determining choice in action. The will itself, strictly speaking, has no determining ground; insofar as it can determine choice, it is instead practical reason itself. Insofar as reason can determine the faculty of desire as such, not only choice but also mere wish can be included under the will. That choice which can be determined by pure reason is called free choice. That which can be determined only by inclination (sensible impulse, stimulus) would be animal choice (arbitrium brutum). Human choice, however, is a choice that can indeed be affected but not determined by impulses, and is therefore of itself (apart from an acquired proficiency of reason) not pure but can still be determined to actions by pure will.

    — Immanuel Kant, Metaphysics of Morals 6:213–4

    For a will to be considered free, we must understand it as capable of affecting causal power without being caused to do so. However, the idea of lawless free will, meaning a will acting without any causal structure, is incomprehensible. Therefore, a free will must be acting under laws that it gives to itself.

    Although Kant conceded that there could be no conceivable example of free will, because any example would only show us a will as it appears to us—as a subject of natural laws—he nevertheless argued against determinism. He proposed that determinism is logically inconsistent: the determinist claims that because A caused B, and B caused C, that A is the true cause of C. Applied to a case of the human will, a determinist would argue that the will does not have causal power and that something outside the will causes the will to act as it does. But this argument merely assumes what it sets out to prove: viz. that the human will is part of the causal chain.

    Secondly, Kant remarks that free will is inherently unknowable. Since even a free person could not possibly have knowledge of their own freedom, we cannot use our failure to find a proof for freedom as evidence for a lack of it. The observable world could never contain an example of freedom because it would never show us a will as it appears to itself, but only a will that is subject to natural laws imposed on it. But we do appear to ourselves as free. Therefore, he argued for the idea of transcendental freedom—that is, freedom as a presupposition of the question "what ought I to do?" This is what gives us sufficient basis for ascribing moral responsibility: the rational and self-actualizing power of a person, which he calls moral autonomy: "the property the will has of being a law unto itself." 

    https://en.wikipedia.org/wiki/Categorical_imperative

     

    https://en.wikipedia.org/wiki/Utilitarianism

    https://en.wikipedia.org/wiki/Two-level_utilitarianism

    https://en.wikipedia.org/wiki/Utility

    https://en.wikipedia.org/wiki/Effective_altruism

     

    https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value

    https://en.wikipedia.org/wiki/Means_to_an_End

     

    The fact–value distinction is a fundamental epistemological distinction described between:[1]

    1. Statements of fact (positive or descriptive statements), based upon reason and physical observation, and which are examined via the empirical method.
    2. Statements of value (normative or prescriptive statements), which encompass ethics and aesthetics, and are studied via axiology.

    This barrier between fact and value, as construed in epistemology, implies it is impossible to derive ethical claims from factual arguments, or to defend the former using the latter.[2]

    The fact–value distinction is closely related to, and derived from, the is–ought problem in moral philosophy, characterized by David Hume.[3] The terms are often used interchangeably, though philosophical discourse concerning the is–ought problem does not usually encompass aesthetics.

    If values do not arise from facts, it opens questions about whether these have distinct spheres of origin. Evolutionary psychology implicitly challenges the fact–value distinction to the extent that cognitive psychology is positioned as foundational in human ethics, as a single sphere of origin. Among the new atheists, who often lean toward evolutionary psychology, Sam Harris in particular endorses and promotes a science of morality as outlined in his book The Moral Landscape (2010).

    Religion and science

    In his essay Science as a Vocation (1917) Max Weber draws a distinction between facts and values. He argues that facts can be determined through the methods of a value-free, objective social science, while values are derived through culture and religion, the truth of which cannot be known through science. He writes, "it is one thing to state facts, to determine mathematical or logical relations or the internal structure of cultural values, while it is another thing to answer questions of the value of culture and its individual contents and the question of how one should act in the cultural community and in political associations. These are quite heterogeneous problems."[4] In his 1919 essay Politics as a Vocation, he argues that facts, like actions, do not in themselves contain any intrinsic meaning or power: "any ethic in the world could establish substantially identical commandments applicable to all relationships."[5]

    To MLK Jr., "Science deals mainly with facts; religion deals mainly with values. The two are not rivals. They are complementary."[6][7][8] He stated that science keeps religion from "crippling irrationalism and paralyzing obscurantism", whereas religion prevents science from "falling into ... obsolete materialism and moral nihilism."[9]

    Albert Einstein remarked that

    the realms of religion and science in themselves are clearly marked off from each other, nevertheless there exist between the two strong reciprocal relationships and dependencies. Though religion may be that which determines the goal, it has, nevertheless, learned from science, in the broadest sense, what means will contribute to the attainment of the goals it has set up. But science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion. To this there also belongs the faith in the possibility that the regulations valid for the world of existence are rational, that is, comprehensible to reason. I cannot conceive of a genuine scientist without that profound faith. The situation may be expressed by an image: science without religion is lame, religion without science is blind.[10]

    David Hume's skepticism

    In A Treatise of Human Nature (1739), David Hume discusses the problems in grounding normative statements in positive statements, that is in deriving ought from is. It is generally regarded that Hume considered such derivations untenable, and his 'is–ought' problem is considered a principal question of moral philosophy.[11]

    Hume shared a political viewpoint with early Enlightenment philosophers such as Thomas Hobbes (1588–1679) and John Locke (1632–1704). Specifically, Hume, at least to some extent, argued that religious and national hostilities that divided European society were based on unfounded beliefs. In effect, Hume contended that such hostilities are not found in nature, but are a human creation, depending on a particular time and place, and thus unworthy of mortal conflict.

    Prior to Hume, Aristotelian philosophy maintained that all actions and causes were to be interpreted teleologically. This rendered all facts about human action examinable under a normative framework defined by cardinal virtues and capital vices. "Fact" in this sense was not value-free, and the fact-value distinction was an alien concept. The decline of Aristotelianism in the 16th century set the framework in which those theories of knowledge could be revised.[12]

    Naturalistic fallacy

    The fact–value distinction is closely related to the naturalistic fallacy, a topic debated in ethical and moral philosophy. G. E. Moore believed it essential to all ethical thinking.[13] However, contemporary philosophers like Philippa Foot have called into question the validity of such assumptions. Others, such as Ruth Anna Putnam, argue that even the most "scientific" of disciplines are affected by the "values" of those who research and practice the vocation.[14][15] Nevertheless, the difference between the naturalistic fallacy and the fact–value distinction is derived from the manner in which modern social science has used the fact–value distinction, and not the strict naturalistic fallacy to articulate new fields of study and create academic disciplines.

    Moralistic fallacy

    The fact–value distinction is also closely related to the moralistic fallacy, an invalid inference of factual conclusions from purely evaluative premises. For example, an invalid inference "Because everybody ought to be equal, there are no innate genetic differences between people" is an instance of the moralistic fallacy. Whereas with the naturalistic fallacy one attempts to move from an "is" to an "ought" statement, with the moralistic fallacy one attempts to move from an "ought" to an "is" statement.

    Nietzsche's table of values

    Friedrich Nietzsche (1844–1900) in Thus Spoke Zarathustra said that a table of values hangs above every great people. Nietzsche argues that what is common among different peoples is the act of esteeming, of creating values, even if the values are different from one people to the next. Nietzsche asserts that what made people great was not the content of their beliefs, but the act of valuing. Thus the values a community strives to articulate are not as important as the collective will to act on those values.[16] The willing is more essential than the intrinsic worth of the goal itself, according to Nietzsche.[17] "A thousand goals have there been so far," says Zarathustra, "for there are a thousand peoples. Only the yoke for the thousand necks is still lacking: the one goal is lacking. Humanity still has no goal." Hence, the title of the aphorism, "On The Thousand And One Goals." The idea that one value-system is no more worthy than the next, although it may not be directly ascribed to Nietzsche, has become a common premise in modern social science. Max Weber and Martin Heidegger absorbed it and made it their own. It shaped their philosophical endeavor, as well as their political understanding.

    Criticisms

    Virtually all modern philosophers affirm some sort of fact–value distinction, insofar as they distinguish between science and "valued" disciplines such as ethics, aesthetics, or the fine arts. However, philosophers such as Hilary Putnam argue that the distinction between fact and value is not as absolute as Hume envisioned.[18] Philosophical pragmatists, for instance, believe that true propositions are those that are useful or effective in predicting future (empirical) states of affairs.[19] Far from being value-free, the pragmatists' conception of truth or facts directly relates to an end (namely, empirical predictability) that human beings regard as normatively desirable. Other thinkers, such as N. Hanson among others, talk of theory-ladenness, and reject an absolutist fact–value distinction by contending that our senses are imbued with prior conceptualizations, making it impossible to have any observation that is totally value-free, which is how Hume and the later positivists conceived of facts.

    Functionalist counterexamples

    Several counterexamples have been offered by philosophers claiming to show that there are cases when an evaluative statement does indeed logically follow from a factual statement. A. N. Prior argues, from the statement "He is a sea captain," that it logically follows, "He ought to do what a sea captain ought to do."[20] Alasdair MacIntyre argues, from the statement "This watch is grossly inaccurate and irregular in time-keeping and too heavy to carry about comfortably," that the evaluative conclusion validly follows, "This is a bad watch."[21] John Searle argues, from the statement "Jones promised to pay Smith five dollars," that it logically follows that "Jones ought to pay Smith five dollars", such that the act of promising by definition places the promiser under obligation.[22]

    Moral realism

    Philippa Foot adopts a moral realist position, criticizing the idea that when evaluation is superposed on fact there has been a "committal in a new dimension".[23] She introduces, by analogy, the practical implications of using the word "injury". Not just anything counts as an injury. There must be some impairment. When we suppose a man wants the things the injury prevents him from obtaining, haven’t we fallen into the old naturalist fallacy?

    It may seem that the only way to make a necessary connection between 'injury' and the things that are to be avoided, is to say that it is only used in an 'action-guiding sense' when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all.[24]

    Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness.

    Philosophers who have supposed that actual action was required if 'good' were to be used in a sincere evaluation have got into difficulties over weakness of will, and they should surely agree that enough has been done if we can show that any man has reason to aim at virtue and avoid vice. But is this impossibly difficult if we consider the kinds of things that count as virtue and vice? Consider, for instance, the cardinal virtues, prudence, temperance, courage and justice. Obviously any man needs prudence, but does he not also need to resist the temptation of pleasure when there is harm involved? And how could it be argued that he would never need to face what was fearful for the sake of some good? It is not obvious what someone would mean if he said that temperance or courage were not good qualities, and this not because of the 'praising' sense of these words, but because of the things that courage and temperance are.[25]

    Of Weber

    Philosopher Leo Strauss criticizes Weber for attempting to isolate reason completely from opinion. Strauss acknowledges the philosophical trouble of deriving "ought" from "is", but argues that what Weber has done in his framing of this puzzle is in fact deny altogether that the "ought" is within reach of human reason.[26]: 66  Strauss worries that if Weber is right, we are left with a world in which the knowable truth is a truth that cannot be evaluated according to ethical standards. This conflict between ethics and politics would mean that there can be no grounding for any valuation of the good, and without reference to values, facts lose their meaning.[26]: 72 

    See also

    https://en.wikipedia.org/wiki/Fact%E2%80%93value_distinction


    In moral philosophy, instrumental and intrinsic value are the distinction between what is a means to an end and what is an end in itself.[1] Things are deemed to have instrumental value if they help one achieve a particular end; intrinsic values, by contrast, are understood to be desirable in and of themselves. A tool or appliance, such as a hammer or washing machine, has instrumental value because it helps you pound in a nail or clean your clothes. Happiness and pleasure are typically considered to have intrinsic value insofar as asking why someone would want them makes little sense: they are desirable for their own sake irrespective of their possible instrumental value. The classic names instrumental and intrinsic were coined by sociologist Max Weber, who spent years studying good meanings people assigned to their actions and beliefs.

    The Oxford Handbook of Value Theory provides three modern definitions of intrinsic and instrumental value:

    1. They are "the distinction between what is good 'in itself' and what is good 'as a means'."[1]: 14 
    2. "The concept of intrinsic value has been glossed variously as what is valuable for its own sake, in itself, on its own, in its own right, as an end, or as such. By contrast, extrinsic value has been characterized mainly as what is valuable as a means, or for something else's sake."[1]: 29 
    3. "Among nonfinal values, instrumental value—intuitively, the value attaching a means to what is finally valuable—stands out as a bona fide example of what is not valuable for its own sake."[1]: 34 

    When people judge efficient means and legitimate ends at the same time, both can be considered as good. However, when ends are judged separately from means, it may result in a conflict: what works may not be right; what is right may not work. Separating the criteria contaminates reasoning about the good. Philosopher John Dewey argued that separating criteria for good ends from those for good means necessarily contaminates recognition of efficient and legitimate patterns of behavior. Economist J. Fagg Foster explained why only instrumental value is capable of correlating good ends with good means. Philosopher Jacques Ellul argued that instrumental value has become completely contaminated by inhuman technological consequences, and must be subordinated to intrinsic supernatural value. Philosopher Anjan Chakravartty argued that instrumental value is only legitimate when it produces good scientific theories compatible with the intrinsic truth of mind-independent reality.

    The word value is ambiguous: it is both a verb and a noun, and it denotes both a criterion of judgment and the result of applying that criterion.[2][3]: 37–44  To reduce ambiguity, throughout this article the noun value names a criterion of judgment, while valuation names an object judged valuable. The plural values identifies collections of valuations without identifying the criterion applied.

    Max Weber

    The classic names instrumental and intrinsic were coined by sociologist Max Weber, who spent years studying good meanings people assigned to their actions and beliefs. According to Weber, "[s]ocial action, like all action, may be" judged as:[4]: 24–5 

    1. Instrumental rational (zweckrational): action "determined by expectations as to the behavior of objects in the environment and of other human beings; these expectations are used as 'conditions' or 'means' for the attainment of the actor's own rationally pursued and calculated ends."
    2. Value-rational (wertrational): action "determined by a conscious belief in the value for its own sake of some ethical, aesthetic, religious, or other form of behavior, independently of its prospects of success."

    Weber's original definitions also include a comment showing his doubt that conditionally efficient means can achieve unconditionally legitimate ends:[4]: 399–400 

    [T]he more the value to which action is oriented is elevated to the status of an absolute [intrinsic] value, the more "irrational" in this [instrumental] sense the corresponding action is. For the more unconditionally the actor devotes himself to this value for its own sake…the less he is influenced by considerations of the [conditional] consequences of his action.

    John Dewey

    John Dewey thought that belief in intrinsic value was a mistake. Although the application of instrumental value is easily contaminated, it is, on his view, the only means humans have to coordinate group behavior efficiently and legitimately.

    Every social transaction has good or bad consequences depending on prevailing conditions, which may or may not be satisfied. Continuous reasoning adjusts institutions to keep them working on the right track as conditions change. Changing conditions demand changing judgments to maintain efficient and legitimate correlation of behavior.[5]

    For Dewey, "restoring integration and cooperation between man's beliefs about the world in which he lives and his beliefs about the values [valuations] and purposes that should direct his conduct is the deepest problem of modern life."[6]: 255  Moreover, a "culture which permits science to destroy traditional values [valuations] but which distrusts its power to create new ones is a culture which is destroying itself."[7]

    Dewey agreed with Max Weber that people talk as if they apply instrumental and intrinsic criteria. He also agreed with Weber's observation that intrinsic value is problematic in that it ignores the relationship between context and consequences of beliefs and behaviors. Both men questioned how anything valued intrinsically "for its own sake" can have operationally efficient consequences. However, Dewey rejects the common belief—shared by Weber—that supernatural intrinsic value is necessary to show humans what is permanently "right." He argues that both efficient and legitimate qualities must be discovered in daily life:

    Man who lives in a world of hazards…has sought to attain [security] in two ways. One of them began with an attempt to propitiate the [intrinsic] powers which environ him and determine his destiny. It expressed itself in supplication, sacrifice, ceremonial rite and magical cult.… The other course is to invent [instrumental] arts and by their means turn the powers of nature to account.…[6]: 3  [F]or over two thousand years, the…most influential and authoritatively orthodox tradition…has been devoted to the problem of a purely cognitive certification (perhaps by revelation, perhaps by intuition, perhaps by reason) of the antecedent immutable reality of truth, beauty, and goodness.… The crisis in contemporary culture, the confusions and conflicts in it, arise from a division of authority. Scientific [instrumental] inquiry seems to tell one thing, and traditional beliefs [intrinsic valuations] about ends and ideals that have authority over conduct tell us something quite different.… As long as the notion persists that knowledge is a disclosure of [intrinsic] reality…prior to and independent of knowing, and that knowing is independent of a purpose to control the quality of experienced objects, the failure of natural science to disclose significant values [valuations] in its objects will come as a shock.[6]: 43–4 

    Finding no evidence of "antecedent immutable reality of truth, beauty, and goodness," Dewey argues that both efficient and legitimate goods are discovered in the continuity of human experience:[6]: 114, 172–3, 197 

    Dewey's ethics replaces the goal of identifying an ultimate end or supreme principle that can serve as a criterion of ethical evaluation with the goal of identifying a method for improving our value judgments. Dewey argued that ethical inquiry is of a piece with empirical inquiry more generally.… This pragmatic approach requires that we locate the conditions of warrant for our value judgments in human conduct itself, not in any a priori fixed reference point outside of conduct, such as in God's commands, Platonic Forms, pure reason, or "nature," considered as giving humans a fixed telos [intrinsic end].[8]

    Philosophers label a "fixed reference point outside of conduct" a "natural kind," and presume it to have eternal existence knowable in itself without being experienced. Natural kinds are intrinsic valuations presumed to be "mind-independent" and "theory-independent."[9]

    Dewey grants the existence of "reality" apart from human experience, but denies that it is structured as intrinsically real natural kinds.[6]: 122, 196  Instead, he sees reality as a functional continuity of ways-of-acting, rather than as interaction among pre-structured intrinsic kinds. Humans may intuit static kinds and qualities, but such private experience cannot warrant inferences or valuations about mind-independent reality. Reports or maps of perceptions or intuitions are never equivalent to the territories mapped.[10]

    People reason daily about what they ought to do and how they ought to do it. Inductively, they discover sequences of efficient means that achieve consequences. Once an end is reached—a problem solved—reasoning turns to new conditions of means-end relations. Valuations that ignore consequence-determining conditions cannot coordinate behavior to solve real problems; they contaminate rationality.

    Value judgments have the form: if one acted in a particular way (or valued this object), then certain consequences would ensue, which would be valued. The difference between an apparent and a real good [means or end], between an unreflectively and a reflectively valued good, is captured by its value [valuation of goodness] not just as immediately experienced in isolation, but in view of its wider consequences and how they are valued.… So viewed, value judgments are tools for discovering how to live a better life, just as scientific hypotheses are tools for uncovering new information about the world.[8]

    In brief, Dewey rejects the traditional belief that judging things as good in themselves, apart from existing means-end relations, can be rational. The sole rational criterion is instrumental value. Each valuation is conditional but, cumulatively, all are developmental—and therefore socially-legitimate solutions of problems. Competent instrumental valuations treat the "function of consequences as necessary tests of the validity of propositions, provided these consequences are operationally instituted and are such as to resolve the specific problems evoking the operations."[11][12]: 29–31 

    J. Fagg Foster

    John Fagg Foster made John Dewey's rejection of intrinsic value more operational by showing that competent use of instrumental value rejects the legitimacy of utilitarian ends—satisfaction of whatever ends individuals adopt. Competent use requires recognizing developmental sequences of means and ends.[13][14][15]: 40–8

    Utilitarians hold that individual wants cannot be rationally justified; they are intrinsically worthy subjective valuations and cannot be judged instrumentally. This belief supports philosophers who hold that facts ("what is") can serve as instrumental means for achieving ends, but cannot authorize ends ("what ought to be"). This fact-value distinction creates what philosophers label the is-ought problem: wants are intrinsically fact-free, good in themselves; whereas efficient tools are valuation-free, usable for good or bad ends.[15]: 60  In modern North-American culture, this utilitarian belief supports the libertarian assertion that every individual's intrinsic right to satisfy wants makes it illegitimate for anyone—but especially governments—to tell people what they ought to do.[16]

    Foster finds that the is-ought problem is a useful place to attack the irrational separation of good means from good ends. He argues that want-satisfaction ("what ought to be") cannot serve as an intrinsic moral compass because 'wants' are themselves consequences of transient conditions.

    [T]he things people want are a function of their social experience, and that is carried on through structural institutions that specify their activities and attitudes. Thus the pattern of people's wants takes visible form partly as a result of the pattern of the institutional structure through which they participate in the economic process. As we have seen, to say that an economic problem exists is to say that part of the particular patterns of human relationships has ceased or failed to provide the effective participation of its members. In so saying, we are necessarily in the position of asserting that the instrumental efficiency of the economic process is the criterion of judgment in terms of which, and only in terms of which, we may resolve economic problems.[17]

    Since 'wants' are shaped by social conditions, they must be judged instrumentally; they arise in problematic situations when habitual patterns of behavior fail to maintain instrumental correlations.[15]: 27 

    Examples

    Foster uses homely examples to support his thesis that problematic situations ("what is") contain the means for judging legitimate ends ("what ought to be"): rational efficient means achieve rational developmental ends. Consider the problem all infants face in learning to walk. They spontaneously recognize that walking is more efficient than crawling—an instrumental valuation of a desirable end. They learn to walk by repeatedly moving and balancing, judging the efficiency with which these means achieve their instrumental goal. When they master this new way-of-acting, they experience great satisfaction, but satisfaction is never their end-in-view.[18]

    Revised definition of 'instrumental value'

    To guard against contamination of instrumental value by judging means and ends independently, Foster revised his definition to embrace both.

    Instrumental value is the criterion of judgment that seeks instrumentally efficient means that "work" to achieve developmentally continuous ends. This definition stresses that instrumental success is never merely short-term; it must not lead down a dead-end street. The same point is made by the currently popular concern for sustainability—a synonym for instrumental value.[19]

    Dewey's and Foster's argument that there is no intrinsic alternative to instrumental value continues to be ignored rather than refuted. Scholars continue to accept the possibility and necessity of knowing "what ought to be" independently of the transient conditions that determine the actual consequences of every action. Jacques Ellul and Anjan Chakravartty were prominent exponents of the truth and reality of intrinsic value as a constraint on relativistic instrumental value.

    Jacques Ellul

    Jacques Ellul made scholarly contributions to many fields, but his American reputation grew out of his criticism of the autonomous authority of instrumental value, the criterion that John Dewey and J. Fagg Foster found to be the core of human rationality. He specifically criticized the valuations central to Dewey's and Foster's thesis: evolving instrumental technology.

    His principal work, published in 1954 under the French title La technique, tackled the problem that Dewey had addressed in 1929: a culture in which the authority of evolving technology destroys traditional valuations without creating legitimate new ones. Both men agree that conditionally-efficient valuations ("what is") become irrational when viewed as unconditionally efficient in themselves ("what ought to be"). However, while Dewey argues that contaminated instrumental valuations can be self-correcting, Ellul concludes that technology has become intrinsically destructive. The only escape from this evil, he argues, is to restore authority to unconditional sacred valuations:[20]: 143

    Nothing belongs any longer to the realm of the gods or the supernatural. The individual who lives in the technical milieu knows very well that there is nothing spiritual anywhere. But man cannot live without the [intrinsic] sacred. He therefore transfers his sense of the sacred to the very thing which has destroyed its former object: to technique itself.

    The English edition of La technique was published in 1964, titled The Technological Society, and quickly entered ongoing disputes in the United States over the responsibility of instrumental value for destructive social consequences. The translator of Technological Society summarizes Ellul's thesis:[21]

    Technological Society is a description of the way in which an autonomous [instrumental] technology is in process of taking over the traditional values [intrinsic valuations] of every society without exception, subverting and suppressing those values to produce at last a monolithic world culture in which all non-technological difference and variety is mere appearance.

    Ellul opens The Technological Society by asserting that instrumental efficiency is no longer a conditional criterion. It has become autonomous and absolute:[20]: xxxvi 

    The term technique, as I use it, does not mean machines, technology, or this or that procedure for attaining an end. In our technological society, technique is the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.

    He blames instrumental valuations for destroying intrinsic meanings of human life: "Think of our dehumanized factories, our unsatisfied senses, our working women, our estrangement from nature. Life in such an environment has no meaning."[20]: 4–5  While Weber had labeled the discrediting of intrinsic valuations as disenchantment, Ellul came to label it as "terrorism."[22]: 384, 19  He dates its domination to the 1800s, when centuries-old handicraft techniques were massively eliminated by inhuman industry.

    When, in the 19th century, society began to elaborate an exclusively rational technique which acknowledged only considerations of efficiency, it was felt that not only the traditions but the deepest instincts of humankind had been violated.[20]: 73  Culture is necessarily humanistic or it does not exist at all.… [I]t answers questions about the meaning of life, the possibility of reunion with ultimate being, the attempt to overcome human finitude, and all other questions that they have to ask and handle. But technique cannot deal with such things.… Culture exists only if it raises the question of meaning and values [valuations].… Technique is not at all concerned about the meaning of life, and it rejects any relation to values [intrinsic valuations].[22]: 147–8 

    Ellul's core accusation is that instrumental efficiency has become absolute, i.e., a good-in-itself;[20]: 83  it wraps societies in a new technological milieu with six intrinsically inhuman characteristics:[3]: 22 

    1. artificiality;
    2. autonomy, "with respect to values [valuations], ideas, and the state;"
    3. self-determinative, independent "of all human intervention;"
    4. "It grows according to a process which is causal but not directed to [good] ends;"
    5. "It is formed by an accumulation of means which have established primacy over ends;"
    6. "All its parts are mutually implicated to such a degree that it is impossible to separate them or to settle any technical problems in isolation."

    Criticism

    Philosophers Mary Tiles and Hans Oberdiek (1995) find Ellul's characterization of instrumental value inaccurate.[3]: 22–31  They criticize him for anthropomorphizing and demonizing instrumental value, and they counter his account by examining the moral reasoning of the scientists whose work led to nuclear weapons: those scientists, they argue, demonstrated the capacity of instrumental judgments to provide a moral compass for judging nuclear technology; they were morally responsible without intrinsic rules. Tiles and Oberdiek's conclusion coincides with that of Dewey and Foster: instrumental value, when competently applied, is self-correcting and provides humans with a developmental moral compass.

    For although we have defended general principles of the moral responsibilities of professional people, it would be foolish and wrongheaded to suggest codified [intrinsic] rules. It would be foolish because concrete cases are more complex and nuanced than any code could capture; it would be wrongheaded because it would suggest that our sense of moral responsibility can be fully captured by a code.[3]: 193  In fact, as we have seen in many instances, technology simply allows us to go on doing stupid things in clever ways. The questions that technology cannot solve, although it will always frame and condition the answers, are "What should we be trying to do? What kind of lives should we, as human beings, be seeking to live? And can this kind of life be pursued without exploiting others?" But until we can at least propose [instrumental] answers to those questions we cannot really begin to do sensible things in the clever ways that technology might permit.[3]: 197

    Semirealism (Anjan Chakravartty)

    Anjan Chakravartty came indirectly to question the autonomous authority of instrumental value. He views it as a foil for the currently dominant philosophical school labeled "scientific realism," with which he identifies. In 2007, he published a work defending the ultimate authority of the intrinsic valuations to which realists are committed, linking the pragmatic instrumental criterion to discredited anti-realist empiricist schools, including logical positivism and instrumentalism.

    Chakravartty began his study with rough characterizations of realist and anti-realist valuations of theories. Anti-realists believe "that theories are merely instruments for predicting observable phenomena or systematizing observation reports;" they assert that theories can never report or prescribe truth or reality "in itself." By contrast, scientific realists believe that theories can "correctly describe both observable and unobservable parts of the world."[23]: xi, 10  Well-confirmed theories—"what ought to be" as the end of reasoning—are more than tools; they are maps of intrinsic properties of an unobservable and unconditional territory—"what is" as natural-but-metaphysical real kinds.[23]: xiii, 33, 149 

    Chakravartty treats criteria of judgment as ungrounded opinion, but admits that realists apply the instrumental criterion to judge theories that "work."[23]: 25  He restricts that criterion's scope, claiming that every instrumental judgment is inductive, heuristic, and accidental. Later experience might confirm a singular judgment only if it proves to have universal validity, meaning it possesses "detection properties" of natural kinds.[23]: 231  This inference is his fundamental ground for believing in intrinsic value.

    He commits modern realists to three metaphysical valuations, or intrinsic kinds of knowledge of truth: competent realists affirm that natural kinds exist in a mind-independent territory (an ontological commitment), that claims about this territory have truth values (a semantic commitment), and that its intrinsic properties are mappable by well-confirmed theories (an epistemological commitment).

    Ontologically, scientific realism is committed to the existence of a mind-independent world or reality. A realist semantics implies that the theoretical claims [valuations] about this reality have truth values, and should be construed literally.… Finally, the epistemological commitment is to the idea that these theoretical claims give us knowledge of the world. That is, predictively successful (mature, non-ad hoc) theories, taken literally as describing the nature of a mind-independent reality are (approximately) true.[23]: 9 

    He labels these intrinsic valuations semirealist, meaning they are currently the most accurate theoretical descriptions of mind-independent natural kinds. He finds these carefully qualified statements necessary to replace earlier realist claims of intrinsic reality that were discredited by advancing instrumental valuations. Science has destroyed for many people the supernatural intrinsic value embraced by Weber and Ellul, but Chakravartty defends intrinsic valuations as necessary elements of all science: belief in unobservable continuities. He advances the thesis of semirealism, according to which well-tested theories are good maps of natural kinds, as confirmed by their instrumental success; their predictive success means they conform to mind-independent, unconditional reality.

    Causal properties are the fulcrum of semirealism. Their [intrinsic] relations compose the concrete structures that are the primary subject matters of a tenable scientific realism. They regularly cohere to form interesting units, and these groupings make up the particulars investigated by the sciences and described by scientific theories.[23]: 119  Scientific theories describe [intrinsic] causal properties, concrete structures, and particulars such as objects, events, and processes. Semirealism maintains that under certain conditions it is reasonable for realists to believe that the best of these descriptions tell us not merely about things that can be experienced with the unaided senses, but also about some of the unobservable things underlying them.[23]: 151 

    Chakravartty argues that these semirealist valuations legitimize scientific theorizing about pragmatic kinds. The fact that theoretical kinds are frequently replaced does not mean that mind-independent reality is changing, but simply that theoretical maps are approximating intrinsic reality.

    The primary motivation for thinking that there are such things as natural kinds is the idea that carving nature according to its own divisions yields groups of objects that are capable of supporting successful inductive generalizations and prediction. So the story goes, one's recognition of natural categories facilitates these practices, and thus furnishes an excellent explanation for their success.[23]: 151  The moral here is that however realists choose to construct particulars out of instances of properties, they do so on the basis of a belief in the [mind-independent] existence of those properties. That is the bedrock of realism. Property instances lend themselves to different forms of packaging [instrumental valuations], but as a feature of scientific description, this does not compromise realism with respect to the relevant [intrinsic] packages.[23]: 81 

    In sum, Chakravartty argues that contingent instrumental valuations are warranted only insofar as they approximate unchanging intrinsic valuations. Scholars continue to perfect their explanations of intrinsic value while denying the developmental continuity of applications of instrumental value.

    Abstraction is a process in which only some of the potentially many relevant factors present in [unobservable] reality are represented in a model or description with some aspect of the world, such as the nature or behavior of a specific object or process. ... Pragmatic constraints such as these play a role in shaping how scientific investigations are conducted, and together which and how many potentially relevant factors [intrinsic kinds] are incorporated into models and descriptions during the process of abstraction. The role of pragmatic constraints, however, does not undermine the idea that putative representations of factors composing abstract models can be thought to have counterparts in the [mind-independent] world.[23]: 191 

    Realist intrinsic value, as proposed by Chakravartty, is widely endorsed in modern scientific circles, while the supernatural intrinsic value endorsed by Max Weber and Jacques Ellul maintains its popularity throughout the world. Doubters about the reality of instrumental and intrinsic value are few.

    See also

    References

     

  • Hirose, Iwao; Olson, Jonas (2015). The Oxford Handbook of Value Theory. Oxford University Press.

  • Dewey, John (1939). Theory of Valuation. University of Chicago Press. pp. 1–6.

  • Tiles, Mary; Oberdiek, Hans (1995). Living in a Technological Culture. Routledge.

  • Weber, Max (1978). Economy and Society. University of California Press. ISBN 9780520028241.

  • Tool, Marc (1994). "John Dewey." Pp. 152–7 in Elgar Companion to Institutional and Evolutionary Economics 1, edited by G. M. Hodgson. Edward Elgar Publishing.

  • Dewey, John (1929). Quest for Certainty. G. P. Putnam's Sons.

  • Dewey, John (1963). Freedom and Culture. G. P. Putnam's Sons. p. 228.

  • Anderson, Elizabeth (20 January 2005). "Dewey's Moral Philosophy". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy.

  • Bird, Alexander; Tobin, Emma (17 September 2008). "Natural Kinds". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy.

  • Burke, Tom (1994). Dewey's New Logic. University of Chicago Press. pp. 54–65.

  • Dewey, John (1938). Logic: the Theory of Inquiry. Holt, Rinehart and Winston. p. iv.

  • Tiles, Mary; Oberdiek, Hans (1995). Living in a Technological Culture. Routledge.

  • Miller, Edythe (1994). "John Fagg Foster." Pp. 256–62 in Elgar Companion to Institutional and Evolutionary Economics 1, edited by G. M. Hodgson. Edward Elgar Publishing.

  • MacIntyre, Alasdair (2007). After Virtue. University of Notre Dame Press. pp. 62–66. ISBN 9780268035044.

  • Tool, Marc (2000). Value Theory and Economic Progress: The Institutional Economics of J. Fagg Foster. Kluwer Academic.

  • Nozick, Robert (1974). Anarchy, State and Utopia. Basic Books. p. ix.

  • Foster, John Fagg (1981). "The Relation Between the Theory of Value and Economic Analysis". Journal of Economic Issues: 904–5.

  • Ranson, Baldwin (2008). "Confronting Foster's Wildest Claim: Only the Instrumental Theory of Value Can Be applied". Journal of Economic Issues. 42 (2): 537–44. doi:10.1080/00213624.2008.11507163. S2CID 157465132.

  • Foster, John Fagg (1981). "Syllabus for Problems of Modern Society: The Theory of Institutional Adjustment". Journal of Economic Issues: 929–35.

  • Ellul, Jacques (1964). The Technological Society. Knopf.

  • Translator (1964). "Introduction." In The Technological Society. Knopf. pp. v–vi, x.

  • Ellul, Jacques (1990). The Technological Bluff. William B. Eerdmans.

  • Chakravartty, Anjan (2007). A Metaphysics for Scientific Realism. Cambridge University Press.

     https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value

     


    https://en.wikipedia.org/wiki/Moral_nihilism

    https://en.wikipedia.org/wiki/Materialism

    https://en.wikipedia.org/wiki/Obscurantism

    https://en.wikipedia.org/wiki/Irrationalism

    https://en.wikipedia.org/wiki/Empiricism

    https://en.wikipedia.org/wiki/Epistemology

    https://en.wikipedia.org/wiki/Axiology

    https://en.wikipedia.org/wiki/Observation

    https://en.wikipedia.org/wiki/Reason

    https://en.wikipedia.org/wiki/Normative_statement

    https://en.wikipedia.org/wiki/Positive_statement

    https://en.wikipedia.org/wiki/Empirical_research


    https://en.wikipedia.org/wiki/Program_counter

    https://en.wikipedia.org/wiki/Programmable_logic_controller

    https://en.wikipedia.org/wiki/Turing_machine 

    https://en.wikipedia.org/wiki/Ladder_logic

    https://en.wikipedia.org/wiki/Connection_Machine

     

    https://en.wikipedia.org/wiki/Instruction_list


    https://en.wikipedia.org/wiki/1-bit_computing

    https://en.wikipedia.org/wiki/Motorola_MC14500B#WDR-1


    https://en.wikipedia.org/wiki/Bit_slicing

    https://en.wikipedia.org/wiki/Bit-serial_architecture

    https://en.wikipedia.org/wiki/WDR_paper_computer


    https://en.wikipedia.org/wiki/Processor_register

    https://en.wikipedia.org/wiki/Match


    The original printed version of the paper computer has up to 21 lines of code on the left and eight registers on the right, which are represented as boxes that contain as many matches as the value in the corresponding register.[3] A pen is used to indicate the line of code which is about to be executed. The user steps through the program, adding and subtracting matches from the appropriate registers and following program flow until the stop instruction is encountered.
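
    A rough way to see how such a machine executes is the minimal register-machine interpreter sketched below in Python. The instruction names (inc, dec, jz, jmp, stop) and the example program are illustrative assumptions for this sketch, not the WDR paper computer's actual instruction set.

    # Minimal register-machine interpreter sketch (illustrative mnemonics,
    # not the paper computer's exact instruction set).
    def run(program, registers):
        """program: list of (op, arg) pairs; registers: dict mapping register -> count."""
        pc = 0  # plays the role of the pen pointing at the current line of code
        while True:
            op, arg = program[pc]
            if op == "stop":
                return registers
            elif op == "inc":            # put a match into register arg
                registers[arg] += 1
                pc += 1
            elif op == "dec":            # take a match out of register arg
                registers[arg] -= 1
                pc += 1
            elif op == "jz":             # arg is (register, line): jump if that register is empty
                reg, line = arg
                pc = line if registers[reg] == 0 else pc + 1
            elif op == "jmp":            # unconditional jump to line arg
                pc = arg
            else:
                raise ValueError(f"unknown instruction {op!r}")

    # Example: add register 2 into register 1 (R1 := R1 + R2).
    program = [
        ("jz", (2, 4)),   # 0: if R2 is empty, halt
        ("dec", 2),       # 1: remove a match from R2
        ("inc", 1),       # 2: add a match to R1
        ("jmp", 0),       # 3: repeat
        ("stop", None),   # 4: stop
    ]
    print(run(program, {1: 3, 2: 2}))    # -> {1: 5, 2: 0}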

    https://en.wikipedia.org/wiki/WDR_paper_computer

     

    https://en.wikipedia.org/wiki/Register_machine

     

    https://en.wikipedia.org/wiki/Category:Educational_abstract_machines

    https://en.wikipedia.org/wiki/Category:Abstract_machines

     

    https://en.wikipedia.org/wiki/Bit_numbering#Least_significant_byte

    https://en.wikipedia.org/wiki/Steganography

    https://en.wikipedia.org/wiki/Bitwise_operation#Bit_shifts

    https://en.wikipedia.org/wiki/Positional_notation

     

    https://en.wikipedia.org/wiki/Boolean_data_type

    https://en.wikipedia.org/wiki/MAC_address#Bit-reversed_notation

    https://en.wikipedia.org/wiki/Find_first_set

    https://en.wikipedia.org/wiki/Unit_in_the_last_place

    https://en.wikipedia.org/wiki/Fortran

    https://en.wikipedia.org/wiki/Integer_(computer_science)

     

    https://en.wikipedia.org/wiki/Integer_(computer_science)

    https://en.wikipedia.org/wiki/Data_type

    https://en.wikipedia.org/wiki/Data

    https://en.wikipedia.org/wiki/Decimal_separator#Digit_grouping

    https://en.wikipedia.org/wiki/Source_code

    https://en.wikipedia.org/wiki/Decimal_computer

    https://en.wikipedia.org/wiki/Nibble

    https://en.wikipedia.org/wiki/Radix

    https://en.wikipedia.org/wiki/Signed_number_representations#Sign%E2%80%93magnitude

    https://en.wikipedia.org/wiki/Offset_binary

    https://en.wikipedia.org/wiki/Excess-3

    https://en.wikipedia.org/wiki/Bijection

    https://en.wikipedia.org/wiki/ASCII

     

    https://en.wikipedia.org/wiki/Positional_notation#Exponentiation

    https://en.wikipedia.org/wiki/Signedness

    https://en.wikipedia.org/wiki/Ternary_numeral_system

    https://en.wikipedia.org/wiki/Overflow_flag

    https://en.wikipedia.org/wiki/Status_register

     

    https://en.wikipedia.org/wiki/Hardware_register

     

    The hardware registers inside a central processing unit (CPU) are called processor registers.

    Strobe registers have the same interface as normal hardware registers, but instead of storing data, they trigger an action each time they are written to (or, in rare cases, read from). They are a means of signaling.
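
    As a rough illustration of that difference, here is a minimal Python sketch contrasting a storage register with a strobe register. The class and register names (DataRegister, StrobeRegister, tx_data, tx_start) are hypothetical; a real strobe register is a memory-mapped hardware location, not a software object.

    # Sketch: a data register stores what is written; a strobe register discards
    # the written value and instead triggers an action (a means of signaling).
    class DataRegister:
        def __init__(self):
            self._value = 0
        def write(self, value):
            self._value = value      # storage: the written value persists
        def read(self):
            return self._value

    class StrobeRegister:
        def __init__(self, action):
            self._action = action
        def write(self, value):
            self._action()           # signaling: any write fires the action
        def read(self):
            return 0                 # nothing meaningful is stored

    tx_data = DataRegister()
    tx_start = StrobeRegister(lambda: print("transmit", tx_data.read()))
    tx_data.write(0x41)              # stage a byte
    tx_start.write(1)                # the write itself triggers the action -> "transmit 65"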

    Registers are normally measured by the number of bits they can hold, for example, an "8-bit register" or a "32-bit register".

    Designers can implement registers in a wide variety of ways, including register files, individual flip-flops, and blocks of fast static RAM.

    In addition to the "programmer-visible" registers that can be read and written with software, many chips have internal microarchitectural registers that are used for state machines and pipelining; for example, registered memory.

    https://en.wikipedia.org/wiki/Hardware_register

    https://en.wikipedia.org/wiki/Magnetic-core_memory

     

    Magnetic-core memory was the predominant form of random-access computer memory for 20 years between about 1955 and 1975. Such memory is often just called core memory, or, informally, core.

    Core memory uses toroids (rings) of a hard magnetic material (usually a semi-hard ferrite) as transformer cores, where each wire threaded through the core serves as a transformer winding. Two or more wires pass through each core. Magnetic hysteresis allows each of the cores to "remember", or store a state.

    Each core stores one bit of information. A core can be magnetized in either the clockwise or counter-clockwise direction. The value of the bit stored in a core is zero or one according to the direction of that core's magnetization. Electric current pulses in some of the wires through a core allow the direction of the magnetization in that core to be set in either direction, thus storing a one or a zero. Another wire through each core, the sense wire, is used to detect whether the core changed state.

    The process of reading the core causes the core to be reset to a zero, thus erasing it. This is called destructive readout. When not being read or written, the cores maintain the last value they had, even if the power is turned off. Therefore, they are a type of non-volatile memory.
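
    The read-restore cycle can be sketched in a few lines of Python. This is only an illustrative model of the logic, with names invented for the sketch, not a physical simulation of the electronics.

    # Sketch of destructive readout: reading forces the core to 0; the controller
    # must write the sensed value back if the bit is to survive the read.
    class Core:
        def __init__(self, bit=0):
            self.bit = bit           # magnetization direction, modeled as 0 or 1

        def read(self):
            sensed = self.bit        # a pulse on the sense wire means it held a 1
            self.bit = 0             # the read itself resets the core to 0
            return sensed

        def write(self, bit):
            self.bit = bit

    def read_with_restore(core):
        value = core.read()          # destructive read
        core.write(value)            # rewrite so the stored bit is preserved
        return value

    c = Core(1)
    print(read_with_restore(c), c.bit)   # -> 1 1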

    Using smaller cores and wires, the memory density of core slowly increased, and by the late 1960s a density of about 32 kilobits per cubic foot (about 1.1 kilobits per litre) was typical. However, reaching this density required extremely careful manufacture, which was almost always carried out by hand in spite of repeated major efforts to automate the process. The cost declined over this period from about $1 per bit to about 1 cent per bit. The introduction of the first semiconductor memory chips in the late 1960s, which initially created static random-access memory (SRAM), began to erode the market for core memory. The first successful dynamic random-access memory (DRAM), the Intel 1103, followed in 1970. Its availability in quantity at 1 cent per bit marked the beginning of the end for core memory.[1]

    Improvements in semiconductor manufacturing led to rapid increases in storage capacity and decreases in price per kilobyte, while the costs and specs of core memory changed little. Core memory was driven from the market gradually between 1973 and 1978.

    Depending on how it was wired, core memory could be exceptionally reliable. Read-only core rope memory, for example, was used on the mission-critical Apollo Guidance Computer essential to NASA's successful Moon landings.

    Although core memory is obsolete, computer memory is still sometimes called "core" even though it is made of semiconductors, particularly by people who had worked with machines having actual core memory. The files that result from saving the entire contents of memory to disk for inspection, which is nowadays commonly performed automatically when a major error occurs in a computer program, are still called "core dumps".


     

    https://en.wikipedia.org/wiki/Magnetic-core_memory

     https://en.wikipedia.org/wiki/Read-only_memory

    https://en.wikipedia.org/wiki/Core_rope_memory

     

    https://en.wikipedia.org/wiki/Category:Logic

     https://en.wikipedia.org/wiki/Core_rope_memory

    https://en.wikipedia.org/wiki/Digital_card#Magnetic_stripe_card

    https://en.wikipedia.org/wiki/Hard_disk_drive

    https://en.wikipedia.org/wiki/Magnetic-core_memory

    https://en.wikipedia.org/wiki/Wire_recording

    https://en.wikipedia.org/wiki/Bubble_memory

    https://en.wikipedia.org/wiki/Thin-film_memory

    https://en.wikipedia.org/wiki/Magnetic_ink_character_recognition

    https://en.wikipedia.org/wiki/International_Organization_for_Standardization


    https://en.wikipedia.org/wiki/Zero_point_energy

    https://en.wikipedia.org/wiki/Luminiferous_aether

    https://en.wikipedia.org/wiki/Drag




    No comments:

    Post a Comment