Blog Archive

Wednesday, September 22, 2021

09-22-2021-1114 - gel sol

 A gel is a semi-solid that can have properties ranging from soft and weak to hard and tough.[1][2] Gels are defined as a substantially dilute cross-linked system, which exhibits no flow when in the steady-state, although the liquid phase may still diffuse through this system.[3] A gel has been defined phenomenologically as a soft, solid or solid-like material consisting of two or more components, one of which is a liquid, present in substantial quantity.[4]

By weight, gels are mostly liquid, yet they behave like solids because of a three-dimensional cross-linked network within the liquid. It is the crosslinking within the fluid that gives a gel its structure (hardness) and contributes to the adhesive stick (tack). In this way, gels are a dispersion of molecules of a liquid within a solid medium. The word gel was coined by 19th-century Scottish chemist Thomas Graham by clipping from gelatine.[5]

The process of forming a gel is called gelation.

https://en.wikipedia.org/wiki/Gel


A sol is a colloid made out of solid particles[1] in a continuous liquid medium. Sols are quite stable and show the Tyndall effect. Examples include blood, pigmented ink, cell fluids, paint, antacids and mud.

Artificial sols may be prepared by dispersion or condensation. Dispersion techniques include grinding solids to colloidal dimensions by ball milling and Bredig's arc method. The stability of sols may be maintained by using dispersing agents.

Sols are commonly used as part of the sol–gel process.

A sol generally has a liquid as the dispersing medium and a solid as the dispersed phase.

Properties of a Colloid (applicable to sols)

  • Heterogeneous Mixture
  • Size of colloid particles varies from 1 nm to 100 nm
  • They show the Tyndall effect
  • They are quite stable and hence they do not settle down when left undisturbed

https://en.wikipedia.org/wiki/Sol_(colloid)

The Tyndall effect is light scattering by particles in a colloid or in a very fine suspension. Also known as Tyndall scattering, it is similar to Rayleigh scattering, in that the intensity of the scattered light is inversely proportional to the fourth power of the wavelength, so blue light is scattered much more strongly than red light. An example in everyday life is the blue colour sometimes seen in the smoke emitted by motorcycles, in particular two-stroke machines where the burnt engine oil provides these particles. 

Under the Tyndall effect, the longer wavelengths are more transmitted while the shorter wavelengths are more diffusely reflected via scattering. The Tyndall effect is seen when light-scattering particulate matter is dispersed in an otherwise light-transmitting medium, when the diameter of an individual particle is in the range of roughly 40 to 900 nm, i.e. somewhat below or near the wavelengths of visible light (400–750 nm).
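The inverse-fourth-power law above makes the blue/red asymmetry easy to quantify. A minimal Python sketch (the 450 nm and 650 nm wavelengths are illustrative choices, not values from the text):

# Rayleigh-type scattering: intensity is proportional to 1/wavelength^4.
blue_nm = 450.0    # illustrative blue wavelength, nm
red_nm = 650.0     # illustrative red wavelength, nm

ratio = (red_nm / blue_nm) ** 4
print(f"blue scattered ~{ratio:.1f}x more strongly than red")   # ~4.4x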

It is particularly applicable to colloidal mixtures and fine suspensions; for example, the Tyndall effect is used in nephelometers to determine the size and density of particles in aerosols and other colloidal matter (see ultramicroscope and turbidimeter).

It is named after the 19th-century physicist John Tyndall, who first studied the phenomenon extensively.

https://en.wikipedia.org/wiki/Tyndall_effect


In physics, backscatter (or backscattering) is the reflection of waves, particles, or signals back to the direction from which they came. It is usually a diffuse reflection due to scattering, as opposed to specular reflection as from a mirror, although specular backscattering can occur at normal incidence with a surface. Backscattering has important applications in astronomy, photography, and medical ultrasonography. The opposite effect is forward scatter, e.g. when a translucent material like a cloud diffuses sunlight, giving soft light.

https://en.wikipedia.org/wiki/Backscatter


Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection.

In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

https://en.wikipedia.org/wiki/Reflection_(physics)


Seismic waves are waves of energy that travel through Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. Many other natural and anthropogenic sources create low-amplitude waves commonly referred to as ambient vibrations. Seismic waves are studied by geophysicists called seismologists. Seismic wave fields are recorded by a seismometer, hydrophone (in water), or accelerometer.

The propagation velocity of seismic waves depends on density and elasticity of the medium as well as the type of wave. Velocity tends to increase with depth through Earth's crust and mantle, but drops sharply going from the mantle to the outer core.[2]

Earthquakes create distinct types of waves with different velocities; when they reach seismic observatories, their different travel times help scientists locate the hypocenter, the source of the quake. In geophysics, the refraction or reflection of seismic waves is used for research into the structure of Earth's interior, and man-made vibrations are often generated to investigate shallow, subsurface structures.
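As an illustration of how different travel times locate an event, the sketch below uses the classic S-minus-P delay with assumed constant velocities (the 6 km/s and 3.5 km/s figures and the straight-ray model are simplifying assumptions, not values from the text):

# Distance from the S-P arrival-time difference, assuming straight rays
# and constant velocities (a strong simplification of real Earth structure).
vp = 6.0    # assumed P-wave velocity, km/s
vs = 3.5    # assumed S-wave velocity, km/s

def distance_km(sp_delay_s):
    # delay = d/vs - d/vp  =>  d = delay / (1/vs - 1/vp)
    return sp_delay_s / (1.0 / vs - 1.0 / vp)

print(distance_km(10.0))   # ~84 km for a 10-second S-P delay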

https://en.wikipedia.org/wiki/Seismic_wave


In geophysics, geology, civil engineering, and related disciplines, seismic noise is a generic name for a relatively persistent vibration of the ground, due to a multitude of causes, that is often a non-interpretable or unwanted component of signals recorded by seismometers.

Physically, seismic noise arises primarily due to surface or near-surface sources and thus consists mostly of elastic surface waves. Low frequency waves (below 1 Hz) are commonly called microseisms and high frequency waves (above 1 Hz) are called microtremors. Primary sources of seismic noise include human activities (such as transportation or industrial activities), winds and other atmospheric phenomena, rivers, and ocean waves.

Seismic noise is relevant to any discipline that depends on seismology, including geology, oil exploration, hydrology, earthquake engineering, and structural health monitoring. It is often called the ambient wavefield or ambient vibrations in those disciplines (however, the latter term may also refer to vibrations transmitted through air, buildings, or supporting structures).

Seismic noise is often a nuisance for activities that are sensitive to extraneous vibrations, including earthquake monitoring and research, precision milling, telescopes, gravitational wave detectors, and crystal growing. However, seismic noise also has practical uses, including determining the low-strain and time-varying dynamic properties of civil-engineering structures, such as bridges, buildings, and dams; seismic studies of subsurface structure at many scales, often using the methods of seismic interferometry; environmental monitoring; and estimating seismic microzonation maps to characterize local and regional ground response during earthquakes.

https://en.wikipedia.org/wiki/Seismic_noise


Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. The word comes from Latin vibrationem ("shaking, brandishing"). The oscillations may be periodic, such as the motion of a pendulum, or random, such as the movement of a tire on a gravel road.

Vibration can be desirable: for example, the motion of a tuning fork, the reed in a woodwind instrument or harmonica, a mobile phone, or the cone of a loudspeaker.

In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound. For example, the vibrational motions of engines, electric motors, or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts, uneven friction, or the meshing of gear teeth. Careful designs usually minimize unwanted vibrations.

The studies of sound and vibration are closely related. Sound, or pressure waves, are generated by vibrating structures (e.g. vocal cords); these pressure waves can also induce the vibration of structures (e.g. ear drum). Hence, attempts to reduce noise are often related to issues of vibration.[1]

https://en.wikipedia.org/wiki/Vibration


In physics, a phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, specifically in solids and some liquids. Often referred to as a quasiparticle,[1] it is an excited state in the quantum mechanical quantization of the modes of vibrations for elastic structures of interacting particles. Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves.[2]

The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as play a fundamental role in models of neutron scattering and related effects.

The concept of phonons was introduced in 1932 by Soviet physicist Igor Tamm. The name phonon comes from the Greek word φωνή (phonē), which translates to sound or voice, because long-wavelength phonons give rise to sound. The name is analogous to the word photon.[citation needed]

https://en.wikipedia.org/wiki/Phonon


Light or visible light is electromagnetic radiation within the portion of the electromagnetic spectrum that is perceived by the human eye.[1] Visible light is usually defined as having wavelengths in the range of 400–700 nanometres (nm), between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths).[2][3] This wavelength range corresponds to a frequency range of roughly 430–750 terahertz (THz).
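The quoted frequency range follows directly from f = c/λ; a quick sanity check in Python, using the 400 nm and 700 nm bounds above:

# f = c / wavelength: the 400-700 nm band maps to roughly 430-750 THz.
c = 299_792_458.0                     # speed of light in vacuum, m/s

for wavelength_nm in (700.0, 400.0):
    f_thz = c / (wavelength_nm * 1e-9) / 1e12
    print(f"{wavelength_nm:.0f} nm -> {f_thz:.0f} THz")
# 700 nm -> 428 THz; 400 nm -> 749 THz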

Beam of sunlight inside the cavity of Rocca ill'Abissu at Fondachelli-Fantina, Sicily

The primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarization. Its speed in a vacuum, 299,792,458 metres per second (m/s), is one of the fundamental constants of nature. As with all types of electromagnetic radiation (EMR), light is found in experimental conditions to always move at this speed in a vacuum.[4]

In physics, the term 'light' sometimes refers to electromagnetic radiation of any wavelength, whether visible or not.[5][6] In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of electromagnetic radiation, visible light propagates as waves. However, the energy imparted by the waves is absorbed at single locations the way particles are absorbed. The absorbed energy of the electromagnetic waves is called a photon and represents the quanta of light. When a wave of light is transformed and absorbed as a photon, the energy of the wave instantly collapses to a single location and this location is where the photon "arrives". This is what is called the wave function collapse. This dual wave-like and particle-like nature of light is known as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics.

The main source of light on Earth is the Sun. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight.

A triangular prism dispersing a beam of white light. The longer wavelengths (red) and the shorter wavelengths (blue) are separated.

https://en.wikipedia.org/wiki/Light

In physics, electromagnetic radiation (EMR) consists of waves of the electromagnetic (EM) field, propagating through space, carrying electromagnetic radiant energy.[1] It includes radio waves, microwaves, infrared, (visible) light, ultraviolet, X-rays, and gamma rays. All of these waves form part of the electromagnetic spectrum.[2]

Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. Electromagnetic radiation or electromagnetic waves are created due to periodic change of electric or magnetic field. Depending on how this periodic change occurs and the power generated, different wavelengths of electromagnetic spectrum are produced. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. The wavefront of electromagnetic waves emitted from a point source (such as a light bulb) is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.[3]

Electromagnetic waves are emitted by electrically charged particles undergoing acceleration,[4][5] and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.

In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions.[6] Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level.[7] Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation.[8] The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light.
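Planck's relation makes the closing comparison easy to verify; a minimal sketch (the mid-visible frequency is an assumed round value):

# E = h * f: photon energy scales linearly with frequency.
h = 6.62607015e-34            # Planck's constant, J*s
f_visible = 5.45e14           # assumed mid-visible frequency (~550 nm), Hz

e_visible = h * f_visible
e_gamma = h * (1e5 * f_visible)    # a photon at 100,000x the frequency

print(f"visible photon: {e_visible:.2e} J")   # ~3.61e-19 J (about 2.3 eV)
print(f"gamma photon:   {e_gamma:.2e} J")     # exactly 100,000x larger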

The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard.

A linearly polarized sinusoidal electromagnetic wave, propagating in the direction +z through a homogeneous, isotropic, dissipationless medium, such as vacuum. The electric field (blue arrows) oscillates in the ±x-direction, and the orthogonal magnetic field (red arrows) oscillates in phase with the electric field, but in the ±y-direction.

https://en.wikipedia.org/wiki/Electromagnetic_radiation


In physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of linear momentum. It is an important quantity in physics because it is a conserved quantity: the total angular momentum of a closed system remains constant.

In three dimensions, the angular momentum for a point particle is a pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is p = mv in Newtonian mechanics. Unlike momentum, angular momentum depends on where the origin is chosen, since the particle's position is measured from it.
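A quick numpy check of L = r × p for a point particle (mass, position, and velocity are arbitrary assumed values):

import numpy as np

m = 2.0                           # mass, kg
r = np.array([1.0, 0.0, 0.0])     # position relative to the chosen origin, m
v = np.array([0.0, 3.0, 0.0])     # velocity, m/s

L = np.cross(r, m * v)            # angular momentum pseudovector r x p
print(L)                          # [0. 0. 6.]: perpendicular to both r and p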

Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The total angular momentum is the sum of the spin and orbital angular momenta. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar.

Angular momentum is an extensive quantity; i.e. the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid the total angular momentum is the volume integral of angular momentum density (i.e. angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body.

Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; in other words, the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's Third Law). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The conservation of angular momentum helps explain many observed phenomena, for example the increase in rotational speed of a spinning figure skater as the skater's arms are contracted, the high rotational rates of neutron stars, the Coriolis effect, and the precession of gyroscopes. In general, conservation limits the possible motion of a system but does not uniquely determine it.
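The skater example is plain conservation arithmetic, L = Iω held fixed; a minimal sketch with assumed moments of inertia:

# Conservation of angular momentum: I1 * w1 = I2 * w2 when no external torque acts.
i_arms_out = 4.0    # assumed moment of inertia with arms extended, kg*m^2
i_arms_in = 1.6     # assumed moment of inertia with arms pulled in, kg*m^2
w1 = 2.0            # initial spin rate, rad/s

w2 = i_arms_out * w1 / i_arms_in
print(f"spin rate rises from {w1} to {w2} rad/s")   # 2.0 -> 5.0 rad/s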

In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion.[1]

https://en.wikipedia.org/wiki/Angular_momentum


In geometry and physics, spinors /ˈspɪnər/ are elements of a complex vector space that can be associated with Euclidean space.[b] Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation.[c] However, when a sequence of such small rotations is composed (integrated) to form an overall final rotation, the resulting spinor transformation depends on which sequence of small rotations was used. Unlike vectors and tensors, a spinor transforms to its negative when the space is continuously rotated through a complete turn from 0° to 360° (see picture). This property characterizes spinors: spinors can be viewed as the "square roots" of vectors (although this is inaccurate and may be misleading; they are better viewed as "square roots" of sections of vector bundles – in the case of the exterior algebra bundle of the cotangent bundle, they thus become "square roots" of differential forms).

It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913.[1][d] In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles.[e]

Spinors are characterized by the specific way in which they behave under rotations. They change in different ways depending not just on the overall final rotation, but the details of how that rotation was achieved (by a continuous path in the rotation group). There are two topologically distinguishable classes (homotopy classes) of paths through rotations that result in the same overall rotation, as illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class.[f] It doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors by definition is equipped with a (complex) linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class.[g] In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3).
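The sign flip can be checked concretely in the two-component Pauli-matrix realization described below. For a spin-1/2 rotation about z, U(θ) = cos(θ/2)·I − i·sin(θ/2)·σz, so a 360° turn gives −I while 720° gives +I; a small numpy sketch:

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rotate(theta):
    # Spin-1/2 rotation about the z-axis by angle theta.
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_z

spinor = np.array([1.0, 0.0], dtype=complex)
print(np.allclose(rotate(2 * np.pi) @ spinor, -spinor))   # True: 360 deg negates
print(np.allclose(rotate(4 * np.pi) @ spinor, spinor))    # True: 720 deg restores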

Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with.[h] A Clifford space operates on a spinor space, and the elements of a spinor space are spinors.[3] After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices,[i] and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex[j]) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.[k]

https://en.wikipedia.org/wiki/Spinor


In linear algebra, a column vector is a column of entries, for example,

x = [x1, x2, ..., xm]^T, an m × 1 matrix.

Similarly, a row vector is a row of entries:[1]

a = [a1, a2, ..., an], a 1 × n matrix.

Throughout, boldface is used for both row and column vectors. The transpose (indicated by T) of a row vector is the column vector with the same entries, and the transpose of a column vector is the corresponding row vector.

The set of all row vectors with n entries forms an n-dimensional vector space; similarly, the set of all column vectors with m entries forms an m-dimensional vector space.

The space of row vectors with n entries can be regarded as the dual space of the space of column vectors with n entries, since any linear functional on the space of column vectors can be represented as the left-multiplication of a unique row vector.
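A short numpy sketch of these conventions, using explicit 2-D shapes so row and column are distinguishable:

import numpy as np

row = np.array([[1, 2, 3]])          # a 1x3 row vector
col = row.T                          # transpose: a 3x1 column vector

print(col.shape)                     # (3, 1)
print(np.array_equal(col.T, row))    # True: transposing back recovers the row

# A row vector acting by left-multiplication is a linear functional on columns:
print(row @ col)                     # [[14]] = 1*1 + 2*2 + 3*3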

https://en.wikipedia.org/wiki/Row_and_column_vectors


In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping R^n to R^m and x is a column vector with n entries, then

T(x) = Ax

for some m × n matrix A, called the transformation matrix of T.[citation needed] Note that A has m rows and n columns, whereas the transformation T is from R^n to R^m. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors.[1][2]
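A minimal numpy sketch of T(x) = Ax; the 2×3 matrix (mapping R³ to R²) is an arbitrary assumed example:

import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])     # m x n = 2 x 3: maps R^3 to R^2
x = np.array([1.0, 1.0, 1.0])       # a column vector with n = 3 entries

print(A @ x)                        # [3. 4.]: T(x) = Ax lives in R^2
print(A.shape)                      # (2, 3): m rows, n columns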

https://en.wikipedia.org/wiki/Transformation_matrix


In Euclidean geometry, an affine transformation, or an affinity (from the Latin, affinis, "connected with"), is a geometric transformation that preserves lines and parallelism (but not necessarily distances and angles).

More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line.

If X is the point set of an affine space, then every affine transformation on X can be represented as the composition of a linear transformation on X and a translation of X. Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.

Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, shear mapping, and compositions of them in any combination and sequence.

Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity invariant, restricted to the complement of that hyperplane.

A generalization of an affine transformation is an affine map[1] (or affine homomorphism or affine mapping) between two (potentially different) affine spaces over the same field k. Let (X, V, k) and (Z, W, k) be two affine spaces with X and Z the point sets and V and W the respective associated vector spaces over the field k. A map f: X → Z is an affine map if there exists a linear map mf : V → W such that mf (x − y) = f (x) − f (y) for all x, y in X.[2]
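A short numpy sketch of the decomposition f(x) = Ax + b, checking the defining property mf(x − y) = f(x) − f(y); the matrix and translation are arbitrary assumed values:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])     # assumed linear part
b = np.array([5.0, -3.0])      # assumed translation

def f(x):
    # Affine map: a linear transformation followed by a translation.
    return A @ x + b

x = np.array([1.0, 2.0])
y = np.array([4.0, -1.0])
print(np.allclose(f(x) - f(y), A @ (x - y)))   # True: the translation cancels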

https://en.wikipedia.org/wiki/Affine_transformation


In Euclidean geometry, uniform scaling (or isotropic scaling[1]) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions. The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc.

More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle, or when the shadow of a flat object falls on a surface that is not parallel to it.

When the scale factor is larger than 1, (uniform or non-uniform) scaling is sometimes also called dilation or enlargement. When the scale factor is a positive number smaller than 1, scaling is sometimes also called contraction.

In the most general sense, a scaling includes the case in which the directions of scaling are not perpendicular. It also includes the case in which one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors (a directional scaling by -1 is equivalent to a reflection).
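Per-axis scaling is just a diagonal transformation matrix; the sketch below contrasts uniform, non-uniform, and negative factors (all values illustrative):

import numpy as np

square = np.array([[0, 1, 1, 0],    # x-coordinates of a unit square's corners
                   [0, 0, 1, 1]])   # y-coordinates

print(np.diag([2.0, 2.0]) @ square)    # uniform: a similar, larger square
print(np.diag([2.0, 0.5]) @ square)    # non-uniform: square -> rectangle
print(np.diag([-1.0, 1.0]) @ square)   # factor -1 on x: a reflection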

Scaling is a linear transformation, and a special case of homothetic transformation (about the origin). In most cases, homothetic transformations are non-linear transformations, because a homothety whose center is not the origin does not fix the origin.

Each iteration of the Sierpinski triangle contains triangles related to the next iteration by a scale factor of 1/2

https://en.wikipedia.org/wiki/Scaling_(geometry)


In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is [[3, 0], [0, 2]], while an example of a 3×3 diagonal matrix is [[6, 0, 0], [0, 5, 0], [0, 0, 4]]. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix.

A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values.

Scalar matrix

A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form:

[[λ, 0, 0], [0, λ, 0], [0, 0, λ]]

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size.[a] By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = diag(a1, ..., an) has ai ≠ aj, then given a matrix M with mij ≠ 0, the (i, j) terms of the products DM and MD are ai·mij and mij·aj respectively; since ai ≠ aj (one can divide by ai − aj), the two agree only when mij = 0, so D and M do not commute unless the off-diagonal terms are zero.[b] Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.[1]

For an abstract vector space V (rather than the concrete vector space K^n), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R → End(M) (from a scalar λ to its corresponding scalar transformation, multiplication by λ) exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, invertible scalar transforms are the center of the general linear group GL(V). The former is true more generally for free modules M ≅ R^n, for which the endomorphism algebra is isomorphic to a matrix algebra.
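A quick numpy check of the centralizer claims above: a scalar matrix commutes with an arbitrary matrix, while a diagonal matrix with distinct entries does not (the test matrix is an assumed example):

import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # arbitrary test matrix

scalar = 7.0 * np.eye(2)                 # scalar matrix: lambda * I
distinct = np.diag([1.0, 2.0])           # diagonal with distinct entries

print(np.allclose(scalar @ M, M @ scalar))       # True: in the center
print(np.allclose(distinct @ M, M @ distinct))   # False: off-diagonals clash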

Vector operations

Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix D = diag(a1, ..., an) and a vector v = (x1, ..., xn)^T, the product is:

Dv = diag(a1, ..., an) · (x1, ..., xn)^T = (a1x1, ..., anxn)^T.

This can be expressed more compactly by using a vector instead of a diagonal matrix, d = (a1, ..., an)^T, and taking the Hadamard product of the vectors (entrywise product), denoted d ∘ v:

Dv = d ∘ v = (a1x1, ..., anxn)^T.

This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF,[2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.[3]
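In numpy the two forms are exactly np.diag(d) @ v versus the elementwise product d * v; a minimal check:

import numpy as np

d = np.array([2.0, 3.0, 5.0])      # diagonal entries
v = np.array([1.0, 4.0, 2.0])

dense = np.diag(d) @ v             # full diagonal matrix times vector
entrywise = d * v                  # Hadamard product: no zeros stored

print(np.allclose(dense, entrywise))   # True: [2. 12. 10.] either way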

Matrix operations

The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then, for addition, we have

diag(a1, ..., an) + diag(b1, ..., bn) = diag(a1 + b1, ..., an + bn)

and for matrix multiplication,

diag(a1, ..., an) diag(b1, ..., bn) = diag(a1b1, ..., anbn).

The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero. In this case, we have

diag(a1, ..., an)^−1 = diag(a1^−1, ..., an^−1).

In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.

Multiplying an n-by-n matrix A from the left with diag(a1, ..., an) amounts to multiplying the ith row of A by ai for all i; multiplying the matrix A from the right with diag(a1, ..., an) amounts to multiplying the ith column of A by ai for all i.
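All of the identities above are easy to verify numerically; a sketch with arbitrary nonzero entries:

import numpy as np

a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])
A, B = np.diag(a), np.diag(b)

print(np.allclose(A + B, np.diag(a + b)))                # entrywise addition
print(np.allclose(A @ B, np.diag(a * b)))                # entrywise multiplication
print(np.allclose(np.linalg.inv(A), np.diag(1.0 / a)))   # entrywise inverses

M = np.arange(9.0).reshape(3, 3)
print(np.allclose(A @ M, a[:, None] * M))   # left multiply: scales row i by a_i
print(np.allclose(M @ A, M * a[None, :]))   # right multiply: scales column i by a_i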

Operator matrix in eigenbasis

As explained in determining coefficients of operator matrix, there is a special basis, e1, …, en, for which the matrix A takes the diagonal form. Hence, in the defining equation A ej = Σi ai,j ei, all coefficients ai,j with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, ai,i, are known as eigenvalues and designated with λi in the equation, which reduces to A ei = λi ei. The resulting equation is known as the eigenvalue equation[4] and is used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors.

In other words, the eigenvalues of diag(λ1, …, λn) are λ1, …, λn with associated eigenvectors e1, …, en.

Properties

  • The determinant of diag(a1, ..., an) is the product a1⋯an.
  • The adjugate of a diagonal matrix is again diagonal.
  • Where all matrices are square, a matrix is diagonal if and only if it is both upper and lower triangular.
  • The identity matrix In and zero matrix are diagonal.
  • A 1×1 matrix is always diagonal.

Applications

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.

In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X^−1AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.

Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if A*A = AA* then there exists a unitary matrix U such that U*AU is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U*AV is diagonal with positive entries.

Operator theory[edit]

In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation.

Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function: the values of the function at each point correspond to the diagonal entries of a matrix.

Notes

  1. ^ Proof: given the elementary matrix eij, the product M·eij is the matrix whose only nonzero column is the i-th column of M (placed in column j), while eij·M is the matrix whose only nonzero row is the j-th row of M (placed in row i). Equating the two forces the non-diagonal entries of M to be zero, and the i-th diagonal entry must equal the j-th diagonal entry.
  2. ^ Over more general rings, this does not hold, because one cannot always divide.

https://en.wikipedia.org/wiki/Diagonal_matrix#Scalar_matrix


In linear algebra, an eigenvector (/ˈaɪɡənˌvɛktər/) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by λ,[1] is the factor by which the eigenvector is scaled.

Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.[2] Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated.
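A minimal numpy illustration of Av = λv, using an assumed symmetric 2×2 matrix:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # assumed example matrix

eigenvalues, eigenvectors = np.linalg.eig(A)
for k in range(2):
    v = eigenvectors[:, k]
    print(np.allclose(A @ v, eigenvalues[k] * v))   # True: v is only scaled
print(eigenvalues)                          # 3 and 1 (ordering may vary)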

https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors


Vibration analysis

Mode shape of a tuning fork at eigenfrequency 440.09 Hz

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by

m ẍ + k x = 0

or

m ẍ = −k x,

that is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).

In n dimensions, m becomes a mass matrix M and k a stiffness matrix K. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

K x = ω² M x,

where ω² is the eigenvalue and ω is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of K alone. Furthermore, damped vibration, governed by

M ẍ + C ẋ + K x = 0,

leads to a so-called quadratic eigenvalue problem,

(ω² M + ω C + K) x = 0.

This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.

The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
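A small sketch of the undamped case, solving Kx = ω²Mx with scipy's generalized symmetric eigensolver; the mass and stiffness matrices are assumed toy values for a two-mass chain:

import numpy as np
from scipy.linalg import eigh

M = np.diag([1.0, 1.0])              # assumed mass matrix
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # assumed stiffness matrix

w_squared, modes = eigh(K, M)        # generalized eigenvalue problem K x = w^2 M x
print(np.sqrt(w_squared))            # natural angular frequencies: [1. 1.732]
print(modes)                         # columns are the mode shapes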

Eigenfaces

Eigenfaces as examples of eigenvectors

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[51] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures.
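The eigenface construction is principal component analysis on pixel vectors. A compact sketch with synthetic random "images" standing in for face photographs (sizes and counts are assumed):

import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 32 * 32))    # 100 synthetic "images", 1024 pixels each

centered = faces - faces.mean(axis=0)      # subtract the mean image

cov = np.cov(centered, rowvar=False)       # 1024 x 1024 pixel covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)
eigenfaces = eigenvectors[:, -10:]         # keep the 10 strongest components

coeffs = centered[0] @ eigenfaces          # compress one face to 10 coefficients
print(coeffs.shape)                        # (10,)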

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or I − D^(−1/2) A D^(−1/2) (sometimes called the normalized Laplacian), where D is a diagonal matrix with Di,i equal to the degree of vertex vi, and in D^(−1/2), the i-th diagonal entry is 1/√(deg(vi)). The k-th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k-th largest or k-th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
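A toy sketch of both ideas: the combinatorial Laplacian D − A of a small assumed graph, and a power-iteration ranking on the row-normalized adjacency matrix (a simplified PageRank without damping):

import numpy as np

# Adjacency matrix of an assumed 4-vertex path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # combinatorial Laplacian
print(np.linalg.eigvalsh(L))          # smallest eigenvalue is 0

P = A / A.sum(axis=1, keepdims=True)  # row-normalized adjacency (a Markov chain)
rank = np.ones(4) / 4
for _ in range(100):
    rank = rank @ P                   # power iteration toward the stationary vector
print(rank)                           # ~[0.167 0.333 0.333 0.167]: hubs rank higher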

Basic reproduction number

The basic reproduction number (R0) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R0 is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, tG, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG has passed. R0 is then the largest eigenvalue of the next generation matrix.[52][53]
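A minimal sketch with an assumed two-group next generation matrix (entry (i, j): expected infections in group i caused by one infective in group j):

import numpy as np

K = np.array([[1.2, 0.4],
              [0.3, 0.8]])            # assumed next generation matrix

r0 = max(abs(np.linalg.eigvals(K)))   # largest eigenvalue = basic reproduction number
print(r0)                             # 1.4 for these assumed values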

https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Vibration_analysis

In mathematics, an eigenplane is a two-dimensional invariant subspace in a given vector space. By analogy with the term eigenvector for a vector which, when operated on by a linear operator is another vector which is a scalar multiple of itself, the term eigenplane can be used to describe a two-dimensional plane (a 2-plane), such that the operation of a linear operator on a vector in the 2-plane always yields another vector in the same 2-plane.

A particular case that has been studied is that in which the linear operator is an isometry M of the hypersphere (written S³) represented within four-dimensional Euclidean space:

M [s t] = [s t] Λθ,

where s and t are four-dimensional column vectors and Λθ is a two-dimensional eigenrotation (a rotation by the angle θ) within the eigenplane.

In the usual eigenvector problem, there is freedom to multiply an eigenvector by an arbitrary scalar; in this case there is freedom to multiply by an arbitrary non-zero rotation.

This case is potentially physically interesting in the case that the shape of the universe is a multiply connected 3-manifold, since finding the angles of the eigenrotations of a candidate isometry for topological lensing is a way to falsify such hypotheses.

https://en.wikipedia.org/wiki/Eigenplane


