1.3 Intro to Tensors#

Lecture 1.3
Saskia Goes, s.goes@imperial.ac.uk

Learning outcomes#

  • Be able to perform tensor operations (addition, multiplication) on Cartesian orthonormal bases

  • Be able to do basic tensor calculus (time and space derivatives, divergence, curl of a vector field) on these bases.

  • Understand the differences and commonalities between tensors and vectors

  • Use index notation and Einstein convention

Tensors#

  • Tensors are a generalisation of vectors to higher rank (more indices)

  • Use when properties depend on direction in more than one way.

  • A tensor represents a physical quantity that is independent of the coordinate system used

  • Derives from the word tension (= stress)

  • Stress tensor is an example

  • Not just a multidimensional array

Further examples of the use of tensors include: stress, strain and moment tensors; electrostatics; electrodynamics; rotation; crystal properties.

Tensor Rank#

Tensors describe properties that depend on direction

Tensor rank 0 - scalar - independent of direction
Tensor rank 1 - vector - depends on direction in 1 way
Tensor rank 2 - tensor - depends on direction in 2 ways

[Figure: tensor rank illustration]

Notation #

  • Tensors are written in bold: \(\mathbf{T}\)

  • Second order tensors are also written as \(\bar{\bar{\mathbf{T}}}\) or \(\underline{\underline{\mathbf{T}}}\)

  • Index notation: \(T_{ij}\) where \(i,j=x,y,z\) or \(i,j=1,2,3\)

  • For higher order: \(T_{ijkl}\)

An Example Tensor#

Here is an example of a tensor: the gradient of velocity, which depends on direction in two ways:

\[\begin{split} \nabla \mathbf{v} = \frac{\partial v_j}{\partial x_i} = \left[\begin{array}{ccc} \frac{\partial v_1}{\partial x_1} & \frac{\partial v_2}{\partial x_1} & \frac{\partial v_3}{\partial x_1} \\ \frac{\partial v_1}{\partial x_2} & \frac{\partial v_2}{\partial x_2} & \frac{\partial v_3}{\partial x_2} \\ \frac{\partial v_1}{\partial x_3} & \frac{\partial v_2}{\partial x_3} & \frac{\partial v_3}{\partial x_3} \\ \end{array}\right] \end{split}\]

where each numerator, \(v_j\), is a component of the velocity and each denominator, \(x_i\), is the direction of the spatial variation \(\frac{\partial v_j}{\partial x_i}\).

This definition of the tensor gradient is common in fluid dynamics.
Note that some texts (including Lai et al. and Reddy) use the transposed definition:

\[\begin{split} \nabla \mathbf{v} = \frac{\partial v_i}{\partial x_j} = \left[\begin{array}{ccc} \frac{\partial v_1}{\partial x_1} & \frac{\partial v_1}{\partial x_2} & \frac{\partial v_1}{\partial x_3} \\ \frac{\partial v_2}{\partial x_1} & \frac{\partial v_2}{\partial x_2} & \frac{\partial v_2}{\partial x_3} \\ \frac{\partial v_3}{\partial x_1} & \frac{\partial v_3}{\partial x_2} & \frac{\partial v_3}{\partial x_3} \\ \end{array}\right] \end{split}\]
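To see these components concretely, here is a minimal NumPy sketch (the rotational velocity field and grid spacing are illustrative choices) that builds \(\partial v_j / \partial x_i\) by finite differences:

```python
import numpy as np

# A minimal sketch: the velocity-gradient tensor dv_j/dx_i for the
# rotational field v(x) = (x2, -x1, 0), sampled on a uniform grid.
h = 0.1
x1, x2, x3 = np.meshgrid(*3 * [np.arange(0.0, 1.0, h)], indexing="ij")
v = np.stack([x2, -x1, np.zeros_like(x1)])   # v[j] = j-th velocity component

# grad_v[i, j] = dv_j/dx_i, following the fluid-dynamics convention above.
grad_v = np.stack([np.gradient(v[j], h) for j in range(3)], axis=1)

# At interior points the result is the constant antisymmetric matrix
# [[0, -1, 0], [1, 0, 0], [0, 0, 0]].
print(grad_v[:, :, 5, 5, 5])
```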

Stress Tensor #

  • Body forces - depend on volume, e.g. gravity

  • Surface forces - depend on surface area, e.g. friction

Forces introduce a state of stress in a body, where stress is force per area

[Figure: stress on a surface element]

  • the \(\Delta\mathbf{f}\) necessary to maintain equilibrium depends on the orientation of the plane on which the stress is considered. The orientation of the plane can be described by the direction of its normal, the unit vector \(\hat{\mathbf{n}}\)

  • e.g. on a plane with normal in the \(x_1\) direction, \(\hat{\mathbf{n}}_1\), the traction \(\mathbf{t}_1\) on the surface is defined as:

\[ \mathbf{t}_1 = \mathbf{t}(\mathbf{\hat{n}}_1) = \lim_{\Delta A_1 \rightarrow 0} \frac{\Delta\mathbf{f}}{\Delta A_1} \]
\[ \mathbf{t}_1 = (\sigma_{11}, \sigma_{12}, \sigma_{13}) \]
\[ \sigma_{11} = \lim_{\Delta A_1 \rightarrow 0} \frac{\Delta f_1}{\Delta A_1} \]
\[ \sigma_{12} = \lim_{\Delta A_1 \rightarrow 0} \frac{\Delta f_2}{\Delta A_1} \]
\[ \sigma_{13} = \lim_{\Delta A_1 \rightarrow 0} \frac{\Delta f_3}{\Delta A_1} \]

Nine components are needed to fully describe the stress in 3 dimensions on all possible planes.

\[\sigma_{11}, \sigma_{12}, \sigma_{13} \rightarrow \Delta A_1\]
\[\sigma_{21}, \sigma_{22}, \sigma_{23} \rightarrow \Delta A_2\]
\[\sigma_{31}, \sigma_{32}, \sigma_{33} \rightarrow \Delta A_3\]

Generalised stress tensor:

\[\begin{split} \sigma_{ij} = \left[\begin{array}{ccc} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \\ \end{array}\right] \end{split}\]
  • First index indicates orientation of plane

  • Second index indicates orientation of force
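As a sanity check on this index convention, here is a short NumPy sketch (the stress values are made up for illustration) showing that the traction on the plane with normal \(\hat{\mathbf{e}}_1\) is just the first row of the matrix, via Cauchy's relation \(t_j = n_i \sigma_{ij}\):

```python
import numpy as np

# Illustrative (made-up) symmetric stress tensor, sigma[i, j]:
# first index = plane orientation, second index = force direction.
sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 1.0]])

# Traction on the plane with normal e1 is the first row of sigma:
n1 = np.array([1.0, 0.0, 0.0])
t1 = n1 @ sigma          # t_j = n_i sigma_ij (Cauchy's relation)
print(t1)                # -> [2. 1. 0.], i.e. (sigma_11, sigma_12, sigma_13)
```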

Distinction between tensor and its matrix #

  • Tensor – a physical quantity that is independent of coordinate system used

  • Matrix of a tensor – contains components of that tensor in a particular coordinate frame

You could test that tensor addition and multiplication do indeed satisfy the tensor transformation laws.

Tensor Operations #

Summation (Einstein) Convention #

When an index appears twice in a single term it is a dummy index, and summation over it is implied without writing the summation symbol. An example is shown below:

\[ a_1 v_1 + a_2 v_2 + a_3 v_3 = \sum_{i=1}^3 a_i v_i = a_i v_i \]

An example with two dummy indices (nine terms) is shown below:

\[a_{ij} x_i y_j = \sum_{i=1}^3 \sum_{j=1}^3 a_{ij} x_i y_j\]
\[= a_{11} x_1 y_1 + a_{12} x_1 y_2 + a_{13} x_1 y_3\]
\[+ a_{21} x_2 y_1 + a_{22} x_2 y_2 + a_{23} x_2 y_3\]
\[+ a_{31} x_3 y_1 + a_{32} x_3 y_2 + a_{33} x_3 y_3\]

The expression below is invalid, as the index is repeated more than twice:

\[ \sum_{i=1}^3 a_i b_i v_i \neq a_i b_i v_i \]

For more details see https://en.wikipedia.org/wiki/Einstein_notation
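NumPy's einsum implements the summation convention directly, so it is a convenient way to experiment with it (the array values below are arbitrary):

```python
import numpy as np

# np.einsum sums over repeated indices, exactly as in the convention above.
a = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
A = np.arange(9.0).reshape(3, 3)
x, y = np.ones(3), np.array([1.0, 0.0, -1.0])

print(np.einsum("i,i->", a, v))        # a_i v_i  (= a . v)
print(np.einsum("ij,i,j->", A, x, y))  # a_ij x_i y_j  (= x^T A y)
```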

Notation Conventions #

  • Index notation - \(\alpha_{ij} x_i y_j = \sum_{i=1}^3 \sum_{j=1}^3 \alpha_{ij} x_i y_j\)

  • Matrix-vector notation - \(\mathbf{x^T A y} = \left(\begin{array}{ccc} x_1 & x_2 & x_3\end{array}\right)\) \(\left[\begin{array}{ccc} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \\ \end{array}\right]\) \(\left(\begin{array}{c} y_1 \\ y_2 \\ y_3\end{array}\right)\)

  • Other versions of index notation - \(x_i \alpha_{ij} y_j = \alpha_{ij} x_i y_j = \alpha_{ij} y_j x_i\)

Dummy vs Free Index #

\[ a_1 v_1 + a_2 v_2 + a_3 v_3 = \sum_{i=1}^3 a_i v_i = \sum_{k=1}^3 a_k v_k \]
  • \(i,k\) - dummy index: appears twice in a term and can be renamed without changing the equation

\[F_j = A_j \sum_{i=1}^3 B_i C_i\]
\[↓\]
\[F_1 = A_1 (B_1 C_1 + B_2 C_2 + B_3 C_3)\]
\[F_2 = A_2 (B_1 C_1 + B_2 C_2 + B_3 C_3)\]
\[F_3 = A_3 (B_1 C_1 + B_2 C_2 + B_3 C_3)\]
  • \(j\) - free index, appears once in each term of the equation
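In einsum notation the distinction is explicit: dummy indices disappear from the output subscripts, while free indices remain (values again arbitrary):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([1.0, 0.0, 2.0])
C = np.array([3.0, 1.0, 1.0])

# F_j = A_j (B_i C_i): i is a dummy index (summed), j is free (kept).
F = np.einsum("j,i,i->j", A, B, C)
print(F)   # -> A * (B @ C) = [ 5. 10. 15.]
```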

Addition and Subtraction of Tensors#

\(\mathbf{W} = a\mathbf{T}+b\mathbf{S}\)
add each component: \(W_{ijkl} = aT_{ijkl} + bS_{ijkl}\)

\(\mathbf{T}\) and \(\mathbf{S}\) must have the same rank, dimension and units.
\(\mathbf{W}\) takes the same rank, dimension and units as \(\mathbf{T}\) and \(\mathbf{S}\).

\(\mathbf{T}\) and \(\mathbf{S}\) are tensors, hence \(\mathbf{W}\) is a tensor. Tensor addition is commutative and associative.
This is the same as how vectors and matrices are added.
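A minimal sketch of the component-wise rule (an arbitrary rank-2 example):

```python
import numpy as np

# Component-wise rule W_ij = a T_ij + b S_ij for same-shape tensors;
# the values of a, b, T, S are arbitrary for this sketch.
T = np.arange(9.0).reshape(3, 3)
S = np.eye(3)
a, b = 2.0, -1.0
W = a * T + b * S
assert np.allclose(W, b * S + a * T)   # commutative, as for vectors/matrices
```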

Multiplication of Tensors#

Inner product#

= dot product

\(\mathbf{W} = \mathbf{T \cdot S}\) involves contraction over one index: \(W_{ik} = T_{ij} S_{jk}\), as in ordinary matrix and matrix-vector multiplication.

\(\mathbf{T}\) and \(\mathbf{S}\) can have different rank, but must have the same dimension.
Rank of \(\mathbf{W}\) = rank of \(\mathbf{T}\) + rank of \(\mathbf{S}\) \(- \;2\); dimension as \(\mathbf{T}\) and \(\mathbf{S}\); units as the product of the units of \(\mathbf{T}\) and \(\mathbf{S}\).

\(\mathbf{T}\) and \(\mathbf{S}\) are tensors, hence \(\mathbf{W}\) is a tensor.

Examples:

  • \(\lvert \mathbf{v} \rvert^2 = \mathbf{v \cdot v}\)

  • \(\bar{\bar{\mathbf{\sigma}}} = \bar{\bar{\bar{\bar{\mathbf{C}}}}}:\bar{\bar{\mathbf{\epsilon}}}\) or \(\sigma_{ij} = C_{ijkl} \epsilon_{kl}\) - Hooke’s Law
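A sketch of both contractions in NumPy; the stiffness tensor \(C\) here is random, purely to show the index pattern (note that the double contraction in Hooke's law removes two pairs of indices, so the rank drops by 4):

```python
import numpy as np

T = np.arange(9.0).reshape(3, 3)
S = 2.0 * np.eye(3)
v = np.array([1.0, 2.0, 2.0])

# Single contraction W_ik = T_ij S_jk: ordinary matrix multiplication.
W = np.einsum("ij,jk->ik", T, S)       # same as T @ S
print(np.einsum("i,i->", v, v))        # |v|^2 = v . v -> 9.0

# Hooke's law sigma_ij = C_ijkl eps_kl is a double contraction;
# C is random here, just to show the index pattern.
C = np.random.rand(3, 3, 3, 3)
eps = 0.5 * (T + T.T)                  # a symmetric rank-2 "strain"
sigma = np.einsum("ijkl,kl->ij", C, eps)
print(sigma.shape)                     # (3, 3): rank 4 + rank 2 - 4 = rank 2
```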

Tensor product#

= outer product = dyadic product \(\neq\) cross product

\(\mathbf{W} = \mathbf{TS}\) is often written as \(\mathbf{W} = \mathbf{T \otimes S}\)
No contraction: \(W_{ijkl} = T_{ij} S_{kl}\)

\(\mathbf{T}\) and \(\mathbf{S}\) can have different rank, but must have the same dimension.
Rank of \(\mathbf{W}\) = rank of \(\mathbf{T}\) + rank of \(\mathbf{S}\); dimension as \(\mathbf{T}\) and \(\mathbf{S}\); units as the product of the units of \(\mathbf{T}\) and \(\mathbf{S}\).

\(\mathbf{T}\) and \(\mathbf{S}\) are tensors, hence \(\mathbf{W}\) is a tensor.

Examples:

  • \(\mathbf{\nabla v}\) (gradient of a vector) \(\neq \mathbf{ \nabla \cdot v}\) (divergence)

remember that the del operator behaves like a vector, \( \nabla = \left( \frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3} \right)\), so \(\mathbf{\nabla v}\) is an outer product while \(\mathbf{\nabla \cdot v}\) is an inner product
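A sketch of the outer product and its rank bookkeeping (arbitrary values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, -1.0])

# Dyadic product of two vectors: rank 1 + rank 1 -> rank 2, no contraction.
ab = np.einsum("i,j->ij", a, b)        # same as np.outer(a, b)
print(ab.shape)                        # (3, 3)

# Likewise for higher ranks: W_ijkl = T_ij S_kl.
T = np.arange(9.0).reshape(3, 3)
S = np.eye(3)
W = np.einsum("ij,kl->ijkl", T, S)
print(W.shape)                         # (3, 3, 3, 3): rank 2 + rank 2 -> rank 4
```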

For both multiplications#

  • Distributive: \(\mathbf{A(B+C) = AB + AC}\)

  • Associative: \(\mathbf{A(BC) = (AB)C}\)

  • Not commutative: \(\mathbf{TS \neq ST}\) and \(\mathbf{T \cdot S \neq S \cdot T}\)
    However: \((\mathbf{T \cdot S})^T = \mathbf{S}^T \cdot \mathbf{T}^T\)
    And: \(\mathbf{ab} = (\mathbf{ba})^T\), but only for rank 2

Remember transpose: \(\mathbf{a \cdot T \cdot b = b \cdot T^T \cdot a} \longrightarrow T_{ji} = T_{ij}^T\)
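These properties are easy to check numerically; a quick sketch with random matrices:

```python
import numpy as np

T = np.random.rand(3, 3)
S = np.random.rand(3, 3)

print(np.allclose(T @ S, S @ T))          # False (almost surely): not commutative
print(np.allclose((T @ S).T, S.T @ T.T))  # True: the transpose identity

a, b = np.random.rand(3), np.random.rand(3)
print(np.allclose(np.outer(a, b), np.outer(b, a).T))  # True: ab = (ba)^T
```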

Special tensor - The Kronecker delta#

\(\mathbf{\delta}_{ij} = \mathbf{\hat{e}}_i \cdot \mathbf{\hat{e}}_j\)
\(\mathbf{\delta}_{ij} = 1\) for \(i = j\), \(\mathbf{\delta}_{ij} = 0\) for \(i \neq j\)

In 3-D:

\[\begin{split} \mathbf{\delta} = \mathbf{I} = \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]\end{split}\]

\(\mathbf{T \cdot \delta = T \cdot I = T}\) or \(T_{ij} \mathbf{\delta}_{jk} = T_{ik}\)

Isotropic tensors, invariant upon coordinate transformation:

  • scalars

  • \(\mathbf{0}\) vector

  • \(\delta_{ij}\)

\(\mathbf{\delta}\) is isotropic: \(\delta_{ij} = \delta_{ij}^{'}\) upon coordinate transformation.
It can be used to rewrite the dot product, \(T_{ij} S_{jl} = T_{ij} S_{kl} \delta_{jk}\), and the trace, \(A_{ii} = A_{ij} \delta_{ij}\).
Orthonormal transformation: \(\alpha_{ij} \alpha_{jk}^T = \delta_{ik}\)
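A short numerical check of these identities (the tensor \(T\) is arbitrary):

```python
import numpy as np

delta = np.eye(3)                      # the Kronecker delta as a matrix
T = np.arange(9.0).reshape(3, 3)

# Contracting with delta just relabels an index: T_ij delta_jk = T_ik.
print(np.allclose(np.einsum("ij,jk->ik", T, delta), T))   # True

# Trace via the delta: A_ii = A_ij delta_ij.
print(np.einsum("ij,ij->", T, delta), np.trace(T))        # 12.0 12.0
```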

See Exercise 8

Special tensor - Permutation symbol#

\(\epsilon_{ijk} = (\mathbf{\hat{e}}_i \times \mathbf{\hat{e}}_j) \cdot \mathbf{\hat{e}}_k\)
\(\epsilon_{ijk} = 1\) if \(i,j,k\) are an even permutation of 1,2,3
\(\epsilon_{ijk} = -1\) if \(i,j,k\) are an odd permutation of 1,2,3
\(\epsilon_{ijk} = 0\) for all other \(i,j,k\)

An even permutation follows the cyclic order 1 - 2 - 3; an odd permutation follows the order 1 - 3 - 2.

[Figure: even and odd permutations of 1, 2, 3]

Example:

  • \(\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1\)

  • \(\epsilon_{213} = \epsilon_{321} = \epsilon_{132} = -1\)

  • \(\epsilon_{111} = \epsilon_{112} = \epsilon_{222} = ... = 0\)

Note that \(\epsilon_{ijk}a_ib_j\mathbf{\hat{e}}_k\), where \(\mathbf{\hat{e}}_k\) is the unit vector in \(k\)-th direction, is index notation for the cross product \(\mathbf{a \times b}\)

Useful identity: \(\epsilon_{ijm} \epsilon_{klm} = \delta_{ik} \delta_{jl} - \delta_{il} \delta_{jk}\)
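The permutation symbol is easy to build and test numerically; a sketch verifying the cross-product form and the identity above:

```python
import numpy as np

# Build the permutation symbol eps_ijk explicitly (0-based indices).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0       # even permutations
    eps[i, k, j] = -1.0      # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Cross product in index notation: (a x b)_k = eps_ijk a_i b_j.
print(np.einsum("ijk,i,j->k", eps, a, b), np.cross(a, b))

# Check eps_ijm eps_klm = d_ik d_jl - d_il d_jk.
d = np.eye(3)
lhs = np.einsum("ijm,klm->ijkl", eps, eps)
rhs = np.einsum("ik,jl->ijkl", d, d) - np.einsum("il,jk->ijkl", d, d)
print(np.allclose(lhs, rhs))    # True
```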

See Exercise 8

Vector Derivatives - Curl#

Curl of a vector:

\[\begin{split}\nabla \times \mathbf{v} = \epsilon_{ijk} \frac{\partial}{\partial x_i} v_j \mathbf{\hat{e}}_k = \left(\begin{array}{c} \frac{\partial v_3}{ \partial x_2} - \frac{\partial v_2}{ \partial x_3} \\ \frac{\partial v_1}{ \partial x_3} - \frac{\partial v_3}{ \partial x_1} \\ \frac{\partial v_2}{ \partial x_1} - \frac{\partial v_1}{ \partial x_2} \\ \end{array}\right)\end{split}\]

Some Tensor Calculus#

Gradient of a vector is a tensor:

\[\begin{split}\nabla \mathbf{v} = \frac{\partial v_j}{\partial x_i}= \left(\begin{array}{ccc} \frac{\partial v_1}{ \partial x_1} & \frac{\partial v_2}{ \partial x_1} & \frac{\partial v_3}{ \partial x_1} \\ \frac{\partial v_1}{ \partial x_2} & \frac{\partial v_2}{ \partial x_2} & \frac{\partial v_3}{ \partial x_2} \\ \frac{\partial v_1}{ \partial x_3} & \frac{\partial v_2}{ \partial x_3} & \frac{\partial v_3}{ \partial x_3} \\ \end{array}\right)\end{split}\]

Such that the change \(\mathbf{dv}\) in field \(\mathbf{v}\) is: \(\mathbf{dv} = \mathbf{dx \cdot \nabla v}\)

Divergence of a vector:

\[ \nabla \cdot \mathbf{v} = \frac{\partial v_i}{\partial x_i} = \frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3} \]
\[ \nabla \cdot \mathbf{v} = \mathrm{tr}(\nabla \mathbf{v} )\]

The trace of a tensor is the sum of its diagonal elements.

Divergence of a tensor:

\[\begin{split} \nabla \cdot \mathbf{T} = \frac{\partial T_{ij}}{\partial x_j} = \left(\begin{array}{c} \frac{\partial T_{1j}}{ \partial x_j} \\ \frac{\partial T_{2j}}{ \partial x_j} \\ \frac{\partial T_{3j}}{ \partial x_j} \\ \end{array}\right) = \left(\begin{array}{c} \frac{\partial T_{11}}{ \partial x_1} + \frac{\partial T_{12}}{ \partial x_2} + \frac{\partial T_{13}}{ \partial x_3} \\ \frac{\partial T_{21}}{ \partial x_1} + \frac{\partial T_{22}}{ \partial x_2} + \frac{\partial T_{23}}{ \partial x_3} \\ \frac{\partial T_{31}}{ \partial x_1} + \frac{\partial T_{32}}{ \partial x_2} + \frac{\partial T_{33}}{ \partial x_3} \\ \end{array}\right)\end{split}\]

Laplacian = div(grad f), where f is a scalar function:

\[\nabla \cdot \nabla f = \nabla^2 f = \Delta f = \frac{\partial^2 f}{\partial x_i \partial x_i} = \frac{\partial^2f}{\partial x_1^2} + \frac{\partial^2f}{\partial x_2^2} + \frac{\partial^2f}{\partial x_3^2}\]
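A minimal numerical sketch of these operators on a uniform grid, using finite differences via np.gradient (the field choices are illustrative):

```python
import numpy as np

h = 0.05
x1, x2, x3 = np.meshgrid(*3 * [np.arange(0.0, 1.0, h)], indexing="ij")

f = x1**2 + x2**2 + x3**2                    # scalar field
grad_f = np.stack(np.gradient(f, h))         # (grad f)_i = df/dx_i

v = np.stack([x2, -x1, np.zeros_like(x1)])   # vector field (rigid rotation)
dv = np.stack([np.gradient(v[j], h) for j in range(3)], axis=1)  # dv[i,j] = dv_j/dx_i

div_v = np.einsum("iixyz->xyz", dv)          # trace of the gradient tensor
curl_v = np.stack([dv[1, 2] - dv[2, 1],      # (curl v)_k = eps_ijk dv_j/dx_i
                   dv[2, 0] - dv[0, 2],
                   dv[0, 1] - dv[1, 0]])
lap_f = sum(np.gradient(grad_f[i], h)[i] for i in range(3))

# Interior-point values: div v = 0, curl v = (0, 0, -2), laplacian f = 6.
print(div_v[5, 5, 5], curl_v[:, 5, 5, 5], lap_f[5, 5, 5])
```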

Summary#

Vectors
– Addition, linear independence
– Orthonormal Cartesian bases, transformation
– Multiplication
– Derivatives, del, div, curl
Tensors
– Tensors, rank, stress tensor
– Index notation, summation convention
– Addition, multiplication
– Special tensors, \(\delta_{ij}\) and \(\epsilon_{ijk}\)
– Tensor calculus: gradient, divergence, curl, ..

Further reading/studying, e.g.:
Reddy (2013): 2.2.1-2.2.3, 2.2.5, 2.2.6, 2.4.1, 2.4.4, 2.4.5, 2.4.6, 2.4.8 (not co/contravariant)
Lai, Rubin & Krempl (2010): 2.1-2.13, 2.16, 2.17, 2.27-2.32, 4.1-4.3
Khan Academy – linear algebra, multivariate calculus

Try yourself#

  • For this part of the lecture, try \(\color{blue}{\text{Exercise 7}}\) and optional advanced \(\color{orange}{\text{Exercise 8}}\)

  • Try to finish in the afternoon workshop: \(\color{blue}{\text{Exercise 2, 3, 5, 6, 7, 9}}\)

  • Additional practice: \(\color{green}{\text{Exercise 1, 4}}\)

  • Advanced practice: \(\color{orange}{\text{Exercise 8, 10}}\)