# Tensor Practice

- Tensor Review: Tensors are real!
- Tensor Practice: index notation and Einstein summation convention

## 1. Tensor Review: Tensors are real!

A brief, but useful, definition of a rank-$n$ tensor is that it is a *real-valued function of $n$ vectors*. This means that it is an object into which we can put $n$ vectors and expect a scalar out. Scalars are nice because they don’t transform when we change viewpoints or perform coordinate transformations. There are many notations for a tensor:
$$
\mathbf{T}, \mathbb{T}, \overleftrightarrow{T}, \underline{\underline{T}},
$$
and many ways to indicate how we put vectors into them, such as with dots $\cdot$, like in our definition of rotational kinetic energy $\mathbf{\omega}\cdot\mathbb{I}\cdot\mathbf{\omega}$. This works for tensors of rank 2, but becomes more difficult for higher rank tensors (do we put the third vector over it? the fourth below? the fifth … in front??). Kip Thorne offers an alternative that I like because it reminds me of the functional nature of tensors and translates nicely into pseudo-code when I need to do numerics. (Read chapter 1 of the linked text for a very readable summary of tensors in physics.) He writes a tensor as

$$
\mathbf{T}\left(\underline{\,}, \underline{\,}, \underline{\,}\right),
$$

where the number of slots tells you the rank (the number of input vectors). If the order of the inputs doesn’t matter, then the tensor is *symmetric*. If the order does matter, we can label the inputs to keep them straight:

$$
\mathbf{T}\left(\underline{\,}_1, \underline{\,}_2, \underline{\,}_3\right).
$$

This suggests a nice shorthand for writing a tensor, one you’ll see fairly often: an object with one index per input slot, e.g. $\mathbb{T}_{ij}$ for a rank-2 tensor.

Let’s take a look at a couple of tensors. We’ll start with the one from class, $\mathbb{1} (\underline{\,}, \underline{\,})$, which takes two inputs, so it is a rank-2 tensor. It is also symmetric, so we don’t need to label its inputs. Let’s put a position vector $\mathbf{r}$ into both slots; what do we get? The squared length, $\mathbb{1}(\mathbf{r}, \mathbf{r}) = \mathbf{r}\cdot\mathbf{r} = r^2$. How about different position vectors? Their dot product. The tensor $\mathbb{1}$ gives us a sense of distance. We call the tensor that defines distance for a vector space the *metric tensor*, and usually denote it with a $\mathbf{g}$. For flat spaces, $\mathbf{g}$ is constant and doesn’t depend on where you are, an assumption relaxed in General Relativity.
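As a quick numerical sketch of the metric as a function of two vectors (NumPy, with made-up example vectors; in flat Cartesian coordinates the components of $\mathbf{g}$ are just the identity matrix):

```python
import numpy as np

# Flat-space metric tensor: in Cartesian coordinates its components
# are the 3x3 identity matrix.
g = np.eye(3)

# Two made-up position vectors.
r1 = np.array([1.0, 2.0, 2.0])
r2 = np.array([3.0, 0.0, 4.0])

# Same vector in both slots: g(r, r) = |r|^2.
print(r1 @ g @ r1)   # 9.0  (1 + 4 + 4)

# Different vectors in the slots: g(r1, r2) = r1 . r2.
print(r1 @ g @ r2)   # 11.0, same as np.dot(r1, r2)
```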

Let’s try another: the moment of inertia tensor $\mathbb{I}\left(\underline{\,}, \underline{\,}\right)$. We can name the output of a tensor for different kinds of inputs. If we put a unit vector $\hat{\mathbf{n}}$ into both slots, we get the moment of inertia about the axis $\hat{\mathbf{n}}$. If we fill just one slot with an angular velocity vector, the object left over, with one empty slot, is a vector: the *angular momentum*, $\mathbf{L} = \mathbb{I}\left(\mathbf{\omega}, \underline{\,}\right)$. And if we put in two angular velocity vectors, we get twice the *rotational kinetic energy* (a scalar, more on this later).
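The slot-filling picture translates directly into array code. A minimal sketch, assuming a made-up diagonal moment of inertia tensor and angular velocity:

```python
import numpy as np

# A made-up (diagonal) moment of inertia tensor, as a rank-2 array.
I = np.diag([2.0, 3.0, 4.0])
omega = np.array([1.0, 0.0, 1.0])   # made-up angular velocity
n_hat = np.array([0.0, 0.0, 1.0])   # unit vector along z

# Both slots filled with a unit vector: moment of inertia about that axis.
I_zz = n_hat @ I @ n_hat            # 4.0

# Only one slot filled: the leftover one-slot object is a vector,
# the angular momentum L = I(omega, _).
L = I @ omega                       # [2., 0., 4.]

# Both slots filled with omega: twice the rotational kinetic energy.
T = 0.5 * (omega @ I @ omega)       # 3.0
```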

To play with the moment of inertia tensor and understand a derivation from class, we need to introduce one more tensor, the Levi-Civita tensor $\mathbf{\varepsilon}$. This tensor defines the cross product. What is its rank? Well, when we put in two vectors, say $\mathbf{a}$ and $\mathbf{b}$ to form $\mathbf{a}\times\mathbf{b}$, what comes out? A vector! But a tensor is a real-valued function, so there is one more slot to put a vector into, making it the rank-3 tensor

$$
\mathbf{\varepsilon}\left(\underline{\,}_1, \underline{\,}_2, \underline{\,}_3\right),
\qquad
\mathbf{\varepsilon}\left(\mathbf{c}, \mathbf{a}, \mathbf{b}\right) = \mathbf{c}\cdot\left(\mathbf{a}\times\mathbf{b}\right).
$$
This tensor is not symmetric (in fact, it’s also called the *anti-symmetric tensor*), so I need to label the inputs. If I put three position vectors into $\mathbf{\varepsilon}$, what do we call that scalar? You may remember that the magnitude of the cross product is the area of the parallelogram defined by the two input vectors. Throw in another vector and what you have is a volume! Like the metric tensor, the Levi-Civita tensor tells us something very important about our vector space.
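To see the Levi-Civita tensor as a concrete array, we can build its components and fill its slots with `np.einsum` (the vectors below are made-up examples):

```python
import numpy as np

# Build the rank-3 Levi-Civita tensor: +1 for even permutations of (0,1,2),
# -1 for odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 2.0])

# Two slots filled: the cross product, (a x b)_i = eps_ijk a_j b_k.
cross_ab = np.einsum('ijk,j,k->i', eps, a, b)
print(cross_ab)        # [0. 0. 1.], same as np.cross(a, b)

# All three slots filled: the (signed) volume of the parallelepiped.
volume = np.einsum('ijk,i,j,k->', eps, c, a, b)
print(volume)          # 2.0
```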

### Index Notation and Einstein summation convention

With a set of basis vectors, we can write a vector $\mathbf{a}$ by its components along those basis vectors, $(a_1, a_2, a_3)$, which we can abbreviate as $a_i$ where $i$ runs from 1 to 3. A very important operation with vectors is the dot product, which is written and computed by

$$
\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^{3} a_i b_i = a_i b_i.
$$

The last term in the above has our abbreviated vector notation and makes use of the *Einstein summation convention*. The convention is, to save time writing summation symbols, to implicitly sum over an index any time it is repeated.
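NumPy’s `einsum` implements exactly this convention, so the abbreviated notation translates literally into code (example vectors are made up):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # made-up vectors
b = np.array([4.0, 5.0, 6.0])

# In the subscript string, the repeated index i is summed over,
# so 'i,i->' is literally a_i b_i: the dot product.
dot = np.einsum('i,i->', a, b)
print(dot)                      # 32.0 = 4 + 10 + 18
```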

To see how this convention works with tensors, we’ll continue on with the example of the dot product. We already saw above that the dot product can be represented as the rank-2 identity tensor $\mathbb{1}$, or the metric tensor $\mathbf{g}$. That is,

$$
\mathbf{a}\cdot\mathbf{b} = \mathbb{1}\left(\mathbf{a}, \mathbf{b}\right),
$$
which we can write as $a_i\mathbb{1}_{ij}b_j$, where as before the summation over both indices (independently) is implied. The components of the identity tensor can be represented by the Kronecker delta

$$
\mathbb{1}_{ij} = \delta_{ij} =
\begin{cases}
1 & \text{if } i = j, \\
0 & \text{if } i \neq j.
\end{cases}
$$
Now, when we sum over all the $i$ and $j$ combinations, the only terms that won’t be zero are the terms where $i$ equals $j$. Hence, the above reduces to

$$
a_i\,\delta_{ij}\,b_j = a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3,
$$
which is the dot product that we wanted to compute. This notation makes it easier to represent and perform computations with many tensors of various ranks.

## 2. Tensor Practice

Let’s practice manipulating tensor equations by proving a relation we had in class. We said that the rotational kinetic energy of a point mass $m$ at position $\mathbf{r}$, rotating with angular velocity $\mathbf{\omega}$, given by

$$
T = \frac{1}{2}m\,\mathbf{v}\cdot\mathbf{v} = \frac{1}{2}m\left(\mathbf{\omega}\times\mathbf{r}\right)\cdot\left(\mathbf{\omega}\times\mathbf{r}\right),
$$
could be written as

$$
T = \frac{1}{2}\,\mathbf{\omega}\cdot\mathbb{I}\cdot\mathbf{\omega},
\qquad
\mathbb{I}_{km} = m\left(r^2\delta_{km} - r_k r_m\right).
$$
Prove this. To help, use the identity

$$
\varepsilon_{ijk}\varepsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}.
$$
We write the first factor as $\left(\mathbf{\omega}\times\mathbf{r}\right)_i = \varepsilon_{ijk}\omega_j r_k$ (note that the only unpaired index is $i$, which indicates the components of the cross-product resultant vector). The second factor needs to be written as $\varepsilon_{ilm}\omega_l r_m$, with the $i$ repeated to signify the dot product between these vectors. None of the other indices can be repeated because they are already paired. Plugging in, and simplifying the delta functions,

$$
T = \frac{1}{2}m\,\varepsilon_{ijk}\varepsilon_{ilm}\,\omega_j r_k\,\omega_l r_m
= \frac{1}{2}m\left(\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}\right)\omega_j r_k\,\omega_l r_m
= \frac{1}{2}\,\omega_k\, m\left(r^2\delta_{km} - r_k r_m\right)\omega_m,
$$

where in the last step the surviving terms have been regrouped and the dummy indices relabeled.

See that this has precisely the form we want! $\mathbf{\omega}$’s on either side of something with two indices ($k$ and $m$). Remembering that $\delta_{km}$ gives the components of the identity tensor, and that $r_k r_m$ are the components of the outer product $\mathbf{r}\otimes\mathbf{r}$, this can be written in a coordinate-independent way as

$$
\mathbb{I} = m\left(r^2\,\mathbb{1} - \mathbf{r}\otimes\mathbf{r}\right),
\qquad
T = \frac{1}{2}\,\mathbf{\omega}\cdot\mathbb{I}\cdot\mathbf{\omega}.
$$
This is the form of the moment of inertia tensor we used in class and on the homeworks.
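As a sanity check of the derivation, a short NumPy sketch (with made-up values for $m$, $\mathbf{r}$, and $\mathbf{\omega}$) compares the two forms of the kinetic energy:

```python
import numpy as np

# Made-up point mass, position, and angular velocity.
m = 2.0
r = np.array([1.0, 2.0, 3.0])
omega = np.array([0.5, -1.0, 2.0])

# Direct form: T = (1/2) m (omega x r) . (omega x r)
v = np.cross(omega, r)
T_direct = 0.5 * m * np.dot(v, v)

# Tensor form: I_km = m (r^2 delta_km - r_k r_m), T = (1/2) omega_k I_km omega_m
I = m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
T_tensor = 0.5 * (omega @ I @ omega)

# The two forms agree to floating-point precision.
print(np.isclose(T_direct, T_tensor))   # True
```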