A simple note on SpaceTime. Note 1.

Date : 07/01/2018
Version: 0.7
By: Appie vd Sel
Status: not ready yet
Remark: Please refresh the page to see any updates.

Fig 1: Just an illustration of some views on SpaceTime.

It's nice to talk about some interesting fields in Physics.
However, I have no doubt that you will find this text fairly "high-level",
and not much "in depth".

Quite a few new theoretical insights have emerged, say, since the early 1980s (or so), on the "structure"
of SpaceTime. Why I say SpaceTime, instead of just Space, is indeed somewhat debatable.
I will try to illustrate that in a moment. It is true that the ideas of Einstein, Lorentz, Minkowski
and others, in the first decades of the 1900s, showed that Time and Space are indivisible parts
of the same "something", called SpaceTime.

Later, physicists have always dreamed of reconciling Einstein's "General Relativity" with Quantum Mechanics.
Both are pretty old theories by now (developed in the first half of the former century), but they still form
the very foundations of physics.

For example, most people view Space as a true continuum. One reason why this is completely reasonable,
is that on a human scale, Space is "smooth", and there seems to be no reason to say that it has some
discrete, or sort of granular, or sort of lattice form.

However, if you consider Quantum Mechanics, which works very well in the (sub-)Atomic domain, and with
waves and particles, then it turns out that discrete states, and quanta, and eigenvalues,
and specific quantum numbers, seem to dominate the theory.
What's more, the theory is in accordance with observations and experiments.
So, it works very well with events, objects and observables, on a very small scale, with matter and radiation.

From such a perspective, it's not so strange that physicists, a long time ago, might thus have considered the idea,
that even quantization of "Space" might be true too. As if Space has a sort of "lattice" structure.
This then might be a reality on an extremely small scale, so that on larger scales (if you zoom out),
Space just simply "looks" like a classical continuum again.

Early ideas of Penrose, and a modern theory such as "Loop Quantum Gravity", indeed have provided
a physical framework (a fairly complete theory) for such ideas.

But physics is a strange adventure. Indeed, there exist quite a few alternatives (or slightly different
alternatives) for something like "Loop Quantum Gravity", like other unifying "Quantum Gravity" theories.
One other rather popular alternative unifying theory is "String theory".

But some newer, highly remarkable (almost exotic) ideas hit the community of physicists too, during the 2000s.
Today, there exist quite a few "streams", most notably among theoretical physicists.
Some have published articles saying that Space, when considered "in depth", is actually built from extremely
small "wormholes". Some very well-known theoretical physicists support such ideas.

Others, and this seems to gather more and more consensus as time goes on, have made it plausible
that a feature called "entanglement" rules (or determines) Space or SpaceTime (and gravity).

And if the above was not enough, something else is going on too.
As of the '80s (or so), some hints from Quantum Gravity / Lattice theories, and even numerical theoretical
calculations (based on very specific theorems), seem to indicate that the number of dimensions (d)
might depend on "scale" (dimensional reduction). In the limit, when the scale of SpaceTime goes to the minimum,
d goes to "2". In plain language: as if a surface is the most basic unit.

Needless to say, it all still is very hypothetical, but those hints are not mere speculations (!)
You might wonder, as I do, how such statements as listed above relate to the "curled up"
extra dimensions of String / M theories.
Anyway, I would like to describe (in a very lightweight way) such hints as mentioned above.
Indeed, Physics today is truly very exciting. But the "big picture" still is quite incomplete.

It's also nice to say some words on phrases such as "canonical" and "covariant" formulations of a theory, and the meaning
of "background" (or: how to get the SpaceTime out of the Theory).

It's pure fun to take a closer look at such theories, and this is what I will try to do in this note.
This note will then be further characterized by a fairly large number of short chapters.

CONFESSION: However, I will not attempt to comply with mathematical strictness and total mathematical "purity".
To convey basic information on exciting new Theories is my main goal. Unfortunately, exact formulations
will often not be realized, I am afraid.
But, the note should therefore be easier to read.

I think it's going to be quite large. But, I hope that someone likes it!

1. A tiny bit of Math.

First, I started this doc using a certain methodology. However, recently I realized that some Physical Theories
cannot be explained (albeit at a high level), without some basic understanding of certain principles.
Indeed, I have a few of such principles in mind, so let's do those first.

Sections 1.1 up to 1.4 are centered around the "metric". Section 1.5 tries to say something useful on gauge symmetry.
Then, section 1.6 is the fastest, and shortest (and most incomplete) overview of several branches of physics.

1.1 A few words about the Metric:

Ultimately, my main goal of this section is to make the appearance of the "metric tensor" (the metric of Riemannian geometry)
a bit "plausible", or a bit acceptable.
The fundamental metric (or metric tensor) may be written as:

ds² = g_μν dx^μ dx^ν     (equation 1)

Do not worry about the upper and lower indices in that equation. It will become clear later on.

A "metric" is simply a rather expensive, and luxury word for "distance" in Space (or distance in SpaceTime).

In a flat Euclidean Space, like an "ordinary" 3D Space (R3), you may draw a Cartesian coordinate system.
Basically, such a Cartesian coordinate system uses three perpendicular axes, the x-, y- and z-axis.
The whole purpose of such a coordinate system is to describe or pinpoint "points" in Space.

A point in such a Space might be denoted by (x, y, z). It's also possible to draw something that's called
a vector, from the Origin (center) of the coordinate system, to this random point (x, y, z).

Note 1:

If you like, you can try a simple introduction to vectors first. Only the first few pages
of the following link might be useful, here. If you like to try it, then use this link.
You can also use it for some illustrations of a flat R3 Space, and for some vector illustrations.

The fact that such a Euclidean Space is "flat", means this. Suppose you are on the x-axis. Suppose you walk in
the +x direction. Your position might then be, as time passes, something like (1,0,0), then (2,0,0), etc..
Your position does not depend in any way on "y" or "z". That is, Δx, as you move, has no relation with changes
in "y" or "z": those do not happen at all. You will see this clearly in matrix form, in just a moment.

Distance in R3:

This is basically no more than applying the "Pythagorean theorem".

For example, in R3 we have the square of the distance between two points P=(x1, y1, z1), and Q=(x2, y2, z2):

|PQ|² = (x2 - x1)² + (y2 - y1)² + (z2 - z1)²

Here, implicitly, it is assumed in this discussion, that the Coordinate system, or any point in it, is fully described by a linear
combination of the basis (or unit-) vectors (1,0,0), (0,1,0), and (0,0,1).

When we would consider the distance from some point (x,y,z) relative to the Origin of our coordinate system, we may simply write:

ds² = dx² + dy² + dz²   (equation 2)

The "ds" most often represents "very small distance", as if we would only consider very small variations.
Therefore, in true flat Space, you may view equation 2 to be equivalent to:

s² = x² + y² + z²   (equation 3)

However, I will stick most often to the "ds" (and "dx" etc..) notation.
In general, in Rn, of dimension "n" (with "n" axes):

ds² = dx1² + . . . + dxn²
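The distance formulas above can be tried out directly. Below is a small Python sketch (the points are just invented for illustration) that applies the Pythagorean theorem in Rn:

```python
import math

def distance(p, q):
    """Euclidean distance between two points in R^n (Pythagorean theorem)."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

# Two invented points in R3; the coordinate differences form a 3-4-5 triangle:
P = (1.0, 2.0, 2.0)
Q = (4.0, 6.0, 2.0)
print(distance(P, Q))   # prints 5.0
```

The same function works unchanged for points in R2, R6, or any Rn, since it just sums the squared differences over all coordinates.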

Now, equation 1 might be very intriguing. But here is a very simple equivalent equation for Euclidean flat Space R3:

Suppose we review equation 1 again, however this time in flat Space, and using plain matrices/vectors. Then:

ds² = ┌ 1 0 0 ┐ ┌ x ┐   ┌ x ┐
      │ 0 1 0 │ │ y │ · │ y │
      └ 0 0 1 ┘ └ z ┘   └ z ┘

    = ┌ 1x+0y+0z ┐   ┌ x ┐
      │ 0x+1y+0z │ · │ y │
      └ 0x+0y+1z ┘   └ z ┘

    = ┌ x ┐   ┌ x ┐
      │ y │ · │ y │
      └ z ┘   └ z ┘

    = x² + y² + z²   (equation 4)

Let me explain this:

In equation 4, you see a matrix, then a column vector, then again a column vector.
Instead of expressing the vector (x, y, z) as a row vector, you may also express it as a column vector,
which is very "common practice" in Euclidean space.

First, I let the matrix operate on the first column vector. If you apply the rules from vector calculus,
with this specific matrix, you will get exactly the same column vector again.
Then, what is left is no more than an "inner product" (scalar product) of (x, y, z) with itself.
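Equation 4 is easy to verify numerically. A small sketch with NumPy (the vector values are arbitrary): the identity metric of flat space simply returns the ordinary squared length.

```python
import numpy as np

g = np.eye(3)                    # the flat-space metric: the identity matrix
v = np.array([2.0, 3.0, 6.0])    # an arbitrary column vector (x, y, z)

# First the matrix operates on the column vector, then the inner product:
ds2 = v @ (g @ v)
print(ds2)                       # 2^2 + 3^2 + 6^2 = 49.0
```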

I must say that equation 4 already "resembles" equation 1. Why? Since "g_μν" is a tensor object
(in a general sense), and in most cases, such a tensor can be identified with a matrix.
The "dx^μ and dx^ν" expressions in equation 1 are general expressions for vectors,
similar to (x, y, z).

The specific matrix, in this case:

g_μν = ┌ 1 0 0 ┐
       │ 0 1 0 │
       └ 0 0 1 ┘     (equation 5)

This really is an expression of the fact that we are dealing with flat Space. Suppose you move along the z-axis, no matter
in which direction; then while you are moving, there is no change in your x-coordinate, or y-coordinate.

Now, suppose the z-axis, is curved. Then, while moving along z, there would be changes in "x" and "y" too!

The fact that the matrix above only has values in the diagonal elements (which are 1), is actually
the mathematical way of saying that we are in flat Space. And here it would be a flat 3D space.

In flat space, while traveling along a certain axis, there are no changes in the other coordinates.
In curved space, while traveling along a certain axis, there are changes in certain other coordinates.

A mathematical way to say "how" a certain coordinate changes, if you move along another axis, is taking
the partial differential. For example, to see how "z" would change due to a variation in "x",
we would write "∂ z / ∂ x".

This is rather similar to highschool math, where you may have seen expressions like "dy/dx",
which also expresses, how "y" would vary, under variations of "x".

If we would not be sure of our 3D space, if it would be really flat, or possibly strangely curved in some way,
then our "metric tensor" might be written as:

g_μν = ┌ ∂x/∂x  ∂x/∂y  ∂x/∂z ┐
       │ ∂y/∂x  ∂y/∂y  ∂y/∂z │
       └ ∂z/∂x  ∂z/∂y  ∂z/∂z ┘     (equation 6)

For example, ∂y/∂x would mean: what is the change in "y" due to variations in "x"?
In case of a flat Space, it would be "0". In case of some curved Space, it could have some non-zero value.

For example, ∂z/∂z would mean: what is the change in "z" due to variations in "z"?
This would be "1". The ratio of z/z, or dz/dz, is always "1".

So, in case of a flat Space, ∂x/∂x, ∂y/∂y, ∂z/∂z would all be "1",
while all other (non-diagonal) matrix elements would be "0", exactly as we see in equation 5.
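The matrix of partial differentials in equation 6 can be approximated numerically. Below is a Python sketch (the coordinate maps are invented for illustration): for a flat map the matrix is the identity of equation 5, while a sheared map picks up a non-zero off-diagonal entry.

```python
import numpy as np

def jacobian(f, p, h=1e-6):
    """Approximate the matrix of partials (as in equation 6) at point p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    J = np.zeros((n, n))
    for j in range(n):
        dp = np.zeros(n)
        dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * h)   # central difference
    return J

# Flat case: coordinates do not depend on each other.
flat = jacobian(lambda v: v, [1.0, 2.0, 3.0])
print(np.round(flat, 6))         # the identity matrix, as in equation 5

# An invented "curved-looking" map, where z also shifts with x:
curved = jacobian(lambda v: np.array([v[0], v[1], v[2] + v[0] ** 2]), [1.0, 2.0, 3.0])
print(np.round(curved, 6))       # now the entry for ∂z/∂x is 2x = 2
```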

The above still is not really completely equal to equation 1. This will come later.
But, I hope that the appearance of equation 1, is a bit more acceptable now.

Note 2:

It helps to have a certain understanding on Matrices, although for this note it is not absolutely required.
But I surely recommend it. I have a small note on Matrices too. You only need to browse through it.
If you like to try it, then use this link.

1.2 A few words about the Einstein notation and coordinate transformation:

If section 1.1 above helped in understanding the metric tensor, then that's really great!
This is so, since that object is very important in SpaceTime discussions.

Another important "thing" is the "Einstein notation".
Let's see what this is about.

Compact notations: an example:

A matrix may have "n" rows, and "m" columns. In many cases, "n=m", in which case it is called
a square matrix. Below we see an example of a 3x3 matrix "A":

A = ┌ a11 a12 a13 ┐
    │ a21 a22 a23 │
    └ a31 a32 a33 ┘     (equation 7)

It's really true that mathematicians and physicists do want to minimize, or compactify, their representation
of mathematical objects. Believe it or not: the matrix above is often simply abbreviated by "aij".

In equation 7 above, we can see that the matrix consists of the elements a11, a12, etc., all the way up to a33.
Note that there are always 2 indices needed to point to a certain matrix element. We need an "i" and a "j"
to exactly specify a certain element. Of course, the indices do not need to be denoted by "i" and "j".
You are free in your choice. They may also be denoted by Greek symbols like μ and ν.

However, it saves a lot of writing if everybody agrees that such a "difficult to write down" object,
like the one shown in equation 7, can simply be abbreviated by "aij".
Of course, somewhere in the context, it must then be clear that both "i" and "j" run from 1 to 3.
Otherwise, it would not be clear that we are dealing with a 3x3 matrix, and not e.g. with a 5x5 matrix.

I hope you have tried the "hint" in note 2 above, in order to find out more on matrices (if you would need it).

The Einstein notation: an example:

The story below is (I hope) a nice intro into what is called the "Einstein notation".
It's really true that professional articles almost never fully write out mathematical objects;
instead, they use the "compact" notation (like e.g. the "Einstein notation").

We already have seen the metric in a flat Euclidean space R3:

ds² = dx² + dy² + dz²

Now, we are very curious as to how to express such metric if we change our coordinate system, or what is the same, switch
from one set of basis vectors to another set of basis vectors.

This procedure is no more than a "vehicle" to illustrate the Einstein notation.
So, the transformation itself is less important.

Often, folks choose an orthonormal coordinate system, like the Cartesian one, where the unit vectors are all perpendicular to each other.
However, in a general discussion, it is no requirement that the basis vectors are perpendicular, as long as they are "independent",
meaning that any point in Space can be described by a linear combination of those vectors.
But those basis vectors may have certain "angles" between them, and indeed, not necessarily 90 degrees.

Let's consider R3 again. Suppose we have two sets of basis vectors:

S1 = {v1,v2,v3}
S2 = {w1,w2,w3}

Suppose that the basis vectors are all independent. Then any vector in R3 may be expanded
in a linear combination of the vectors of S1, or S2.

So, if we have just "some" vector u, then for example u might be written as:

u = a v1 + b v2 + c v3.

Then, any of the basis vectors (say from S1), can be expressed as a combination of the vectors of the other set.
So, we may have:

v1 = a11w1 + a12w2 + a13w3
v2 = a21w1 + a22w2 + a23w3
v3 = a31w1 + a32w2 + a33w3     (equations 8)

Please take notice that the aij coefficients, form a "matrix". In this case, it's a square 3x3 matrix.

Now, if we consider two descriptions of some point in space, depending on the chosen coordinate system, we can describe that point
as for example (x, y, z), or (x', y', z'). So, in this case, (x, y, z) might be the representation of that point using one set of
basis vectors, while (x', y', z') might be the representation of that point using the other set.

(Note the "apostrophes" denoted by '.)

It's not hard to express the coordinates of one system in terms of the other one. For example:

x' = a11x + a12y + a13z
y' = a21x + a22y + a23z
z' = a31x + a32y + a33z     (equations 9)

In a condensed notation, mathematicians and physicists often use an expression like the one showed below.
In this case, a whole set of equations is simply captured in one simple expression:

x'i = Σj=1..3 aij xj   (equation 10)

Here, we also have generalized the coordinates. Instead of talking about x, y, z etc.., we simply use an index "i",
to denote the coordinates. So, something like xi, or x'i, will replace the different letters like x, y, etc..

Also, now we have a nice extension when we would talk about Rn, where i then ranges from 1 to n.

Also, Σ is a symbol that is used to denote a "summation". In the example above, we sum over "j".
So, each time we select a certain "i", we sum over the j's.

You can try it out yourself. For example for the second coordinate x'2, we would have:

x'2 = a21x1 + a22x2 + a23x3 = Σj=1..3 a2j xj

Equation 10 can even be written in a more "condensed" format. If it's trivial that the summation
is along a certain index (say for example "j"), then the summation symbol is often completely left out.
At first, it may appear somewhat strange, but it's heavily used in scientific articles.
Then, equation 10 becomes:

x'i = aij xj   (equation 11)

Note that the whole set of equations 9 is captured in equation 11.
Per "i", we have a sum over the "j's", resulting in all of the equations listed in (9).
This is also often called "the Einstein notation".

Again, in general, it must be evident that the summation would go along a certain index, otherwise it would be
somewhat obscure. You can check for yourself, just like above, that we indeed sum along "j".
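This summation convention is exactly what the NumPy function einsum implements. A small sketch (the matrix and vector values are arbitrary), checking equation 11 against the written-out sums of equations 9:

```python
import numpy as np

a = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])    # an arbitrary coefficient matrix a_ij
x = np.array([1.0, 1.0, 2.0])      # an arbitrary vector x_j

# Einstein notation x'_i = a_ij x_j: the repeated index j is summed over.
x_prime = np.einsum('ij,j->i', a, x)
print(x_prime)                     # [3. 7. 6.], the same as a @ x

# The same result, with the sums written out as in equations 9:
explicit = [sum(a[i, j] * x[j] for j in range(3)) for i in range(3)]
print(explicit)                    # [3.0, 7.0, 6.0]
```

The subscript string 'ij,j->i' says it all: "j" occurs in both inputs but not in the output, so it is summed over, just like in equation 11.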

We are still not at our "core" subjects, like Relativity, Kaluza-Klein theory, micro-Black Holes etc..
It's still just some preliminary theory, we are studying here, in this Chapter.

Let's now touch another subject, namely covariant and contravariant indices. For a metric, my feeling
is that it is "a bit" overrated. It is important of course, but not "world-shocking".

1.3 A few words about the covariant and contravariant indices:

I hope I do a reasonable job here. I think that it is not so very important to "nitty-gritty"
plough through all equations listed below. It's good enough to follow the main theme.
It all in all is not "world-shocking", but it's indeed not very trivial stuff either.

1.3.1 Introduction:

The g_μν in equation 1 is a tensor object. In this example, it uses two indices, here named μν.
Of course, there is nothing wrong with using "i" and "j" instead, as index identifiers. But in this case it is simply a convention
to use μν (sort of).

Since there are two indices, it can be identified with a matrix. You probably know that we in general may have an "mxn" matrix,
using "m" rows and "n" columns. At the same time, the "m" and "n" will function as "indices", that is, use the "m"
to walk along the consecutive rows, and use the "n" to walk along the consecutive columns of the matrix.

A vector uses only 1 index, like the (row) vector (v1, v2, .., vn), which is denoted
in Einstein notation simply by vi (where it is understood that i ranges between 1 .. n).

But what if we see this object?: Tijk. This object uses three indices, and cannot be identified with
a matrix anymore. However, with some creativity, you may say that it can be associated with a 3D matrix, which looks
like a cube (if the max ranges of i, j, and k are the same): at one side we may have an "i x j" matrix, but there is also
"depth", due to the "k" index.
Or, you may also read it as a "stack" of 3x3 matrices.

So, mathematical objects are possible which are even "wider / more descriptive" (so to say) than a matrix.

But why the distinction between upper- and lower indices?
Let's stick to vectors for a moment. The discussion will also hold for tensor elements.

The qualifiers "covariant" and "contravariant" only apply to the components of vectors,
and thus we can also only talk about covariant and contravariant indices.

What does, say, a 4-dimensional vector look like? You may see some 4-dimensional vector like so:

V = (v1, v2, v3, v4)

which is a row vector.

It also may be notated as a column vector, like:

V = ┌ v1 ┐
    │ v2 │
    │ v3 │
    └ v4 ┘

The difference is really subtle. In some cases, folks "couple" whether a vector is written
as a row or column vector, to the ideas of covariant or contravariant components.
I say: take care! In some cases it holds, like with elements which are complex numbers.
But in general: this comparison s?cks (question mark inserted on purpose).

A better, and more physical interpretation is this:

Suppose we change the basis of our coordinate system. What happens to the components of a vector?
Note that I am talking purely about components, like vi.

-We know that there exist row vectors and column vectors. Is it only the representation which is different?
Almost always: yes. But not 100% "always". For example, in Quantum Mechanics, they might have different
interpretations, and they are called Bra's and Ket's, in Dirac's vector formulation.

-Why a distinction between covariant and contravariant vectors? This is maybe a distinction
between a constructed vector and a physically observable vector.

A position in Space, or a velocity, can be viewed as physically observable vectors (contravariant).
A gradient of a scalar field, can be viewed as a vector construction (covariant).

The typing of "physically observable" or "via construction", is not universally valid,
but I use it since it may help in the following discussion.
Also: The distinction is not about the qualification whether an object would be a "true" vector or not.

1.3.2. Covariant (constructed vector):

Suppose you have a scalar function defined on R3 Space. Since it's a scalar function,
the values of the function are simply numbers. Let's call the numbers "w".
So the function is w = ϕ(x,y,z). This is an R3 -> R function.

It could be a function that describes pressures in Space, or Temperatures, or whatever other sort of "pure" values.

If those pressures or temperatures are not constant in Space, then the values differ over various regions.
In such a case, it's possible to define a vector ∇ϕ, which represents the direction and magnitude
of the max change of ϕ(x,y,z) at a certain point (or at all points actually).
If we now rotate our coordinate system, ϕ and ∇ϕ simply sweep along with the rotation.
If such behaviour happens, the vector is called covariant.

If we have such a vector, say "A", then its components are notated with lower indices, like Ai.

1.3.3. Contravariant vector (directly physically observable vectors):

You might review a position in Space, or even in R2.
If in the plane, suppose you draw a vector (do not care in what direction). Now you rotate the x- and y-axes
counterclockwise by, say, 60 degrees. To view the vector from this new perspective, you must
rotate the vector components the other way, by the same amount.
Take notice that I say: view the vector from this new perspective.
It's a nice exercise to try to visualize that mentally.

If we have such a vector, say "B", then its components are notated with upper indices, like Bi.

It was just an agreement, some time ago, to use lower and upper indices that way.

1.3.4. Again a remark on the metric tensor:

If we now take a look at the metric tensor again (equation 1):

ds² = g_μν dx^μ dx^ν

Then we have two contravariant vectors (indices) listed, namely dx^μ dx^ν.

Einstein notation is in use, so here we have a sum over all spatial components, which looks like:

ds² = dx² + dy² + dz², if we are in flat Euclidean R3.

If we now have curved space, or a constructed object is in effect describing lower and higher densities
which determine the curvature of Space, then we need the g_μν object too,
to describe the full metric (or account for all factors which have an effect on the distance).

1.3.5. The equations 9, 10, 11 written slightly "differently" :

Let's repeat equations 9 again:

x' = a11x + a12y + a13z
y' = a21x + a22y + a23z
z' = a31x + a32y + a33z    

We have the transformed coordinates (x', y', z'), and the original coordinates (x, y, z).

Let's for example focus for a moment on the equation for x'.
If we now take the partial differential with respect to x:

∂x'/∂x = ∂/∂x  (a11x + a12y + a13z) = a11.

It's quite an amazing result. It just returned the element a11. However, the differential itself
should not arouse your amazement.
Even from highschool math, we have similar results. Suppose you have the linear function y=3x+5, then dy/dx = 3.

Obviously, similar results hold for all aij, for all coefficients in equations 9. Thus:

aij = ∂x'i / ∂xj     (equation 12)

Similarly, using general indices "i" and "j", then equation 10 (which we saw before):

x'i = Σ aij xj  

can be written as:

x'i = Σj (∂x'i/∂xj) xj     (equation 13)

When using Einstein notation, the summation symbol Σ is omitted.
Now, we need to link this result to the understanding of covariant and contravariant objects (like a vector).

We will see that the aij as expressed in equation 12, is no more than the projection
of the vector x' on the axes of the coordinate system in use, or, in other words, the projection
of the vector x' on the set of basis vectors in use.

Take a look at the figure below. Here we see R2, and two sets of basis vectors.

The set {e1, e2} is the usual set of orthonormal basisvectors, (1,0) and (0,1).

We have a Linear Mapping "L", which rotates e1 and e2, into a new coordinate system.
This mapping rotates counterclock wise, over an angle ϕ.

Fig 2: Rotation of a set of basis vectors, to produce the mapped basis vectors

So, we have the sets:

S1 = {e1, e2}   which is shown in "red" in the figure above.
S2 = {L(e1), L(e2)}   which is shown in "blue" in the figure above.

If you take a look at L(e1), and you project this vector on the original basisvectors,
then you can see that L(e1) = cos(ϕ) e1 + sin(ϕ) e2.
Here, I assume that you are comfortable with cos() and sin() functions.

If you need some help with sin() and cos() then you might want to look here.

It is thus easy to find:

L(e1) = cos(ϕ) e1 + sin(ϕ) e2.
L(e2) = -sin(ϕ) e1 + cos(ϕ) e2     (equations 14).

By the way, the matrix associated with the mapping L is:

┌ cos(ϕ)  -sin(ϕ) ┐
└ sin(ϕ)   cos(ϕ) ┘

Remember, in coordinate transformations, if you know the images of the original
basis vectors, expressed in the original basis, you immediately know the columns of the Matrix.

Let's now see what happens to the usual coordinates:

┌ x1' ┐   ┌ cos(ϕ)  -sin(ϕ) ┐ ┌ x1 ┐
└ x2' ┘ = └ sin(ϕ)   cos(ϕ) ┘ └ x2 ┘

This leads to:

x1' = cos(ϕ) x1 - sin(ϕ) x2
x2' = sin(ϕ) x1 + cos(ϕ) x2     (equations 15)

Let's apply equation 12 on the first equation:

∂x1' / ∂ x1 = ∂ / ∂ x1 (cos(ϕ) x1 - sin(ϕ)x2) = cos(ϕ)

It's simply the length of the projection of L(e1) on the (original) x-axis, or e1

This was not new really. I only wanted to "associate" the coefficient "aij" of equations 12, 13, with the differential,
and show (by example) that this is indeed true.
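This "coefficient = partial differential" fact can also be checked numerically. A small NumPy sketch (the angle and point are arbitrary): differentiating x1' with respect to x1 indeed returns cos(ϕ), the element a11 of the rotation matrix.

```python
import numpy as np

phi = np.pi / 3                    # an arbitrary angle: 60 degrees
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])   # the matrix of mapping L

x = np.array([0.7, -1.2])          # an arbitrary point

# Central difference: vary x1 a little, and see how x1' responds.
h = 1e-6
d = (R @ (x + [h, 0.0]) - R @ (x - [h, 0.0]))[0] / (2 * h)
print(d, np.cos(phi))              # both are cos(phi) = 0.5 (up to rounding)
```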

By the way, the difference between covariant and contravariant indices, written as coefficients, and thus
also in differentials, was not shown here.
A good example would be to fully write down an example in polar coordinates, and then compare that with
an example in Cartesian coordinates.

All of the above is not sufficient to prove the statements below. However, I hope that you think that
those two statements below are quite plausible. That would really be enough to follow this note.

Definitions 1:

A vector A, or tensor of the first rank, is called contravariant if all of its components
transform (under rotation) as:

A'i = Σj (∂x'i/∂xj) Aj     (equation 16)

A vector A, or tensor of the first rank, is called covariant if all of its components
transform (under rotation) as:

A'i = Σj (∂xj/∂x'i) Aj     (equation 17)

Note that in many general discussions about vectors, you might write upper or lower indices for vector components.
However, strictly, the use of lower and upper indices is reserved for co- and contravariant vectors.

But if the distinction does not matter at all, you may sometimes see the usage of upper- or lower indices,
in various different textbooks or articles.
In Cartesian coordinates, all indices may be written as lower indices.
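To make definitions 1 a bit more tangible, here is a Python sketch (the linear map and the scalar field are invented for illustration). Position components transform with ∂x'i/∂xj (the matrix A itself, per equation 16), while gradient components transform with ∂xj/∂x'i (the inverse transpose, per equation 17). The check is that the value of the scalar field cannot depend on the chosen coordinates.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # an invented, non-orthogonal coordinate change x' = A x

x = np.array([1.0, 2.0])            # a position vector (contravariant components)
c = np.array([5.0, -1.0])           # gradient of the invented field f(x) = c . x (covariant)

# Contravariant rule (equation 16): components transform with ∂x'i/∂xj = A.
x_new = A @ x

# Covariant rule (equation 17): components transform with ∂xj/∂x'i, the inverse transpose.
c_new = np.linalg.inv(A).T @ c

# The value of the scalar field must be the same in both coordinate systems:
print(c @ x, c_new @ x_new)         # both give 3.0 (up to rounding)
```

For a pure rotation the two rules coincide (the inverse transpose of a rotation matrix is the matrix itself), which is why the distinction is invisible in Cartesian coordinates.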

1.4 A few words on tensor operations:

You may call a simple number, a scalar (using 0 indices), a tensor of rank "0".

You may call a vector (using 1 index), as long as it adheres to one of the "definitions 1" above, a tensor of rank "1".

A tensor of rank 2 (using two indices) is an nxm matrix. However, not every nxm matrix is a tensor.
Indeed, similar to definitions 1, they need to conform to specific transformation rules.

A tensor of rank 3 (using three indices) looks like a cubic (nxmxl) matrix. However, not every nxmxl matrix is a tensor.
Indeed, similar to definitions 1, they need to conform to specific transformation rules.

The requirement that the object needs to adhere to specific transformation rules, is simply not much more
than saying that their components (in a certain basis) are linear expressions in another basis, and thus
physically and mathematically consistent. And, thus they are "meaningful".

Here are two examples of tensors of the second rank:

Example 1:

┌ 1 0 0 ┐
│ 0 1 0 │
└ 0 0 1 ┘

This tensor is the "metric tensor" of three-dimensional flat Space. Yes, this is a very "unspicy" example.
However, it is a tensor.

Example 2:

┌ -xy  -y² ┐
└  x²   xy ┘

Is this a tensor of the second rank? I do not know by just looking at it. Indeed, not every matrix is a tensor.
However, this example is a tensor. You can find out by painstakingly investigating whether all 4 components comply
with the "transformation rules" for tensors of rank 2. These are quite similar to what we have seen in "definitions 1",
which hold for vectors.

You do not need to remember the "stuff" below, but the transformation rules for tensors of the second rank are:

A'ij = Σk Σl (∂x'i/∂xk) (∂x'j/∂xl) Akl     (equation 18)

C'ij = Σk Σl (∂xk/∂x'i) (∂xl/∂x'j) Ckl     (equation 19)

(The summations go over "k" and "l".)
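For a rotation, the rule of equation 18 can be checked numerically. A NumPy sketch (angle and point arbitrary), using the matrix of example 2: transforming the components with the rule, and rebuilding the matrix directly from the rotated coordinates, give the same answer.

```python
import numpy as np

def T(x, y):
    """The candidate rank-2 tensor from example 2."""
    return np.array([[-x * y, -y ** 2],
                     [x ** 2,  x * y]])

phi = 0.4                           # an arbitrary rotation angle
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

x, y = 1.3, -0.7                    # an arbitrary point
x_new, y_new = R @ [x, y]           # its rotated coordinates

# Equation 18, with ∂x'i/∂xk = R[i, k] for a rotation: A'_ij = R_ik R_jl A_kl.
transformed = np.einsum('ik,jl,kl->ij', R, R, T(x, y))

# Build the same matrix directly from the new coordinates:
rebuilt = T(x_new, y_new)
print(np.allclose(transformed, rebuilt))   # True: example 2 really is a tensor
```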

There is also a "mixed" variant, with upper and lower indices, but I will skip that one.

There also exist tensors with a rank higher than two, for example one using 3 indices,
which is then a tensor of rank 3 (often representable by a cubic matrix).
Mathematically, the rank can even be higher than 3, like for example a tensor of rank 5: Tijklm.

A tensor used in physics, should have a clear purpose. Suppose you place a CO2 molecule
in some ElectroMagnetic field. The various charges inside the molecule, will all respond to the field,
and the "form" of the dipole moment can be quite complex. You cannot describe it with a vector (as you
would do, for example, for the position or velocity of a particle).

To get the dipole moment right, you probably need a rank 2 tensor (nxn matrix), describing the
various stresses in various directions.

A tensor which can be described by a matrix which shows only constant numbers, is probably less interesting.
The "something" it tries to describe is terribly constant here.
On the other hand, if the elements are functions of the coordinates, it describes something very useful.

Further, some important operations are possible with tensors, like "inner multiplication",
or "contraction", "tensor product" and others. It's not important for this note.
When it is important at a certain occasion, we will deal with that at that particular time.

1.5 A few words on gauge symmetry:

Above we have seen some theory around the "metric" and related stuff.

Although not quite so formulated in the sections above, you have seen some transformations, which are also needed
to verify, or validate, our formulations. That is, if you for example, rotate your lab, the laws of physics,
or how you describe a physical system using vectors or tensors, should not fundamentally change.
The same should hold for other Linear mappings (transformations), not only rotations.
It's possible that some elements in vectors and tensors change, but the description as a whole, must be
the same.

This section is rather similar, but more in a general sense, I think.

lots to do here...

1.6 An abnormally quick (and certainly inadequate) tour on several branches in Physics:

Strange person, this Albert.... Why an inadequate tour? -> If it had to be adequate, I would need 10000000 years.

lots to do here...too

lots to do here...

2. SpaceTime in Relativity.

The considerations in this section are still quite conservative in character.

However, we will also touch on a subject like "Lorentz violation" which is a very interesting field indeed.

Einstein produced two magnificent theories in the early 1900's: the "Theory of Special Relativity" (1905),
and the "Theory of General Relativity" (1916), often abbreviated as "STR" and "GTR".

Both theories are deeply involved with the properties of SpaceTime.
Of course, both theories are absolutely monumental! I can only distill a few points from those theories,
which is what I am going to do here.

2.1 The common (Regular) 3D coordinate system:

Of course we can visualize a three-Dimensional Cartesian coordinate system, using an x-axis, y-axis, and z-axis, all
perpendicular to each other. Nothing special here. This is high-school math. In that "3D space",
points can be described by (x,y,z), where x, y, and z can take on any value.
The x, y, and z are "spatial", meaning that they are also involved in something called a "metric",
which you can often relate to the fact that you are able to define a distance between points.

For example, between the points (x1,y1,z1) and (x2,y2,z2),
you can draw a linesegment, which also means that we can speak of the distance "d" between those two points.

Simply using the Pythagorean theorem, the distance "d" squared is:

d² = (x2 - x1)² + (y2 - y1)² + (z2 - z1)²

By the way, in math nothing prevents you from using e.g. a 6 dimensional space, where points might be described
as a 6 tuple (in general an n-tuple) like (x1,y1,z1,u1,v1,w1).

2.2 4D SpaceTime (Minkowski SpaceTime):

3D space and time together form a 4D SpaceTime. But how to really define it in terms of, say, "points"
in such a space, just like we did above? First, I must say that 4D SpaceTime is not just like adding
one extra spatial dimension to 3D Space (like going from 2D space (x,y) to 3D Space, simply by adding the z-axis).
No, it must have a time "t" related dimension. But if we would simply use (x,y,z,t), then we would not be able
to get a "metric" as we saw above (like a distance "d" between points).

Now, in order to let the fourth coordinate relate to a spatial dimension, we can use "ct", where "c" is the
universally constant speed of light. A simple illustration might help: you know that if you bike 10 m/s,
then after 5 seconds, you have covered 5 x 10 = 50 m. So with a constant speed, distance = speed x time.
In order to correctly identify points (actually "events", more on that later), we thus might use:

(x,y,z,ct) or, which is not much different, (ct,x,y,z), which you may also find in the literature

In such a case, we are able to use a metric (like a "distance") between points in such SpaceTime:

ds² = -c²dt² + dx² + dy² + dz²

Note: there is a little issue with the story above. In the true metric "ds" (distance), the c²dt²
term must indeed be negative. I left out the correct explanation of that.
However, my simple description of Minkowski SpaceTime will already help us in a discussion of "events".
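To make the metric a bit more tangible, here is a small sketch (with my own illustrative numbers and helper function) that computes the squared interval between two "events" (t,x,y,z). Note that two events connected by a light signal give a squared interval of exactly zero:

```python
# Squared interval between two events (t, x, y, z) in Minkowski SpaceTime,
# with the (-, +, +, +) signs used in the text. Illustrative, SI units.
c = 3.0e8  # speed of light in m/s (rounded)

def interval_squared(event1, event2):
    t1, x1, y1, z1 = event1
    t2, x2, y2, z2 = event2
    dt, dx, dy, dz = t2 - t1, x2 - x1, y2 - y1, z2 - z1
    return -(c * dt)**2 + dx**2 + dy**2 + dz**2

# Two events connected by a light signal: light covers 3.0e8 m in 1 second,
# so the squared interval between emission and absorption is exactly zero.
print(interval_squared((0, 0, 0, 0), (1, 3.0e8, 0, 0)))  # 0.0
```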

2.3 A few highlights of STR:

STR is mainly involved in "frames of reference" (coordinate systems) which move with a uniform or constant speed
with respect to each other.

Guided by reasonable assumptions, Einstein reasoned that:

-The laws of Physics should hold everywhere. The laws of physics are the same in any frame of reference.
-There is no preferred "direction" in true space. Or: Space itself is homogeneous and isotropic.

-And, what seems quite extraordinary: the speed of light (denoted by c) is constant, independent
of any frame of reference.

The last one is not so trivial. On a human scale, we know that if you are in a train, which
moves with a speed of 100 km/h with respect to the ground, and you are inside the train and shoot
an arrow at 100 km/h in the direction of movement, the speed of the arrow with respect to the ground
is 200 km/h. Likewise, if you drive a car at 70 km/h, and someone overtakes you at 72 km/h,
then for you the relative speed of the other car is only 2 km/h.

It's always simply a direct (vector) addition/subtraction of speeds (velocities).

Fig 2: Illustration of 2 frames of reference S and S', moving with constant speed.

Figure 2 illustrates this. An observer in S might think that he is stationary. Frame S' goes by,
with a speed of 20 m/s in the +x direction, relative to frame S.
Of course, an observer in S' might think that it is he who is stationary, and that it is frame S
which is moving in the -x direction with a speed of 20 m/s.
Let's return to the observer in S. If the observer in S' shoots an arrow at 30 m/s in the +x
direction (relative to S'), then the observer in S measures the speed of that arrow to be 50 m/s.

If you would replace the arrow with any form of electromagnetic radiation, like
radio waves, radar, light etc., then all observers, no matter which frame of reference, would
measure the same constant speed, namely the speed of light, which is universally constant.
This is highly remarkable, and will have profound implications for the structure of SpaceTime,
as seen by different observers in different frames of reference (different in the sense of speed
in some direction, like the x-axis).

In many articles, the speed of light ("c") is a central theme. However, visible light is just one of
the many manifestations of ElectroMagnetic (EM) radiation, which has an infinite spectrum
of frequencies (and energies).

So if you are in S', travelling at 30% of "c" with respect to S, and you turn on a laser pointing in the +x direction,
then observers in S and S' will still only measure the same constant speed of light (denoted by "c").

The following is not an adequate solution to the riddle of the constant speed of light.
There is a relation that couples "c" to 2 fundamental electric and magnetic constants of the Vacuum, namely
ε0 and μ0, which represent the "vacuum permittivity" (or "permittivity of free space") and the
"vacuum permeability", respectively.
These constants say "something" about the capability/ability of the vacuum to permit electric and magnetic fields.

c² = 1 / (ε0 μ0)

Viewed this way, and assuming ε0 and μ0 are constant throughout the Vacuum,
c is constant too. Again, this is not adequate as a full explanation of why c is constant
in all frames of reference.
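As a quick numerical check (a small sketch, using the standard CODATA values for ε0 and μ0), this relation indeed reproduces the speed of light:

```python
import math

# CODATA values (SI units); these are the measured vacuum constants.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0  = 1.25663706212e-6   # vacuum permeability, N/A^2

c = 1.0 / math.sqrt(eps0 * mu0)
print(c)  # roughly 3.0e8 m/s
```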

Let's go to the findings of Einstein in STR.

Suppose we have frames S and S' again. Suppose you are in S, which you think is stationary.
Of course, you can specify Space coordinates in your frame, using (x,y,z). The time in your frame
is denoted by "t". While x, y and z can vary of course, you assume that t is the same throughout
your frame of reference. That seems no more than a valid assumption.
Suppose you are located at the Origin of frame S, that is (0,0,0).

However, frame S' moves with speed "v" towards the +x direction, with respect to the (stationary) frame S.
An observer in S', uses the Spatial coordinates (x',y',z'), and time t'.

From a classical point of view, the times t and t' are exactly equal. This is also in correspondence
with all experiences in human life. The time in a plane is exactly equal to the time on the ground.
This is not exactly so in STR. However, the discrepancies become larger as v gets closer to c.
For now, we denote the time in S as t, and the time in S' as t' (although classically, they should be equal).

Classically, an observer in S' would say that the coordinates of S and S' relate in the following way:

x' = x - vt
y' = y
z' = z
t' = t

Since the relative movement of S and S' is only along the x-axis, it follows that y=y', and z=z'.

The set of equations above, is often referred to as a "Galilean Transformation".

Einstein further reasoned in the following way. If a light explosion (a flash) would take place, then the spherical
wavefront would be seen as equal by all observers in any moving frame of reference.
It means for our observers in S and S', that:

Spherical wavefront described from S:

x² + y² + z² = (ct)² = c²t²

We can describe the spherical wavefront from the perspective of S' too. Then it will be:

Spherical wavefront described from S':

x'² + y'² + z'² = (ct')² = c²t'²

Both equations describe the same "distance" in Minkowski SpaceTime:

x'² + y'² + z'² - c²t'² = d

x² + y² + z² - c²t² = d

But S' is moving in the +x direction only (as viewed from S). There is no reason
to expect "any effect" along the y and z directions. Sure, as you will see in a few minutes,
in the dimension in which we indeed have a "speed" ("x"), we will see a large effect.
But in the transverse directions, thus in this case the directions "y" and "z", there is no effect at all.
It's still reasonable to say that:

y' = y
z' = z

The distances in Minkowski spacetime, as shown above, then reduce to:

x² = c²t²    (1)

x'² = c²t'²     (2)


c²t'² - x'² = c²t² - x²     (3)

This is still the metric as we should use in Minkowski SpaceTime, but we were able to eliminate
the "y" and "z" coordinates.
Since (1) and (2) describe the same distance in Minkowski SpaceTime, we were able to write down (3).

These equations can be solved, that is, express x' in terms of x and t, and express t' in terms
of x and t.

The math is not too hard, but a little too lengthy to write down here.
You can take a look at one of my earlier notes, which says a little more on STR,
and indeed shows the derivation of the solutions.

If you are interested, you might want to take a look at that note.

Below you will see the solutions for x' and t'. These are the famous "length contraction"
and "time dilation". It starts to "live" if you really see an example. That will be done below.
For now, let's first present the solutions for x' and t':

x'  =  (x - vt) / √(1 - v²/c²)

y'  =  y

z'  =  z

t'  =  (t - (v/c²)·x) / √(1 - v²/c²)

Do you see that, for example, t' is dependent on the speed "v" of S'?
From a classical viewpoint, that's absurd. However, from the deductions of Einstein,
it's really true. It simply means that the clocks in S and S' run at different rates.
An observer in S will see that the clock in S' runs slower.
When you see a simple example, these conclusions will start to "live".

The solutions of Einstein, as presented above, were simply made possible by postulating
that "c" is constant in any frame of reference, which already is "unclassical" by itself.

If we want, we can simplify the equations above, if we use the "gamma factor" γ, which is:

γ  =  1 / √(1 - v²/c²)

In many articles, folks call γ "the Lorentz factor".

Since the γ factor is common among the transformation equations,
we may also write (for v along the x-direction):

x'  =  γ (x - vt)

y'  =  y

z'  =  z

t'  =  γ (t - (v/c²)·x)

The equations above are called the "Lorentz Transformations" (for "v" along the x-direction).
Note that the "γ factor", to a high degree, determines the relativistic effect here.

Take a look at the first equation, for x'. Note that if v is very low, then √(1 - v²/c²) is practically "1".
It means that the equations converge to the Galilean Transformations for low speeds.
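The transformation equations are easily put into a small program. The sketch below (my own helper function, for a boost along +x, with a rounded value for c) also shows that at everyday speeds the result is practically Galilean:

```python
import math

c = 3.0e8  # speed of light in m/s (rounded)

def lorentz_transform(x, t, v):
    """Lorentz boost with speed v along +x: returns (x', t')."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return gamma * (x - v * t), gamma * (t - (v / c**2) * x)

# At an everyday speed (30 m/s), the result is practically Galilean: x' = x - vt.
x_p, t_p = lorentz_transform(100.0, 1.0, 30.0)
print(round(x_p, 6))  # 70.0
```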


Example 1:

Suppose in S, we have a marked segment L0 = 1 m, as a segment along the x-axis.
Suppose further that frame S' is at rest too, just as S is, and they perfectly coincide.
In S', we have the same marked segment L', thus it has a length of 1 m too. L0 and L' coincide too.

Now, suppose that "suddenly", S' moves with a constant speed of 0.7c along the +x direction.

What does the stationary observer in S measure of L', when S' moves with that speed?


L' = √(1 - (0.7c)²/c²) L0 = 0.714 x 1 = 0.714 m

So, according to the observer in S, L' has shrunk. In other words, the spatial dimension
along the direction of movement seems to be contracted.

Note that in this example, the speed "v" was extremely large. It's 70% of the speed of light,
which is extremely fast indeed. Truly relativistic phenomena show up better if the speed
of the moving frame of reference is significantly above 0.1 c.
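A small sketch (my own helper function) reproducing this computation, both at 0.7c and at a low speed where practically nothing happens:

```python
import math

def contracted_length(L0, beta):
    """Length measured in S for a segment of rest length L0 moving at v = beta * c."""
    return L0 * math.sqrt(1.0 - beta**2)

print(round(contracted_length(1.0, 0.7), 3))    # 0.714 (70% of c)
print(round(contracted_length(1.0, 0.003), 3))  # 1.0   (at 0.003c: practically no effect)
```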

Example 2:

Suppose in S, we have L0 = 1 m.
When S' is at rest, we have the same distance L' = 1 m.

Now S' moves along the x direction at (only) 1000 km/s, which is about 0.003c.

What does a stationary observer in S measure for L', when S' moves with that speed?


L' = √(1 - (0.003c)²/c²) L0 = (practically) √1 x 1 = 1 m

With low speeds, say below 0.01 c, relativistic phenomena are hardly observed.
That's why classical Newtonian mechanics works great with speeds that are only small fractions of "c".

Indeed, with speeds below 0.01 c (where c is about 300000 km/s), the "world" looks fully classical again,
and that's why on a human scale, classical Mechanics still works fine.

"Length contraction" and "time dilation" have been experimentally confirmed with incredible precision.
For example, a clock on a satellite runs a bit slower, exactly as predicted by the theory.
As another example, the lifetime of some elementary particles is longer when they move with high velocity,
compared to Lab conditions.

This seems like a strange "flexibility" of Space. However, in SpaceTime (x,y,z,ct), it follows naturally
if the speed of light "c" is constant in any moving frame of reference.

Of course, the material above evidently represents just a tiny glimpse of "The Theory of Special Relativity".

2.4 The essential meaning: Lorentz symmetry and SpaceTime distance:

Above, we already have seen an example of the Lorentz metric (distance) in Minkowski SpaceTime:

ds² = -c²dt² + dx² + dy² + dz²

The minus sign in "-c²dt²" was not explained well above, but I can tell you
that the "extra" coordinate "ct" in fact should be "ict" (Henri Poincaré, 1905), where "i" is the imaginary number
from Complex number theory. If you square that, it will give rise to the "-" sign.
I don't think that those details are very important to the discussion I'd like to present.

To let the equation above resemble a "distance", or interval "Δs", more closely, we can rewrite it as:

Δs² = Δx² + Δy² + Δz² - c²Δt²

where Δ is a universally accepted symbol for "small difference", instead of infinitesimal qualifiers.

The equation means that the Lorentz distance (or Minkowski distance) between two "events" in SpaceTime
is invariant: it is the same for all observers.

Since we speak of SpaceTime (Space, Time), points are better described as events (physical events),
that may take place, the one later than the other. It's possible to connect these events by light.
Suppose one particle (particle 1) emits a γ photon, which may be absorbed by another
particle (particle 2) somewhere else in SpaceTime.

Since the distance in SpaceTime is invariant, you may sneakily contract a spatial component (say x), but then
the clock must run slower in order to have the same distance between the two events again.

Using that as a principle, then apply some math, you will get the Lorentz transformations as
listed above.
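We can also check numerically that the quantity c²t² - x² indeed comes out the same before and after a Lorentz transformation (a small sketch, with my own illustrative numbers and helper function):

```python
import math

c = 3.0e8  # speed of light in m/s (rounded)

def boost(x, t, v):
    """Lorentz boost with speed v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

# An arbitrary event (x, t), boosted at 60% of c.
x, t = 2.0e8, 1.5
x_p, t_p = boost(x, t, 0.6 * c)

s2  = (c * t)**2 - x**2      # interval before the boost
s2p = (c * t_p)**2 - x_p**2  # interval after the boost

print(abs(s2 - s2p) < 1e-3 * abs(s2))  # True: the interval is invariant
```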

You also might see that this framework enforces causality. It's not possible in this model that,
for example, particle 2 absorbs a γ photon before it was sent by particle 1 in the first place.

2.5 Lorentz violations:

In the above, we considered two frames of reference, S and S', where S' had a constant velocity
along the +x direction, relative to S.
Of course, we could also have chosen a movement of S' along the -x direction, or along
the y-axis, or along the z-axis, or actually any direction in the coordinate system.

It would not have changed anything fundamentally.
The Lorentz transformations would still have the same form.

In STR, there is no preferred direction in SpaceTime, and no dependency on whatever coordinate
system is used. This is also called "Lorentz symmetry".


You will see later that physicists appreciate (or nearly demand) that a "concept" is rotationally invariant,
invariant under transformations, invariant under phase shifts, and invariant under a change of coordinate systems.
This holds all the more for "something" that might be called a "fundamental concept".

This symmetry, or "gauge invariance", is reflected in theories which are (sort of) written or re-written
using the Yang-Mills fundamentals (or ideas).

Some theoretical considerations....


Is it really true that there exist no preferred "something" in SpaceTime?

Maybe there exists an extremely small bias towards some "direction", or an energy potential
in the Vacuum, or a "hidden", yet undetected, field in the Vacuum, or even a location in the Universe,
or even a location in our own local Milky Way, etc., etc.

It's difficult to say something truly useful on the above speculations.

But there are indeed some asymmetrical things.

If you would observe some special physical systems, with some particles having electrical charge, and spins,
and "invert" the charges (so that + will get -, and the other way around), or mirror them (in an actual mirror),
then sometimes surprising effects can be witnessed: Violations of symmetry.

There are some fundamental forces in our world, like the Strong nuclear force, gravity etc.,
but something called the "weak interaction" displays, as many physicists believe, some asymmetrical
"behaviour" indeed.

Contemplating this sort of information, together with the principles of STR, has still not resulted
in very clear statements.


Einstein's STR uses a continuous, flat (not curved) 4D SpaceTime.

But what if the quantization of "Space" is true? Then, using the theory above, when a frame of reference
is (almost) infinitesimally close to "c", the quantization of Space must be "felt" in some way.
You can go very far in "length contraction", but what happens when you come so close to the scale
where Space quanta cannot be ignored anymore?

This sort of theoretical consideration has also led to the search for "Lorentz violations".
Many experiments have been performed, already at a very high precision, but no anomalies
have been detected yet.

The hope is that measurements of any possible violation might produce some insight
into which of the competing "Quantum Gravity" theories is best.

2.6 A few words on General Relativity:

General Relativity is too grand a Theory to discuss with any real value in a note like this one.
However, it is possible to distill a few main points.

General Description:

Einstein's Theory of General Relativity, is much more involved than Special Relativity.
One reason why it is called general, is because accelerated frames of reference are studied,
instead of "only" frames of reference moving with a uniform velocity.
In effect, all sorts of relative movements are considered.

One astounding finding was that "gravity" is equivalent to acceleration.
The acceleration is then due to curved SpaceTime.
This was absolutely completely different from the common classical view before 1916,
where gravity is a Force, just like the electrical or other known forces.

The "core" idea of GTR is that SpaceTime is a
geometric object whose curvature is determined by the distribution of energy and matter.
The curvature determines how free objects will move in that curved SpaceTime.

Thus gravitational force is no longer a force in the classical Newtonian sense, but a mere
manifestation of the curvature of spacetime.

In a type of math which was later called "differential geometry", curvatures of spaces (manifolds)
were already explored by Gauss, Riemann, Christoffel, Cauchy, and too many other mathematicians to name here.
For some important theorems in that realm, we can go back to the years around 1850, or even earlier.
Indeed incredible, that this mathematical branch needed well over 100 years to develop into a mature framework
which is still intensely used by physicists today.

But Einstein too relied heavily on "differential geometry" in the period he developed GR, from 1905 - 1916.

If you would consider some "manifold", like some 2D surface in 3D, it's possible to introduce
a tangent vector field "along" that surface, which describes the "rate of change" of how that surface
actually bends. It's a simple example, which hopefully you can visualize in Space.

An extension of a vector field is a description using a tensor object. This mathematical object
makes it possible to "express" more twists, in multiple directions, at any point.

An example of one of the field equations in GR:

A tensor is a very suitable mathematical object to capture the differences in twists and bends,
from a point to other neighbouring points.
It's therefore no wonder Einstein found a way to describe the curvature of SpaceTime using
implementations of tensor objects.

This can be illustrated by one of his field equations, where Guv and Tuv are tensor objects:

Guv + guv Λ = 8 π Tuv

(where G=c=1, or geometrised/normalized units)

In the field equation above, the curvature of SpaceTime (Guv) is related to the mass-energy distribution
(Tuv) which is present "in that neighbourhood".
It's absolutely remarkable, that this mathematical expression "links" mass-energy (or simply mass) to
curvature in SpaceTime.
It's a departure from classical Physics, where Gravity was considered to be a "force", just like
for example the Electric force.
But Einstein managed to link the curvature of SpaceTime, to mass-energy.

Now, if somehow it can be made plausible that a free object follows the curvature of SpaceTime,
then (maybe?) we are close to understanding how "mass/curved spacetime/path of an object",
all are connected by the Theory.

Why does a small free object follow the curvature of SpaceTime?

If you would think that it's a trivial question, then you must be a relative of Einstein.

If a particle is small, there is hardly any "feedback" into the "warped" SpaceTime, which itself is due
to some larger mass distribution "nearby".
So, a small test particle, "in some way", finds its path in curved SpaceTime. So, what is the path here?

If we would not consider a small object, then this object itself would significantly warp SpaceTime too,
which is covered by Einstein's GTR, but it's very complex.
The object does not really have to be small, as long as it's small relative to the mass that curves SpaceTime
in the first place. It's a bit similar to Earth orbiting in the spacetime warped by the Sun.
The Sun is immensely more massive than Earth.

Short definition: In differential geometry, a "geodesic" is a generalization of the notion of a "straight line"
in "curved spaces".

Now, the question is thus equivalent to showing that:

the motion of a small test particle is completely determined by the bending of SpaceTime.

Some folks can prove it by using the equivalence of inertial mass and passive gravitational mass.
These two interpretations of "mass" have not been mentioned at all in this simple text.

Others can prove it by using the general equation of motion in curved SpaceTime.

It's not so very trivial. One idea uses the concept of parallel transport. You can consider a tangent vector
along the motion, or orthogonal to the motion. The motion is in curved SpaceTime, of course.
If the orientation of that vector does not change relative to the path of motion, then you stay on the geodesic.

If you are on a curved sphere (a surface) in R3, and you hold a stick exactly in front of you, and
you walk along a "great circle" (a geodesic), the orientation of the stick (tangent vector) does not change.
So, if you go from the equator to the North Pole, and keep on going along the straight line (the great circle), the
tangent vector does not change. However, if while on the North Pole you suddenly change direction, like turning left,
and then go back to the equator again, then there was a rather sudden disruption in the orientation
of the tangent vector. That does not correspond to the motion of a free particle moving in curved SpaceTime.

Relativity is a Theory using 4 dimensional SpaceTime:

Throughout section 2, it was hopefully clear that SpaceTime is 4-dimensional, which is reflected
for example in coordinates like (x,y,z,ct).

I like to stress that fact, since in section 4, Kaluza-Klein theory, which is a remarkable theory,
is presented as an attempt to unify Einstein's GR and the ElectroMagnetic (ElectroDynamics) Theory of Maxwell.
The arena where that seems to work is a 5-dimensional SpaceTime, which is very remarkable.

The ideas in Kaluza-Klein, inspired many other Theories, even very modern ones.

However, Kaluza-Klein does not seem to fit well enough in, e.g., modern Yang-Mills concepts, and besides
that, Kaluza-Klein was more or less superseded by String-, M-, and Brane theories.

3. A few words on Planck's length, and Planck's time.

The "length of Planck" is an extremely small length, namely about 1.6 x 10⁻³⁵ m.

Associated with this length are two other values, namely "Planck's time" and "Planck's mass".
Of those two, "Planck's time" is somewhat easier to understand, since it's the time needed for light to "traverse" Planck's length.

In order to get an appreciation of how small the "length of Planck" actually is, take
a look at the following figures:

-The Bohr radius, that is, the classical radius of the Hydrogen atom, is about: 5.3 x 10⁻¹¹ m.
-The classical radius of a proton is about: 0.87 x 10⁻¹⁵ m.

If we compare Planck's length to the examples above, like the radius of a Hydrogen atom,
or what is often taken as the "classical" size of a proton, then we will really appreciate
how insanely small Planck's length actually is.
If you would "inflate" a proton to the size of the Sun, then, relatively speaking, you still could not even see Planck's length.

This length is formed from other Universal constants (like the speed of light and others),
but we will also see on what theoretical basis this length was originally derived.

We have to be very careful on how exactly to interpret such a small length.
For example, not all physicists are convinced that those Planck values really represent fundamental
constants in Nature.

At the same time, it cannot be denied that "Quantum Gravity" theories take Planck's length
as a reference point, that is, a scale that represents the dimensions of Space quanta (spins, loops etc..).
So, especially theoretical physicists working in fields like String theories, Quantum Gravity, Cosmology etc..
interpret Planck's length as a fundamental building block, in some way.

Planck's length is the following:

lp = √(ħ G / c³) = (about) 1.6 x 10⁻³⁵ m.

where c is the speed of light, ħ is the so-called reduced Planck constant, and G is the universal gravitational constant.
So, the length of Planck is "built" from very fundamental constants of physics.

The theoretical time required for light to cross a distance of 1 Planck length, is about 5.4 x 10⁻⁴⁴ seconds.

How is Planck's length derived? Where does it come from?

We are not going to do much math in this text. But basically, if one would compress one of those other constants, namely "Planck's mass",
to the "Schwarzschild radius", which is the critical radius of a Black Hole, then one would arrive at Planck's length.
To be honest, we would need to consider the Compton wavelength as well, but we skip that here.
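As a small check (a sketch using the standard CODATA values for ħ, G and c), the formula above indeed reproduces both the Planck length and the Planck time:

```python
import math

# CODATA values (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length
t_p = l_p / c                      # Planck time: light crossing one Planck length

print(l_p)  # about 1.6e-35 m
print(t_p)  # about 5.4e-44 s
```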

Note that the "Schwarzschild radius" is the radius where SpaceTime fully collapses (into something we are not fully sure of).
Usually, the Schwarzschild radius can be understood as the "border" of a black hole.

By the way: some modern ideas in physics around black holes, will certainly be a subject in this modest note.

Some physicists tie the Planck scale to a phenomenon called Quantum Fluctuations, where Energy "pops up" from
the Vacuum in the form of a particle-antiparticle pair, which quickly destroy each other again.

Now we may see why the "length of Planck" could be of significance to our discussion of the "Vacuum and SpaceTime".
Here are a few "suggestions":
  1. It's possibly the length where all regular, smooth, continuous SpaceTime principles do not apply anymore.
  2. It's possibly the scale of SpaceTime quanta.
  3. It's the scale where a compressed Planck Mass (1.22 x 10¹⁹ GeV/c²) will collapse into a black hole.
  4. It's possibly the length where Quantum Mechanics and Gravity might unite in a single theory.
    Some Quantum Gravity theories define loops or spins with such a fundamental dimension.
  5. It's possibly the most basic "container" of information in "Quantum Information Theory".
  6. It's possibly the characteristic length of "strings" in Superstring theory.
  7. It's possibly the characteristic length related to "Quantum fluctuations" in the Vacuum.
I can simply list all that stuff above, but it then has to be illustrated with some core concepts
of such theories. That is what I will try to do in the following chapters.

It's true that Physics is in full development, and a definitive, complete Theory is simply not present.

In the next sections, it's very important to give a quick overview of the fundamental themes that gradually
found their way into physics, like the Yang-Mills ideas, Gauge invariance, Quantum Mechanics, Quantum Field Theory,
the position of Relativity, Quantum Gravity, the Standard Model etc.

It's important to get a feel for that "stuff". Of course, it will not be in depth, and I could not ever
cover it in depth, since it takes an incredible amount of knowledge, and thus an incredible amount of time, to master.

In-depth studies indeed take years. But I am confident I am able to at least touch upon these subjects
in order to convey a feel for the fundamental ideas behind those themes.

However, what appeared shortly after General Relativity, namely the Kaluza-Klein theory (around 1921),
gives a certain perspective on SpaceTime and unification. In that sense it's important.
So, I like to do that first.

4. Kaluza-Klein.

5. A few words on "The Dirac Sea".

Nobody has the full answer on the structure of SpaceTime. Not yet. And maybe never.
Or maybe in a few years from now? Some physicists say that we are pretty close.
Who knows..., but I am a little sceptical of any statement that the final theory is "just around the corner".

If SpaceTime quanta are real, then somebody may even postulate that it looks like a real Matrix movie,
but this time realistic and not psychic, since the association between quanta and memory elements is quickly made.
However, there are almost no physicists who support such a view.
But it is quite intriguing to pursue theories centered around themes like "The Universe as an Emulation", or
"Physical reality is just Virtual"....

You can only get a better appreciation when studying older and modern ideas from Physics, and other sciences
like psychology, philosophy and others.

One old idea, from the early 1900's (1928), is the Dirac sea. It does not address the physical structure of SpaceTime,
but it might show an important property of SpaceTime.

Dirac managed to combine important principles of quantum mechanics and the theory of special relativity,
to arrive at a relativistic wave equation. One peculiarity of his work is the existence of negative energy states.
If you consider a free electron, for example, then it could endlessly emit energy in the form of photons,
falling to ever lower energy states. This is, however, not observed.

To solve this, Dirac postulated a "sea" of negative-energy particles in the Vacuum, where all such negative states
are already occupied. Then, using the Pauli Exclusion Principle (PEP), a "normal" electron (silly word indeed)
could not fall into that sea, since it's forbidden by PEP.

There is no way to easily explain PEP, but it is absolutely very profound in Physics.
It holds for what most people see as "real particles", or fermions, like the electron.
PEP, for example, explains the number of electrons in certain energy levels in an atom, and quantum numbers.
Simply stated: in an atom, or in close vicinity, each particle must have a unique set of quantum numbers.

If for some reason an electron managed to escape that sea, then a "hole" would remain.
This hole would interact with EM fields exactly as if it were a positively charged electron.
In effect: Dirac predicted the positron, which is the real anti-matter partner of the electron.
Not much later, the positron was indeed discovered.

Fig 2: Just an illustration of Holes in the Vacuum with -E states.

What is the status of such ideas today? An interpretation literally as Dirac proposed
is not how physicists look at SpaceTime today.

But those early ideas certainly contributed to QFT. However, some physicists still place bets
on "hole theories", like for example "causal fermion systems".
Maybe you like to Google on those keywords in combination with "arxiv".

6. SpaceTime and Entanglement.