A few simple notes on some special results of "Weak Measurements" and paradoxes.

Version : 0.5
Date : 29/01/2012
By : Albert van der Sel
Type of doc : Just an attempt to describe the subject in a few simple words. Hopefully, it's of any use.
For who : For anyone interested.

A few simple notes on some special results of "Weak Measurements" and paradoxes in Quantum Mechanics.

1. Introduction:

An observable (suppose the system is initially in a superposition of its eigenstates) appears to reduce to a single value
after interaction with an observer (that is: if it is measured).

Now, this key principle of QM has various interpretations, like the Copenhagen "collapse of the wave function", or
the "decoherence" theory, which states that only particular pointer states are "einselected", or leak into the environment,
due to the interaction of the observed system with the measuring device (the environment).
Especially the "decoherence" theory clarifies that the measurable quantities (observables) of the system,
when actually measured,
produce results which are also influenced by the specifics of the experiment.

In some way, in all discussions like the above, we can say that the "art of measuring a system"
is "pretty disturbing": in one way or another, you have forced the system into a reduced state.
Sometimes these conventional measurements are called "strong measurements" (or "strong perturbative measurements"),
to stress the fact that these types of measurements really influence an observed system (and reduce it in some way).

In contrast, although it initially sounds incredible, for certain classes of experimental setups a method has been devised
to perform measurements while hardly disturbing the system. These special setups are called "weak measurements".
Of course, not every QM experimental setup is a good candidate for weak measurements. It seems that especially two-state
systems lend themselves quite well to these setups. This seems to be a limiting factor, and indeed it is.
But not disturbing a system during a measurement, in such a way that the observable does not "collapse" or "project" onto
an eigenvalue, already sounds rather contradictory to QM (but it is not).

However, at least in general terms, since the measurement is non-perturbative, the signal you get back is very weak too.
This means that generally you must perform many measurements on systems prepared in the same state (also called
an "ensemble").

The whole idea is relatively new. It was first published in 1988 by Aharonov, Albert and Vaidman (AAV),
although as early as 1964, Aharonov, Bergmann, and Lebowitz probably laid out the fundamentals of the whole idea.

We should probably distinguish two important categories here, although both are closely related to each other.

First, we can say that a (relatively new) general theoretical framework has emerged, of which a time-symmetrical
approach (forward-time and backward-time evolution) is a key element. The "Two-State Vector Formalism", or
"Time-Symmetrized QM", are important representatives.

Secondly, a new class of experiments was devised, which (as we have seen above) are called "weak measurements".

As for the latter: only very recently, say as of 2008, startling experiments have been performed, which might
have an effect on how we must view certain aspects of QM. I say "might", because, as always, different interpretations
emerge from newly discovered phenomena. Some indeed are quite spectacular.

One special interpretation goes like this: "for what you find at any point in time, it's not just the past
that is relevant. It's also the future."
Put this way, it would disturb what we usually see as the "arrow of time".

Mind you, in a purely classical system, it would not be a surprising statement. But in QM, where predictions are supposed
to be statistical in nature, it really is.

Sometimes it's not easy to (sort of) qualify theories properly. For example, a theory like Newtonian mechanics can
be called a classical theory. Now, the usual Quantum Mechanics we know (like the Copenhagen interpretation) is
not classical.
But let's call the "usual interpretations" (which collectively have resulted in Quantum Mechanics)
the "standard views" of Quantum Mechanics. Of course, there really is no standard view of Quantum Mechanics,
but this time we really need a term to distinguish the new (rather fragile) interpretations from the standard ones.
Do we need it then?

As far as the theory is concerned, most would say "yes", since a time symmetrical approach is quite different from the
standard one.

2. Some key notions:

Weak Measurement:

Often, a weak measurement is described as follows:

If the interaction of a measuring device and a system is made very weak, then the system will be negligibly "disturbed"
by the measurement (it should not "collapse" or "decohere"), and any value measured will be so low that it is
quite meaningless by itself. However, if a large number of measurements is made, then the average will converge to a value
defined as the "weak value" of the operator being measured.
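For reference, this weak value is usually written as follows (the standard definition from the AAV paper, with |Ψ> the pre-selected state, <Φ| the post-selected state, and A the operator being measured):

```latex
A_w \;=\; \frac{\langle \Phi \,|\, \hat{A} \,|\, \Psi \rangle}{\langle \Phi \,|\, \Psi \rangle}
```

Note that, unlike an ordinary expectation value, this ratio depends on both the pre- and the post-selected state.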

The measuring device is often called the "pointer" device, a name (also) chosen to refer to
a traditional analog device, suggesting that the movement of the "output needle" at a weak measurement is negligible.
With a perturbative measurement, the movement of the "output needle" would be very significant.

As a simple analogy of the above: suppose we have a droplet of some oil on a smooth table.
You have probably seen it yourself: if you very gently touch the droplet with a spoon (or something),
the droplet might not be disturbed. However, if you touch it just a tiny bit less gently, the droplet "collapses"
around your spoon. Of course, this example goes bad if we dig any deeper. However, it's just to illustrate
the difference between perturbative and weak measurements, if we could call "touching a droplet" a sort of measurement.
Well, maybe it could be, if we for example tried to determine the shape of the droplet.

Now, please note that this is very different from the traditional "perturbative" measurements, which result in
finding an eigenvalue.
Weak measurement is thus a procedure performed on pre- and post-selected quantum systems, where
the coupling to the measuring device is very weak, thereby not disturbing the system.
The outcomes of weak measurements are supposed to be very different, that is, not just a probability for finding
the eigenvalues of the measured observable (operator).
If measured many times, the weak values converge to a certain value, which could be the "true" value of that observable,
(presumably) valid for the ensemble of systems.
Actually, the value obtained is interpreted as a complex number. However, the "real" part of that number
represents the value of the observable.
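As a small illustration (a sketch with hypothetical numbers, not tied to any particular experiment), the weak value can be computed directly for a two-state system. When the pre- and post-selected states are nearly orthogonal, its real part can even lie outside the range of eigenvalues:

```python
import numpy as np

# Observable: Pauli-z of a two-state system; its eigenvalues are +1 and -1.
A = np.array([[1, 0], [0, -1]], dtype=complex)

# Hypothetical pre-selected state |psi> = (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Hypothetical post-selected state, chosen nearly orthogonal to |psi>.
theta = -0.35 * np.pi
phi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

# Weak value: A_w = <phi|A|psi> / <phi|psi>  (in general a complex number).
A_w = (phi.conj() @ A @ psi) / (phi.conj() @ psi)

print("weak value:", A_w)
print("real part :", A_w.real)   # lies far outside [-1, +1] for this choice
```

Nothing mysterious happens in the arithmetic itself; the point is only that this ratio is not bounded by the eigenvalue spectrum, which is exactly what makes weak values look strange.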

Why would we perform measurements this way?

A "strong" measurement sort of changes the system. However you want to formulate that change, like
"collapse of the wavefunction", "the system decoheres", "it projects to an eigenvalue", etc.,
it's not easy to talk about "a true value" before this measurement.
So, what was reality before this measurement? Well, if that would be possible to say at all,
that is: isn't QM intrinsically stochastic to begin with?

Anyway, it is of course very interesting to see what you can get if it would be possible to measure
an observable without disturbing it!
An obvious question then is: would you then have more, or better, information about reality?

It seems that "having more information about reality" is probably not exactly what the physicists think
is going on. They rather explain the observed effects in a "time-symmetric" framework.
If you have a certain pre-selected state (at first), and a certain post-selected state (at a later moment),
it sort of influences what can be found "in between" those times.

In a gentle form, a weak measurement might be described this way:

⇒ pre-selection at t0:

First, you start with a system in a defined state. It's called a pre-selected state, because we know that
the system takes the state |Ψpre> at t0.

Actually, the system is in a defined state with respect to the observable A, while the system usually
has many more observables associated with it.

It's customary to speak of a "prepared" state with respect to A, which can be achieved by a strong measurement
of A, to obtain the state |A=a>.

⇒ post-selection at t1:

At time t1 we do a "strong" measurement of the system again, with respect to observable B,
and only select the cases where |B=b>.

This is then called the post-selected state.

⇒ perform a weak measurement at t, where t0 < t < t1:

Note that we perform the weak measurement in the time between the pre-selection and post-selection.

Now, at some time t in the interval t0 < t < t1, we weakly measure an observable "O".
Then, after many measurements on equally prepared (initial) systems, we average the results found for "O".
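The three steps above can be sketched numerically. The following toy model is my own illustration (with an assumed Gaussian pointer of width sigma and an assumed coupling strength g, not a description of a real experiment): it follows the standard von Neumann scheme, in which each eigenvalue branch shifts the pointer by g times that eigenvalue, and then keeps only the post-selected part of the ensemble. The mean pointer reading then sits near g times the weak value, not near g times an eigenvalue:

```python
import numpy as np

# Pre-selected state |psi> (at t0) and post-selected state |phi> (at t1).
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
theta = -0.125 * np.pi
phi = np.array([np.cos(theta), np.sin(theta)])

# Observable O = sigma_z, with eigenvalues +1 and -1.
eigvals = np.array([1.0, -1.0])

# Weak value O_w = <phi|O|psi> / <phi|psi> (real here, since the states are real).
O_w = (phi[0] * psi[0] - phi[1] * psi[1]) / (phi @ psi)

# Weak measurement at t0 < t < t1: a Gaussian pointer of width sigma is
# shifted by g * eigenvalue; g << sigma is what makes the measurement "weak".
g, sigma = 0.1, 1.0
x = np.linspace(-10.0, 10.0, 4001)

# Amplitude <phi|o_i><o_i|psi> of each eigenstate branch after post-selection.
c = np.array([phi[0] * psi[0], phi[1] * psi[1]])

# Post-selected pointer wavefunction: sum of the shifted Gaussian branches.
pointer = sum(c[i] * np.exp(-(x - g * eigvals[i]) ** 2 / (4 * sigma ** 2))
              for i in range(2))

# Mean pointer reading of the post-selected sub-ensemble.
prob = np.abs(pointer) ** 2
mean_x = np.sum(x * prob) / np.sum(prob)

print("weak value O_w   :", O_w)
print("mean pointer / g :", mean_x / g)   # close to O_w, not to +1 or -1
```

With these numbers the weak value is about 2.4, well outside the eigenvalue range [-1, +1], and the averaged pointer reading tracks it closely; making g larger pushes the setup back toward a "strong" measurement, and the agreement degrades.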

Just to state for the record (we don't do any calculus here):

In addition to the above, using certain calculus on bra and ket states, one can find expectation values
of O, and additional calculus can be performed on the involved operators, leading to various results.

So, what is the essential meaning of all this?
Or I should rephrase that into: what results are found, and what is a possible interpretation?
If you are new to the subject, and possibly "somewhat orthodox", then: are you sure you are seated?
(at times, I like to be "dramatic")

3. Some rather remarkable interpretations:

Below you will find some interpretations that a few scientists see
as possibly valid.

If measurements as described above are performed, and a surprisingly consistent correlation can be found
in what you average as (say) O=o1 under the condition that A=a and B=b1,
and if you find different values of O when different values of B are (post-)selected, then:

If you have a certain pre-selected state (at first), and a certain post-selected state (at a later moment),
it sort of influences what can be found as the weak value of O "in between" those times.

This looks as if the future, and past, have determined the present.

In true scientific notes, one does not get away with "just stating an interpretation" without
elaborating on it further, or without quoting the proper references.

Although I honestly do my best to be truthful, this note can at "maximum" be regarded
as a way to stimulate curious readers, who are quite new to the subject, to proceed to more advanced material later on.

Also, as said before, note that there is not any full consensus about any specific interpretation
of weak measurements among physicists.
Especially on what's mentioned above, many have reservations, or are quite doubtful.

Remarkably, when a pointer device couples weakly to the observable O of a pre- and post-selected system,
the average of the "readings" converges not to one of the eigenvalues, but to the weak value of O.

You may find that quite remarkable by itself.

If you started out with initial preparation |A=a>, and post-select this state again,
you might suspect that both A and O were completely defined at the weak measurements.
Especially with non-commuting observables, you might find that quite remarkable (see section 4).
This seems to be in conflict with Heisenberg's uncertainty principle.

We don't do mathematics here, but some derivations by some really smart folks give us relations like:

Forward time evolution of the weak operator at t = SomeFunction(t, t0)
Backward time evolution of the weak operator at t = SomeFunction(t, t1)

Of course, these are not the real relations, but that is not the key issue here.

Both relations just express the fact that the events at t are (at least partly) the result of the
post-selection in the "future", and the pre-selection in the "past".
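For comparison, in the two-state vector formalism the weak value at an intermediate time t is commonly written (with U the ordinary time-evolution operator) so that the pre-selected state |Ψ> is evolved forward from t0, and the post-selected bra <Φ| effectively backward from t1:

```latex
O_w(t) \;=\; \frac{\langle \Phi \,|\, U(t_1, t)\, \hat{O}\, U(t, t_0) \,|\, \Psi \rangle}
                  {\langle \Phi \,|\, U(t_1, t_0) \,|\, \Psi \rangle}
```

Both boundary conditions enter the same expression on equal footing, which is where the time-symmetric reading of the result comes from.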

If this bears any relation to reality, you may consider it to be very remarkable!

Critical remarks:

Some folks argue this way:

The results are obtained using an "ensemble" of entities (the systems
that were prepared in the same initial state).
So, you still can't say anything about an individual system. It's just statistics.

This argument cannot be resolved "quickly". A weak measurement, by necessity (because it's "weak"),
simply needs a lot of identically prepared systems.
In general, however, the community of scientists involved does not see this criticism as fundamental.

Lots of other interesting stuff can be discussed. True, this simple note leaves out a lot, but
if you were indeed new to the subject, hopefully you are triggered sufficiently by now to explore further.