Tuesday 28 September 2010

Final reply to the email

Yesterday, Jim French replied to my email:

"Hi Zainab,

Randomness is difficult to define. With regard to your first question, I think it's fair to say it's not clear that it is a proper question - that is to say, if we equate randomness with unpredictability, then of course we wouldn't be able to predict any random event (by definition). If you mean could we predict the behaviour of systems typically described as random (such as the roll of a die), the answer would be... it depends. Essentially, yes, in principle: if we knew enough about all the degrees of freedom (at the finest level, the positions and momenta of all particles in the system, though I suspect this could be done on a classical level, without recourse to quantum considerations), we could predict the result. Practically speaking, though, no. We would either need to set up such a precisely controlled system, or know so much about the system under consideration, that it would be impractical and/or would require the power of a supercomputer that has better things to do with its time. And even if predicting the behaviour of a die with well-measured properties turned out to be fairly straightforward, it would tell you little about whether or not truly random things actually exist.
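Jim's point that "random" often just means "we don't know the full state of the system" can be sketched with a seeded pseudorandom die (my own illustrative example, not from the email; the function name and seed values are made up):

```python
import random

def rolls(seed, n=5):
    """Roll a simulated die n times from a known internal state (the seed)."""
    rng = random.Random(seed)  # full knowledge of the "initial conditions"
    return [rng.randint(1, 6) for _ in range(n)]

# An observer who knows the seed can predict every roll exactly;
# an observer who doesn't just sees an unpredictable sequence.
predicted = rolls(seed=42)
actual = rolls(seed=42)
assert predicted == actual  # same state, same outcomes: deterministic
```

The die here is perfectly deterministic; its apparent randomness is entirely a matter of how much the observer knows, which is Jim's "in principle, yes; in practice, no" distinction.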

The concept of being able to predict and describe the future behaviour of anything we wanted dates back to the 18th century and is typically associated with the French mathematician Pierre-Simon Laplace and something called Laplace's Demon: a hypothetical creature that he imagined as (in principle, and possibly far into the future) possessing sufficient knowledge of the positions and speeds of all particles in the universe to perfectly predict its entire future evolution. This concept of causal or Newtonian determinism seemed unavoidable at the time (though it raised some uncomfortable questions about the nature of free will), until the beginning of the twentieth century, when it ran into the twin problems of quantum theory and chaos theory. Chaos theory presents no conflict with determinism in principle, but it does in practice. The physics of chaotic systems is still governed by underlying deterministic processes, but what was realised in the middle of the last century was that turning any knowledge of initial conditions into an actual prediction was far harder than previously thought. A small lack of precision in our knowledge of an initial state can quickly lead to huge uncertainty in the state of the system at some later time. Plenty of physical systems are not chaotic and are (fortunately) perfectly predictable, but this is not so for many others (such as the weather, where even with huge computing power at our disposal, we struggle to make good, accurate and specific predictions more than a week ahead).
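The sensitive dependence on initial conditions that Jim describes is easy to demonstrate (my own sketch, not from the email) with the logistic map, a standard toy chaotic system:

```python
def trajectory(x0, steps=50, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # starting point differs by one ten-billionth

# The separation between the two trajectories at each step:
gap = [abs(x - y) for x, y in zip(a, b)]
# gap starts microscopic, then grows roughly exponentially until the two
# trajectories bear no resemblance to each other.
```

The rule here is fully deterministic, yet a measurement error of one part in ten billion destroys all predictive power within a few dozen steps - exactly the weather-forecasting problem in miniature.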

The most important concept at play in the existence (or lack thereof) of true randomness is quantum theory. Causal determinism assumes a realist view of the world - objects in it have definite, objective properties that are true regardless of our having measured or observed those properties (the moon does exist and it is not made of cheese, and this remains true whether or not I choose to taste a mouthful). However, in quantum theory (at least in the Copenhagen interpretation), it is meaningless to speak of a property of a particle (such as its position) before we go in and measure it. The particle is not sitting there, waiting for us to shine a light on it, revealing its location. All we can talk of is the probability of observing it in one place and not another. The Heisenberg Uncertainty Principle is related to this concept: it states that the precision with which we can know a particle's velocity is limited by the precision with which we know its position (and vice versa). The more precisely we know where a particle is, the less precisely we can know how fast it is moving. To be clear, this is not an engineering limitation, something that will be overcome in a hundred years’ time with improved technology; it is a fundamental property of nature. Before we have made a particular measurement, it is meaningless to talk of a particle’s position and so on, since such properties simply do not exist. This has implications for predictability and randomness: if a particle’s position (or velocity, etc.) does not objectively exist, it is impossible to predict precisely what that position will be measured to be, and hence what the subsequent evolution of a system of particles will be.
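In modern notation (my addition - the email states this in words, and uses velocity where physicists usually work with momentum, p = mv), the uncertainty principle reads:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum, and \(\hbar\) is the reduced Planck constant. Because \(\hbar\) is so tiny (about \(10^{-34}\,\mathrm{J\,s}\)), the trade-off is invisible for everyday objects but dominant for electrons and atoms.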

Of course, it could be objected that this is only according to quantum theory, and that theory may be incorrect. Indeed, the theory was not (and I suppose still is not) uncontroversial, and its most famous detractor was Albert Einstein. He helped found the subject, but came to reject the theory as it developed and pursued his own independent (and ultimately unsuccessful) line of research. He disliked the interpretation of nature as probabilistic at heart and famously declared “God does not play dice”. He devised various thought experiments to try to show that quantum mechanics, as formulated at the time, was incomplete and led to contradictions and paradoxes. None of these convinced the mainstream, but one of the most intriguing was the Einstein-Podolsky-Rosen (EPR) paradox. The details of the proposal aren’t important here, but they led a physicist from Northern Ireland called John Bell to formulate Bell’s theorem. This showed that quantum mechanics makes predictions which cannot be explained by any locally real theory - that is, any theory in which particles have objectively real, well-defined properties and no influence travels faster than light. Various experiments have since demonstrated that nature does indeed obey the rules of quantum physics, and we must therefore adopt this peculiar view of nature based on probability and abstraction, rather than concrete realism.
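Bell’s theorem is often expressed through the CHSH inequality (a specific form not mentioned in the email, added here for concreteness): any locally real theory must satisfy

```latex
|S| \;=\; \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \;\le\; 2
```

where \(E(a,b)\) is the correlation between measurement outcomes at detector settings \(a\) and \(b\). Quantum mechanics predicts values of \(|S|\) up to \(2\sqrt{2} \approx 2.83\) for entangled particles, and experiments consistently find the quantum value, violating the local-realist bound.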


In answer to your questions, then: yes, randomness does exist in nature and it is found in quantum processes. Radioactivity, for example, is governed by quantum physics. There is simply no way to predict when a radioactive nucleus will decay and it may be considered genuinely random.
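The decay example can be sketched in a few lines (my own illustration, not from the email; the parameters are made up): each surviving nucleus has the same fixed probability of decaying in each time step, regardless of its history, which produces the familiar exponential decay of the population while leaving any individual decay completely unpredictable.

```python
import random

def simulate_decay(n0=10000, p=0.1, steps=30, seed=0):
    """Track how many of n0 nuclei survive, with decay probability p per step."""
    rng = random.Random(seed)
    remaining = n0
    counts = [remaining]
    for _ in range(steps):
        # Each nucleus independently decays with probability p this step.
        remaining = sum(1 for _ in range(remaining) if rng.random() >= p)
        counts.append(remaining)
    return counts

counts = simulate_decay()
# The population falls roughly as n0 * (1 - p)**t - smooth and predictable
# in aggregate - even though no single nucleus's decay time can be foreseen.
```

(The simulation uses a pseudorandom generator as a stand-in; in a real nucleus, quantum mechanics says there is no hidden "seed" to know.)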

There are all sorts of books out there that deal with the subjects of randomness, chaos theory and quantum physics. The standard popular exposition of chaos theory is Chaos by James Gleick. A good recent book that deals with randomness is The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow. Having not read any pop science books about quantum theory for years, I can’t give many recommendations, but any library or book shop will be stuffed with many that all cover the same ground. One that was particularly important to me as a teenager (though it is about a particular aspect of quantum physics rather than a general introduction) was QED: The Strange Theory of Light and Matter by Richard Feynman. You just need to google things like the EPR paradox and the uncertainty principle to find out about them (and they are fascinating subjects). Wikipedia has large articles on them.

If you have any more questions, I’d be happy to answer them."

So, to conclude, all the graduates believe that for most events we describe as random, the outcome could in principle be predicted, given enough information about the system. However, chaotic systems defeat prediction in practice, and quantum processes cannot be predicted even in principle - so pure randomness does exist in nature.
