I had my first supervision with Ms Caroussis since last term. We discussed the work I did over the summer. I feel a lot more confident about my progress now, but I really need to make sure that I don't fall behind on my Gantt chart.
I talked about possibly obtaining a grant, and my supervisor told me that I need to write a proposal to Mr Wright. This must include:
- Project brief
- Detailed costs
- Why I need the money to pay for these things
- Why it is important for my project to get the grant
I asked how large my range of sources should be, so for the next supervision I must bring a detailed, up-to-date bibliography so we can discuss other sources that I may need. I haven't managed to get opinions from any professionals, so my supervisor started an email with a few maths and physics undergraduates that she knows. I could also talk to physics and maths teachers in the school.
Possible questions that I should ask them:
- Whether they believe that randomness really exists
- Will we ever be able to predict the next number to show on a dice?
- Any book recommendations?
I can't think of any other questions at the moment but I need to come up with more so I can make the most out of having maths and physics related contacts.
During the supervision I thought about other things that I may want to put in my dissertation that I had not thought about before. My supervisor suggested that I write a bit about the importance of the topic - why should people be interested in randomness? - and also possibly a history of the debate around randomness.
The supervision helped me find out more about getting different types of sources to add to my list. I am now confident that my research will become very detailed and varied.
Tasks for this week:
FINISH READING RANDOMNESS!!
Write up random generators chapter.
Tuesday, 14 September 2010
Sunday, 12 September 2010
Gantt Chart Update

This week I need to focus on finishing Randomness and writing the "Random Generators/mechanisms" chapter of the dissertation. This means that I must complete my research on that topic, because at the moment I may not have as many sources as I would like to help me write that chapter.
I also need to enquire about possibly receiving a grant so I can buy the remaining books.
Useful quote
"From where we stand the rain seems random. If we would stand somewhere else, we would see the order in it."
Tony Hillerman - American author.
Suggests that randomness is subjective - doesn't truly exist.
Wednesday, 8 September 2010
Random generators
http://www.scholarpedia.org/article/Algorithmic_randomness
"Algorithmic randomness is the study of random individual elements in sample spaces, mostly the set of all infinite binary sequences. An algorithmically random element passes all effectively devised tests for randomness."
An algorithm is a set of instructions that will solve a problem (I have learnt this from my Further Maths AS level, in a topic called Decision Maths 1). Algorithms are mainly used by computers.
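To make this concrete for myself, here is a quick Python sketch of one classic algorithm for generating pseudo-random numbers: a linear congruential generator. The constants are illustrative textbook values, not something taken from my sources - it is just a minimal example of an "algorithmic" randomizer.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random integers in [0, m) from a
    linear congruential generator: x -> (a*x + c) mod m."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# The same seed always gives the same "random" sequence -
# which is exactly why these are only PSEUDO-random.
print(lcg(42, 5))
```

The key point for my chapter: the output looks unpredictable, but it is completely determined by the seed.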
I've begun to realise that a lot of the information about random mechanisms is very complex. I have two choices:
1. Don't go into much detail about random mechanisms, as it is a very complicated and very vast subject. This will make writing that part of my dissertation much easier.
2. Research random mechanisms IN DEPTH so that my dissertation will be more informed. I will then learn a lot more which is what this project is all about.
Number 2 it is.
This is an extract from the same webpage as the above quote:
"The theory of algorithmic randomness tries to clarify what it means for an individual element of a sample space, e.g. a sequence of coin tosses, represented as a binary string, to be random. While Kolmogorov's formalization of classical probability theory assigns probabilities to sets of outcomes and determines how to calculate with such probabilities, it does not distinguish between individual random and non-random elements. For example, under a uniform distribution, the outcome "000000000000000....0" (n zeros) has the same probability as any other outcome of n coin tosses, namely 2^(-n). However, there is an intuitive feeling that a sequence of all zeros is not very random. This is even more so when looking at infinite sequences. It seems desirable to clarify what we mean when we speak of a random object. The modern view of algorithmic randomness proposes three paradigms to distinguish random from non-random elements.
Unpredictability: It should be impossible to win against a random sequence in a fair betting game when using a feasible betting strategy.
Incompressibility: It should be impossible to feasibly compress a random sequence.
Measure theoretical typicalness: Random sequences pass every feasible statistical test.
It is the characteristic feature of algorithmic randomness that it interprets feasible as algorithmically feasible. "
This indicates that there are certain properties that a sequence must have for it to seem random. The fact that people are aware of these properties and can recreate them in things such as a random number generator shows that although one cannot predict a random sequence, a sequence can easily be identified as random, and it is possible to produce sequences that can easily pass as random.
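I tried out the "incompressibility" idea in Python using the zlib compression library. This is only a rough illustration, not a formal test from my sources: a highly patterned sequence (like the all-zeros outcome above) compresses very well, while random bytes barely compress at all.

```python
import os
import zlib

patterned = b"0" * 10000          # like the all-zeros coin-toss outcome
random_bytes = os.urandom(10000)  # randomness supplied by the operating system

# A compressor exploits structure; a (pseudo)random sequence has
# almost none, so its compressed size stays close to the original.
print(len(zlib.compress(patterned)))     # tiny - lots of structure
print(len(zlib.compress(random_bytes)))  # close to 10000 - little structure
```

So "random" here roughly means: there is no shorter description of the sequence than the sequence itself.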
I haven't blogged in a few days so here we go...
Bibliography so far:
BOOKS
Chance by Amir D. Aczel
Reckoning with Risk by Gerd Gigerenzer
Chaos by James Gleick
Randomness by Deborah J. Bennett
Introduction to random time and quantum randomness by Kai Lai Chung
Quantum Theory: A very Short Introduction by John Polkinghorne
Does God Play Dice? by Ian Stewart
WEBSITES
http://www.fortunecity.com/emachines/e11/86/random.html
RANDOM.ORG
http://www.scientificamerican.com/article.cfm?id=how-randomness-rules-our-world
http://en.wikipedia.org/wiki/Pseudorandom_number_generator
http://www.igs.net/~cmorris/index_subject.htm
http://ezinearticles.com/?Book-Review---Chance,-by-Amir-D-Aczel&id=3507603
http://www.goodreads.com/book/show/441215.Chance
http://www.faqs.org/docs/qp/chap01.html
I should be having a meeting with my supervisor very soon, now that I am back at school. I need to ask if this is a good amount of sources considering my stage in the project.
Reading through Randomness:
The next chapter is about dice rolls and how the probability of each number coming up will change depending on the dice (e.g. having 2,2,3,4,5,6 instead of 1,2,3,4,5,6).
Bennett then talks about rolling two dice.
"[...] let's imagine using coloured dice, one red and one green. For each of the 6 possible throws on the red die, 6 are possible on the green die, for a total of 6 × 6 = 36 equally possible throws. But many of those yield the same sum. To make things even more complicated, different throws can result in the same two numbers. For example, a sum of 3 can occur when the red die shows 1 and the green die shows 2, or when the red die shows 2 and the green die shows 1. Thus the probability of throwing a total of 3 is 2 out of 36 possibilities, or 2/36. A sum of 7, on the other hand, can be thrown 6 different ways - when red is 1 and green is 6; red is 6 and green is 1; red is 2 and green is 5; red is 5 and green is 2; red is 3 and green is 4; red is 4 and green is 3. Therefore the probability of throwing a 7 is 6/36"
This extract implies that if you throw two dice at random many times and write down the total, one can predict that a total of 7 will occur more often than a total of 3. This quote, like many others that I have found on the subject of probability, shows that it isn't absolutely impossible to predict a random event - we have a rough idea of how often each total should occur when a die (or two) is thrown a certain number of times.
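To check Bennett's figures for myself, I ran a quick Python simulation of many two-dice throws. With probabilities 6/36 and 2/36, a sum of 7 should come up about three times as often as a sum of 3:

```python
import random

trials = 100_000
counts = {3: 0, 7: 0}
for _ in range(trials):
    # one throw of the red die plus one throw of the green die
    total = random.randint(1, 6) + random.randint(1, 6)
    if total in counts:
        counts[total] += 1

print(counts[7] / trials)  # should be near 6/36 ≈ 0.167
print(counts[3] / trials)  # should be near 2/36 ≈ 0.056
```

The individual throws stay unpredictable, but the long-run frequencies are very predictable - which is exactly the point of the extract.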
Monday, 6 September 2010
Conclusion of Reckoning with Risk
While surfing Google I came across this website which may help me later on, when I am writing my section about quantum mechanics:
http://www.faqs.org/docs/qp/chap01.html
Here is an extract of a Reckoning with Risk review, written by Helen Joyce:
"Gerd Gigerenzer is not a mathematician or statistician per se, but primarily a psychologist, working across disciplines to understand how human beings make decisions in the face of uncertainty. What he offers here is nothing less than a prescription for how to think, how to choose, and how to live, when the information on which we base our decisions is necessarily incomplete and flawed. For example - how worried should you be if you have a positive mammogram as part of a screening programme for breast cancer, or a positive HIV test despite the fact that you are in a low-risk group? You may be surprised to learn that the answer may well be "not too worried" - what should really worry you is that not many medical personnel know this!
The book also looks at the way courts deal with uncertainty, and offers some suggestions for improving the handling of statistical evidence such as DNA testimony"
The fact that Gerd Gigerenzer is not a mathematician or statistician may make the book less reliable. The book in general does not seem to talk about randomness, but more about uncertainty. There is a clear difference between the two, because something that is uncertain may not be random. However, I did find a lot of information that will help me with the probability section of my dissertation. For example, at the end of the book, there is a chapter called "Fun Problems". The Monty Hall Problem is included in the chapter, along with many problems that I have never come across:
"The First Night in Paradise.
It is the night after Adam and Eve's first day in paradise. Together, they watched the sun rise and illuminate the marvelous trees, flowers, and birds. At some point the air got cooler, and the sun sank below the horizon. Will it stay dark forever? Adam and Eve wonder, What is the probability that the sun will rise again tomorrow?
[...] If Adam and Eve had never seen the sun rising, they would assign equal probabilities to both possible outcomes. Adam and Eve represent this initial belief by placing one white marble (a sun that rises) and one black marble (a sun that does not rise) into a bag. Because they have seen the sun rise once, they put another white marble in the bag. [...] Their degree of belief that the sun will rise tomorrow has increased from 1/2 to 2/3. [...] According to the rule of succession, introduced by the French mathematician Pierre-Simon Laplace in 1812, your degree of belief that the sun will rise again after you have seen the sun rise n times should be (n+1)/(n+2)."
This quote shows a different, unusual way to use probability which may be very useful to my dissertation.
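The rule of succession from the extract is simple enough to write down in a few lines of Python, using exact fractions so the marble-bag numbers come out cleanly:

```python
from fractions import Fraction

def rule_of_succession(n):
    """Laplace's rule of succession: degree of belief that an event
    happens again after it has been observed n times in a row."""
    return Fraction(n + 1, n + 2)

print(rule_of_succession(1))    # 2/3, Adam and Eve's bag after one sunrise
print(rule_of_succession(100))  # 101/102 - belief creeps towards certainty
```

It never actually reaches 1, which fits the idea that no amount of observation makes the next sunrise certain.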
Reckoning with Risk was a very interesting book but I don't think that it really gave me a lot of information that helped me progress in my project. I guess it showed me a more psychological side to mathematics but I do not know how relevant this actually is to randomness. In general, there was barely any information about the concept of randomness but it has shown me that randomness and uncertainty are two very different things.
Thursday, 2 September 2010
Reading through Randomness...
The book begins by discussing the misconceptions that occur when probability is used. The cases are similar to the ones used in Reckoning with Risk (i.e. false positives and negatives), so I won't use them.
She then talks about the use of randomness in decision making:
"Chance is a fair way to determine moves in some games and in certain real-life situations; the random element allows each participant to believe, 'I have an opportunity equal to that of my opponent.'"
I haven't really thought about the role of randomness in something like decision making. The quote suggests that randomness may be the only unbiased way of making decisions in certain situations.
Bennett then moves onto the randomizers that were used in ancient history.
"Though not always recognized or acknowledged as such, chance mechanisms have been used since antiquity: to divide property, delegate civic responsibilities or privileges, settle disputes among neighbors, choose which strategy to follow in the course of battle, and drive the play in games of chance."
It seems as though randomness has always been around, and people took advantage of its unpredictability to make quick decisions about certain events.
Deborah J. Bennett then talks through the various randomizers that were used in ancient times, many resembling the dice used today. Chapter three discusses religion and the role of God in the concept of randomness:
"The purpose of randomizers such as lots or dice was to eliminate the possibility of human manipulation and thereby to give the gods a clear channel through which to express their divine will. Even today, some people see a chance outcome as fate or destiny, that which was 'meant to be'"
This quote coincides with one I found in March: "God does not play dice" - Einstein
Both quotes seem to indicate that randomness does exist, but as an act of God. Randomness, like many other things in the world that humans do not fully understand, is a concept that we must accept and put faith in because it is a part of God.