Journal of Cosmology, 2009, Vol 3, pages 529-539.
Cosmology, November 21, 2009

Can Discoverability Help Us Understand Cosmology?

Nicholas Beale, MA FRSA
Director of Sciteb: One Heddon Street, London.


Abstract

The discovery of the laws and constants of nature with increasing degree of verisimilitude requires that these laws are not only compatible with the evolution of intelligent beings, but also that these beings should have a sufficiently long period of cooperative existence and intellectual freedom. This leads to the formulation of a "Discoverability Principle" – more exacting in its conditions than the classic account of the Anthropic Principle. Furthermore if one adopts, at least as a thought experiment, a closely related "Discoverability Postulate", which can be expressed symbolically as a simple equation, then in addition to "explaining" the many anthropic features of the universe, it suggests a possible explanation of why the range of allowed parameters is so narrow. It also offers a basis for simplifying the highly profligate ontologies that occur in discussions of cosmology, and suggests some in-principle testable predictions. I also discuss some of the difficulties of attempted evolutionary explanations of the anthropic features of the universe, such as Smolin’s Cosmological Natural Selection.


1. "Introduction

Hydrogen has been described as "a colorless, odourless gas that, given enough time, turns into people" (S. Rasmussen, quoted in Balazs & Epstein 2009). Of course it is not that simple. According to the seminal book by Barrow & Tipler (1986), the initial conditions required for life to be able to evolve, anywhere in the universe, are very special indeed. At present we have a Standard Model of particle physics which describes, with extraordinary accuracy, all the known elementary particles and their interactions, but has about 20 adjustable parameters which have to be fitted to the data. Similarly there is a standard model of Cosmology with about 15 parameters. How are these 35-or-so parameters determined?

It’s well known that one strong constraint on the values of these parameters is the possibility of the existence of intelligent life anywhere in the universe. Even at the crudest level, assuming that life needs carbon and other similar elements and a reasonable amount of time to evolve (c. 3bn years after heavier elements have been produced) imposes very strong constraints on some of the fundamental parameters. For example ε, the parameter which defines how firmly atomic nuclei bind together, is about 0.007: if it were 0.008 or 0.006 there would be no intelligent life. Similarly Ω (the relative importance of gravity and expansion energy) has to be within 1 part in 10^15 of 1 in the early universe. There are plenty of other examples: λ (the coefficient in the Einstein Equation now associated with "dark energy") has to be within about 10^-100 of 0 (see eg Rees 1999, Polkinghorne & Beale 2009).

Following Barrow & Tipler (1986) it has become common to use "anthropic" considerations to select initial conditions for the universe, though this has been strongly criticised by Lee Smolin (2007) and others, who proposed an alternative of Cosmological Natural Selection. Although I have some sympathy with Smolin’s concerns, there are fundamental difficulties with explanations of the type he proposes. Instead I want to draw attention to two additional constraints which strengthen the "anthropic principle" and suggest a "Discoverability Postulate". This can be expressed as a simple equation, and offers the prospect of explaining a number of puzzling features of the universe as presently perceived, as well as some in-principle testable predictions.

To avoid this looking too much like a mathematical paper I won’t give formal definitions, but I will bold a term when it is first defined, and I will number my observations and conjectures for ease of reference.

2. L, X and PI

Let’s think about a set of laws of nature L and a set of initial conditions X (cf eg Hartle 2002). Let’s assume that there exist some laws (L*) and initial conditions (X*) that actually obtain in the Universe – this is impossible to prove but a reasonable working assumption if we are to do science. Why do these laws and these initial conditions obtain? If we define PI(L, X) as the probability that intelligent life will come into being somewhere in a universe with laws L and initial conditions X, we can note that PI(L*, X*) must be >0 otherwise we wouldn’t be here to ask these questions.

It became clear in the last 50 years that PI(L,X)>0 was a non-trivial constraint: many values of X would not give universes that lasted long enough, or produced carbon, or in some other way were highly inimical to intelligent life. Furthermore it became clear in the last 35 years or so that, at least with the Standard Model, there is an element of "fine-tuning" involved. In other words, if (Lc, Xc) are the Laws and Initial Conditions that we currently think we have, then:

Observation O1 ("Fine-Tuning"): PI(Lc, Xc + δX) = 0 for a significant fraction of possible values of δX

That PI>0 may not be too surprising but O1 really is. There seem to be two possibilities:

  • There are some more accurate laws and initial conditions (L+, X+) which are better approximations to (L*, X*), and from which Lc and Xc derive (at least to a very good approximation: aspects of the standard model have been verified to enormous accuracy), for which PI(L+, X+ + δX) is usually > 0; indeed if fine-tuning is to be avoided, we would ideally want PI(L+, X) > 0 for a wide range of parameters X.

    One can always artificially re-scale a parameter space and a set of laws. But we are reluctant to accept arbitrary re-scalings, which make the elegant mathematics of the Standard Model horribly ugly. Inflation and String Theory address aspects of the problem, but are examples of the Exploding Free Parameter Postulate (Polkinghorne and Beale 2009): "a theory that seeks to explain the fine-tuning of the Standard Model eventually has more free parameters than are explained".

  • There is some principle at work which selects (L*, X*) or at least selects X* from its neighbours.

One possibility is that X* represents some kind of maximum, so either:

Conjecture C1: X* is such that it maximises (or nearly maximises) PI(L*, X) or:

C2: X* is such that it maximises/nearly maximises the probability P(W|(L*,X)) of some quality W where P(I|W) >>0.

Again C2 will always be true for suitably artificial definitions of W, but we would like to find a quality W which is both plausible and ideally one for which there is some conceivable mechanism. But one fundamental problem with any attempt on these lines is that the value of x which maximises the abundance A(x) will usually have neighbouring values that give almost as large values of A:

O2: if x1 maximises A(x) and A is differentiable at x1 then A(x1 + δx) ≅ A(x1) − a''(x1)|δx|^2 with a''(x1) > 0

since the derivative will be zero at a maximum, and this tends to contradict O1. Although it is possible for A not to be differentiable, this is unlikely with an evolutionary process. For example, if we are considering evolution in discrete time then At+1(x) = ∫At(y)ft(y)Qt(y,x)dy, where f(y) is the fitness function and Q(y,x) is the mutation function giving the probability density that an entity with parameter values y will result in one with parameter values x (cf Nowak 2006 p273). If Q is differentiable – as commonly-used continuous probability distributions are – then (subject to a few technicalities) At+1 will be differentiable even if At and f are not. And even if Q is not differentiable, At+1 may still be: for example if Q is a finite sum of the form ∑i wi δ(y − ki, x) then At+1 will be differentiable if At and f are. Thus rather careful fine-tuning of the assumptions about fitness and mutation is required for any evolutionary "explanation" of O1 to escape O2.
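The smoothing effect of the update equation above can be sketched numerically. The following toy illustration uses assumptions chosen purely for simplicity (a step-function abundance, flat fitness, and a Gaussian mutation kernel – nothing in the text fixes these choices): a single discrete-time update already turns a non-differentiable At into a smooth At+1.

```python
import numpy as np

# Discretised toy version of A_{t+1}(x) = ∫ A_t(y) f_t(y) Q_t(y,x) dy.
x = np.linspace(-3, 3, 601)
dx = x[1] - x[0]

A_t = np.where(np.abs(x) < 1, 1.0, 0.0)  # non-differentiable: a step function
f = np.ones_like(x)                      # flat fitness, for simplicity

sigma = 0.3                              # assumed mutation spread
# Q[i, j]: Gaussian density that a parent at y_i yields offspring at x_j
Q = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
Q /= Q.sum(axis=1, keepdims=True) * dx   # normalise each row to integrate to 1

A_next = (A_t * f) @ Q * dx              # one update step

# A_next is smooth: its second differences are tiny, unlike the step in A_t,
# while total abundance is (numerically) conserved.
```
This is why a differentiable Q makes At+1 differentiable regardless of At: the integral acts as a smoothing convolution.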

3. Digression: The Trouble with CNS

An interesting example of the kind of thing envisaged by C2 is Lee Smolin’s idea of Cosmological Natural Selection (CNS). Smolin (2007) suggested that:

  • Universes might be "born" mainly from black holes in other universes,
  • There might be small random variations in their fundamental constants, and
  • The fine-tuning of the parameters required for life might also sharply maximize the production of black holes.
  • By a process analogous to Natural Selection, therefore, the probability that a "random" universe drawn from the "multiverse" of all existing universes would be fine-tuned for life would be high.

Unfortunately there are problems with all of these (see Polkinghorne and Beale 2009, and Silk 1997 for more details). Some of these are specific issues about physics, which I do not want to repeat here, but there are more general points which seem to apply to all attempts of this type.

Firstly, natural selection and Evolutionary Dynamics depend on having a common timescale. This is very problematic with sets of universes. Consider for example a "multiverse" in which there were 2 types of universe: Type A produced 10^10 Type A "children" and Type B had 2 Type B children. It is intuitively obvious that Type A would dominate the population. But if Type B produced children after 10^6 years and Type A required 10^10 years, then of course by the time Type A had produced its 10^10 children, about 10^3000 Type Bs would have been produced. The relative dominance of Type A and Type B in the overall population would therefore depend entirely on how you projected each universe’s time onto a hypothetical multiversal time.
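The dependence on the choice of clock can be sketched with the numbers from this example (the synchronous, non-overlapping generations below are a simplifying assumption, not anything required by the argument):

```python
# Toy sketch of the timescale problem for multiverse selection.
def population(children_per_generation, generation_time, elapsed_time):
    """Population descended from one founding universe after elapsed_time,
    assuming synchronous, non-overlapping generations."""
    generations = elapsed_time // generation_time
    return children_per_generation ** generations

T = 10**10  # years: the time Type A needs for a single generation
type_a = population(10**10, 10**10, T)  # one generation of 10**10 children
type_b = population(2, 10**6, T)        # 10**4 doublings in the same interval

# Under this clock Type B utterly dominates; project each universe's own time
# onto "multiversal time" differently and the ranking can be reversed.
```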

Secondly, specifying random variations in a parameter X requires more information than is contained in the parameter X: you need at minimum to specify a distribution and some kind of variance parameter. Although Evolutionary Dynamics will generally iron out the details of this choice, it will generally be possible to disrupt an Evolutionary Dynamical process with sufficiently strange distributions or high variances. If "explaining" the value of a parameter X requires postulating a distribution D and a variance V, then you end up with more postulates than explanations.

Thirdly, Evolutionary Dynamics works well because population sizes are constrained, generally by resources, and there is some effective competition between individuals and types, often mediated by physical proximity or other constraints. In typical multiverse theories these constraints are absent.

Fourthly, the most likely outcome of a randomised process may still be very unlikely, especially in the absence of constraints. If we tossed a coin 10^6 times and got exactly 5×10^5 Heads we would be very suspicious. Even if there were a plausible random process that generated values of X with a distribution S(X) and a maximum likelihood outcome XM such that PI(XM) >> 0, this would not entail that the expected value of PI(X) given the distribution S(X) was >> 0. Nor would such mechanisms explain the observation PI(XM + δX) = 0 unless S(X) was itself very finely tuned so that S(XM + δX) = 0, and we have discussed some of the problems about this in O2 above.
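The coin-tossing point can be made exact: the single most likely outcome of 10^6 fair tosses – exactly half Heads – still has probability under 0.1%.

```python
from math import comb, sqrt, pi

n = 10**6
# Exact binomial probability of the single most likely outcome, n/2 heads.
p_exact = comb(n, n // 2) / 2**n
# Stirling / normal approximation to the same quantity: sqrt(2 / (pi * n)).
p_approx = sqrt(2 / (pi * n))

# Both give roughly 8 x 10^-4: the maximum-likelihood outcome is itself rare.
```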

Smolin advocated CNS to demonstrate the possibility of developing a genuinely scientific theory about why the universe is likely to be anthropic—one that makes specific predictions that can, in principle, be falsified. It is highly commendable that he offers specific falsifiable predictions and engages in detail with the actual physics of black hole formation, and whether or not his ideas are right, his pursuit of them, in the teeth of a fashionable consensus, is admirable. But the problems mentioned above suggest that there are serious difficulties for all these types of theories, in addition to the specific problems with CNS.

4. Discoverability, Cooperation and Freedom

The view I want to suggest here is that (L*, X*) are constrained by a strengthened version of the Anthropic Principle, which we might call the Discoverability Principle, and would be highly constrained by a Discoverability Postulate.

Suppose that at time t it is believed, with good supporting evidence, that the laws and initial conditions are (Lt, Xt). (L*, X*) must be such that it is possible at time t that persons in the universe could have discovered (Lt, Xt). Let us write Dt(L,X) as the set of possible laws and initial conditions that could have been discovered by beings in a (L,X) universe at time t. Dt(L,X) is of course empty if there could be no intelligent life in such a universe at time t. We can then observe:

O3 ("Discoverability Principle"): If (Lt, Xt) has been discovered at time t then (Lt, Xt) ∈ Dt(L*,X*)

This is a non-trivial constraint. Not only does (L*, X*) have to allow for the emergence of intelligent life, it has to allow for a sufficient degree of cooperation between these intelligent life forms, and for enough creative thinking by them, to be able to do the necessary science. These are quite strong conditions:

O4: Critical levels of cooperation are needed between intelligent beings in order for them to do science for a sustained period of time. In particular, when technology has become sufficiently advanced for a set of intelligent beings to trigger their mass extinction (Elewa, 2009; Jones, 2009), then these beings need to achieve high enough levels of cooperation to avoid this for a substantial period of time (McKee, 2009; Tonn, 2009).

We need only mention the dangers of nuclear proliferation and extreme global warming to realise that it is not a foregone conclusion (Levy and Sidel, 2009; Rees 2004). The conditions under which cooperation will emerge in populations are becoming much better understood, and although the relationships between these and fundamental physics are beyond presently feasible calculation, it is clear that different physical conditions on otherwise habitable planets could influence the likelihood of highly cooperative advanced societies evolving. There are typically critical thresholds in terms of the ratio of benefits to cost of cooperation (b/c). For example if players have a probability w of having to play another round against each other, and the driving force for cooperation is "direct reciprocity" (I’ll cooperate if you do) then cooperation becomes an Evolutionarily Stable Strategy if b/c > 1/w (Nowak et al. 2006, Nowak 2006). Therefore conditions which tended to reduce the number of days in which individuals in a society could interact would tend to prevent the emergence of certain cooperative strategies. Much fascinating work is being done in this area (eg Rand et al 2009) and it seems likely that in 10-20 years we will have detailed quantitative understanding of many of these thresholds and mechanisms. It is therefore worth considering the extent to which they are fundamental to the possibility of sustained development of knowledge or scientific understanding.
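The direct-reciprocity threshold can be sketched as a one-line rule (the benefit and cost values below are illustrative assumptions, not figures from the text):

```python
def direct_reciprocity_favors_cooperation(b, c, w):
    """Nowak's rule for direct reciprocity: cooperation can be an
    evolutionarily stable strategy when b/c > 1/w, where b is the benefit
    and c the cost of cooperating, and w is the probability of a further
    round between the same players."""
    return b / c > 1.0 / w

# Hypothetical numbers: benefit 3, cost 1. Frequent repeat encounters
# (w = 0.9, so 1/w ≈ 1.11 < 3) clear the threshold; rare repeat encounters
# (w = 0.2, so 1/w = 5 > 3) do not.
```
This is why physical conditions that merely shorten the expected number of repeat interactions (lower w) could suppress cooperative strategies even among equally intelligent beings.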

Sustained cooperation is necessary for scientific progress, but not sufficient. Robot scientists might systematically try possible laws of nature and experiments (cf King et al 2009), but we have a strong, well-motivated, intuition that they would not be very good at producing fundamental new creative ideas (eg Anderson & Abrahams 2009). Proverbial "Monkeys with typewriters" have no chance of producing Shakespeare within the lifetime of the universe. Large combinatorial problems are intractable if attacked by "brute force" and it is known that randomised algorithms can be much more time-efficient. Genuine creative thought seems to require genuine intellectual freedom, both in the sense of not thinking and acting on wholly deterministic lines and of having a society which allows individual dissent – even though this is somewhat in tension with the cooperation needed per O4.

We can thus conjecture:

C3: Substantial sustained conscious intellectual freedom is necessary for the timely development of science.

This is also becoming, at least in outline, scientifically tractable. Siegelmann showed that sufficiently complex analog recurrent neural networks were not Turing machines (Siegelmann, 1995), underpinning John Lucas’ famous argument (Lucas, 1961) that the human brain cannot be a Turing machine. It is also clear that the release of neuro-transmitters is triggered by the docking of single Ca2+ ions with a synaptotagmin molecule, which means that its timing is subject to genuine quantum uncertainties. Standard simulations show that when the neuron is on the cusp of firing, small changes in the time of an input make a large change in the time of the output, and thus can make the difference between whether a subsequent neuron will fire or not (Polkinghorne & Beale, 2009). There are many other sources of stochasticity in the human brain, which, unless the whole universe is deterministic, is clearly a non-deterministic system. At the level of societies, the number of creative scientific ideas that are accepted into the scientific community will be a function of the number of members of that community, their ability to generate ideas, the extent to which they think independently, and the propensity of the community to accept them. Populations of intelligent conscious beings could be too small, too conformist or too limited in their ability to communicate to make substantial scientific progress in a reasonable time.

In short, C3, if true, further limits the possibilities for (L*, X*), and together with O4 imposes a deep and subtle set of constraints which, although well beyond our present ability to calculate, are worth recognising.

5. The Discoverability Postulate

It is widely supposed that, as time advances, scientific uncertainty will reduce, decisions will be made between previously conflicting scientific theories (either one is falsified or both subsumed into a larger integrated whole), and that the process will be essentially convergent. So we can think about D*(L, X) as the convergent limit, in a suitable sense, of the sets Dt(L, X) as t tends to infinity. If no single convergent limit exists we will set D* = ∅ by definition.

Since there will be an infinite set of equivalent reformulations of a given set of laws, with rescaled parameters, the "output" of D* would technically be an equivalence class, but the abuse of notation is I think justified for readability: the technical details of the convergence are beyond the scope of this paper. This allows us to formulate:

C4 ("Discoverability Postulate"): (L*, X*) = D*(L*, X*)

In words: the actual laws and initial conditions will be the convergent limit of what can be discovered. As with all postulates, it could be false, and I can see little prospect of a mechanistic explanation. However it could be read as a statement about the kinds of laws and constants under which it would be possible to do science in a satisfying way, eventually converging towards a true understanding. Arguably we should restrict ourselves to exploring scientific theories for which C4 is true, provided they are sufficiently consistent with experimental observation, and only abandon this postulate if forced to do so, since it would be an admission that we will never be able to get the convergent scientific understanding we seek and appear so far broadly to be achieving. Some motivation for C4 comes from:

O5: If (L,X) is compatible with all known observations and satisfies C4, there is an infinite set of {(L', X')} which are not discoverable but are equally compatible with all known observations.

Consider for example the idea of parallel universes, causally disconnected from our own with different laws and/or initial conditions. There are infinitely many such hypotheses, and no way of distinguishing them empirically from (L,X). But C4 can act as a quantified Occam’s Razor, focusing our attention only on those laws and initial conditions which have demonstrable empirical consequences in our universe. This also leads us to:

C5: If (L,X) is compatible with all known observations and stateable with a sequence of mathematical symbols small enough to be comprehensible to the human mind, there exists a similarly compatible and stateable (L',X') for which (L',X')=D*(L',X').

If (L,X)=D*(L,X) then C5 is clearly true. But if (L,X) has some parameters which are not discoverable then either these parameters make no observable difference, or they make an observational difference but only when combined with other parameters, in such a way that you could never observe the underlying value but only the value of these combinations. In either case we should be able to formulate an (L', X') which is observationally equivalent and discoverable. If for example it turned out that the only evidence for inflation was the Harrison-Zel'dovich spectrum (perhaps because some form of cosmic censorship prevented observation of inflatons), then one could replace inflationary hypotheses with a direct hypothesis about the initial conditions. I want to emphasise the if in the last sentence because we should never underestimate the ability of outstanding physicists like Smolin to find subtle experimental tests for effects which one would have thought experimentally inscrutable (see Abdo et al 2009, Amelino-Camelia & Smolin 2009).

6. C4 Suggestively Implies Fine-Tuning

C4 is a fixed point condition, which in principle imposes very tight constraints on (L*, X*). For this to be true the universe described by (L*, X*) must be such that intelligent beings with sufficient ability for sustained cooperation and, if C3 is right, intellectual freedom, can arise for long enough. Furthermore, for D* to exist Dt, which is itself an extremely complex and delicate operator and the function of many complex iterations of "lower level" factors, has to be convergent. The sets of values under which such operators are convergent tend to have a fractal character. It is therefore quite understandable in principle why small changes in X would tend to lead to divergence and an infeasible region. Hence we can plausibly suggest:

C6: If (L, X) = D*(L, X) for plausible values of (L, X) then in general D*(L, X + δX) = ∅

Note that this can in principle be explored at least for Lc (the laws we have at present) and a deeper understanding of the structure of D* might lead to a more general result. Indeed if we define D**(L,X) as the set of X for which D*(L,X)=(L,X) then we can restate C6 as: D**(L,X) will tend to be fine-tuned and have a fractal, or quasi fractal character.

Now admittedly D**(L,X) will be a subset of the set of values of X for which PI(L,X)>0, and the fact that a subset of a set S is fine-tuned does not imply that S is fine-tuned. But given the extreme subtlety with which the laws of physics are likely to influence the conditions of cooperation in O4 and freedom in C3, it is plausible to suggest that there may not be very large differences between the two sets. Hence the Discoverability Postulate offers, plausibly, an "explanation" of the very puzzling O1. Note that in this respect C4 differs significantly from:

C7 ("Strong Anthropic Principle"): PI(L*,X*) >> 0

Clearly C4 implies C7 but it is not at all clear why C7 should imply O1. Note of course that all the fine-tuning conditions which are entailed by assuming C7 (and thus in some sense "explained") are also entailed by C4.

7. Other Suggestive Implications

Although it is not easy to see how C4 could be falsified, one could imagine a result like Gödel’s theorem that showed that laws of nature beyond a certain level of complexity had the property of non-convergence so that D* was not well-defined. In conjunction with C3, C4 suggests a number of predictions that are at least in principle testable, eg:

C8: any plausible deterministic algorithm for discovering the laws and constants of nature would have an expected time to completion which is large compared to the age of the universe.

This is another prediction that is not a prediction of C7. In fact C8 may be too weak. Discovering the laws and constants of nature is, intuitively, a harder version of the problem of learning a language, and it has been known since Gold (1967) that there is no general algorithm for learning an arbitrary language. Vapnik and Chervonenkis (1971) demonstrated that a set of languages is learnable if and only if it has a finite VC Dimension, and Valiant (1984) showed that there are sets of languages that are learnable in principle but that no algorithm can learn in polynomial time. Note that these results extend into statistical learning theory and not just the deterministic cases. Exploring all this in detail is beyond the scope of this paper, but these considerations at least strongly suggest that we have to posit some limitations on the sets of possible scientific laws if we are to do science at all.

Accepting C4 would allow us at least in principle to dispense with the highly profligate multiverse hypothesis, by providing another explanation for the fine-tuning. It also offers the prospect of dealing in an orderly way with the "string landscape" problem: rather than having up to 10^500 possible string theories which are considered as in some sense describing existing universes, and then selected against, such theories are considered as possible values of L which do not obey C4 and thus do not obtain in the real world. C4 might also allow us to dispense with Inflation which, although it provides a nice explanation for the flatness of the initial universe (which is probably also required by C4), postulates inflationary fields/inflatons which have not been observed. The main observational evidence for inflation seems to be the observation from the Cosmic Background Radiation that there was a nearly-scale invariant Gaussian distribution of matter/energy in the early universe with a present day spectral index of c. 0.96; observations suggest that the value of 0.96 is significantly different from 1, which might otherwise be the natural expectation (Komatsu et al 2009 give an overview of the whole field in the light of the latest observations). But perhaps we can boldly offer:

C9: C4 will favour a nearly-scale invariant Gaussian distribution of matter/energy in the early universe with a present day spectral index c. 0.96.

Some of this may be a pure anthropic effect (C7) but it is at least plausible that the large-scale homogeneity of the universe is necessary for reasonably timely discoverability of the fundamentals of cosmology. Intuitively changes in the spectral index might be expected to have a significant effect on discoverability, but as yet I can’t find papers about this: the focus seems mainly on confirming inflation by finding fits with the data. In this context (and others) the remarks in Efstathiou (2008) seem very pertinent.

These are also benefits offered by C7, but superficially at least C7 is a much "fuzzier" condition than C4: how big does PI have to be and why is that value chosen? Admittedly the D* operator is well beyond the possibility of exact calculation, but at least we can begin to approximate to it by noting that we need Sufficient Sustained levels of Intelligence, cooperation and intellectual freedom (O4 and C3). Then the probability that (X,L)=D*(X,L) is the probability that there is sufficient sustained intelligence (SI), times the probability, given SI, of sufficient sustained intellectual freedom (F), times the probability, given SI and F, of sufficient sustained cooperation (C), times the probability, given SI, F and C, that (X,L)=D*(X,L) – assuming that an exact evaluation of D* is not possible. In symbols:

O6: p( (X,L)=D*(X,L) ) = p(SI) · p(F|SI) · p(C|SI∧F) · p( (X,L)=D*(X,L) | C∧SI∧F)

One can certainly imagine how one might start to estimate some of these terms, to provide some reasonable upper and lower bounds on the probabilities and hence get some sense of how likely C4 is to be fulfilled. It is also clear that this would be related to the number and distribution and lifetime of stable habitats in the universe where intelligent life might evolve. This would be highly inexact to start with, but the history of science in general and cosmology in particular suggests that once people start focusing on quantities that could usefully be estimated, ingenious researchers usually find ways of reducing the error bands. Whether such intelligent life-forms evolved completely independently or via some form of panspermia (Hoyle & Wickramasinghe 1985, 2000; Joseph 2000, 2009) would influence the detailed calculations (for example a civilisation that destroyed itself might nevertheless have seeded life on other planets) but not the basic principle.
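As a sketch of how such an estimate might be organised, the chain of conditional probabilities in O6 can be bracketed numerically. All the interval values below are purely hypothetical placeholders – the paper fixes no numbers – and serve only to show how upper and lower bounds on each factor would combine:

```python
# Hypothetical interval estimates (lower, upper) for each factor in O6.
factor_bounds = {
    "p(SI)":                     (1e-6, 1e-2),
    "p(F|SI)":                   (0.1, 0.9),
    "p(C|SI&F)":                 (0.05, 0.5),
    "p((X,L)=D*(X,L)|C&SI&F)":   (1e-4, 1e-1),
}

# Multiplying the interval endpoints brackets the product in O6.
lower, upper = 1.0, 1.0
for lo, hi in factor_bounds.values():
    lower *= lo
    upper *= hi
# lower and upper now bound p((X,L) = D*(X,L)) under these assumed intervals.
```
As researchers tighten the bounds on any one factor, the bracket on the overall probability narrows multiplicatively, which is what makes this decomposition a useful starting point despite its inexactness.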

C4 also suggests that theories under consideration should be discoverable, ie:

C10: For sufficiently large t, (Lt, Xt)=D*(Lt, Xt)

This would allow us to explore the Discoverability Postulate in the context of currently understood scientific theories. It suggests the additional testable predictions:

C11: D**(Lt,X) will have a fine-tuned character (ie usually X + δX ∉ D**(Lt,X)).

C12: As the conditions under which the levels of cooperation mentioned in O4 and the intellectual freedom required by C3 become better understood, it will become clearer that small changes in some aspects of X will have substantial effects on p(C) and p(F), even within the domain in which PI(L,X) is high.

For example, if it turns out that delay amplification of timing uncertainties due to the binding of Ca2+ ions is an important mechanism in the emergence of adequate intellectual freedom, then C12 suggests that some perturbation of the fundamental parameters is likely to change the magnitude of that effect without significantly changing the probability of carbon-based life.

8. Conclusions

In this paper I have sketched out a "Discoverability Principle" [O3], which appears to be a nontrivial strengthening of the Anthropic Principle, especially if [C3] is accepted. I also offer a "Discoverability Postulate" [C4], which, if accepted, can deal with some significant philosophical problems about multiverses and string landscapes, offering a possible explanation of the fineness of anthropic fine tuning, and a number of in-principle testable predictions [C8, C9, C10, C11 and C12], none of which (except perhaps C9) are entailed by the Strong Anthropic Principle [C7]. Although the operators defined here are far too complex for present calculation, it is clear that at least a start could be made on some quantitative aspects [O6]. Thus I would suggest that discoverability can help us advance cosmology, and that these would be fruitful avenues for research.


Acknowledgements: I am very grateful to John Polkinghorne for his helpful comments and advice on earlier drafts of this paper, to Rose Beale for discussions that led to O4, to a helpful discussion with Corina Tarnita and to the anonymous referees.

References

Abdo A. A. et al (2009). A limit on the variation of the speed of light arising from quantum gravity effects Nature 462, 331-334

Amelino-Camelia, G. & Smolin, L. (2009). Prospects for constraining quantum gravity dispersion with near term observations Phys.Rev.D 80:084017,2009

Anderson P.W. & Abrahams, E. (2009). Machines Fall Short of Revolutionary Science Science 324 1515-1516

Balazs, A. C. & Epstein I. R. (2009). Emergent or Just Complex? Science 325 1632 – 1634

Barrow J .D. & Tipler F. J. (1986). The Anthropic Cosmological Principle Oxford University Press, Oxford

Barrow J. D. (2007). New Theories of Everything 2nd Edition, Oxford University Press, Oxford

Carr, B (2007). ed Universe or Multiverse? Cambridge University Press, Cambridge

Efstathiou, G (2008). The Future of Cosmology, arXiv:0712.1513v2

Elewa, A. M. T. (2009). The History, Origins, and Causes of Mass Extinctions, Journal of Cosmology, 2, 201-220.

Gold, E. M. (1967). Language Identification in the Limit Inform. Control 10:447-474.

Hartle, J. B. (2002). The State of the Universe, arXiv:gr-qc/0209046

Hoyle, F., and Wickramasinghe, N.C. (1985). Living Comets, University College Cardiff Press, Cardiff.

Hoyle, F., Wickramasinghe, N. C. (2000). Astronomical origins of life – Steps towards panspermia, Kluwer Academic Publishers. 1–381.

Joseph, R. (2000). Astrobiology, the origin of life, and the death of Darwinism. University Press, San Jose, California.

Joseph, R. (2009). Life on Earth came from other planets. Journal of Cosmology 1, 1-56.

Jones, A. R. (2009). The Next Mass Extinction: Human Evolution or Human Eradication. Journal of Cosmology, 2, 316-333.

King et al (2009). The Automation of Science Science 324 85-89

Komatsu, E. et al (2009). Five-Year Wilkinson Microwave Anisotropy Probe Observations: Cosmological Interpretation. Astrophysical Journal Supplement Series, 180:330–376

Levy, B., and Sidel, V. (2009). The Threat of Nuclear War. Journal of Cosmology, 2009, 2, 309-315.

Lucas, J. R. (1961). Minds, Machines and Gödel. Philosophy, XXXVI

McKee, J. K. (2009). Contemporary Mass Extinction and the Human Population Imperative. Journal of Cosmology, 2, 301-308.

Nowak, M. A. et al (2006). Five Rules for the Evolution of Cooperation. Science 314, 1560-1563

Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press, Cambridge.

Polkinghorne, J.C. & Beale, N.C.L. (2009). Questions of Truth (Appendix A). Westminster John Knox, Louisville.

Rand, D. G. et al (2009). Positive Interactions Promote Public Cooperation. Science 325: 1272-1275

Rees, M. J. (1999). Just Six Numbers: The Deep Forces that Shape the Universe. Weidenfeld & Nicolson, London.

Rees, M. J. (2004). Our Final Century? Arrow Books

Siegelmann, H. T. (1995). Computation Beyond the Turing Limit. Science, 268, 545-548

Silk, J. (1997) Holistic Cosmology Science 277 644

Smolin, L. (2007). Scientific Alternatives to the Anthropic Principle, in Carr (2007).

Tonn, B. (2009). Preventing the Next Mass Extinction, Journal of Cosmology, 2009, 2, 334-343.

Valiant, L. G. (1984). A Theory of the Learnable. Comm. ACM 27:1134-1142

Vapnik, V. and Chervonenkis, A. (1971) On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2), 264–280



