Journal of Cosmology, 2010, Vol 12, 3537-3548.
JournalofCosmology.com, October-November, 2010

Energy and Interstellar Travel

Edgar D. Mitchell, Sc.D.1, Robert Staretz, M.S.
1Apollo 14 Lunar module pilot. Sixth person to walk on the Moon.


Abstract

The economics and practicality of interplanetary, and certainly interstellar, travel are directly related to energy generation and utilization. A brief review of existing propulsion systems is followed by a discussion of several possibilities for future energy generation and utilization systems. Until recently, all of these alternatives have been the stuff of science fiction, but recent discoveries in theoretical physics suggest that they may present real possibilities after all. They may be well beyond the capabilities of 21st century physics, but we can rest assured that as nature's secrets are slowly but continually revealed, interstellar travel may someday be well within humanity's grasp.

Key Words: Mars, Space Exploration, Astronauts, Manned Space Exploration, Colonization of the Planets, Interstellar Exploration, Humanity's Destiny in Space, Extraterrestrial Civilizations, Zero Point Energy, New Propulsion Systems, Physics of Interstellar Travel



1. INTRODUCTION

Interplanetary travel, and undoubtedly interstellar travel, depends directly on massive energy generation and utilization. This is certainly necessary if a traveler wants to reach the desired destination in a reasonable period of time. Traversing the enormous distances of space with only modest energy utilization is an extremely time consuming process and adds huge costs to space flight. Consider a manned exploration of Mars. With current technology a Martian trajectory requires about 8 months of transit time from Earth. This long duration requires extensive engineering to address many issues that dwarf those I (Edgar Mitchell) faced when I went to the moon on Apollo 14 in 1971. A few of the most challenging issues include maintaining a livable environment for the crew, carrying enough consumables and supplies for the round trip, meeting the physiological and psychological needs of the crew, and protecting against cosmic radiation and coronal mass ejections from the sun.

2. HUMAN FACTORS AND TRANSIT ISSUES

Of major concern for the crew on flights of long duration is the atrophying of the skeletal and muscular systems along with the weakening of the cardiovascular system. Without a mechanism for generating artificial gravity or a sufficient isometric workout system, our space explorers will likely be too weak for exploration when they arrive at their destination. Fortunately it is possible to create an artificial gravity environment by using the centripetal acceleration that results from spinning a portion of the spacecraft, or the entire craft, around an axis of rotation. However, artificial gravity comes at a great additional cost to the spacecraft design: it significantly increases the craft's size and adds substantially to the energy needed to keep the living quarters in a positive-g environment suitable for maintaining a healthy physiology for the crew. The isometric systems used on near-Earth and Apollo flights may be insufficient for the longer duration of flights to Mars. As important as these issues are, perhaps the most significant challenges concern generating energy for the ship's onboard systems and propulsion, because with sufficient energy the other issues become significantly more manageable. Most important would be using ample propulsion energy to decrease the transit time from Earth to Mars. Both planets orbit the sun in nearly the same plane, in slightly elliptical orbits, traveling counterclockwise as seen from above Earth's north pole. Since Mars is approximately one and a half times further from the sun than Earth, it takes about twice as long as Earth to complete one orbit around the sun. At their closest approach the distance between the two planets is approximately 55 million kilometers. Six months later, thanks to its higher orbital velocity, Earth will have traveled halfway around its orbit while Mars will have traversed only about a quarter of its own.
After an additional 6 months, Earth will be back at its starting point but Mars will have traversed only halfway around its orbit and will be at its furthest point from Earth (a distance of over 400 million kilometers). Clearly, planning a trajectory from Earth to Mars requires consideration of the orbital velocities of both planets as well as their relative orbital positions before a launch can be initiated.

There are several options for trajectories and launch windows for an outbound flight from Earth that will intersect Mars while minimizing the travel distance and hence the transit time. With the space propulsion systems available today, any launch outside such a window will increase the transit time and therefore increase mission costs and resource requirements. Once a rendezvous with Mars has been achieved, similar considerations apply to the return trip to Earth. For a minimum energy launch window, a transit time of approximately 8 months is required to reach Mars. Since both planets will have moved in their respective orbits around the sun, the optimum launch window for a low energy return flight would not occur for almost another year, followed by another 8 months of transit in the return trajectory. The total round trip mission time would be on the order of 28 months.
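The minimum-energy transit time and the cadence of launch windows follow from basic two-body orbital mechanics. The sketch below assumes circular, coplanar orbits (real mission design uses full ephemerides), computing the one-way Hohmann transfer time and the Earth-Mars synodic period that sets how often low-energy windows recur.

```python
import math

AU = 1.495978707e11          # meters per astronomical unit
GM_SUN = 1.32712440018e20    # m^3/s^2, gravitational parameter of the Sun

r_earth = 1.000 * AU
r_mars = 1.524 * AU          # mean orbital radius of Mars

# Hohmann transfer: half an orbit on an ellipse touching both planet orbits.
a_transfer = (r_earth + r_mars) / 2
transfer_time = math.pi * math.sqrt(a_transfer**3 / GM_SUN)  # seconds
print(f"One-way Hohmann transfer: {transfer_time / 86400:.0f} days")

# Synodic period: time between successive minimum-energy launch windows.
T_earth = 2 * math.pi * math.sqrt(r_earth**3 / GM_SUN)
T_mars = 2 * math.pi * math.sqrt(r_mars**3 / GM_SUN)
synodic = 1 / abs(1 / T_earth - 1 / T_mars)
print(f"Launch windows recur every {synodic / 86400:.0f} days")
```

The transfer works out to roughly 260 days, consistent with the "approximately 8 months" quoted above, and windows recur about every 26 months, consistent with the 28-month round trip.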

It is certainly possible to reduce the mission time by using larger propulsion systems, but this creates the typical cost-benefit tradeoff: how much are we willing to spend to substantially reduce the mission duration? Clearly, shortening the mission would reduce the psychological and physiological hazards of interplanetary travel for the crew, along with reducing the amount of supplies and consumables necessary for the trip. However, without major advances in space propulsion it seems that for the time being humankind will be restricted to lengthy and costly exploratory missions to the red planet. There are indications that advances in our understanding of new methods of energy generation and utilization may already be on the horizon; methods that would mitigate many of the problems of interplanetary travel and eventually enable interstellar travel as well. We will therefore turn our attention to the frontiers of science to explore the possibilities for new propulsion technologies that may someday drastically reduce the duration of manned interplanetary and interstellar missions.

3. ENERGY UTILIZATION AND CLASSIFICATION

There is a direct correlation between the technology a civilization utilizes and the energy required to support that technological base. The Kardashev scale, named after the Russian astronomer who first proposed it in 1964, classifies how technologically advanced a civilization is by the energy it is able to generate and consume. Kardashev originally proposed three categories based on the amount of usable energy to which a civilization has access. Extensions have since been proposed that make it a continuous scale from zero to three, where each whole-number step represents an increase of roughly ten orders of magnitude in energy utilization over the previous one.
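Carl Sagan's widely used interpolation makes the continuous scale concrete: K = (log10 P - 6) / 10, with P the civilization's power consumption in watts. A minimal sketch (the 2e13 W figure for present-day Earth is an assumed round number):

```python
import math

def kardashev(power_watts):
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

earth_now = kardashev(2e13)   # present-day Earth, roughly 0.73
type_1 = kardashev(1e16)      # all power available to a planet -> 1.0
type_2 = kardashev(1e26)      # total output of a sun-like star -> 2.0
print(earth_now, type_1, type_2)
```

The result for present-day Earth agrees with the "about 0.7" figure cited in this section.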

On Earth, until about 150 years ago, energy generation was based primarily on burning carbon-based fuels such as wood, charcoal and related materials, with a tiny amount of energy also derived from windmills and water wheels. Over the last 150 years the industrial revolution has brought the widespread burning of non-renewable fossil fuels such as coal, petroleum and natural gas, along with a smaller contribution from hydroelectric power. More recently we have begun utilizing atomic power (nuclear fission) and introducing various renewable sources such as wind, solar, tidal and geothermal energy. Together the output of all these energy resources places Earth's technological level at a Type 0 civilization (actually about 0.7) on the Kardashev scale, still a primitive level of technological development.

Perhaps within the next 100 years with the widespread use of various renewable energy sources and the addition of nuclear fusion (the energy generation process that powers the sun) we will achieve the status of a Type 1 civilization. This civilization is able to harness all of the power available on a single planet, or approximately 10 billion megawatts. Clearly energy generation and utilization on this scale is necessary if we are to fully support our own civilization's needs and to begin to explore, colonize and exploit the resources of our solar system. Eventually this level will also give us the capability for interstellar travel to our nearest neighbors, say within 10 light years of Earth.

Getting beyond a Type 1 civilization requires the technological capability to harness the entire energy output of a star the size of the sun, or approximately 100 billion billion megawatts (10^26 watts). At this level our civilization will have completely explored and colonized the solar system and will even have established colonies in nearby star systems. At this stage, known as a Type 2 civilization, the energy generation and utilization we will have mastered will be about a ten-billion-fold increase over that controlled by a Type 1 civilization. When we reach the Type 2 stage, our civilization will no longer need to be concerned about either natural or man-made extinction events, as it will have the capability to prevent or avoid them. In addition, as a Type 2 civilization (or beyond) depletes the resources of a single planet or star system it will simply relocate and colonize another. How long will it take to reach this stage? Estimates vary, but it is reasonable to assume a few thousand years (assuming that technological progress continues at its current pace and we do not destroy ourselves in the process).

Getting beyond a Type 2 civilization to a Type 3 requires mastering the energy production and utilization of an entire galaxy. At this point such an advanced civilization could manipulate not only matter but the very fabric of space-time itself. It would have the capability to warp space, create wormholes, achieve superluminal travel and possess technological capabilities that would seem magical to us today. The energy mastery of a Type 3 civilization is probably at least a further ten orders of magnitude beyond that of a Type 2. If our progeny are to explore our own galaxy and venture beyond into the depths of intergalactic space, we will have to achieve this level of technological capability. Perhaps it will take us between 10,000 and 100,000 years to do so.

What might the mechanism be for such incredible energy generation? Does our science have any possibilities on the horizon to generate the energy required for Type 2 or Type 3 civilizations? Clearly nuclear fusion will get us to Type 1 and perhaps set us on the path to a Type 2 civilization, but it does not appear attractive for long distance interstellar travel because it requires carrying along the fuel needed to sustain the fusion process. As powerful as it is, nuclear fusion is a relatively inefficient process. Consider the energy generated by nuclear fusion at the core of our sun. For every second of energy production, about 700 million tons of hydrogen are needed to sustain the reaction, yet only about 5 million tons of that mass are actually converted to energy (according to Einstein's equation E = mc^2); the remaining 99+ percent is fused into helium, the ash of the fusion process. This is an efficiency of less than 1%. Suppose it would take 1000 metric tons of hydrogen, fully converted, to generate the energy required for a spacecraft to travel from Earth to a nearby star. A ship whose engine operates at 1% efficiency would need to carry 100 times that amount, or 100,000 metric tons of hydrogen. This suggests that fusion engines are not useful for long distance interstellar travel unless, perhaps, we are able to mine hydrogen from interstellar gas clouds along the way and/or vastly increase the efficiency of the fusion process.
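The arithmetic behind these figures can be checked directly; the 700 and 5 million ton values are the round numbers used in the text, so the power estimate comes out somewhat above the sun's measured luminosity of about 3.8e26 W.

```python
# Back-of-the-envelope check of the solar fusion figures quoted above.
c = 2.998e8                      # speed of light, m/s
hydrogen_per_sec = 700e6 * 1000  # kg of hydrogen fused each second
mass_to_energy = 5e6 * 1000      # kg actually converted to energy each second

efficiency = mass_to_energy / hydrogen_per_sec
power = mass_to_energy * c**2    # E = m c^2 per second, i.e. watts

print(f"Mass-to-energy efficiency: {efficiency:.1%}")
print(f"Solar luminosity estimate: {power:.2e} W")
```

The efficiency comes out to about 0.7%, matching the "less than 1%" claim.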

4. EXISTING PROPULSION SYSTEMS

For space travel within the solar system in the near term, exotic forms of propulsion have been proposed and, in some cases, already demonstrated. Solar sails have actually been flown; they rely on pressure from both the solar wind and electromagnetic radiation (sunlight), which push on the sail and move the craft much as wind moves a sailboat on the ocean. The problems with this approach are threefold. First, even close to the sun the pressure is weak, and beyond a certain distance (say, the orbit of Mars) it becomes so weak that the method is no longer viable. Second, it takes a long time to accelerate a spacecraft to the velocity needed to traverse the vast distances between objects in the solar system, doing little to reduce transit time compared with ordinary chemical rockets. Finally, retracing a spacecraft's steps back to the inner solar system would be more difficult than the outward journey, since it means flying into the solar wind. Clearly, to reach the outer edges of the solar system and return safely to Earth, other methods must be used.

For flights to the edges of the solar system, nuclear propulsion is a good choice today. Several types of nuclear propulsion systems have been proposed, including nuclear thermal, nuclear electric, and nuclear pulse systems. Within the next 50 to 100 years nuclear fusion will probably be added to the list. In most of these methods, the energy generated is used to accelerate ions (charged particles) carried on the spacecraft as propellant. Once at a high enough velocity, the ions are expelled through a nozzle at the rear of the spacecraft; Newton's third law and the conservation of momentum then apply, and the ship moves in the direction opposite the exhaust. Because these ions leave at very high velocity and have tiny mass, propulsion can be sustained far longer than with conventional chemical propellants. The result is that such engines can accelerate the craft to a higher final velocity, albeit over a much longer duration.

More exotic forms of space propulsion include nuclear pulse and antimatter engines. Nuclear pulse propulsion drives a spacecraft with a series of small nuclear explosions; in essence the craft rides the shockwaves from detonations behind it. In antimatter propulsion, equal amounts of matter and antimatter are brought together and annihilate, creating vast amounts of pure energy in the process. Although matter-antimatter reactions produce very high energies, they pose two major technical problems. The first is that antimatter does not, as far as we know, exist naturally in significant amounts; it would have to be manufactured with high energy particle accelerators or similar expensive equipment. Second, once produced it would have to be contained and stored for later use as propellant. This is no small task, because antimatter cannot be kept in a normal container of ordinary matter: it would annihilate instantly on contact with the container's walls. The only practical solution would be some type of "electromagnetic container" to confine the antimatter. If these problems can be overcome, matter-antimatter annihilation would release on the order of 10 to 100 billion times more energy than today's chemical rockets, making it ideal for interstellar space travel.

The trouble with all of the propulsion systems described above is that the fuel must be carried aboard the ship. In most cases this greatly increases the ship's mass, whether from the fuel itself or from its containment system, and much of the momentum of the exhaust is effectively "wasted" on accelerating the remaining unspent fuel. In addition, except for matter-antimatter propulsion, these methods cannot deliver the energy needed to accelerate a spacecraft beyond a small percentage of the speed of light. At 5-10% of light speed, a trip within the solar system would still take months or years, and travel to the nearest star would take over forty years one way. And there is always the issue of having enough fuel for deceleration at the destination as well as for the return to Earth. For interstellar travel, most of the rocket propulsion systems described above are simply not practical: all (except matter-antimatter) reach only modest velocities, making a round trip to even the nearest star a journey that would literally last a lifetime.
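The penalty for carrying propellant can be quantified with the Tsiolkovsky rocket equation, delta_v = v_e ln(m0/m1): the required initial-to-final mass ratio grows exponentially with the target velocity. A sketch with assumed, round exhaust velocities (the ratios are reported as powers of ten because they overflow ordinary floating point):

```python
import math

def log10_mass_ratio(delta_v, v_exhaust):
    """log10 of the initial/final mass ratio: m0/m1 = exp(delta_v / v_e)."""
    return (delta_v / v_exhaust) / math.log(10)

c = 3.0e8
delta_v = 0.1 * c  # accelerate (once, with no deceleration) to 10% of light speed

for name, v_e in [("chemical rocket", 4.5e3),
                  ("ion thruster", 5.0e4),
                  ("fusion drive (assumed)", 1.0e7)]:
    print(f"{name}: mass ratio ~ 10^{log10_mass_ratio(delta_v, v_e):.0f}")
```

Even the optimistic assumed fusion exhaust velocity demands a mass ratio of roughly 20 to 1, while chemical rockets require an absurd ratio of about 10^2900, which is the quantitative content of the "wasted momentum" argument above.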

5. ACHIEVING SUPERLUMINAL VELOCITIES

For realistic interstellar travel, faster than light (superluminal) travel is needed. The distances in space are so huge that even at the speed of light it would simply take too long to reach any but the nearest stars. Moreover, one consequence of Einstein's special theory of relativity is that as an object accelerates toward the speed of light, its relativistic mass increases without bound. A more massive object has greater inertia and therefore requires more force, and hence more energy, to accelerate it further; as the craft's velocity approaches the speed of light, the energy required grows toward infinity. Even if such speeds were attainable, the inertial forces of a sufficiently rapid acceleration would likely crush the spacecraft's occupants in the process.
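This blow-up can be made concrete with the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2): the kinetic energy of each kilogram of ship is (gamma - 1)c^2, which grows without limit as v approaches c.

```python
import math

c = 2.998e8  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1 / math.sqrt(1 - (v / c) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99, 0.999):
    g = gamma(frac * c)
    energy_per_kg = (g - 1) * c**2  # relativistic kinetic energy, J/kg
    print(f"v = {frac:>5.3f} c   gamma = {g:8.3f}   E_k = {energy_per_kg:.2e} J/kg")
```

At 99.9% of light speed the energy cost per kilogram is already more than twenty times the full rest-mass energy, and every further decimal place of speed multiplies it again.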

As we have just seen, interstellar space travel requires very different approaches from ordinary rocket propulsion. For it to become a reality for all but the closest stars, there are two prerequisites. The first is an energy source that is truly massive, on a par with the power output of the sun or perhaps beyond. As described above, one possibility, if the generation and containment problems can be overcome, is a matter-antimatter reaction engine. The other prerequisite is a power source that is unlimited and ubiquitous. One candidate that has recently received attention from several physicists is a propulsion system based on Zero Point Energy (ZPE), which we will now discuss. In many ways ZPE, if it can be harnessed, seems to meet the energy requirement more easily than a matter-antimatter reaction engine. ZPE, although recognized as a real phenomenon, is not currently accepted by mainstream science as a source of usable energy. It is presented here, along with other possibilities, because, as science has demonstrated over its 400 year history, the negative pronouncements of one generation of scientists often turn out to be the reality of the next as new information and discoveries are revealed. At the very least these ideas may stimulate research and eventually new discoveries in related areas.

ZPE is an electromagnetic energy source from which, on some theories, all matter in the universe is derived and sustained. Various calculations have estimated that a volume of empty space the size of a typical coffee cup contains enough energy to boil away all the Earth's oceans. To state it another way, the zero point energy density is potentially over 110 orders of magnitude greater than the energy density of the electromagnetic radiation generated by nuclear fusion at the center of our sun. With such huge values, the effects of zero point energy might be expected to be obvious, but because it is isotropic and uniform it is no more detectable than atmospheric pressure is to a person's body. If we can ultimately find a way to harness it, ZPE has an interesting attribute as a potential energy source: it seems to violate the first law of thermodynamics, the principle of conservation of energy. The zero point energy density appears to be constant; it does not become diluted but instead is literally created out of nothing.

Quantum vacuum fluctuations are called zero point field fluctuations because they represent the residual energy of any quantum mechanical system at a temperature of 0 K (-273.15 °C), also referred to as "absolute zero", the coldest temperature that can exist in nature. This is the point at which, classically, all thermal motion of atoms would cease. Since temperature is an indicator of the average kinetic energy (i.e. energy of motion) of the molecules or atoms being measured, absolute zero represents the lowest quantized energy state of any quantum mechanical system. The energy E of such a system is given by E = (1/2)hf, where h is Planck's constant and f is the frequency of oscillation. This is the energy that remains when all other energy (e.g. kinetic energy) is removed from a system, and it is therefore sometimes called the ground state energy or simply the "ground state". So even at absolute zero, all particles retain some energy. The question is: where does this energy come from?
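The ground state formula is easy to evaluate. Per oscillator mode the energy is tiny; the enormous totals attributed to the vacuum come from summing over all modes at all frequencies. A sketch at a few arbitrary sample frequencies:

```python
h = 6.62607015e-34  # Planck's constant, J*s

def ground_state_energy(frequency_hz):
    """Zero point energy of a single oscillator mode: E = (1/2) h f."""
    return 0.5 * h * frequency_hz

for f in (1e9, 1.7e12, 5e14):  # a radio frequency, ~THz, visible light
    print(f"f = {f:.1e} Hz -> E = {ground_state_energy(f):.3e} J")
```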

In 1927 the physicist Werner Heisenberg resolved this issue with the principle that now carries his name. If a particle at absolute zero stopped moving entirely, its velocity (zero) and its position would both be known exactly, violating the uncertainty principle; the particle must therefore retain some residual motion, oscillating about a reference point. The uncertainty principle also applies to paired measurements of energy and time, both relationships involving Planck's constant h. The implication is that the residual energy of empty space is not constant but fluctuates over time. From Einstein's equation for the equivalence of energy and mass, E = mc^2, the zero point field can manifest this energy as electromagnetic waves and as their mass equivalent, "virtual" particles of empty space. These energy waves and particles pop into and out of existence within a time interval dictated by Heisenberg's uncertainty principle. (The mass created from a given amount of energy follows from rearranging E = mc^2 into m = E/c^2.) The particles include photons and electron-positron pairs; they are called virtual because of their very short existence, and when the members of a pair meet they immediately annihilate, returning pure energy. Both quantum theory and experimental verification indicate that all empty space (i.e. the entire universe) contains vast amounts of this quantum vacuum energy. Furthermore, the ZPE produces electromagnetic fields of all frequencies (down to wavelengths as short as the Planck length), continuously fluctuating about their zero baseline values.

In 1948 Hendrik Casimir suggested that evidence for the energy of the zero point field could be detected through forces resulting from the ZPE. Casimir predicted that two parallel conducting metal plates placed very close together would attract each other. The reason is that the small gap between the plates admits only electromagnetic fluctuations of correspondingly small wavelengths, while no such restriction applies outside the plates; the resulting excess radiation pressure from outside pushes the plates together. This was experimentally demonstrated by Steve Lamoreaux at the University of Washington in the mid 1990s and later by Umar Mohideen at the University of California; in both cases the results were very close to Casimir's predicted values. These experiments confirm the quantum mechanical prediction that the quantum vacuum continuously spawns particles and waves of all possible wavelengths (down to the Planck length) that spontaneously pop in and out of existence. They also suggest possible ways to harvest this energy, and various demonstration experiments are being developed to test ZPE as a potential energy source.
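For ideal conductors at zero temperature, the predicted Casimir pressure between the plates is P = pi^2 hbar c / (240 d^4). This is the idealized textbook result behind the Lamoreaux and Mohideen measurements; real experiments correct for finite conductivity, surface roughness and temperature.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(separation_m):
    """Attractive pressure (Pa) between ideal parallel plates a distance d apart."""
    return math.pi**2 * hbar * c / (240 * separation_m**4)

print(f"{casimir_pressure(1e-6):.2e} Pa at 1 micron")
print(f"{casimir_pressure(1e-8):.2e} Pa at 10 nm")
```

The steep 1/d^4 dependence is why the force is negligible at everyday scales yet readily measurable, and dominant, at sub-micron separations.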

The quantum foam of the zero point field (ZPF) extends everywhere throughout the universe: through the vacuum of space and even through the empty space within atoms. Moving electrically charged particles such as electrons are disturbed (i.e. made to wobble) as they experience the vacuum electromagnetic fields. The ZPF has been invoked to explain why helium remains a liquid at absolute zero, and its proponents argue that it underlies a number of other quantum effects as well, including why electrons remain in orbit around the nucleus rather than spiraling into it. Not only electrons but all atomic and subatomic particles are influenced by the ZPF. This makes the vacuum energy act like a medium, so the previously discarded concept of an aether through which electromagnetic waves propagate in empty space is, in a modified form, back in vogue.

Physicists Hal Puthoff and Bernard Haisch have written a series of papers on the ZPF describing inertia as an electromagnetic drag experienced by objects accelerating through it. Rueda and Haisch (1998) have gone a step further and derived Newton's second law of motion from the properties of the ZPF. Extrapolating these concepts, it would appear that inertia, mass and gravity are all related in some fundamental way, implying that gravitational attraction may be an effect of matter moving through and interacting with the ZPF. This was first proposed in 1967 by the Russian physicist Andrei Sakharov, who suggested that gravity is not a fundamental force but an induced effect brought about by changes in the quantum fluctuation energy of the ZPF when matter is present. If this turns out to be true, it might someday be possible to manipulate mass, nullify inertia and accelerate spacecraft to extreme speeds in incredibly short periods of time. In the late 1990s astronomers observed that the expansion of the universe (i.e. of space itself) is accelerating. This acceleration is attributed to dark energy, which may also be related to the ZPF. It is called "dark" energy because scientists are not sure what it is composed of; whatever it is, it appears to make up over 70% of the mass-energy of the universe. Several theories have been proposed to explain the accelerating expansion. ZPE has been proposed as one possible driver, but its calculated energy density is some 120 orders of magnitude greater than the acceleration requires. Beck and Mackey (2007) may have resolved this discrepancy by suggesting a "phase shift" in the electromagnetic energy emanating from the ZPF: the gravitationally repulsive effect of dark energy would operate only below a frequency boundary of about 1.7 THz and be neutralized above it.
An experiment has been proposed to test this hypothesis but has yet to be carried out. Dark energy has had a major impact on cosmology, which had long assumed that the gravitational attraction of all the matter in the universe was slowing the rate of expansion. When Einstein originally developed his general theory of relativity, he added a cosmological constant (a term acting as a repulsive, "negative gravity" force) to his equations to counteract the attractive force of gravity so that the universe would remain in a steady state, neither expanding nor contracting. In 1929, when Hubble produced evidence that the universe was expanding, Einstein removed the cosmological constant, reportedly calling it the biggest blunder of his career. Now it seems that, thanks to the ZPF and dark energy, Einstein may have been correct all along. Clearly we have a long way to go to harness ZPE for interstellar travel, but if we are someday able to do so, interplanetary travel may one day be no more time consuming than my trip to the moon in 1971.

Let us now turn to some other possibilities on the horizon for interstellar travel. The fixed value of the speed of light in a vacuum is the primary reason many scientists believe interstellar travel to all but the closest stars will never be possible. Within the last 15 years, however, physicists have been exploring ways around this limitation and have made progress in identifying possible circumventions of nature's speed limit. A cornerstone of modern science is that the speed of light in a vacuum, denoted c, is constant at approximately 300,000 km/s. This speed limit is backed by abundant experimental evidence, yet there are indications that the speed of light may not be constant after all. Faster than light expansion of space is part of standard inflation theory (during the very early universe), as is the currently accelerating expansion driven by dark energy. But these concepts apply to the stretching of space itself, not to the maximum speed at which matter can travel through it. Even so, stretching and shrinking space is one way around the limit, and there appear to be several other ways around nature's cosmic speed restriction for matter as well.

One way around this "matter" speed limit is to manipulate the speed of light itself, as we describe below. Another is to "warp" space-time without violating the light speed restriction, perhaps someday by exploiting effects similar to those attributed to dark energy. An even more exotic approach would be to create traversable wormholes. Until recently all of these alternatives were the stuff of science fiction, but recent work in theoretical physics suggests they may be real possibilities after all, as we shall now explore.

According to Einstein's theory of general relativity, massive objects curve space-time; this is why the sun, for example, bends light from distant stars as it passes nearby. But this is just one interpretation of what is happening. Another is that massive objects affect the magnetic permeability and dielectric permittivity of empty space, and this is why a beam of light curves as it passes nearby. These parameters determine the speed of light in any medium, including the vacuum of space, through the formula c = 1/(μ0 ε0)^(1/2), where μ0 and ε0 are the magnetic permeability and dielectric permittivity of the vacuum respectively. These same quantities determine the index of refraction, which is directly related to the speed of light in any medium.

Together these parameters define a property called the refractive index, usually represented by the symbol n, with the empty vacuum as the reference medium. For light it is given by n = √(με / μ₀ε₀), where ε is the transmission medium's permittivity and μ is its permeability. The refractive index expresses how fast light travels in a specific medium relative to the speed of light in an empty vacuum. In such a vacuum the refractive index equals unity, meaning light is not bent at all but follows a straight path. If it is greater than one (for example, near a massive object), the medium is effectively denser than a normal empty vacuum, and light will bend toward the massive object. Contrary to a widespread misconception, it is also possible for light to bend the opposite way: in this case the index of refraction of the vacuum is less than 1, the medium is effectively less dense, and light bends in the opposite direction from what it would do if a massive object were present. Examples include light traveling between the plates of a Casimir cavity, X-rays, and light traveling through a plasma (i.e. ionized gas).

In normal empty space the magnetic permeability and dielectric permittivity are constant, and the speed of light in a medium is given by v = c/n, where n is the refractive index defined above. If the magnetic permeability, the dielectric permittivity, or both can be manipulated, then the refractive index will change and consequently the speed of light will also change. A change in either value will cause light traversing the affected region of space to curve either positively or negatively. If the refractive index of the vacuum of space can be changed in some circumstances (e.g. near a massive object), then why not by direct manipulation using advanced technologies? The energy required to do this would certainly be available to Type 2 or Type 3 civilizations.
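The relationships above can be illustrated with a short calculation. The sketch below (in Python, using the standard CODATA values for μ₀ and ε₀) recovers c from the formula c = 1/√(μ₀ε₀) and shows how an index n > 1 slows light; the water example is purely illustrative:

```python
import math

# Vacuum magnetic permeability (H/m) and dielectric permittivity (F/m),
# standard CODATA values.
MU_0 = 4 * math.pi * 1e-7          # H/m
EPSILON_0 = 8.8541878128e-12       # F/m

def light_speed(mu: float, epsilon: float) -> float:
    """Speed of light in a medium with permeability mu and permittivity epsilon."""
    return 1.0 / math.sqrt(mu * epsilon)

def refractive_index(mu: float, epsilon: float) -> float:
    """Refractive index n = c / v, relative to the empty vacuum."""
    return light_speed(MU_0, EPSILON_0) / light_speed(mu, epsilon)

c = light_speed(MU_0, EPSILON_0)
print(f"c in vacuum: {c:,.0f} m/s")   # ~299,792,458 m/s

# In a medium with n > 1, light travels at v = c/n.
# Water at optical frequencies has n of roughly 1.33 (illustrative value).
n_water = 1.33
print(f"v in water: {c / n_water:.3e} m/s")
```

The point of the sketch is simply that c is fixed entirely by μ₀ and ε₀: any process that altered either constant would, by the same formula, alter the vacuum speed of light itself.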

We mentioned earlier that the energy required to accelerate an object to the speed of light is infinite. The reason is that, according to special relativity, the relativistic mass of an object grows with its speed relative to the speed of light according to m_v = m_r / √(1 − (v/c)²), where m_v is the mass at velocity v and m_r is the rest mass. The formula shows that as an object approaches the speed of light its relativistic mass increases without bound. If the value of the speed of light "c" in a vacuum could be manipulated (e.g. increased) in the manner we have suggested above, then the relativistic mass of an object at a given speed would effectively be reduced accordingly, and less energy would be needed to accelerate it to a specific high velocity.
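The divergence of relativistic mass, and the effect of a hypothetically larger c, can be made concrete with a short calculation. This is a sketch in Python; the 1000 kg probe and the doubled value of c are illustrative assumptions, not physics we know how to realize:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v: float, c: float = C) -> float:
    """gamma = 1 / sqrt(1 - (v/c)^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def relativistic_mass(rest_mass: float, v: float, c: float = C) -> float:
    """m_v = m_r * gamma, the formula in the text."""
    return rest_mass * lorentz_factor(v, c)

m_rest = 1000.0  # kg, a hypothetical probe
for frac in (0.5, 0.9, 0.99, 0.999):
    m = relativistic_mass(m_rest, frac * C)
    print(f"v = {frac}c  ->  relativistic mass = {m:,.0f} kg")

# If "c" itself could somehow be doubled (the speculative manipulation
# discussed above), the same velocity corresponds to a smaller v/c,
# hence a smaller gamma and less energy to reach it:
print(relativistic_mass(m_rest, 0.9 * C, c=2 * C))  # v/c = 0.45, gamma ~ 1.12
```

The loop shows the mass growing from roughly 15% above rest mass at 0.5c to more than twenty times rest mass at 0.999c, which is why the energy cost of conventional acceleration becomes prohibitive long before light speed is reached.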

In 1994 theoretical physicist Miguel Alcubierre described mathematically another way to accomplish faster-than-light travel by engineering space-time using the principles of general relativity. This process has come to be known as metric engineering of space-time, or metric engineering for short. Alcubierre developed a mathematical description of a gravitational field that would create a space-time warp around a spacecraft, enabling faster-than-light travel. He postulated that a "warp bubble" surrounding a spacecraft would be possible if space-time could be distorted by expanding it behind the spacecraft while contracting it in front. This has the effect of moving the spacecraft's departure point many light-years farther back while at the same time moving its destination much closer. The spacecraft itself would be contained in a locally flat region of space-time bounded by the warp bubble between the two distortions. It could therefore ride along in its warp bubble at an arbitrarily high apparent velocity, pushed forward by the expansion of space-time behind it and pulled by the contraction of space-time in front, not unlike a surfer riding ocean waves breaking in shallow water near a shoreline. One advantage of this approach is that inside the warp bubble the ship, its contents, and its crew would not experience any accelerations or decelerations. Nor would they experience relativistic effects such as time dilation (just as the mass of an object increases as its speed approaches the speed of light, time aboard slows down at the same rate), because with respect to the space-time inside the warp bubble the ship would be at rest. With respect to its origin or destination point, however, the spacecraft would be traveling faster than the speed of light without violating any known physical laws.
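For concreteness, the line element Alcubierre proposed in his 1994 paper can be sketched as follows (in units where c = 1; x_s(t) is the bubble's trajectory and f is a "top-hat" shaping function equal to 1 inside the bubble and falling smoothly to 0 outside it):

```latex
ds^2 = -\,dt^2 + \bigl[\,dx - v_s(t)\, f(r_s)\, dt\,\bigr]^2 + dy^2 + dz^2,
\qquad
v_s(t) = \frac{dx_s(t)}{dt},
\qquad
r_s = \sqrt{\bigl(x - x_s(t)\bigr)^2 + y^2 + z^2}.
```

Inside the bubble, where f = 1, a ship following dx = v_s dt sits in locally flat space-time and feels no acceleration, while the bubble itself may move at arbitrary coordinate speed v_s; this is the formal content of the "surfing" picture described above.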

What is necessary to create these space-time distortions? As we have seen, ordinary matter/energy can increase the effective density of the vacuum (e.g. light traveling near a massive object). Decreasing it requires exotic matter, essentially negative matter or energy. The latter has been demonstrated in the Casimir effect (in the space between the parallel conducting plates) described earlier, which produces vacuum states of negative energy density. Producing exotic matter or negative energy density at macroscopic scales is currently beyond the grasp of 21st century physics, but what will the capabilities of our civilization be in several thousand years or more? Zero point energy, or some other even more exotic form of energy manipulated by the appropriate technology, might be just what is needed, and readily available to an advanced technological civilization.

Unfortunately Pfenning and Ford (1997) have demonstrated that although Alcubierre's theory may be correct in principle, the conditions required to implement it are currently physically unobtainable. Whether new discoveries will someday make the Alcubierre drive a reality remains a matter of speculation. Aside from Alcubierre's proposal, however, several other practical schemes for exceeding the speed of light in a vacuum have recently been offered in the literature. One of these is the traversable wormhole. The possibility of wormholes arises as a solution to the equations of Einstein's general theory of relativity. One property of wormholes is that they appear to be highly unstable and would probably collapse instantly if even a tiny amount of matter, such as a single atom, attempted to pass through them. A possible way around this problem is to use exotic matter to prevent the wormhole from pinching off. As we have seen, exotic matter is any matter that has negative mass and/or negative energy. It has been speculated that a sufficiently large quantity of exotic matter could stabilize wormholes enough to allow interstellar travel through them.

There are two types of wormholes that might enable interstellar travel. The first kind originates in the same process as a black hole, the death of a massive star of several solar masses or more. Wormholes of this kind safe enough for a human being to navigate would probably have to be super-massive and rotating; anything smaller would produce intense tidal forces that would completely destroy anything falling into them. The second kind of wormhole is based on quantum gravity. Scientists have speculated that such wormholes spontaneously pop into existence only to disappear again, and that they exist only at exceedingly small scales (on the order of the Planck length). The speculation is that wormholes of this type could be held open using exotic matter (negative matter/energy), though the quantity of energy required would be immense. At this time it is not clear whether this is even possible; the theory of quantum gravity has not been completely formulated, let alone verified by experiment.

6. CONCLUSION

We have attempted to describe future possibilities for interstellar travel based on our current understanding of the physics of the very small and the very large. On the basis of what we know (or don't know, as the case may be) about gravity, dark matter, dark energy, and the unification of gravity with quantum mechanics, we can be certain that our understanding of nature is incomplete, and perhaps some of it is even incorrect. What we do know for sure is that new discoveries and new interpretations of natural phenomena will occur, and with them will come new theories, new technologies, and new possibilities for interstellar travel. If our civilization can maintain the rate of technological progress of the last 150 years, perhaps travel to the stars will occur much sooner than currently seems possible.


REFERENCES

Beck, C., and Mackey, M. C. (2007). Measurability of vacuum fluctuations and dark energy, Physica A 79, 101-110.

Pfenning, M.J. & Ford, L.H. (1997). The Unphysical Nature of Warp Drive, Classical & Quantum Gravity, 14, 1743.

Rueda, A., & Haisch, B. (1998). Inertial Mass as Reaction of the Vacuum to Accelerated Motion, Physics Letters A, 240, 115-126.




