On Truth


Summary: Truth & Reality: Wave Structure of Matter

Reality cannot be found except in One single source, because of the interconnection of all things with one another. ... I maintain also that substances, whether material or immaterial, cannot be conceived in their bare essence without any activity, activity being of the essence of substance in general. (Gottfried Leibniz, 1670)


We may agree, perhaps, to understand by Metaphysics an attempt to know reality as against mere appearance, or the study of first principles or ultimate truths, or again the effort to comprehend the universe, not simply piecemeal or by fragments, but somehow as a whole. (F. H. Bradley, 1846-1924)


The Error - The Motion of Discrete Particles in Space and Time


For the past 350 years (since Newton) we have tried to describe an interconnected reality (which has been known for thousands of years) with many discrete and separate 'particles'. Thus you have to add forces or fields to the particles to connect them together in space and time. This is merely a mathematical solution and it does not explain how discrete matter particles create continuous fields that act on other particles in the space around them. Further, both quantum physics (particle-wave duality, non-locality, uncertainty) and Einstein's general relativity (matter-energy curves space-time) contradict the concept of discrete and separate 'particles'.
The Solution - The Wave Motion of Space Causes Matter and Time

We simply needed to describe reality from the most simple foundation of the one thing, Space, that all matter exists in (look around you now and think about this - we all experience existing in space).

This leads to only one solution, a Wave Structure of Matter in Space from which we then deduce the fundamentals of physics, philosophy and metaphysics to show that it is correct (and scientific / testable, not just our opinion).

There is no particle-wave duality of matter, just a spherical standing wave structure of matter where the wave center creates the 'particle' effect, and the spherical in and out waves provide continual two-way communication with all other matter in the universe.

We only see the high-amplitude wave-center and have been deluded into thinking matter was made of tiny little 'particles'. A very naive conception in hindsight - and quantum physics was telling us all along that waves were central to light and matter interactions!


Deducing Reality - Uniting Metaphysics with Science


There are just two steps to deducing the truth about your existence in the universe.

And I emphasize that none of this is my opinion - everyone will come to the same conclusions since it is deduced.

1. Deduce that Space is One Active Substance: Unite Science & Metaphysics: Most Simple Science Theory of Reality


Science requires physical reality to be logical, sensible and simple (Occam’s razor). Metaphysics requires the existence of one active substance to explain the causal connection between the many things we experience, as Leibniz simply states:
“Reality cannot be found except in One single source, because of the interconnection of all things with one another. ... I maintain also that substances cannot be conceived in their bare essence without any activity, activity being of the essence of substance in general.” (Gottfried Leibniz, 1670)

This dynamic unity of reality from one active substance, required to explain how reality is connected together and changing (causal connection), is well known throughout history. The ancient Greeks (and Indians, Chinese and many others) realised this 2,500 years ago, as Aristotle writes in his famous metaphysical treatise:
"The first philosophy (Metaphysics) is universal and is exclusively concerned with primary substance. ... And here we will have the science to study that which is just as that which is, both in its essence and in the properties which, just as a thing that is, it has. ... The entire preoccupation of the physicist is with things that contain within themselves a principle of movement and rest. And to seek for this is to seek for the second kind of principle, that from which comes the beginning of the change." (Aristotle, Metaphysics, 340BC)

Given we all experience many minds and many material things, but always in one common Space, and since we must describe physical reality in terms of only one substance, we must conclude this one substance is Space. (If you try using many things, like matter or minds, you have to add a second thing to connect these many things together and this contradicts our rules of science and metaphysics.)


So now we are considering the properties of this space to explain activity / motion.


Look around you in space and notice that light comes in to you from all directions. This light has well known discrete ‘particle’ and continuous wave properties (the famous particle-wave duality of both light and matter). However, we cannot add discrete and separate particles to space (a second substance is not allowed) but we can have space vibrating (a property of space as a wave medium). Thus we come to the necessary conclusion that this one substance must be space, and this space must have waves flowing through it in all directions (since we can see things in all directions).

Thus we have our one active substance - vibrating space (the wave motion of space) and this is the cause of change / time. (Interestingly, Aristotle also realised this connection between time, matter and motion - the common mistake was to consider the motion of matter in space and time - rather than the wave motion of space that causes matter and time.)

Finally, we must consider what kind of waves can flow in all directions through three dimensional space. We find the most simple solution is two dimensional plane waves moving in the third dimension.

So let us now simply state our conclusion - our truth statement for what exists.
One Active Substance (Three Dimensional Space) Exists and has Plane Waves Flowing Through it in All Directions.

This is the most simple way to describe physical reality that is in harmony with science and metaphysics - and it is a valid scientific model that can be mathematically treated to make testable predictions.

Fortunately for us, a brilliant philosopher, metaphysician and mathematician, Sir William Rowan Hamilton (1843), invented quaternions, a three dimensional algebra that does exactly this - they represent the motion of real things in real three dimensional space. Hamilton believed this mathematics was special and would be important to physics (he carved the formula into a stone bridge, as he was out walking when he thought of it and did not want to forget!).

And it turns out he was correct about its importance - in fact he was 60 years ahead of Einstein in connecting three dimensional space (x, y, z) with time (t) into a four dimensional space-time continuum in one quaternion equation q = t + ix + jy + kz, where the i, j and k terms are imaginary numbers such that i^2 = j^2 = k^2 = ijk = -1.
"Time is said to have only one dimension, and space to have three dimensions. ... The mathematical quaternion partakes of both these elements; in technical language it may be said to be 'time plus space', or 'space plus time': and in this sense it has, or at least involves a reference to, four dimensions. And how the One of Time, of Space the Three, Might in the Chain of Symbols girdled be." (W R Hamilton)

It is beautifully elegant mathematics that perhaps scares off many people because of its use of imaginary numbers i, j, k. But these just represent 90 degree rotations. Thus i^2 = -1 really means a 180 degree rotation, which reverses an object's direction, hence the negative sign.

So whenever you see ix, jy and kz you are considering vectors on planes orthogonal (90 degrees, perpendicular, at right angles) to the x, y and z axes. This then gives us the four dimensional mathematical structure to represent plane waves flowing through 3D space.
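To make Hamilton's rules concrete, here is a minimal sketch in Python (the class and the sample values are my own, purely for illustration) that implements the quaternion product and checks that i^2 = j^2 = k^2 = ijk = -1, with the 90-degree-rotation behaviour described above.

```python
# Minimal quaternion sketch: q = t + ix + jy + kz with Hamilton's rules.
# Illustrative only - the class name and sample values are not from the article.

class Quaternion:
    def __init__(self, t, x, y, z):
        self.t, self.x, self.y, self.z = t, x, y, z

    def __mul__(self, other):
        a1, b1, c1, d1 = self.t, self.x, self.y, self.z
        a2, b2, c2, d2 = other.t, other.x, other.y, other.z
        return Quaternion(
            a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real (time-like) part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

    def __repr__(self):
        return f"{self.t} + {self.x}i + {self.y}j + {self.z}k"

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

print(i*i)        # -1 + 0i + 0j + 0k  (two 90-degree turns reverse direction)
print(j*j, k*k)   # both equal -1
print(i*j*k)      # -1, confirming ijk = -1
print(i*j, j*i)   # k and -k: the order of rotations matters
```

The last line also shows that ij = k while ji = -k; quaternion multiplication is non-commutative, which is exactly what lets it encode orientation in three dimensional space.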

So now we are in a position to test this wave theory of reality to see if it correctly deduces the properties of matter that we observe in the space around us.
2. Deduce the Spherical Standing Waves Formed from Plane Waves in Three Dimensional Space

And now for the very cool clever bit - it is these waves that create both matter 'particles' and fields! i.e. when you do the mathematics (complex quaternion wave functions) you discover that space actually vibrates in two completely different ways depending upon the phase of the intersecting plane waves - and these two modes of vibration exactly deduce both the quantum field (background space) and matter 'particles'.

Of profound importance though is the fact that there are exactly four unique phase arrangements of the plane waves such that all the vector / transverse wave components cancel one another and you are left with this scalar / longitudinal spherical standing wave (space vibrates radially in and out around a central point).

Two of these phase arrangements create the two spin states of electrons, and two phase arrangements create the two spin states of the opposite phase standing waves, positrons (anti-matter).

This keeps the phase of electrons and positrons locked to the phase of background space (which is the same everywhere) thus explaining how these wave-center 'particles' are locked in their respective phase relations with one another across the universe (what I was seeking).

How amazing - we never needed to add Newton's 'God particles' to space; these appear naturally as the wave centers of spherical standing waves which form from these plane waves. (The wave diagrams are very useful aids for understanding this.)

This knowledge of the Wave Structure of Matter in Space provides us with a 'source code' from which we can deduce the truth as a foundation for acting wisely.

Clearly humanity now faces many problems caused by myths and customs that result in endless conflict and harm. Thus knowing the truth about physical reality is critically important for our future survival on our fragile and beautiful little planet.


Basically I hope to make it obvious to people who are either sensible and logical, or spiritually / religiously enlightened, that we have worked out physical reality (matter-energy interactions in space). To show you that we can perfectly imagine matter interactions in space - they are just wave interactions.


Then you will understand that this whole postmodern uncertainty and skepticism arose from trying to describe an interconnected reality (where matter is a large structure of space) with many discrete and separate particles.



Our existence in the universe is amazing, our bodies subtly connected to all this other matter in the space around us (this is why we can see and interact with it). Currently humanity does not live from these 'connected' foundations, and this 'insanity' is causing the destruction of life on earth that will have catastrophic consequences for all of us.

The cure is a correct understanding of physical reality and the necessary truths that are derived from this. There are many others who now realise this, that truth is the most powerful force for changing the world and saving humanity.


Geoff Haselhurst (May, 2011)

http://www.spaceandmotion.com/

Even geniuses get it wrong



Albert Einstein (Photo credit: Wikipedia)

Arguably the greatest genius of all time, Albert Einstein, made some colossal mistakes that it took others to correct. Here are the four biggest.

Einstein made numerous mistakes in his derivations, although his most famous results turned out to be quite robust. Image credit: Einstein deriving special relativity, 1934.


1.) Einstein erred in his 'proof' of his most famous equation, E = mc^2. In 1905, his "miracle year," Einstein published papers about the photoelectric effect, Brownian motion, special relativity and mass-energy equivalence, among others. A number of people had worked on the idea of a "rest energy" associated with massive objects, but couldn't work out the numbers. Many had proposed E = Nmc^2, where N was a number like 4/3, 1, 3/8 or some other figure, but nobody had proved which one was correct. Until Einstein did it, in 1905.

At least, that's the legend. The truth might deflate your view of Einstein a bit, but here it is: Einstein was only able to derive E = mc^2 for a particle completely at rest. Despite also inventing special relativity -- founded on the principle that the laws of physics are independent of an observer's frame of reference -- Einstein's formulation couldn't account for how energy worked for a particle in motion. In other words, E = mc^2 as derived by Einstein was frame-dependent! It wasn't until Max von Laue made the critical advance, six years later, that the flaw in Einstein's work was exposed: one must get rid of the idea of kinetic energy. Instead, we now talk about total relativistic energy, where the traditional kinetic energy -- KE = ½mv^2 -- only emerges in the non-relativistic limit. Einstein made similar errors in all seven of his derivations of E = mc^2, spanning his entire life, even though, in addition to von Laue, Joseph Larmor, Wolfgang Pauli and Philipp Lenard all successfully derived the mass/energy relationship without Einstein's flaw.
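For readers who want to see how the traditional kinetic energy re-emerges, the standard textbook expansion of the total relativistic energy (a well-known result, not specific to this article) is:

\[
E = \gamma m c^2 = \frac{mc^2}{\sqrt{1 - v^2/c^2}} \approx mc^2 + \tfrac{1}{2}mv^2 + \tfrac{3}{8}\frac{mv^4}{c^2} + \cdots \qquad (v \ll c)
\]

The rest energy mc^2 and the familiar ½mv^2 appear only as the first two terms of a series that holds when v is small compared with c.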




2.) Einstein added a cosmological constant, Λ, in General Relativity to keep the Universe static. General Relativity is a beautiful, elegant and powerful theory that changed our conception of the Universe. Instead of a Universe where gravitation is the instantaneous, attractive force between two masses located at fixed positions in space, the presence of matter and energy -- in all its forms -- affects and determines the curvature of spacetime. The density and pressure of the full sum of all forms of energy in the Universe play a role, from particles to radiation to dark matter to field energy. But this relationship was no good to Einstein, so he changed it. (Dark energy is now said to account for more than two-thirds of the total energy of the observable universe, but so far, pinning down what dark energy is made of has proved impossible.)


You see, what Einstein had determined was that a Universe full of matter and radiation was unstable! It would have to be either expanding or contracting if it were filled with massive particles, which our Universe clearly is. So his "fix" for this was to insert an extra term -- a positive cosmological constant -- to exactly balance the attempted contraction of the Universe. This "fix" was itself unstable: a region slightly denser than normal would collapse anyway, while a region slightly less dense than average would expand away forever. If Einstein had been able to resist this temptation, he could have predicted the expanding Universe before Friedmann and Lemaître did, and before Hubble uncovered the evidence that proved it. Although we do actually appear to have a cosmological constant in our Universe (responsible for what we call dark energy), Einstein's motivations for putting it in were all wrong, and prevented us from predicting the expanding Universe. It really was a great blunder on his part.
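For reference, the balance Einstein was trying to strike, and the instability just described, can be read off the standard Friedmann equations (textbook form; nothing here is specific to this article):

\[
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda}{3}
\]

A static universe needs the expansion rate and its acceleration both to vanish, which forces Λ to be tuned exactly against the matter density; nudge the density slightly and the balance is lost, which is the instability described above.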


3.) Einstein rejected the indeterminate, quantum nature of the Universe. This one is still controversial, likely primarily due to Einstein's stubbornness on the subject. In classical physics, like Newtonian gravity, Maxwell's electromagnetism and even General Relativity, the theories really are deterministic. If you tell me the initial positions and momenta of all the particles in the Universe, I can -- with enough computational power -- tell you how every one of them will evolve, move, and where they will be located at any point in time. But in quantum mechanics, there are not only quantities that can't be known in advance, there is a fundamental indeterminism inherent to the theory.

The wave pattern for electrons passing through a double slit. If you measure "which slit" the electron goes through, you destroy the quantum interference pattern shown here. Image credit: Dr. Tonomura and Belsazar of Wikimedia Commons, under c.c.a.-s.a.-3.0.

The better you measure and know the position of a particle, the less well-known its momentum is. The shorter a particle's lifetime, the more inherently uncertain its rest energy (i.e., its mass) is. And if you measure its spin in one direction (x, y, or z), you inherently destroy information about it in the other two. But rather than accept these self-evident facts and try to reinterpret how we fundamentally view the quanta making up our Universe, Einstein insisted on viewing them in a deterministic sense, claiming that there must be hidden variables afoot. It's arguable that the reason physicists still bicker over preferred "interpretations" of quantum mechanics is rooted in Einstein's ill-motivated thinking, rather than in simply changing our preconceptions of what a quantum of energy actually is. SMBC has a good comic illustrating this.
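The trade-offs sketched in this paragraph are usually written as the standard uncertainty relations (again, textbook statements rather than anything specific to this piece):

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\]

The first links position and momentum; the second links a state's lifetime to the spread in its energy and hence, for a particle, in its rest mass.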


4.) Einstein held onto his wrongheaded approach to unification until his death, despite the overwhelming evidence that it was futile. Unification in science is an idea that goes back well before Einstein. The idea that all of nature could be explained by as few simple rules or parameters as possible speaks to the power of a theory, and simplicity is as strong an allure as science ever had. Coulomb's law, Gauss' law, Faraday's law and permanent magnets can all be explained in a single framework: Maxwell's electromagnetism. The motion of terrestrial and heavenly bodies was first explained by Newton's gravitation and then even better by Einstein's General Relativity. But Einstein wanted to go even farther, and attempted to unify gravitation and electromagnetism. In the 1920s, much headway was made, and Einstein would pursue this for the next 30 years.



But experiments had revealed some significant new rules, which Einstein summarily ignored in his stubborn pursuit to unify these two forces. The weak and strong nuclear forces obeyed similar quantum rules to electromagnetism, and the application of group theory to these quantum forces led to the unification we know in the Standard Model. Yet Einstein never pursued these paths or even attempted to incorporate the nuclear forces; he remained stuck on gravity and electromagnetism, even as clear relationships were emerging between the others. The evidence was not enough to cause Einstein to change his path. Today, the electroweak force picture has been confirmed, with Grand Unification Theories (GUTs) theoretically adding the strong force to the works, and string theory finally, at the highest energy scales, as the leading candidate for bringing gravity into the fold. As Oppenheimer said of Einstein,

During all the end of his life, Einstein did no good. He turned his back on experiments... to realise the unity of knowledge.

Even geniuses get it wrong more often than not. It would serve us all well to remember that making mistakes is okay; it's failing to learn from them that should shame us.

It Starts With A Bang




The Big Bang is actually not a "theory" at all, but rather a scenario or model about the early moments of our universe, for which the evidence is overwhelming. It is a common misconception that the Big Bang was the origin of the universe. In reality, the Big Bang scenario is completely silent about how the universe came into existence in the first place. In fact, the closer we look to time "zero," the less certain we are about what actually happened, because our current description of physical laws does not apply under such extreme conditions.

The Big Bang era of the universe presented as a manifold in two dimensions (1-space and time); the shape is right (approximately), but it's not to scale. (Photo credit: Wikipedia)

The Big Bang scenario simply assumes that space, time, and energy already existed. It tells us nothing about where they came from or why the universe was born hot and dense to begin with. But if space and everything within it are expanding now, then the universe must have been much denser in the past. That is, all the matter and energy (such as light) that we observe in the universe would have been compressed into a much smaller space in the past. Einstein's theory of gravity lets us run the "movie" of the universe backwards—i.e., to calculate the density that the universe must have had in the past for our current understanding of the laws that govern the universe to work. The assumption: any chunk of the universe we can observe—no matter how large—must have expanded from an infinitesimally small volume of space.

The Big Bang Theory hinges 100% on an assumption that scientists claim to have proven: that the universe is expanding.

Let's look at those assumptions briefly here and determine how we came to the conclusion that the universe is expanding!

Hubble found that the further away a galaxy was from us, the more severe the "redshifting" - in other words, the redder the color of that galaxy. So if the redshift of a Type Ia supernova increases by 10%, it is taken to mean it has moved 10% further away from us. That's because of the belief that as a galaxy moves farther away from us, the color of its light changes and heads towards the red end of the spectrum.
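For reference, the standard relations behind this kind of reasoning (textbook definitions, stated here only so the argument can be followed) are:

\[
z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}},
\qquad
v \approx cz \ \ (z \ll 1),
\qquad
v = H_0 d \ \ \text{(Hubble's law)}
\]

A measured redshift z is converted into a recession velocity, and Hubble's law then converts that velocity into a distance.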


Diagram of a dispersion prism (Photo credit: Wikipedia)

So, the assumption about the change in color of a supernova disregards the influence of the bending of light by our own atmosphere, by gravitational waves, by inconsistent temperatures and densities of the gases, and by the curvature of spacetime. All blatantly ignored in order to make this theory seem somewhat credible.

So, by determining how fast the universe is "expanding" now, and then "running the movie of the universe" backward in time, we can mathematically determine the age of the universe. Using this formula, the result is that space started expanding 13.7 billion years ago. The problem with this picture is that all calculations and estimates of the magnitude of the empty-space energy so far lead to absurdly large values. Unbelievably unscientific!
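A rough version of that "running the movie backwards" calculation is simple enough to sketch. The Python snippet below uses an assumed round value for the Hubble constant (about 70 km/s/Mpc) purely for illustration, and it ignores the detailed expansion history that refines the published figure to 13.7 billion years.

```python
# Naive age-of-universe estimate: age ~ 1/H0.
# H0 below is an assumed round number (~70 km/s/Mpc), for illustration only.
H0 = 70.0                          # km/s per megaparsec
km_per_Mpc = 3.086e19              # kilometres in one megaparsec
H0_per_second = H0 / km_per_Mpc    # Hubble constant in 1/s
age_seconds = 1.0 / H0_per_second  # ignores deceleration/acceleration of expansion
age_years = age_seconds / 3.156e7  # seconds in a year
print(f"1/H0 ~ {age_years / 1e9:.1f} billion years")  # ~14.0 billion years
```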

Secondly, if you look at a car driving off into the distance, surely that by no means implies the world is getting bigger? Logic dictates, however, using the same criteria, that we could be shrinking in relation to the universe!


It's a common misconception that the entire universe began from a single point. If the whole universe is infinitely large today, then it would have been infinitely large in the past, including during the Big Bang. How can an infinite universe expand? Surely that is not possible?


It is, however, true that any finite chunk of the universe (such as the part of the universe we can observe today) can be predicted to have started from an extremely small volume.

The biggest contributor to the confusion is that scientists sometimes use the term "universe" when they're referring to just the part of it we can see ("the observable universe"). And sometimes they use the term universe to refer to everything, including the part of the universe beyond what we can see.

It is further a huge misconception that the Big Bang was an "explosion" that took place somewhere in space. But the Big Bang was an expansion of space itself. Every part of space participated in it. For example, the part of space occupied by the Earth, the Sun, and our Milky Way galaxy was once, during the Big Bang, incredibly hot and dense. The same holds true of every other part of the universe we can see.

We assume that galaxies are rushing apart in just the way predicted by the Big Bang model. But there are other observations that scientists present in support of the Big Bang. Astronomers have detected, throughout the universe, two chemical elements that could only have been created during the Big Bang: hydrogen and helium. Furthermore, these elements are observed in just the proportions (roughly 75% hydrogen, 25% helium) theorized to have been produced during the Big Bang. This prediction is based on our Newtonian understanding of nuclear reactions—independent of Einstein's theory of gravity, independent of quantum weirdness and independent of the presence of so-called "dark matter". Yet quantum physics has lately been such a pain for scientists, as it actually made us realize that our assumptions and laws do not apply to the quantum world.


The standard Hot Big Bang model is necessary to provide a framework in which to understand the collapse of matter to form galaxies and other large-scale structures observed in the Universe today. Without the theory, scientists would need to prove a whole lot of established LAWS all over again.

UFO Fuel - Now On Periodic Table


Chances are you never knew these 4 newly-named elements even existed. From the official IUPAC announcement, the elements are:


Nihonium (Nh) for Element 113. (Comes from one of the Japanese words for Japan "nihon," literally "the land of the rising sun.")

Simulation of an accelerated calcium-48 ion about to collide with an americium-243 target atom. (Photo credit: Wikipedia)
On February 2, 2004, scientists at the Lawrence Livermore National Laboratory, in collaboration with researchers from the Joint Institute for Nuclear Research in Russia (JINR), announced that they had discovered two new super-heavy elements, Element 113 and Element 115. The isotope of Element 115, produced by bombarding an americium-243 (95Am243) nucleus with a calcium-48 (20Ca48) nucleus, rapidly decayed to Element 113. Like other superheavy elements, Element 113 then continued to decay quickly, turning into element 111, then 109, 107, 105, 103 and finally into element 101 (a meta-stable isotope).

Moscovium (Mc) for Element 115. (Supposedly a fuel source for flying discs.)

Element 115 was the infamous 'alien element' mentioned over a decade ago in connection with Area 51. Element 115 was already announced in 1989, when Bob Lazar, the famous Area 51 whistleblower, revealed to the public that the UFOs possessed by the government were powered by a mysterious 'Element 115.' Of course, at that time the claims made by Lazar were tagged as absurd, as the scientific community had no knowledge of 'Element 115'.


Tennessine (Ts) for Element 117.

Superheavy element 117 was officially recognised more than five years after scientists first reported its discovery in April 2010. Superheavy elements don't occur naturally; instead they are created in labs by colliding lighter nuclei. In theory, the nuclei will in rare cases combine into a "superheavy" and heretofore unknown element.

Oganesson (Og) for Element 118.

Oganesson is a radioactive, artificially produced element about which little is known. It is expected to be a gas and is classified as a non-metal. It is a member of the noble gas group. It was discovered in 2002 by Russian scientists at the Joint Institute for Nuclear Research in Dubna, Russia.

Following tradition, the new additions to the seventh row of the periodic table received names that paid homage to either the region of their discovery or a person, and their suffixes reflected chemical consistency.

There are currently two main theories about gravity: the "wave" theory, which states that gravity is a wave, and a second theory which includes "gravitons", alleged sub-atomic particles that act as carriers of gravity - which, by the way, is total nonsense.

The fact that gravity is a wave has caused mainstream scientists to surmise numerous sub-atomic particles which don't actually exist and this has caused great complexity and confusion in the study of particle physics.

Gravity is actually 2 waves, identified as 'Gravity A' and 'Gravity B'. 'Gravity A' is at the atomic level. That is, the wave does not extend beyond the molecular bond except in Moscovium (Mc). This slight extension allows the wave to be accessed and amplified. 'Gravity A' is currently called the "strong nuclear force" in mainstream particle physics.


Moscovium (Mc): Space Fuel

Robert Scott "Bob" Lazar claims to have worked on reverse engineering extraterrestrial technology at a site called S4, in the Emigrant Valley and Old Kelley Mine area near the Area 51 test facility. He had already introduced us to element 115 in 1989.

According to Lazar's assertion nearly 18 years ago, element 115 served as fuel for the vehicle he referred to. The propulsion system of the UFO is an anti-matter reactor. In the disc that Lazar crawled inside, the reactor was a sphere about the size of a medicine ball. The top half of it was visible in the middle of the floor. The reactor is located directly above the 3 gravity amplifiers on the center level and is in fact centered between them. The reactor is a closed system which uses Moscovium (Mc) as its fuel. The element is also the source of the 'Gravity A' wave, which is amplified for space/time distortion and travel.


Now that the seventh row (called a period) of the periodic table has been completed with element 118, according to the IUPAC, chemists will continue to search for heavier elements beyond that.


New Periodic Table

Our Quantum Problem: Everything's related


 What Really Happens In Schrödinger's Box


Left to right: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, Richard Feynman. (Photo credit: Wikipedia)
In 1909, Ernest Rutherford, Hans Geiger and Ernest Marsden took a piece of radium and used it to fire charged particles at a sheet of gold foil. They wanted to test the then-dominant theory that atoms were simply clusters of electrons floating in little seas of positive electrical charge (the so-called ‘plum pudding’ model). What came next, said Rutherford, was ‘the most incredible event that has ever happened to me in my life’.

Despite the airy thinness of the foil, a small fraction of the particles bounced straight back at the source – a result, Rutherford noted, ‘as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you’. Instead of whooshing straight through the thin soup of electrons that should have been all that hovered in their path, the particles had encountered something solid enough to push back. Something was wrong with matter. Somewhere, reality had departed from the best available model. But where?

The first big insight came from Rutherford himself. He realised that, if the structure of the atom were to permit collisions of the magnitude that his team had observed, its mass must be concentrated in a central nucleus, with electrons whirling around it. Could such a structure be stable? Why didn’t the electrons just spiral into the centre, leaking electromagnetic radiation as they fell?

Such concerns prompted the Danish physicist Niels Bohr to formulate a rather oddly rigid model of the atom, using artificial-seeming rules about electron orbits and energy levels to keep everything in order. It was ugly but it seemed to work. Then, in 1924, a French aristocrat and physicist named Louis de Broglie argued that Bohr’s model would make more sense if we assumed that the electrons orbiting the atomic nucleus (and indeed everything else that had hitherto been considered a particle) either came with, or in some sense could behave like, waves.

If Bohr’s atom had seemed a little arbitrary, de Broglie’s improved version was almost incomprehensible. Physical theory might have recovered some grip on reality but it seemed to have decisively parted company from common sense. And yet, as Albert Einstein said on reading de Broglie’s thesis, here was ‘the first feeble ray of light on this worst of our physics enigmas’. By 1926, these disparate intuitions and partial models were already unified into a new mathematical theory called quantum mechanics. Within a few years, the implications for chemistry, spectroscopy and nuclear physics were being confirmed.

It was clear from the start that quantum theory challenged all our previous preconceptions about the nature of matter and how it behaves, and indeed about what science can possibly – even in principle – say about these questions. Over the years, this very slipperiness has made it irresistible to hucksters of various descriptions. I regularly receive ads offering to teach me how to make quantum jumps into alternate universes, tap into my infinite quantum self-energy, and make other exciting-sounding excursions from the plane of reason and meaning. It’s worth stressing, then, that the theory itself is both mathematically precise and extremely well confirmed by experiment.

Quantum mechanics has correctly predicted the outcomes of a vast range of investigations, from the scattering of X-rays by crystals to the discovery of the Higgs boson at the Large Hadron Collider. It successfully explains a vast range of natural phenomena, including the structure of atoms and molecules, nuclear fission and fusion, the way light interacts with matter, how stars evolve and shine, and how the elements forming the world around us were originally created.

Yet it puzzled many of its founders, including Einstein and Erwin Schrödinger, and it continues to puzzle physicists today. Einstein in particular never quite accepted it. ‘It seems hard to sneak a look at God’s cards,’ he wrote to a colleague, ‘but that he plays dice and uses “telepathic” methods (as the present quantum theory requires of him) is something that I cannot believe for a single moment.’ In a 1935 paper co-written with Boris Podolsky and Nathan Rosen, Einstein asked: ‘Can [the] Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ He concluded that it could not. Given apparently sensible demands on what a description of physical reality must entail, it seemed that something must be missing. We needed a deeper theory to understand physical reality fully.

Einstein never found the deeper theory he sought. Indeed, later theoretical work by the Irish physicist John Bell and subsequent experiments suggested that the apparently reasonable demands of that 1935 paper could never be satisfied. Had Einstein lived to see this work, he would surely have agreed that his own search for a deeper theory of reality needed to follow a different path from the one he sketched in 1935.

Even so, I believe that Einstein would have remained convinced that a deeper theory was needed. None of the ways we have so far found of looking at quantum theory are entirely believable. In fact, it’s worse than that. To be ruthlessly honest, none of them even quite makes sense. But that might be about to change.


Here’s the basic problem. While the mathematics of quantum theory works very well in telling us what to expect at the end of an experiment, it seems peculiarly conceptually confusing when we try to understand what was happening during the experiment. To calculate what outcomes we might expect when we fire protons at one another in the Large Hadron Collider, we need to analyse what – at first sight – look like many different stories. The same final set of particles detected after a collision might have been generated by lots of different possible sequences of energy exchanges involving lots of different possible collections of particles. We can’t tell which particles were involved from the final set of detected particles.

Now, if the trouble was only that we have a list of possible ways that things could have gone in a given experiment and we can’t tell which way they actually went just by looking at the results, that wouldn’t be so puzzling. If you find some flowers at your front door and you’re not sure which of your friends left them there, you don’t start worrying that there are inconsistencies in your understanding of physical reality. You just reason that, of all the people who could have brought them, one of them presumably did. You don’t have a logical or conceptual problem, just a patchy record of events.




Quantum theory isn’t like this, as far as we presently understand it. We don’t get a list of possible explanations for what happened, of which one (although we don’t know which) must be the correct one. We get a mathematical recipe that tells us to combine, in an elegant but conceptually mysterious way, numbers attached to each possible explanation. Then we use the result of this calculation to work out the likelihood of any given final result. But here’s the twist. Unlike the mathematical theory of probability, this quantum recipe requires us to make different possible stories cancel each other out, or fully or partially reinforce each other. This means that the net chance of an outcome arising from several possible stories can be more or less than the sum of the chances associated with each.
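A toy calculation makes the difference between adding chances and adding amplitudes concrete. The numbers below are invented purely for illustration; the point is only that the squared magnitude of a sum of complex amplitudes can be smaller (or larger) than the sum of the individual chances.

```python
# Toy illustration of the "quantum recipe" described above: each possible story
# gets a complex amplitude; the chance of the outcome comes from the squared
# magnitude of the SUM of those amplitudes, so stories can cancel or reinforce.
# All numbers here are made up for illustration.
import cmath

amp_story_1 = 0.5 + 0.0j                  # amplitude for "it happened via story 1"
amp_story_2 = cmath.rect(0.5, cmath.pi)   # same magnitude, opposite phase

classical = abs(amp_story_1)**2 + abs(amp_story_2)**2  # add the chances: 0.5
quantum = abs(amp_story_1 + amp_story_2)**2            # add amplitudes first: ~0.0

print(classical, quantum)  # 0.5 versus ~0 - the two stories cancel almost completely
```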
To get a sense of the conceptual mystery we face here, imagine you have three friends, John, Mary and Jo, who absolutely never talk to each other or interact in any other way. If any one of them is in town, there’s a one-in-four chance that this person will bring you flowers on any given day. (They’re generous and affectionate friends. They’re also entirely random and spontaneous – nothing about the particular choice of day affects the chance they might bring you flowers.) But if John and Mary are both in town, you know there’s no chance you’ll get any flowers that day – even though they never interact, so neither of them should have any idea whether the other one is around. And if Mary and Jo are both in town, you’ll certainly get exactly one bunch of flowers – again, even though Mary and Jo never interact either, and you’d have thought that if they’re acting independently, your chance of getting any flowers is a bit less than a half, while once in a while you should get two bunches.

If you think this doesn’t make any sense, that there has to be something missing from this flower delivery fable, well, that’s how many thoughtful physicists feel about quantum theory and our understanding of nature. Pretty precisely analogous things happen in quantum experiments.


One attempt to make sense of this situation – the so-called ‘Copenhagen interpretation’ of quantum theory, versions of which were advocated by Bohr, Werner Heisenberg and other leading quantum theorists in the first half of the last century – claims that quantum theory is teaching us something profound and final about the limits of what science can tell us. According to this approach, a scientific question makes sense only if we have a direct way of verifying the answer. So, asking what we’ll see in our particle detectors is a scientific question; asking what happened in the experiment before anything registered in our detectors isn’t, because we weren’t looking. To be looking, we’d have had to put detectors in the middle of the experiment, and then it would have been a different experiment. In trying to highlight the absurd-seeming consequences of this view, Schrödinger minted what has become its best-known popular icon – an imaginary experiment with a sealed box containing a cat that is simultaneously alive and dead, only resolving into one or other definite state when an experimenter opens the box.

The Copenhagen interpretation was very much in line with the scientific philosophy of logical positivism that caught on at around the same time. In particular, it rests on something like logical positivism’s principle of verification, according to which a scientific statement is meaningful only if we have some means of verifying its truth. To some of the founders of quantum theory, as well as to later adherents of the Copenhagen interpretation, this came to seem an almost self-evident description of the scientific process. Even after philosophers largely abandoned logical positivism – not least because the principle of verification fails its own test for meaningful statements – many physicists trained in the Copenhagen tradition insisted that their stance was no more than common sense.

However, its consequences are far from commonsensical. If you take this position seriously, then you have to accept that the Higgs boson wasn’t actually discovered at the Large Hadron Collider, since no one has ever directly detected a Higgs boson, and we have no direct evidence to support the claim that the Higgs boson is a real particle. Insofar as we learnt anything about nature from the Large Hadron Collider, it was merely what sort of records you get in your detectors when you build something like the Large Hadron Collider. It’s hard to imagine the scientists who work on it, or the citizens who funded them, being very enthusiastic about this justification, but on a strict Copenhagen view it’s the best we can do.

It gets worse. Quantum theory is supposed to describe the behaviour of elementary particles, atoms, molecules and every other form of matter in the universe. This includes us, our planet and, of course, the Large Hadron Collider. In that sense, everything since the Big Bang has been one giant quantum experiment, in which all the particles in the universe, including those we think of as making up the Earth and our own bodies, are involved. But if theory tells us we’re among the sets of particles involved in a giant quantum experiment, the position I’ve just outlined tells us we can’t justify any statement about what has happened or is happening until the experiment is over. Only at the end, when we might perhaps imagine some technologically advanced alien experimenters in the future looking at the final state of the universe, can any meaningful statement be made.



Of course, this final observation will never happen. By definition, no one is sitting outside the universe waiting to observe the final outcome at the end of time. And even if the idea of observers waiting outside the universe made sense – which it doesn’t – on this view their final observations still wouldn’t allow them to say anything about what happened between the Big Bang and the end of time. We end up concluding that quantum theory doesn’t allow us to justify making any scientific statement at all about the past, present or future. Our most fundamental scientific theory turns out to be a threat to the whole enterprise of science. For these and related reasons, the Copenhagen interpretation gradually fell out of general favour.

Its great rival was first set out in a 1957 paper and Princeton PhD thesis written by one of the stranger figures in the history of 20th-century physics, Hugh Everett III. Rather unromantically, and very unusually for a highly original thinker and talented physicist, Everett abandoned theoretical physics after he had published his big idea. A good deal of his subsequent career was spent in military consultancy, advising the US on strategies for fighting and ‘winning’ a nuclear war against the USSR, and the bleakness of this chosen path presumably contributed to his chain-smoking, alcoholism and depression. Everett died of a heart attack at the age of 51; possibly we can infer something of his own ultimate assessment of his life’s worth from the fact that he instructed his wife to throw his ashes in the trash. And yet, despite his detachment from academic life (some might say from all of life), Everett’s PhD work eventually became enormously influential.

One way of thinking about his ideas on quantum theory is that our difficulties in getting a description of quantum reality arise from a tension between the mathematics – which, as we have seen, tells us to make calculations involving many different possible stories about what might have really happened – and the apparently incontrovertible fact that, at the end of an experiment, we see that only one thing actually did happen. This led Everett to ask a question that seems at first sight stupid, but which turns out to be very deep: how do we know that we only get one outcome to a quantum experiment? What if we take the hint from the mathematics and consider a picture of reality in which many different things actually do happen – everything, in fact, that quantum theory allows? And what if we take this to its logical conclusion and accept the same view of cosmology, so that all the different possible histories of the evolution of the universe are realised? We end up, Everett argued, with what became known as a ‘many worlds’ picture of reality, one in which it is constantly forming new branches describing alternative – but equally real – future continuations of the same present state.

On this view, every time any of us does a quantum experiment with several possible outcomes, all those outcomes are enacted in different branches of reality, each of which contains a copy of our self whose memories are identical up to the start of the experiment, but each of whom sees different results. None of these future selves has any special claim to be the real one. They are all equally real – genuine but distinct successors of the person who started the experiment. The same picture holds true more generally in cosmology: alongside the reality we currently inhabit, there are many others in which the history of the universe and our planet was ever so slightly different, many more in which humanity exists on Earth but the course of human history was significantly different from ours, and many more still in which nothing resembling Earth or its inhabitants can be found.



This might sound like unbelievable science fiction. To such a gibe, Everett and his followers would reply that science has taught us many things that seemed incredible at first. Other critics object that the ‘many worlds’ scenario seems like an absurdly extravagant and inelegant hypothesis. Trying to explain the appearance of one visible reality by positing an infinite collection of invisible ones might seem the most deserving candidate in the history of science for a sharp encounter with Occam’s razor. But to this, too, Everettians have an answer: given the mathematics of quantum theory, on which everyone agrees, their proposal is actually the simplest option. The many worlds are there in the equations. To eliminate them you have to add something new, or else change them – and we don’t have any experimental evidence telling us that something should be added or that the equations need changing.
Everettians might have a point, then, when they argue that their ideas deserve a hearing. The problem is that, from Everett and his early followers onwards, they have never managed to agree on a clear story about how exactly this picture of branching worlds is supposed to emerge from the fundamental equations of quantum theory, and how this single world that we see, with experimental outcomes that are apparently random but which follow definite statistical laws, might then be explained. One of the blackly funny revelations in Peter Byrne’s biography The Many Worlds of Hugh Everett III (2010) was the discovery of Everett’s personal copy of the classic text The Many‑Worlds Interpretation of Quantum Mechanics, put together in 1973 by the distinguished American physicist Bryce DeWitt and a few of Everett’s other early supporters. To DeWitt’s mild criticism that ‘Everett’s original derivation [of probabilities]… is rather too brief to be entirely satisfying’, Everett scribbled in the margins ‘Only to you!’ and ‘Goddamit [sic] you don’t see it’. On another paper addressing the same issue, his comment was the single word ‘bullshit’. Although generally in more civil terms, Everettians have continued to argue over this and related points ever since.

Indeed, the big unresolved, and seemingly unsolvable, problem here is how statistical laws can possibly emerge at all when the Everettian meta-picture of branching worlds has no randomness in it. If we do an experiment with an uncertain outcome, Everett’s proposal says that everything that could possibly happen (including the very unlikely outcomes) will in fact take place. It’s possible that Everettians can sketch some explanation of why it seems to ‘us’ (really, to any one of our many future successors) that ‘we’ see only one outcome. But that only replaces ‘everything will actually happen’ with ‘anything could seem to happen to us’ – which is still neither a quantitative nor a falsifiable scientific statement. To do science, we need to be able to test statements such as ‘there’s a one-in-three chance X will happen to us’ and ‘it’s incredibly unlikely that Y will happen to us’ – but it isn’t at all obvious that Everett’s ideas support any such statements.

Everettians continue to devote much ingenuity to deriving statements involving probabilities from the underlying deterministic many-worlds picture. One idea lately advocated by David Deutsch and David Wallace of the University of Oxford is to try to use decision theory, the area of mathematics that concerns rational decision-making, to explain how rational people should behave if they believe they are in a branching universe. Deutsch and Wallace start from a few purportedly simple and natural technical assumptions about the preferences one should have in a branching world and then claim to show that rational Everettians should behave as though they were in an uncertain probabilistic world following the statistical laws of quantum theory, even though they believe their true situation is very different.

One problem with this line of thought is that the assumptions turn out not to seem especially natural, or even properly defined, on close inspection. The easiest way to understand this is to look for rationally defensible strategies for life in a branching universe other than the ones Deutsch and Wallace advocate. One example I rather like (because it makes the point succinctly, not because it seems morally attractive) is that of future self elitism, which counsels us to focus only on the welfare of our most fortunate and successful future successor, perhaps on the premise that our best possible future self is our truest self. Future self elitists don’t worry about the odds of a particular bet, only about the best possible payoff. Thus they violate Deutsch and Wallace’s axioms, but it is hard to see any purely logical argument against their decisions.

Another issue is that, as several critics have pointed out, whatever one thinks of Deutsch and Wallace’s proposed rational strategy, it answers a subtly different question to the one that Everettians were supposed to be addressing. The question ‘What bets should I be happy to place on the outcomes of a given experiment, given that I believe in Everettian many-worlds?’ is certainly a question that relates something we normally try to answer using probabilities with the many-worlds picture. In that sense, it makes some sort of connection between probabilities and many worlds – and since we’ve seen how hard that is to achieve, it’s easy to understand why Everettians (at least initially) are enthusiastic about this accomplishment. But, unfortunately, it’s not the sort of connection we need. The key scientific question is why the experimental evidence for quantum theory justifies a belief in many worlds in the first place. Many Everettians – from Everett and DeWitt onwards – have tried to give a satisfactory answer to this. Many critics (myself included) appreciate the cunning of their attempts but think they have all failed.


If we cannot get a coherent story about physical reality from the Copenhagen interpretation of quantum theory and we cannot get a scientifically adequate one from many-worlds theory, where do we turn? We could, as some physicists suggest, simply give up on the hope of finding any description of an objective external reality. But it is very hard to see how to do this without also giving up on science. The hypothesis that our universe began from something like a Big Bang, our account of the evolution of galaxies and stars, the formation of the elements and of planets and all of chemistry, biology, physics, archaeology, palaeontology and indeed human history – all rely on propositions about real observer-independent facts and events. Once we assume the existence of an external world that changes over time, these interrelated propositions form a logically coherent set; chemistry depends on cosmology, evolution on chemistry, history on evolution and so on. Without that assumption, it is very hard to see how one might make sense of any of these disciplines, let alone see a unifying picture that underlies them all and explains their deep interrelations and mutual dependence.

If we can’t allow the statement that dinosaurs really walked the Earth, what meaningful content could biology, palaeontology or Darwinian evolution actually have? It’s even harder to understand why the statement seems to give such a concise explanation of many things we’ve noticed about the world, from the fossil record to (we think) the present existence of birds, if it’s actually just a meaningless fiction. Similarly, if we can’t say that water molecules really contain one oxygen and two hydrogen atoms – or at least that there is something about reality that supports this model – then what, if anything, is chemistry telling us?

Physics poses many puzzles, and the focus of the physics community shifts over time. Most theoretical physicists today do not work on this question about what really happens in quantum experiments. Among those who think about it at all, many hope that we can find a way of thinking about quantum theory in which reality somehow evaporates or never arises. That seems like wishful thinking to me.

The alternative, as John Bell recognised earlier and more clearly than almost all of his contemporaries, is to accept that quantum theory cannot be a complete fundamental theory of nature. (As mentioned above, Einstein also believed this, though at least partly because of arguments that Bell was instrumental in refuting.)



Bell was one of the last century’s deepest thinkers about science. As he put it, quantum theory ‘carries in itself the seeds of its own destruction’: it undermines the account of reality that it needs in order to make any sense as a physical theory. On this view, which was once as close to heresy as a scientific argument can be but is now widely held among scientists who work on the foundations of physics, the reality problem is just not solvable within quantum theory as it stands. And so, along with the variables that describe potentialities and possibilities, we need to supplement our quantum equations with quantities that correspond directly to real events or things – real ‘stuff’ in the world.
Bell coined the term beables to refer to these elusive missing ingredients. ‘Beable’ is an ugly word but a useful concept. It denotes variables that are able to ‘be’ in the world – hence the name. And indeed it turns out that we can extend quantum theory to include beables that would directly describe the sort of reality we actually see. Some of the most interesting work in fundamental physics in the past few decades has been in the search for new theories that agree with quantum theory in its predictions to date, but which include a beable description of reality, and so give us a profoundly different fundamental picture of the world.

What sort of quantities might do the trick? One early idea comes from Louis de Broglie, whom we met earlier, and David Bohm, an American theoretical physicist who fled McCarthyite persecution and spent most of his career at the University of London. The essence of their proposal is that, in addition to the mathematical quantities given to us by quantum theory, we also have equations defining a definite path through space and time for each elementary particle in nature. These paths are determined by the initial state of the universe and, in this sense, de Broglie-Bohm theory can be thought of as a deterministic theory, rather like the pre-quantum theories given by Newton’s and Maxwell’s equations. Unfortunately, de Broglie and Bohm’s equations also share another property of Newton’s equations: an action at any point in space has instantaneous effects on particles at arbitrarily distant points.

Because these effects would not be directly detectable, this would not actually allow us to send signals faster than light, and so it does not lead to observations that contradict Einstein’s special theory of relativity. It does, however, very much violate its spirit, as well as the beautiful symmetry principles incorporated in the underlying mathematics. For this reason, and also because de Broglie and Bohm’s ideas work well for particles but are hard to generalise to electromagnetic and other fields, it seems impossible to find a version of the scheme that is consistent with much of modern theoretical physics. Still, de Broglie and Bohm’s great achievement was to show that we can find a mathematically consistent description of reality alongside quantum theory. When it first emerged, their work was largely unappreciated, but it led to many of Bell’s insights into the quantum reality problem and blazed a trail for later theorists.


In the 1980s, a much more promising avenue opened up, thanks to the efforts of Giancarlo Ghirardi, Alberto Rimini, Tullio Weber and Philip Pearle, three European theorists and an American. Their approach became known as the ‘spontaneous collapse’ model and their brilliant insight was that we can find mathematical laws that describe how the innumerable possible outcomes encoded in a quantum description of an experiment get reduced to the one actual result that we see. As we have already noted, the tension between these two descriptions is at the heart of the quantum reality problem.

When using standard quantum theory, physicists often say that the wave function – a mathematical object that encodes all the potential possibilities – ‘collapses’ to the measured outcome at the end of an experiment. This ‘collapse’, though, is no more than a figure of speech, which only highlights the awkward fact that we do not understand what is really happening. By contrast, in Ghirardi-Rimini-Weber-Pearle models, collapse becomes a well-defined mathematical and physical process, taking place at definite points in space, following precise equations and going on all the time in the world around us, whether or not we are making measurements. According to these new equations, the more particles there are in a physical system, the faster the collapse rate. Left isolated, a single electron will collapse so rarely that we essentially never see any effect. On the other hand, anything large enough to be visible – even a dust grain – has enough particles in it that it collapses very quickly compared to human perception times. (In Schrödinger’s famous thought experiment, the cat’s quantum state would resolve in next to no time, leaving us with either a live cat or a dead one, not some strange quantum combination of both.)
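As a rough worked example of this scaling (using the parameter value commonly quoted for the original Ghirardi-Rimini-Weber proposal; the article itself does not give numbers):

```latex
% Each constituent particle suffers a spontaneous collapse at a tiny rate lambda;
% entangling N particles multiplies the effective rate:
\lambda_{\text{eff}} \;\approx\; N \lambda ,
\qquad \lambda \sim 10^{-16}\ \mathrm{s}^{-1}

% N = 1 (an isolated electron): about one collapse per 10^16 s, i.e. hundreds of millions of years.
% N ~ 10^23 (a macroscopic object): lambda_eff ~ 10^7 per second, i.e. collapse within a fraction of a microsecond.
```

The “rate of collapses per particle” described below as a free parameter is exactly this lambda.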

One way of thinking about reality in these models, first suggested by Bell, is to take the beables to be the points in space and time at which the collapses take place. On this view, a dust grain is actually a little galaxy of collapse points, winking instantaneously in and out of existence within or near to (what we normally think of as) the small region of space that it occupies. Everything else we see around us, including our selves, has the same sort of pointillistic character.

Collapse models do not make exactly the same predictions as quantum theory, which could turn out to be either a strength or a weakness. Since quantum theory is very well confirmed, this disagreement might seem to rule these new models out. However, the exact rate of collapses per particle is a free parameter that is not fixed by the mathematics of the basic proposal. It is perfectly possible to tailor this value so that the differences between collapse-model predictions and those of quantum theory are too tiny for any experiment to date to have detected, yet large enough that the models give a satisfactory solution to the reality problem (i.e., everything that seems definite and real to us actually is real and definite).

That said, we presently have no theoretically good reason why the parameter should be in the range that allows this explanation to work. It might seem a little conspiratorial of nature to give us the impression that quantum theory is correct, while tuning the equations so that the crucial features that give rise to a definite physical reality are – with present technology – essentially undetectable. On the other hand, history tells us that deep physical insights, not least quantum theory itself, have often come to light only when technology advances sufficiently. The first evidence for what turns out to be a revolutionary change in our understanding of nature can often be a tiny difference between what current theory predicts and what is observed in some crucial experiment.



There are other theoretical problems with collapse models. Although they do not seem to conflict with special relativity or with field theories in the way that de Broglie-Bohm theory does, incorporating the collapse idea into these fundamental theories nevertheless poses formidable technical problems. Even on an optimistic view, the results in this direction to date represent work in progress rather than a fully satisfactory solution. Another worry for theorists in a subject where elegance seems to be a surprisingly strong indicator of physical relevance is that the mathematics of collapse seems a little ad hoc and utilitarian. To be fair, it is considerably less ugly than the de Broglie-Bohm theories, which to a purist’s eye more closely resemble a Heath Robinson contraption than the elegant machinery we have come to expect of the laws of physics. But compared with the extraordinary depth and beauty of Einstein’s general theory of relativity, or of quantum theory itself, collapse models disappoint.
This could simply mean that we have not properly understood them, or not yet seen the majestic deeper theory of which they form a part. It seems likelier, though, that collapse models are at best only a step in roughly the right direction. I suspect that, like de Broglie-Bohm theory, they will eventually be seen as pointers on the way to a deeper understanding of physical reality – extraordinarily important achievements, but not fundamentally correct descriptions.


There is, however, one important lesson that we can already credit to collapse models. De Broglie-Bohm theory suffers from the weakness that its experimental predictions are precisely the same as those of quantum theory, unlike collapse models that, as we have noted, are at least in principle testably different. The beables in de Broglie-Bohm theory – the particle paths – play a rather subordinate role: their behaviour is governed by the wave function that characterises all the possible realities from which any given set of paths is drawn, but they have no effect on that wave function. In metaphysical language, the de Broglie-Bohm theory beables are epiphenomena. The American psychologist William James once poetically described human consciousness as ‘Inert, uninfluential, a simple passenger in the voyage of life, it is allowed to remain on board, but not to touch the helm or handle the rigging’. Much the same might be said of a de Broglie-Bohm beable. Collapse-model beables, on the other hand, give as good as they get. Their appearance is governed by rules involving the quantum wave function, and yet, once they appear, they in turn alter the wave function. This makes for a far more interesting theory, mathematically as well as scientifically.

It’s tempting to declare this as a requirement for any variable in a fundamental theory of physics – or at least, any variable that plays as important a role as the beables are meant to play: it should be mathematically active, not purely passive. Any interesting solution to the quantum reality problem should (like collapse models but unlike de Broglie-Bohm theory) make experimentally testable predictions that allow us to check our new description of reality.

How might we do that? Assuming these ideas are not entirely wrong, what sort of experiments might give us evidence of a deeper theory underlying quantum theory and a better understanding of physical reality? The best answer we can give at present, if collapse models and other recent ideas for beable theories are any guide, is that we should expect to see something new when some relevant quantity in the experiment gets large. In particular, the peculiar and intriguing phenomenon called quantum interference – which seems to give direct evidence that different possible paths which could have been followed during an experiment all contribute to the outcome – should start to break down as we try to demonstrate it for larger and larger objects, or over larger and larger scales.

This makes some intuitive sense. Quantum theory was developed to explain the behaviour of atoms and other small systems, and has been well tested only on small scales. It would always have been a brave and perhaps foolhardy extrapolation to assume that it works on all scales, up to and including the entire universe, even if this involved no conceptual problems. Given the self-contradictions involved in the extrapolation and the profound obstacles that seem to prevent any solution of the reality problem within standard quantum theory, the most natural assumption is that, like every previous theory of physics, quantum mechanics will turn out only approximately true, applying within a limited domain only.

A number of experimental groups around the world are now trying to find the boundaries of that domain, testing quantum interference for larger and larger molecules (the current record is for molecules comprising around 1,000 atoms), and ultimately for small crystals and even viruses and other living organisms. This would also allow us to investigate the outlandish but not utterly inconceivable hunch that the boundaries of quantum theory have to do with the complexity of a system, or even with life itself, rather than just size. Researchers have proposed space-based experiments to test the interference between very widely separated beams and will no doubt spring into action once quantum technology becomes available on satellites, as it probably will in the next few years.


With luck, if the ideas I have outlined are on the right lines, we might have a good chance of detecting the limits of quantum theory in the next decade or two. At the same time we can hope for some insight into the nature and structure of physical reality. Anyone who expects it to look like Newtonian billiard-balls bouncing around in space and time, or anything remotely akin to pre-quantum physical ideas, will surely be disappointed. Quantum theory might not be fundamentally correct, but it would not have worked so well for so long if its strange and beautiful mathematics did not form an important part of the deep structure of nature. Whatever underlies it might well seem weirder still, more remote from everyday human intuitions, and perhaps even richer and more challenging mathematically. To borrow a phrase from John Bell, trying to speculate further would only be to share my confusion. No one in 1899 could have dreamed of anything like quantum theory as a fundamental description of physics: we would never have arrived at quantum theory without compelling hints from a wide range of experiments.

The best present ideas for addressing the quantum reality problem are at least as crude and problematic as Bohr’s model of the atom. Nature is far richer than our imaginations, and we will almost certainly need new experimental data to take our understanding of quantum reality further. If the past is any guide, it should be an extraordinarily interesting scientific journey.

This article was originally published by Adrian Kent at Aeon

Implicate Order of subatomic particles

Labels: , , , , , , , ,

“Space is not empty. It is full, a plenum as opposed to a vacuum, and is the ground for the existence of everything, including ourselves. The universe is not separate from this cosmic sea of energy.” – David Bohm.

David Bohm was one of the most distinguished theoretical physicists of his generation, and a fearless challenger of scientific orthodoxy.

His interests and influence extended far beyond physics and embraced biology, psychology, philosophy, religion, art, and the future of society. Underlying his innovative approach to many different issues was the fundamental idea that beyond the visible, tangible world there lies a deeper, implicate order of undivided wholeness.

David Bohm was born in Wilkes-Barre, Pennsylvania, on December 20, 1917. He went to Pennsylvania State University to study physics, and later to the University of California at Berkeley to work on his PhD thesis with J. Robert Oppenheimer.

Albert Einstein (left) with J. Robert Oppenheimer (right) working on the Manhattan Project (Photo credit: Wikipedia)
While at Berkeley, Bohm, an idealist, became involved in politics and was labeled a communist by J. Edgar Hoover’s FBI. This prevented him from getting the security clearance needed to work with Oppenheimer on the Manhattan Project at Los Alamos, which produced the first atomic bomb during World War II. However, while working on his doctorate at Berkeley, he carried out “the scattering calculations of collisions of protons and deuterons”, which were used by the Manhattan Project team and immediately classified. As a result, Bohm was denied access to his own work and was not allowed to write up or defend his thesis. Oppenheimer had to certify to the university faculty that Bohm had indeed successfully completed his research, and Bohm was awarded his PhD in physics.

Bohm was surprised to find that once electrons were in a plasma, they stopped behaving like individuals and started behaving as if they were part of a larger and interconnected whole. He later remarked that he frequently had the impression that the sea of electrons was in some sense alive.

In 1947, he became an assistant professor at Princeton University, where he met Albert Einstein. Einstein found Bohm to be a kindred spirit, a like-minded colleague with whom he could have fascinating conversations about the nature of the universe. Bohm extended his research to the study of electrons in metals. Once again the seemingly haphazard movements of individual electrons managed to produce highly organized overall effects. Bohm’s innovative work in this area established his reputation as a theoretical physicist.

In 1951 Bohm wrote a classic textbook entitled Quantum Theory, in which he presented a clear account of the orthodox, Copenhagen interpretation of quantum physics. The Copenhagen interpretation was formulated mainly by Niels Bohr and Werner Heisenberg in the 1920s and is still highly influential today. But even before the book was published, Bohm began to have doubts about the assumptions underlying the conventional approach.

The holomovement is a key concept in David Bohm’s interpretation of quantum mechanics and in his overall worldview. It brings together the holistic principle of “undivided wholeness” with the idea that everything is in a state of process or becoming (what he calls the “universal flux”). For Bohm, wholeness is not a static oneness, but a dynamic wholeness-in-motion in which everything moves together in an interconnected process. The concept is presented most fully in Wholeness and the Implicate Order, published in 1980.

Referring to quantum theory, Bohm’s basic assumption is that “elementary particles are actually systems of extremely complicated internal structure, acting essentially as amplifiers of information contained in a quantum wave.” As a consequence, he developed a new and controversial theory of the universe: a new model of reality that Bohm calls the “Implicate Order.”

The theory of the Implicate Order contains an ultra-holistic cosmic view; it connects everything with everything else. In principle, any individual element could reveal “detailed information about every other element in the universe.” The central underlying theme of Bohm’s theory is the “unbroken wholeness of the totality of existence as an undivided flowing movement without borders.”

David Bohm (photo)

During the early 1980s Bohm developed his theory of the Implicate Order to explain the bizarre behavior of subatomic particles, behavior that quantum physicists had not been able to explain. Basically, two subatomic particles that have once interacted can instantaneously “respond to each other’s motions thousands of years later when they are light-years apart.” This sort of particle interconnectedness requires superluminal signaling, that is, signaling faster than the speed of light. This odd phenomenon is called the EPR effect, named after the Einstein-Podolsky-Rosen thought experiment.

Bohm believes that the bizarre behavior of the subatomic particles might be caused by unobserved subquantum forces and particles. Indeed, the apparent weirdness might be produced by hidden means that pose no conflict with ordinary ideas of causality and reality.

Bohm believes that this “hiddenness” may be reflective of a deeper dimension of reality. He maintains that space and time might actually be derived from an even deeper level of objective reality. This reality he calls the Implicate Order. Within the Implicate Order everything is connected; and, in theory, any individual element could reveal information about every other element in the universe.

Borrowing ideas from holographic photography, the hologram is Bohm’s favorite metaphor for conveying the structure of the Implicate Order. Holography relies upon wave interference: when two light waves overlap, they interfere with each other and create a pattern. “Because a hologram is recording detail down to the wavelength of light itself, it is also a dense information storage.” Bohm notes that the hologram clearly reveals how a “total content–in principle extending over the whole of space and time–is enfolded in the movement of waves (electromagnetic and other kinds) in any given region.” The hologram illustrates how “information about the entire holographed scene is enfolded into every part of the film.” It resembles the Implicate Order in the sense that every point on the film is “completely determined by the overall configuration of the interference patterns.” Even a tiny chunk of the holographic film will reveal the unfolded form of an entire three-dimensional object.

Proceeding from his holographic analogy, Bohm proposes a new order–the Implicate Order where “everything is enfolded into everything.” This is in contrast to the explicate order where things are unfolded. Bohm puts it thus:

“The actual order (the Implicate Order) itself has been recorded in the complex movement of electromagnetic fields, in the form of light waves. Such movement of light waves is present everywhere and in principle enfolds the entire universe of space and time in each region. This enfoldment and unfoldment takes place not only in the movement of the electromagnetic field but also in that of other fields (electronic, protonic, etc.). These fields obey quantum-mechanical laws, implying the properties of discontinuity and non-locality. The totality of the movement of enfoldment and unfoldment may go immensely beyond what has revealed itself to our observations. We call this totality by the name holomovement.”

Bohm believes that the Implicate Order has to be extended into a multidimensional reality; in other words, the holomovement endlessly enfolds and unfolds into infinite dimensionality. Within this milieu there are independent sub-totalities (such as physical elements and human entities) with relative autonomy. The layers of the Implicate Order can go deeper and deeper to the ultimately unknown. It is this “unknown and undescribable totality” that Bohm calls the holomovement. The holomovement is the “fundamental ground of all matter.”

THE HOLOGRAM AND HOLONOMY

In collaboration with Stanford neuroscientist Karl Pribram, Bohm was involved in the early development of the holonomic model of the functioning of the brain, a model for human cognition that is drastically different from conventionally accepted ideas. Bohm worked with Pribram on the theory that the brain operates in a manner similar to a hologram in accordance with quantum mathematical principles and the characteristics of wave patterns.

The holonomic brain theory, developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm, is a model of human cognition that describes the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain’s fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, in a way that can be described with Fourier-transform equations. Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which also relies on (mathematical) Fourier transforms. In a hologram, any part of sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor, so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that connects different pieces of stored information, and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. in a particular neuron).

A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both storage and retrieval are carried out in a way described by Fourier-transform equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, though with more unwanted changes, called noise.
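To make the “a sufficiently large part recreates the whole, plus noise” claim concrete, here is a minimal numerical sketch (my illustration, not part of the original text, and a deliberately simplified stand-in for real holography): treat the 2-D Fourier transform of an image as the “hologram”, discard everything except a small central block of it, and invert. The whole scene comes back, just blurred.

```python
# A minimal sketch of Fourier-domain "holographic" storage: keeping only a small
# block of the transform still reconstructs the entire scene, with added blur/noise.
import numpy as np

def make_test_image(n=128):
    """A simple synthetic scene: a bright square and a fainter circle."""
    img = np.zeros((n, n))
    img[20:50, 20:50] = 1.0                                  # square
    yy, xx = np.mgrid[0:n, 0:n]
    img[(yy - 90) ** 2 + (xx - 90) ** 2 < 15 ** 2] = 0.7     # circle
    return img

def holographic_reconstruction(img, keep_fraction=0.15):
    """Keep only a central block of the Fourier 'hologram', then invert."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))             # the full 'hologram'
    n = img.shape[0]
    k = int(n * keep_fraction / 2)
    mask = np.zeros_like(spectrum)
    c = n // 2
    mask[c - k:c + k, c - k:c + k] = 1                       # small surviving piece
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

if __name__ == "__main__":
    img = make_test_image()
    recon = holographic_reconstruction(img)
    # Both objects remain visible in the reconstruction, only less sharp.
    print("square region mean:", recon[20:50, 20:50].mean())
    print("circle region mean:", recon[80:100, 80:100].mean())
    print("background mean:   ", recon[0:15, 100:128].mean())
```

Shrinking keep_fraction degrades sharpness but never deletes one object while keeping the other, which is the sense in which the information is distributed rather than localized.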

An analogy is the broadcast region of a radio antenna: at each individual location within the whole area it is possible to receive every channel, just as the entirety of the information in a hologram is contained within each part.

Another analogy for a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn’t matter how narrow the beam of sunlight is: the beam always contains all the information of the object, and when focused by the lens of a camera or of the eye, it produces the same full three-dimensional image. The Fourier transform converts spatial forms to spatial wave frequencies and vice versa, since all objects are, in essence, vibratory structures. Different types of lenses, acting similarly to optical lenses, can alter the frequency content of the information that is transferred.
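For reference, the standard one-dimensional form of the transform pair the paragraph alludes to (added here; not quoted from the text) is:

```latex
% Forward transform: spatial form f(x) -> spatial-frequency spectrum F(k)
F(k) \;=\; \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx
% Inverse transform: spectrum back to spatial form
f(x) \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk
```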

This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when no remaining part is big enough to contain the whole that the memory is lost. This can also explain why some children retain normal intelligence when large portions of their brain, in some cases half, are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections.

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain’s abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.
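For reference, a “windowed” (short-time, or Gabor) Fourier transform differs from the global transform above only in that a localized window w, centred at position tau, is slid across the signal so that each patch is analysed separately (a standard definition, added here; it is not quoted from the text):

```latex
% Windowed (short-time) Fourier transform: w is a localized window centred at tau,
% so each 'patch' of the signal gets its own local spectrum.
G(\tau,\omega) \;=\; \int_{-\infty}^{\infty} f(t)\, w(t-\tau)\, e^{-i\omega t}\, dt
```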

A holographic model can also account for features of memory that more traditional models cannot. The Hopfield memory model has a memory saturation point beyond which retrieval drastically slows and becomes unreliable. Holographic memory models, by contrast, have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and mimic forgetting through lossy storage.
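To put a rough number on that saturation point (a standard result for the classical Hopfield network, not a figure given in the text), a network of N binary neurons can reliably store only about

```latex
P_{\max} \;\approx\; 0.138\, N
```

random patterns before retrieval errors grow rapidly, which is the ceiling the holographic schemes are being contrasted with.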

INFORMATION

Bohm: “The actual nature of the information and the way it is carried is not yet entirely clear. Is it really correct, for example, to speak of a ‘field’ of information, since information does not fall off with distance, neither is it associated with energy in the usual sense. Possibly the notion of field should be widened or, at the quantum level, we should be talking about pre-space structures, or about algebraic relationships that precede the structure of space and time.”

Bohm’s notion of “active information” is tied to his “Ontological Interpretation” (formerly the Causal or Hidden Variable Interpretation). I propose it be freed from any particular theory and raised to the level of a General Principle. Bohm never considered his Ontological Interpretation to be the last word on quantum theory, rather that it would suggest insights and avenues for further research. I believe that one of the most valuable is this notion of information.

“Yes, if you say that all matter actually works from information, not merely matter in the nervous system or DNA matter working in the cell, but even the electron is forming from empty space being informed as it were by some unknown source of information which may be all over the space.
And then we can not have, there is no sharp division between thought, emotion and matter. You see that they flow into each other. Even in ordinary experience you have thought and emotion flow into a movement of matter in the body. Or the movement of matter in the body gives rise to emotion and thought right.

Now the only point is that present science has no idea how thought could directly affect an object which is not in contact with the body, you see, or directly through some system. But if you say that the entire ground of existence is enfolded in space, that all matter is coming out of that space, including ourselves, our brains, our thoughts … then the information might gradually pervade the space, so that matter starts to, you could say that matter is always forming according to whatever information it has, and therefore the thought process could alter that information content.
So I would say that it does look possible, though I think very careful experiments have to be done before we say that it actually does take place.”


MATTER, ANIMATE AND INANIMATE

Right off Bohm refers to the particle, the most essential building-block of matter. He considers the particle, fundamentally, to be only an “abstraction that is manifest to our senses.” Basically, for Bohm, the whole cosmos is matter; in his own words: “What is, is always a totality of ensembles, all present together, in an orderly series of stages of enfoldment and unfoldment, which intermingle and interpenetrate each other in principle throughout the whole of space.”

Bohm’s explicate order, however, is secondary–derivative. It flows out of the law of the Implicate Order, a law that stresses the relationships between the enfolded structures that interweave each other throughout cosmic space rather than between the “abstracted and separate forms that manifest to the senses.”

Bohm’s explanation of “manifest” is basically that in certain sub-orders, within the “whole set” of Implicate Order, there is a “totality of forms that have an approximate kind of recurrence, stability and separability.” These forms are capable of appearing tangible, solid, and thus make up our manifest world.

Bohm also declares that the “implicate order has to be extended into a multidimensional reality.” He proceeds: “In principle this reality is one unbroken whole, including the entire universe with all its fields and particles. Thus we have to say that the holomovement enfolds and unfolds in a multidimensional order, the dimensionality of which is effectively infinite. Thus the principle of relative autonomy of sub-totalities–is now seen to extend to the multi-dimensional order of reality.”

Bohm illustrates this higher-dimensional reality by showing the relationship of two televised images of a fish tank, where the fish are seen through two walls at right angles to one another. What is seen is that there is a certain “relationship between the images appearing on the two screens.” We know, Bohm notes, that the two fish tank images are interacting actualities, but they are not two independently existent realities. “Rather, they refer to a single actuality, which is the common ground of both.” For Bohm this single actuality is of higher dimensionality, because the television images are two-dimensional projections of a three-dimensional reality, which “holds these two-dimensional projections within it.” These projections are only abstractions, but the “three-dimensional reality is neither of these, rather it is something else, something of a nature beyond both.”

If there is apparent evolution in the universe, it is because the different scales or dimensions of reality are already implicit in its structure. Bohm uses the analogy of the seed being “informed” to produce a living plant. The same can be said of all living matter. “Life is enfolded in the totality and–even when it is not manifest, it is somehow implicit.” The holomovement is the ground for both life and matter. There is no dichotomy.

What lies ahead? For Bohm it is the development of consciousness!

CONSCIOUSNESS

Bohm conceives of consciousness as more than information and the brain; rather it is information that enters into consciousness. For Bohm consciousness “involves awareness, attention, perception, acts of understanding, and perhaps yet more.” Further, Bohm parallels the activity of consciousness with that of the Implicate Order in general.

Consciousness, Bohm notes, can be “described in terms of a series of moments.” Basically, “one moment gives rise to the next, in which content that was previously implicate is now explicate while the previous explicate content has become implicate.” Consciousness is an interchange; it is a feedback process that results in a growing accumulation of understanding.

Bohm considers the human individual to be an “intrinsic feature of the universe, which would be incomplete, in some fundamental sense” if the person did not exist. He believes that individuals participate in the whole and consequently give it meaning. Because of human participation, the “Implicate Order is getting to know itself better.”

Bohm also senses a new development. The individual is in total contact with the Implicate Order; the individual is part of the whole of mankind, and he is the “focus for something beyond mankind.” Using the analogy of the transformation of the atom ultimately into a power and chain reaction, Bohm believes that the individual who uses inner energy and intelligence can transform mankind. The collectivity of individuals has reached the “principle of the consciousness of mankind,” but does not quite have the “energy to reach the whole, to put it all on fire.”

Continuing with this theme on the transformation of consciousness, Bohm goes on to suggest that an intense heightening of individuals who have shaken off the “pollution of the ages” (wrong worldviews that propagate ignorance), who come into close and trusting relationship with one another, can begin to generate the immense power needed to ignite the whole consciousness of the world. In the depths of the Implicate Order, there is a “consciousness, deep down–of the whole of mankind.”

It is this collective consciousness of mankind that is truly significant for Bohm. It is this collective consciousness that is truly one and indivisible, and it is the responsibility of each human person to contribute towards the building of this consciousness of mankind. “There’s nothing else to do, there is no other way out. That is absolutely what has to be done and nothing else can work.”

Bohm also believes that the individual will eventually be fulfilled upon the completion of cosmic noogenesis. Referring to all the elements of the cosmos, including human beings, as projections of an ultimate totality, Bohm notes that as a “human being takes part in the process of this totality, he is fundamentally changed in the very activity in which his aim is to change that reality, which is the content of his consciousness.”

YouTube link showing a model of David Bohm’s implicate order as a Schrödinger-wave hologram composed of free-particle wave functions:

https://www.youtube.com/watch?v=Jzfj4R52Q6I

Bohm was obsessed with language, particularly with the derivation of words. He delved into the roots of words, not only in his writing but also in his usual manner of discourse. Peat tells a story on Bohm as well as on himself.

He liked to go on and on about the roots of words. He’d say, for example, “And art, take art, there are words like artifice, and artery, and articulate, and Artemis…” And then I’d quickly throw in, “and artichoke.” “Yes, artichoke!” he’d say. Then he’d stop and laugh, realizing he had been caught in his own stream of thought. “Artichoke….”[1]

In his serious way of approaching his life work, the pursuit of science was inextricably intertwined with the processes of Bohm’s thought and language. As he spent time delving into those topics, his physicist peers must have wondered if he hadn’t fallen down a rabbit hole and gotten lost. Why would a scientist of such creativity and potential divert to topics that belong in the soft pseudo-sciences of human functioning?

But to Bohm, the questions a researcher asks and the tools used to study them are inseparable, much as Niels Bohr had shown that the researcher, the measuring apparatus, and that which is measured together form an inseparable system. In a sense, Bohm extended Bohr’s ideas to an even finer level, to include attributes of the researcher’s own operating system. Alfred North Whitehead had said, “Every science must devise its own instruments. The tool required for philosophy is language. Thus philosophy redesigns language in the same way that, in a physical science, pre-existing appliances are redesigned.”[2] Surely Bohm would have agreed, and then extended language as a prime tool of the physical sciences as well.