Our Quantum Problem: Everything's related


 What Really Happens In Schrödinger's Box


Left to right: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, Richard Feynman. (Photo credit: Wikipedia)
In 1909, Ernest Rutherford, Hans Geiger and Ernest Marsden took a piece of radium and used it to fire charged particles at a sheet of gold foil. They wanted to test the then-dominant theory that atoms were simply clusters of electrons floating in little seas of positive electrical charge (the so-called ‘plum pudding’ model). What came next, said Rutherford, was ‘the most incredible event that has ever happened to me in my life’.

Despite the airy thinness of the foil, a small fraction of the particles bounced straight back at the source – a result, Rutherford noted, ‘as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you’. Instead of whooshing straight through the thin soup of electrons that should have been all that hovered in their path, the particles had encountered something solid enough to push back. Something was wrong with matter. Somewhere, reality had departed from the best available model. But where?

The first big insight came from Rutherford himself. He realised that, if the structure of the atom were to permit collisions of the magnitude that his team had observed, its mass must be concentrated in a central nucleus, with electrons whirling around it. Could such a structure be stable? Why didn’t the electrons just spiral into the centre, leaking electromagnetic radiation as they fell?

Such concerns prompted the Danish physicist Niels Bohr to formulate a rather oddly rigid model of the atom, using artificial-seeming rules about electron orbits and energy levels to keep everything in order. It was ugly but it seemed to work. Then, in 1924, a French aristocrat and physicist named Louis de Broglie argued that Bohr’s model would make more sense if we assumed that the electrons orbiting the atomic nucleus (and indeed everything else that had hitherto been considered a particle) either came with, or in some sense could behave like, waves.

If Bohr’s atom had seemed a little arbitrary, de Broglie’s improved version was almost incomprehensible. Physical theory might have recovered some grip on reality but it seemed to have decisively parted company from common sense. And yet, as Albert Einstein said on reading de Broglie’s thesis, here was ‘the first feeble ray of light on this worst of our physics enigmas’. By 1926, these disparate intuitions and partial models were already unified into a new mathematical theory called quantum mechanics. Within a few years, the implications for chemistry, spectroscopy and nuclear physics were being confirmed.

It was clear from the start that quantum theory challenged all our previous preconceptions about the nature of matter and how it behaves, and indeed about what science can possibly – even in principle – say about these questions. Over the years, this very slipperiness has made it irresistible to hucksters of various descriptions. I regularly receive ads offering to teach me how to make quantum jumps into alternate universes, tap into my infinite quantum self-energy, and make other exciting-sounding excursions from the plane of reason and meaning. It’s worth stressing, then, that the theory itself is both mathematically precise and extremely well confirmed by experiment.

Quantum mechanics has correctly predicted the outcomes of a vast range of investigations, from the scattering of X-rays by crystals to the discovery of the Higgs boson at the Large Hadron Collider. It successfully explains a vast range of natural phenomena, including the structure of atoms and molecules, nuclear fission and fusion, the way light interacts with matter, how stars evolve and shine, and how the elements forming the world around us were originally created.

Yet it puzzled many of its founders, including Einstein and Erwin Schrödinger, and it continues to puzzle physicists today. Einstein in particular never quite accepted it. ‘It seems hard to sneak a look at God’s cards,’ he wrote to a colleague, ‘but that he plays dice and uses “telepathic” methods (as the present quantum theory requires of him) is something that I cannot believe for a single moment.’ In a 1935 paper co-written with Boris Podolsky and Nathan Rosen, Einstein asked: ‘Can [the] Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ He concluded that it could not. Given apparently sensible demands on what a description of physical reality must entail, it seemed that something must be missing. We needed a deeper theory to understand physical reality fully.

Einstein never found the deeper theory he sought. Indeed, later theoretical work by the Irish physicist John Bell and subsequent experiments suggested that the apparently reasonable demands of that 1935 paper could never be satisfied. Had Einstein lived to see this work, he would surely have agreed that his own search for a deeper theory of reality needed to follow a different path from the one he sketched in 1935.

Even so, I believe that Einstein would have remained convinced that a deeper theory was needed. None of the ways we have so far found of looking at quantum theory are entirely believable. In fact, it’s worse than that. To be ruthlessly honest, none of them even quite makes sense. But that might be about to change.


Here’s the basic problem. While the mathematics of quantum theory works very well in telling us what to expect at the end of an experiment, it becomes conceptually confusing when we try to understand what was happening during the experiment. To calculate what outcomes we might expect when we fire protons at one another in the Large Hadron Collider, we need to analyse what – at first sight – look like many different stories. The same final set of particles detected after a collision might have been generated by lots of different possible sequences of energy exchanges involving lots of different possible collections of particles. We can’t tell which particles were involved from the final set of detected particles.

Now, if the trouble was only that we have a list of possible ways that things could have gone in a given experiment and we can’t tell which way they actually went just by looking at the results, that wouldn’t be so puzzling. If you find some flowers at your front door and you’re not sure which of your friends left them there, you don’t start worrying that there are inconsistencies in your understanding of physical reality. You just reason that, of all the people who could have brought them, one of them presumably did. You don’t have a logical or conceptual problem, just a patchy record of events.


If you think this doesn’t make any sense, that there has to be something missing, well, that’s how many thoughtful physicists feel


Quantum theory isn’t like this, as far as we presently understand it. We don’t get a list of possible explanations for what happened, of which one (although we don’t know which) must be the correct one. We get a mathematical recipe that tells us to combine, in an elegant but conceptually mysterious way, numbers attached to each possible explanation. Then we use the result of this calculation to work out the likelihood of any given final result. But here’s the twist. Unlike the mathematical theory of probability, this quantum recipe requires us to make different possible stories cancel each other out, or fully or partially reinforce each other. This means that the net chance of an outcome arising from several possible stories can be more or less than the sum of the chances associated with each.
To get a sense of the conceptual mystery we face here, imagine you have three friends, John, Mary and Jo, who absolutely never talk to each other or interact in any other way. If any one of them is in town, there’s a one-in-four chance that this person will bring you flowers on any given day. (They’re generous and affectionate friends. They’re also entirely random and spontaneous – nothing about the particular choice of day affects the chance they might bring you flowers.) But if John and Mary are both in town, you know there’s no chance you’ll get any flowers that day – even though they never interact, so neither of them should have any idea whether the other one is around. And if Mary and Jo are both in town, you’ll certainly get exactly one bunch of flowers – again, even though Mary and Jo never interact either, and you’d have thought that if they’re acting independently, your chance of getting any flowers is a bit less than a half, while once in a while you should get two bunches.

If you think this doesn’t make any sense, that there has to be something missing from this flower delivery fable, well, that’s how many thoughtful physicists feel about quantum theory and our understanding of nature. Pretty precisely analogous things happen in quantum experiments.
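For readers who like to see the arithmetic, here is a toy sketch of the recipe described above – not a model of any real experiment, just an illustration of how amplitudes differ from probabilities. Each possible ‘story’ carries a complex number (an amplitude); stories leading to the same outcome have their amplitudes summed, and the squared magnitude of the sum gives the chance of that outcome. The specific amplitude values below are invented for illustration.

```python
# Toy sketch of the quantum recipe: stories carry complex amplitudes,
# and the chance of an outcome is the squared magnitude of the SUM of
# the amplitudes for all stories leading to it. Unlike probabilities,
# amplitudes can cancel or reinforce one another.

def classical_chance(probabilities):
    """Classical rule: chance that at least one independent story delivers."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1 - p)
    return 1 - p_none

def quantum_chance(amplitudes):
    """Quantum rule: square the magnitude of the summed amplitudes."""
    return abs(sum(amplitudes)) ** 2

# One story alone has chance |0.5|^2 = 0.25, like one friend in town.
a = 0.5 + 0j

# Two stories with opposite phases cancel ("John and Mary in town"):
print(quantum_chance([a, -a]))         # 0.0 -- no flowers, ever

# Two stories in phase reinforce ("Mary and Jo in town"):
print(quantum_chance([a, a]))          # 1.0 -- exactly one bunch, always

# The classical rule for two independent 1-in-4 chances gives neither:
print(classical_chance([0.25, 0.25]))  # 0.4375 -- "a bit less than a half"
```

The point of the sketch is the contrast on the last three lines: the classical rule can only combine chances towards ‘a bit less than a half’, while the amplitude rule can push the same two stories all the way to zero or to certainty.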


One attempt to make sense of this situation – the so-called ‘Copenhagen interpretation’ of quantum theory, versions of which were advocated by Bohr, Werner Heisenberg and other leading quantum theorists in the first half of the last century – claims that quantum theory is teaching us something profound and final about the limits of what science can tell us. According to this approach, a scientific question makes sense only if we have a direct way of verifying the answer. So, asking what we’ll see in our particle detectors is a scientific question; asking what happened in the experiment before anything registered in our detectors isn’t, because we weren’t looking. To be looking, we’d have had to put detectors in the middle of the experiment, and then it would have been a different experiment. In trying to highlight the absurd-seeming consequences of this view, Schrödinger minted what has become its best-known popular icon – an imaginary experiment with a sealed box containing a cat that is simultaneously alive and dead, only resolving into one or other definite state when an experimenter opens the box.

The Copenhagen interpretation was very much in line with the scientific philosophy of logical positivism that caught on at around the same time. In particular, it rests on something like logical positivism’s principle of verification, according to which a scientific statement is meaningful only if we have some means of verifying its truth. To some of the founders of quantum theory, as well as to later adherents of the Copenhagen interpretation, this came to seem an almost self-evident description of the scientific process. Even after philosophers largely abandoned logical positivism – not least because the principle of verification fails its own test for meaningful statements – many physicists trained in the Copenhagen tradition insisted that their stance was no more than common sense.

However, its consequences are far from commonsensical. If you take this position seriously, then you have to accept that the Higgs boson wasn’t actually discovered at the Large Hadron Collider, since no one has ever directly detected a Higgs boson, and we have no direct evidence to support the claim that the Higgs boson is a real particle. Insofar as we learnt anything about nature from the Large Hadron Collider, it was merely what sort of records you get in your detectors when you build something like the Large Hadron Collider. It’s hard to imagine the scientists who work on it, or the citizens who funded them, being very enthusiastic about this justification, but on a strict Copenhagen view it’s the best we can do.

It gets worse. Quantum theory is supposed to describe the behaviour of elementary particles, atoms, molecules and every other form of matter in the universe. This includes us, our planet and, of course, the Large Hadron Collider. In that sense, everything since the Big Bang has been one giant quantum experiment, in which all the particles in the universe, including those we think of as making up the Earth and our own bodies, are involved. But if the theory tells us we’re among the sets of particles involved in a giant quantum experiment, the position I’ve just outlined tells us we can’t justify any statement about what has happened or is happening until the experiment is over. Only at the end, when we might perhaps imagine some technologically advanced alien experimenters in the future looking at the final state of the universe, can any meaningful statement be made.



Of course, this final observation will never happen. By definition, no one is sitting outside the universe waiting to observe the final outcome at the end of time. And even if the idea of observers waiting outside the universe made sense – which it doesn’t – on this view their final observations still wouldn’t allow them to say anything about what happened between the Big Bang and the end of time. We end up concluding that quantum theory doesn’t allow us to justify making any scientific statement at all about the past, present or future. Our most fundamental scientific theory turns out to be a threat to the whole enterprise of science. For these and related reasons, the Copenhagen interpretation gradually fell out of general favour.

Its great rival was first set out in a 1957 paper and Princeton PhD thesis written by one of the stranger figures in the history of 20th-century physics, Hugh Everett III. Rather unromantically, and very unusually for a highly original thinker and talented physicist, Everett abandoned theoretical physics after he had published his big idea. A good deal of his subsequent career was spent in military consultancy, advising the US on strategies for fighting and ‘winning’ a nuclear war against the USSR, and the bleakness of this chosen path presumably contributed to his chain-smoking, alcoholism and depression. Everett died of a heart attack at the age of 51; possibly we can infer something of his own ultimate assessment of his life’s worth from the fact that he instructed his wife to throw his ashes in the trash. And yet, despite his detachment from academic life (some might say from all of life), Everett’s PhD work eventually became enormously influential.

One way of thinking about his ideas on quantum theory is that our difficulties in getting a description of quantum reality arise from a tension between the mathematics – which, as we have seen, tells us to make calculations involving many different possible stories about what might have really happened – and the apparently incontrovertible fact that, at the end of an experiment, we see that only one thing actually did happen. This led Everett to ask a question that seems at first sight stupid, but which turns out to be very deep: how do we know that we only get one outcome to a quantum experiment? What if we take the hint from the mathematics and consider a picture of reality in which many different things actually do happen – everything, in fact, that quantum theory allows? And what if we take this to its logical conclusion and accept the same view of cosmology, so that all the different possible histories of the evolution of the universe are realised? We end up, Everett argued, with what became known as a ‘many worlds’ picture of reality, one in which reality is constantly forming new branches describing alternative – but equally real – future continuations of the same present state.

On this view, every time any of us does a quantum experiment with several possible outcomes, all those outcomes are enacted in different branches of reality, each of which contains a copy of our self whose memories are identical up to the start of the experiment, but each of whom sees different results. None of these future selves has any special claim to be the real one. They are all equally real – genuine but distinct successors of the person who started the experiment. The same picture holds true more generally in cosmology: alongside the reality we currently inhabit, there are many others in which the history of the universe and our planet was ever so slightly different, many more in which humanity exists on Earth but the course of human history was significantly different from ours, and many more still in which nothing resembling Earth or its inhabitants can be found.


On another paper addressing the same issue, Everett’s comment was the single word ‘bullshit’

This might sound like unbelievable science fiction. To such a gibe, Everett and his followers would reply that science has taught us many things that seemed incredible at first. Other critics object that the ‘many worlds’ scenario seems like an absurdly extravagant and inelegant hypothesis. Trying to explain the appearance of one visible reality by positing an infinite collection of invisible ones might seem the most deserving candidate in the history of science for a sharp encounter with Occam’s razor. But to this, too, Everettians have an answer: given the mathematics of quantum theory, on which everyone agrees, their proposal is actually the simplest option. The many worlds are there in the equations. To eliminate them you have to add something new, or else change them – and we don’t have any experimental evidence telling us that something should be added or that the equations need changing.
Everettians might have a point, then, when they argue that their ideas deserve a hearing. The problem is that, from Everett and his early followers onwards, they have never managed to agree on a clear story about how exactly this picture of branching worlds is supposed to emerge from the fundamental equations of quantum theory, and how this single world that we see, with experimental outcomes that are apparently random but which follow definite statistical laws, might then be explained. One of the blackly funny revelations in Peter Byrne’s biography The Many Worlds of Hugh Everett III (2010) was the discovery of Everett’s personal copy of the classic text The Many‑Worlds Interpretation of Quantum Mechanics, put together in 1973 by the distinguished American physicist Bryce DeWitt and a few of Everett’s other early supporters. To DeWitt’s mild criticism that ‘Everett’s original derivation [of probabilities]… is rather too brief to be entirely satisfying’, Everett scribbled in the margins ‘Only to you!’ and ‘Goddamit [sic] you don’t see it’. On another paper addressing the same issue, his comment was the single word ‘bullshit’. Although generally in more civil terms, Everettians have continued to argue over this and related points ever since.

Indeed, the big unresolved, and seemingly unsolvable, problem here is how statistical laws can possibly emerge at all when the Everettian meta-picture of branching worlds has no randomness in it. If we do an experiment with an uncertain outcome, Everett’s proposal says that everything that could possibly happen (including the very unlikely outcomes) will in fact take place. It’s possible that Everettians can sketch some explanation of why it seems to ‘us’ (really, to any one of our many future successors) that ‘we’ see only one outcome. But that only replaces ‘everything will actually happen’ with ‘anything could seem to happen to us’ – which is still neither a quantitative nor a falsifiable scientific statement. To do science, we need to be able to test statements such as ‘there’s a one-in-three chance X will happen to us’ and ‘it’s incredibly unlikely that Y will happen to us’ – but it isn’t at all obvious that Everett’s ideas support any such statements.

Everettians continue to devote much ingenuity to deriving statements involving probabilities from the underlying deterministic many-worlds picture. One idea lately advocated by David Deutsch and David Wallace of the University of Oxford is to try to use decision theory, the area of mathematics that concerns rational decision-making, to explain how rational people should behave if they believe they are in a branching universe. Deutsch and Wallace start from a few purportedly simple and natural technical assumptions about the preferences one should have in a branching world and then claim to show that rational Everettians should behave as though they were in an uncertain probabilistic world following the statistical laws of quantum theory, even though they believe their true situation is very different.

One problem with this line of thought is that the assumptions turn out not to seem especially natural, or even properly defined, on close inspection. The easiest way to understand this is to look for rationally defensible strategies for life in a branching universe other than the ones Deutsch and Wallace advocate. One example I rather like (because it makes the point succinctly, not because it seems morally attractive) is that of future self elitism, which counsels us to focus only on the welfare of our most fortunate and successful future successor, perhaps on the premise that our best possible future self is our truest self. Future self elitists don’t worry about the odds of a particular bet, only about the best possible payoff. Thus they violate Deutsch and Wallace’s axioms, but it is hard to see any purely logical argument against their decisions.

Another issue is that, as several critics have pointed out, whatever one thinks of Deutsch and Wallace’s proposed rational strategy, it answers a subtly different question to the one that Everettians were supposed to be addressing. The question ‘What bets should I be happy to place on the outcomes of a given experiment, given that I believe in Everettian many-worlds?’ is certainly a question that relates something we normally try to answer using probabilities with the many-worlds picture. In that sense, it makes some sort of connection between probabilities and many worlds – and since we’ve seen how hard that is to achieve, it’s easy to understand why Everettians (at least initially) are enthusiastic about this accomplishment. But, unfortunately, it’s not the sort of connection we need. The key scientific question is why the experimental evidence for quantum theory justifies a belief in many worlds in the first place. Many Everettians – from Everett and DeWitt onwards – have tried to give a satisfactory answer to this. Many critics (myself included) appreciate the cunning of their attempts but think they have all failed.


If we cannot get a coherent story about physical reality from the Copenhagen interpretation of quantum theory and we cannot get a scientifically adequate one from many-worlds theory, where do we turn? We could, as some physicists suggest, simply give up on the hope of finding any description of an objective external reality. But it is very hard to see how to do this without also giving up on science. The hypothesis that our universe began from something like a Big Bang, our account of the evolution of galaxies and stars, the formation of the elements and of planets and all of chemistry, biology, physics, archaeology, palaeontology and indeed human history – all rely on propositions about real observer-independent facts and events. Once we assume the existence of an external world that changes over time, these interrelated propositions form a logically coherent set; chemistry depends on cosmology, evolution on chemistry, history on evolution and so on. Without that assumption, it is very hard to see how one might make sense of any of these disciplines, let alone see a unifying picture that underlies them all and explains their deep interrelations and mutual dependence.

If we can’t allow the statement that dinosaurs really walked the Earth, what meaningful content could biology, palaeontology or Darwinian evolution actually have? It’s even harder to understand why the statement seems to give such a concise explanation of many things we’ve noticed about the world, from the fossil record to (we think) the present existence of birds, if it’s actually just a meaningless fiction. Similarly, if we can’t say that water molecules really contain one oxygen and two hydrogen atoms – or at least that something about reality supports this model – then what, if anything, is chemistry telling us?

Physics poses many puzzles, and the focus of the physics community shifts over time. Most theoretical physicists today do not work on this question about what really happens in quantum experiments. Among those who think about it at all, many hope that we can find a way of thinking about quantum theory in which reality somehow evaporates or never arises. That seems like wishful thinking to me.

The alternative, as John Bell recognised earlier and more clearly than almost all of his contemporaries, is to accept that quantum theory cannot be a complete fundamental theory of nature. (As mentioned above, Einstein also believed this, though at least partly because of arguments that Bell was instrumental in refuting.)


we need to supplement our quantum equations with quantities that correspond directly to real events or things – real ‘stuff’ in the world

Bell was one of the last century’s deepest thinkers about science. As he put it, quantum theory ‘carries in itself the seeds of its own destruction’: it undermines the account of reality that it needs in order to make any sense as a physical theory. On this view, which was once as close to heresy as a scientific argument can be but is now widely held among scientists who work on the foundations of physics, the reality problem is just not solvable within quantum theory as it stands. And so, along with the variables that describe potentialities and possibilities, we need to supplement our quantum equations with quantities that correspond directly to real events or things – real ‘stuff’ in the world.
Bell coined the term beables to refer to these elusive missing ingredients. ‘Beable’ is an ugly word but a useful concept. It denotes variables that are able to ‘be’ in the world – hence the name. And indeed it turns out that we can extend quantum theory to include beables that would directly describe the sort of reality we actually see. Some of the most interesting work in fundamental physics in the past few decades has been in the search for new theories that agree with quantum theory in its predictions to date, but which include a beable description of reality, and so give us a profoundly different fundamental picture of the world.

What sort of quantities might do the trick? One early idea comes from Louis de Broglie, whom we met earlier, and David Bohm, an American theoretical physicist who fled McCarthyite persecution and spent most of his career at the University of London. The essence of their proposal is that, in addition to the mathematical quantities given to us by quantum theory, we also have equations defining a definite path through space and time for each elementary particle in nature. These paths are determined by the initial state of the universe and, in this sense, de Broglie-Bohm theory can be thought of as a deterministic theory, rather like the pre-quantum theories given by Newton’s and Maxwell’s equations. Unfortunately, de Broglie and Bohm’s equations also share another property of Newton’s equations: an action at any point in space has instantaneous effects on particles at arbitrarily distant points.

Because these effects would not be directly detectable, this would not actually allow us to send signals faster than light, and so it does not lead to observations that contradict Einstein’s special theory of relativity. It does, however, very much violate its spirit, as well as the beautiful symmetry principles incorporated in the underlying mathematics. For this reason, and also because de Broglie and Bohm’s ideas work well for particles but are hard to generalise to electromagnetic and other fields, it seems impossible to find a version of the scheme that is consistent with much of modern theoretical physics. Still, de Broglie and Bohm’s great achievement was to show that we can find a mathematically consistent description of reality alongside quantum theory. When it first emerged, their work was largely unappreciated, but it led to many of Bell’s insights into the quantum reality problem and blazed a trail for later theorists.


In the 1980s, a much more promising avenue opened up, thanks to the efforts of Giancarlo Ghirardi, Alberto Rimini, Tullio Weber and Philip Pearle, three European theorists and an American. Their approach became known as the ‘spontaneous collapse’ model and their brilliant insight was that we can find mathematical laws that describe how the innumerable possible outcomes encoded in a quantum description of an experiment get reduced to the one actual result that we see. As we have already noted, the tension between these two descriptions is at the heart of the quantum reality problem.

When using standard quantum theory, physicists often say that the wave function – a mathematical object that encodes all the potential possibilities – ‘collapses’ to the measured outcome at the end of an experiment. This ‘collapse’, though, is no more than a figure of speech, which only highlights the awkward fact that we do not understand what is really happening. By contrast, in Ghirardi-Rimini-Weber-Pearle models, collapse becomes a well-defined mathematical and physical process, taking place at definite points in space, following precise equations and going on all the time in the world around us, whether or not we are making measurements. According to these new equations, the more particles there are in a physical system, the faster the collapse rate. Left isolated, a single electron will collapse so rarely that we essentially never see any effect. On the other hand, anything large enough to be visible – even a dust grain – has enough particles in it that it collapses very quickly compared to human perception times. (In Schrödinger’s famous thought experiment, the cat’s quantum state would resolve in next to no time, leaving us with either a live cat or a dead one, not some strange quantum combination of both.)
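A back-of-envelope sketch makes the scale separation vivid. The per-particle collapse rate used below – roughly one collapse per 10^16 seconds – is the value commonly quoted from Ghirardi, Rimini and Weber’s original 1986 proposal; as the text notes, the theory itself leaves this parameter free, so treat the specific numbers as illustrative assumptions.

```python
# Sketch of the collapse-rate scaling described above. With collapses
# per second proportional to particle number, a lone electron almost
# never collapses, while a visible dust grain collapses far faster
# than human perception. The per-particle rate is GRW's commonly
# quoted 1986 value, assumed here for illustration only.

PER_PARTICLE_RATE = 1e-16   # collapses per particle per second (assumed)
SECONDS_PER_YEAR = 3.15e7

def expected_collapse_time(n_particles):
    """Mean time in seconds before the first collapse in an n-particle system."""
    return 1.0 / (PER_PARTICLE_RATE * n_particles)

# A single isolated electron: of order 10^16 seconds, i.e. hundreds of
# millions of years -- we essentially never see any effect.
print(expected_collapse_time(1) / SECONDS_PER_YEAR)

# A barely visible dust grain, of order 10^18 nucleons: of order a
# hundredth of a second -- far quicker than we could ever notice.
print(expected_collapse_time(1e18))
```

This is why the same equations can leave microscopic superpositions untouched while resolving Schrödinger’s cat almost instantly: the cat contains vastly more particles than the dust grain, so its collapse time is shorter still.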

One way of thinking about reality in these models, first suggested by Bell, is to take the beables to be the points in space and time at which the collapses take place. On this view, a dust grain is actually a little galaxy of collapse points, winking instantaneously in and out of existence within or near to (what we normally think of as) the small region of space that it occupies. Everything else we see around us, including our selves, has the same sort of pointillistic character.

Collapse models do not make exactly the same predictions as quantum theory, which could turn out to be either a strength or a weakness. Since quantum theory is very well confirmed, this disagreement might seem to rule these new models out. However, the exact rate of collapses per particle is a free parameter that is not fixed by the mathematics of the basic proposal. It is perfectly possible to tailor this value such that the differences between collapse model predictions and those of quantum theory are so tiny that no experiment to date would have detected them, and at the same time large enough that the models give a satisfactory solution to the reality problem (ie, everything that seems definite and real to us actually is real and definite).

That said, we presently have no theoretically good reason why the parameter should be in the range that allows this explanation to work. It might seem a little conspiratorial of nature to give us the impression that quantum theory is correct, while tuning the equations so that the crucial features that give rise to a definite physical reality are – with present technology – essentially undetectable. On the other hand, history tells us that deep physical insights, not least quantum theory itself, have often come to light only when technology advances sufficiently. The first evidence for what turns out to be a revolutionary change in our understanding of nature can often be a tiny difference between what current theory predicts and what is observed in some crucial experiment.


Like every previous theory of physics, quantum theory will turn out only approximately true, applying within a limited domain only

There are other theoretical problems with collapse models. Although they do not seem to conflict with special relativity or with field theories in the way that de Broglie-Bohm theory does, incorporating the collapse idea into these fundamental theories nevertheless poses formidable technical problems. Even on an optimistic view, the results in this direction to date represent work in progress rather than a fully satisfactory solution. Another worry for theorists in a subject where elegance seems to be a surprisingly strong indicator of physical relevance is that the mathematics of collapse seems a little ad hoc and utilitarian. To be fair, it is considerably less ugly than the de Broglie-Bohm theories, which to a purist’s eye more closely resemble a Heath Robinson contraption than the elegant machinery we have come to expect of the laws of physics. But compared with the extraordinary depth and beauty of Einstein’s general theory of relativity, or of quantum theory itself, collapse models disappoint.
This could simply mean that we have not properly understood them, or not yet seen the majestic deeper theory of which they form a part. It seems likelier, though, that collapse models are at best only a step in roughly the right direction. I suspect that, like de Broglie-Bohm theory, they will eventually be seen as pointers on the way to a deeper understanding of physical reality – extraordinarily important achievements, but not fundamentally correct descriptions.


There is, however, one important lesson that we can already credit to collapse models. De Broglie-Bohm theory suffers from the weakness that its experimental predictions are precisely the same as those of quantum theory, unlike collapse models that, as we have noted, are at least in principle testably different. The beables in de Broglie-Bohm theory – the particle paths – play a rather subordinate role: their behaviour is governed by the wave function that characterises all the possible realities from which any given set of paths is drawn, but they have no effect on that wave function. In metaphysical language, the de Broglie-Bohm theory beables are epiphenomena. The American psychologist William James once poetically described human consciousness as ‘Inert, uninfluential, a simple passenger in the voyage of life, it is allowed to remain on board, but not to touch the helm or handle the rigging’. Much the same might be said of a de Broglie-Bohm beable. Collapse-model beables, on the other hand, give as good as they get. Their appearance is governed by rules involving the quantum wave function, and yet, once they appear, they in turn alter the wave function. This makes for a far more interesting theory, mathematically as well as scientifically.

It’s tempting to declare this as a requirement for any variable in a fundamental theory of physics – or at least, any variable that plays as important a role as the beables are meant to play: it should be mathematically active, not purely passive. Any interesting solution to the quantum reality problem should (like collapse models but unlike de Broglie-Bohm theory) make experimentally testable predictions that allow us to check our new description of reality.

How might we do that? Assuming these ideas are not entirely wrong, what sort of experiments might give us evidence of a deeper theory underlying quantum theory and a better understanding of physical reality? The best answer we can give at present, if collapse models and other recent ideas for beable theories are any guide, is that we should expect to see something new when some relevant quantity in the experiment gets large. In particular, the peculiar and intriguing phenomenon called quantum interference – which seems to give direct evidence that different possible paths which could have been followed during an experiment all contribute to the outcome – should start to break down as we try to demonstrate it for larger and larger objects, or over larger and larger scales.
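The expected breakdown of interference can be pictured with a toy fringe model. The sketch below is an illustration only, not any specific collapse model's prediction: a visibility parameter v scales the interference term, so v = 1 gives full quantum fringes and v = 0 gives the fringe-free classical result that collapse models suggest for sufficiently large objects.

```python
import numpy as np

# Toy two-path interference pattern: I(x) = 1 + v*cos(phi(x)), where the
# visibility v (between 0 and 1) measures fringe contrast. Collapse models
# suggest v should shrink as the interfering object grows; this sketch is
# purely illustrative, not any particular model's prediction.

def intensity(phase, visibility):
    return 1 + visibility * np.cos(phase)

phases = np.linspace(0, 2 * np.pi, 9)
full = intensity(phases, 1.0)     # ideal quantum interference
damped = intensity(phases, 0.2)   # nearly washed-out fringes

# Fringe contrast (max - min): 2.0 at full visibility, only 0.4 when damped.
print(full.max() - full.min(), damped.max() - damped.min())
```

Experiments with ever-larger molecules amount to measuring whether this contrast really stays at its full quantum value as mass and scale increase.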

This makes some intuitive sense. Quantum theory was developed to explain the behaviour of atoms and other small systems, and has been well tested only on small scales. It would always have been a brave and perhaps foolhardy extrapolation to assume that it works on all scales, up to and including the entire universe, even if this involved no conceptual problems. Given the self-contradictions involved in the extrapolation and the profound obstacles that seem to prevent any solution of the reality problem within standard quantum theory, the most natural assumption is that, like every previous theory of physics, quantum mechanics will turn out only approximately true, applying within a limited domain only.

A number of experimental groups around the world are now trying to find the boundaries of that domain, testing quantum interference for larger and larger molecules (the current record is for molecules comprising around 1,000 atoms), and ultimately for small crystals and even viruses and other living organisms. This would also allow us to investigate the outlandish but not utterly inconceivable hunch that the boundaries of quantum theory have to do with the complexity of a system, or even with life itself, rather than just size. Researchers have proposed space-based experiments to test the interference between very widely separated beams and will no doubt spring into action once quantum technology becomes available on satellites, as it probably will in the next few years.


With luck, if the ideas I have outlined are on the right lines, we might have a good chance of detecting the limits of quantum theory in the next decade or two. At the same time we can hope for some insight into the nature and structure of physical reality. Anyone who expects it to look like Newtonian billiard-balls bouncing around in space and time, or anything remotely akin to pre-quantum physical ideas, will surely be disappointed. Quantum theory might not be fundamentally correct, but it would not have worked so well for so long if its strange and beautiful mathematics did not form an important part of the deep structure of nature. Whatever underlies it might well seem weirder still, more remote from everyday human intuitions, and perhaps even richer and more challenging mathematically. To borrow a phrase from John Bell, trying to speculate further would only be to share my confusion. No one in 1899 could have dreamed of anything like quantum theory as a fundamental description of physics: we would never have arrived at quantum theory without compelling hints from a wide range of experiments.

The best present ideas for addressing the quantum reality problem are at least as crude and problematic as Bohr’s model of the atom. Nature is far richer than our imaginations, and we will almost certainly need new experimental data to take our understanding of quantum reality further. If the past is any guide, it should be an extraordinarily interesting scientific journey.

This article was originally published by Adrian Kent at Aeon

Implicate Order of subatomic particles


“Space is not empty. It is full, a plenum as opposed to a vacuum, and is the ground for the existence of everything, including ourselves. The universe is not separate from this cosmic sea of energy.” – David Bohm.

David Bohm was one of the most distinguished theoretical physicists of his generation, and a fearless challenger of scientific orthodoxy.

His interests and influence extended far beyond physics and embraced biology, psychology, philosophy, religion, art, and the future of society. Underlying his innovative approach to many different issues was the fundamental idea that beyond the visible, tangible world there lies a deeper, implicate order of undivided wholeness.

David Bohm was born in Wilkes-Barre, Pennsylvania, on December 20, 1917. He went to Pennsylvania State University to study physics, and later to the University of California at Berkeley to work on his PhD thesis with J. Robert Oppenheimer.

Albert Einstein (left) with J. Robert Oppenheimer (right) working on the Manhattan Project (Photo credit: Wikipedia)
While at Berkeley, Bohm, an idealist, became involved in politics and was labeled a communist by J. Edgar Hoover's FBI. This prevented him from obtaining the clearance needed to work with Oppenheimer on the Manhattan Project at Los Alamos, which produced the first atomic bomb during World War II. While working on his doctorate at Berkeley, however, he completed calculations on the scattering of collisions of protons and deuterons that were used by the Manhattan Project team, and were immediately classified. As a result, Bohm was denied access to his own work and was not allowed to write up or defend his thesis. Oppenheimer had to certify before the faculty of the university that Bohm had indeed successfully completed his research, and Bohm was awarded his PhD in physics.

Bohm was surprised to find that once electrons were in a plasma, they stopped behaving like individuals and started behaving as if they were part of a larger and interconnected whole. He later remarked that he frequently had the impression that the sea of electrons was in some sense alive.

In 1947, he became an assistant professor at Princeton University, where he met Albert Einstein. Einstein found Bohm to be a kindred spirit, a like-minded colleague with whom he could have fascinating conversations about the nature of the universe. Bohm extended his research to the study of electrons in metals. Once again the seemingly haphazard movements of individual electrons produced highly organized overall effects, and Bohm's innovative work in this area established his reputation as a theoretical physicist.

In 1951 Bohm wrote a classic textbook entitled Quantum Theory, in which he presented a clear account of the orthodox, Copenhagen interpretation of quantum physics. The Copenhagen interpretation was formulated mainly by Niels Bohr and Werner Heisenberg in the 1920s and is still highly influential today. But even before the book was published, Bohm began to have doubts about the assumptions underlying the conventional approach.

The holomovement is a key concept in David Bohm's interpretation of quantum mechanics and in his overall worldview. It brings together the holistic principle of "undivided wholeness" with the idea that everything is in a state of process or becoming (what he calls the "universal flux"). For Bohm, wholeness is not a static oneness but a dynamic wholeness-in-motion in which everything moves together in an interconnected process. The concept is presented most fully in Wholeness and the Implicate Order, published in 1980.

Referring to quantum theory, Bohm's basic assumption is that "elementary particles are actually systems of extremely complicated internal structure, acting essentially as amplifiers of information contained in a quantum wave." As a consequence, he evolved a new and controversial theory of the universe: a new model of reality that Bohm calls the "Implicate Order."

The theory of the Implicate Order contains an ultra-holistic cosmic view; it connects everything with everything else. In principle, any individual element could reveal “detailed information about every other element in the universe.” The central underlying theme of Bohm’s theory is the “unbroken wholeness of the totality of existence as an undivided flowing movement without borders.”

David Bohm

During the early 1980s Bohm developed his theory of the Implicate Order to explain the bizarre behavior of subatomic particles, behavior that quantum physicists have not been able to explain. Basically, two subatomic particles that have once interacted can instantaneously "respond to each other's motions thousands of years later when they are light-years apart." This sort of particle interconnectedness seems to require superluminal signaling, faster than the speed of light. This odd phenomenon is called the EPR effect, named after the Einstein-Podolsky-Rosen thought experiment.

Bohm believes that the bizarre behavior of the subatomic particles might be caused by unobserved subquantum forces and particles. Indeed, the apparent weirdness might be produced by hidden means that pose no conflict with ordinary ideas of causality and reality.

Bohm believes that this “hiddeness” may be reflective of a deeper dimension of reality. He maintains that space and time might actually be derived from an even deeper level of objective reality. This reality he calls the Implicate Order. Within the Implicate Order everything is connected; and, in theory, any individual element could reveal information about every other element in the universe.

Borrowing ideas from holographic photography, the hologram is Bohm’s favorite metaphor for conveying the structure of the Implicate Order. Holography relies upon wave interference. If two wavelengths of light are of differing frequencies, they will interfere with each other and create a pattern. “Because a hologram is recording detail down to the wavelength of light itself, it is also a dense information storage.” Bohm notes that the hologram clearly reveals how a “total content–in principle extending over the whole of space and time–is enfolded in the movement of waves (electromagnetic and other kinds) in any given region.” The hologram illustrates how “information about the entire holographed scene is enfolded into every part of the film.” It resembles the Implicate Order in the sense that every point on the film is “completely determined by the overall configuration of the interference patterns.” Even a tiny chunk of the holographic film will reveal the unfolded form of an entire three-dimensional object.

Proceeding from his holographic analogy, Bohm proposes a new order–the Implicate Order where “everything is enfolded into everything.” This is in contrast to the explicate order where things are unfolded. Bohm puts it thus:

“The actual order (the Implicate Order) itself has been recorded in the complex movement of electromagnetic fields, in the form of light waves. Such movement of light waves is present everywhere and in principle enfolds the entire universe of space and time in each region. This enfoldment and unfoldment takes place not only in the movement of the electromagnetic field but also in that of other fields (electronic, protonic, etc.). These fields obey quantum-mechanical laws, implying the properties of discontinuity and non-locality. The totality of the movement of enfoldment and unfoldment may go immensely beyond what has revealed itself to our observations. We call this totality by the name holomovement.”

Bohm believes that the Implicate Order has to be extended into a multidimensional reality; in other words, the holomovement endlessly enfolds and unfolds into infinite dimensionality. Within this milieu there are independent sub-totalities (such as physical elements and human entities) with relative autonomy. The layers of the Implicate Order can go deeper and deeper to the ultimately unknown. It is this “unknown and undescribable totality” that Bohm calls the holomovement. The holomovement is the “fundamental ground of all matter.”

THE HOLOGRAM AND HOLONOMY

In collaboration with Stanford neuroscientist Karl Pribram, Bohm was involved in the early development of the holonomic model of the functioning of the brain, a model for human cognition that is drastically different from conventionally accepted ideas. Bohm worked with Pribram on the theory that the brain operates in a manner similar to a hologram in accordance with quantum mathematical principles and the characteristics of wave patterns.

The holonomic brain theory, developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm, is a model of human cognition that describes the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are distinct from the more commonly known action potentials involving axons and synapses. These oscillations are waves that create interference patterns in which memory is encoded naturally, in a way that can be described with Fourier-transform equations. Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which also relies on Fourier transforms.

In a hologram, any part of sufficient size contains the whole of the stored information. In this theory, a piece of long-term memory is similarly distributed over a dendritic arbor, so that each part of the dendritic network contains all the information stored over the entire network. The model accounts for important aspects of human consciousness, including the fast associative memory that allows connections between different pieces of stored information, and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain neuron).

A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both storage and retrieval are carried out in a way described by Fourier-transform equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, though with more unwanted distortion, called noise.
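This part-contains-the-whole behaviour can be imitated numerically. The sketch below is a toy illustration (not Pribram's actual model): a signal is "recorded" as its Fourier spectrum, and reconstruction from only half of the stored frequency components still recovers the whole signal, degraded by noise rather than truncated.

```python
import numpy as np

# Hologram-like distributed storage via the Fourier transform: a pattern is
# stored as its frequency spectrum; keeping only a window of that spectrum
# (analogous to a fragment of holographic film) still reconstructs the
# whole pattern, just noisier.

rng = np.random.default_rng(0)
image = rng.random(256)            # stand-in for a stored pattern

spectrum = np.fft.fft(image)       # "record" the whole pattern as frequencies

# Keep only half of the frequency bins, chosen symmetrically so the
# reconstruction stays real-valued (a "fragment" of the hologram).
mask = np.zeros(256)
mask[:64] = 1
mask[-64:] = 1
fragment = spectrum * mask

reconstruction = np.real(np.fft.ifft(fragment))

# The reconstruction still correlates strongly with the entire original,
# even though half the stored components were discarded.
corr = np.corrcoef(image, reconstruction)[0, 1]
print(corr)
```

Because the discarded components are spread across the whole signal, the loss appears as uniform noise over the reconstruction, not as a missing piece, which is exactly the failure mode described above.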

An analogy is the broadcast region of a radio antenna: at each smaller location within the coverage area it is possible to receive every channel, just as the entirety of a hologram's information is contained within each part.

Another analogy is the way sunlight illuminates objects in the visual field of an observer. No matter how narrow the beam of sunlight, it contains all the information of the object, and when focused by the lens of a camera or an eyeball it produces the same full three-dimensional image. The Fourier transform converts spatial forms to spatial wave frequencies and vice versa, since all objects are in essence vibratory structures. Different types of lenses, acting similarly to optical lenses, can alter the frequency content of the information that is transferred.

This non-locality of information storage within the hologram is crucial: even if most parts are damaged, the whole is still contained within any single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged; memory is lost only when no remaining part is large enough to contain the whole. This may also explain why some children retain normal intelligence when large portions of their brain, in some cases half, are removed, and why memory is not lost when the brain is sliced in different cross-sections.

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain’s abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.

A holographic model can also account for features of memory that more traditional models cannot. The Hopfield memory model has an early saturation point, beyond which memory retrieval drastically slows and becomes unreliable. Holographic memory models, on the other hand, have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and model forgetting through lossy storage.
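The associative recall being contrasted here can be made concrete with a minimal Hopfield network, sketched below under the standard textbook Hebbian learning rule (a generic illustration, not any specific model from the text): a few patterns are stored in one weight matrix, and a corrupted cue is cleaned up by iterated updates.

```python
import numpy as np

# Minimal Hopfield associative memory. Patterns are +/-1 vectors stored in
# a single Hebbian weight matrix; recall starts from a noisy cue and
# iterates until the network settles near the stored pattern.

rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))      # 3 stored memories

W = sum(np.outer(p, p) for p in patterns) / N    # Hebbian weights
np.fill_diagonal(W, 0)                           # no self-connections

cue = patterns[0].copy()
flipped = rng.choice(N, size=8, replace=False)   # corrupt 8 of 64 bits
cue[flipped] *= -1

state = cue
for _ in range(5):                               # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

# Count how many of the 64 bits of the first stored pattern were recovered.
print(int(np.sum(state == patterns[0])))
```

Well below its saturation point this recall is nearly perfect; the saturation behaviour mentioned above appears when the number of stored patterns approaches roughly 0.14 N, after which retrieval degrades sharply.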

INFORMATION

Bohm: “The actual nature of the information and the way it is carried is not yet entirely clear. Is it really correct, for example, to speak of a ‘field’ of information, since information does not fall off with distance, neither is it associated with energy in the usual sense? Possibly the notion of field should be widened or, at the quantum level, we should be talking about pre-space structures, or about algebraic relationships that precede the structure of space and time.”

Bohm’s notion of “active information” is tied to his “Ontological Interpretation” (formerly the Causal or Hidden Variable Interpretation). I propose it be freed from any particular theory and raised to the level of a General Principle. Bohm never considered his Ontological Interpretation to be the last word on quantum theory, rather that it would suggest insights and avenues for further research. I believe that one of the most valuable is this notion of information.

“Yes, if you say that all matter actually works from information, not merely matter in the nervous system or DNA matter working in the cell, but even the electron is forming from empty space, informed as it were by some unknown source of information which may be all over the space.
And then there is no sharp division between thought, emotion and matter; you see that they flow into each other. Even in ordinary experience, thought and emotion flow into a movement of matter in the body, or the movement of matter in the body gives rise to emotion and thought.

Now the only point is that present science has no idea how thought could directly affect an object which is not in contact with the body, you see, or directly through some system. But if you say that the entire ground of existence is enfolded in space, that all matter is coming out of that space, including ourselves, our brains, our thoughts, then the information might gradually pervade the space, so that matter is always forming according to whatever information it has, and therefore the thought process could alter that information content.
So I would say that it does look possible, though I think very careful experiments have to be done before we say that it actually does take place.”


MATTER, ANIMATE AND INANIMATE

Bohm begins with the particle, the most essential building block of matter. He considers the particle, fundamentally, to be only an “abstraction that is manifest to our senses.” Basically, for Bohm, the whole cosmos is matter; in his own words: “What is, is always a totality of ensembles, all present together, in an orderly series of stages of enfoldment and unfoldment, which intermingle and interpenetrate each other in principle throughout the whole of space.”

Bohm’s explicate order, however, is secondary and derivative. It flows out of the law of the Implicate Order, a law that stresses the relationships between the enfolded structures that interweave each other throughout cosmic space, rather than between the “abstracted and separate forms that manifest to the senses.”

Bohm’s explanation of “manifest” is basically that in certain sub-orders, within the “whole set” of Implicate Order, there is a “totality of forms that have an approximate kind of recurrence, stability and separability.” These forms are capable of appearing tangible, solid, and thus make up our manifest world.

Bohm also declares that the “implicate order has to be extended into a multidimensional reality.” He proceeds: “In principle this reality is one unbroken whole, including the entire universe with all its fields and particles. Thus we have to say that the holomovement enfolds and unfolds in a multidimensional order, the dimensionality of which is effectively infinite. Thus the principle of relative autonomy of sub-totalities is now seen to extend to the multidimensional order of reality.”

Bohm illustrates this higher-dimensional reality by showing the relationship of two televised images of a fish tank, where the fish are seen through two walls at right angles to one another. What is seen is that there is a certain “relationship between the images appearing on the two screens.” We know, Bohm notes, that the two fish tank images are interacting actualities, but they are not two independently existent realities. “Rather, they refer to a single actuality, which is the common ground of both.” For Bohm this single actuality is of higher dimensionality, because the television images are two-dimensional projections of a three-dimensional reality, which “holds these two-dimensional projections within it.” These projections are only abstractions, but the “three-dimensional reality is neither of these; rather it is something else, something of a nature beyond both.”

If there is apparent evolution in the universe, it is because the different scales or dimensions of reality are already implicit in its structure. Bohm uses the analogy of the seed being “informed” to produce a living plant. The same can be said of all living matter. “Life is enfolded in the totality and–even when it is not manifest, it is somehow implicit.” The holomovement is the ground for both life and matter. There is no dichotomy.

What lies ahead? For Bohm it is the development of consciousness!

CONSCIOUSNESS

Bohm conceives of consciousness as more than information and the brain; rather it is information that enters into consciousness. For Bohm consciousness “involves awareness, attention, perception, acts of understanding, and perhaps yet more.” Further, Bohm parallels the activity of consciousness with that of the Implicate Order in general.

Consciousness, Bohm notes, can be “described in terms of a series of moments.” Basically, “one moment gives rise to the next, in which content that was previously implicate is now explicate while the previous explicate content has become implicate.” Consciousness is an interchange; it is a feedback process that results in a growing accumulation of understanding.

Bohm considers the human individual to be an “intrinsic feature of the universe, which would be incomplete, in some fundamental sense,” if the person did not exist. He believes that individuals participate in the whole and consequently give it meaning. Because of human participation, the “Implicate Order is getting to know itself better.”

Bohm also senses a new development. The individual is in total contact with the Implicate Order; the individual is part of the whole of mankind, and he is the “focus for something beyond mankind.” Using the analogy of the atom’s transformation into power through a chain reaction, Bohm believes that the individual who uses inner energy and intelligence can transform mankind. The collectivity of individuals has reached the “principle of the consciousness of mankind,” but it has not quite the “energy to reach the whole, to put it all on fire.”

Continuing with this theme on the transformation of consciousness, Bohm goes on to suggest that an intense heightening of individuals who have shaken off the “pollution of the ages” (wrong worldviews that propagate ignorance), who come into close and trusting relationship with one another, can begin to generate the immense power needed to ignite the whole consciousness of the world. In the depths of the Implicate Order, there is a “consciousness, deep down, of the whole of mankind.”

It is this collective consciousness of mankind that is truly significant for Bohm. It is this collective consciousness that is truly one and indivisible, and it is the responsibility of each human person to contribute towards the building of this consciousness of mankind. “There’s nothing else to do, there is no other way out. That is absolutely what has to be done and nothing else can work.”

Bohm also believes that the individual will eventually be fulfilled upon the completion of cosmic noogenesis. Referring to all the elements of the cosmos, including human beings, as projections of an ultimate totality, Bohm notes that as a “human being takes part in the process of this totality, he is fundamentally changed in the very activity in which his aim is to change that reality, which is the content of his consciousness.”

A YouTube video showing a model of David Bohm’s implicate order as a Schrödinger-wave hologram composed of free-particle wave functions:

https://www.youtube.com/watch?v=Jzfj4R52Q6I

Bohm was obsessed with language, particularly with the derivation of words. He delved into the roots of words, not only in his writing but also in his usual manner of discourse. His biographer F. David Peat tells a story about Bohm, as well as about himself.

He liked to go on and on about the roots of words. He’d say, for example, “And art, take art, there are words like artifice, and artery, and articulate, and Artemis…” And then I’d quickly throw in, “and artichoke.” “Yes, artichoke!” he’d say. Then he’d stop and laugh, realizing he’d been caught in his own stream of thought. “Artichoke….”[1]

In Bohm’s serious approach to his life’s work, the pursuit of science was inextricably intertwined with the processes of thought and language. As he spent time delving into those topics, his physicist peers must have wondered whether he hadn’t fallen down a rabbit hole and gotten lost. Why would a scientist of such creativity and potential divert to topics that belong to the soft pseudo-sciences of human functioning?

But to Bohm, the questions a researcher asks and the tools used to study them are inseparable, much as Niels Bohr had shown that the researcher, the measuring apparatus, and that which is measured together form an inseparable system. In a sense, Bohm extended Bohr’s ideas to an even finer level, to include attributes of the researcher’s own operating system. Alfred North Whitehead had said, “Every science must devise its own instruments. The tool required for philosophy is language. Thus philosophy redesigns language in the same way that, in a physical science, pre-existing appliances are redesigned.”[2] Surely Bohm would have agreed, and then extended language as a prime tool of the physical sciences as well.

The "2 Base number" Orbit Coincidence


There is a simple "law" concerning the radii of the orbits of the planets. Remarkable as it is for predicting those radii, there is no explanation of the law in terms of other laws of physics, and it amounts to a laughably easy use of base-2 numbers, which arguably diminishes its scientific value:

It is called Bode's Law because it was popularized by Johann Bode, but it was actually discovered by Johann Daniel Titius. It is a rule, or formula, for finding the orbital radii of the planets.
Bode's Law is a very important issue for informal science, as it is close to the status of a phenomenological law but has no theoretical explanation (or at least none accepted by most of the scholarly community), yet it is freakishly coincidental. An attempt by Poveda and Lara to confirm the law using data from a different planetary system is very interesting and important, as it could have helped us better understand the nature of the Titius-Bode Law (TBL). However, due to serious mistakes committed by the authors, their hypothesis was rejected, and the question of whether the TBL exists in other planetary systems (as well as the question of its best mathematical form in the Solar System) remains open.
Although the Titius-Bode Law gives a fairly good approximation of the planets' orbital radii, it appears to fail between Mars and Jupiter, where instead there are many asteroids, which might have combined to form a planet had Jupiter not been so close by. The law also fails to give the right figure for Neptune, though Pluto fits the value the law predicts for that position quite well.

This approach is indeed very interesting: had the hypothesis been correct, it would have been a major step towards proving the physical nature of this highly controversial law. Obviously, if the distribution of planetary distances were governed by the TBL not only in the Solar System but also in other planetary systems, it would clearly demonstrate that the TBL is something more than a simple numerical coincidence.
Now, since I'm no astronomical or maths genius and I'm very curious to understand this and other things from a NON-CONFORMIST scientific perspective, I'm now challenging fellow discoverers to assist me in understanding the implications this might have for the cosmos. It could bear on several things:
  • The relationship between bits and bytes and the binary values 0 and 1 on computers seems related to this, e.g. 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384
  • Gravity: does the Universe apply more keys by which gravity works, rather than unexplainable magical forces that pull on each other?
  • Perpetual Motion or a harmonic oscillator?
  • Relationship between sizes of planets in our solar system
  • Lastly, there seems to be a systematic relationship between the periods of planets revolving around a primary body. The distances of the planets from the Sun seem to be based on the numerical sequence 0, 3, 6, 12, 24, 48… Adding 4 to each number and then dividing by 10 gives the sequence 0.4, 0.7, 1, 1.6, 2.8, 5.2, which is a reasonable representation of distances in astronomical units for most planets.
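The doubling rule in the last bullet is easy to check with a few lines of code. This is just a sketch: the function name is mine, and the "actual" semi-major axes are standard rounded values in AU (with Ceres standing in for the asteroid belt), not figures taken from this post.

```python
# Titius-Bode rule: take the sequence 0, 3, 6, 12, 24, ... (each term
# after 3 doubles the previous one), add 4, and divide by 10 to get a
# predicted orbital radius in astronomical units (AU).
def titius_bode(n_planets):
    terms = [0, 3]
    while len(terms) < n_planets:
        terms.append(terms[-1] * 2)
    return [(t + 4) / 10 for t in terms[:n_planets]]

# Rounded semi-major axes in AU (standard textbook values, not from the
# post); Ceres represents the asteroid belt's "missing planet".
actual = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.54, "Uranus": 19.2,
    "Neptune": 30.1, "Pluto": 39.5,
}

for (name, a), pred in zip(actual.items(), titius_bode(len(actual))):
    print(f"{name:8s} predicted {pred:6.2f} AU   actual {a:6.2f} AU")
```

Running this shows exactly the pattern described above: the rule tracks the planets well out to Uranus (predicted 19.6 AU), fails badly at the Neptune slot (predicted 38.8 AU against an actual 30.1 AU), while Pluto at 39.5 AU lands close to that 38.8 AU prediction.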

Fellow researchers (not Newtonian scientists) may join me at http://physic-spirit.blogspot.co.za/ on a collaborative journey where we chuck conformist science out the window and rely on our own "grey matter" to get to know our Cosmos.

P.S. Follow me and share this across all the informal science forums you belong to!
Artist's concept of a distant planetary system (Photo credit: Wikipedia)

Supermoon Full Moon Nov 2016

Labels: , ,

November's full moon on Monday (Nov. 14) will be the biggest and brightest one since 1948, making it a great time for stargazers around the world to get outside and marvel at the lunar sight. But if it happens to be cloudy in your area, don't despair: you can still watch the so-called "supermoon" online in several live webcasts, starting tonight (Nov. 13). The Full Beaver Moon of November is called a supermoon because the full phase is taking place at the Moon's closest point in its orbit around the Earth, also called perigee. NASA says the Moon will appear about 14 percent larger than a typical full moon, and it won't look this large again until 2034.

The Moon follows an elliptical path around the Earth, with an average eccentricity of about 0.055 (a perfect circle has an eccentricity of 0). This means that at its closest approach the Moon comes within 363,400 km of our planet, and at its most distant it is 405,500 km away. When the Moon is full at the perigee of this orbit, it has come to be known as a "Supermoon". And yes, we use the designation "Supermoon" because, even though it was originally coined back in 1979 by an astrologer, NASA has now adopted it. We shall too, partly because the astronomy of why this month's Supermoon has gotten so much attention is interesting. "Supermoon" is also an easier term to use than, say, "perigee full Moon".

But this month's Supermoon is special. The eccentricity above is calculated from the Earth-Moon system alone, but other celestial bodies also influence the Moon's orbit through gravity. The Sun plays the largest role, but so do Jupiter and even some of the smaller planets. When these other influences are factored in, the eccentricity of the Moon's orbit can actually vary from as little as 0.026 to as much as 0.077. A more eccentric lunar orbit brings the perigee nearer the Earth, and when this perigee occurs during a full Moon, we get an extra-Supermoon. That is what will happen on Nov. 14, when the Moon will come within just 356,509 km of Earth, its closest approach since Jan. 26, 1948. The Solar System won't line up this well again for a lunar approach until Nov. 25, 2034.

Although Monday's closest approach will be at 6:23am ET, the maximum Supermoon will not be visible until the full Moon at 8:52am ET. Unfortunately for the eastern half of the United States, this will occur after sunrise. But no worries: the Moon will appear almost the same as Monday morning's Supermoon during the pre-dawn hours and on Monday night. In the UK and continental Europe, look for the Supermoon late on Monday afternoon or early evening. It will also be visible on Tuesday evening, though marginally smaller and less full.

A comparison of a "normal" Moon and a Supermoon (Laurent Laveder via Sky & Telescope).

Despite all the talk of a Supermoon, casual observers will have a hard time discerning the difference between a "normal" Moon and a Supermoon. While the Moon will be about 14 percent larger in size and 30 percent brighter, its appearance will not be dramatically different. (Source: www.arstechnica.com)