Things are heating up in the field of dark matter.
One interesting idea making the rounds is that the very first stars may have actually been powered by dark matter!
It sounds paradoxical, since dark matter is dark precisely because it barely interacts with anything - so how can it burn to power a star? But in the early universe it could have been far denser than it is today, enough so that dark matter particles could annihilate against each other, releasing large amounts of energy.
These "dark stars" would be pretty odd creatures. They could reach vast proportions, comparable in size to our solar system, and weighing in at 1,000 times as much as our own sun. They would be extremely bright as well, around one million times the luminosity of our sun. And oddly enough, the dark matter would form just a tiny fraction of their mass; the vast majority would be normal matter.
When the dark matter runs out, after a few hundred thousand years, the normal matter would collapse to a supermassive normal star, and ultimately into a black hole. This could help resolve a puzzle: very large black holes appear to exist in the early universe, but nobody understands how they could grow so large in the time available.
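For a rough sense of what such an object would look like, the Stefan-Boltzmann law gives a blackbody's surface temperature from its luminosity and radius. The specific numbers below (a million solar luminosities, a radius of about 10 AU) are illustrative assumptions in the spirit of the figures above, not values taken from the papers:

```python
import math

# Physical constants (SI)
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
AU = 1.496e11         # astronomical unit, m

# Illustrative dark-star parameters (assumed for this sketch)
L = 1e6 * L_SUN       # ~a million solar luminosities
R = 10 * AU           # roughly solar-system-sized radius

# Blackbody relation: L = 4 pi R^2 sigma T^4
T = (L / (4 * math.pi * R**2 * SIGMA)) ** 0.25
print(f"Effective surface temperature ~ {T:.0f} K")
```

With these assumed numbers the surface comes out at only a few thousand kelvin - enormous and bright, yet cooler than the sun, which is part of what makes dark stars such odd creatures.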
To see the papers, check out Katherine Freese's website: http://www-personal.umich.edu/~ktfreese/index.html.
And in other dark matter news...
In the same Minnesota mine where the CDMSII experiment reported two possible dark matter detections earlier this month, another experiment called CoGeNT is reporting hundreds of events: http://www.nature.com/news/2010/100226/full/news.2010.97.html. These events are doubly interesting because they suggest an unexpectedly light dark matter particle.
Well, we should not get too excited just yet, since it is only one experiment and there are many possible complicating factors - but we can get a little excited...
and especially so, because the LHC turned on again yesterday, hopefully for real this time! With luck we will see results from the world of 7 trillion electron volts before year-end.
Sunday, February 28, 2010
Tuesday, February 23, 2010
Jan de Boer colloquium
Here's a popular-level online talk which goes over the current state of thinking on the whole fascinating mix of topics relating to black holes, string theory, "holography", and the AdS/CFT correspondence:
http://agenda.albanova.se/conferenceDisplay.py?confId=1900
This is really some pretty remarkable stuff. About half the talk is "ancient history" from the 70's, 80's and 90's, not new anymore but still fascinating (you can also read about it in the book "The Black Hole War"). The rest is on newer developments, particularly the application of string theory to quark/gluon physics and to high-temperature superconductivity. This is a story still very much under development.
At the end he says something which I find highly dubious. He claims that a person falling into a black hole would gradually lose consciousness as they hit the event horizon, and this, as far as I know, is not the accepted viewpoint at all. The generally accepted view is that the horizon is undetectable by someone falling across it. Indeed, we could be falling across one right now - perhaps for a huge black hole whose horizon is light-years across - and we wouldn't know the difference for millions of years, until we started to approach the actual singularity at the heart of the hole.
But on the other hand, it is also generally accepted that if you watch someone falling into a hole from the outside, then you see them get closer and closer to the horizon but never actually fall in. And furthermore, the horizon has a temperature, although generally a low one. So from the outside it looks like a person should be encountering warm temperatures as they fall in, which might dissolve them or cook them or something.
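That horizon temperature really is "generally a low one": the Hawking temperature is inversely proportional to the hole's mass, T = ħc³/(8πGMk_B). A minimal sketch of the numbers:

```python
import math

# Physical constants (SI)
HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
KB = 1.381e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

# A solar-mass hole is far colder than the cosmic microwave
# background (~2.7 K); a supermassive hole is colder still.
print(hawking_temperature(M_SUN))        # ~6e-8 K
print(hawking_temperature(1e9 * M_SUN))  # ~6e-17 K
```

So for any astrophysical black hole the horizon temperature, as seen from far away, is absurdly tiny - the "cooking" worry really concerns what a hovering or distant observer attributes to the region near the horizon, not a big ambient temperature.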
This relates to the idea of "black hole complementarity", according to which there are two equally valid but complementary ways to look at a black hole: the view from outside, and the view falling in. But this seems to violate the principle, because the person falling in could be sending radio messages back home, and those messages would say, "situation normal, nothing to report". But if the infalling person actually sees a temperature and is getting cooked, then their messages would surely mention this fact.
So there is a conflict here, one which has been debated for several decades now, apparently without resolution. Personally I don't believe that the infalling observer would see anything, at least for big black holes. To me the view "from outside" seems pathological and highly suspect, because of the strong warping of time near the event horizon, relative to a distant spot.
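That "warping of time" can be made concrete with the Schwarzschild time-dilation factor: a clock hovering at radius r runs slow by √(1 − r_s/r) relative to a distant clock, and the factor goes to zero at the horizon radius r_s - which is why the outside view sees an infaller freeze there forever. A minimal sketch:

```python
import math

def dilation_factor(r_over_rs):
    """Schwarzschild time-dilation factor sqrt(1 - r_s/r) for a
    static (hovering) clock at radius r, in units of the horizon
    radius r_s. Goes to zero as r -> r_s."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

# Approaching the horizon, the hovering clock slows without bound
for x in (2.0, 1.1, 1.01, 1.0001):
    print(f"r = {x} r_s: clock rate = {dilation_factor(x):.4f}")
```

Note this factor applies to clocks held static outside the hole; a freely falling clock is a different story, which is exactly where the two "complementary" pictures part ways.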
That's my .02, but I've been wrong before!
Friday, February 19, 2010
Signs of spring
Could it be that springtime is near, not just in Earth's climate (in the Northern hemisphere at least!), but also in experimental particle physics?
It has been a long, trying winter, with few really exciting observations since the early 1970's. Experiments at Fermilab, SLAC, and CERN confirmed and refined the so-called "Standard Model" of particle physics, for which Nobel prizes were duly dished out during the 70's, 80's, 90's, and even into the last decade. The tau lepton and top quark were confirmed (c. 1975 and 1995, respectively), filling out most of the missing pieces of the bestiary, with the Higgs boson remaining the one stubborn holdout.
These are terrifically important results, don't get me wrong. Four decades is not a long time to test and verify such a complex theory as the Standard Model. Nevertheless it has been frustrating for theorists, who - we can be honest here - find the Standard Model rather clunky and unloveable, and feel certain that it must be incomplete. Candidates to extend it abound, from supersymmetry to technicolor to strings, but very little data exists to constrain them. The cancellation of the SSC in 1993 was a major disappointment; arguably, the most fruitful development to emerge from particle physics laboratories during this period was the World Wide Web, invented at CERN in 1989 (just 21 years ago - but it feels like a century!).
However, all that may be poised to change. The Large Hadron Collider at CERN is finally almost ready to take data, and it should be powerful enough to go beyond the Standard Model. At the very least it should discover the Higgs or, failing that, blast a big hole in the Model.
But what prompted me to write this post was a recent, tantalizing result on dark matter. This is the mysterious matter which seems to comprise some 80% of the matter in the universe, but which has never been directly seen.
At least until now - perhaps. An experiment called "CDMS II", utilizing fantastically sensitive detectors buried in a mine in Minnesota, reported in December the detection of two possible dark matter collisions (the paper came out in Science last week). Unfortunately, two events are not enough to confidently claim a discovery; the researchers estimated roughly a 75% probability that the events are due to dark matter rather than background noise.
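To see why two events don't make a discovery, it helps to run the Poisson numbers. Taking an expected background of about 0.8 events (a round number assumed here for illustration, in the ballpark of what CDMS quoted), the chance of the background alone producing two or more events is already substantial:

```python
import math

def poisson_prob_at_least(k, lam):
    """P(N >= k) for a Poisson-distributed count with mean lam."""
    return 1.0 - sum(math.exp(-lam) * lam**n / math.factorial(n)
                     for n in range(k))

background = 0.8  # assumed expected background events (illustrative)
p = poisson_prob_at_least(2, background)
print(f"P(>= 2 background events) ~ {p:.0%}")  # ~19%
```

A roughly one-in-five chance of a pure background fluke is nowhere near the "5 sigma" standard particle physicists demand, which is why everyone is being so careful with the language.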
Although not definitive, this is very exciting since it would be the first detection ever of a particle from beyond the Standard Model. Indeed, the most favored dark matter candidate at present is the so-called "Lightest Supersymmetric Particle", and theorists would love to get their hands on any concrete information about this creature.
So, strictly speaking, there's still nothing definite to report. But there are gathering signs of promise everywhere. Punxsutawney Phil may have predicted a long winter this year - but what does a groundhog know about particle physics anyway?
Monday, February 15, 2010
Applied String Theory!?
Now here's a surprising twist in the string theory story, to say the least...
I blogged a little bit the other day about the "AdS/CFT" correspondence, which relates string theory in certain spaces to non-string theories on the surface of those spaces. This bizarre dimension-shifting idea is 13 years old now, but its ramifications continue to expand. Juan Maldacena's paper proposing the idea was, as of last year, the second most-cited paper of all time in the SPIRES high-energy physics database, and will certainly hit number one soon. (I have disqualified an unfair review paper which actually sits at number one.)
When first conceived, it seemed like a novel way to figure out things about string theory and therefore, perhaps, about quantum gravity. It seemed like one more bit of cool but ultimately arcane mathematics coming out of string theory.
But in the last few years that logic has been turned on its head, and physicists have found it very fruitful to go the other way - to use string theory to understand the surface theories, which are "quantum field theories" quite a bit like the one believed to describe quarks in atomic nuclei.
Now, the quark theory ("QCD") is very hard, because it is "strongly interacting". However, strongly interacting theories are precisely the ones with good "AdS/CFT" dual descriptions. So we have the bizarre phenomenon of actual observable properties of colliding nuclei - messy, hot globs of quarks and gluons - being described in terms of 5-dimensional gravity, strings, membranes, and black holes! I don't think, 15 years ago, that anyone in their wildest thoughts had imagined that black hole physics could be relevant in any way to nuclear interactions, let alone black hole physics in 5 dimensions!
And, more speculatively, some condensed matter systems (e.g. high-temperature superconductors), near the temperatures of their phase transitions, can also be connected to a dual gravity description. This, I believe, is still much more tentative than the quark connection.
Note: nobody is saying that actual black holes or other quantum gravity effects are created in nuclear collisions or high-temperature superconductors. The string theory and gravity here are just a "dual description", or equivalent way of looking at them. What's acting like a "string" in the quark-gluon soup would actually be a chain of gluons or something like that. What's acting like the "5th dimension" would actually be the energy scale of the reaction. And now I am getting out of my depth and cannot comment in further detail.
For those of you who have read about this elsewhere in the media, I'm sorry I probably haven't added much. For those who haven't, I hope you find this development as remarkable as I do! I mean seriously, black holes in nuclear physics, of all places!
Wednesday, February 10, 2010
The Black Hole War
I just finished a really good popular physics book, the best I can remember reading in a long time. It is "The Black Hole War", by Lenny Susskind, an eminent Stanford professor of physics. Among other major achievements, Susskind has a strong claim to be one of the inventors of string theory, and - unlike with some other current popular authors - everything he says can be taken extremely seriously.
Susskind's topic is one that is close to my heart, indeed I did my dissertation on it, more or less. I was part of the Santa Barbara group of string theory physicists - a.k.a. "the enemy" in Susskind's book, at least as far as the "black hole war" goes. My vote was counted in the tally shown on p. 262 of the book; unfortunately, I'm pretty sure I voted for the "wrong" side, along with the rest of the Santa Barbara crew.
The problem, and the subject of the "War" which Susskind recounts, is simple: what happens to matter swallowed up by a black hole? One possibility is that it just vanishes forever, and this was the general belief until Hawking - in one of the most beautiful computations ever carried out, and the first to combine general relativity and quantum mechanics in any substantial way - showed that black holes have a temperature and they radiate energy like every other warm object. Eventually, they "evaporate" completely and vanish.
But Hawking's calculation opened a huge can of worms because it indicated no connection at all between the matter which went in and that which came out. In other words, the evaporating black hole creates "something from nothing". Energy is conserved, to be sure, but everything else about the matter - all of its "information" - is erased, in a mathematically complete sense, and replaced by a featureless, memoryless, random collection of particles.
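The evaporation, by the way, is absurdly slow for any astrophysical hole: the Hawking lifetime grows as the cube of the mass, t = 5120πG²M³/(ħc⁴). A back-of-the-envelope sketch using standard SI constants:

```python
import math

# Physical constants (SI)
HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
YEAR = 3.156e7      # seconds per year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time t = 5120 pi G^2 M^3 / (hbar c^4),
    converted to years. Lifetime scales as mass cubed."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"{evaporation_time_years(M_SUN):.1e} years")  # ~2e67 years
```

Some 10^67 years for a solar-mass hole, versus a universe about 1.4 x 10^10 years old - so the information paradox was always a matter of principle, never of observation, which is part of why most physicists were content to shrug it off.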
Now, this is not how physics has ever worked. In physics, the situation now comes from the situation before, through a one-to-one connection. The situation now does not just arise spontaneously from nothing, in some random state. That just sounds wrong, and it seems mathematically impossible to implement.
However, wrong as this consequence seemed, Hawking's calculation seemed right, and most physicists didn't see the big deal since there were no black holes handy to test with anyway.
But a few physicists, most notably Susskind and 't Hooft, recognized the problem as a critical matter of principle that should be resolved. And they felt quite strongly that Hawking's picture was wrong, and that proving it would teach us profound things about gravity and the universe.
In 1994, the paradox seemed completely impenetrable; but by 1997 it had been resolved, more or less, and Susskind and 't Hooft were proved right.
History will record these three years as among the most momentous in science. Below I present their chronology, with some introductory years added for context, to give the reader some feeling for the times, which were a strange admixture of excitement and despair. People were waiting for something big to happen, not really believing that it would - and then it did. There's a lesson in there, not least for yours truly, who quit the field just before it exploded. I was at Santa Barbara from 1989-94, a student of Steve Giddings.
March, 1991
Witten discovers a simplified, 2-dimensional black hole solution in string theory. It is exciting both because it is simple, and because it exists within string theory, a partial theory of quantum gravity, suggesting that it might illuminate the paradox of Hawking.
November, 1991
Callan, Giddings, Harvey, and Strominger propose the "CGHS" model of black hole formation and evaporation, based on Witten's black hole.
1992
The "black hole information problem" takes the string theory community by storm, sparked by the string-inspired CGHS model, and helped by a lull in progress in string theory itself. I began working with Giddings and we wrote a followup to the CGHS paper.
1993
The Santa Barbara Black Hole Conference, a.k.a. "The Battle of Santa Barbara", in Susskind's dramatic rendition. Heated debate, fascinating ideas - but no resolutions.
In fact the most important result, by far, to be announced during the conference is the proof of Fermat's Last Theorem.
Meanwhile, in a major blow to the particle physics community, the SSC accelerator is canceled by Congress. My thesis advisor Giddings is quoted in a major news magazine saying that, had he known that would happen, he would have gone to law school.
1994
The calm before the storm. Black hole work mushrooms in string theory, and the ideas remain tantalizing, but true solutions seem wholly out of reach. Many, including yours truly, are very discouraged.
March, 1995. University of Southern California.
At the Strings '95 conference, Witten informs a stunned audience that string theory, previously thought to reside in 10 dimensions, actually has a hidden, 11th dimension. The most famous scientific talk in recent memory, it sparks a revolution in string theory.
The significance of it all was still pretty unclear though. At the conference final dinner, I listened to Susskind's wrapup speech, in which he described the whole field as "angels dancing on the head of a pin". I am sure he never really believed that, and if you read his book you won't believe it either - but it still might be true!
October, 1995
String theory expands yet again, as Joe Polchinski of Santa Barbara discovers 10 additional types of matter hidden within it, the "D-branes". D-brane theory is so beautiful and compelling that once you study it, you can't believe that string theory could not be right.
Polchinski wrote me a letter of recommendation upon my graduation; however, I suspect that it was not a very good letter! At any rate, I left the field several months before his historic discovery.
January, 1996
Vafa and Strominger use D-branes to build a model black hole for which they can identify the internal states directly and see that information is not lost. The problem is unraveling.
November, 1997
Juan Maldacena, using D-branes as well as most other major ideas of the previous two decades of theoretical physics research, conjectures that string theory in certain 5-dimensional spaces is equivalent to a "dual", non-string theory in 4 dimensions. It is both mind-blowing and arcane, but it has over 6,000 citations and appears to solve problems even in the previously-fossilized field of nuclear physics.
Shortly thereafter, Witten shows that creating a black hole in the 5-dimensional space is the same as adding temperature to the dual 4-dimensional theory.
The veil of the Black Hole is lifted, at least in part, and nobody believes any more that information is sucked into a hole, never to return. The "war" described in Susskind's book is over.
Friday, February 5, 2010
The problem with Quantum Mechanics
Everyone knows Quantum Mechanics is weird. Many of its principles sound paradoxical.
Matter is both wave and particle. Position and velocity can't be simultaneously specified. Particles have spin even though they can't be spun. Particles carry entanglements across space, allowing a form of teleportation. "Empty" space seethes with activity. Small-scale physics is unpredictable and fundamentally random.
Weird, for sure. But is there any real problem here? Does the theory have some kind of inconsistency or mathematical difficulty, or does it just conflict with our inborn intuitions?
I say mathematical difficulty because that is the only kind of problem that would be a real problem (aside from experimental contradiction). If a theory makes mathematical sense then there's no reason to believe it couldn't represent a universe, no matter how badly it contravenes "common sense". Indeed, mathematics is just extrapolated common sense, so anything that makes mathematical sense can be assimilated into our intuition eventually.
But Quantum Mechanics has resisted this assimilation for almost a century now. The reason for this lies not with any of the oddities cited in the second paragraph; they are all perfectly comprehensible with a bit of study.
The problem with Quantum Mechanics is that it contains no consistent way to say what exists. This is usually referred to as the "measurement problem", because physicists encounter it when studying the measurement process, but in truth virtually everything is a kind of measurement. To even say that something exists, even something as seemingly obvious as a rhinoceros or a planet, is to make a type of measurement.
In Quantum Mechanics the universe consists of the "wave function", Ψ. However, Ψ doesn't describe any actual particles, fields, or rhinoceroses, but only the probabilities that they might exist. In order for them to actually exist, there must be a "measurement". But a measurement requires a measurer, and the theory doesn't tell us what or who are the measurers.
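To make "Ψ only gives probabilities" concrete, here is a minimal textbook-style sketch (my own illustration, not from the post): the Born rule squares the amplitudes of Ψ to get outcome probabilities, but only on the condition that a measurement occurs.

```python
import math

# A toy wave function for a two-state system: amplitudes for
# finding the particle "here" versus "there".
psi = [3/5, 4/5]

# The Born rule: probability = |amplitude|^2. These are
# probabilities of measurement outcomes, not statements about
# what exists in the absence of a measurement.
probs = [abs(c)**2 for c in psi]

# For a normalized wave function the probabilities sum to one.
total = sum(probs)
print(probs, total)
```

The sum comes out to one only because Ψ is normalized and the two outcomes are assumed perfectly distinguishable - exactly the assumption the rest of the post questions.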
Many perfectly valid Quantum Mechanical universes indeed have no actual measurements. Consider a sparsely-filled box of electrons, and imagine that this is the entire universe. Nothing in this universe provides any possibility of measurement, and therefore nothing in this universe really exists. We say the box "contains electrons" because we wrote down our normal Ψ theory for electrons, but beyond this there is nothing - no events, no history, and therefore no real electrons either.
But, one might object, surely in this electron universe one can talk about the possible locations and collisions of the electrons, and compute their probabilities? Maybe there's nobody around to see them, but so what - can't they still exist?
Alas, no. In the electron box there are no computable probabilities because the future trajectories of the electrons continue to interfere with each other. It is this interference which is the root of the problem we are discussing. The function of a "measuring device", or "observer", is to wash out the interference of outcomes in the future. Once the interference is washed out, the different outcomes are distinguishable and their probabilities make sense to a high degree - which, for probabilities, means that they very nearly add up to one.
And here is the mathematical crux of the problem: the probabilities never quite add up to one. There's no such thing as a perfect measuring device, because everything is just globs of matter in the first place. A big, complicated thing like a human being does a pretty good job of washing out interference (or to use the more technical lingo, "creating decoherence") but it is never perfect.
So we have "probabilities" that, mathematically speaking, aren't probabilities at all, because they don't add to one. It's like saying there's a 50% chance of flipping heads and a 51% chance of tails; in fact that's exactly what the prediction could be in extreme cases of interference, like the electron box world.
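The heads/tails analogy can be checked with a two-path toy calculation (an illustrative sketch of standard interference arithmetic, not anything from the original post). When two indistinguishable paths feed the same outcome, quantum mechanics adds amplitudes before squaring, and the cross term pushes the "probability" away from the classical sum:

```python
import math
import cmath

# Two equal-weight amplitudes for one outcome reached along two
# indistinguishable paths, with a relative phase between them.
a = cmath.exp(1j * 0.0) / math.sqrt(2)   # path 1
b = cmath.exp(1j * 0.4) / math.sqrt(2)   # path 2

# Classical reasoning: add the per-path probabilities.
p_classical = abs(a)**2 + abs(b)**2      # exactly 1.0

# Quantum reasoning: add the amplitudes, then square.
# |a + b|^2 = |a|^2 + |b|^2 + interference term
p_quantum = abs(a + b)**2                # 1 + cos(0.4), well above 1

print(p_classical, p_quantum)
```

With this phase the interfering "probability" is about 1.92 - the 50%-plus-51% coin of the analogy, only more extreme.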
The most accepted solution to this problem, as far as I know, is the one I just alluded to: large blobs of matter create "decoherence" in the things they touch, allowing them to wash out interference to a very high degree. In other words, the probabilities almost add up to one. They come so close that one can argue that the discrepancy can never be noticed in practice.
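Decoherence can be caricatured by damping the interference term with a factor between 1 (fully coherent) and 0 (perfectly measured). The function and parameter names below are my own illustrative choices; the point is only that a large measuring device drives the factor very close to zero, but never exactly to zero:

```python
import math

def outcome_probability(phase, gamma):
    """Total 'probability' of an outcome fed by two equal-weight paths.

    gamma = 1: fully coherent paths (full interference).
    gamma = 0: perfectly decohered paths (classical addition).
    Real measuring devices sit very near, but never at, gamma = 0.
    """
    # |a|^2 + |b|^2 + (damped) interference term
    return 0.5 + 0.5 + gamma * math.cos(phase)

# No decoherence: the "probability" overshoots one badly.
print(outcome_probability(0.4, 1.0))
# Strong but imperfect decoherence: almost exactly one,
# with a residual discrepancy that never quite vanishes.
print(outcome_probability(0.4, 1e-12))
```

The residual `gamma * cos(phase)` term is the mathematical face of the embarrassment: it can be made astronomically small, but the formalism never sets it to zero.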
Well, we don't "notice" the inconsistency between General Relativity and Quantum Mechanics in practice, either - but that hasn't stopped two generations of physicists from trying to resolve it. This problem of probabilities that don't add up to one is equally embarrassing, but gets little attention because nobody has a clue where to start. It is built so deeply into the structure of Quantum Mechanics, and that structure seems so impervious to tinkering, that the effort seems futile.
Personally, I am torn. I believe that anything that exists must rest on a consistent mathematical foundation. The fact that our own universe is built from mathematics suggests to me that this view is right. If it could have been some other way - then why isn't it?
But inconsistent mathematics is as bad as no mathematics at all. Inconsistent mathematics has all the same problems as gods or magic or any of the other non-mathematical fairy tales people have dreamed up over the eons. So, given that the universe clearly uses mathematics, why would it slip in an inconsistency at the very lowest level? Why bother with math at all, in that case?
Yet, I have a feeling that the problems with Quantum Mechanics will not be resolved. The probabilities add up nearly to one in the universe we have right here, so even though it doesn't make any sense in principle, and it wouldn't make sense for some other universes, it's what we will be stuck with - like it or lump it.
For some reason, our universe chooses to exist at the very boundary of conceivability. Perhaps it is a joke of some kind, or perhaps for some reason this is the only kind of existence that is really possible.