The slowing of moving clocks ("time dilation") is one of the most famous results of Einstein's Special Theory of Relativity. Because it relates to time, it sounds very arcane and mysterious, but in truth it is very easy to get a concrete picture of how it happens.
This is because someone invented the wonderful example of the "light clock". I will first go over this example, and then explain why the same mechanism also applies to more normal clocks (not to mention every other physical process, e.g. aging).
Fig. 1: Light clock at rest.
Figure 1 shows a light clock sitting still with respect to us. The clock consists of two mirrors between which a light beam bounces back and forth. A counter counts the bounces, allowing us to measure time. I'm not sure whether such a clock has been built, but it's certainly possible in principle.
Fig. 2: Light clock in motion (aboard a fast-moving spacecraft).
Figure 2 shows a light clock flying past us aboard a spaceship. The light still bounces back and forth between the mirrors, but what we see now is that it takes a longer path than it did before, because the mirrors keep moving. Since the light still moves with its accustomed speed (denoted "c"), each bounce takes longer. It's actually easy to compute the exact factor of slowing using this picture; see the Wikipedia article.
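In fact, here is the computation in a couple of lines, using the picture above (with d the distance between the mirrors and v the speed of the ship). While the light makes one crossing, the far mirror slides sideways, so the light travels the hypotenuse of a right triangle: (c t)^2 = d^2 + (v t)^2, which gives t = d / sqrt(c^2 - v^2). For the clock at rest the same crossing takes t0 = d / c, so each tick of the moving clock is stretched by the factor t / t0 = 1 / sqrt(1 - v^2/c^2), the famous "gamma" factor of Special Relativity.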
So the moving light clock slows down because the light beam has trouble catching up to the mirrors it is bouncing between. The main ingredient that creates this result is that the speed of light doesn't depend on the thing that emitted it; light, unlike a tennis ball, doesn't move any faster after bouncing off of a moving mirror.
And why is this? It is because light is a wave, and the speed of a wave depends on its medium, not on the emitter. Think of a boat: its wake moves at the same speed regardless of how fast the boat is moving.
The great revolution of 19th century physics was Maxwell's wave theory of light, and it is from this that Relativity directly sprang. Currently all of our theories of physics are wave theories and therefore they all embody similar Relativistic effects; indeed, their mathematics is matched up such that they all embody the exact same effects.
Finally let's consider more "normal" clocks, like ticking mechanical clocks or digital clocks. These clocks are really just light clocks in disguise, because they work using electrical forces, and electrical forces are carried by the electromagnetic field, the same field whose vibrations we call light. Mechanical clocks consist of atoms which are bound together by electrical forces, while digital clocks consist of electrons being shunted around by various electrical forces.
A normal clock is basically the same as a light clock, but with many, many mirrors. Each atom is like a mirror, and the electrical forces between the atoms are like the light wave bouncing in the light clock. If the clock is moving rapidly, those forces take longer to travel between the atoms (just as the light takes longer to travel between the moving mirrors), and the operation of the whole clock slows down.
Obviously there is much more that should be said here. For one thing, there are processes that don't involve light at all, e.g. those mediated by the strong or weak force; but as pointed out above, those forces are also described by wave theories sharing the same basic property that causes the light clock to slow down.
And one should really discuss the "paradoxical" symmetry of the time dilation: each observer sees the other's clock running slow. It seems impossible, but that's obviously how the light clocks behave, so it can't really be a paradox - and it isn't. But I have to leave it there.
Sunday, December 13, 2009
Tuesday, December 8, 2009
Heat pump efficiency
Here's something that surprised me...I guess I didn't pay close enough attention in thermodynamics class....
What's the most efficient way to heat a house, a) burn natural gas, or b) run an electrically powered "heat pump" system?
I would have answered a), thinking that nothing could be more efficient than to burn an energy source directly into heat. But this is totally wrong. It is actually far more efficient to let the power company burn that gas to generate electricity, and then use the electricity to run your heat pump.
The outside air may be colder than the inside, but it still stores plenty of heat - the only trick is how to get it from the outside to the inside. Heat doesn't naturally move from a colder place to a hotter place, so it takes energy to pump it, but there is a multiplier factor: a given amount of energy can transfer several times that amount of heat.
Heat pumps really seem like a case of "something for nothing". How can energy E magically pump 3E or 4E of heat from the freezing outdoors to the inside of your house?
The first key is that the outside and inside temperatures are actually not that different when measured on the Kelvin scale, i.e., counting up from absolute zero at about -460 Fahrenheit. The difference between 68 degrees indoors (about 293 K) and 32 degrees outdoors (about 273 K) is only about 7% on this scale.
The second key is that the obstacle to transferring heat from outdoors to indoors is not energy, but entropy. After all, we are not talking about creating any energy - just moving it around. Energy ordinarily doesn't move from cold places to hot places because it has lower entropy in the hot place [1]; however, the entropy difference is not that great at everyday temperatures, because it depends on the temperature difference measured in Kelvin.
The third key is that the energy we use to run the heat pump has to be in a very low-entropy form, such as natural gas, rather than a high-entropy form such as the air inside the house. (We could not use the energy in that air to power the heat pump!)
To pump energy from outside to inside, then, all we have to do is make up the relatively small difference in entropy, which we can do by taking a little bit of low-entropy energy and converting it to high entropy.
Of course this doesn't tell us how to make a heat pump, it just tells us something about the performance we can expect. But a heat pump is not complicated - it is just a refrigerator or air conditioner turned backwards.
Added 8/4/2013: Since the heat pump is just a standard heat engine running in reverse, its "efficiency" is the inverse of a heat engine's. An ideal heat engine converts heat into work with efficiency W/Q = (H-C)/H, where H and C are the hot and cold temperatures (in Kelvin), W is the work done, and Q is the heat drawn from the hot side. The "efficiency" of the ideal heat pump is just the inverse of this, Q/W = H/(H-C), which is always greater than one - and even a reasonably well constructed real pump stays comfortably above one.
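For concreteness, here is a tiny back-of-envelope script (my own sketch; a real heat pump falls well short of the ideal Carnot limit, but the formula above sets the ceiling):

```python
def ideal_cop(t_hot_f, t_cold_f):
    """Carnot ceiling for a heat pump: heat delivered per unit of work, Q/W = H/(H - C),
    with both temperatures converted to Kelvin."""
    to_kelvin = lambda f: (f - 32.0) * 5.0 / 9.0 + 273.15
    hot, cold = to_kelvin(t_hot_f), to_kelvin(t_cold_f)
    return hot / (hot - cold)

# 68 F indoors, 32 F outdoors (the numbers used above):
print(ideal_cop(68, 32))   # ~14.7, the theoretical ceiling
# Real heat pumps deliver roughly 3-4 units of heat per unit of work,
# far below the ideal but far above the 1.0 you get by burning fuel directly.
```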
I learned this from David MacKay's book "Sustainable Energy - Without the Hot Air", which I highly recommend.
[1] Hot energy has lower entropy than cold energy because the cold energy is spread over more "degrees of freedom". For example, one joule at a cold temperature may be shared among N particles, while at a hot temperature it is shared among only N/2 particles, because the particles are all moving around faster on average.
Monday, December 7, 2009
Seeing quantum gravity?
Here's a beautiful line of research: look at very distant objects with the best possible telescopes, and see if their images are blurred due to spacetime fluctuations caused by quantum gravity. This is the subject of a recent paper posted to the arXiv with the evocative name "A Cosmic Peek at Spacetime Foam" (authors Wayne Christiansen, David Floyd, Y.J. Ng, and Eric Perlman).
The mixing of scales involved in this scenario is breathtaking. Photons coming to us from the most powerful objects in the universe - quasars - and traversing the longest distances we can measure - billions of light years - bring to us traces of the smallest entities ever conceived, namely the tiny fluctuations of quantum spacetime. Throw in some black hole theory and some quantum information theory, which are used to try to estimate the expected blurring effects, and one definitely gets what they call a "sexy" scientific paper.
So has quantum gravity been observed? Well...perhaps. There is a hint, but no more than that, of the behavior one would expect from one particular model (the behavior being the dependence of blurring on wavelength). It will take a better instrument to convert the hint into something meaningful.
This is definitely exciting data to look forward to from future, more accurate telescopes!
Below I've attached the actual quasar images used in the study, not because they really convey anything by themselves, but just because they are fun to think about...
Sunday, December 6, 2009
Falsifiability
What makes a theory "scientific"? Probably the most widely accepted notion is that it should be falsifiable, i.e., there should be some way, at least in principle, to disprove the theory. This sounds reasonable, but unfortunately it is logically possible - and looking increasingly likely - that the true underlying "theory of everything" is not falsifiable.
What if the theory predicts, for example, the existence of many different universes, unable to communicate with one another? This is a perfectly reasonable possibility, yet we could never disprove it. These "multiverse" theories even have explanatory value in helping us understand why our universe has the particular constants of nature necessary for life (because each universe has a different, random set of constants, so eventually the ones suitable for life will crop up).
Falsifiability is also a very tricky criterion to use in discriminating science from pseudoscience. For a theory to be falsifiable there have to be two "possible" universes, one in which the theory is true and one where it isn't. So we need to know what kind of universes are "possible"; but once we decide this then we don't need falsifiability anymore, since we will already know which theories are possible.
To me it seems that there is a very simple criterion for which universes can exist, namely reducibility to mathematics. As I have argued in another post, any possible universe must be founded on mathematics because only mathematical objects can actually be defined. This implies (as a trivial consequence) that the only "scientific" theories are those compatible with reduction to mathematics.
This criterion immediately rules out any theories involving gods or "supernatural" beings. One can argue at length over the hypothetical characteristics of these entities, but one thing their supporters will never agree to is that they might have a rigorous mathematical basis - because that would defeat the entire psychological purpose of believing in them.
The criterion may seem simplistic and reductive, but it does cast a clear light upon the issues - and one which happens to build upon, rather than shrugging off, the mathematical foundation we have discovered in our own universe.
My criterion is also more honest, I believe, since generally when scientists argue that certain things are "non-scientific" what they really mean is that those things could not possibly exist. For example, if "spirits" could exist and influence events, then those spirits could be studied by science and would not be "unscientific". To say they are "unscientific" is pointless - for that is the exact reason why believers want to believe in them; what one really means by "unscientific" is that they could not possibly exist.
Of course, to believe that only mathematical entities can exist is a belief; we can never prove this. However, it is a belief which matches our discoveries about our own universe, and which makes logical sense, and this is more than one can say about its numerous competing belief systems.
Wednesday, December 2, 2009
What is a clock?
What is a clock, what is a "good" clock, and why do any good clocks exist?
A clock is just a physical system which goes through a repeating cycle. The cycle can be anything from the swinging of a pendulum to the vibrations of the electromagnetic field. By counting the cycles we can attempt to measure the passage of time.
Of course our measurements won't be very useful unless the time taken for each cycle remains constant. A clock whose cycle time remains constant is a "good" clock, and we really need at least one of these to make any sense of time at all.
Fortunately, we can expect almost any reasonably-constructed clock to be good, in almost any reasonably-imaginable universe. The reason for this is that the laws of a reasonable universe have a symmetry known as "time translation invariance", which is a fancy way of saying that the laws today are the same as the laws tomorrow. This means that identical starting conditions give rise to identical evolution, regardless of time. Since each clock cycle starts with an identical configuration, each clock cycle unfolds the same, and takes the same amount of time.
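To make this concrete, here is a toy simulation (my own illustration, not a real clock design): a pendulum "clock" integrated under a time-independent law keeps a constant cycle time, while the same pendulum under a made-up, time-varying law does not.

```python
import math

def pendulum_periods(g_of_t, theta0=0.2, length=1.0, dt=1e-4, t_max=20.0):
    """Integrate d2(theta)/dt2 = -(g(t)/length) * sin(theta) and return the times
    between successive upward crossings of theta = 0 (one full cycle each)."""
    theta, omega, t = theta0, 0.0, 0.0
    crossings = []
    while t < t_max:
        prev_theta = theta
        omega -= (g_of_t(t) / length) * math.sin(theta) * dt   # semi-implicit Euler step
        theta += omega * dt
        t += dt
        if prev_theta < 0.0 <= theta:   # upward zero crossing: one full cycle completed
            crossings.append(t)
    return [b - a for a, b in zip(crossings, crossings[1:])]

# Time-translation-invariant law: every cycle takes the same time (a good clock).
print(pendulum_periods(lambda t: 9.8))
# A made-up universe whose law drifts with time: the cycle times drift along with it.
print(pendulum_periods(lambda t: 9.8 * (1.0 + 0.05 * t)))
```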
How do we know that this "time translation invariance" is actually true? We don't know for sure, but it is true for our current best-guess theories, and it is hard to see how life or any interesting structure could evolve in a universe without it. Evolution couldn't work if the next generation was subject to different laws from the current one - and the Earth itself probably could not sustain a regular orbit around a star.
So time translation invariance is a very fundamental assumption/observation of physics. Perhaps it is not surprising then that it is intimately connected to the most fundamental quantity of physics - energy. Energy conservation is the flip side of time translation symmetry. Mathematically they are simply different statements of the same thing; but this will have to be the subject of another post.
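For the curious, here is the statement in one line, leaving the details for that future post: in the Lagrangian formulation the energy H = (sum over coordinates of p * dq/dt) - L changes at the rate dH/dt = -∂L/∂t, so whenever the laws (the Lagrangian) have no explicit dependence on time, H cannot change: energy is conserved.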