# Locklin on science

## The enigma of the Ford paradox

Posted in chaos, physics by Scott Locklin on March 7, 2013

“God plays dice with the Universe. But they’re loaded dice. And the main objective of physics now is to find out what rules were they and how we can use them for our own ends.” -Joe Ford

Joe Ford was one of the greats of “Chaos Theory.” He is largely responsible for turning this into a topic of interest in the West (the Soviets invented much of it independently) through his founding of the journal Physica D. It is one of the indignities of physics history that he isn’t more widely recognized for his contributions. I never met the guy, as he died around the time I began studying his ideas, but my former colleagues sing his praises as a great scientist and a fine man. One of his lost ideas, developed with his student Matthias Ilg and his coworker Giorgio Mantica, is the “Ford paradox.” The Ford paradox is so obscure that a Google search on it only turns up comments by me. This is a bloody shame, as it is extremely interesting.

Definitions: In dynamical systems theory, we call the motion of a constrained system an “orbit.” No need to think of planets here; planets are associated with the word “orbit” because they were the first orbital systems formally studied. It’s obvious what an orbit is if you look at the Hamiltonian, but for now, just consider an orbit to be some kind of constrained motion.

In most nontrivial dynamical systems theory, we also define something called the “phase space.” The phase space is that which fully defines the dynamical state of the system. In mechanics, the general convention is to define it by the position and momentum of the objects under study. If the object is constrained to travel in a plane and its mass doesn’t change, like, say, a pendulum, you only have two variables: angular position and its time derivative, and you can easily visualize the phase space:

For my last definition, I will define the spectrum for the purposes of this exposition. The spectrum is the Fourier transform with respect to time of the orbits. Effectively, it is the energy levels of the dynamical system. If you know the energy and the structure of the phase space, classically speaking, you know what the motion is.

Consider a chaotic system, such as the double pendulum. Double pendulums, as you might expect, have two moving parts, so the phase space is four-dimensional, but we can just look at the angle of the bottommost pendulum with respect to the upper pendulum:

If you break down the phase space into regions and assign a string to each region, one can characterize chaos by the length of the string in bits. If the orbit produces a repeated string, the system is non-chaotic. Chaotic systems are random number generators: they generate random strings. This is one of the fundamental results of modern dynamical systems theory. A periodic orbit can be reduced to simple sequences, like {1 0 1 0 1 0} or {1 1 0 1 1 0 1 1 0}. Effectively, periodic orbits are integers. Chaotic orbits have no simple repeating sequences; chaotic orbits look like real numbers. Not floats, which can be represented in a couple of bytes: actual real numbers, like the base of the natural log $e$, or $\pi$, or the golden ratio $\phi$. In a very real sense, chaotic orbits generate new information. Chaotic randomness sounds like the opposite of information, but noisy signals contain lots of information. Otherwise, qua information theory, you could represent the noise with a simple string, identify it, and remove it. People have invented mechanical computers that work on this principle. This fact also underlies the workings of many machine learning algorithms. Joe Ford had an extremely witty quotable about this: “Evolution is chaos with feedback.”
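The symbol-string picture can be sketched in a few lines of Python. This is my illustration, not anything from Ford: it swaps the double pendulum for the logistic map $x \mapsto rx(1-x)$, which is far cheaper to iterate, partitions the unit interval at $x = 0.5$, and emits one bit per time step. A periodic parameter produces a repeating string; a chaotic one does not.

```python
# Symbolic dynamics sketch: partition the phase space ([0, 1] here) into two
# regions and record one symbol per time step. The logistic map stands in for
# a real mechanical system; the parameter r selects periodic vs. chaotic motion.

def symbol_string(r, x0=0.3, n=64, burn=200):
    """Return n symbols (a string of 0s and 1s) of the logistic map orbit."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        bits.append("1" if x >= 0.5 else "0")
        x = r * x * (1 - x)
    return "".join(bits)

print(symbol_string(3.5))  # settles onto a period-4 orbit: the string repeats
print(symbol_string(3.9))  # chaotic regime: no short repeating block
```

With r = 3.5 the string reduces to a repeated four-symbol block, i.e. an integer in Ford’s sense; with r = 3.9 there is no description of the string much shorter than the string itself.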

This is all immediately obvious when you view the phase space for a chaotic system, versus a non-chaotic system. Here is a phase space for the end pendulum of a double pendulum at a non-chaotic set of parameters: it behaves more or less like a simple pendulum. My plots are in radians (unlike the above one for a normal pendulum, which I found somewhere else), but otherwise, you should see some familiar features:

It looks squished because, well, it is a bipendulum. The bottom part, which looks like lines instead of distorted ellipses, is where the lower pendulum flips over the upper pendulum. The important thing to notice is that the orbits are all closed paths. If you divided the phase space into two regions, the path-defined string would reduce to something like {1 0 1 0 1 0…} (or, in the lower case, {0 0 0 0…}) forever.

Next, we examine a partially chaotic regime. The chaotic parts of the phase space look like fuzz, because we don’t know where the pendulum will be on the phase space at any given instant. There are still some periodic orbits here. Some look reminiscent of the non-chaotic orbits; others would require longer strings to describe fully. What you should get from this: the orbits in the chaotic regions are random. Maybe the next point in time will be a 1. Maybe a 0. So, we’re generating new information here. The chaotic parts and not-so-chaotic parts are defined on a manifold. Studying the geometry of these manifolds is much of the business of dynamical systems theory. Non-chaotic systems always fall on a torus-shaped manifold. You can see in the phase space that they even look like slices of a torus. Chaotic systems are, by definition, not on a torus. They’re on a really weird manifold.

Finally: a really chaotic double pendulum. There are almost no periodic orbits left here; all the motion is chaotic, and the path the double pendulum follows generates random bits on virtually any path available to it in the phase space:

Now, consider quantum mechanics. In QM, we can’t observe the position and momentum of an object with infinite precision, so the phase space is “fuzzy.” I don’t feel like plotting this out using Husimi functions, but the ultimate result is that the chaotic regions are smoothed over. Since the universe can’t know the exact trajectory of the object, it must remain agnostic as to the path taken. The spectrum of a quantum mechanical orbital system looks like … a bunch of periodic orbits. The quantum spectrum vaguely resembles the parts of the classical phase space that look like slices of a torus. I believe it was W. P. Reinhardt who waggishly called these the “vague tori.” He also said, “the vague tori, being of too indistinct a character to object, are then heavily exploited…” Quantum chaologists are damn funny.

This may seem subtle, but according to quantum mechanics, the “motion” is completely defined by periodic orbits. There are no chaotic orbits in quantum mechanics. In other words, you have a small set of periodic orbits which completely define the quantum system. If the orbits are all periodic, there is less information content than in orbits which are chaotic. If this sort of thing is true in general, it indicates that classical physics could be a more fundamental theory than quantum mechanics.
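One way to make “periodic orbits carry less information” concrete is ordinary data compression, a crude but serviceable stand-in for Kolmogorov complexity. This is my illustration, not the post’s, and random bytes stand in for a chaotic symbol stream:

```python
# Compressed size as a proxy for information content: a periodic symbol string
# compresses to almost nothing, while a chaotic string (here: random bytes, as
# a stand-in) is essentially incompressible.
import random
import zlib

n = 10_000
periodic = ("10" * (n // 2)).encode()                      # period-2 orbit symbols
random.seed(0)
chaotic = bytes(random.getrandbits(8) for _ in range(n))   # stand-in for chaotic output

print(len(zlib.compress(periodic)))   # tens of bytes: the period plus bookkeeping
print(len(zlib.compress(chaotic)))    # roughly n bytes: no shorter description exists
```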

As an interesting aside: we can see neat things in the statistics of the quantum spectrum when the classical equivalent is chaotic; the spectrum looks like the eigenvalues of a random matrix. Since quantum mechanics can be studied as matrix theory, this was a somewhat expected result. Eigenvalues of random matrices were studied at great length by people interested in the spectra of nuclei, though the nuclear randomness comes from the complexity of the nucleus (aka, all the many protons and neutrons) rather than the complexity of the underlying classical dynamics. Still, it was pretty interesting when folks first noticed it in simple atomic systems with classically chaotic dynamics. The quantum spectra of a classically non-chaotic system are more or less near-neighbor Poisson distributed; the spectra of classically chaotic systems repel one another. You know something is up when the near-neighbor spectral distribution starts to look like this:
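The spacing statistics in question are easy to sketch numerically. A hedged illustration of mine, not the post’s: nearest-neighbor spacings of an ensemble of 2×2 real symmetric Gaussian random matrices (the textbook toy behind the Wigner surmise) show level repulsion, while spacings of uncorrelated levels are exponential (Poisson) and pile up near zero.

```python
# Level repulsion vs. Poisson statistics: compare nearest-neighbor spacings of
# 2x2 GOE-like random symmetric matrices with spacings of uncorrelated levels.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Ensemble of 2x2 real symmetric matrices [[a, b], [b, c]] with Gaussian entries.
a, b, c = rng.normal(size=(3, n))
m = np.stack([np.stack([a, b], axis=-1),
              np.stack([b, c], axis=-1)], axis=-2)    # shape (n, 2, 2)
ev = np.linalg.eigvalsh(m)                            # ascending eigenvalues per matrix
goe = ev[:, 1] - ev[:, 0]
goe /= goe.mean()                                     # normalize to unit mean spacing

poisson = rng.exponential(size=n)                     # uncorrelated levels, unit mean

# Repulsion suppresses small spacings in the matrix ensemble.
print((goe < 0.1).mean(), (poisson < 0.1).mean())
```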

Random matrix theory is now used by folks in acoustics. Since sound is wave mechanics, and since wave mechanics can be approximated in the short wavelength regime by particles, the same spectral properties apply.  One can design better concert hall acoustics by making the “short wavelength” regime chaotic. This way there are no dead spots or resonances in the concert hall. Same thing applies to acoustically invisible submarines. I may expand upon this, and its relationship to financial and machine learning problems in a later blog post. Spectral analysis is important everywhere.

Returning from the aside to the Ford paradox. Our chaotic pendulum is happily chugging along, producing random bits we can use to, I dunno, encrypt stuff or otherwise perform computations. But QM orbits behave like classical periodic orbits, albeit ones that don’t like standing too close to one another. If quantum mechanics is the ultimate theory of the universe: where do the long strings of random bits come from in a classically chaotic system? Since people believe that QM is the ultimate law of the universe, somehow we must be able to recover all of classical physics from quantum mechanics. This includes information-generating systems like the paths of chaotic orbits. If we can’t derive such chaotic orbits from a QM model, that indicates that QM might not be the ultimate law of nature. Either that, or our understanding of QM is incomplete. Is there a point where the fuzzy QM picture turns into the classical bit-generating picture? If so, what does the transition look like?

I’ve had physicists tell me that this is “trivial,” and that the “correspondence principle” handles this case. The problem is, classically chaotic systems egregiously violate the correspondence principle. Classically chaotic systems generate information over time. Quantum mechanical systems are completely defined by stationary periodic orbits. To say “the correspondence principle handles this” is to merely assert that we’ll always get the correct answer when, in fact, there are two different answers. The Ford paradox asks: if QM is the ultimate theory of nature, where do the long bit strings in a classically chaotic dynamical system come from? How is the classical chaotic manifold constructed from quantum mechanical fundamentals?

Joe Ford was a scientist’s scientist who understood that “the true method of knowledge is experiment.” He suggested we go build one of these crazy things and see what happens, rather than simply yakking about it. Why not  build a set of small and precise double pendulums and see what happens? The double pendulum is pretty good, in that its classical mechanics has been exhaustively studied. If you make a small enough one, and study it on the right time scales, quantum mechanics should apply. In principle, you can make a bunch of them of various sizes, excite them to the chaotic manifold, and watch the dynamics unfold.  You should also do this in simulation, of course. My pal Luca made some steps in that direction.  This experiment could also be done with other kinds of classically chaotic systems; perhaps the stadium problem is the right approach. Nobody, to my knowledge, is thinking of doing this experiment, though there are many potential ways to do it.

It’s possible Joe Ford and I have misunderstood things. It is possible that spectral theory and the idea of the “quantum break time” answer the question sufficiently. But the question has not, to my knowledge, been rigorously answered. It seems to me a much more interesting question than the ones posed by cosmology and high energy physics. For one thing, it is an answerable question with available experimental tests. For another, it probably has real-world consequences in all kinds of places. Finally, it is probably a productive approach to unifying information theory with quantum mechanics, which many people agree is worth doing. More so than playing games postulating quantum computers. Even if you are a quantum computing enthusiast, this should be an interesting question. Do the bits in the long chaotic string exist in a superposition of states, only made actual by observation? If that is so, does the measurement produce the randomness? What if I measure differently?

But alas, until someone answers the question, I’ll have to ponder it myself.

For people with a background in physics who want to understand the information theory behind this idea, the following paper is useful:

“The Arnol’d Cat: Failure of the Correspondence Principle” J. Ford, G. Mantica, G. H. Ristow, Physica D, Volume 50, Issue 3, July 1991, Pages 493–520

### 35 Responses

1. Andrew Jenner said, on March 7, 2013 at 12:35 pm

I think the resolution to the paradox is that chaotic classical systems don’t generate an infinite amount of information – rather, an infinite amount of information is encoded into the starting conditions. Changing the starting condition even infinitesimally causes a completely different string of information to be emitted. In a quantum system, the amount of information in the system is finite so the resulting strings will always be periodic. Larger quantum systems can store more information, so generate longer strings and are more sensitive to initial conditions.

• Scott Locklin said, on March 7, 2013 at 4:57 pm

That would not be the resolution to the paradox even if it were true, which I don’t think it is: QM has to give the same answer as classical, and it can’t.

• Andrew Jenner said, on March 7, 2013 at 5:24 pm

Why does QM have to give the same answer as classical? They’re different things. An ideal classical system is just an approximation to the real QM system (taking the limit as Planck’s constant goes to 0).

• Scott Locklin said, on March 7, 2013 at 5:46 pm

This is an example of how QM can’t approximate classical physics, even in principle. If it stands: classical physics is a more fundamental theory than QM, and the idea that “classical system is just an approximation to the real QM system” becomes bullshit.

• Andrew Jenner said, on March 7, 2013 at 6:49 pm

That classical physics is an approximation to QM isn’t at odds with the two theories giving qualitatively different results for macroscopic experiments. The kind of experiments where they give different results involve taking very small distances or energies and amplifying them to the macroscopic scale. A chaotic system does exactly that, through its sensitivity to initial conditions. So in theory you could do an experiment where you map the trajectories of identical macroscopic systems with extremely close starting conditions, and see whether they diverge (classical prediction) or remain similar (quantum prediction). In practice, such a test would be impossible because you can’t control the initial (and ongoing) conditions that precisely – your trajectories would be affected by microscopic air currents, gravitational effects of distant bodies, and even the photons that you bounce off the object in order to measure its trajectory.

• Scott Locklin said, on March 7, 2013 at 10:44 pm

It is eminently possible to do experiments which straddle the quantum and classical worlds. People do them all the time: there are rows of books and journals in the physics library about such experiments, more or less starting in the 70s, when lasers became common tools of research in atomic systems.

Since quantum mechanics is supposed to be the fundamental theory of nature, you should be able to derive all of the Newtonian world you see around you from quantum mechanics. That includes the behavior of double pendulums. The Ford paradox is a way of noticing, via information theory, that you can’t derive one of the most fundamental characteristics of classically chaotic systems. So, either the Ford paradox is wrong, QM is wrong, or QM is incomplete, and there is new physics there. That makes the Ford paradox pretty important.

If you can understand the equation, you need to show me how to get exponential dependence on initial conditions using
$\Psi(\theta,t) = \sum_n S_n U_n(\theta)\,e^{-iE_n t/\hbar}$

You may be happy with the present state of affairs, and incurious about what happens in the classical limit of systems with underlying chaos, but I’m not.

• Andrew Jenner said, on March 7, 2013 at 10:57 pm

In the classical limit, hbar approaches zero so that equation can’t be used directly. However, you can derive classical mechanics (including the behavior of double pendulums) by starting with QM and finding the limit as hbar approaches zero. Just as you can derive Newtonian mechanics by starting with relativity and finding the limit as the speed of light goes to infinity.

• Scott Locklin said, on March 7, 2013 at 11:01 pm

No, in fact, you cannot derive the classical mechanical behavior of double pendulums from QM. That’s the point. The above equation *is* QM. That’s the $\hbar$ that has to go to zero to get the right answer.
Your statement is a statement of faith, not a statement of fact. If you can do it: show me!

• Andrew Jenner said, on March 8, 2013 at 12:05 am

That’s an equation for the wavefunction, which has no classical analogue – so obviously that equation isn’t going to translate. If you instead look at the equations for some observable value, you can take that limit. Doing so is beyond the scope of a blog comment so I’m afraid I need to fall back on pointing you at derivations others have already done. See http://mathoverflow.net/questions/102313/classical-limit-of-quantum-mechanics for example.

• Scott Locklin said, on March 8, 2013 at 12:29 am

I’m sorry: Joe Ford, even in death, is a better resource for understanding these matters than “math overflow.”
The link you reference does point out that the Schroedinger equation was motivated by the Hamilton Jacobi picture of classical mechanics; $\Psi$ has an exact classical analog in the classical action variables. The expectation value of position of the quantum system looks like an expansion of a bunch of $e^{-iE_nt/\hbar}$‘s, just like I wrote above.

• Andrew Jenner said, on March 8, 2013 at 9:01 am

Informally, you can imagine the classical analogue of a wavefunction as a generalized function – a distribution. It’s zero everywhere there are no particles, and the integral over a region containing the particle yields a finite answer. The classical analogue of a zero-sized particle in this scheme is a Dirac delta function, which is also the limit of a quantum wavefunction of one particle as its de Broglie wavelength goes to zero (or, equivalently, as Planck’s constant goes to zero). A delta function encodes a position in space with infinite precision. Hence it can encode an infinite amount of information, and hence can have aperiodic behavior.

Perhaps I’m misunderstanding the problem, but the Ford paper you linked is behind a paywall.

2. Marcus said, on March 7, 2013 at 1:22 pm

Scott,
That was by far one of your most interesting posts yet, and I’m still considering the implications of it. Did Joe consider how quantum information theory might fit into this methodology? Technically it should be irrelevant, as we are looking at an isolated system, but I can’t help feeling that some emergent behaviour of entangled states might be a piece of the puzzle here.

• Scott Locklin said, on March 7, 2013 at 5:02 pm

It didn’t really exist then. Arguably, it doesn’t really exist now, except as formalism. It’s not like anybody is using quantum information theory for much other than talking about quantum information theory.

Someone wrote a paper about “quantum chaos computers” once (it might have been Mantica or one of the other Italian guys like Casati), but I’m not sure it meant much of anything.

• Marcus said, on March 8, 2013 at 10:12 am

It looks to me less like a paradox and more like a two-finger salute to quantum mechanics: information which isn’t there simply isn’t there, so perhaps we should look in other places.

While quantum information theory is for all intents and purposes useless at the moment, the tests of Bell’s theorem largely invalidating hidden variables pointed to something going on.
You can make the standard argument that entangled information is not information, but I find that more of an exercise in semantics than good science. I don’t know if we can correctly model the informational content of a quantum system (as opposed to a single object) without a deeper understanding of this “information”.

A dumbshit digression: while our individual quantum objects are producing non-random effects, their relative spatial displacement could be considered to be more random. While I know gravity is totally irrelevant at that scale, could we postulate that some kind of information exists as a kind of variant geometric relation, and gravity emerges from this at a macro scale due to weak convergence?

• Scott Locklin said, on March 8, 2013 at 10:18 am

The information is there in the classical world where we can see it: why can’t quantum mechanics come up with the goods?
Obviously QM is a good theory. New physics only comes when we stretch theories to their breaking point. This could be one of these things. Or it could be a misunderstanding, which could lead to new understanding. Or maybe I’m just missing something.

Gravity is present in the quantized double pendulum. Anyway, you don’t need to use a double pendulum for this experiment: you could use a kicked rotor or a Bunimovich stadium or whatever you have lying around handy.

3. codeulike said, on March 7, 2013 at 1:43 pm

re: “Chaotic systems are random number generators. They generate random strings.” … Think you make a bit of a jump here. Chaotic systems _can_ be entirely deterministic; it’s just that they have sensitive dependence on initial conditions. How do you jump from ‘chaotic system’ to ‘random number generator’?

• Scott Locklin said, on March 7, 2013 at 5:24 pm

Determinism doesn’t imply predictability. Chaos is deterministic randomness. “Chaotic” systems are used as random number generators on your computer.
The fact that you need N bits to determine the path of a chaotic system N steps into the future, where you only need some much smaller number for a periodic system, is well-established stuff. Ford and Ilg did a good job of exposing this oddity.

• codeulike said, on March 7, 2013 at 6:20 pm

> “Chaotic” systems are used as random number generators on your computer.
But the so-called random number generators aren’t really random; they’re pseudo-random. To get genuinely random numbers out of a computer you need to hook it up to a radioisotope-decay-detection mechanism or something like that.
I don’t see how “Determinism doesn’t imply predictability. Chaos is deterministic randomness.” holds together. If something is deterministic, it means it can be determined, at least in theory. Something that is deterministic in theory but unpredictable in practice still does not count as ‘random’.

• Scott Locklin said, on March 7, 2013 at 10:52 pm

If you examine a chaotic system using the tools of Kolmogorov complexity: if I want to predict the value of a chaotic system at any time $t_N=t_0+N$, I need a program which represents the chaotic dynamics, plus the initial conditions at $t_0$. The idea is, ultimately, you need a $t_0$ which is N bits long. For quantum systems, predicting at $t_N$ requires less information: the information content of the quantum system is $\log_2(N)$. Ford, Mantica and Ilg were the first ones to notice this. That’s the Ford paradox.

• Will said, on March 8, 2013 at 4:25 am

>if I want to predict the value of a chaotic system at any time t_N=t_0+N , you need a program which represents the chaotic dynamics, and the initial conditions at t_0. The idea is, ultimately, you need a t_0 which is N bits long.

I believe small computer programs modeling chaotic systems generally only need enough bits of state to represent the system at the current instant, not to remember their entire history or even any of their history. In that case, I think you could write down the computer program, including the starting state and the value N, as a bitstring, and say that the bits of that program are a compressed version of all N pseudo random output bits. As N grew linearly the program length would only grow as log base 2 of N. Finding the Nth bit would still require N computational steps through time, but it wouldn’t require any more bits of storage space than finding the 1st bit.

I’m very far out of my depth here regarding the physics, and I may well have misunderstood your point, but I’d love to know what you think!

• Scott Locklin said, on March 8, 2013 at 9:42 am

You didn’t quite grasp what I said. To specify the state of the chaotic system at any time $T_n$ you need n bits of information about where it was at $T_0$. That’s pretty much the definition of chaos in information theory terms. Quantum systems, you don’t need so much information. Quantum mechanics is compressible. Or at least it looks that way on paper. That doesn’t make any sense if you expect to derive the classical equations of motion from quantum mechanics; there isn’t enough there there.
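The “n bits about $T_0$” claim is easiest to see in the doubling map $x \mapsto 2x \bmod 1$, the canonical toy of chaotic dynamics. This sketch is my illustration, not Ford’s: each step shifts the binary expansion of $x_0$ left by one bit, so knowing $x_0$ to k bits buys exactly k symbols of the future and nothing more.

```python
# Each step of x -> 2x mod 1 consumes one bit of the initial condition:
# a k-bit truncation of x0 predicts exactly the first k symbols, then nothing.
from fractions import Fraction  # exact arithmetic, so float roundoff can't muddy the point

def orbit_symbols(x0, n):
    """Symbols: 0 if x < 1/2 else 1, under the doubling map x -> 2x mod 1."""
    x, out = x0, []
    for _ in range(n):
        out.append(0 if x < Fraction(1, 2) else 1)
        x = (2 * x) % 1
    return out

true_x0 = Fraction(0xDEADBEEFCAFEF00D, 2**64)   # 64 known bits of initial condition
trunc_x0 = Fraction(0xDEAD, 2**16)              # the same x0, cut to its top 16 bits

true_bits = orbit_symbols(true_x0, 32)
trunc_bits = orbit_symbols(trunc_x0, 32)

print(true_bits[:16] == trunc_bits[:16])  # True: 16 bits of x0 predict 16 symbols
print(true_bits[16:] == trunc_bits[16:])  # False: past that, the truncation knows nothing
```

This is the classical half of the paradox: specifying the orbit out to step N costs N bits of the initial condition, with no shortcut.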

• Will said, on March 8, 2013 at 5:40 pm

Ah, OK. I do understand that the classical equations couldn’t be derived from the quantum ones if the classical systems have more information. Many computer models wouldn’t need n bits about the state at T_0 to derive the state at T_n, but if I understand you correctly now, then real world classical systems would, and so perhaps accurate computer models of them would as well.

4. Jason Belec (@jasonbelec) said, on March 7, 2013 at 3:09 pm

Cool. Drew me right in. Great writing. A wonderful bit of science and now it will not be lost. So why haven’t you put this to the test?

• Scott Locklin said, on March 7, 2013 at 10:58 pm

I guess it is a superposition of two states; the fact I have to make money somehow, and the fact that I’m probably not smart enough to figure it out.

5. Duuuuuuuuuuuuuuuuuuuuude said, on March 7, 2013 at 7:56 pm

I remember Dr. Ford saying to a class of undergrad thermo students (of which I was a fast-fading part), “I will not promulgate a bunch of idiots and call them physicists.” Kinda brings a tear to the eye.

• Scott Locklin said, on March 7, 2013 at 10:59 pm

Excellent!

6. Rod Carvalho said, on March 8, 2013 at 12:21 am

You can always post a question on the Ford paradox at the Physics Stack Exchange. There’s a nonzero chance that Luboš will provide a detailed answer (he’s the top contributor there) in which he will accuse you of being an idiot who does not understand the basics of QM. He’s “attacked” Steven Weinberg recently, so you’d be in excellent company 😉

• Scott Locklin said, on March 8, 2013 at 12:38 am

I hope Lubos is doing well these days; he deserved tenure somewhere. I regularly hang out with a guy who helped invent Lubos’ field, and he doesn’t have a good answer either. String theorists don’t tend to think about the classical limit in any depth, nor experimental reality. They just wave their hands and declare that disbelief in the correspondence principle is heresy without showing anyone how it can be done in this case. The understanding of the information theory is also a rare quality among high energy folks and physicists in general: I didn’t understand what Ford was going on about when I first read the papers, but I have a much better idea now that I work with such ideas.

The guys who understand chaotic dynamics, or who think about the roots of quantum mechanics, on the other hand, find it an interesting question. I pinged a few of those; maybe they’ll tell me I’m being an idiot too, but at least they’ll provide some reasoned explanation of where my (and Ford’s) reasoning went off the deep end.

My old boss at Pitt has pointed me towards a large paper he wrote on the thermodynamics of quantum electrodynamics. It might be of some help, but I have yet to read it through.

7. jedharris said, on March 8, 2013 at 3:23 am

Your comment “ultimately, you need a t_0 which is N bits long. For quantum systems, predicting at t_N requires less information. The information content of the quantum system is log_2(N)” was very helpful. Until then I wasn’t sure what you were saying.

Ideally I’d like a post that starts with this and then provides some intuition about why the “vague torii” have only log_2(N) information — that is the key.

I have some sense of the way classical chaotic systems “shift up” information from their infinitely precise initial conditions so I get the other half of your dichotomy.

My guess about the resolution of the paradox is that isolated classical chaotic systems would eventually run out of initial conditions to shift up — they’d run into the limits of the vague torii of which they are made. After all, they don’t in fact have infinite precision to draw on. It would certainly be very cool to be able to show that limitation or to find it experimentally. Both sound hard and would be exciting new science.

Thanks for a fascinating post!

• Scott Locklin said, on March 8, 2013 at 4:21 am

This was for a general audience; it took 2000 words and many diagrams just to cover the dynamics and QM parts enough to state the problem. I was surprised and gratified that 15,000 people had the attention span for it today. I figured it was “good enough” to show, in a hand-wavey way, how chaotic systems generate information.
One thing that is apparently not clear: $t_0$ is completely arbitrary. Any point in time is a perfectly good $t_0$. “Initial conditions” means at any arbitrary point in time. That is one of the many wonders of chaos that make it worth looking at.

8. nfix said, on March 12, 2013 at 8:17 am

The Eleven Pictures of Time by C. K. Raju is a pretty good book that talks about the connection between chaos theory and time

9. Eirik Gjerlow said, on March 12, 2013 at 12:31 pm

It seems that many of those who have looked into this (some examples:
http://arxiv.org/abs/chao-dyn/9510013, http://prl.aps.org/abstract/PRL/v80/i20/p4361_1) cite decoherence and interaction with the environment as the solution to the paradox, although I haven’t been able to explicitly connect what they’re saying with what Ford is saying yet.

As for experiments, I found this, which at least is an attempt: http://www.researchgate.net/publication/1903922_On_the_correspondence_principle_implications_from_a_study_of_the_chaotic_dynamics_of_a_macroscopic_quantum_device

The first author also had another paper with such a bombastic name that I thought it was one of those mock papers: http://arxiv.org/abs/physics/9806002

• Jed Harris said, on March 12, 2013 at 9:33 pm

Thanks for the pointers. The following experiment seems quite relevant: http://arxiv.org/pdf/1207.5465v1.pdf The authors seem pretty focused on the experimental paradigm, avoid bombast and don’t draw large conclusions.

• Scott Locklin said, on March 12, 2013 at 10:45 pm

Thanks for the links Eirik (and Jed). Quite a few people work in the “quantum chaos” field. I started my physics career working for Jim Bayfield, who was one of the first to observe signatures of chaos in microwave ionized 1-d hydrogen atoms (effectively, a quantum kicked rotor: not quite as pure as the one in Jed’s link though).

The Ford paradox is an idea which seems to have fallen through the cracks. Maybe with good reason, maybe not. Most folks don’t use plain old information theory. Since the Ford paradox always bothered me, and since I’m now qualified to check up on his information theory, I figured I’d write it up in hopes someone would tell me the answer.

Mantica is still at work on similar problems using the decoherence approach. I’m not sure it applies in general to very simple quantum systems. Usually, the mechanism for decoherence involves interaction with the environment, or with other parts of the system. I have a day job which has nothing to do with physics, so I haven’t fully digested those results, which are pretty technical. Professor Cvitanovic was kind enough to send me a paper which might be a way out of the paradox from the other direction, but again I haven’t fully digested it yet. More or less, it is the same argument that people use against simple mechanical devices solving NP-complete problems: there is always some noise, and in a chaotic system, it gets magnified to where it smears out the interesting new information. I think combining this idea with the idea of a quantum break time might put the issue to rest. The “quantum break time” being, effectively, how many eigenstates fit into a period of time, such that the wave function tracks the classical motion. My original dissertation topic was going to attempt to look for a break time, but someone at Rice (I think) scooped us on that. While I haven’t worked anything out yet (again, I got to eat: the taxman cometh, and I need a new consulting gig), this sort of analysis probably produces actual numbers one can apply to real world systems and interesting experiments.

Such ideas are, of course, important. For example, if quantum chaos somehow causes decoherence, as the abstract to the paper Jed linked implies (and I think the Zurek paper you linked, though I only have the abstract to go on), maybe quantum computers can never work. The SQUID paper, I dunno; he seems to just add in a decoherence term to get the right answer. That might in fact be the right answer, but it implies there should sometimes be a decoherence effect going on. Presumably that should be modeled somehow (aka, “what physical conditions justify adding this term?”), or at least brought into the formalism in a way which is motivated by experimental tests. I remember Bayfield mentioning SQUIDs as a neat place to look; I guess that is an example of why. The Kirilyuk papers are more radical; modifying the Schroedinger equation to make it fit the Gutzwiller trace formula! Dunno about that. Seems a waste of paper to change all them Schroedinger equations for aesthetic reasons. Still, I can relate; the first time I grasped the way the Gutzwiller trace formula works, my mind was blown as well.

• Rod Carvalho said, on March 16, 2013 at 2:38 am

For example, if quantum chaos somehow causes decoherence (…) maybe quantum computers can never work.

Somewhat off-topic: Israeli mathematician Gil Kalai has been working for a few years on why quantum computers cannot work. Here are some slides from a talk he recently gave: Why Quantum Computers Cannot Work and How [pdf].