Locklin on science

The enigma of the Ford paradox

Posted in chaos, physics by Scott Locklin on March 7, 2013

“God plays dice with the Universe. But they’re loaded dice. And the main objective of physics now is to find out what rules were they and how we can use them for our own ends.” -Joe Ford

[Image: Joe Ford]

Joe Ford was one of the greats of “Chaos Theory.” He is largely responsible for turning it into a topic of interest in the West (the Soviets invented much of it independently), in part through his founding of the journal Physica D. It is one of the indignities of physics history that he isn’t more widely recognized for his contributions. I never met the guy, as he died around the time I began studying his ideas, but my former colleagues sing his praises as a great scientist and a fine man. One of his lost ideas, developed with his student Matthias Ilg and coworker Giorgio Mantica, is the “Ford paradox.” The Ford paradox is so obscure that a Google search on it only turns up comments by me. This is a bloody shame, as it is extremely interesting.

Definitions: In dynamical systems theory, we call the motion of a constrained system an “orbit.” No need to think of planets here; they are associated with the word “orbit” because they were the first orbital systems formally studied. It’s obvious what an orbit is if you look at the Hamiltonian, but for now, just consider an orbit to be some kind of constrained motion.

In most nontrivial dynamical systems theory, we also define something called the “phase space.” The phase space is that which fully defines the dynamical state of the system. In mechanics, the general convention is to define it by the positions and momenta of the objects under study. If the object is constrained to travel in a plane and its mass doesn’t change, like, say, a pendulum, you only have two variables: angular position and its time derivative, and you can easily visualize the phase space:

[Figure: phase space of a simple pendulum]
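If you want to generate a picture like this yourself, here is a minimal sketch (my own illustration, not the plot above; it assumes numpy, scipy and matplotlib) that integrates a simple pendulum for a range of initial velocities and plots angle against angular velocity:

```python
# Minimal sketch: phase portrait of a simple pendulum (angle, angular velocity).
# Parameter values and initial conditions are arbitrary illustrations.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

g, L = 9.81, 1.0

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

# Sweep initial energies: small swings give closed ellipse-like curves,
# large ones flip over the top and show up as open "running" orbits.
for omega0 in np.linspace(0.5, 7.0, 12):
    sol = solve_ivp(pendulum, (0, 20), [0.0, omega0], max_step=0.01)
    wrapped = np.mod(sol.y[0] + np.pi, 2 * np.pi) - np.pi   # wrap angle to (-pi, pi]
    plt.plot(wrapped, sol.y[1], ',')

plt.xlabel('angle (rad)')
plt.ylabel('angular velocity (rad/s)')
plt.show()
```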

My last definition, for the purposes of this exposition, is the spectrum. The spectrum is the Fourier transform, with respect to time, of the orbits. Effectively, it gives the energy levels of the dynamical system. If you know the energy and the structure of the phase space, classically speaking, you know what the motion is.
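To make the spectrum idea concrete, here is a small sketch of my own (not from the post): take the Fourier transform of a periodic signal and of a chaotic one. The logistic map is used purely as a convenient stand-in for a chaotic orbit. The periodic signal gives sharp lines; the chaotic one gives a broadband mess:

```python
# Sketch: spectrum = Fourier transform of an orbit with respect to time.
# Periodic orbit -> sharp spectral lines; chaotic orbit -> broadband noise.
import numpy as np
import matplotlib.pyplot as plt

N = 4096
t = np.arange(N)

periodic = np.cos(2 * np.pi * 0.01 * t) + 0.5 * np.cos(2 * np.pi * 0.03 * t)

chaotic = np.empty(N)
x = 0.3
for i in range(N):
    x = 4.0 * x * (1.0 - x)   # logistic map at r=4 is fully chaotic
    chaotic[i] = x

for name, sig in [('periodic', periodic), ('chaotic', chaotic)]:
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    plt.semilogy(spec, label=name)
plt.xlabel('frequency bin')
plt.ylabel('|FFT|')
plt.legend()
plt.show()
```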

Consider a chaotic system, such as the double pendulum. Double pendulums, as you might expect, have two moving parts, so the phase space is four dimensional, but we can just look at the angle of the bottommost pendulum with respect to the upper pendulum:

[Image: a double pendulum]
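For the curious, here is a rough sketch of how pictures like the ones below can be produced: integrate the standard equal-mass, equal-length double pendulum equations and plot the lower bob’s angle relative to the upper one against its rate. The figures in this post look more like Poincaré sections; this sketch just dumps the whole trajectory, which is enough to see the regular-versus-chaotic distinction. Parameters and initial conditions are arbitrary illustrations:

```python
# Sketch: double pendulum (equal masses and lengths assumed), standard equations
# of motion with angles measured from vertical.  Low energy -> regular motion,
# high energy -> chaos.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

g, m1, m2, L1, L2 = 9.81, 1.0, 1.0, 1.0, 1.0

def double_pendulum(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * np.sin(th1)
          - m2 * g * np.sin(th1 - 2 * th2)
          - 2 * np.sin(d) * m2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))) / (L1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * L1 * (m1 + m2)
          + g * (m1 + m2) * np.cos(th1)
          + w2**2 * L2 * m2 * np.cos(d))) / (L2 * den)
    return [w1, a1, w2, a2]

for th0 in (0.3, 2.5):   # small angle: regular; large angle: chaotic
    sol = solve_ivp(double_pendulum, (0, 200), [th0, 0.0, th0, 0.0],
                    max_step=0.01, rtol=1e-9, atol=1e-9)
    rel = np.mod(sol.y[2] - sol.y[0] + np.pi, 2 * np.pi) - np.pi
    plt.plot(rel, sol.y[3] - sol.y[1], ',', label=f'theta0={th0}')

plt.xlabel('relative angle (rad)')
plt.ylabel('relative angular velocity (rad/s)')
plt.legend()
plt.show()
```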

If you break the phase space down into regions and assign a symbol to each region, an orbit spells out a string, and one can characterize chaos by how many bits it takes to describe that string. If the string is a simple repeated pattern, the system is non-chaotic. Chaotic systems are random number generators. They generate random strings. This is one of the fundamental results of modern dynamical systems theory. A periodic orbit can be reduced to simple sequences, like {1 0 1 0 1 0} or {1 1 0 1 1 0 1 1 0}. Effectively, periodic orbits are integers. Chaotic orbits have no simple repeating sequences. Chaotic orbits look like real numbers. Not floats, which can be represented in a couple of bytes: actual real numbers, like the base of the natural log e, or π, or the golden ratio φ. In a very real sense, chaotic orbits generate new information. Chaotic randomness sounds like the opposite of information, but noisy signals contain lots of information. Otherwise, qua information theory, you could represent the noise with a simple string, identify it, and remove it. People have invented mechanical computers that work on this principle. This fact also underlies the workings of many machine learning algorithms. Joe Ford had an extremely witty quotable about this: “Evolution is chaos with feedback.”
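Here is a tiny sketch of the symbolic-dynamics bookkeeping. It uses the logistic map rather than a pendulum purely to keep it short, and uses zlib compression as a crude stand-in for “how many bits does it take to describe the string”; none of this is from Ford’s papers, it just illustrates the idea:

```python
# Sketch of symbolic dynamics: partition the state space in two, record one
# bit per step, and use compressed length as a crude proxy for description
# length.  The logistic map is a stand-in for the pendulum, for brevity.
import zlib

def symbol_string(r, x0=0.37, n=20000):
    bits = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append('1' if x > 0.5 else '0')   # two-region partition
    return ''.join(bits)

for r, label in [(3.2, 'periodic'), (4.0, 'chaotic')]:
    s = symbol_string(r)
    compressed = len(zlib.compress(s.encode()))
    print(f'{label:9s} r={r}: {compressed} bytes to describe {len(s)} symbols')
# The periodic string compresses to a handful of bytes; the chaotic one stays
# near one bit per symbol: it keeps generating new information.
```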

This is all immediately obvious when you view the phase space for a chaotic system, versus a non-chaotic system. Here is a phase space for the end pendulum of a double pendulum at a non-chaotic set of parameters: it behaves more or less like a simple pendulum. My plots are in radians (unlike the above one for a normal pendulum, which I found somewhere else), but otherwise, you should see some familiar features:

[Figure: phase space of the double pendulum in a non-chaotic regime]

It looks squished because, well, it is a bipendulum. The bottom part, which looks like lines instead of distorted ellipses, is where the lower pendulum flips over the upper pendulum. The important thing to notice is that the orbits are all closed paths. If you divided the phase space into two regions, the path-defined string would reduce to something like {1 0 1 0 1 0 …} (or, in the lower case, {0 0 0 0 …}) forever.

Next, we examine a partially chaotic regime. The chaotic parts of the phase space look like fuzz, because we don’t know where the pendulum will be on the phase space at any given instant. There are still some periodic orbits here. Some look reminiscent of the non-chaotic orbits. Others would require longer strings to describe fully. What you should get from this: the orbits in the chaotic regions are random. Maybe the next point in time will be a 1. Maybe a 0. So, we’re generating new information here. The chaotic parts and not-so-chaotic parts are defined on a manifold. Studying the geometry of these manifolds is much of the business of dynamical systems theory. Non-chaotic systems always fall on a torus-shaped manifold. You can see in the phase space that they even look like slices of a torus. Chaotic systems are, by definition, not on a torus. They’re on a really weird manifold.

[Figure: phase space of the double pendulum in a partially chaotic regime]

Finally: a really chaotic double pendulum. There are almost no periodic orbits left here; the motion is all chaotic, and the path the double pendulum follows generates random bits on virtually any path available to it in the phase space:

[Figure: phase space of the double pendulum in a strongly chaotic regime]

Now, consider quantum mechanics. In QM, we can’t observe the position and momentum of an object with infinite precision, so the phase space is “fuzzy.” I don’t feel like plotting this out using Husimi functions, but the ultimate result is that the chaotic regions are smoothed over. Since the universe can’t know the exact trajectory of the object, it must remain agnostic as to the path taken. The spectrum of a quantum mechanical orbital system looks like … a bunch of periodic orbits. The quantum spectrum vaguely resembles the parts of the classical phase space that look like slices of a torus. I believe it was W. P. Reinhardt who waggishly called these the “vague tori.” He also said, “the vague tori, being of too indistinct a character to object, are then heavily exploited…” Quantum chaologists are damn funny.

This may seem subtle, but according to quantum mechanics, the “motion” is completely defined by periodic orbits. There are no chaotic orbits in quantum mechanics. In other words, you have a small set of periodic orbits which completely defines the quantum system. If the orbits are all periodic, there is less information content than in chaotic orbits. If this sort of thing is true in general, it indicates that classical physics could be a more fundamental theory than quantum mechanics.

As an interesting aside: we can see neat things in the statistics of the quantum spectrum when the classical equivalent is chaotic; the spectrum looks like the eigenvalues of a random matrix. Since quantum mechanics can be studied as matrix theory, this was a somewhat expected result. Eigenvalues of random matrices were studied at great length by people interested in the spectra of nuclei, though the nuclear randomness comes from the complexity of the nucleus (aka all the many protons and neutrons) rather than the complexity of the underlying classical dynamics. Still, it was pretty interesting when folks first noticed it in simple atomic systems with classically chaotic dynamics. The quantum spectra of a classically non-chaotic system are more or less Poisson distributed in their nearest-neighbor spacings. The spectra of classically chaotic systems repel one another. You know something is up when the nearest-neighbor spacing distribution starts to look like this:

[Figure: Wigner nearest-neighbor spacing distribution]
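If you want to see the level repulsion for yourself, here is a hedged sketch (not tied to any particular atomic system): compare nearest-neighbor spacings of a GOE random matrix against spacings of independent random levels, with the Wigner surmise and the Poisson (exponential) curve drawn on top. The “unfolding” here is the lazy kind, so the agreement is only rough:

```python
# Sketch: nearest-neighbor level-spacing statistics.  GOE eigenvalues show
# level repulsion (Wigner-like spacings); independent levels give Poisson.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 2000

A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2 * N)          # symmetric Gaussian (GOE) matrix
ev = np.sort(np.linalg.eigvalsh(H))

# Poor man's unfolding: keep the middle of the spectrum, normalize by the mean.
mid = ev[N // 4: 3 * N // 4]
s_goe = np.diff(mid)
s_goe /= s_goe.mean()

pois = np.sort(rng.uniform(size=N))      # independent random "levels"
s_pois = np.diff(pois)
s_pois /= s_pois.mean()

bins = np.linspace(0, 4, 40)
plt.hist(s_goe, bins=bins, density=True, alpha=0.5, label='GOE (repulsion)')
plt.hist(s_pois, bins=bins, density=True, alpha=0.5, label='Poisson levels')
s = np.linspace(0, 4, 200)
plt.plot(s, (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4), label='Wigner surmise')
plt.plot(s, np.exp(-s), label='exp(-s)')
plt.xlabel('normalized spacing s')
plt.legend()
plt.show()
```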

Random matrix theory is now used by folks in acoustics. Since sound is wave mechanics, and since wave mechanics can be approximated in the short wavelength regime by particles, the same spectral properties apply. One can design better concert hall acoustics by making the “short wavelength” regime chaotic. This way there are no dead spots or resonances in the concert hall. Same thing applies to acoustically invisible submarines. I may expand upon this, and its relationship to financial and machine learning problems, in a later blog post. Spectral analysis is important everywhere.

Returning from the aside to the Ford paradox. Our chaotic pendulum is happily chugging along producing random bits we can use to, I dunno, encrypt stuff or otherwise perform computations. But QM orbits behave like classical periodic orbits, albeit ones that don’t like standing too close to one another. If quantum mechanics is the ultimate theory of the universe: where do the long strings of random bits come from in a classically chaotic system? Since people believe that QM is the ultimate law of the universe, somehow we must be able to recover all of classical physics from quantum mechanics. This includes information-generating systems like the paths of chaotic orbits. If we can’t derive such chaotic orbits from a QM model, that indicates that QM might not be the ultimate law of nature. Either that, or our understanding of QM is incomplete. Is there a point where the fuzzy QM picture turns into the classical bit-generating picture? If so, what does it look like in the transition?

I’ve had physicists tell me that this is “trivial,” and that the “correspondence principle” handles this case. The problem is, classically chaotic systems egregiously violate the correspondence principle. Classically chaotic systems generate information over time. Quantum mechanical systems are completely defined by stationary periodic orbits. To say the “correspondence principle handles this” is merely to assert that we’ll always get the correct answer, when, in fact, there are two different answers. The Ford paradox is asking the question: if QM is the ultimate theory of nature, where do the long bit strings in a classically chaotic dynamical system come from? How is the classical chaotic manifold constructed from quantum mechanical fundamentals?

Joe Ford was a scientist’s scientist who understood that “the true method of knowledge is experiment.” He suggested we go build one of these crazy things and see what happens, rather than simply yakking about it. Why not build a set of small and precise double pendulums and find out? The double pendulum is pretty good, in that its classical mechanics has been exhaustively studied. If you make a small enough one, and study it on the right time scales, quantum mechanics should apply. In principle, you can make a bunch of them of various sizes, excite them onto the chaotic manifold, and watch the dynamics unfold. You should also do this in simulation, of course. My pal Luca made some steps in that direction. The experiment could also be done with other kinds of classically chaotic systems; perhaps the stadium problem is the right approach. Nobody, to my knowledge, is thinking of doing this experiment, though there are many potential ways to do it.

It’s possible Joe Ford and I have misunderstood things. It is possible that spectral theory and the idea of the “quantum break time” answer the question sufficiently. But the question has not, to my knowledge, been rigorously answered. It seems to me a much more interesting question than the ones posed by cosmology and high energy physics. For one thing, it is an answerable question with available experimental tests. For another, it probably has real-world consequences in all kinds of places. Finally, it is probably a productive approach to unifying information theory with quantum mechanics, which many people agree is worth doing. More so than playing games postulating quantum computers. Even if you are a quantum computing enthusiast, this should be an interesting question. Do the bits in the long chaotic string exist in a superposition of states, only made actual by observation? If that is so, does the measurement produce the randomness? What if I measure differently?

But alas, until someone answers the question, I’ll have to ponder it myself.

Edit add:
For people with a background in physics who want to understand the information theory behind this idea, the following paper is useful:

“The Arnol’d Cat: Failure of the Correspondence Principle” J. Ford, G. Mantica, G. H. Ristow, Physica D, Volume 50, Issue 3, July 1991, Pages 493–520

The compass rose pattern: microstructure on daily time scales

Posted in chaos, econophysics, microstructure by Scott Locklin on August 12, 2010

One of the first things I did when I fired up my Frankenstein’s monster was plot a recurrence map between equity returns and their lagged value. This is something every dynamical systems monkey will do. I did. In physics, we call it, “looking at the phase space.” If you can find an embedding dimension (in this case, a lag which creates some kind of regularity), you can tell a lot about the dynamical system under consideration. I figured plotting returns against their lags would be too simple to be interesting, but I was dead wrong. I saw this:

[Figure: A high quality compass rose can be seen on Berkshire Hathaway preferred stock]


I convinced myself that nothing this simple could be important (and that it went away with decimalization), and moved on to more productive activities, like trying to get indexing working on my time series class, or figuring out how to make some dude’s kd-tree library do what I wanted it to. I realized just today that this was a mistake, as other people have also seen the pattern, and think it’s cool enough to publish papers on. None other than Timothy Falcon Crack, bane of wannabe quants (and their employers) everywhere, was a coauthor of the first paper to overtly notice this phenomenon.

[Figure: A slightly later epoch, pre-decimalization]

You can sort of see why this pattern would fade out with decimalization. If you’re trading in “pieces of 8” (aka 1/8ths of a dollar), prices which don’t fall on the 1/8 grid are not possible, so only certain return ratios can occur. In other words, there are only 7 prices strictly between $20 and $21, as opposed to 99 now. Therefore you’d expect to see some gaps in the lagged returns, which are just price ratios. Roughly speaking, if the typical daily move is small compared to the size of the tick, you’ll be able to see the pattern. At least that’s what most people seem to think. It is weird that Berkshire Hathaway should be affected by this, but as it turns out, it had an effective tick size which was fairly large compared to its daily motion, because the people trading it were lazy apes who wouldn’t quote prices at the market-defined tick size (which, even at 1/8ths, was very small compared to Berkshire Hathaway’s share price of several tens of thousands of dollars).
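You can convince yourself of the mechanism with a toy simulation: take a slowly wandering “true” price, snap it to a 1/8 tick grid so that daily moves are comparable to the tick, and plot each day’s return against the previous day’s, exactly the kind of lag plot described above. A compass rose appears. This is my own illustration with made-up parameters, not the Berkshire data in the figures:

```python
# Toy illustration: tick discretization alone produces a compass rose in the
# (return at t-1, return at t) plot.  All parameters are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n, tick = 5000, 1.0 / 8.0

p_true = 20.0 * np.exp(np.cumsum(rng.normal(scale=0.002, size=n)))  # latent price
p = np.round(p_true / tick) * tick                                  # snap to 1/8ths

r = np.diff(p) / p[:-1]          # daily returns are ratios of grid prices
plt.scatter(r[:-1], r[1:], s=1)
plt.xlabel('return at day t-1')
plt.ylabel('return at day t')
plt.show()
```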

[Figure: Some evidence of the compass rose can still be seen in the early decimalization era]

One of the interesting implications of all this: if ticks are important enough to show up in a simple plot like this, what happens when you apply models which assume real numbers (aka virtually all models) to data which are actually integers? This is something I’ve wondered about since I got into this business. Anyone whose model returns something with many decimal places when the thing being measured comes in integers should notice this. I don’t think this sort of issue has ever been resolved to anyone’s satisfaction; people just assume the generating process uses real numbers underneath and rounds to the nearest integer, sort of like trusting the floating point processor in your computer to do the right thing. The compass rose points out dramatically that you can’t really do that. It also demonstrates that, in a very real way, the models are wrong: they can’t reproduce this pattern. For example, what do you do when you’re testing for a random walk on something like this? Can it possibly be a random walk if the returns are probabilistically “loitering” at these critical angles? Does this bias the models we use? Smart people think it does. Traders don’t seem to worry about it.

[Figure: Finally, the compass rose is completely gone in the more recent epoch of decimalization for Berkshire Hathaway series A]

Some other guys have attempted to tease some dynamics out of the pattern. I’m not sure I buy the arguments, since I don’t understand their denoising techniques. Others (Koppl and Nardone) have speculated that “big players” like central banks cause this sort of effect by creating confusion, though I can’t for the life of me see why central bank interventions would cause these patterns in equities. Their argument seems sound statistically. It was done on the Rouble market during periods of credit money versus gold backing. Unfortunately, they never bother relating the pattern in the different regimes to central bank interventions, other than to notice that they coincidentally seem to happen at the same times. That doesn’t make any sense to me. It’s a regression on two numbers.

My own guess, developed over a half day of thinking about this and fiddling with plots in R, is that these patterns arise from dealer liquidity issues and market dislocations. How?

  1. Human beings like round numbers. Machines don’t care. Lots of the market in ye olden pre-decimalization days was organized by actual human beings, like my pal Moe. Thus, even if there was no reason to pin a share at a round number, people often would anyway, because $22.00 is more satisfying than $22.13. Since liquidity peddling is now done by machines, most of which assume a random walk, I’d expect compass rose patterns to go away in cases where they persisted for a long time, like with the $100k Berkshire Hathaway preferred shares, which are all that is pictured above. Voila, I am right. At least in my one-stock guess, though the effect can be seen elsewhere also.
  2. The plethora of machine-run strategies has made the market much more tightly coupled than it used to be. What does this mean? For example: at the end of the day, something like an ETF has to be marked to its individual components. One of the things which causes a burst in end-of-day trading is the battle between the ETF traders trying to track an index and arbs trying to make a dollar off of them. Similarly with the volatility of the index. With all this going on, there isn’t much “inertia” pinning the closing price to a nice, human round value. It was observed early on that indexes don’t follow the compass rose pattern, and it’s very easy to understand why if you think of it from the behavioral point of view: add together a lot of numbers, even if they’re mostly round numbers, and chances are high you will not get a round number as a result (especially if you weight the numbers, like in most indexes). You could look at the dissolution of this pattern over time as increasing the entropy of stock prices. High frequency traders make the market “hotter.” As such, the lovely crystalline compass rose pattern “melts” at higher temperatures, just like an ice cube in a glass of rum. With the Berkshire Hathaway preferred share patterns above, you can see the pattern fading out as the machines take over: while some compass rose remains post-decimalization, it’s completely gone after 2006. You might see it at shorter time scales, however.

Relating it back to the Rouble analysis of Koppl and Nardone, I’d say they saw the compass rose in times of credit money simply because the market moved a lot slower than it did when it was based on gold. When it was credit money, there were effectively fewer people in the trade, and so, monkey preferences prevailed. When it was gold, there were lots of people in the trade, and the “end of day” for trading the Rouble was less meaningful, since gold was traded around the world.

One of the things that bothered me about the original paper is the insistence that one couldn’t possibly make money off of this pattern. I say people probably were making money off the pattern: mostly market makers. What is more, I posit that, where the pattern exists (on whatever time scale), one could make money probabilistically. What you’re doing here is bidding on eBay. Everyone on eBay knows that it’s a win not to bid on round numbers, because the other apes will bid there. If you bid off the round number, you are more likely to win the auction. Similarly, if you’re a market maker, you might win the trade by bidding off the round number, and giving the customer a slightly better price. Duh. My four hours’ worth of hypothesis would predict that thinly traded stocks which aren’t obviously important components in any index would continue to show this end-of-day pattern, since they won’t be as subject to electronic market making. And, in fact, that’s what I found in the first one I looked at, WVVI, which appears to be a small winery of some kind. Even in the most recent era, it has a decent compass rose evident. The second one I looked at, ATRO (a small aerospace company), similarly showed the compass rose during the 2001–2006 regime. I’m pretty sure there are simple ways to data mine for this pattern in the universe of stocks using KNN, though I don’t feel like writing the code to do it for a dumb blog post; someone’s grad student can look into it.
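For what it’s worth, here is one possible screen, sketched with made-up data rather than real closes, and simpler than the KNN idea: measure what fraction of the lag-plot points sit within a few degrees of the eight main compass directions. Roughly uniform angles give a score near 2·tol/45; a strong rose gives something much larger. Treat it as a toy, not a tested strategy:

```python
# One possible compass-rose screen (a simpler stand-in for the KNN idea, and
# purely a sketch): fraction of (r[t-1], r[t]) points whose angle lies within
# tol_deg of a multiple of 45 degrees, ignoring days where both returns are 0.
import numpy as np

def compass_rose_score(close, tol_deg=5.0):
    r = np.diff(close) / close[:-1]
    x, y = r[:-1], r[1:]
    keep = (x != 0) | (y != 0)
    ang = np.degrees(np.arctan2(y[keep], x[keep])) % 45.0
    near = np.minimum(ang, 45.0 - ang) < tol_deg
    return near.mean()

# Usage sketch with made-up closes; real data would come from whatever source
# you have handy.
rng = np.random.default_rng(2)
fake_close = 20.0 * np.exp(np.cumsum(rng.normal(scale=0.005, size=2000)))
print(compass_rose_score(fake_close))                     # ~2*tol/45: no structure
print(compass_rose_score(np.round(fake_close * 8) / 8))   # tick-snapped: typically much higher
```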


All of this is pure speculation after too much coffee, but it’s a very simple and evocative feature of markets which is deeper than I first thought. Maybe with some research one could actually use such a thing to look for trading opportunities (probably it’s just a bad proxy for “low volume”). Or maybe the excess coffee is making me crazy, and these patterns are actually just meaningless. Nonetheless, in this silly little exercise, we can see the effects of the integer nature of money, behavioral economics, visible market microstructure on a daily time scale, and very deep issues in the dynamics of financial instruments.