Locklin on science

BTC bubbles

Posted in econophysics by Scott Locklin on April 17, 2013

Not surprisingly, Bitcoin prices are well described by the log-periodic power laws (LPPL) that characterize the dynamics of bubbles. As a reminder of what an LPPL model looks like, here is a simple one:

\log(p(t)) = A + B(t_c - t)^\beta + C(t_c - t)^\beta \cos( \omega \log(t_c-t)+\phi)
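For concreteness, here is that model as a plain R function (a sketch of mine; the parameter names mirror the equation, and it is only meaningful for t < t_c):

# LPPL log-price: A + B*(tc-t)^beta + C*(tc-t)^beta*cos(omega*log(tc-t) + phi)
# Only defined for t < tc, the critical (crash) time.
lppl <- function(t, A, B, C, tc, beta, omega, phi) {
  dt <- tc - t
  A + B * dt^beta + C * dt^beta * cos(omega * log(dt) + phi)
}

# Example run-up into a crash at tc = 110:
# plot(1:100, lppl(1:100, A = 5, B = -0.5, C = 0.05,
#                  tc = 110, beta = 0.35, omega = 8, phi = 1), type = "l")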

I didn’t profit from this. I thought of applying LPPL to the BTC bubble well before the crash, during a bullshit session with a friend, but I didn’t run the analysis until after. I have better things to do with my time than play with weird monopoly money, and the “exchanges” presently offering shorts are not even close to useful. I also think anyone who trades on LPPL is basically gambling. The most interesting parameter, t_c, is the hardest to fit, and, well, with all those parameters I could fit a whole lot of elephants. Just the same, it is a useful enough concept to justify further research. No, I won’t be telling the world about that research on my blog. A man’s got to eat, after all. Doing bubble physics costs money.

If you don’t know about LPPL models, click on these two helpful links. The “hand-wavy” idea is: if the price is formed by market participants looking at what other market participants are doing, as with Dutch tulips, pets.com, and market prices in various eras, the price is an irrational bubble which will eventually burst. This isn’t an original idea: Charles Mackay was talking about it over 170 years ago. The original idea here is mapping this behavior onto an Ising model, running some renormalization group theory on it, and fitting the result to data to forecast bubble burstings. Sornette, Ledoit, Johansen, Bouchaud and Freund did it and told the world about it; may the eternal void bless them with healthy returns for being kind enough to share this interesting idea with us.

Here’s a plot of BTC close prices from MtGox (via Quandl), with the LPPL model fit to data ending 10 days before the bubble popped. I wasn’t real careful with the fit: no unit root tests were done, no probabilistic estimates were made, and no Ornstein-Uhlenbeck processes were taken into account. This is just curve fitting, but the result is compelling enough to talk about. As you can see, with these parameters, the out of sample top is fit fairly well. Amusingly, so is the decline.

[Plot: MtGox BTC close prices with the LPPL fit overlaid]

What can we learn from this? You can see a “fair value” of around $20/BTC due to be hit in a few weeks, with perhaps a full mean reversion to $10/BTC.  BTC doesn’t seem to have a helpful “anti-bubble” decay; if anything, it is decaying faster than expected so far (it is possible I mis-fit the \omega). The fit parameters for this version of the model tell us a few interesting things about the herding behavior which you can read about in Sornette’s book.

I don’t have any strong opinions about using BTC as a currency. I think most of its enthusiasts are naive and do not understand the nature of money and what it is good for. I do think BTC would work a lot better as a store of value with a properly functioning foreign exchange futures market. There are no properly functioning BTC futures exchanges at present; just an assortment of dreamers and borderline crooks cashing in on hype. This is more of an engineering and legal problem than it is an inherent problem with using BTC as a currency. The way things are presently set up, without shorts, any extra media attention will result only in people buying the damn things. Without the ability to easily short them, price discovery is impossible and herding behavior is the rule. It ain’t a market without shorts; it’s a bubble maker. Shorts don’t guarantee there will be no bubbles (we see plenty in shortable markets), but a lack of shorts virtually guarantees future BTC bubbles.

What is a bubble? (econophysics, part 2)

Posted in econophysics by Scott Locklin on August 14, 2011

“In science there is only physics; all the rest is stamp collecting.” -Ernest Rutherford

In part 1, long ago (it took me this long to figure out how to render LaTeX in WordPress), I discussed the random field Ising model for opinion diffusion. What motivates this model? Well, we observe in nature that people’s opinions are influenced by the opinions of people around them. The random field Ising model is one very sensible way of modeling this. It’s particularly useful in systems where the statistical or geometric properties of the social network are well understood.
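For the uninitiated, a toy version fits in a dozen lines of R (my own zero-temperature sketch on a ring of agents, not the exact model from part 1): each agent holds an opinion of plus or minus one, carries a private bias h_i, and updates to agree with its local field.

# Toy random-field Ising dynamics on a ring: N agents with opinions +/-1,
# private biases h_i, and coupling J to the two nearest neighbors.
rfim_step <- function(s, h, J = 1) {
  N <- length(s)
  i <- sample(N, 1)                        # pick a random agent
  left  <- s[ifelse(i == 1, N, i - 1)]
  right <- s[ifelse(i == N, 1, i + 1)]
  field <- J * (left + right) + h[i]       # local field: neighbors + private bias
  s[i] <- sign(field)                      # zero-temperature update
  if (s[i] == 0) s[i] <- sample(c(-1, 1), 1)  # break exact ties at random
  s
}

set.seed(1)
N <- 100
s <- sample(c(-1, 1), N, replace = TRUE)   # random initial opinions
h <- rnorm(N, sd = 0.5)                    # quenched private biases
for (k in 1:5000) s <- rfim_step(s, h)
mean(s)                                    # net opinion after relaxation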

The nice thing about physics-like approaches is you don’t always need to know details, like what the geometric properties of the social network are. You can figure out a lot by just thinkin’. Didier Sornette’s ideas are like this. Effectively, in bubbles, people stop taking positions based on underlying fundamentals information, and start taking positions based on price. In other words, the entire market becomes trend following. If you take a position based on price, and everyone else starts to do the same thing, the prices will rise faster than exponentially via feedback. So, take the log of the price series and look for stretches of super-linear growth: those are bubbles.
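You can see the mechanism in a toy simulation (my illustration, not Sornette’s derivation): let each period’s return chase the previous one, and the log price turns convex, which is faster-than-exponential growth.

# Toy trend-following feedback: buyers chase the trend, so each period's
# return is a bit larger than the last. Returns grow geometrically, so the
# log price is convex: the LPPL signature, minus the oscillations.
set.seed(2)
n <- 300
r <- numeric(n); r[1] <- 1e-4
for (t in 2:n) r[t] <- 1.02 * r[t - 1] + rnorm(1, sd = 1e-5)
plot(cumsum(r), type = "l", xlab = "t", ylab = "log price")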

One of the things my pals the economists don’t realize about markets is who the market consists of. There are a lot of ways to think about markets that flatter our egos, but a very useful one is: a group of people with edge versus a bunch of people who don’t know what they’re doing, whom I’ll call “noise traders” because that’s what Fischer Black called them. This isn’t very charitable, as a lot of the “people who don’t know what they’re doing” have other priorities, like hedging, tracking an index, or just buying and selling when some dope tells them to. But for the purposes of argument, since they’re not profit takers, they’re trading on what amounts to noise. When you pit informed traders against noise traders, you get a reversion to the fair value of the instrument fairly rapidly. The profit takers will trade with the noise traders who give them the price they want, and the price will move towards a place where profit takers can no longer make a profit.

Now, I’m asking you to take my word for these facts, but they’re things you can derive using freshman calculus. Some more advanced trickery (renormalization group theory -not as scary as it sounds: mostly a trick for exploiting symmetries in systems, used everywhere in physics) gets you the log periodic power law (LPPL). Using the LPPL, we can potentially forecast things like when the bubble pops. How do you forecast something like this? Well, you fit the following price evolution equation to find the crash time t_c:

\log(p(t)) = A + B(t_c - t)^\beta + C(t_c - t)^\beta \cos( \omega \log(t_c-t)+\phi)

“With three parameters, I can fit an elephant.” -John von Neumann

The first thing you should notice is how many parameters there are to fit here. With three parameters, I can fit an elephant, said von Neumann; he wasn’t kidding. Wolfram has a useful demo. Given the hairiness of this function and the lack of data, this should give you pause. Seven parameters! Then … things actually get worse. We know that price time evolution follows an Ornstein-Uhlenbeck process, which means our equation isn’t really an equation: there is stochastic calculus built into it. One can ignore this and blindly fit anyway, but this isn’t how the pros do it. Sornette and company account for it in various ways: moving windows, bits of Black-Scholes, multiple fits to isolate the range of the “less hairy” parameters (aka, A, B, C are relatively easy to fit assuming you get everything else right; the interesting bits are \omega, \beta, t_c), multiple constrained fits on parameters (aka, constraining potential values to make sure \omega and t_c aren’t being fit to noise and A doesn’t call the top of the market at some absurdly high value), Lomb periodograms, and various GARCHy and Black-Scholesey time series models with the log periodic piece baked in. Oh, and they use optimizer/fitters from simulated annealing to genetic algorithms and various other optimization tricks of the trade. It’s a tough function.
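To make the “A, B, C are the easy ones” point concrete: for fixed (t_c, \beta, \omega, \phi) the model is linear in A, B, C, so those fall out of ordinary least squares, and you only have to search over the nonlinear four. Here is a minimal two-step fit on synthetic data (a sketch, not how the pros do it; in practice you want multistarts, since the error surface is multimodal):

# RSS for fixed nonlinear params; A, B, C are recovered by lm() each time.
lppl_rss <- function(par, t, logp) {
  tc <- par[1]; beta <- par[2]; omega <- par[3]; phi <- par[4]
  if (tc <= max(t)) return(1e10)       # crash time must lie beyond the data
  dt <- tc - t
  f <- dt^beta
  g <- f * cos(omega * log(dt) + phi)
  sum(resid(lm(logp ~ f + g))^2)       # lm() solves the linear A, B, C
}

# Synthetic bubble with known parameters (tc = 110, beta = 0.35, omega = 8).
set.seed(3)
t <- 1:100
dt <- 110 - t
logp <- 5 - 0.5 * dt^0.35 + 0.05 * dt^0.35 * cos(8 * log(dt) + 1) +
        rnorm(100, sd = 0.01)

# Constrained search over the four hard parameters only.
best <- optim(c(tc = 115, beta = 0.5, omega = 6, phi = 0), lppl_rss,
              t = t, logp = logp, method = "L-BFGS-B",
              lower = c(101, 0.1, 2, -pi), upper = c(200, 0.9, 20, pi))
best$par   # should land near the true (110, 0.35, 8, 1)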

Interesting things to make note of: A is important, as it calls the top. t_c is obviously important, as it calls the date of the crash. Most important, though, are \beta and \omega, without which you can’t identify whether or not you’re in a bubble regime. No power law growth with oscillations: no bubble. The meaning of these parameters is also interesting. It’s obvious what they mean from a fitting standpoint, but some of those numbers come from the actual physics of markets. For example, \omega contains microstructure information about the “herdiness” of the market. \beta actually comes from the structure of human markets, and seems to remain similar across many examples.

Does all this stuff work? I dunno. Certainly, running the rude implementation of it I found on Rnabble produces something that looks … evocative, at least.

Might be interesting to build an oscillator which only fits the run-ups (though of course, the ones here are fit in sample, and so they look better than they probably are), and gets you out of the trade as the oscillations get closer together. Better yet: use lazy learning; maybe something like Dynamic Time Warping on time series which have been post facto identified as bubble-like, with LPPL oscillations in the run-up. Or generate LPPL time series using the model and DTW your way onto new ones to forecast bursting bubbles (a sketch of this idea follows). To my mind (and considering some interesting unexploited time scales), the log periodicity itself is interesting, and probably represents a common “orbit” in market dynamics which does not necessarily end in crashes, and which could be captured via some variants on the usual techniques.
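A minimal sketch of the DTW idea, using Giorgino’s dtw package (everything here, the template especially, is invented for illustration):

library(dtw)   # dynamic time warping; install.packages("dtw") if needed

# An LPPL-flavored run-up template: power-law growth with log-periodic wiggles.
tt <- 1:100
template <- -(120 - tt)^0.35 * (1 + 0.1 * cos(8 * log(120 - tt)))

# Pretend market window: a noisy, shifted chunk of the same shape.
set.seed(7)
window <- template[10:90] + rnorm(81, sd = 0.05)

# Small normalized distance = the window warps cheaply onto the template,
# i.e. it looks bubble-shaped.
alignment <- dtw(scale(window), scale(template))
alignment$normalizedDistance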

Or maybe it’s all horse shit. Personally, I think the most productive route for research into this subject is not fitting parametric models like the above using ML and whatnot: the basic insight of Sornette and company is to give you a vague idea of what noise traders and informed traders will do to a price series in the presence of different populations of each. Indeed, in Sornette’s early research (which he seems to have made some money speculating on), most of those parameters were not present. Paring the equation down and trying to get the \omega and \beta right might pay a few dividends; as I said above, a simple oscillator might work better.

One of the problems with this model as a model is that these populations of market participants are themselves a dynamic quantity. Think about what happens in a market collapse in a healthy market: informed traders move in to buy when the noise traders freak out and sell off. Those numbers slosh around a lot. The really big collapses happen when, whether because of tight credit or severe uncertainty, informed traders decide to stay home. But then, sometimes they don’t stay home. This isn’t really “modelable” in any way I can think of. This is something that requires the big picture judgement of a Monroe Trout or Paul Tudor Jones.

Generally, though, such fitted models are not useful to speculators, as getting the fit right (in any model) is an art form, and when the parameters change over time, it becomes rather hopeless. That’s one of the reasons why people who trade end up using moving averages instead of GARCH. We know that volatility can be modeled with GARCH … but can it be forecast by models fit to GARCH? Well, sometimes, but not really. However, ideas like GARCH are crucial to our understanding of how markets function. I’d say at some point, log periodic power laws will also be considered similarly useful. But right now, we’re still figuring it out.

http://quantivity.wordpress.com/2011/02/08/curiosity-of-lppl/

Everything you always wanted to know about Log Periodic Power Laws but were afraid to ask. -my favorite review article on the subject thus far.

Conservation laws in the Ising universe: Econophysics breakthroughs

Posted in econophysics by Scott Locklin on May 11, 2011

Toffoli is one of my favorite physicists. His research has been consistently excellent, and he has been a pioneer in a number of fields. Unlike most physicists, who are content to mine their little niche, Toffoli is a risk taker: a would-be conqueror of new worlds. As Borel said of Poincaré, he is a conqueror, not a colonist. His latest paper has tremendous implications in all kinds of interesting areas in finance, econophysics, sociology, and machine learning.

Remember way back when I blathered about the random field Ising model as a model for collective human behavior about a year ago? Well, Toffoli has forged ahead and derived conservation laws for this model. What does this mean for the science of social groups? Well, it means you can derive global behavior from first principles, aka microstructure. Sometimes you’ll be able to derive microstructure from observed global behavior. All this has rather large implications in fields which map well onto the Ising model.

One of the fields, of course, is econophysics: a developing branch of science which studies group behavior in an economic setting. Another is sociology, of which econophysics is a subset. Finally, there is machine learning: the Hopfield net, a sort of ur-version of the neural net, is an Ising model. It’s also a special case of Bayesian networks.
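The Hopfield-Ising correspondence fits in a few lines (a toy sketch of mine): store one pattern in the couplings via the Hebb rule, corrupt it, and let the spins relax downhill in the Ising energy E = -(1/2) s' W s.

# Hopfield net as an Ising model: couplings from the Hebb rule, then
# asynchronous updates that align each spin with its local field.
set.seed(4)
p <- sample(c(-1, 1), 25, replace = TRUE)           # one stored pattern
W <- outer(p, p); diag(W) <- 0                      # Hebbian couplings
s <- p; flip <- sample(25, 8); s[flip] <- -s[flip]  # corrupt 8 of 25 spins
for (k in 1:200) {
  i <- sample(25, 1)
  s[i] <- ifelse(sum(W[i, ] * s) >= 0, 1, -1)       # align with local field
}
identical(s, p)   # should be TRUE: the net recalls the stored pattern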

What does this sort of thing mean? Conservation laws are more or less how physicists think about the world. Should we develop a more detailed mathematical framework for the Ising model, it may be possible to analyze all kinds of orderly behavior which takes place naturally in systems which are well modeled by the Ising model. It’s entirely possible there are all manner of conservation laws derivable about Ising models, based on their geometry and other detailed aspects of their structures.

This could mean humans may some day understand some of the spooky behaviors of crowds. We can already understand lots of these spooky behaviors via numeric simulation and thermodynamic arguments. Imagine knowing how to spook a crowd into doing what you want. OK, this is kind of science fiction stuff (though looking around … maybe not so much; the RFIM reduces to something real simple when the driver field, aka mass media, is really strong). Consider a more pedestrian application: how do you pick the right kind of machine learning algorithm for a given task? How does one architect a neural net in order to solve a problem? Conservation laws *will* help people do this, based on the symmetries of the problems at hand. If this doesn’t result in some breakthroughs in machine learning, it’s because people aren’t paying attention.

On a more pragmatic level, trending systems can be understood in the RFIM framework, and energy/time invariance conservation makes these sorts of models much easier to think about. Gentlemen: start your automated theorem provers.

The compass rose pattern: microstructure on daily time scales

Posted in chaos, econophysics, microstructure by Scott Locklin on August 12, 2010

One of the first things I did when I fired up my Frankenstein’s monster was plot a recurrence map between equity returns and their lagged values. This is something every dynamical systems monkey will do. In physics, we call it “looking at the phase space.” If you can find an embedding dimension (in this case, a lag which creates some kind of regularity), you can tell a lot about the dynamical system under consideration. I figured plotting returns against their lags would be too simple to be interesting, but I was dead wrong. I saw this:

A high quality compass rose can be seen on Berkshire Hathaway preferred stock
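The plot itself is one line of R once you have returns. Here’s a sketch on synthetic data (I’m not going to post the Berkshire series): a continuous random walk gives a structureless blob, the baseline to compare against.

# Lag plot of daily returns: return(t) against return(t-1). On continuous
# (un-ticked) data this is just a blob; compare with the tick demo below.
set.seed(5)
price <- 50 * exp(cumsum(rnorm(2000, sd = 0.005)))   # continuous random walk
ret <- diff(log(price))
plot(ret[-length(ret)], ret[-1], pch = ".",
     xlab = "return(t-1)", ylab = "return(t)")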


I convinced myself that nothing this simple could be important (and that it went away with decimalization), and moved on to more productive activities, like trying to get indexing working on my time series class, or figuring out how to make some dude’s kd-tree library do what I wanted it to. I realized just today that this was a mistake, as other people have also seen the pattern, and think it’s cool enough to publish papers on. None other than Timothy Falcon Crack, bane of wannabe quants (and their employers) everywhere, was a coauthor of the first paper to overtly notice this phenomenon.

A slightly later epoch pre-decimalization

You can sort of see why this pattern would fade out with decimalization. If you’re trading in “pieces of 8” (aka 1/8ths of a dollar), price moves which don’t come in multiples of 1/8 are not possible. In other words, there are only 7 prices between $20 and $21, as opposed to 99 like there are now. Therefore you’d expect to see gaps in the lagged returns, which are just price ratios. Roughly speaking, if the average variance is small compared to the size of the tick, you’ll be able to see the pattern. At least that’s what most people seem to think. It seems weird that Berkshire Hathaway should be affected by this, but as it turns out, it had an effective tick size which was fairly large compared to daily motion, because the people trading it were lazy apes who wouldn’t quote prices at the market-defined tick size (which, even at 1/8ths, was very small compared to Berkshire Hathaway’s share price of several tens of thousands of dollars).
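You can manufacture the effect in a few lines (a toy demo of the explanation above): snap a random walk to eighths at a price level where the tick is large compared to daily moves, and the rays appear.

# Same lag plot, but with prices snapped to 1/8 dollar ticks on a $20 stock
# whose daily moves are only a tick or two: returns can take only a few
# discrete values, so the lag plot collapses onto rays through the origin.
set.seed(6)
p_true <- 20 * exp(cumsum(rnorm(2000, sd = 0.004)))   # small daily variance
p_tick <- round(p_true * 8) / 8                       # snap to eighths
ret <- diff(log(p_tick))
plot(ret[-length(ret)], ret[-1], pch = 20, cex = 0.4,
     xlab = "return(t-1)", ylab = "return(t)")        # compass rose rays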

Here you can still see some evidence of Compass Rose in the early decimalization era

One of the interesting implications of all this: if ticks are important enough to show up in a simple plot like this, what happens when you apply models which assume real numbers (aka virtually all models) to data which are actually integers? This is something I’ve wondered about since I got into this business. Anyone whose model returns values with many decimal places, when the thing being measured comes in integer ticks, should notice this. I don’t think this sort of issue has ever been resolved to anyone’s satisfaction; people just assume the generating process uses real numbers underneath and rounds to the nearest tick, sort of like trusting the floating point processor in your computer to do the right thing. The compass rose points out dramatically that you can’t really do that. It also demonstrates that, in a very real way, the models are wrong: they can’t reproduce this pattern. For example, what do you do when you’re testing for a random walk on something like this? Can it possibly be a random walk if the returns are probabilistically “loitering” at these critical angles? Does this bias the models we use? Smart people think it does. Traders don’t seem to worry about it.

Finally, the compass rose is completely gone in the more recent epoch of decimalization for Berkshire Hathaway series A

Some other guys have attempted to tease some dynamics out of the pattern. I’m not sure I buy the arguments, since I don’t understand their denoising techniques. Others (Koppl and Nardone) have speculated that “big players” like central banks cause this sort of effect by creating confusion, though I can’t for the life of me see why central bank interventions would cause these patterns in equities. Their argument seems sound statistically; it was done on the Rouble market during periods of credit money versus gold-backed money. Unfortunately, they never bother relating the pattern in the different regimes to central bank interventions, other than to notice they coincidentally seem to happen at the same times. That doesn’t make any sense to me. It’s a regression on two numbers.

My own guess, developed over a half day of thinking about this and fiddling with plots in R, is that these patterns arise from dealer liquidity issues and market dislocations. How?

  1. Human beings like round numbers. Machines don’t care. Lots of the market in ye olden pre-decimalization days was organized by actual human beings, like my pal Moe. Thus, even if there was no reason to pin a share at a round number, people often would anyway, because $22.00 is more satisfying than $22.13. Since liquidity peddling is now done by machines, most of which assume a random walk, I’d expect compass rose patterns to go away in cases where they persisted for a long time, like with the $100k Berkshire Hathaway preferred shares pictured above. Voila, I am right. At least in my one stock guess, though the effect can be seen elsewhere also.
  2. The plethora of machine-run strategies has made the market much more tightly coupled than it used to be. What does this mean? For example: at the end of the day, something like an ETF has to be marked to its individual components. One of the things which causes a burst of end-of-day trading is the battle between ETF traders trying to track an index and arbs trying to make a dollar off of them. Similarly with the volatility of the index. With all this going on, there isn’t much “inertia” pinning the closing price to a nice, round, human value. It was observed early on that indexes don’t follow the compass rose pattern, and it’s very easy to understand why if you think of it from the behavioral point of view: add together a lot of numbers, even if they’re mostly round numbers, and chances are high you will not get a round number as a result (especially if you weight the numbers, like in most indexes). You could look at the dissolution of this pattern over time as increasing the entropy of stock prices. High frequency traders make the market “hotter.” As such, the lovely crystalline compass rose pattern “melts” at higher temperatures, just like an ice cube in a glass of rum. With the Berkshire Hathaway preferred shares patterns above, you can see the pattern fading out as the machines take over: while some compass rose remains post-decimalization, it’s completely gone after 2006. You might see it at shorter time scales, however.

Relating it back to the Rouble analysis of Koppl and Nardone, I’d say they saw the compass rose in times of credit money simply because the market moved a lot slower than it did when it was based on gold. When it was credit money, there were effectively fewer people in the trade, and so, monkey preferences prevailed. When it was gold, there were lots of people in the trade, and the “end of day” for trading the Rouble was less meaningful, since gold was traded around the world.

One of the things that bothered me about the original paper is the insistence that one couldn’t possibly make money off of this pattern. I say people probably were making money off the pattern: mostly market makers. What is more, I posit that, where the pattern exists (on whatever time scale), one could make money probabilistically. What you’re doing here is like bidding on eBay. Everyone on eBay knows that it’s a win not to bid on round numbers, because the other apes will bid there. If you bid off the round number, you are more likely to win the auction. Similarly, if you’re a market maker, you might win the trade by bidding off the round number, giving the customer a slightly better price. Duh. My four hours’ worth of hypothesis would predict that thinly traded stocks which aren’t obviously important components of any index would continue to show this end-of-day pattern, since they won’t be as subject to electronic market making. And, in fact, that’s what I saw in the first one I checked, WVVI, which appears to be a small winery of some kind. Even in the most recent era, it has a decent compass rose evident. The second one I looked at, ATRO (a small aerospace company), similarly showed the compass rose during the 2001-2006 regime. I’m pretty sure there are simple ways to data mine for this pattern in the universe of stocks using KNN (a cruder screen is sketched below), though I don’t feel like writing the code to do it for a dumb blog post; someone’s grad student can look into it.
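If said grad student wants a head start, you don’t even need KNN for a first pass. A crude screen (my sketch; the 5 degree tolerance is pulled out of the air) is to measure how much of the lag plot’s angular mass piles up on the compass points.

# Fraction of lag-plot points within 5 degrees of a multiple of 45 degrees.
# Baseline for angle-uniform data is 10/45 ~ 0.22; tick-bound series score
# much higher because small integer tick moves land exactly on the rays.
rose_score <- function(ret, tol_deg = 5) {
  x <- ret[-length(ret)]; y <- ret[-1]
  keep <- x != 0 | y != 0                   # (0,0) pairs have no angle
  theta <- atan2(y[keep], x[keep]) * 180 / pi
  m <- theta %% 45                          # position within each 45-degree wedge
  mean(m < tol_deg | m > 45 - tol_deg)
}

set.seed(6)
p_tick <- round(20 * exp(cumsum(rnorm(2000, sd = 0.004))) * 8) / 8
rose_score(diff(log(p_tick)))        # tick-snapped walk: well above baseline
rose_score(rnorm(1999, sd = 0.004))  # continuous returns: near 0.22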


All of this is pure speculation after too much coffee, but it’s a very simple and evocative feature of markets which is deeper than I first thought. Maybe with some research one could actually use such a thing to look for trading opportunities (probably it’s just a bad proxy for “low volume”). Or maybe the excess coffee is making me crazy, and these patterns are actually meaningless. Nonetheless, in this silly little exercise, we can see the effects of the integer nature of money, behavioral economics, visible market microstructure on a daily time scale, and some very deep issues in the dynamics of financial instruments.
