Locklin on science

Notation as a tool for thought: Wavelets in J

Posted in J, statistical tools by Scott Locklin on November 14, 2014

I’ve recently had a need for Wavelets in my work in J. There is no wavelet toolbox in J, so, I wrote my own (to be released to pacman and github, eventually, along with a version of this blog as a “lab”). This exercise was extremely valuable in two ways. One, it improved my J-ing: I am, and will probably always remain, relatively weak of the J sauce, though this exercise leveled up my game somewhat. Two, it vastly improved my understanding of the discrete wavelet transform algorithm. Arthur Whitney said something to the effect that you can really understand a piece of code when it all fits on one page. Well, it turns out that’s very true.

Wavelets are filters recursively applied to a timeseries. In the DWT, you’re effectively recursively subsampling and filtering a time series with a low pass filter and a high pass filter. The resulting elements are at different levels, with a remaining low pass filtered time series at the end. The operations at each level are the same. This is the well-known “pyramid algorithm” of Stephane Mallat.

In C, the way you would do the individual levels is by allocating a vector with half the number of elements, then filling up each element with the correct subsampled, filtered values. In J, the way you do this is by projecting the elements and the filters in the correct way, then multiplying them out and getting the answer.  J allocates more memory to do this, but it’s something that could be mitigated if it becomes a problem. As is often the case with J, the runtime of the J version, without spending a moment to optimize for space or time, is quite comparable to that of the C version; for the wavelet transform, within about a factor of 2 of the C results. Here’s what a wavelet level calculation looks like in C. I took it from the excellent waveslim package in R, FWIW:



/* one level of the DWT pyramid:
   Vin: input series of length M; h, g: length-L high pass (wavelet) and
   low pass (scaling) filters; Wout, Vout: the M/2 wavelet and scaling outputs */
void dwt(double *Vin, int M, int L, double *h, double *g, 
	 double *Wout, double *Vout)
{

  int n, t, u;
  for(t = 0; t < M/2; t++) {
    u = 2 * t + 1;
    Wout[t] = h[0] * Vin[u];
    Vout[t] = g[0] * Vin[u];
    for(n = 1; n < L; n++) {
      u -= 1;
      if(u < 0) u = M - 1;   /* wrap around: periodic boundary */
      Wout[t] += h[n] * Vin[u];
      Vout[t] += g[n] * Vin[u];
    } 
  }
}

J doesn’t have functions; it has verbs. “dyad define” is for defining verbs which operate on two ‘nouns.’ The first one comes before the verb, the second one comes after the verb. They have implied names x (left noun) and y (right noun) inside the verb. “dyad define” is almost never used by J hackers, as it evaluates to the string ‘4 : 0’ (don’t ask me; you get used to it). Similarly the monad define in the wdict verb evaluates to ‘3 : 0’; monadic verbs only have the y (right) noun input. The other thing to remember about J: it evaluates from right to left, like Hebrew.
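
But first, a throwaway definition, just to fix those conventions in your head (sqdiff is a made-up name; it has nothing to do with wavelets):

sqdiff=: dyad define
 (x - y) * x + y   NB. the right hand x + y evaluates first, so this is x^2 - y^2
)

   5 sqdiff 3
16
   2 * 3 + 1
8

Without the parentheses, x - y * x + y would mean x - (y * (x + y)); right to left bites everybody at least once. Now, the following verb evaluates the compiled C function above, using J’s very cool FFI.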

LDWT=: '/path/to/dwt.so'   NB. where your compiled C lives
dwtc=: dyad define
 'hpf lpf'=.wdict x
 Newy=.((#y)%2)#(2.2-2.2)
 Wout=.((#y)%2)#(2.2-2.2)
 cmd=. LDWT, ' dwt n *d i i *d *d *d *d'
 cmd cd (y+2.2-2.2);(<.(#y));(#hpf);hpf;lpf;Wout;Newy
 Wout;Newy
)

For purposes of this exercise, wdict is a simple dictionary that returns the boxed high pass and low pass filters. Boxing is literally the same thing as a struct in C, though in J the use of structs ends up being quite different. To save typing, I know I can derive the HPF from the LPF for a given wavelet type, which I accomplish with the HpLp verb. To get the high pass filter from the low pass one, you reverse the filter and flip the sign of every other element. So, in J, remembering J executes from right to left: a monadic # with a noun to the right is the tally operation; it returns the number of elements. i. generates an integer index, so i. # builds the sequence 0, 1, … up to the tally of y. [: is the cap, which chains the i. # group of verbs into a sequential operation, preventing it from being evaluated as a fork. _1 is -1, * is dyadic times as it is in most programming languages, and |. is the “reverse/rotate” verb. ;~ does the magic which flips this result around and sticks it in a box alongside the original low pass filter, which is represented by ].


NB. get the high pass from the low pass, return a box list of both
HpLp =: ] ;~ |. * _1 ^ [: i. #
wdict=: monad define
 select. y
  case. 'haar' do.
   HpLp 2 # %: % ] 2
  case. 'db4' do.
   HpLp 0.482962913144534, 0.836516303737808, 0.224143868042013,  _0.12940952255126
  end.
)
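
A quick sanity check shows what HpLp hands back; on a toy integer vector the boxed result looks roughly like this:

   HpLp 1 2 3 4
+---------+-------+
|4 _3 2 _1|1 2 3 4|
+---------+-------+

The first box is the reversed, sign-flipped high pass version, the second is the untouched low pass input; wdict just does the same thing to the real filter coefficients.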

If you compile the C piece and run it with the J FFI on a 2^n long vector, you’ll get the right answer. But the J implementation is cooler, and runs within a factor of 4 of the C.


oddx=: ] {~ ([: (] #~ 0 1 $~ #) [: i. [: # ]) -/ [: i. [: # [
dwt=: dyad define
'hpf lpf'=.wdict x
 yvals=. hpf oddx y
 (yvals +/ . * hpf);yvals +/ . * lpf
)

Since we used “dyad define,” we know that it takes a left and a right noun argument (x and y in the verb). The way this one is used looks like this:


'db4'dwt myts

The same calling convention and comments apply to the C version. The real action of the dwt verb happens with oddx. oddx is a tacit verb. I wrote it this way for conciseness and speed. It’s not a very good tacit verb, because I am not a very good slinger of J sentences, but it does what I want it to. Unless someone smarter comes along and gives me a better idea, that’s what we’re stuck with. I am not going to give you a breakdown of how it works inside, but I will show you what it does, and how it helps encapsulate a lot of information into a very small place. oddx is a dyad; it takes x and y values. In tacit programming, the x’s and y’s are represented by [ and ] respectively, with [: (the cap described above) acting as a sort of spacer between chunks. I will resist explaining all the pieces; feel free to examine it in J 8’s excellent sentence dissector. I have already explained the i. verb. The values 0 through 15 are created by i.16. So, a useful way to examine what this verb does is to run it on something like i.16.

  (1,2)oddx i.16
 1  0
 3  2
 5  4
 7  6
 9  8
11 10
13 12
15 14

Huh. So, it’s taking the y input, which is 0:15, and making two columns, which are the odd and even indexed elements of y… You might guess that it is taking the size of the x noun and operating on the y input accordingly. You would guess correctly. Let’s try it with a bigger x:


  (1,2,3,4)oddx i.16
 1  0 15 14
 3  2  1  0
 5  4  3  2
 7  6  5  4
 9  8  7  6
11 10  9  8
13 12 11 10
15 14 13 12

Aha, now it is starting to make sense. Let’s try something bigger still:


  (i.6)oddx i.16
 1  0 15 14 13 12
 3  2  1  0 15 14
 5  4  3  2  1  0
 7  6  5  4  3  2
 9  8  7  6  5  4
11 10  9  8  7  6
13 12 11 10  9  8
15 14 13 12 11 10

Now you can see what it does. The columns alternate between odd and even values of y, each row is the previous one shifted along by two, and the longer x is, the further the wraparound reaches back into the end of y. Why did I do this? Well, a wavelet level is just these very elements, multiplied by the high pass and low pass filters. If you look at the last two lines of the dwt verb, you can see it:


 yvals=. hpf oddx y
(yvals +/ . * hpf);yvals +/ . * lpf
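
Before picking those two lines apart, it helps to run the whole verb on a toy input. With the Haar filter on i.8, the result looks roughly like this:

   'haar' dwt i. 8
+-----------------------------------+--------------------------------+
|0.707107 0.707107 0.707107 0.707107|0.707107 3.53553 6.36396 9.19239|
+-----------------------------------+--------------------------------+

The left box is the level 1 wavelet; the right box is the low pass filtered, decimated y.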

yvals contains the rotated elements. The next line contains two pieces. The one on the right is the rotated yvalues array multiplied (+/ . * is matrix multiplication) by the low pass filter. The invocation of oddx happens with hpf, but it could just as well take lpf; it is only using the size of the filter to construct the rotated matrix for multiplication. This is the wavelet decimated y, which you need to reconstruct the time series. The piece on the left is the rotated yvalues multiplied by the high pass filter. This is the level 1 wavelet. They are separated by a ‘;’ which puts them in “boxed” form (remember: typeless structs), so the decimated yvalues and the level 1 wavelet are kept separate. There is probably a way of making this a one liner using forks and such, but at my present state of enlightenment, it’s not obvious, and this suits me just fine for clarity and runtime’s sake. Now, almost nobody cares about the level 1 wavelets; usually you want a couple of levels of wavelets, so you need a way to recursively apply this guy to the results. I used the power conjunction.  Raul Miller fixed up my messy verb into a very clear adverb on the J-programming list.


dwtL=: adverb define
:
 'wn yn'=. m dwt y
  wn; (<:x) (m dwtL)^:(x>1) yn
)

You’d call it like this:

2 'db4' dwtL y

Adverbs return verbs based on modifiers. In this case the adverb returns a dyadic verb modified by m, which is the kind of wavelet (‘db4’ and ‘haar’ are the only kinds defined in the wdict monad above). The derived verb is a dyad: the left argument is the number of levels, the right is the time series. You can see dwt being used to calculate the decimated yn and the wavelet wn. The tricky piece is the last line. This is the power conjunction; the ^:(x>1) piece. Dyadic ‘>’ does what it does in regular programming languages (in this case anyway): it checks whether x is greater than 1, and the ^: power conjunction applies the verb again as long as this is true. More or less, this is a while loop. At the front of that last line, you can see wn being prepended to the output with a ; to box it. You can also see a (<:x), which decrements the level count x. So the verb keeps calling itself on the decimated series, knocking the level down by one each time, until it hits 1. The wavelet levels are prepended as you go, and you end up with a boxed set of all the wavelets from 1 to Level, with the decimated y stuck at the end. Once you grasp how the power conjunction works, this is a really obvious and sweet implementation of Mallat’s “pyramid algorithm” for calculating the levels of wavelets.
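
For instance, three levels of Haar on i.8 come back looking roughly like this:

   3 'haar' dwtL i. 8
+-----------------------------------+---+-------+-------+
|0.707107 0.707107 0.707107 0.707107|2 2|5.65685|9.89949|
+-----------------------------------+---+-------+-------+

Four boxes: the level 1, 2 and 3 wavelets, then the decimated y on the end.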

Taking the inverse wavelet transform is conceptually more difficult, since you’re stitching together the values of y with the different levels of wavelet.  There is more than one way to skin this cat, but the way I was thinking when I wrote these verbs was “how do I go from the smaller/subsampled values to the next level up in y in the reverse pyramid algorithm?”

First, I needed to rearrange my mother wavelets to do the inverse. I needed a verb like this:


filtrot =: [: |: [: |. (2 ,~ 2 %~ [: # [) $ [: |. ]

You can see what it does if you apply it to i.4 or i.8


filtrot i.4
1 3
0 2
filtrot i.8
1 3 5 7
0 2 4 6

Next, I need a verb which rearranges the values of the input such that it is aware of the shape of the rotated mother wavelet pattern. This was hairy, but I did it with the following verb, which takes the rotated wavelet as the x value, and the yval or wavelet level as the y value:

drot =:   ] {~ ([: i. [: # ]) +/ 0 , (_1 * [: # ]) + [: }. [: i. [: # [: {. [

You can see how it works like this


(filtrot i.4) drot i.8

0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 0

(filtrot i.6) drot i.8
0 1 2
1 2 3
2 3 4
3 4 5
4 5 6
5 6 7
6 7 0
7 0 1

As you can see, the pattern is to rotate the original values in the opposite direction from the one used in the forward wavelet transform.

Finally, I need to double these guys up, and element-wise multiply them by the correct number of rotated wavelets. For the double up part, ‘2 #’ is the correct verb. Notice it is # called dyadically. Dyadic overloadings of monadic verbs are confusing at first, but you get used to it.


2 # (filtrot i.4) drot i.8
0 1
0 1
1 2
1 2
2 3
2 3
3 4
3 4
4 5
4 5
5 6
5 6
6 7
6 7
7 0
7 0

Putting it all together with the element wise multiplication, we use the verb “reducer.” I’m getting tired of explaining what all the pieces do, so I’ll give the explicit form next to the tacit one (13 : converts explicit to tacit) to show all the implied x’s and y’s, and point you to the documents for $/shape:


13 : ' (2 # (x drot y)) * ((2*#y)$x)'
reducer=: (2 # drot) * [ $~ 2 * [: # ]

Finally combining into the following:


idwt=: dyad define
'wv yl' =. y
'hpf lpf'=. filtrot each wdict x
+/"1 (lpf reducer yl) + hpf reducer wv
)

So, the inverse transform runs the reducer on the wavelet and on the decimated y, and then adds the two together.
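
As a sanity check, feeding a one-level decomposition straight back in hands back the original series (up to floating point fuzz that J’s default display hides):

   'haar' idwt 'haar' dwt i. 8
0 1 2 3 4 5 6 7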

The real sweet part is running the inverse on all the levels. When I first encountered the / adverb, I thought it was sort of like “apply.” This is wrong; what it does exactly is stick a copy of the verb which comes before it in between all the values of what comes after it. Since you’ve stacked all these wavelets together in a convenient way, you can use the / adverb to stick the inverse transform operator between all the wavelets. You need a few extra pieces to make it evaluate properly, but since J evaluates from right to left, and the decimated values are on the far right, this is a very satisfying inverse.


idwtL=: dyad  define
idw =. [: < [: x&idwt ,
> idw/y
)
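
The multi-level round trip works the same way; something like:

   w =. 3 'haar' dwtL i. 8
   'haar' idwtL w    NB. hands back 0 1 2 ... 7, up to floating point fuzz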

I’m sure dwtL and idwtL could be compressed into a one liner, and it could be read by someone familiar enough with tacit J programming. But this is a good level of abstraction for me. It’s remarkable that the performance is as good as it is, as I basically just wrote down the first verbs which occurred to me (the inverse is fairly inefficient: OCaml speed instead of C speed). I’ve used wavelets for years; they’re very handy things for signal processing, and I have even written code to calculate them before. Writing them down in J really brought my understanding of what they are to a new level. In an ideal world, people would write algorithms in this compressed form, so that mathematicians, engineers and computer scientists could communicate ideas more clearly and concisely. Alas, Iverson’s dream of notation as a tool of thought hasn’t yet caught on. Maybe some day. While something like J is confusing to people who have been programming for years, there is nothing inherently difficult about it. Various educators have been teaching J to kids in grammar school. It’s actually very simple compared to something like C, and since it has many similarities to ordinary written language, for kids it might even be simpler. Maybe some day.

Neglected machine learning ideas

Posted in machine learning, statistical tools, tools by Scott Locklin on July 22, 2014

This post is inspired by the “metacademy” suggestions for “leveling up your machine learning.” They make some halfway decent suggestions for beginners.  The problem is, these suggestions won’t give you a view of machine learning as a field; they’ll only teach you about the subjects of interest to authors of machine learning books, which is different. The level-3 and level-4 suggestions they make are not super useful either: they just reflect the tastes of the author.

The machine learning literature is vast, techniques are bewilderingly diverse, multidisciplinary and seemingly unrelated. It is extremely difficult to know what is important and useful. While “metacademy” has the horse sense to suggest reading some books, the problem is, there is no book which can even give you a survey of what is available, or make you aware of things which might be helpful. The best guide for the perplexed, in my not at all humble opinion, is Peter Flach’s introductory text, “Machine Learning: the Art and Science of Algorithms that Make Sense of Data” which at least mentions some of the more obscure techniques, and makes pointers to other resources. Most books are just a collection of the popular techniques. They all mention regression models, logistic regression, neural nets, trees, ensemble methods, graphical models and SVM type things. Most of the time, they don’t even bother telling you what each technique is actually good for, and when you should choose one over the other for an approach (Flach does; that’s one of many reasons you should read his book). Sometimes I am definitely just whining that people don’t pay enough attention to the things I find interesting, or that I don’t have a good book or review article on the topic. Sleep deprivation will do that to a man. Sometimes I am probably putting together things that have no clearly unifying feature, perhaps because they’re “not done yet.” I figure that’s OK, subjects such as “deep learning” are also a bunch of ideas that have no real unifying theme and aren’t done yet; this doesn’t stop people from writing good treatments of the subject. Perhaps my list is a “send me review articles and book suggestions” cry for help, but perhaps it is useful to others as an overview of neat things.


Stuff I think is egregiously neglected in books and in academia, in an unranked, semi-clustered listing below:

 

Online learning: not the “Khan academy” kind, the “exposing your learners to data, one piece at a time, the way the human brain works” kind. This is hugely important for “big data” and timeseries, but there are precious few ML texts which go beyond mentioning the existence of online learning in passing. Almost all textbooks concentrate on batch learning. Realistically, when you’re dealing with timeseries or very large data sets, you’re probably doing things online in some sense. If you’re not thinking about how you’re exposing your learners to sequentially generated data, you’re probably leaving information on the table, or overfitting to irrelevant data. I can think of zero books which are actually helpful here. Cesa-Bianchi and Lugosi wrote a very interesting book on some recent proofs for online learners and “universal prediction” which strike me as being of extreme importance, though this is a presentation of new ideas, rather than an exposition of established ones.  Vowpal Wabbit is a useful and interesting piece of software with OK documentation, but there should be a book which takes you from online versions of linear regression (they exist! I can show you one!) to something like Vowpal Wabbit. Such a book does not exist. Hell, I am at a loss to think of a decent review article, and the subject is unfortunately un-googleable, thanks to the hype over the BFD of “watching lectures and taking tests over the freaking internets.” Please correct me if I am wrong: I’d love to have a good review article on the subject for my own purposes.
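
Since I teased it, here is about the simplest possible online learner: one stochastic gradient step of linear regression, sketched in J (the verb name and the 0.01 learning rate are made up for illustration; this is not from any library):

sgdstep=: dyad define
 'xi yi'=. y                        NB. one observation: features ; target
 x + 0.01 * xi * yi - x +/ . * xi   NB. nudge the weights x toward this example
)

   w=. 0 0
   w=. w sgdstep (2 3) ; 7
   w
0.14 0.21

You feed it one observation at a time, keep the updated weights, and throw the observation away; that is the whole game, and it works on data which will never fit in memory.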


Reinforcement learning: a form of online learning which has become a field unto its own. One of the great triumphs of machine learning is teaching computers to win at Backgammon. This was done via a form of reinforcement learning known as TD-learning. Reinforcement learning is a large field, as it has been used with great success in control systems theory and robotics. The problem is, the guys who do reinforcement learning are generally in control systems theory and robotics, making the literature impenetrable to machine learning researchers and engineers. Something oriented towards non-robotics problems would be nice (Sutton and Barto doesn’t suffice here; Norvig’s chapter is the best general treatment I have thus far seen). There are papers on applications of the idea to problems which do not involve robots, but none which unify the ideas into something comprehensible and utile to an ML engineer.
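
For the flavor of it, the tabular TD(0) value update (the simplest member of the TD-learning family mentioned above) is a couple of lines of J; the verb name, the 0.1 learning rate and the 0.9 discount are all made up for illustration:

td0=: dyad define
 'ss r sn'=. x                                           NB. transition: state ; reward ; next state
 ((ss { y) + 0.1 * (r + 0.9 * sn { y) - ss { y) ss } y   NB. move V(ss) toward r + 0.9 * V(sn)
)

   v=. 5 $ 0
   v=. (2 ; 1 ; 3) td0 v
   v
0 0 0.1 0 0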


“Compression” sequence prediction techniques: this is another form of online learning, though it can also be done in batch mode. We’re all familiar with this; when google tries to guess what you’re going to search for, it is using a primitive form of this called the Trie. Such ideas are related to standard compression techniques like LZW, and have deep roots in information theory and signal processing. Really, Claude Shannon wrote the first iterations of this idea. I can’t give you a good reference for this subject in general, though Ron Begleiter and friends wrote a very good paper on some classical compression learning implementations and their uses. I wrote an R wrapper for their Java lib if you want to fool around with their tool. Boris Ryabko and son have also written numerous interesting papers on the subject. Complearn is a presumably useful library which encapsulates some of these ideas, and is available everywhere Linux is sold. Some day I’ll expound on these ideas in more detail.


Time series oriented techniques in general: a large fraction of industry applications have a time component. Even in marketing problems dealing with survival techniques, there is a time component, and you should know about it. In situations where there are non-linear relationships in the time series, classical regression and time-series techniques will fail. In situations where you must discover the underlying non-linear model yourself, well, you’re in deep shit if you don’t know some time-series oriented machine learning techniques.  There was much work done in the 80s and 90s on tools like recurrent ANNs and feedforward ANNs for starters, and there has been much work in this line since then. There are plenty of other useful tools and techniques.  Once in a while someone will mention dynamic time warping in a book, but nobody seems real happy about this technique.  Many books mention Hidden Markov Models, which are important, but they’re only useful when the data is at least semi-Markov, and you have some idea of how to characterize it as a sequence of well defined states. Even in this case, I daresay not even the natural language recognition textbooks are real helpful (though Rabiner and Juang is OK, it’s also over 20 years old). Similarly, there are no review papers treating this as a general problem. I guess we TS guys are too busy raking in the lindens to write one.

Conformal prediction: I will be surprised if anyone reading this has even heard of conformal prediction. There are no wikipedia entries. There is a website and a book. The concept is simple: it would be nice to put well motivated error bars on a machine learning prediction. If you read the basic books, stuff like k-fold cross validation and the jackknife trick are the entire story. OK, WTF do I do when my training is online? What do I do in the presence of different kinds of noise? Conformal prediction is a step towards this, and hopefully a theory of machine learning confidence intervals in general. It seems to mostly be the work of a small group of researchers who were influenced by Kolmogorov, but others are catching on. I’m interested. Not interested enough to write one, as of yet, but I’d sure like to play with one.


ML in the presence of lots of noise: The closest thing to a book on it is the bizarro (and awesomely cool) “Pattern Theory: The Stochastic Analysis of Real World Signals” by Mumford and Desolneux, or perhaps something in the corpus of speech recognition and image processing books. This isn’t exactly a cookbook or exposition, mind you: more of a thematic manifesto with a few applications.  Obviously, signal processing has something to say about the subject, but what about learners which are designed to function usefully when we know that most of the data is noise?  Fields such as natural language processing and image processing are effectively ML in the presence of lots of noise and confounding signal, but the solutions you will find in their textbooks are specifically oriented to the problems at hand.  Once in a while something like vector quantization will be reused across fields, but it would be nice if we had an “elements of statistical learning in the presence of lots of noise” type book or review paper. Missing in action, and other than the specific subfields mentioned above, there are no research groups which study the problem as an engineering subject. New stuff is happening all the time; part of the success of “Deep Learning” is attributable to the Drop Out technique to prevent overfitting. Random forests could be seen as a technique which genuflects at “ML in the presence of noise” without worrying about it too much. Marketing guys are definitely thinking about this. I know for a fact that there are very powerful learners for picking signal out of shitloads of noise: I’ve written some. It would have been a lot easier if somebody wrote a review paper on the topic. The available knowledge can certainly be systematized and popularized better than it has been.


Feature engineering: feature engineering is another topic which doesn’t seem to merit any review papers or books, or even chapters in books, but it is absolutely vital to ML success. Sometimes the features are obvious; sometimes not. Much of the success of machine learning is actually success in engineering features that a learner can understand. I daresay document classification would be awfully difficult without tf-idf representation of document features. Latent Dirichlet allocation is a form of “graphical model” which works wonders on such data, but it wouldn’t do a thing without tf-idf. [correction to this statement from Brendan below] Similarly, image processing has a bewildering variety of feature extraction algorithms which are of towering importance for that field; the SIFT descriptor, the GIST and HOG descriptors, the Hough transform, vector quantization, tangent distance [pdf link]. The Winner Take All hash [pdf link] is an extremely simple and related idea… it makes a man wonder if such ideas could be used in higher (or lower) dimensions. Most of these engineered features are histograms in some sense, but just saying “use a histogram” isn’t helpful. A review article or a book chapter on this sort of thing, thinking through the relationships of these ideas, and helping the practitioner to engineer new kinds of feature for broad problems would be great. Until then, it falls to the practitioner to figure all this crap out all by their lonesome.
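
For the curious, the tf-idf weighting itself is tiny; here is a rough sketch in J for a document-by-term count matrix (a toy formulation of my own, one of several variants in use):

tfidf=: monad define
 tf=. y % +/"1 y              NB. term frequency within each document
 idf=. ^. (# y) % +/ y ~: 0   NB. log of: number of documents over documents containing the term
 tf *"1 idf                   NB. scale each document's frequencies by idf
)

The formula is the easy part; the engineering is in noticing which counts to take in the first place.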


Unsupervised and semi-supervised learning in general: almost all books, and even tools like R inherently assume that you are doing supervised learning, or else you’re doing something real simple, like hierarchical clustering, kmeans or PCA.  In the presence of a good set of features, or an interesting set of data, unsupervised techniques can be very helpful. Such techniques may be crucial. They may even help you to engineer new features, or at least reduce the dimensionality of your data. Many interesting data sets are only possible to analyze using semi-supervised techniques; recommendation engines being an obvious beneficiary of such tricks. “Deep learning” is also connected with unsupervised and semi-supervised approaches. I am pretty sure the genomics community does a lot of work with this sort of thing for dimensionality reduction. Supposedly Symbolic Regression (generalized additive models picked using genetic algorithms) is pretty cool too, and it’s in my org-emacs TODO lists to look at this more. Lots of good unsupervised techniques such as Kohonen Self Organizing Maps have fallen by the wayside. They’re still useful: I use them. I’d love a book or review article which concentrates on the topic, or just provides a bestiary of things which are broadly unsupervised. I suppose Olivier Chapelle’s book is an OK start for semi-supervised ideas, but again, not real unified or complete.

Images by one of my heroes, the Ukrainian-American artist Boris Artzybasheff. You can find more of his work here.