Locklin on science

On the Empire of the Ants

Posted in brainz, information theory by Scott Locklin on July 2, 2013

The internet is generally a wasteland of cat memes and political invective. Once in a while it serves its original purpose of disseminating new ideas. I stumbled across Boris Ryabko's little corner of the web while researching compression learning algorithms (which, BTW, are much more fundamental and important than crap like ARIMA). In it, I found one of the nicest little curiosity-driven papers I've come across in some time. Ryabko and his coworker, Zhanna Reznikova, measured the information processing abilities of ants and the information capacity of ant languages. Download it here. There was also a plenary talk at an IEEE conference you can download here.

In our degenerate age, where people think cell phone apps are innovations, it is probably necessary to explain why this is a glorious piece of work. Science is an exercise in curiosity about nature. It is a process. It sometimes involves complex and costly apparatus, or the resources of giant institutes. Sometimes it involves looking at ants in an ant farm, and knowing some clever math. Many people are gobsmacked by the technological gizmos used to do science. They think the giant S&M dungeons of tokamaks and synchrocyclotrons are science. Those aren't science; they're tools. The end product, the insights into nature: that is what is important. Professors Ryabko and Reznikova did something a kid could understand the implications of, but no kid could actually do. The fact that they did it at all indicates they have the child-like curiosity and love for nature that is the true spirit of scientific enquiry. As far as I am concerned, Ryabko and Reznikova are real scientists. The thousands of co-authors on the Higgs paper are able technicians, I am sure, but their contributions are a widow's mite to the gold sovereign of Ryabko and Reznikova.

Theory: ants are smart, and they talk with their antennae. How smart are they, and how much information can they transfer with their antennae language? Here’s a video of talking ants from Professor Reznikova’s webpage:

Experiment: to figure out how much information they can transfer, starve some ants (hey, it's for science), stick some food at a random place in a binary tree maze, and see how fast they can tell the other ants about it. Here's a video clip of the setup. Each fork in the path of a physical binary tree represents 1 bit of information, just as it does on your computer. Paint the ants so you know which is which. When a scout ant finds the food, you remove the maze and put an identical one in its place, so the foragers can't simply sniff out the ant trails or the food. This way, the only way for the other ants to find the branch with the food is actual ant communication. Time the ant communication between the scout ant and the other foragers (it takes longer than 30 seconds, apparently). Result: F. sanguinea can transmit around 0.74 bits a minute; F. polyctena can do 1.1 bits a minute.
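
For concreteness, here is a back-of-envelope sketch of what the rate estimate amounts to; the maze depth and contact time below are invented numbers for illustration, not data from the paper.

```python
# Minimal sketch of the rate estimate, with made-up illustrative numbers
# (not the authors' data): each fork of the binary tree is one bit, and the
# rate is bits divided by minutes of antenna contact.
def transmission_rate(maze_depth_bits, contact_minutes):
    return maze_depth_bits / contact_minutes

# e.g. a 6-fork maze communicated in about 8 minutes of scout-forager contact:
print(transmission_rate(6, 8.0))   # ~0.75 bits per minute
```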

Experiment: to figure out if ants are smart, see if they can pass on maze information in a compressed way. LRLRLRLRLRLR is a lot simpler, in an information theoretical sense, than an equally long random sequence of lefts and rights. Telephone transmission and MP3 players have this sort of compression baked into them to make storage and transmission more efficient. If ants can communicate directions for a regular maze faster than for a random one, they're kind of smart. Result: in fact, this turns out to be the case.
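
A toy version of the information-theoretic point (not the authors' method): a general-purpose compressor squashes a regular turn sequence far more than a random one of the same length.

```python
# Toy illustration: a periodic turn sequence compresses far better than a
# random one of the same length. zlib stands in for "an ideal compressor".
import random
import zlib

regular = ("LR" * 2048).encode()                            # LRLRLR..., 4096 turns
random.seed(0)
noisy = "".join(random.choice("LR") for _ in range(4096)).encode()

print(len(zlib.compress(regular, 9)), len(zlib.compress(noisy, 9)))
# The regular sequence collapses to a few dozen bytes; the random one
# stays near its entropy limit of roughly 4096 bits = 512 bytes.
```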

Experiment: to find out if ants are smart, see if they can count. Stick them in a comb- or hub-shaped maze where there is food at the end of one of 25 or more forks (you can see some of the mazes here). The only way the poor ant can tell other ants about it is to say something like "seventeenth one to the left," or, in one of the variants of this experiment, something more like "3 over from the one the crazy Russian usually puts the food in." Yep, you can see it plain as pie in the plots: ants have a hard time explaining "number 30" and a much easier time saying "two over from the one the food is usually in." Ants can do math.
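
One hedged way to see why the shortcut message is cheaper, in Shannon-coding terms (the probabilities below are invented for illustration, nothing is taken from the paper): a branch with prior probability p costs about -log2(p) bits to name.

```python
# Invented-prior illustration: naming one of 30 equally likely branches costs
# about log2(30) ~ 4.9 bits, while "two over from the branch the food is
# usually in" is a high-probability message and so costs far fewer bits.
from math import log2

n_branches = 30
cost_uniform = log2(n_branches)       # no prior knowledge: ~4.9 bits
p_near_usual = 0.25                   # assumed prior for a branch near the "usual" one
cost_shortcut = -log2(p_near_usual)   # ~2 bits
print(cost_uniform, cost_shortcut)
```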

The power of information theory is not appreciated as it should be. We use the products of it every time we fire up a computer or a cell phone, but it is applicable in many areas where a mention of “Shannon entropy” will be met with a shrug. Learning about the Empire of the Ants is just one example.

People in the SETI project are looking for alien ham radios on other planets. I've often wondered why people think they'll be able to recognize an alien language as such. Sophisticated information encoding systems look an awful lot like noise. The English language isn't a particularly sophisticated encoding system; its compressibility indicates this. If I were an alien, I might use very compressed signals (sort of like we do with some of our electronic communications), and they might look an awful lot like noise.
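
Here's a rough way to see the point, using nothing more than a byte-frequency estimate of entropy: the compressed version of a redundant message has a much flatter, more noise-like distribution than the original.

```python
# Rough demo that compressed output looks noise-like: compare the empirical
# bits-per-byte of a redundant English-ish string with that of its zlib-
# compressed form. 8.0 bits/byte would be indistinguishable from random bytes.
import math
import zlib
from collections import Counter

def bits_per_byte(data):
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = ("the internet is a wasteland of cat memes and political invective " * 200).encode()
print(bits_per_byte(text), bits_per_byte(zlib.compress(text, 9)))
# The raw text sits around 4 bits/byte; the compressed stream is noticeably
# flatter, i.e. closer to random noise.
```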

We have yet to communicate with dolphins. We're pretty sure they have interesting things to say, via an information theoretical result called Zipf's law (though others disagree, it seems likely they're saying something pretty complex). There are better techniques than Zipf's law for "decompressing" dolphin vocalizations: I use some of them looking for patterns in economic systems. Unfortunately, marine biologists are usually not current with information theoretical tools, and the types of people who are familiar with such tools are busy working for the NSA and Rentech. Should I ever make my pile of dough and retire, I'll hopefully have enough loot to strap a couple of tape recorders to some dolphins. It seems like something worth doing.
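
For the curious, the Zipf-style check is easy to sketch, assuming you already have a tokenized recording (the toy token list below is just English words standing in for categorized dolphin whistles):

```python
# Sketch of a Zipf's-law check: rank distinct tokens by frequency and fit the
# slope of log(frequency) against log(rank). Human languages come out near -1.
from collections import Counter
from math import log

def zipf_slope(tokens):
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [log(r) for r in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Stand-in tokens; a real run would use whistle categories from a recording,
# and far more of them than this toy example.
print(zipf_slope("the quick brown fox jumps over the lazy dog the fox".split()))
```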

The beautiful result of Ryabko and Reznikova points the way forward: a low budget, high concept experiment, done with stopwatches, paint and miniature plastic ant habitrails, produced genuine insight into insect intelligence. It is such a simple experiment, anyone with some time and some ants could have done it! This sort of "small science" seems rare these days; people are more interested in big budget things designed to answer questions about minutiae, rather than interesting things about the world around us. I don't know if we have the spirit to do such "small science" in America any longer. American scientists seem like bureaucratized lemmings, hypnotized by budgets, much like the poor ants are hypnotized by sugar water. An experiment of this Rube Goldberg nature could only come from a nation of curious tinkerers, something we no longer seem to have here.

Dolphin language could have been decoded decades ago. While it is sad that such studies haven’t been done yet, it leaves open new frontiers for creative young scientists today. Stop whining about your budget and get to work!

You’re smarter than you think

Posted in brainz by Scott Locklin on May 18, 2010

One of the more interesting measures in nature is entropy.

Entropy is defined the same way in statistical physics and information theory. That's one of the great discoveries of Claude Shannon. As you may or may not know, entropy is a measure of disorder. It can be disorder in positions and momenta of molecules, or disorder (noise) in signals on a transmission wire. It's one of those little wonders that the same mathematical object governs both the behavior of steam engines and the behavior of information in computational machines and telegraph wires. It's actually not much of a wonder once you understand the math, but stepping back and looking at it in the "big picture" makes it pretty amazing and glorious. The first time I figured it out, I am pretty sure I frothed at the mouth. Heat is a measure of disorderliness in information? Hot dang! I can't convey the spectacularness of it all, as it requires an undergrad degree in physics to really grok it, but I will give it the old college try anyhow.
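
Written out side by side (a textbook identity, nothing specific to this post), the two definitions differ only by Boltzmann's constant and the base of the logarithm:

```latex
% Gibbs/Boltzmann entropy over microstate probabilities p_i,
% Shannon entropy over symbol probabilities p_i:
S = -k_B \sum_i p_i \ln p_i,
\qquad
H = -\sum_i p_i \log_2 p_i \quad \text{(bits)}
```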

People who get degrees in computer science like to think of computers as "Turing machines." I suspect this is because it is easier to prove theorems about Turing machines than Von Neumann machines. That's fine by me, though I will note in passing that I have never read a proof (or anything like a proof) that VN machines like the ones you write code on are computationally equivalent to Turing machines. Someone smart, please point me to the proof if it exists. We will assume for the moment that they are equivalent; it isn't important if you don't believe it. I, personally, like to think of universal computers as shift maps. A shift map takes a string of symbols and shifts them around. Or, if you want to think in terms of Mr. George Boole, I think of a computer as a bunch of NOR (or NAND, it doesn't really matter) gates. NOR gates flip bits. It's not really important that you believe you can build a computer of any given complexity out of NANDs and NORs; the important thing is that you believe that by taking two input bits, operating on them, and getting one output bit, you can build a universal computational machine. It's quite true; I wouldn't lie to you. Trust me; I'm a scientist.
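
To make the "two bits in, one bit out" point concrete, here is a minimal sketch (plain Python standing in for hardware) of ordinary gates built from NAND alone:

```python
# Every ordinary logic gate can be wired from NAND alone (NOR works the same way).
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

# Truth-table check against Python's own bitwise operators:
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```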

Important fact to get from this picture: two inputs, one output

It is crucial to note here that these are not reversible operations. One of the bits goes away. You can't take the one-bit output and reconstruct the two bits of input, since several input combinations map to the same output. This is sort of the crux of the whole argument: what happens when the bit goes away? If you believe in the statistical definition of entropy, you believe that entropy is Boltzmann's constant times the natural log of the number of accessible states. The number of states at the output of a NAND or NOR gate is half the number at its inputs, so the logical entropy changes by -k*log(2), and that entropy has to be dumped into the environment. So every time you do a calculation, a bit dies, and disorder increases by k*log(2). Most of you who are still reading have glazed eyes by now. Who gives a pair of foetid dingo's kidneys, right? Well, once you know the entropy increase, you know the minimum dissipated heat. k is 1.38*10^-23 joules per kelvin. If you are dissipating the heat at room temperature (about 300 kelvin), each bit that dies releases about 3*10^-21 joules of heat. Stop and think about this for a second. For every bit operation done in a computer, I know the minimum physically possible amount of heat dissipated. Assuming a 64 bit calculation destroys 64 bits (an underestimate for almost any kind of calculation), a 64 bit op costs at least 1.84*10^-19 joules.
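
The arithmetic is easy to check; here is the same calculation spelled out, using only the constants quoted above:

```python
# Numeric check of the Landauer figures quoted above.
import math

k_B = 1.38e-23          # Boltzmann's constant, joules per kelvin
T = 300.0               # room temperature, kelvin

energy_per_bit = k_B * T * math.log(2)      # ~2.9e-21 J, the "3*10^-21 joules" above
energy_per_64bit_op = 64 * energy_per_bit   # ~1.84e-19 J
print(energy_per_bit, energy_per_64bit_op)
```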

What good is this, you might ask? Well, consider: the average human's resting heat dissipation is something like 2000 kilocalories per day. Making a rough approximation, assume the brain dissipates 1/10 of this: 200 kilocalories per day. That works out (you do the math) to about 9.5 joules per second. This puts an upper limit on how much the brain can calculate, assuming it is an irreversible computer: about 5*10^19 64 bit ops per second.
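
Spelling out that arithmetic too (the 1/10-of-2000-kilocalories figure is the rough assumption from the text, not a measurement):

```python
# Back-of-envelope check of the brain's Landauer-limited compute budget.
KCAL = 4184.0                                # joules per kilocalorie
SECONDS_PER_DAY = 86400.0

brain_watts = 200 * KCAL / SECONDS_PER_DAY   # ~9.7 J/s, the "9.5 joules per second" above
landauer_64bit_op = 1.84e-19                 # joules per irreversible 64 bit op at 300 K
print(brain_watts / landauer_64bit_op)       # ~5e19 ops per second
```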

Considering all the noise various people make over theories of consciousness and artificial intelligence, this seems to me a pretty important number to keep track of. Assuming a 3GHz Pentium actually does 3 billion calculations per second (it can't), it is about 17 billion times less powerful than this upper bound on a human brain. Even if brains run at only 1/1000 of their theoretical efficiency, computers need to get 17 million times quicker to beat a brain. There are actually indications that brains are very close to their theoretical efficiencies; information transmission in nervous systems, for instance, appears to happen very close to the theoretical limit. Too bad we don't know how calculation takes place in your noggin.

I leave it as an exercise to the reader to calculate when Moore’s law should give us a computer as powerful as a human brain (though Moore’s law appears to have failed in recent years). I leave it as another exercise to determine if this happens before the feature size on a piece of silicon approaches that of an atom (in which case, it can no longer really be a feature at room temperature).

Penrose thinks brains are super-Turing quantum gravitic computers. Most people think he is full of baloney, but you never know. I am pretty sure quantum computers get around the Landauer limit. People like Christopher Moore, Michel Cosnard and Olivier Bournez have also shown that analog computers are potentially vastly more powerful than digital ones, though they still don't get around the thermodynamic limit.


Antikythera mechanism: an early analog computer

Incidentally, if you google Landauer entropy (that log(2) thing), you will find many people who don't know what they are talking about who think they have refuted this bone-simple calculation. All they have done is (re)discovered reversible computing (aka Toffoli and Fredkin) without admitting or realizing it. Such people must also believe in perpetual motion machines, as it is the exact same logical error.

Reversible computing is, theoretically, infinitely powerful. It is also an inadvertent restatement of what "chaos" means, not to mention a restatement of what heat means. In reversible computing, you need to keep around all the bits which would be destroyed in irreversible computing. That's a lot of semi-random bits. What are they, really, but the heat that your computer dissipates when it does a calculation?
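
A minimal sketch of what "keeping the bits around" looks like, using the Toffoli gate the Toffoli/Fredkin school is known for (plain Python as a stand-in for reversible hardware):

```python
# A Toffoli (controlled-controlled-NOT) gate computes NAND without destroying
# information: three bits in, three bits out, and the map is a bijection.
def toffoli(a, b, c):
    # Flip c if and only if both a and b are 1.
    return a, b, c ^ (a & b)

# Bijection check: all eight input triples map to eight distinct outputs.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert len({toffoli(*t) for t in inputs}) == len(inputs)

# Set c=1 and the third output bit is NAND(a, b), with a and b carried along:
print(toffoli(1, 1, 1))   # (1, 1, 0): NAND(1,1)=0, inputs preserved
```

The price of that bijection is exactly the pile of carried-along bits the paragraph above describes.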