Locklin on science

RNA memory hypothesis

Posted in brainz, Open problems by Scott Locklin on February 3, 2021

There’s an old theory that memory is actually encoded, at least in part, in RNA. The argument is pretty simple: there’s no obvious way for all that sensory data to be captured in synapses as long term memories, yet long term memories obviously exist and are fairly reliable. RNA, unlike synapses, is energy efficient, redundant and persistent, which is consistent with what we observe about brains in day to day life.

You’d think with all the neuroscientists running around these days, this would have been eliminated from serious consideration by now, but the opposite is true. There’s actually been a little bit more experimental evidence indicating it might be true. People have allegedly transferred memories between snails, planaria, sea slugs, and there are accounts of people “inheriting” memories after organ transplants. It’s entirely possible that all of these are the result of poor experimental hygiene and wishful thinking, and there’s nothing really there, but they sure are evocative, and it seems like people should be interested in sorting this out, or finding simpler models which have hopes of sorting it out.

I ran across this idea again while reading a Ron Maimon screed on physics stack exchange. It’s a pretty good screed, worth reading (thanks Laeeth):

Highlight excerpted for the lazy:

RNA ticker tape

It is clear that there is hidden computation internal to the neurons. The source of these computations is almost certainly intracellular RNA, which is the main computational workhorse in the cell.

The RNA in a cell is the only entity which is active and carries significant bit density. It can transform by cutting and splicing, and it can double bind to identify complementary strands. These operations are very sensitive to the precise bit content, and allow rich full computation. The RNA is analogous to a microprocessor.

In order to make a decent model for the brain, this RNA must be coupled to neuron level electrochemical computation directly. This requires a model in which RNA directly affects what signals come out of neurons.

I will give a model for this behavior, which is just a guess, but a reasonable one. The model is the ticker-tape. You have RNA attached to the neuron at the axon, which is read out base by base. Every time you hit a C, you fire the neuron. The receiving dendrite then writes out RNA constantly, and writes out a T every time it receives a signal. The RNA is then read out by complementary binding at the ticker tape, and the RNA computes the rest of the thing intracellularly. If the neuron identifies the signal in the received RNA, it takes another strand of RNA and puts it on the membrane, and reads this one to give the output.

The amount of memory in the brain is then the number of bits in the RNA involved, which is about a gigabyte per cell. There are hundreds of billions of cells in the brain, which translates to hundreds of billions of gigabytes. The efficiency of memory retrieval and modification is a few ATP’s per bit, with thousands of ATP’s used for long-range neural communication only.

The brain then becomes an internet of independent computers, each neuron itself being a sizable computer itself.

 

This is a pretty exciting idea, and there are several near relatives. There are protein kinases involved in mRNA transcription and immunology which are candidates for memory as well. Functionally they’re all kind of similar: the idea is that long term memory is chemical and exists at the sub-cellular level.
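
Just to make the ticker-tape picture concrete, here is a toy sketch of the read-out idea from the quote above. It is not a claim about real biochemistry; the “fire on C” rule and the “write a T per spike” rule are cartoon versions of what Maimon describes, and everything else (lengths, randomness, the filler base) is invented for illustration.

```python
# Toy rendering of the ticker-tape idea, for illustration only.
import random

BASES = "ACGT"

def transmit(tape):
    """Read the sender's RNA tape base by base; emit a spike on every 'C'."""
    return [base == "C" for base in tape]

def record(spike_train):
    """The receiving cell writes a 'T' for each spike, an 'A' otherwise
    (the 'A' filler is my arbitrary choice)."""
    return "".join("T" if spike else "A" for spike in spike_train)

sender_tape = "".join(random.choice(BASES) for _ in range(40))
spikes = transmit(sender_tape)
receiver_tape = record(spikes)

print(sender_tape)
print(receiver_tape)
print(f"{sum(spikes)} spikes fired over {len(sender_tape)} bases")
```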

Mechanisms are known to exist. If RNA is the persistence substrate, you’d expect there to be something like a nucleotide gated channel in the brain, so it can talk to the signal processing components of the brain. There is one, starting with the olfactory system, which is known to be associated with memory. Such RNA gated channels are also important in the hippocampus, the master organ of memory in the brain. Furthermore, it’s entirely possible that the glial cells have something to do with it; their function is still poorly understood. Women have more of them than men; maybe that’s why they can always remember where your keys are. There’s plenty of non-protein-coding RNA floating around in the brain doing … stuff, and nobody really knows what it does.

One of the cute things about it is that it’s entirely possible RNA works like some kind of ticker tape for a Turing machine, the way Maimon suggests above. There are a number of speculations to this effect. One can construct something that looks like logic gates or a lambda calculus through RNA editing rules; various enzymes we know about already more or less do this; weirder stuff like methylation may also play a role.
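
For flavor, here is one cartoon way to phrase a logic gate in terms of complementary binding, in the spirit of strand-displacement logic. The sequences and the “gate” rule are invented for illustration; real RNA editing chemistry is far messier.

```python
# Cartoon AND gate via complementary binding. Sequences and rules are made up.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(strand):
    """Reverse complement of an RNA strand."""
    return "".join(COMPLEMENT[b] for b in reversed(strand))

def binds(a, b):
    """True if strand b can pair with strand a along its full length."""
    return revcomp(a) == b

def and_gate(pool, in_a, in_b, output="FIRE"):
    """Release the output only if the pool holds strands complementary to
    both inputs -- double binding standing in for a two-input gate."""
    if any(binds(in_a, s) for s in pool) and any(binds(in_b, s) for s in pool):
        return output
    return None

pool = {"UUGC", "CAAG"}                # free strands floating about
print(and_gate(pool, "GCAA", "CUUG"))  # both inputs matched -> FIRE
print(and_gate(pool, "GCAA", "GGGG"))  # second input unmatched -> None
```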

There are obvious ways of figuring all this out; people do look at RNA activity in the hippocampus, for example. But because this theory is out of fashion, they attribute the activity to things other than direct RNA memory formation. Everyone more or less seems to believe in the Hebbian connectome model, despite there being little real evidence for it being the long term memory mechanism, or much understanding of what brains do at all beyond the relatively simple image recognition/signal processing type stuff they are known to do. Memory is much more mysterious: seemingly a huge reservoir of super-efficient data storage.

The fact that more primitive organisms which are completely without nervous systems seem to have some kind of behavioral memory system ought to indicate there is something more than Hebbian memory. People are starting to notice. You have little single-cell critters like paramecia responding to stimuli, and acting more or less in as complex a way as larger organisms which do have some primitive nervous system. Various “microtubule” theories do not explain this (sorry Sir Roger), as disrupting them doesn’t change behavior much.

One can measure memory in some of these little beasts; the E. coli that live in your bowels and in overhopped beers have a memory of at least 4 seconds; better than some instagram influencers. Paramecia have memories which may last their entire lifetime; if the memories are transferable via asexual reproduction (not clear they are; worth checking), that would be a couple of weeks: vastly better than most MSNBC viewers. Larger unicellular organisms like the 2mm long Stentor exhibit very complex behaviors. They behave much like the multicellular animals they more or less compete with. No neurons! Lots of behavior. Levels of behavior which would be very difficult to reproduce even using the latest megawatt dweeb learning atrocity that would otherwise be used to (badly) identify cat videos.


Since humans evolved from unicellular life, there should be some more primitive processing power still around, very possibly networked and working in concert. We already know that bacterial colonies kind of do this, even using similar electrical mechanisms to what is observed in brains. It’s completely bonkers to me that modern “neuroscientists” would abandon the idea of RNA memory when … something is going on with small unicellular creatures. There is obviously some mechanism for the complex behaviors exhibited by unicellular life, and RNA is weird and active enough that it is a plausible mechanism. Maybe they’re not aware of this because unicellular organisms don’t have neurons? If so, that’s an argument for them taking a more comprehensive biology course, or, like, looking at something other than neurons through a microscope.

I’m not sure hyperacuity is fully understood. I’ve read things which claim that dolphin, electric eel, bat and human hyperacuity (eyeballs, or fast reflexes in video games) is a sort of interferometry done with the rate encoding of the spikes of nervous impulses. It’s possible that this is true, but it is also possible that some extra, offloaded computational element governs this amazing phenomenon. To put a few numbers on it: bat nervous systems can echolocate on a 10 nanosecond time scale, electric eels on a 100 nanosecond time scale. Biological nervous systems operate on a rate-encoded, sub-kilohertz time scale, yet resolve timing on a 10-100 nanosecond scale, four or five orders of magnitude finer than the spikes themselves; that’s a pretty remarkable characteristic. They claim the neurons are doing some fancy interferometry on the rate encoded spikes that nervous systems are known to operate on, but there is much hand waving going on. I’ll wave my hands further and wonder if offloading some of the computation onto RNA computers at the cellular level might help somehow. Certainly neural nets with memory layers are vastly more powerful than those without. Granted the thing on your video card isn’t very Hebbian either, but one can make the argument at least on the box diagram signal processing level.
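
The gap is worth writing down. A back-of-the-envelope sketch, assuming a roughly 1 millisecond spike timescale (my number; the 10 ns and 100 ns figures are the ones claimed above):

```python
# Gap between spike timescales and the quoted echolocation timing resolution.
spike_timescale = 1e-3    # seconds; rough width/period of a neural spike (assumed)
bat_resolution  = 10e-9   # seconds, from the text
eel_resolution  = 100e-9  # seconds, from the text

print(f"bat: timing resolved ~{spike_timescale / bat_resolution:,.0f}x finer than a spike")
print(f"eel: timing resolved ~{spike_timescale / eel_resolution:,.0f}x finer than a spike")
```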

There are fascinating consequences to this, I think some of which were explored by 50s and 60s science fiction authors who were aware of the then popular RNA memory hypothesis. Imagine you could learn a new language by taking an injection. Of course if such a technology were possible, absolutely horrific things are also possible, and, in fact, likely, as early technological innovations come from large, powerful institutions. 

There are various mystics who assert that humans have multiple levels of consciousness. Gurdjieff, the rug-merchant and mountebank who brought us the phrases “working on yourself” and … “consciousness,” asserted that the average human consciousness was a bunch of disconnected automatons that could occasionally be unified into a whole, powerful being. While I think Gurdjieff mostly seemed interested in fleecing and pantsing the early 20th century equivalent of quartz-crystal clutching yoga instructors, his idea is one of the few usefully predictive hypotheses for why stuff like hypnosis and advertising (marketing hypnosis) works. Maybe he stumbled upon the multicore networked RNA memory hypothesis by accident. Maybe the ancients are right and the soul resides somewhere in the liver. Don’t laugh; people have led normal lives with giant pieces of their brain removed, but nobody has survived the death of their liver. The former fact, normal people getting by without much brain tissue, ought at least to be the end of the argument: purely Hebbian models of the brain are obviously false.

Debate in the literature:


https://www.frontiersin.org/articles/10.3389/fnsys.2016.00088/full

https://www.frontiersin.org/articles/10.3389/fnsys.2018.00052/full

Open problems in Robotics

Posted in brainz, Open problems by Scott Locklin on July 29, 2020

Robotics is one of those things the business funny papers regularly wonder about; it seems like consumer robotics is a revolutionary trillion dollar market which is perpetually 20 years away, more or less like nuclear fusion.

I had contemplated fiddling with robotics in hopes of building something that would do a useful science-fictiony thing, like go fetch me a beer from the refrigerator. It seemed like a nice way of fucking around with math and the machine shop, and ending up with something cool and useful to fiddle with. To do this, my beer fetching robot would have to navigate my potentially cluttered apartment to the refrigerator, open the door, look for the arbitrarily shaped/sized beer bottle amidst the ketchup bottles, jars of herring, broccoli and other such irrelevant objects, move things out of the way, grasp the bottle and return to me. After conversing with a world-renowned expert in autonomous vehicles (a subset of robotics), I was informed that this isn’t really possible. All the actions I described above are open problems. Sure, you could do some ridiculous workaround that makes it look like autonomous behavior. I could also train a monkey or a dog to do the same thing, or get up and get the damn beer myself.

There really aren’t any lists of open problems in robotics, I am assuming because it would be a depressingly long litany. I figured I would assemble one; one which I assume will be gratuitously incomplete and occasionally wrong, but which makes up for all that by actually existing. As with my list of open problems in physics and astronomy, I could very well be wrong about some of these, or behind the times, since my expertise consists of google and 5-10 year old conversations with a cool dude between deadlifts. But it seems worth doing.

  1. Motion planning is an actual area of research, with its own journals, schools of thought, experts and sets of open problems. Things like “how do I get my robot from point A to point B without falling into a canyon or getting stuck, and how do I deal with obstacles generally” are not solved problems. Even things like a model of where the robot is with respect to its surroundings: totally an open problem. How to know where your manipulator is in space, and how to get it somewhere else: open problem. Obviously beer fetching robots need to do all kinds of motion planning. Any potential solution will be ad-hoc and useless for the general case of, say, fetching a screw from a bin in the machine shop.
  2. Multiaxis singularities: this one blew my mind. Imagine you have a robot arm bolted to the ground. You want to teach the stupid thing to paint a car or something. There are actual singularities possible in the equations of motion, and it is more or less an underconstrained problem (see the sketch after this list). I guess there are workarounds for this at this point, but they all have different tradeoffs. It’s as open a problem as motion planning on a macro scale.
  3. Simultaneous Localization and Mapping, SLAM for short. When you enter a room, your brain knows exactly where your body is, and makes a map of the surroundings. Robots have a hard time with this. There are any number of solutions to the problem, but ultimately the most useful one is to make a really good map in advance. Having a vague map, a topological map, or some kind of prior on the environment: these are all completely different problems which seem like they should have a common solution, but don’t. While there are solutions to some problems available, they’re not general and definitely not turn-key to the point where there is a SLAM module you can buy for your robot. I could program my beer robot to know all about my room, but there will always be new obstacles (a pair of shoes, a book) which aren’t in its model. It needs SLAM to deal.
  4. Lost Robot Problem. Related: if I wake up and my friends have moved my bed to another room, we’ll all have a laugh. Most robots won’t know what to do if they lose track of their location. They need a strategy to deal with this, and the strategies are not general. It’s extremely likely I turn on my beer robot in different positions and locations in the room, and it will have to deal with that. Now imagine I put it somewhere else in the apartment building.
  5. Object manipulation and haptic feedback. Hugely not done yet. The human hand is an amazing thing, and robot manipulators are nowhere near being able to manipulate with haptic feedback or even simply manipulate real world objects based on visual recognition. Even something like picking up a stationary object with a simple graspable plane is a huge unsolved problem people publish on all the time. My beer robot could have a special manipulator designed to grasp a specific kind of beer bottle, or a lot of models of shapes of beer bottles, but if I ask the same robot to fetch me a carrot or a jar of mayo, I’m shit out of luck.
  6. Depth estimation. A sort of subset of object manipulation; you’d figure depth would be pretty simple for a robot with binocular vision, or even simply the ability to poke at an object and see it move. It’s very much an open problem. Depth estimation is a problem for my beer-fetching robot even if the beer is in the same place in the refrigerator every time (the robot won’t be, depending on its trajectory).
  7. Position estimation of moving objects. If you can’t know how far away an object is, you’re sure going to have a hard time estimating what a moving object is doing. Lt. Data ain’t gonna be playing baseball any time soon. If my beer robot had a human-looking bottle opener, it would need a technology like this.
  8. Affordance discovery: how to predict what an object will do when you interact with it. In my example, the robot would need a model for how objects are likely to behave when it moves them aside while searching my refrigerator for a beer bottle.
  9. Scene understanding: this one should be obvious. We’re just at the point where image recognition is useful: I drove an Audi on the autobahn which could detect and somewhat adhere to the lines on the highway. I’m pretty sure it eventually would have detected the truck stopped in the middle of the road in front of me, but despite this fairly trivial “you’re going to turn into road pizza” if(object_in_front) {apply_brake} level of understanding, it showed no evidence of being capable of that much reasoning. Totally open problem. I’ll point out that the humble housefly has no problem understanding the concept of “shit in front of you; avoid,” making robots and Audi brains vastly inferior to the housefly. Even putting the obvious problem aside, imagine if your robot is tasked with getting me a beer out of the refrigerator and there is a bottle of ketchup obscuring the beer. The robot will be unable to deal, even with a 3-d model of the concept of a beer bottle and a ketchup bottle, which would be absurdly complex to program the robot with.
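
To make item 2 a bit less abstract, here is a minimal sketch for a two-link planar arm (link lengths and angles chosen arbitrarily). The determinant of the Jacobian works out to l1*l2*sin(theta2); when the elbow straightens out it goes to zero, the inverse kinematics become ill-conditioned, and no finite joint velocity will move the hand in the lost direction.

```python
# Kinematic singularity of a planar two-link (2R) arm: det(J) = l1*l2*sin(theta2)
# vanishes when the elbow angle theta2 hits 0 or pi (arm fully extended/folded).
import numpy as np

def jacobian(theta1, theta2, l1=1.0, l2=0.8):
    """Jacobian of end-effector (x, y) with respect to the two joint angles."""
    j11 = -l1 * np.sin(theta1) - l2 * np.sin(theta1 + theta2)
    j12 = -l2 * np.sin(theta1 + theta2)
    j21 =  l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    j22 =  l2 * np.cos(theta1 + theta2)
    return np.array([[j11, j12], [j21, j22]])

for theta2 in (np.pi / 2, 0.1, 0.0):      # approaching full extension
    J = jacobian(0.3, theta2)
    print(f"theta2 = {theta2:4.2f}   det(J) = {np.linalg.det(J):+.4f}")
```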

 

several of the above problems illustrated

 

 

There’s something called the Moravec paradox which I’ve mentioned in the past.

“it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”

Robotics embodies the Moravec paradox. There’s a sort of corollary to this that people who work in the tiny field of “actual AI” (as opposed to ML ding dongs who got above their station) used to know about. This was before the marketing departments of google and other frauds made objective thought about this impossible. The idea is that intelligence and consciousness arose spontaneously out of biological motion control systems.

I think the idea comes from Roger Sperry, but whatever, it used to be widely known and at least somewhat accepted. Those biological motion control systems exist even on a microscopic level; even unicellular creatures like the paramecium, or primitive animals without real nervous systems like the hydra, are capable of solving problems that we can’t solve even in the general case with the latest NVIDIA supercomputer. While robotics is a noble calling and roboticists solve devilishly hard problems, animal behavior ought to give a big old hint that they’re not doing it right.

 

 

Guys like Rodney Brooks seemed to accept this and built various robots that would learn how to walk using primitive hardware and feedback oriented ideas rather than programmed ideas. There was even a name for this: “Nouvelle AI.” No idea what happened to those ideas; I suppose they were too hard to make progress on, though the early results were impressive looking. Now Dr Brooks has a blog where he opines that hilarious things like flying cars and “real soon now” autonomous vehicles are right around the corner.

I’ll go out on a limb and say I think current year Rodney Brooks is wrong about autonomous vehicles, but I think 80s Rodney Brooks was probably on the right path. Maybe it was too hard to go down the correct path: that’s often the way. We all know emergent systems are super important in all manner of phenomena, but we have no mathematics or models to deal with them. So we end up with useless horse shit like GPT-3.

It’s probably the case that, at minimum, a genuine “AI” would need to have a physical form and be capable of interacting with its environment. Many of the proposed algorithmic solutions to the problems listed above are NP-hard problems. To me, this implies that crap involving computers such as we use is wrong. We do approximately solve NP-hard problems in other ways all the time; you can do it with soap bubbles, but the design of the “computer” is vastly different from the von Neumann machine: it’s an analog machine where we don’t care about infinite accuracy.

You can see some of this in various proposed neuromorphic computing models: it’s abundantly obvious that nothing like stochastic gradient descent or contrastive divergence is happening in biological neurons. Spiking models like a liquid state machine are closer to how a primitive nervous system works, and they’re fairly difficult to simulate on Von Neumann hardware (some NPC is about to burble “Church Turing thesis” at me: don’t). I think it likely that many robot open problems could be solved using something more like a simulacrum of a simple nervous system than writing python code in ROS.
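
For readers who haven’t seen one, here is a minimal leaky integrate-and-fire neuron, the kind of unit a liquid state machine is built out of. The constants are arbitrary illustration values, not a model of any real cell.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, integrates input current, and emits a spike (then resets) at threshold.
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate a current trace; return the voltage trace and spike times (s)."""
    v, trace, spikes = v_rest, [], []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.where(np.arange(1000) > 200, 80.0, 0.0)   # step current after 200 ms
_, spike_times = lif(current)
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1e3:.0f} ms")
```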

But really, all I know about robotics is that it’s pretty difficult.

On the Empire of the Ants

Posted in brainz, information theory by Scott Locklin on July 2, 2013

The internet is generally a wasteland of cat memes and political invective. Once in a while it serves its original purpose in disseminating new ideas. I stumbled across Boris Ryabko‘s little corner of the web while researching compression learning algorithms (which, BTW, are much more fundamental and important than crap like ARIMA). In it, I found one of the nicest little curiosity-driven papers I’ve come across in some time. Ryabko and his coworker, Zhanna Reznikova, measured the information processing abilities of ants, and the information capacity of ant languages. Download it here. There was also a plenary talk at an IEEE conference you can download here.


In our degenerate age where people think cell phone apps are innovations, it is probably necessary to explain why this is a glorious piece of work. Science is an exercise in curiosity about nature. It is a process. It sometimes involves complex and costly apparatus, or the resources of giant institutes. Sometimes it involves looking at ants in an ant farm, and knowing some clever math. Many people are gobsmacked by the technological gizmos used to do science. They think the giant S&M dungeons of tokamaks and synchro-cyclotrons are science. Those aren’t science; they’re tools. The end product, the insights into nature: that is what is important. Professors Ryabko and Reznikova did something a kid could understand the implications of, but no kid could actually do. The fact that they did it at all indicates they have the child-like curiosity and love for nature that is the true spirit of scientific enquiry. As far as I am concerned, Ryabko and Reznikova are real scientists. The thousands of co-authors on the Higgs paper are able technicians, I am sure, but their contributions are a widow’s mite to the gold sovereign of Ryabko and Reznikova.

Theory: ants are smart, and they talk with their antennae. How smart are they, and how much information can they transfer with their antennae language? Here’s a video of talking ants from Professor Reznikova’s webpage:

Experiment: to figure out how much information they can transfer, starve some ants (hey, it’s for science), stick some food at random places in a binary tree, and see how fast they can tell the other ants about it. Here’s a video clip of the setup. Each fork in the path of a physical binary tree represents 1 bit of information, just as it does on your computer. Paint the ants so you know which is which. When a scout ant finds the food, you remove the maze, and put in place an identical one to avoid their sniffing the ant trails or the food in it.  This way, the only way for the other ants to find the fork the food was in is via actual ant communication. Time the ant communication between the scout ant and other foragers (takes longer than 30 seconds, apparently). Result: F. sanguinea can transmit around 0.74 bits a minute.  F. polyctena can do 1.1 bits a minute.
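
For the curious, the rate estimate amounts to regressing contact time on maze depth, since each fork is one bit. A minimal sketch with invented numbers (the real data are in the paper linked above):

```python
# Each fork in the binary tree is one bit; regress antennal-contact time on the
# number of forks and invert the slope to get bits per minute. The data below
# are made up purely to show the arithmetic.
import numpy as np

depth_bits  = np.array([2, 3, 4, 5, 6])             # forks between entrance and food
contact_min = np.array([2.1, 3.0, 3.8, 5.1, 5.9])   # minutes of scout-forager contact

slope, intercept = np.polyfit(depth_bits, contact_min, 1)   # minutes per bit
print(f"~{slope:.2f} min/bit  ->  ~{1 / slope:.2f} bits/min")
```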


Experiment: to figure out if ants are smart, see if they can pass on maze information in a compressed way. LRLRLRLRLRLR is a lot simpler in an information theoretical sense than an equal length random sequence of lefts and rights. Telephone transmission and MP3 players have this sort of compression baked into them to make storage and transmission more efficient.  If ants can communicate directions for a regular maze faster than a random one, they’re kind of smart. Result: in fact, this turns out to be the case.
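
You can see the point for yourself with an off-the-shelf compressor standing in for a measure of descriptive complexity; zlib here, on a regular route versus a random one:

```python
# A regular route like LRLRLR... compresses far better than a random route of
# the same length -- the same sense in which it is "simpler" for the ants.
import random
import zlib

regular = "LR" * 500
random_route = "".join(random.choice("LR") for _ in range(1000))

for name, route in [("regular", regular), ("random", random_route)]:
    packed = len(zlib.compress(route.encode()))
    print(f"{name:8s}: {len(route)} turns -> {packed} bytes compressed")
```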

Experiment: to find out if ants are smart, see if they can count. Stick them in a comb or hub shaped maze where there is food at the end of one of the 25 or more forks (you can see some of the mazes here). The only way the poor ant can tell other ants about it is if he says something like “seventeenth one to the left,” or, in the case of one of the variants of this experiment, something more like “3 over from the one the crazy Russian usually puts the food in.” Yep, you can see it plain as pie in the plots: ants have a hard time explaining “number 30” and a much easier time of saying “two over from the one the food is usually in.” Ants can do math.
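
The information-theoretic intuition behind that result, with illustrative numbers of my own (not the paper’s): naming one branch out of n uniformly costs log2(n) bits, while naming a small offset from a habitual branch costs far fewer.

```python
# Toy information-cost comparison: an absolute branch address versus an offset
# from the branch where the food usually is. Numbers are illustrative only.
from math import log2

n_branches = 30     # "number 30" out of 30 equally likely branches
n_offsets  = 5      # "a couple over from the usual one", out of ~5 plausible offsets

print(f"absolute address:          {log2(n_branches):.1f} bits")
print(f"offset from usual branch:  {log2(n_offsets):.1f} bits")
```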


The power of information theory is not appreciated as it should be. We use the products of it every time we fire up a computer or a cell phone, but it is applicable in many areas where a mention of “Shannon entropy” will be met with a shrug. Learning about the Empire of the Ants is just one example.

People in the SETI project are looking for  alien ham radios on other planets. I’ve often wondered why people think they’ll be able to recognize an alien language as such. Sophisticated information encoding systems look an awful lot like noise. The English language isn’t particularly sophisticated as an encoding system. Its compressibility indicates this. If I were an alien, I might use very compressed signals (sort of like we do with some of our electronic communications). It might look an awful lot like noise.

We have yet to communicate  with dolphins. We’re pretty sure they have interesting things to say, via an information theoretical result called Zipf’s law (though others disagree,  it seems likely they’re saying something pretty complex). There are  better techniques to “decompress” dolphin vocalizations than Zipf’s law: I use some of them looking for patterns in economic systems. Unfortunately marine biologists are usually not current with information theoretical tools, and the types of people who are familiar with such tools are busy working for the NSA and Rentech. Should I ever make my pile of dough and retire, I’ll hopefully have enough loot to strap a couple of tape recorders to the dolphins. It seems something worth doing.
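
For what it’s worth, the Zipf check itself is only a few lines: count token frequencies and fit the slope of log frequency against log rank. The sketch below uses an ordinary text file (a hypothetical sample.txt) as a stand-in; for dolphins you would tokenize whistle types instead.

```python
# Rank-frequency (Zipf) fit: natural languages tend to come out with a slope
# near -1 on a log-log plot. "sample.txt" is a placeholder for any long text.
from collections import Counter
import numpy as np

tokens = open("sample.txt").read().lower().split()
counts = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1)

slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"Zipf exponent ~ {slope:.2f}")
```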

The beautiful result of Ryabko and Reznikova points the way forward. A low budget, high concept experiment, done with stopwatches, paint and miniature plastic ant habitrails, produced this beautiful result on insect intelligence. It is such a simple experiment, anyone with some time and some ants could have done it! This sort of “small science” seems rare these days; people are more interested in big budget things designed to answer questions about minutiae, rather than interesting things about the world around us. I don’t know if we have the spirit to do such “small science” in America any longer. American scientists seem like bureaucratized lemmings, hypnotized by budgets, much like the poor ants are hypnotized by sugar water. An experiment of this Rube-Goldberg nature could only be done by a nation of curious tinkerers; something we no longer seem to have here.

Dolphin language could have been decoded decades ago. While it is sad that such studies haven’t been done yet, it leaves open new frontiers for creative young scientists today. Stop whining about your budget and get to work!

You’re smarter than you think

Posted in brainz by Scott Locklin on May 18, 2010

One of the more interesting measures in nature is entropy.

Entropy is defined the same way in statistical physics and information theory. That’s one of the great discoveries of Claude Shannon. As you may or may not know, entropy is a measure of disorder. It can be disorder in positions and momenta of molecules, or disorder (noise) in signals on a transmission wire. It’s one of those little wonders that the same mathematical object governs both the behavior of steam engines and the behavior of information in computational machines and telegraph wires. It’s actually not much of a wonder once you understand the math, but stepping back and looking at it in the “big picture” makes it pretty amazing and glorious. The first time I figured it out, I am pretty sure I frothed at the mouth. Heat is a measure of disorderliness in information? Hot dang! I can’t convey the spectacularness of it all, as it requires an undergrad degree in physics to really grok it, but I will give it the old college try anyhow.
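
For the record, here are the two definitions side by side; the functional form is identical, and the only differences are the logarithm base and the factor of Boltzmann’s constant. The distribution below is an arbitrary example.

```python
# Shannon entropy H = -sum p log2 p (bits) next to the Gibbs/Boltzmann entropy
# S = -k_B sum p ln p (joules per kelvin): same object, different units.
import numpy as np

k_B = 1.380649e-23                        # Boltzmann's constant, J/K
p = np.array([0.5, 0.25, 0.125, 0.125])   # an arbitrary probability distribution

H = -np.sum(p * np.log2(p))               # information entropy, bits
S = -k_B * np.sum(p * np.log(p))          # thermodynamic entropy, J/K

print(f"H = {H:.3f} bits   S = {S:.3e} J/K   (S = k_B * ln2 * H)")
```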

People who get degrees in computer science like to think of computers as “Turing machines.” I suspect this is because it is easier to prove theorems about Turing machines than Von Neumann machines. That’s fine by me, though I will note in passing that I have never read a proof (or anything like a proof) that VN machines like you write code on are computationally equivalent to Turing machines. Someone smart please point me to the proof if it exists. We will assume for the moment that they are equivalent; it isn’t important if you don’t believe it. I, personally, like to think of universal computers as shift maps. A shift map takes a string of symbols and shifts them around. Or, if you want to think in terms of Mr. George Boole, I think of a computer as a bunch of NOR (or NAND, it doesn’t really matter) gates. NOR gates flip bits. It’s not really important that you believe you can build a computer of any given complexity out of NANDs and NORs; the important thing is you believe that by taking two input bits, operating on them, and getting one output bit you can build a universal computational machine. It’s quite true; I wouldn’t lie to you. Trust me; I’m a scientist.

Important fact to get from this picture; two inputs, one output
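
If you want to convince yourself of the two-bits-in, one-bit-out claim, here is the standard construction, with everything built out of NAND alone:

```python
# Universal logic from a single two-input gate: NOT, AND, OR and XOR all built
# from NAND. Each operation takes bits in and gives one bit out.
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={NAND(a, b)}  AND={AND(a, b)}  OR={OR(a, b)}  XOR={XOR(a, b)}")
```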

It is crucial to note here that these are not reversible operations. One of the bits goes away. You can’t take a one bit output and generate the two bits of input, as there are several permutations and combinations of them. This is sort of the crux of the whole argument; what happens when the bit goes away? If you believe in the statistical definition of entropy, you believe that the entropy is Boltzmann’s constant times the natural log of the number of available states. The number of states available to a NAND or NOR gate’s output has decreased by a factor of 2, making the change -k*log(2). Most of you who are still reading have glossy eyes by now. Who gives a pair of foetid dingo’s kidneys, right? Well, when you know what the entropy increase is, you can know what the dissipated heat is. k is 1.38*10^-23 joules per kelvin. If you are dissipating the heat to room temperature (about 300 kelvin), when each bit dies, it dissipates 3E-21 joules of heat. Stop and think about this for a second. For every bit operation done in a computer, I know the minimum physically possible amount of heat dissipated. Assuming a 64 bit calculation costs 64 bit erasures (an underestimate for almost any kind of calculation), a 64 bit op is 1.84*10^-19 joules.
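
The numbers in that paragraph, worked out explicitly:

```python
# Landauer's bound: erasing one bit at temperature T costs at least k*T*ln(2).
import math

k = 1.380649e-23      # Boltzmann's constant, J/K
T = 300.0             # room temperature, K

per_bit = k * T * math.log(2)
per_64bit_op = 64 * per_bit
print(f"per bit erased: {per_bit:.2e} J    per 64-bit op: {per_64bit_op:.2e} J")
```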

What good is this, you might ask? Well, consider: the average human’s resting heat dissipation is something like 2000 kilocalories per day. Making a rough approximation, assume the brain dissipates 1/10 of this; 200 kilocals per day. That works out (you do the math) to roughly 9.7 joules per second. This puts an upper limit on how much the brain can calculate, assuming it is an irreversible computer: 5 *10^19 64 bit ops per second.

Considering all the noise various people make over theories of consciousness and artificial intelligence, this seems to me a pretty important number to keep track of. Assuming a 3GHz pentium is actually doing 3 billion calculations per second (it can’t), it is about 17 billion times less powerful than the upper bound on a human brain. Even if brains are computationally only 1/1000 of their theoretical efficiency, computers need to get 17 million times quicker to beat a brain. There are actually indications that brains are very close to their theoretical efficiencies; transmission of information in nervous systems appears to happen at very close to the theoretical limit. Too bad we don’t know how calculation takes place in your noggin.
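
And the brain-budget arithmetic from the last two paragraphs, chained together:

```python
# 200 kcal/day assumed for the brain, divided by the Landauer cost of a 64-bit
# irreversible operation, and compared with a (generously counted) 3 GHz chip.
import math

k, T = 1.380649e-23, 300.0
per_op = 64 * k * T * math.log(2)        # J per 64-bit irreversible operation

brain_watts = 200e3 * 4.184 / 86400      # 200 kcal/day in joules per second (~9.7 W)
max_ops = brain_watts / per_op           # upper bound on 64-bit ops per second

print(f"brain power budget: {brain_watts:.1f} W")
print(f"upper bound: {max_ops:.1e} 64-bit ops/sec")
print(f"vs a 3 GHz processor: {max_ops / 3e9:.1e}x")
```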

I leave it as an exercise to the reader to calculate when Moore’s law should give us a computer as powerful as a human brain (though Moore’s law appears to have failed in recent years). I leave it as another exercise to determine if this happens before the feature size on a piece of silicon approaches that of an atom (in which case, it can no longer really be a feature at room temperature).

Penrose thinks brains are super-Turing quantum gravitic computers. Most people think he is full of baloney, but you never know. I am pretty sure quantum computers get around the Landauer limit. People like Cristopher Moore, Michel Cosnard and Olivier Bournez have also shown that analog computers are potentially vastly more powerful than digital ones, though they still don’t get around the thermodynamic limit.


“Antikythera mechanism; an early analog computer”

Incidentally, if you google on Landauer entropy (that log(2) thing), you will find many people who don’t know what they are talking about who think they have refuted this bone-simple calculation. All they have done is (re) discovered reversible computing (aka Toffoli and Fredkin) without admitting or realizing it. Such people also must believe in perpetual motion machines, as it is the exact same logical error.

Reversible computing is, theoretically, infinitely powerful. It is also an inadvertent restatement of what “chaos” means, not to mention a restatement of what heat means. In reversible computing, you need to keep around all the bits which are destroyed in irreversible computing. That’s a lot of semirandom bits. What are they really, but the heat that your computer dissipates when it does a calculation?