Locklin on science

RNA memory hypothesis

Posted in brainz, Open problems by Scott Locklin on February 3, 2021

There’s an old theory that memory is actually encoded, in part, in RNA. The argument is pretty simple: there’s no obvious way for all that sensory data to be captured in synapses as long term memories, yet long term memories obviously exist and are fairly reliable. RNA, unlike synapses, is energy efficient, redundant, and persistent, and is consistent with what we observe about brains in day to day life.

You’d think with all the neuroscientists running around these days, this would have been eliminated from serious consideration by now, but the opposite is true. There’s actually been a little bit more experimental evidence indicating it might be true. People have allegedly transferred memories between snails, planaria, sea slugs, and there are accounts of people “inheriting” memories after organ transplants. It’s entirely possible that all of these are the result of poor experimental hygiene and wishful thinking, and there’s nothing really there, but they sure are evocative, and it seems like people should be interested in sorting this out, or finding simpler models which have hopes of sorting it out.

I ran across this idea again while reading a Ron Maimon screed on Physics Stack Exchange. It’s a pretty good screed, worth reading (thanks Laeeth):

Highlight excerpted for the lazy:

RNA ticker tape

It is clear that there is hidden computation internal to the neurons. The source of these computations is almost certainly intracellular RNA, which is the main computational workhorse in the cell.

The RNA in a cell is the only entity which is active and carries significant bit density. It can transform by cutting and splicing, and it can double bind to identify complementary strands. These operations are very sensitive to the precise bit content, and allow rich, full computation. The RNA is analogous to a microprocessor.

In order to make a decent model for the brain, this RNA must be coupled to neuron level electrochemical computation directly. This requires a model in which RNA directly affects what signals come out of neurons.

I will give a model for this behavior, which is just a guess, but a reasonable one. The model is the ticker-tape. You have RNA attached to the neuron at the axon, which is read out base by base. Every time you hit a C, you fire the neuron. The receiving dendrite then writes out RNA constantly, and writes out a T every time it receives a signal. The RNA is then read out by complementary binding at the ticker tape, and the RNA computes the rest of the thing intracellularly. If the neuron identifies the received-signal RNA, it takes another strand of RNA, puts it on the membrane, and reads this one to give the output.

The amount of memory in the brain is then the number of bits in the RNA involved, which is about a gigabyte per cell. There are hundreds of billions of cells in the brain, which translates to hundreds of billions of gigabytes. The efficiency of memory retrieval and modification is a few ATP’s per bit, with thousands of ATP’s used for long-range neural communication only.

The brain then becomes an internet of independent computers, each neuron being a sizable computer in itself.
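Maimon’s ticker-tape scheme is concrete enough to caricature in a few lines of code. This is purely a toy illustration of the read-out/write-in idea as he describes it (fire on C, transcribe a T per spike); the strand and the function names are invented, and nothing here is meant to model real biochemistry.

```python
# Toy sketch of the "RNA ticker tape": the axonal strand is read base by
# base, and the neuron fires whenever a 'C' is read; the receiving
# dendrite writes a 'T' for each spike and a placeholder 'A' for each
# silent tick. All names and sequences are illustrative.

def read_out(strand):
    """Read the axonal RNA base by base; emit a spike (1) on every 'C'."""
    return [1 if base == 'C' else 0 for base in strand]

def write_in(spike_train):
    """Dendritic side: transcribe a 'T' per spike, 'A' per silent tick."""
    return ''.join('T' if spike else 'A' for spike in spike_train)

axonal_rna = "GCAUCCGAUC"
spikes = read_out(axonal_rna)
dendritic_rna = write_in(spikes)

# Every 'C' on the sending strand becomes a 'T' on the receiving strand,
# so the bit content of the message survives the electrochemical hop.
assert dendritic_rna.count('T') == axonal_rna.count('C')
print(dendritic_rna)  # prints ATAATTAAAT
```

The interesting part of the proposal is exactly this round trip: the spike train is a lossy-looking intermediary, but the bit content is preserved end to end.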

 

This is a pretty exciting idea, and there are several near relatives. There are protein kinases involved in mRNA transcription and immunology which are candidates for memory as well. Functionally they’re all kind of similar: the idea is that long-term memory is chemical and exists at the subcellular level.

Mechanisms are known to exist. If RNA is the persistence substrate, you’d expect there to be something like a nucleotide-gated channel in the brain, so it can talk to the signal-processing components. There is, starting with the olfactory system, which is known to be associated with memory. Such RNA-gated channels are also important in the hippocampus, the master organ of memory in the brain. Furthermore, it’s entirely possible that the glial cells have something to do with it; their function is still poorly understood. Women have more of them than men; maybe that’s why they can always remember where your keys are. There’s plenty of non-protein-coding RNA floating around in the brain doing … stuff, and nobody really knows what it does.

One of the cute things about the idea is that it is entirely possible RNA works like the ticker tape of a Turing machine, the way Maimon suggests above. There are a number of speculations to this effect. One can construct something that looks like logic gates or a lambda calculus through RNA editing rules; various enzymes we know about already more or less do this; weirder stuff like methylation may also play a role.
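To make the logic-gate speculation concrete, here is a minimal sketch of how complementary binding could act as an AND gate. The sequences, the recognition-site rule, and the "gate" itself are entirely made up for illustration; no real enzyme or editing pathway is being modeled.

```python
# Illustrative only: complementary binding as a logic primitive. A
# hypothetical "enzyme" strand activates only when inputs complementary
# to both of its recognition sites are present -- functionally AND.

PAIR = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def complement(strand):
    """Watson-Crick complement of an RNA strand (no reversal, for simplicity)."""
    return ''.join(PAIR[base] for base in strand)

def binds(site, strand):
    """True if the strand is the exact complement of the recognition site."""
    return strand == complement(site)

def and_gate(site1, site2, inputs):
    """Gate 'fires' only if some input binds each recognition site."""
    return any(binds(site1, s) for s in inputs) and \
           any(binds(site2, s) for s in inputs)

site_a, site_b = "GGAU", "CAUC"
# Both complements present -> the gate fires; one missing -> it doesn't.
print(and_gate(site_a, site_b, ["CCUA", "GUAG"]))  # True
print(and_gate(site_a, site_b, ["CCUA"]))          # False
```

Because binding is sensitive to the exact base content, composing a few such match-and-cut rules gets you arbitrary Boolean circuits, which is the sense in which RNA editing is computationally rich.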

There are obvious ways of figuring all this out; people do look at RNA activity in the hippocampus, for example. But because this theory is out of fashion, they attribute the activity to things other than direct RNA memory formation. Everyone more or less seems to believe in the Hebbian connectome model, despite there being little real evidence that it is the long-term memory mechanism, or much understanding of what brains do at all beyond the relatively simple image-recognition/signal-processing stuff they are known to do. Memory is much more mysterious: seemingly a huge reservoir of super-efficient data storage.

The fact that more primitive organisms which are completely without nervous systems seem to have some kind of behavioral memory ought to indicate there is something more than Hebbian memory. People are starting to notice. You have little single-cell critters like paramecia responding to stimuli, and acting in more or less as complex a way as larger organisms which do have a primitive nervous system. Various “microtubule” theories do not explain this (sorry, Sir Roger), as disrupting microtubules doesn’t change behavior much.

One can measure memory in some of these little beasts; the E. coli that live in your bowels and in overhopped beers have a memory of at least 4 seconds; better than some instagram influencers. Paramecia have memories which may last their entire lifetime; if the memories are transferable via asexual reproduction (not clear they are; worth checking), that would be a couple of weeks: vastly better than most MSNBC viewers. Larger unicellular organisms like the 2mm-long Stentor exhibit very complex behaviors, much like the multicellular animals they more or less compete with. No neurons! Lots of behavior. Levels of behavior which would be very difficult to reproduce even using the latest megawatt dweeb-learning atrocity that would otherwise be used to (badly) identify cat videos.


Since humans evolved from unicellular life, there should be some more primitive processing power still around, very possibly networked and working in concert. We already know that bacterial colonies kind of do this, even using electrical mechanisms similar to what is observed in brains. It’s completely bonkers to me that modern “neuroscientists” would abandon the idea of RNA memory when … something is going on with small unicellular creatures. There is obviously some mechanism for the complex behaviors exhibited by unicellular life, and RNA is weird and active enough to be a plausible mechanism. Maybe they’re not aware of this because unicellular organisms don’t have neurons? An argument for taking a more comprehensive biology course, or, like, looking at something other than neurons through a microscope.

I’m not sure hyperacuity is fully understood. I’ve read things which claim that dolphin, electric eel, bat, and human hyperacuity (eyeballs, or fast reflexes in video games) is a sort of interferometry done with the rate encoding of the spikes of nervous impulses. It’s possible that this is true, but it is also possible that some extra, offloaded computational element governs this amazing phenomenon. To put a few numbers on it: bat nervous systems can echolocate on a 10 nanosecond time scale, electric eels on a 100 nanosecond time scale. Biological nervous systems operate on a rate-encoded, sub-kilohertz time scale, but resolve things on a nanosecond time scale; that’s a pretty remarkable characteristic. The claim is that the neurons are doing some fancy interferometry on the rate-encoded spikes nervous systems are known to operate on, but there is much hand-waving going on. I’ll wave my hands further and wonder if offloading some of the computation onto RNA computers at the cellular level might help somehow. Certainly neural nets with memory layers are vastly more powerful than those without. Granted, the thing on your video card isn’t very Hebbian either, but one can make the argument at least at the box-diagram, signal-processing level.
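A quick back-of-envelope makes the size of the puzzle explicit. The figures are the claims quoted above, not measurements of mine:

```python
# Hyperacuity gap: how much finer is the claimed timing resolution than
# the spike clock the nervous system is thought to run on?

spike_interval_s = 1e-3      # ~1 ms between spikes: a sub-kilohertz rate code
bat_resolution_s = 10e-9     # claimed bat echolocation resolution, ~10 ns
eel_resolution_s = 100e-9    # claimed electric eel resolution, ~100 ns

# Whatever interferometry-on-spike-trains is doing, it has to recover
# timing four to five orders of magnitude finer than the spike clock.
bat_gap = spike_interval_s / bat_resolution_s
eel_gap = spike_interval_s / eel_resolution_s
print(round(bat_gap), round(eel_gap))  # 100000 10000
```

Five orders of magnitude is the gap any proposed mechanism, interferometric or RNA-offloaded, has to explain.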

There are fascinating consequences to this, I think some of which were explored by 50s and 60s science fiction authors who were aware of the then popular RNA memory hypothesis. Imagine you could learn a new language by taking an injection. Of course if such a technology were possible, absolutely horrific things are also possible, and, in fact, likely, as early technological innovations come from large, powerful institutions. 

There are various mystics who assert that humans have multiple levels of consciousness. Gurdjieff, the rug merchant and mountebank who brought us the phrases “working on yourself” and … “consciousness,” asserted that the average human consciousness was a bunch of disconnected automatons that could occasionally be unified into a whole, powerful being. While I think Gurdjieff mostly seemed interested in fleecing and pantsing the early 20th century equivalent of quartz-crystal-clutching yoga instructors, his idea is one of the few usefully predictive hypotheses for why stuff like hypnosis and advertising (marketing hypnosis) works. Maybe he stumbled upon the multicore networked RNA memory hypothesis by accident. Maybe the ancients are right and the soul resides somewhere in the liver. Don’t laugh; people have led normal lives with giant pieces of their brain removed, but nobody has survived the death of their liver. The former fact (normal people getting by without much brain tissue) ought, at least, to be the end of the argument: purely Hebbian models of the brain are obviously false.

Debate in the literature:


https://www.frontiersin.org/articles/10.3389/fnsys.2016.00088/full

https://www.frontiersin.org/articles/10.3389/fnsys.2018.00052/full

45 Responses


  1. maggette said, on February 3, 2021 at 12:44 pm

    Interesting thx.

    Fun topic. Given your last posts about Dune.

    https://dune.fandom.com/wiki/Genetic_memory

  2. AlanL said, on February 3, 2021 at 1:15 pm

    This brought back fond memories of one of the first hard SF stories I ever read as a teenager, Larry Niven’s The Fourth Profession from 1972

    https://www.you-books.com/book/L-Niven/The-Fourth-Profession

  3. chiral3 said, on February 3, 2021 at 2:08 pm

    Dude, this is crazy shit. Awesome. At a minimum great Sci-fi.

    I don’t know anything about this so question: I have Kandel on my shelf. Spiking neuron models, etc. That’s about the limit of my technical knowledge on these subjects and, since most of what I know is Kandel and its ilk, I maybe have a good historical knowledge save for what I get from my neurology friends about their specific areas. I get the impression that when I see “current research” talking about Hebbian models and such it’s not too dissimilar than that physics guy we all know that’s back mucking around in early special relativity papers trying to disprove relativity. So I scanned those final references. They seem a bit puffy. Is there more hardcore research going on in this area?

    Re Gurdjieff… all the Russian mysticism stuff – Ouspensky et al – really never resonated with me. But I will say I have fond memories of Mount Analogue and have several copies on my shelf.

    • Scott Locklin said, on February 3, 2021 at 2:34 pm

      I read “Spikes” some years ago, as it was mentioned in Shalizi’s notebooks.

      Gurdjieff and Ouspensky I read when I was like 17; didn’t get much out of it. Looking at his wiki page recently (a friend is a fan), the fact that he was an Armenian rug merchant explains a lot.

      I stumbled upon RNA memory I think googling on mRNA as in the vaccine quasi-technology. I remembered it from old Omni and Galaxy magazines, but never really thought about it. If Maimon (early victim of deplatforming) hadn’t written such a convincing screed, I wouldn’t have looked into it. Anyway, brains are cool, and the stuff we know is pretty interesting, but it mostly convinces me we don’t know a damn thing. It was an interesting enough idea to write some words down.

  4. Igor Bukanov said, on February 3, 2021 at 4:43 pm

    RNA memory may also explain the speed of evolution. If cells do not have memory and evolution is a random guess, then we cannot explain the observed speed of evolutionary adaptations. But if a cell can pass some memory to its children, that allows exponentially faster evolution.

    • Scott Locklin said, on February 3, 2021 at 7:49 pm

      I had never thought of that. It would make a lot of sense. If you had any citations handy, that would be fun to look at.

      It’s funny, I find evo-bio types to be a bunch of fedora atheist m’lady conformist dweebs. I assume there is some psychological profile of weak chinned nerds who shelter under “muh science” to protect them from terrifying sky-daddy. Dawkins is their saint. I love asking them what human characteristics they think are heritable; they always wet their diapers.

      The most interesting sparks come from big-brain creationist types trying to poke holes in things; pointing out problems like rapid evolution.

      • Igor Bukanov said, on February 3, 2021 at 9:28 pm

        I read that in “Probably Approximately Correct” by Leslie Valiant. He considered finding a fit-enough DNA variant by evolution as an algorithmic search process. If there is no memory, then the best algorithms that we know give a very slow evolution rate. So either there is memory or Nature figured out how to calculate things quickly.

        As for creationists, after I discovered that they typically do not even know the history of their own arguments (known at least since Sextus Empiricus, and perhaps even since Socrates), I lost interest in them. And in 1751 David Hume gave a really nice counterargument against intelligent design. From his “Dialogues Concerning Natural Religion”:

        If we survey a ship, what an exalted idea must we form of the ingenuity of the carpenter, who framed so complicated, useful, and beautiful a machine? And what surprise must we entertain, when we find him a stupid mechanic, who imitated others, and copied an art, which, through a long succession of ages, after multiplied trials, mistakes, corrections, deliberations, and controversies, had been gradually improving?

        • Scott Locklin said, on February 3, 2021 at 9:49 pm

          I’m no fan of creationists; just observing they stir the pot more than the establishment orthodoxy who seem content to marinate in their self regard rather than figure things out.

          Will check it out, thanks!

          • Igor Bukanov said, on February 4, 2021 at 9:24 am

            Now I have reread that part of Leslie Valiant’s book. My memory was wrong. He cites papers showing that random evolution can optimize fitness in polynomial time within very specific models of interaction with the environment. So at least from a pure mathematical point of view, evolution is possible without enumerating all the possible configurations of, say, proteins. But the moment one allows individual memory, the class of models vastly increases.
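            The point about memory enlarging the search can be caricatured with a toy hill-climber. This is not Valiant’s actual model, just an illustrative sketch: a "memory" variant remembers which positions are already confirmed correct and never mutates them again, shrinking the search space each generation.

```python
# Toy (1+1) evolutionary search on a bitstring: flip one bit per
# generation, keep improvements. With "memory", confirmed-good
# positions are locked and never mutated again.
import random

def evolve(target, remember=False, seed=0):
    rng = random.Random(seed)
    n = len(target)
    genome = [0] * n
    locked = set()                      # positions confirmed correct ("memory")
    generations = 0
    while genome != target:
        generations += 1
        pool = [i for i in range(n) if i not in locked] if remember \
               else list(range(n))
        i = rng.choice(pool)
        trial = genome[:]
        trial[i] ^= 1                   # point mutation
        if sum(t == g for t, g in zip(target, trial)) > \
           sum(t == g for t, g in zip(target, genome)):
            genome = trial
            if remember:
                locked.add(i)           # never revisit a confirmed position
        # rejected mutations still cost a generation
    return generations

target = [1] * 20                       # all 20 bits start out wrong
blind = evolve(target, remember=False)
with_memory = evolve(target, remember=True)
# The memory variant needs exactly 20 generations here; the blind one
# wastes generations re-flipping bits that were already correct.
print(blind, with_memory)
```

With all bits initially wrong, the memory variant converges in exactly n generations, while the blind variant pays a coupon-collector-style overhead; richer forms of heritable memory widen that gap further.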

        • chiral3 said, on February 4, 2021 at 2:28 am

          The fact that a fitness-based algo can’t keep up doesn’t preclude the possibility of complex contributions from anti-selection and paths that result in death / non-propagation, no?

  5. biocin said, on February 3, 2021 at 6:21 pm

    Funny thing Pfizer is delivering an mRNA vaccine without known repercussions.

    • William O. B'Livion said, on February 13, 2021 at 5:00 pm

      What do you mean by that?

  6. anonymous said, on February 3, 2021 at 11:02 pm

    Question: Isn’t RNA supposed to have some short half-life for mutation relative to DNA? (Why DNA is “preferred” for long term storage in our cell-nuclei, and RNA viruses mutate every few months?)

  7. anonymous said, on February 3, 2021 at 11:07 pm

    Interesting idea. I’d take it more seriously than anything quantum going on (other than in the pedantically reductionist sense) on the time/temperature scales relevant to thinking.

  8. anonymous said, on February 3, 2021 at 11:09 pm

    Without a planet utterly overrun with magic nanomachines, life from paleolithic to modern would be impossible. Sort of humbling: Our technology is still a blunt instrument in comparison.

  9. Brent said, on February 4, 2021 at 2:16 am

    I see what you did there.

  10. DamnItMurray said, on February 4, 2021 at 10:22 am

    Would periodic injections of von Neumann RNA increase, say, my mathematical abilities? Are there compatibility issues with foreign RNA transfusion? I’ve been interested in cognitive enhancement for all of my short 21-year-old adult life, but I’ve always suspected that the key factor would be manipulating the electric grid of the ol’ noggin.

  11. George W. said, on February 4, 2021 at 3:04 pm

    >The amount of memory in the brain is then the number of bits in the RNA involved, which is about a gigabyte per cell

    Interesting idea presented by this guy, but I’m calling bullshit about 1GB per cell. The neat trick about the human brain is its ability to “compress”, store only the important parts of, and manipulate data extremely well. Being able to store 8 quintillion instances of a word is not the same as being able to store 1 exabyte.

    To better illustrate that point, consider that the brain can recognize someone’s face based on a 360p photo with poor lighting. I’m not sure where facial recognition technology is at, but a computer probably would need a little more than that. Just speculating, I would say there is much less than 200B (compressed) of data involved in memorizing a face in the brain. For instance, eye color isn’t stored as 256-bit color values across several pixels, it’s stored as “brown”, “dark blue”, “light blue”, “green”, etc. In total, eye color may only comprise 3-5 bits of data. If the person’s eye color is extraordinary, maybe the brain reserves more space for it. Do this for 16 or so traits (moles, skin color, nose size, age, etc) and pretty soon you’re able to distinguish the face pretty well among 7 billion with about 10 bytes.

    While it may take a computer 2MB in data to do the same thing, this doesn’t imply that the human brain can store the entire 2MB of a high-res photo on a glance. Looking back, this is a bad example since computers excel at facial recognition…
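    The commenter’s arithmetic holds up. The trait count and bit widths below are the comment’s own guesses, not established figures:

```python
# Checking the face-encoding estimate: ~16 coarse traits at 3-5 bits each.

traits = 16
bits_per_trait = 5                    # upper end of the 3-5 bit guess
total_bits = traits * bits_per_trait  # 80 bits = 10 bytes

# 2^80 distinct codes vastly exceeds the ~7e9 people to distinguish.
print(total_bits // 8, 2 ** total_bits > 7_000_000_000)  # 10 True
```

Ten bytes per face is the sense in which brain storage looks highly compressed compared to raw pixel data.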

    There’s definitely at least some role in RNA for memory, as proteins do a heck of a lot in basically all cells. Short term would certainly involve stuff like RNA channels, as “the synthesis of most protein molecules takes between 20 seconds and several minutes.” [1]

    Seems like an underrated idea, and as a bonus, it probably won’t require a 5 mile long vacuum tube to investigate!

    [1] – https://www.ncbi.nlm.nih.gov/books/NBK26829/

    • Scott Locklin said, on February 4, 2021 at 3:20 pm

      Seems like an obvious thing to look into. I know how it works though; obvious things worth looking into don’t get you tenure or funding.

      Maimon is just a speculative shitposter on this subject like me, so you don’t have to take everything he says as gospel. I’m not as convinced of this idea as he is, but it certainly seems people should be more curious about, at least, single-celled organisms doing brainy things. That should be something we can get a handle on. You can build something like the binary tree tunnels that Boris Ryabko did with the ants for microbes:

      https://scottlocklin.wordpress.com/2013/07/02/on-the-empire-of-the-ants/

  12. mitchellporter said, on February 7, 2021 at 2:40 pm

    Various RNAs are part of how cellular processes are regulated. In that sense, the RNA population in a neuron could conceivably play a large role in its behavior. But if the idea is that RNA can serve as a base-4 digital medium for the storage of memories…

  13. Rickey said, on February 7, 2021 at 6:21 pm

    Interesting hypothesis. Maybe this could explain how persons are able to recover from strokes and severe head trauma, or the ability to learn foreign languages fluently. I wonder if cognitive degenerative diseases such as Alzheimer’s, where persons can’t remember or recognize their spouses or children, are the result of this memory RNA deteriorating. Also, I read that the human brain on average uses 20 percent of the body’s energy production whether you are running a marathon, sleeping, watching a Tik Tok video or playing grand master level chess. I think there would be spikes in relative energy usage if the brain memory were only neural based.

    • William O. B'Livion said, on February 14, 2021 at 2:14 am

      Alzheimer’s is looking more like a collection of somewhat similar symptoms than an expression of a single (or single set) underlying disorder.

      There’s a reasonable amount of evidence that at least some versions of Alzheimer’s are metabolic related–sort of a “type 3 diabetes”. There’s at least good reason to believe that things that strengthen and increase the functioning of mitochondria will delay onset, possibly long enough for something else (cancer, heart disease, single payer health care) to kill you.

      I’ve heard some other evidence that at least part of the problem might be people not getting sufficient sleep–one of the things the aforementioned glial cells do is that during sleep they shrink and let more brain juice get pumped out/around, possibly aiding in moving toxic crap around. Oddly both Ronaldus Magnus and the Iron Lady were of the “I’ll sleep when I’m dead” mindset and both had Alzheimer’s in their last years. Dunno what Gropey Joe’s sleep schedule is like.

      Doesn’t mean that some versions of dementia aren’t caused by RNA issues, but most of what we call Alzheimer’s looks metabolic.

      > Also, I read that the human brain on average uses 20 percent of the body’s energy
      > production …running a marathon, sleeping,…

      I can’t speak to running marathons, because the longest I’ve ever run is about 10 miles, but I’ve been running a lot recently, and your brain is pretty active. It’s not like you just drift into a meditative state. Ditto during certain phases of sleep. That said, there’s a LOT of really crap studies out there, and “Science Reporting” is even worse than political reporting in terms of understanding what they’re reading. I’d be really surprised if cognitively difficult work *doesn’t* burn more glucose than reading a USA Today.

  14. zardoz said, on February 7, 2021 at 7:23 pm

    Possibly relevant: Elsa Suberbielle, et al: “Physiologic brain activity causes DNA double-strand breaks in neurons, with exacerbation by amyloid-β” https://www.nature.com/articles/nn.3356

    quote:

    “We show that a natural behavior, exploration of a novel environment, causes DNA double-strand breaks (DSBs) in neurons of young adult wild-type mice. DSBs occurred in multiple brain regions, were most abundant in the dentate gyrus, which is involved in learning and memory, and were repaired within 24 h.”

    Thinking seems to cause DNA strand breaks in the brain. Well, at least in mice! And then sleep is the repair process.

  15. mischa said, on February 8, 2021 at 10:09 pm

    What kind of time does it take to transcribe or read in RNA? How would that affect memory if your neuron has to read in the strand to recover that data or compute something? How would it find the right strand that it needs?
    If RNA strands are programs or memories floating around how does the right one get “loaded up” at the right time?

  16. G said, on February 11, 2021 at 7:56 pm

    I’m just posting here because it’s the most recent article. You might like to see what Allen Millyard does in his shed with a few machines and a hacksaw on youtube. It is refreshing.

    • Scott Locklin said, on February 11, 2021 at 9:16 pm

      Not my cuppa. I guess I used to fiddle with cars, but I’m more of an instrument builder.

      • G said, on February 11, 2021 at 9:27 pm

        No worries. I thought it fit well with the recurring theme of old guys making useful things with a mill and lathe.

  17. William O. B'Livion said, on February 13, 2021 at 4:59 pm

    ” Gurdjieff, the rug-merchant and mountebank who brought us the phrases “working on yourself” and … “consciousness,” asserted that the average human consciousness was a bunch of disconnected automatons that could occasionally could be unified into a whole, powerful being.”

    At a fairly broad level how is this any different than what Sam Harris claims about the brain vis-a-vis free will?

    “I stumbled upon RNA memory I think googling on mRNA as in the vaccine quasi-technology”

    I’m starting to see some anti-vax types (or at least anti-vax in regards to the Corona Virus vaccine) calling it “gene therapy”. Which seems a little off base. Was going to ask you what you thought.

    • Scott Locklin said, on February 13, 2021 at 6:43 pm

      I barely know who Sam Harris is. Fedora atheism passed me by and I prefer ideas by complicated Russian orthodox guys like Berdyaev and Solovev.

      It’s a fairly risky technology in the short and long term, and I prefer adenovirus vaccines like the Russian offering (speaking of Russian things). That said, enough people have gotten it by now it at least probably won’t kill you outright and might even have some benefits in preventing death by Wuhan lung butter. I was spooked early by Moderna’s executive team selling off a bunch of their shares in August and still won’t take theirs unless it’s at gunpoint.
      https://www.npr.org/2020/09/04/908305074/bad-optics-or-something-more-moderna-executives-stock-sales-raise-concerns

      • William O. B'Livion said, on February 14, 2021 at 2:58 am

        I can’t really do Harris’s argument justice (in part because I think he’s a whining bitch), but in one segment I heard him say that our decision making process is massively influenced by unconscious and subconscious processes.

        This is straight up obvious from almost all the brain research we’ve done. Between chemically altering the brain, looking at people’s decision making pre- and post- brain injury, asking people questions while they’re being scanned by an fMRI machine etc., we don’t have a unified computing engine, it’s a bunch of massively interconnected chunks of stuff.

        I don’t think we have unbounded free will; we can only make choices from alternatives we know about, and we can only choose what is possible (well, we can choose the impossible, but we can’t do it; Jonathan Livingston Seagull is a deeply stupid story). But we do make choices.

      • William O. B'Livion said, on February 14, 2021 at 3:06 am

        As to the CCP Virus, I might have already had it (late January of 2020), but don’t know.

        I am pretty much treating it like the flu–I don’t get the Flu vaccine because I’m at low risk for catching it (I work at home, etc.), and if I do catch it I’m at low risk for serious symptoms or death–I’m pretty healthy, eat, sleep as well as I can, exercise regularly etc. etc.

        I got the flu vaccine in Oct, because I was out in public all day every day, and it was free. If I was going back into that sort of gig (shudder) and I could get it, I probably would.

        I wasn’t surprised by Moderna’s executive team selling off their stock. If I went from being “sort of wealthy” to “really rich” on the basis of owning a bunch of a single stock, I’d diversify the snot out of my assets. Being poor *sucks*, being middle class ain’t bad, but having f* you money invested well and wisely? Yeah, I’d sell a big chunk as soon as I could.

        • Scott Locklin said, on February 14, 2021 at 11:46 am

          That could be all of it, but it sure freaked out a lot of people. I still think it’s reasonable to prefer the vaccine shipped by the company that has been shipping products for 100 years to the dying startup that won on a punt.

  18. John Fenley said, on March 13, 2021 at 7:13 am

    The thing that has always bothered me about the connectome model, is that learning would seem to require re-wiring, and I’m not sure that would happen fast enough to form memories. RNA seems to be one of the only storage mechanisms with enough bandwidth to be useful for continual recording of all stimuli.

  19. pwyll said, on April 19, 2021 at 4:32 pm

    This post on a “ferret experiment” looks related: https://join.substack.com/p/is-this-the-most-interesting-idea

