Locklin on science

RNA memory hypothesis

Posted in brainz, Open problems by Scott Locklin on February 3, 2021

There’s an old theory that memory is actually encoded, at least in part, in RNA. The argument is pretty simple: there’s no obvious way for all that sensory data to be captured in synapses as long-term memories, yet long-term memories obviously exist and are fairly reliable. RNA, unlike synapses, is energy-efficient, redundant and persistent, and is consistent with what we observe about brains in day-to-day life.

You’d think with all the neuroscientists running around these days, this would have been eliminated from serious consideration by now, but the opposite is true. There’s actually been a little bit more experimental evidence indicating it might be true. People have allegedly transferred memories between snails, planaria, sea slugs, and there are accounts of people “inheriting” memories after organ transplants. It’s entirely possible that all of these are the result of poor experimental hygiene and wishful thinking, and there’s nothing really there, but they sure are evocative, and it seems like people should be interested in sorting this out, or finding simpler models which have hopes of sorting it out.

I had run across this idea again reading a Ron Maimon screed on physics stack exchange. It’s a pretty good screed worth reading (thanks Laeeth):

Highlight excerpted for the lazy:

RNA ticker tape

It is clear that there is hidden computation internal to the neurons. The source of these computations is almost certainly intracellular RNA, which is the main computational workhorse in the cell.

The RNA in a cell is the only entity which is active and carries significant bit density. It can transform by cutting and splicing, and it can double bind to identify complementary strands. These operations are very sensitive to the precise bit content, and allow rich, full computation. The RNA is analogous to a microprocessor.

In order to make a decent model for the brain, this RNA must be coupled to neuron level electrochemical computation directly. This requires a model in which RNA directly affects what signals come out of neurons.

I will give a model for this behavior, which is just a guess, but a reasonable one. The model is the ticker-tape. You have RNA attached to the neuron at the axon, which is read out base by base. Every time you hit a C, you fire the neuron. The receiving dendrite then writes out RNA constantly, and writes out a T every time it receives a signal. The RNA is then read out by complementary binding at the ticker tape, and the RNA computes the rest of the thing intracellularly. If the neuron identifies the signal in the received RNA, it takes another strand of RNA and puts it on the membrane, and reads this one to give the output.

The amount of memory in the brain is then the number of bits in the RNA involved, which is about a gigabyte per cell. There are hundreds of billions of cells in the brain, which translates to hundreds of billions of gigabytes. The efficiency of memory retrieval and modification is a few ATP’s per bit, with thousands of ATP’s used for long-range neural communication only.

The brain then becomes an internet of independent computers, each neuron being a sizable computer itself.
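For fun, the quoted ticker-tape rules (fire on every C read out; record a T for every spike received) can be turned into a toy simulation. Everything here beyond those two rules — the function names, the example sequence, the use of A as the "silent" base — is invented purely for illustration:

```python
# Toy sketch of the quoted RNA "ticker tape" model. The fire-on-C and
# write-T-on-receive rules come from the quote; everything else is invented.

def read_out(tape: str) -> list[int]:
    """Read an RNA tape base by base; fire (1) on every 'C', else stay silent (0)."""
    return [1 if base == 'C' else 0 for base in tape]

def record(spikes: list[int]) -> str:
    """The receiving dendrite writes a 'T' for each incoming spike, 'A' otherwise."""
    return ''.join('T' if s else 'A' for s in spikes)

out_tape = "GCACCUGC"
spikes = read_out(out_tape)   # [0, 1, 0, 1, 1, 0, 0, 1]
recorded = record(spikes)     # "ATATTAAT"

# Back-of-envelope capacity from the quote: ~1 GB of RNA per cell times
# ~10^11 neurons gives ~10^11 GB, i.e. on the order of 100 exabytes.
capacity_gb = 1 * 100e9
print(spikes, recorded, capacity_gb)
```

The point of the sketch is only that the scheme is mechanically trivial: reading and writing tapes is cheap, and all the interesting computation happens between `read_out` and `record`, inside the cell.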

 

This is a pretty exciting idea, and there are several near relatives. There are protein kinases involved in mRNA transcription and immunology which are candidates for memory as well. Functionally they’re all kind of similar: the idea is that long-term memory is chemical and exists at the subcellular level.

Mechanisms are known to exist. If RNA is the persistence substrate, you’d expect there to be something like a nucleotide-gated channel in the brain, so it can talk to the signal-processing components of the brain. There is, starting with the olfactory system, which is known to be associated with memory. Such RNA-gated channels are also important in the hippocampus, the master organ of memory in the brain. Furthermore, it’s entirely possible that the glial cells have something to do with it; their function is still poorly understood. Women have more of them than men; maybe that’s why they can always remember where your keys are. There’s plenty of non-protein-coding RNA floating around in the brain doing … stuff, and nobody really knows what it does.

One of the cute things about this idea is that it is entirely possible RNA works like the ticker tape of a Turing machine, the way Maimon suggests above. There are a number of speculations to this effect. One can construct something that looks like logic gates or a lambda calculus through RNA editing rules; various enzymes we know about already more or less do this; weirder stuff like methylation may also play a role.
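To see how complementary binding could support logic at all, here is a toy sketch loosely in the spirit of strand-displacement nucleic-acid computing: a hypothetical "gate" strand fires only when both input strands exactly complement its two recognition sites. The sequences and names are made up, and real RNA logic (and the enzymes that implement it) is vastly messier:

```python
# Toy AND gate from Watson-Crick complementary binding. Loosely in the
# spirit of strand-displacement nucleic-acid computing; sequences invented.

COMPLEMENT = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def complement(strand: str) -> str:
    """Watson-Crick complement of an RNA strand (orientation ignored)."""
    return ''.join(COMPLEMENT[b] for b in strand)

def and_gate(site1: str, site2: str, in1: str, in2: str) -> bool:
    """Gate 'fires' only if each input binds (exactly complements) its site."""
    return in1 == complement(site1) and in2 == complement(site2)

site_a, site_b = "GGAU", "CCUA"
print(and_gate(site_a, site_b, "CCUA", "GGAU"))  # True: both inputs bind
print(and_gate(site_a, site_b, "CCUA", "AAAA"))  # False: second input mismatched
```

Since binding is sensitive to the whole sequence, a mismatch anywhere kills the output — which is exactly the "sensitive to the precise bit content" property Maimon points at.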

There are obvious ways of figuring all this out; people do look at RNA activity in the hippocampus, for example. But because this theory is out of fashion, they attribute the activity to things other than direct RNA memory formation. Everyone more or less seems to believe in the Hebbian connectome model, despite there being little real evidence that it is the long-term memory mechanism, and little understanding of what brains do at all beyond the relatively simple image-recognition/signal-processing type stuff they are known to do. Memory is much more mysterious: seemingly a huge reservoir of super-efficient data storage.

The fact that more primitive organisms which are completely without nervous systems seem to have some kind of behavioral memory system ought to indicate there is something more than Hebbian memory. People are starting to notice. You have little single-cell critters like paramecia responding to stimuli, and acting more or less in as complex a way as larger organisms which do have some primitive nervous system. Various “microtubule” theories do not explain this (sorry Sir Roger), as disrupting them doesn’t change behavior much.

One can measure memory in some of these little beasts; the E. coli that lives in your bowels and in overhopped beers has a memory of at least 4 seconds: better than some Instagram influencers. Paramecia have memories which may last their entire lifetime; if the memories are transferable via asexual reproduction (not clear they are; worth checking), that would be a couple of weeks: vastly better than most MSNBC viewers. Larger unicellular organisms like the 2mm-long Stentor exhibit very complex behaviors, much like the multicellular animals they more or less compete with. No neurons! Lots of behavior. Levels of behavior which would be very difficult to reproduce even using the latest megawatt dweeb-learning atrocity that would otherwise be used to (badly) identify cat videos.


Since humans evolved from unicellular life, there should be some more primitive processing power still around, very possibly networked and working in concert. We already know that bacterial colonies kind of do this, even using electrical mechanisms similar to those observed in brains. It’s completely bonkers to me that modern “neuroscientists” would abandon the idea of RNA memory when … something is going on with small unicellular creatures. There is obviously some mechanism for the complex behaviors exhibited by unicellular life, and RNA is weird and active enough to be a plausible mechanism. Maybe they’re not aware of this because unicellular organisms don’t have neurons? An argument for taking a more comprehensive biology course, or, like, looking at something other than neurons through a microscope, if so.

I’m not sure hyperacuity is fully understood. I’ve read things which claim that dolphin, electric eel, bat and human hyperacuity (eyeballs, or fast reflexes in video games) is a sort of interferometry done on the rate-encoded spikes of nervous impulses. It’s possible that this is true, but it is also possible that some extra, offloaded computational element governs this amazing phenomenon. To put a few numbers on it: bat nervous systems can echolocate on a 10 nanosecond time scale, electric eels on a 100 nanosecond time scale. Biological nervous systems operate at a rate-encoded, sub-kilohertz time scale, yet resolve events on 10-100 nanosecond time scales; that’s a pretty remarkable characteristic. The claim is that the neurons are doing some fancy interferometry on the rate-encoded spikes nervous systems are known to operate on, but there is much hand-waving going on. I’ll wave my hands further and wonder if offloading some of the computation onto RNA computers at the cellular level might help somehow. Certainly neural nets with memory layers are vastly more powerful than those without. Granted, the thing on your video card isn’t very Hebbian either, but one can make the argument at least at the box-diagram, signal-processing level.
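The statistical core of such interferometry claims is at least easy to demonstrate: many coarse, jittery timers, averaged in bulk, can resolve a delay far smaller than any single unit’s precision, with the standard error falling as jitter/√N. This is a sketch of that averaging effect only, with entirely invented numbers — not a model of any actual nervous system:

```python
# Sketch: a population of slow, noisy units resolving a timing difference
# far below any single unit's precision. All numbers invented.
import random

random.seed(0)
true_delay = 0.3        # tiny timing difference, in units of one spike interval
jitter = 50.0           # each unit's timing noise: ~170x larger than the delay
n_units = 1_000_000

# Each unit reports the delay plus huge independent jitter; the population
# mean recovers it with standard error jitter / sqrt(N) ~ 0.05 here.
est = sum(true_delay + random.gauss(0, jitter) for _ in range(n_units)) / n_units
print(f"true delay {true_delay}, population estimate {est:.3f}")
```

Whether real neurons do anything resembling this averaging is exactly the hand-waving part; the sketch only shows the resolution gain is not magic.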

There are fascinating consequences to this, some of which I think were explored by 50s and 60s science fiction authors who were aware of the then-popular RNA memory hypothesis. Imagine you could learn a new language by taking an injection. Of course, if such a technology were possible, absolutely horrific things would also be possible, and in fact likely, as early technological innovations come from large, powerful institutions.

There are various mystics who assert that humans have multiple levels of consciousness. Gurdjieff, the rug-merchant and mountebank who brought us the phrases “working on yourself” and … “consciousness,” asserted that the average human consciousness was a bunch of disconnected automatons that could occasionally be unified into a whole, powerful being. While I think Gurdjieff mostly seemed interested in fleecing and pantsing the early 20th century equivalent of quartz-crystal clutching yoga instructors, his idea is one of the few usefully predictive hypotheses for why stuff like hypnosis and advertising (marketing hypnosis) works. Maybe he stumbled upon the multicore networked RNA memory hypothesis by accident. Maybe the ancients are right and the soul resides somewhere in the liver. Don’t laugh; people have led normal lives with giant pieces of their brain removed, but nobody has survived the death of their liver. The former fact, at least (normal people getting by without much brain tissue), ought to be the end of the argument: purely Hebbian models of the brain are obviously false.

Debate in the literature:


https://www.frontiersin.org/articles/10.3389/fnsys.2016.00088/full

https://www.frontiersin.org/articles/10.3389/fnsys.2018.00052/full

William R. Corliss and open problems in science

Posted in Corliss, Open problems by Scott Locklin on August 2, 2020

William Corliss was a physicist and rocket scientist from the heroic golden age of physics. He did great work in everything from nuclear engineering, to telerobotics, to neutron spectroscopy, to space flight: a real universal man in the last exciting time in science. What we mostly know him for these days, though, are his catalogs of things we don’t know.

Looked a lot like my late pal Marty as well


He represents exactly my kind of scientist: one who is interested in the cool stuff happening in the current year, and all the stuff we don’t know. You infectious human waste who “fucking love science” don’t, actually. Science is about the mystery. It’s not a clerisy you can use to bludgeon your political opponents, nor a series of facts you can feel smug about “knowing”; it’s about appreciating the wonder of all of it. It’s insufficiently appreciated what a bunch of dumbasses humans are, and how little we actually know about matters of the utmost importance to our self-understanding as human beings. Most modern clerisy “scientists” couldn’t even tell you about important open problems in their field. They’re too busy filling out forms, grubbing for money and social status, diddling their students and engaging in Maoist witch hunts to bother with the reason all honest people become scientists: appreciating the wonders of nature and figuring things out.

Corliss’ work looks like it more or less wrapped up around the mid-90s; it’s truly enormous, and it was almost entirely done before the internet era. He has a sensible rating system involving quality of data and extremity of anomaly. Many of the really big mysteries mentioned are still mysteries. It is vast, and at this point I own enough of it that I don’t have to worry about you guys cleaning up on volumes I may not have yet. Of course, most of it is not so mysterious, but it is at least noteworthy and thought-provoking. Pointing out that a certain kind of rock formation is weird and interesting is vastly superior to never mentioning the weird rocks.

Contemplate writing two feet worth of authoritative books on biology, astronomy, meteorology, geology and archaeology before Al Gore invented the internet, while maintaining an active career in rocket science. There’s more to it than meets the eye here; this represents the in-print stuff and a few out-of-print books I managed to get my hands on. More of his work is in out-of-print books, and some of it only exists in his newsletters, some of which his son has preserved online.

Most of it is taken from Science, Nature and other respectable scientific journals. People will grouse about it, because people always grouse, but he seemed to do a bang-up job of picking out interesting things for which there are no reasonable explanations, and a lot more things which are merely “pretty damn weird.” Probably using stuff like index cards.

Now some of it may seem fruity to smug yutzes. Dr. Corliss has a section on the Yeti in Biological Anomalies Humans III. However, most of the citations are, as I said, from Science and Nature. Should we ignore these lacunae, “fucking love science” dipshits? I think at this point, where even primitive barbarians have ipotato, it’s probable there is no Yeti hominid, but Corliss’ probability of this being a big deal back in 1994 is still approximately correct as far as I can tell. Even if the Yeti is ultimately silly and wrong, his preservation of wonderful tales of the Orang Pendek (a legendary Sumatran dwarf hominid race) or the Agogwe (African mini-Yeti) a few pages afterwards makes it all worthwhile.

Since I’ve got this giant stack of books of weird lacunae in the sciences, as I thumb through them I’ll post a few here, checked against the latest research, at least as well as the most convenient search engines go. Maybe one or two will be worth a full sperdo nerding-out. Ideally this will make some of you think about something useful, but at the very least, kick his kids a few bucks by buying his books.

A few tastes: 

Fat tropical animals: here’s one looking us in the face: why the fook would fat animals be happy in the tropics? It’s possibly a recent evolutionary adaptation, hippos being in the tropics, but it’s bloody weird. Most animals, even people, are well suited to the climates they live in, with physical adaptations that help. (BMI3)

Human Mortality Correlated with Geomagnetic Activity: here’s one Corliss rated as fairly low in data quality back when he wrote about it, but top-notch as an anomaly if it turns out to be true. The geomagnetic field has weird disturbances correlated with the quasiperiodic solar activity. Apparently this also correlates with premature death. Obviously nobody knows why, but it is fairly well documented at this point; in the years since Corliss originally wrote about it in BHF32 (Human Anomalies II) (one of his original refs conveniently available here), it’s become fairly well known. I linked seven references above; there are probably a hundred.

Nonrandom Direction-of-Approach of Comets to the Sun: the prevailing theory of the Oort cloud says comets should approach the sun from random directions. People are fairly certain that comet approaches are non-random, and there is lots of evidence for it; people are more certain than ever that something is going on here, and various ideas on galactic tidal forces have been proposed to deal with it. (ACB2, The Sun and Solar System Debris)

Bone Caves, Bone Caches and Other Superficial Accumulations of Bones: this used to be a trope of H. Rider Haggard and Edgar Rice Burroughs books; aka the elephant graveyard of lore. There are numerous examples of this, though Corliss kind of lumps them together in ESD1 (Neglected Geological Anomalies). Some of them are dinosaurs falling into a ravine and being pickled in the moss that eventually becomes coal. But it’s still freaking weird. Other bone caves are just insane; such things used to be considered evidence by geologists for the Great Flood, back when that was the dominant paradigm (150 years ago isn’t that long). He gives this top ratings: very strong data, very weird phenomenon. Moderns apparently just ignore it, despite the fact that Darwin himself thought it pretty peculiar.


Production-Consumption Discrepancy in Prehistoric Lake Superior Copper Mining: I bet most of you didn’t know that North America had pre-European copper mines; Indians had been mining copper there for 5000 years. Personally I consider this pretty weird in itself. It’s a fact, and it’s largely ignored. What propels it to “holy shit that’s weird” territory is that nobody knows what happened to most of the copper (MSE6, Ancient Infrastructure). The calculation of how much copper was taken out of there is pretty straightforward, and copper doesn’t disappear easily; there are copper and bronze artifacts from the Americas (and everywhere else) from that long ago. The speculation is that perhaps Phoenician merchants (or Egyptians, or aliens, or whatever) were trading with the Americas for much longer than we know. It is in principle a knowable thing; one can identify artifacts made with the particular chemical composition of Lake Superior copper. Not something likely to make you friends in the Archaeology department though.

 

 

Open problems in Robotics

Posted in brainz, Open problems by Scott Locklin on July 29, 2020

Robotics is one of those things the business funny papers regularly wonder about; it seems like consumer robotics is a revolutionary trillion-dollar market which is perpetually 20 years away, more or less like nuclear fusion.

I had contemplated fiddling with robotics in hopes of building something that would do a useful science-fictiony thing, like go fetch me a beer from the refrigerator. Seemed like a nice way of fucking around with math and the machine shop, and ending up with something cool and useful to fiddle with. To do this, my beer-fetching robot would have to navigate my potentially cluttered apartment to the refrigerator, open the door, look for the arbitrarily shaped/sized beer bottle amidst the ketchup bottles, jars of herring, broccoli and other such irrelevant objects, move things out of the way, grasp the bottle and return to me. After conversing with a world-renowned expert in autonomous vehicles (a subset of robotics), I was informed that this isn’t really possible. All the actions I described above are open problems. Sure, you could do some ridiculous workaround that makes it look like autonomous behavior. I could also train a monkey or a dog to do the same thing, or get up and get the damn beer myself.

There really aren’t any lists of open problems in robotics, I’m assuming because it would be a depressingly long litany. I figured I would assemble one; one which I assume will be gratuitously incomplete and occasionally wrong, but which makes up for all that by actually existing. As with my list of open problems in physics and astronomy, I could very well be wrong about some of these, or behind the times; my expertise consists of Google and 5-10 year old conversations with a cool dude between deadlifts. But it seems worth doing anyway.

  1. Motion planning is an actual area of research, with its own journals, schools of thought, experts and sets of open problems. Things like “how do I get my robot from point A to point B without falling into a canyon, getting stuck, or failing to deal with obstacles generally” are not solved problems. Even things like a model of where the robot is with respect to its surroundings: totally an open problem. How to know where your manipulator is in space, and how to get it somewhere else: open problem. Obviously beer-fetching robots need to do all kinds of motion planning. Any potential solution will be ad hoc and useless for the general case of, say, fetching a screw from a bin in the machine shop.
  2. Multiaxis singularities: this one blew my mind. Imagine you have a robot arm bolted to the ground. You want to teach the stupid thing to paint a car or something. There are actual singularities possible in the equations of motion, and it is more or less an underconstrained problem. I guess there are workarounds for this at this point, but they all have different tradeoffs. It’s as open a problem as motion planning on a macro scale.
  3. Simultaneous Localization and Mapping, SLAM for short. When you enter a room, your brain knows exactly where your body is and makes a map of the surroundings. Robots have a hard time with this. There are any number of solutions to the problem, but ultimately the most useful one is to make a really good map in advance. Having a vague map, a topological map, or some kind of prior on the environment: these are all completely different problems which seem like they should have a common solution, but don’t. While there are solutions to some of these problems available, they’re not general, and definitely not turn-key to the point where there is a SLAM module you can buy for your robot. I could program my beer robot to know all about my room, but there will always be new obstacles (a pair of shoes, a book) which aren’t in its model. It needs SLAM to deal.
  4. Lost Robot Problem. Related: if I wake up and my friends have moved my bed to another room, we’ll all have a laugh. Most robots won’t know what to do if they lose track of their location. A robot needs a strategy to deal with this, and the strategies are not general. It’s extremely likely I’ll turn on my beer robot in different positions and locations in the room, and it will have to deal with that. Now imagine I put it somewhere else in the apartment building.
  5. Object manipulation and haptic feedback. Hugely not done yet. The human hand is an amazing thing, and robot manipulators are nowhere near being able to manipulate with haptic feedback or even simply manipulate real world objects based on visual recognition. Even something like picking up a stationary object with a simple graspable plane is a huge unsolved problem people publish on all the time. My beer robot could have a special manipulator designed to grasp a specific kind of beer bottle, or a lot of models of shapes of beer bottles, but if I ask the same robot to fetch me a carrot or a jar of mayo, I’m shit out of luck.
  6. Depth estimation. A sort of subset of object manipulation; you’d figure that for a robot with binocular vision, or even simply the ability to poke at an object and see it move, this would be pretty simple. It’s very much an open problem. Depth estimation is a problem for my beer-fetching robot even if the beer is in the same place in the refrigerator every time (the robot won’t be, depending on its trajectory).
  7. Position estimation of moving objects. If you can’t know how far away an object is, you’re sure going to have a hard time estimating what a moving object is doing. Lt. Data ain’t gonna be playing baseball any time soon. If my beer robot had a human-looking bottle opener, it would need a technology like this.
  8. Affordance discovery: how to predict what an object will do when you interact with it. In my example, the robot would need a model for how objects are likely to behave when moved aside while searching my refrigerator for a beer bottle.
  9. Scene understanding: this one should be obvious. We’re just at the point where image recognition is useful: I drove an Audi on the autobahn which could detect and somewhat adhere to the lines on the highway. I’m pretty sure it eventually would have detected the truck stopped in the middle of the road in front of me, but despite this fairly trivial “you’re going to turn into road pizza” if(object_in_front) {apply_brake} level of understanding, it showed no evidence of being capable of this much reasoning. Totally open problem. I’ll point out that the humble housefly has no problem understanding the concept of “shit in front of you; avoid,” making robots and Audi brains vastly inferior to the housefly. Even putting the obvious problem aside: imagine your robot is tasked with getting me a beer out of the refrigerator and there is a bottle of ketchup obscuring the beer. The robot will be unable to deal, even with a 3-d model of the concept of the beer bottle and the ketchup bottle, which would be absurdly complex to program the robot with.
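On item 6: for an ideal, calibrated stereo pair the geometry itself is one line, depth Z = f·B/d for focal length f, baseline B, and disparity d. The open problem is everything around that formula: deciding which pixel in the left image matches which in the right, calibration drift, and textureless or occluded surfaces. A sketch with made-up camera numbers:

```python
# Ideal stereo depth: Z = f * B / d, with focal length f (pixels),
# baseline B (meters), and disparity d (pixels). Camera numbers invented.
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity, or a matching failure")
    return f_px * baseline_m / disparity_px

# A beer bottle at 30 px of disparity, f = 700 px, 6 cm baseline:
z = depth_from_disparity(700.0, 0.06, 30.0)
print(f"{z:.2f} m")  # 1.40 m
```

Note the failure mode is built into the formula: depth error grows quadratically as disparity shrinks, so distant objects (and every mismatched pixel pair) produce garbage, which is one reason this stays an open problem in practice.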

 

several of the above problems illustrated

 

 

There’s something called the Moravec paradox which I’ve mentioned in the past.

“it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”

Robotics embodies the Moravec paradox. There’s a sort of corollary to this that people who work in the tiny field of “actual AI” (as opposed to ML ding-dongs who got above their station) used to know about. This was before the marketing departments of Google and other frauds made objective thought about this impossible. The idea is that intelligence and consciousness arose spontaneously out of biological motion control systems.

I think the idea comes from Roger Sperry, but whatever; it used to be widely known and at least somewhat accepted. Those biological motion control systems exist even at a microscopic level: even unicellular creatures like the paramecium, or primitive animals without real nervous systems like the hydra, are capable of solving problems that we can’t solve even in the general case with the latest NVIDIA supercomputer. While robotics is a noble calling and the roboticists solve devilishly hard problems, animal behavior ought to give a big old hint that they’re not doing it right.

 

 

Guys like Rodney Brooks seemed to accept this, and built various robots that would learn how to walk using primitive hardware and feedback-oriented ideas rather than programmed ideas. There was even a name for this: “Nouvelle AI.” No idea what happened to those ideas; I suppose they were too hard to make progress on, though the early results were impressive-looking. Now Dr. Brooks has a blog where he opines on hilarious things like flying cars and “real soon now” autonomous vehicles being right around the corner.

I’ll go out on a limb and say I think current year Rodney Brooks is wrong about autonomous vehicles, but I think 80s Rodney Brooks was probably on the right path. Maybe it was too hard to go down the correct path: that’s often the way. We all know emergent systems are super important in all manner of phenomena, but we have no mathematics or models to deal with them. So we end up with useless horse shit like GPT-3.

It’s probably the case that, at minimum, a genuine “AI” would need to have a physical form and be capable of interacting with its environment. Many of the proposed algorithmic solutions to the problems listed above are NP-hard. To me, this implies that crap involving computers such as we use is wrong. We approximately solve NP-hard problems in other ways all the time; you can do it with soap bubbles, but the design of that “computer” is vastly different from the von Neumann machine: it’s an analog machine where we don’t care about infinite accuracy.

You can see some of this in various proposed neuromorphic computing models: it’s abundantly obvious that nothing like stochastic gradient descent or contrastive divergence is happening in biological neurons. Spiking models like the liquid state machine are closer to how a primitive nervous system works, and they’re fairly difficult to simulate on von Neumann hardware (some NPC is about to burble “Church-Turing thesis” at me: don’t). I think it likely that many open problems in robotics could be solved using something more like a simulacrum of a simple nervous system than by writing Python code in ROS.
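For concreteness, here is roughly the simplest spiking model there is, a leaky integrate-and-fire neuron in discrete time; a liquid state machine wires thousands of richer versions of this into a random recurrent "reservoir" and trains only a readout on its state. All constants here are arbitrary illustration values:

```python
# Minimal leaky integrate-and-fire neuron in discrete time. The membrane
# potential leaks toward rest, integrates input, and emits a spike plus a
# reset when it crosses threshold. Constants are arbitrary.
def lif(inputs, leak=0.9, threshold=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i        # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)    # spike...
            v = v_reset         # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Constant sub-threshold drive: the neuron charges up, fires, resets, repeats.
print(lif([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Even this toy shows why such models annoy von Neumann hardware: the interesting signal is in event timing, not in dense matrix multiplies.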

But really, all I know about robotics is that it’s pretty difficult.

30 open questions in physics and astronomy

Posted in Open problems, physics, physics anomalies by Scott Locklin on August 2, 2012

A friend of mine asked me if I thought there were actual open questions in physics, ones that individuals or small groups could make a contribution to (as opposed to things like the Higgs boson which require 4000 people and billions of dollars to suss out). Here is a list I came up with. I don’t think it is definitive, and for all I know, some of these problems may no longer be open questions as of today, but I didn’t find anything better on the internets. It may be of interest to young researchers wishing to make a real contribution to human knowledge. Or maybe it’s just something to bullshit about.

Unlike other such lists, there are no silly cosmological or quantum gravitic types of questions on it. I think these are unanswerable questions, and not presently solvable by Baconian science. Essentially, such questions are metaphysical. They can’t presently be solved even in concept by making observations about reality. We’d still like to know the answers to such questions as how to unify gravity with the other forces, but it’s effectively a sort of mathematical philosophic enquiry, rather than normative science.

The other aspect of my “open questions” is they could conceivably be solved by an individual or a small team. I had to use my judgement on that, such as it is. I think these are all interesting and worthy mysteries; ones which could be of great import to the human race. I suppose they vary quite a bit in importance, but all of ’em are interesting.

  1. High Tc superconductors: they cost nothing, and liquid nitrogen is cheap. Nobody knows how they work, or if one could be made at room temperature. The consequences would be tremendous if we could! IMO, every barnyard physicist who is worth two shits should have some perovskites and liquid nitrogen kicking around the lab, just for fiddlin’ with.
  2. Turbulence and Navier-Stokes are still little understood: this is pencil-and-paper physics which stumped Heisenberg. If you think liquids are important, this is huge.
  3. Why is life chiral? When you make amino acids using chemistry, it isn’t chiral. How come life is chiral?
  4. Quantum mechanics is still mysterious, particularly in the classical limit: pencil-and-paper contributions and relatively cheap (though carefully done) experiments are possible. This is one of the biggest open questions for its philosophical and technological (quantum computing?) implications.
  5. Cosmic ray physics still has plenty of unusual phenomena. Detectors are cheap. You do have to wait for things to happen. What’s up with the giant cosmic rays for example?
  6. Solid to glass phase transitions are poorly understood and very interesting.
  7. Fractional quantum Hall effect: simple and cheap experiments, and pencil-and-paper theory which could help us understand lots of other things in nature.
  8. Catalysis is fairly mysterious and potentially revolutionary for novel technologies. The models I have seen are pretty hand-wavey, and not very useful for inventing new catalysts or predicting the properties of old ones.
  9. Entropy and the arrow of time: this is at least as important as Ernst Mach’s philosophical ideas on relativistic things, which eventually helped lead to relativistic physics. Pencil and paper and thought experiments will suffice here. This is a very important philosophical question. Probably more important than understanding quantum mechanics.
  10. What is life? Nobody knows.
  11. How do brains work?
  12. Properties of metallic hydrogen: I think you can do these experiments in diamond anvils. Or you could send a lab to Jupiter.
  13. Is there a physics analogue for solving NP-hard problems? OK, this is cheating and stealing from computer science, but there may be physical algorithms you could use as proofs here, just as certain spectra could theoretically calculate Riemann’s zeta function. I’m not the only one to have looked in physical systems for answers here.
  14. Is there a knowable physics of granular materials? How do singing sands work?
  15. How does non-equilibrium thermodynamics work? We know there is order here; you can see the order with your eyes, but we don’t know what the rules are. This is potentially bigger than understanding quantum mechanics.
  16. Are there more new weird properties of matter? We keep discovering poorly understood stuff like high Tc superconductors and the fractional quantum Hall effect. Material science is vast and potentially technologically revolutionary.
  17. How does water work? The large heat capacity of water is an enormous physical mystery. Water should be vapor at STP. It ain’t. People wave their hands and talk about hydrogen bonding, but hand waving doesn’t do much. This is also potentially huge. It’s freaking water: doesn’t get much cheaper than a jar of water.
  18. WTF is going on inside the earth? Whence comes the magnetic field? Why doesn’t Venus have one? Why is it so damn hot in there? Yes, I know there are theories: they don’t even pass a sniff test.
  19. What is the story with the Pioneer anomaly? If it’s accurate and not from something silly like an outgassing thumb print, this could throw everything we know about physics and astronomy into utter chaos. There is a way of answering this which would cost a half billion or so: shoot something into interstellar space on purpose and see what happens in 20-40 years. This is arguably far more important than anything in particle physics. The “flyby anomalies” turned out to be dipshits not understanding special relativity… I’m assuming that dipshits have done enough special relativity on Pioneer to rule this out. Otherwise, undergraduates: get to work! Edit add: I thought the flyby anomaly was resolved with SR, but googling further, it ain’t!
  20. The atmosphere is filled with anomalies: ball lightning, sprites, ELVES, blue jets, TIGERs, green flashes, etc. I have a fat book by William Corliss cataloguing mysteries from the 60s; there are even more now.
  21. There was a gamma ray burst where the high energy gamma rays got here before the low energy ones. Looks like highly anomalous physics.
  22. GEO600 has produced some bizarro gravity results.
  23. Corona physics makes no sense. Why is the corona hotter than the sun’s surface?
  24. What are diffuse interstellar bands? Nobody has a clue as to what is absorbing light at those wavelengths, yet … I’m supposed to believe the standard model explains everything? Chyeah, right.
  25. Is dark matter real, or is it the same thing that makes orbital mechanics fuck up? Something is weirding up the rotations of galaxies. I’m very tempted to put this in with the Pioneer anomaly and say, “gravity is largely untested using experiments; we should change this.”
  26. Horizon problem: why does the universe look homogeneous? It shouldn’t be.
  27. For that matter, why is the cosmic microwave background anisotropic, when everything else is isotropic?
  28. What are magnetars?
  29. Long delayed echoes? This is a seemingly science-fictional level of WTF. I recall some science fiction type speculated this was a sign of alien intelligence in the Solar System.
  30. Why is there more matter than antimatter? I threw that one in for the high energy types. You already have a result: if you’re so smart, figure it out.