Locklin on science

Fun with complexity theory

Posted in non-standard computer architectures, physics by Scott Locklin on October 27, 2021

The algorithms that reproduce the behavior of matter are NP-hard. There are lots of examples: soap bubbles on Steiner trees, Ising models (of dilute magnetic materials), chaotic dynamics, protein folding; any sort of simulation of biological matter or chemicals in general. Most of physics involves minimization of action. Many of the heuristic approaches used to solve NP-hard problems are taken from physics and nature in general. And nature is known to “solve” its problems in linear time. Computer scientists always pipe up here and invoke some permutation of Church-Turing and the idea that nature doesn’t precisely “solve” for the minimum-surface soap bubble on a Steiner tree, just one which is somehow close to the best minimum surface. I agree of course, but I also assert that this is all that matters: the world of matter is more important than the world of Turing machines; this is a branch of math getting far, far above its station (math is a branch of physics; one where experiments are cheap).
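Those physics-derived heuristics are worth making concrete. Here is a minimal toy sketch (my own example, not from any particular paper) of simulated annealing, a Metropolis-style heuristic lifted straight from statistical mechanics, grinding out a low-energy configuration of a random Ising instance, the exact ground-state problem being NP-hard in general:

```python
import math
import random

def ising_energy(spins, J):
    """Energy of a spin configuration under coupling matrix J (sum over i < j)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=20000, t0=2.0):
    """Simulated annealing: a physics-derived heuristic for the
    (NP-hard) Ising ground-state problem."""
    n = len(J)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = ising_energy(spins, J)
    best, best_e = spins[:], e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                        # propose a single spin flip
        e_new = ising_energy(spins, J)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                         # accept (Metropolis rule)
            if e < best_e:
                best, best_e = spins[:], e
        else:
            spins[i] *= -1                    # reject: undo the flip
    return best, best_e

random.seed(0)
n = 12
# random couplings; only the i < j entries are used by ising_energy
J = [[random.choice([-1, 1]) if i < j else 0 for j in range(n)] for i in range(n)]
state, energy = anneal(J)
print(energy)
```

It finds something close to the minimum, with no guarantee of the exact one; which is precisely the "nature doesn't quite solve it" point above.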

For that matter, there are still people who think we can build analog computers based on dynamical systems theory that solve interestingly large NP-hard and other difficult problems. This is an old field; to my mind it has produced more interesting results than anything in “quantum information theory.” I had a couple of papers brewing on applying what I know about dynamical systems theory to quantum algorithms back in the early 00s; then I got an honest job and gave it up (applying dynamics to time series pays better). The nice thing about dynamical systems theory applied to other things is that it is a grown-up branch of physics, based on simple geometric and topological concepts, and it has tremendous explanatory power. Most of classical physics is just dynamics and counting, and that’s most of the physics you see around you. But still, we have ideas from complexity theory that get mapped onto problems in physics. I’ve mentioned the Ford Paradox before, but there are others.

An interesting example of complexity theory being used in physics: back in 2014, Arkady Bolotin pointed out that P \neq NP might explain… Schroedinger’s cat. The idea being that you get a computational explosion when you try to solve Schroedinger for a macroscopic object, a la NP-hard. Arkady leaves out QMA, BQP and other quantum complexity classes, for explicit lack of real-world quantum computers, which is exactly as it should be.

The argument goes something like this: you can map quantum problems onto the Ising model, which is NP-complete. It’s possible that somehow a quantum calculation doesn’t map cleanly onto an Ising model (he says as much), but certainly it sometimes does, and that’s probably enough to make the argument hold together. So, solving the Schroedinger equation for a macroscopic object can’t be done in polynomial time, otherwise P = NP. Furthermore, if you tried to solve Schroedinger over the course of a year on something like an Avogadro’s number worth of objects, each one mapping to a Hilbert space, you’d have to do each computation in something like 10^{-3*10^{23}} seconds, which is, as things have it, pretty small. Ridonculously smaller than the Planck time. It’s a hand-wavey argument, essentially saying macroscopic quantum objects are impossible, because if they weren’t, we’d have a proof that P = NP. Also that Schroedinger on macroscopic or otherwise complex objects (including quantum computers: objects which have exponential numbers of internal quantum states) is meaningless. This paper is cited by only 5 papers, and really engaged with by none of them. Mind you, it is somewhat hand-wavey, but then, you have entire fields (aka “quantum information theory”) which are nothing but hand waving.
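The back-of-the-envelope arithmetic behind that number is easy to reproduce. Assuming ~6×10^23 two-state constituents and one year of wall-clock time (my rounding, not Bolotin’s exact constants, so the exponent differs a bit from his), the time budget per amplitude comes out absurdly far below the Planck time:

```python
import math

seconds_per_year = 3.156e7
N = 6.022e23                        # Avogadro-scale count of two-state objects
# Hilbert-space dimension for N two-state systems is 2**N; far too big to
# represent directly, so work with base-10 logarithms throughout.
log10_dim = N * math.log10(2)       # roughly 1.8e23 decimal digits of dimension
# time available per amplitude if the whole job must finish in one year:
log10_t = math.log10(seconds_per_year) - log10_dim
print(log10_t)
# compare with the Planck time, ~5.4e-44 s:
print(log10_t < math.log10(5.4e-44))
```

Each “operation” would get about 10^(-1.8×10^23) seconds, which is the same flavor of absurdity as the 10^{-3*10^{23}} quoted above.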

Bolotin has a couple more papers dealing with this idea in broad strokes. Another paper starts off with one of those imaginary quantum computers and works backwards to “well, what if Nature says they don’t work.” Blasphemy, I know, what with all those amazing quantum computing startups breaking public key cryptography… any day now. Like many blasphemies, taking it seriously for a few moments is probably worth the time.

Bolotin sets up a situation where we treat a Stern-Gerlach experiment, modeling both the detector and the funny, obviously quantum stuff together. Then we think about actually doing this calculation using some kind of computer. Because of the many-body nature of such a simulation involving a macroscopic detector, we’re back to something that is NP-hard to simulate. But nature obviously gives us an answer when Stern and Gerlach perform their experiment; it doesn’t look NP-hard when nature does it, which implies P=NP, which will make everyone sad. This all goes back to measurement problems in quantum mechanics: we never try to model the detector and the Stern-Gerlach experiment using the Schroedinger equation or other formulations of quantum mechanics. We model the obviously quantum thing by itself, and assume that the detector system is somehow non-quantum, a la Copenhagen.

Stern-Gerlach; you need to model the detector with the silver atoms in a fully quantum world

In another paper he uses the run-time complexity of solutions to the Schroedinger equation as an algorithm to justify the quantum/classical divide. The argument is much as above: mapping into Ising and saying “look at this snarl.” This one is more satisfying, as it gives a sort of natural way of justifying the Copenhagen interpretation, which is the philosophical approach which simply assumes the classical limit rather than demonstrating from quantum principles that there must be one. It would be more satisfying if there were a first-principles way of pointing to the level of internal degrees of freedom where the Schroedinger equation turns into Hamilton-Jacobi… because of complexity theory. Maybe this is possible to do, but that step hasn’t been taken yet. Certainly there must be some such limit: we observe quantum mechanical behavior all the time in small and simple systems. Yet we never observe macroscopic objects doing quantum things, or even objects with large scale entanglement (aka something like a quantum computer or other mesoscopic system).
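For reference, the limit being gestured at is the standard WKB one: write the wavefunction as an amplitude times a phase and send \hbar \to 0, and Schroedinger turns into Hamilton-Jacobi. What’s missing is a complexity-theoretic criterion for *when* this kicks in, not the limit itself.

```latex
% WKB ansatz: substitute \psi = A\,e^{iS/\hbar} into
%   i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi.
% Dropping the O(\hbar^2) quantum-potential term, the phase S obeys
% the classical Hamilton-Jacobi equation:
\partial_t S + \frac{(\nabla S)^2}{2m} + V = 0
% while at O(\hbar) the density \rho = A^2 obeys a continuity equation:
\partial_t \rho + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0
```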


If we assume this overall argument is true, which it probably is, there must be something like a “quantum break-time” for complex objects. To refresh your memory, the quantum break-time in dynamically complex systems (kicked rotors, double pendulums) is the observation that simple quantum objects whose classical dynamics are chaotic behave classically for short timescales. It’s almost as if the universe itself can’t present us with the quantum solution to the classically chaotic Schroedinger equation. But eventually the universe catches up, and the spectral density looks like the eigenvalues of a random matrix, as it should. It’s a creepy effect; like the universe “smells” the classical solution for small periods of time, then smears it out into a quantum system when it thinks about it for long enough.
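The classical half of that story is easy to see numerically. Here is a sketch (standard textbook material, nothing specific to Bolotin) of the Chirikov standard map, which is the kick-to-kick dynamics of the classical kicked rotor, showing the sensitive dependence on initial conditions that makes the quantum version interesting:

```python
import math

def standard_map(theta, p, K, steps):
    """Chirikov standard map: the classical kicked rotor, kick by kick.
    For kick strength K well above ~1 the motion is chaotic."""
    traj = []
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        traj.append((theta, p))
    return traj

# Two initial conditions differing by 1e-9, kicked 30 times at K = 5:
a = standard_map(1.0, 0.5, 5.0, 30)
b = standard_map(1.0 + 1e-9, 0.5, 5.0, 30)
# in the chaotic regime the tiny difference is amplified kick by kick
gaps = [abs(x[0] - y[0]) for x, y in zip(a, b)]
print(gaps[0], gaps[-1])
```

That exponential blow-up of a 10^{-9} difference is exactly the classical complexity the break-time argument is about: the quantum evolution shadows this chaotic motion only up to the break-time.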

Bolotin’s ideas lead one to the thought that for sufficiently internally complex systems that seem like they should be quantum, the quantum break-time is infinite. Basically the universe can never present the result of a quantum simulation of a very complex system: the proverbial Pentium of the Universe running the calculation is insufficient. If you have something too complex and messy you can’t get the quantum answer, even if you wait around for a long time. Since such a system is computationally very complex, if the universe can be thought of as a sort of analog computer, the quantum calculation can’t be done before the end of the universe. The universe just chumps out and gives you the simple classical mechanics solution which any schoolboy could present you with from fiddling with Lagrangian mechanics. If the complex system can be thought of as quantum, it can only be thought of as quantum on some t \to \infty style timescale that is longer than the age of the universe.

One would have to get stricter with definitions of “complexity” to run this idea to the ground, but entanglement may be sufficient. We’ve done all manner of experiments with two entangled photons, or a couple of entangled nuclear spin states. I’m not aware of people having done such things with hundreds of entangled nuclear spin states, or thousands. You can entangle them so it looks like a couple of collective spin states interacting with the magnetic field, but not the sort of all-to-all entanglement which would be required for something like a quantum computer. Quantum computing researchers will eventually have to make very entangled machines which do all kinds of impossible things; there may be a quantum optics or NMR experiment which establishes a quantum break-time as a function of the number of entangled, otherwise independent systems. Joe Ford suggested building smaller and smaller double pendulums and seeing what happened. I guess I’m adding the suggestion of more and more entangled things. Maybe there is some interesting thought experiment on doing it with systems where we know how to make entanglement. Photons are the simplest one: your high energy photon is split by a KDP crystal into two entangled photons -do it again and again until maybe it looks like a boring thermalized microwave wave front or something. I dunno; this isn’t my day job. Just some thoughts on reading the papers.
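One way to see why “hundreds of entangled spins” is such a different beast from “a couple”: the resources needed just to write the state down. A toy calculation (mine, using the usual 16 bytes per complex amplitude):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold a generic (all-to-all entangled) n-qubit state:
    2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

def product_state_bytes(n_qubits, bytes_per_amplitude=16):
    """An unentangled product state factorizes: 2 amplitudes per qubit."""
    return 2 * n_qubits * bytes_per_amplitude

for n in (2, 50, 300):
    print(n, product_state_bytes(n), statevector_bytes(n))
```

By around 300 all-to-all entangled qubits, the generic state has more amplitudes than there are atoms in the observable universe; that is the sense in which the universe would have to be a preposterously large computer to “run” such an object.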

This set of Bolotin ideas is pretty clever, and it’s a shame more people haven’t looked into them. I mean, I am a mere laborer in the quantitative fields, a twiddler of bits; I might have missed something important here, but it sure seems viable and interesting to me. At least viable and interesting enough to respond to, which seems to have never happened. When I consider the number of shitty ideas popularized by people that employ PR firms to get their “science” in front of eyeballs, it makes me a little sad. Anyway, maybe someone will tell me why this is wrong.

The Fifth Generation Computing project

Posted in non-standard computer architectures by Scott Locklin on July 25, 2019

This article by Dominic Connor  reminds me of that marvelous artifact of the second AI winter (the one everyone talks about), “Fifth generation computing.”  I was not fully sentient at the time, though I was alive, and remember reading about “Fifth generation computing” in the popular science magazines.

It was 1982. Let’s not rely on vague recollections of what happened that year; here are some familiar things which happened in 1982: the Tylenol scare, the mandated breakup of the Bell System and the beginning of the hard fade of Bell Labs, the Falklands war, crazy blizzards, the first artificial heart installed, Sun Microsystems founded, the Commodore 64 released. People were starting to talk about personal robotics; Nolan Bushnell (Atari founder) started a personal robot company. The IBM PC had been released the previous year; by mid-year they had sold 200,000 of them, and MS-DOS 1.1 had been released. The Intel 80286 came out earlier in the year and was one of the first microprocessors with protected memory and hardware support for multitasking. The Thinking Machines company, attempting to do a novel form of massively parallel computing (probably indirectly in response to the 5th-gen “threat”), would be founded in 1983.

Contemporary technology

The “AI” revolution was well underway at the time; expert system shells were actually deployed and used by businesses; Xcon, Symbolics, the Lisp Machine guys were exciting startups. Cyc, a sort of ultimate expert systems shell, would be founded a few years later. The hype train for this stuff was even more lurid than it is now; you can go back and look at old computer and finance magazines for some of the flavor of it. If you want to read about the actual tech they were hyping as bringing MUH SWINGULARITY, go read Norvig’s PAIP book. It was basically that stuff, and things that look like Mathematica. Wolfram is really the only 80s “AI” company that survived, mostly by copying 70s-era “AI” symbolic algebra systems and re-implementing a big part of Lisp in “modern” C++.

Japan was dominating the industrial might of the United States at the time in a way completely unprecedented in American history. People were terrified; we beat those little guys in WW-2 (a mere 37 years earlier) and now they were kicking our ass at automotive technology and consumer electronics. The Japanese, triumphant, wanted to own the next computer revolution, which was still a solidly American achievement in 1982. They took all the hyped technology of the time -AI, massive parallelism, databases, improved lithography, Prolog-like languages- and hoped that by throwing it all together and tossing lots of those manufacturing-acquired dollars at the problem, they’d get the very first sentient machine.

1) The fifth generation computers will use super large scale integrated chips (possibly in a non Von-Neumann architecture).
2) They will have artificial intelligence.
3) They will be able to recognize images and graphs.
4) Fifth generation computers aim to be able to solve highly complex problems, including decision making and logical reasoning.
5) They will be able to use more than one CPU for faster processing speed.
6) Fifth generation computers are intended to work with natural language.

Effectively the ambition of Fifth generation computers was to build the computers featured in Star Trek; ones that were semi-sentient, and that you could talk to in a fairly conversational way.


People were terrified. While I wasn’t even a teenager yet, I remember some of this terror. The end of the free market! We’d all be Japanese slaves! The end of industrial society! DARPA dumped a billion 1980s dollars into a project called the Strategic Computing Initiative in an attempt to counter this (amusingly, one of the focuses was … autonomous vehicles -things which are still obviously over the rainbow). Most of the US semiconductor industry and mainframe vendors began an expensive collaboration to beat those sinister Japanese and prevent an AI Pearl Harbor. It was called the Microelectronics and Computer Technology Corporation (MCC for some reason), and it’s definitely ripe for some history of technology grad student to write a dissertation on it beyond the Wikipedia entry. The Japanese 5th gen juggernaut was such a big deal, the British (who were still tech players back then) had their own copy of this nonsense, called the “Alvey Programme” -they dumped about a billion pounds in today’s money into it. And not to be left out, the proto-EU also had their own version of this called ESPRIT, with similar investment levels.


Prolog was of course the programming language of this future technology. Prolog was sort of the deep learning of its day; using constraint programming, databases (Prolog is still a somewhat interesting, if over-flexible, database query language), parallel constructs and expert system shell type technology, Prolog was supposed to achieve sentience. That hasn’t worked out real well for Prolog over the years: because of the nature of the language it is highly non-deterministic, and it’s fairly easy to pose NP-hard problems to Prolog. Of course in such cases, no matter how great the parallel model is, it still isn’t going to answer your questions.
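The failure mode is easy to demonstrate in miniature. A Prolog program is essentially backtracking search over choice points; here is the same idea sketched in Python (a toy model of Prolog-style execution, not an actual Prolog embedding), posing subset-sum, which is NP-complete:

```python
def subset_sum(items, target, chosen=()):
    """Prolog-style nondeterministic search by backtracking.
    Easy to pose; but the search tree is 2**len(items) in the worst
    case, and no parallel machine architecture changes that exponent."""
    if target == 0:
        return chosen                 # success: a satisfying "binding"
    if not items:
        return None                   # failure: backtrack
    head, *tail = items
    # choice point: take head, or don't (Prolog would try both clauses)
    return (subset_sum(tail, target - head, chosen + (head,))
            or subset_sum(tail, target, chosen))

print(subset_sum([12, 7, 5, 3, 2], 10))   # → (7, 3)
```

Parallelism buys a constant factor on those 2^n choice points; it does not touch the exponent, which is why “massively parallel Prolog machine” was never going to mean “fast.”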


One of the hilarious things about 5th generation computers is how certain people were about all this. The basic approach seemed completely unquestioned. They really thought all you had to do to build the future was take the latest fashionable ideas, stir them together, and presto, you have brain-in-a-can AI. There was no self respecting computer scientist who would stand up and say “hey, maybe massive parallelism doesn’t map well onto constraint solvers, and perhaps some of these ambitions are things we have no idea how to solve.” [1] This is one of the first times I can think of an allegedly rigorous academic discipline collectively acting like overt whores, salivating at the prospects of a few bucks to support their “research.” Heck, that’s insulting to actual whores, who at least provide a service.



Of course, pretty much nothing in 5th generation computing turned out to be important, useful, or even sane. Well, I suppose VLSI technology was all right, but it was going to be used anyway, and DBMS continue to be of some utility, but the rest of it was preposterous, ridiculous wankery and horse-puckey. For example: somehow they thought optical databases would allow for image search. It’s not clear what they had in mind here, if anything; really it sounds like bureaucrats making shit up about a technology they didn’t understand. For more examples:

“The objective stated (Moto-oka 1982 p.49) is the development of architectures with particular attention to the memory hierarchy to handle set operations using relational algebra as a basis for database systems. “
“The objective stated (Moto-oka 1982 p.53) is the development of a distributed function architecture giving high efficiency, high reliability, simple construction, ease of use, and adaptable to future technologies and different system levels.”
“The targets are: experimental, relational database machine with a capacity of 100 GB and 1,000 transactions a second; practical, 1,000 GB and 10,000 transactions a second. The implementation relational database machine using dataflow techniques is covered in section 8.3.3.”
“The objective stated (Moto-oka 1982 p.57) is the development of a system to input and output characters, speech, pictures and images and interact intelligently with the user. The character input/output targets are: interim, 3,000-4,000 Chinese characters in four to five typefaces; final, speech input of characters, and translation between kana and kanji characters. The picture input/output targets are: interim, input tablet 5,000 by 5,000 to 10,000 by 10,000 resolution elements; final, intelligent processing of graphic input. The speech input/output targets are: interim, identify 500-1,000 words; final, intelligent processing of speech input. It is also intended to integrate these facilities into multi-modal personal computer terminals.”
“The Fifth Generation plan is difficult and will require much innovation; but of what sort? In truth, it is more engineering than science (Feigenbaum & McCorduck 1983 p 124). Though solutions to the technological problems posed by the plan may be hard to achieve, paths to possible solutions abound.” (where have I heard this before? -SL)

The old books are filled with gorp like this. None of it really means anything.  It’s just ridiculous wish fulfillment and word salad.  Like this dumb-ass diagram:


There are probably lessons to be learned here. 5thGen was exclusively a top down approach. I have no idea who the Japanese guys are who proposed this mess; it’s possible they were respectable scientists of their day. They deserve their subsequent obscurity; perhaps they fell on their swords. Or perhaps they moved to the US to found some academic cult; the US is always in the market for technological wowzers who never produce anything. Such people only seem to thrive in the Anglosphere, catering to the national religious delusion of whiggery.

Japan isn’t to be blamed for attempting this: most of their big successes up to that point were top-down industrial policies designed to help the Zaibatsus achieve national goals. The problem here was that there was no Japanese computer Zaibatsu worth two shits with the proverbial skin in the game -it was all upside for the clowns who came up with this, no downside. Much like the concepts of nanotech 10 years ago, quantum computing or autonomous automobiles now; it is a “Nasruddin’s Donkey bet” (aka scroll to bottom here) without the 10 year death penalty for failure.

Japan was effectively taken for a ride by mountebanks. So was the rest of the world. The only people who benefited from it were quasi-academic computer scientist types who got paid to do wanking they found interesting at the time. Sound familiar to anyone? Generally speaking, top-down approaches to ridiculously ambitious projects, where overlords of dubious competence and motivation dictate the R&D direction, don’t work so well; particularly where there is software involved. It only works if you’re trying to solve a problem that you can decompose into specific tasks with milestones, like the moon shot or the Manhattan project, both of which had comparatively low-risk paths to success. Saying you’re going to build an intelligent talking computer in 1982 or 2019 is much like saying you’re going to fly to the moon or build a web browser in 1492. There is no path from that present to the desired outcome. Actual “AI,” from present perspectives, just as it was in 1982, is basically magic nobody knows how to achieve.

Another takeaway is that many of the actual problems they wanted to solve were eventually solved in a more incremental way, while generating profits. One of the reasons they were trying to do this was to onboard many more people than had used computers before. The idea was that instead of hiring mathematically literate programmers to build models, if you could have machines smart enough to talk to people and read the charts and things an ordinary end user might bring to the computer with questions, more people could use computers, amplifying productivity. Cheap networked workstations with GUIs turned out to solve that in a much simpler way: you make a GUI, give the non-spergs some training, and then ordinary dumbasses can harness some of the power of the computer. This still requires mentats to write GUI interfaces for the dumbasses (at least before our glorious present of shitty Electron front ends for everything), but that sort of “bottom up, small expenditures, train the human” idea has been generating trillions in value since then.

The shrew-like, networked, GUI-equipped microcomputers of Apple were released as products only two years after this central-planning dinosaur was postulated. Eventually, decades later, someone built a mechanical golem made of microcomputers which achieves a lot of the goals of fifth generation computing, with independent GUI front ends. I’m sure the Japanese researchers of the time would have been shocked to know it came from ordinary commodity microcomputers running C++ and using sorts and hash tables rather than non-Von-Neumann Prolog supercomputers. That’s how most progress in engineering happens though: incrementally[2]. Leave the moon shots to actual scientists (as opposed to “computer scientists”) who know what they’re talking about.


1988 article on an underwhelming visit to Japan.

1992 article on the failure of this program in the NYT.


[1] Some years later, 5 honest men discussed the AI winter upon them; yet the projects inexorably rolled forward. This is an amazing historical document; at some point scholars will find such a thing in our present day -maybe the conversation has already happened. https://www.aaai.org/ojs/index.php/aimagazine/article/view/494 … or PDF link here.

[2] Timely Nick Szabo piece on technological frontiersmanship: https://unenumerated.blogspot.com/2006/10/how-to-succeed-or-fail-on-frontier.html

Quantum computing as a field is obvious bullshit

Posted in non-standard computer architectures, physics by Scott Locklin on January 15, 2019

I remember spotting the quantum computing trend when I was  a larval physics nerdling. I figured maybe I could get in on the chuckwagon if my dissertation project didn’t work out in a big way (it didn’t). I managed to get myself invited to a Gordon conference, and have giant leather bound notebooks filled with theoretical scribblings containing material for 2-3 papers in them. I wasn’t real confident in my results, and I couldn’t figure out a way to turn them into something practical involving matter, so I happily matriculated to better things in the world of business.

When I say Quantum Computing is a bullshit field, I don’t mean everything in the field is bullshit, though to first order, this appears to be approximately true. I don’t have a mathematical proof that Quantum Computing isn’t at least theoretically possible.  I also do not have a mathematical proof that we can or can’t make the artificial bacteria of K. Eric Drexler’s nanotech fantasies. Yet, I know both fields are bullshit. Both fields involve forming new kinds of matter that we haven’t the slightest idea how to construct. Neither field has a sane ‘first step’ to make their large claims true.

Drexler and the “nanotechnologists” who followed him assume that because we know about the Schroedinger equation, we can make artificial forms of life out of arbitrary forms of matter. This is nonsense; nobody understands enough about matter in detail or life in particular to do this. There are also reasonable thermodynamic, chemical and physical arguments against this sort of thing. I have opined on this at length, and at this point, I am so obviously correct on the nanotech front, there is nobody left to argue with me. A generation of people who probably would have made first-rate chemists or materials scientists wasted their early, creative careers following this over-hyped and completely worthless woo. Billions of dollars squandered down a rat hole of rubbish and wishful thinking. Legal wankers wrote legal reviews of regulatory regimes to protect us from this nonexistent technology. We even had congressional hearings on this nonsense topic back in 2003 and again in 2005 (and probably some other times I forgot about). Russians built a nanotech park to cash in on the nanopocalyptic trillion dollar nanotech economy which was supposed to happen by now.

Similarly, “quantum computing” enthusiasts expect you to overlook the fact that they haven’t a clue as to how to build and manipulate the quantum coherent forms of matter necessary to achieve quantum computation. A quantum computer capable of truly factoring the number 21 is missing in action. In fact, the factoring of the number 15 into 3 and 5 is a bit of a parlour trick, as they design the experiment while knowing the answer, thus leaving out the gates required if we didn’t know how to factor 15. The actual number of gates needed to factor an n-bit number is 72 * n^3; so for 15, which is a 4-bit number, that’s 4608 gates; not happening any time soon.
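That arithmetic is trivial to check, assuming the 72·n³ figure quoted above:

```python
def shor_gate_count(n_bits):
    # the 72 * n^3 scaling quoted above for factoring an n-bit number
    return 72 * n_bits ** 3

print(shor_gate_count(4))     # 15 is a 4-bit number: 4608 gates
print(shor_gate_count(5))     # 21 is a 5-bit number: 9000 gates
print(shor_gate_count(2048))  # an RSA-sized modulus: ~6.2e11 gates
```

So going from the parlour-trick 15 to an RSA-sized modulus means roughly eight orders of magnitude more coherent gates, all executed before decoherence sets in.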

It’s been almost 25 years since Peter Shor had his big idea, and we are no closer to factoring large numbers than we were … 15 years ago when we were also able to kinda sorta vaguely factor the number 15 using NMR ‘quantum computers.’

I had this conversation talking with a pal at … a nice restaurant near one of America’s great centers of learning. Our waiter was amazed and shared with us the fact that he had done a Ph.D. thesis on the subject of quantum computing. My pal was convinced by this that my skepticism is justified; in fact he accused me of arranging this. I didn’t, but am motivated to write to prevent future Ivy League Ph.D. level talent having to make a living by bringing a couple of finance nerds their steaks.

In 2010, I laid out an argument against quantum computing as a field based on the fact that no observable progress has taken place. That argument still stands. No observable progress has taken place. However, 8 years is a very long time. Ph.D. dissertations have been achieved, and many of these people have gone on to careers … some of which involve bringing people like me delicious steaks. Hundreds of quantum computing charlatans achieved tenure in that period of time. According to Google Scholar, a half million papers have been written on the subject since then.


There are now three major .com firms funding quantum computing efforts: IBM, Google and Microsoft. There is at least one YC/Andreessen-backed startup I know of. Of course there is also D-Wave, who have somehow managed to exist since 1999 -almost 20 years- without actually delivering something usefully quantum or computing. How many millions have been flushed down the toilet by these turds? How many millions which could have been used building, say, ordinary analog or stochastic computers which do useful things? None of these have delivered a useful quantum computer with even one usefully error-corrected qubit. I suppose I shed not too many tears for the money spent on these efforts; in my ideal world, several companies on that list would be broken up or forced to fund Bell Labs moonshot efforts anyway, and most venture capitalists are frauds who deserve to be parted with their money. I do feel sad for the number of young people taken in by this quackery. You’re better off reading ancient Greek than studying a ‘technical’ subject that eventually involves bringing a public school kid like me a steak. Hell, you are better off training to become an exorcist or a feng shui practitioner than getting a Ph.D. in ‘quantum computing.’

I am an empiricist and a phenomenologist. I consider the lack of one error corrected qubit in the history of the human race to be adequate evidence that this is not a serious enough field to justify using the word ‘field.’ Most of it is frankly, a scam. Plenty of time to collect tenure and accolades before people realize this isn’t normative science or much of anything reasonable.

As I said last year:

All you need do is look at history: people had working (digital) computers before Von Neumann and other theorists ever noticed them. We literally have thousands of “engineers” and “scientists” writing software and doing “research” on a machine that nobody knows how to build. People dedicate their careers to a subject which doesn’t exist in the corporeal world. There isn’t a word for this type of intellectual flatulence other than the overloaded term “fraud,” but there should be.

“Computer scientists” have gotten involved in this chuckwagon. They have added approximately nothing to our knowledge of the subject, and as far as I can tell, their educational backgrounds preclude them from ever doing so. “Computer scientists” haven’t had proper didactics in quantum mechanics, and virtually none of them have ever done anything as practical as fiddling with an op-amp, building an AM radio, or noticing how noise works in the corporeal world.

Such towering sperg-lords actually think that the only problems with quantum computing are engineering problems. When I read things like this, I can hear them muttering mere engineering problems.  Let’s say, for the sake of argument this were true. The SR-71 was technically a mere engineering problem after the Bernoulli effect was explicated in 1738. Would it be reasonable to have a hundred or a thousand people writing flight plans for the SR-71  as a profession in 1760? No.

A reasonable thing for a 1760s scientist to do is invent materials making a heavier than air craft possible. Maybe fool around with kites and steam engines. And even then … there needed to be several important breakthroughs in metallurgy (titanium wasn’t discovered until 1791), mining, a functioning petrochemical industry, formalized and practical thermodynamics, a unified field theory of electromagnetism, chemistry, optics, manufacturing and arguably quantum mechanics, information theory, operations research and a whole bunch of other stuff which was unimaginable in the 1760s. In fact, of course the SR-71 itself was completely unimaginable back then. That’s the point.


it’s just engineering!


Physicists used to be serious and bloody minded people who understood reality by doing experiments. Somehow this sort of bloody minded seriousness has faded out into a tower of wanking theorists who only occasionally have anything to do with actual matter. I trace the disease to the rise of the “meritocracy” out of cow colleges in the 1960s. The post WW-2 neoliberal idea was that geniuses like Einstein could be mass produced out of peasants using agricultural schools. The reality is, the peasants are still peasants, and the total number of Einsteins in the world, or even merely serious thinkers about physics is probably something like a fixed number. It’s really easy, though, to create a bunch of crackpot narcissists who have the egos of Einstein without the exceptional work output. All you need to do there is teach them how to do some impressive looking mathematical Cargo Cult science, and keep their “results” away from any practical men doing experiments.

The manufacture of a large caste of such boobs has made any real progress in physics impossible without killing off a few generations of them. The vast, looming, important questions of physics, the kind a once-in-a-lifetime physicist might answer, haven’t budged since the early 60s. John Horgan wrote a book observing that science (physics in particular) has made pretty much no observable forward progress since the time of cow collitches. He also noticed that instead of making progress down fruitful lanes or improving detailed knowledge of important areas, most physicists develop enthusiasms for the latest non-experimental wank fest: complexity theory, network theory, noodle theory. He thinks it’s because it’s too difficult to make further progress. I think it’s because the craft is now overrun with corrupt welfare queens who are play-acting cargo cultists.

Physicists worthy of the name are freebooters; Vikings of the Mind, intellectual adventurers who torture nature into giving up its secrets and risk their reputations in the real world. Modern physicists are … careerist ding dongs who grub out a meagre living sucking on the government teat, working their social networks, giving their friends reach-arounds, and doing PR to make themselves look like they’re working on something important. It is terrible and sad what happened to the king of sciences. While there are honest and productive physicists, the mainstream is lost, possibly forever, to a caste of grifters and apple-polishing dingbats.

But when a subject which claims to be a technology lacks even the rudiments of experiment which may one day make it into a technology, you can know with absolute certainty that this ‘technology’ is total nonsense. Quantum computing is less physical than the engineering of interstellar spacecraft; we at least have plausible physical mechanisms to achieve interstellar space flight.

We’re reaching peak quantum computing hyperbole. According to a dimwit at the Atlantic, quantum computing will end free will. According to another one at Forbes, “the quantum computing apocalypse is imminent.” Rachel Gutman and Shlomo Dolev know about as much about quantum computing as I do about 12th century Talmudic studies, which is to say, absolutely nothing. They, however, think they know smart people who tell them that this is important: they’ve achieved the perfect human informational centipede. This is unquestionably the right time to go short.

Even the National Academy of Sciences has taken note that there might be a problem here. They put together 13 actual quantum computing experts who poured cold water on all the hype. They wrote a 200-page review article on the topic, pointing out that even with the most optimistic projections, RSA is safe for another couple of decades, and that there are huge gaps in our knowledge of how to build anything usefully quantum. And of course, they also pointed out that if QC doesn’t start solving some problems which are interesting to … somebody, the funding is very likely to dry up. Ha, ha; yes, I’ll have some pepper on that steak.


There are several reasonable arguments against any quantum computing of the interesting kind (i.e. the kind that can demonstrate supremacy on a useful problem) ever having a physical embodiment.

One of the better arguments is akin to that against P=NP. No, not the argument that “if there were such a proof someone would have come up with it by now,” though that one is also in full effect. In principle, classical analog computers can solve NP-hard problems in P time. You can google around on the “downhill principle” or look at the work on analog super-Turing architectures by people like Hava Siegelmann. It’s old stuff, and most sane people realize this isn’t really physical, because matter isn’t infinitely continuous. If you could encode a real/continuous number into the physical world somehow, P=NP using a protractor or soap bubble. For whatever reason, most complexity theorists understand this, and know that protractor P=NP isn’t physical. Somehow quantum computing gets a pass, I guess because they’ve never attempted to measure anything in the physical world beyond the complexity of using a protractor.

In order to build a quantum computer, you need to control each qubit, which is a continuous value, not a binary value, in its initial state and all subsequent states precisely enough to run the calculation backwards. When people do their calculations ‘proving’ the efficiency of quantum computers, this is treated as an engineering detail. There are strong assertions by numerous people that quantum error correction (which, I will remind everyone, hasn’t been usefully implemented in actual matter by anyone; that’s the only kind of proof that matters here) basically pushes the analog requirement for perfection to the initialization step, or subsumes it in some other place where it can’t exist. Let’s assume for the moment that this isn’t the case.
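A toy sketch of why this is treated far too casually (one qubit, purely illustrative numbers of my choosing): a fixed, systematic per-gate over-rotation adds coherently, so fidelity dies like cos^2(N*eps) instead of degrading gracefully.

```python
import math

def fidelity_after_gates(n_gates, angle_error):
    # A fixed (systematic) over-rotation per gate adds coherently:
    # after n gates the state is off by n*angle_error, so overlap
    # with the ideal state falls as cos(n * angle_error).
    return math.cos(n_gates * angle_error) ** 2

# hypothetical 10^-3 radian over-rotation per gate:
for n in (10, 100, 1000):
    print(n, round(fidelity_after_gates(n, 1e-3), 4))
```

At 10^-3 radians per gate, a thousand-gate circuit is already junk; and real circuits want many more gates than that.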

Putting this a different way: for an N-qubit computer, you need to control, transform, and read out 2^N complex (as in complex numbers) amplitudes to a very high degree of precision. Even a classical analog computer with N oscillators, which must be precisely initialized, precisely controlled, transformed, and individually read out, to the point where you could reverse the computation by running the oscillators through it backwards, would be an extremely challenging build. The quantum version is exponentially more difficult.
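For a sense of scale, here is a trivial sketch of what merely *storing* those 2^N amplitudes costs classically, at 16 bytes per complex double (my convention, nothing official):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    # 2^N complex amplitudes; a complex double costs 16 bytes.
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))  # 16 KiB, 16 GiB, 16 PiB
```

Fifty qubits is already past what any data center can hold in RAM, and that is just bookkeeping the state, never mind controlling it.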

Making it even more concrete: if we encode the polarization state of a photon as a qubit, how do we perfectly align the polarizers between two qubits? How do we align them for N qubits? How do we align the polarization direction with the gates? This isn’t some theoretical gobbledygook; when it comes time to build something in physical reality, physical alignments matter, a lot. Ask me how I know. You can go amuse yourself and try to build a simple quantum computer with a couple of hard-coded gates using beamsplitters and polarization states of photons. It’s known to be perfectly possible and even has a rather sad wikipedia page. I can make quantum polarization-entangled photons all day; any fool with a laser and a KDP crystal can do this, yet somehow nobody bothers sticking some beamsplitters on a breadboard and making a quantum computer. How come? Well, one guy recently did it: got two whole qubits. You can go read about this *cough* promising new idea here, or if you are someone who doesn’t understand matter, here.

FWIIW, in the early days of this idea, it was noticed that the growth in the number of components needed was exponential in the number of qubits. This shouldn’t be a surprise: the growth in the number of states in a quantum computer is also exponential in the number of qubits. That’s both the ‘interesting thing’ and ‘the problem.’ The ‘interesting thing’ because an exponential number of states, if possible to trivially manipulate, allows for a large speedup in calculations. ‘The problem’ because manipulating an exponential number of states is not something anyone really knows how to do.

The problem doesn’t go away if you use spins of electrons or nuclei: which direction is spin up? Will all the physical spins be perfectly aligned in the “up” direction? Will the measurement devices agree on spin-up? Do all the gates agree on spin-up? In the world of matter, of course they won’t; you will have a projection. That projection is, in effect, correlated noise, and correlated noise destroys quantum computation in an irrecoverable way. Even the quantum error correction people understand this, though for some reason people don’t worry about it too much. If they are honest in their lack of worry, it’s because they’ve never fooled around with things like beamsplitters. Hey, making the noise uncorrelated; that’s just an engineering problem, right? Sort of like making artificial life out of silicon, controlled nuclear fusion power, or Bussard ramjets is “just an engineering problem.”
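The correlated-versus-uncorrelated distinction is easy to demonstrate numerically. A toy single-qubit sketch (illustrative numbers), spending the same per-gate error budget two ways:

```python
import math, random

def mean_error(n_gates, eps, correlated, trials=500, seed=0):
    # Correlated (systematic) errors drift coherently: angle ~ n*eps.
    # Uncorrelated errors random-walk: angle ~ sqrt(n)*eps.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        angle = 0.0
        for _ in range(n_gates):
            angle += eps if correlated else rng.gauss(0.0, eps)
        total += math.sin(angle) ** 2  # resulting error probability
    return total / trials

print(mean_error(1000, 1e-3, correlated=True))   # ~0.71: ruined
print(mean_error(1000, 1e-3, correlated=False))  # ~0.001: correctable
```

Same per-gate error, three orders of magnitude difference in damage; this is why the error correction theorems assume the noise is uncorrelated, and why that assumption matters so much.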

engineering problem; easier than quantum computers


Of course at some point someone will mention quantum error correction, which allows us to not have to precisely measure and transform everything. The most optimistic estimate of the required precision is something like 10^-5 per qubit/gate operation for quantum error corrected computers. This is a fairly high degree of precision. Going back to my polarization angle example: this implies all the polarizers, optical elements, and gates in a complex system are aligned to 0.036 degrees. I mean, I know how to align a couple of beamsplitters and polarizers to 628 microradians, but I’m not sure I can align a few hundred thousand of them AND Pockels cells and mirrors to within 628 microradians of each other. Now imagine something with a realistic number of qubits for factoring large numbers; maybe 10,000 qubits, and a CPU’s worth of gates, say 10^10 or so (an underestimate of the number needed for cracking RSA, which, mind you, is the only reason we’re having this conversation). I suppose it is possible, but I encourage any budding quantum wank^H^H^H algorithmist out there to have a go at aligning 3-4 optical elements to within this precision. There is no time limit, unless you die first, in which case “time’s up!”
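For the suspicious, the degrees-to-microradians conversion above is one line of Python; it only checks the numbers already quoted, nothing more:

```python
import math

# 0.036 degrees, the alignment tolerance quoted above, in microradians:
tol_urad = math.radians(0.036) * 1e6
print(round(tol_urad))  # 628
```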

This is just the most obvious engineering limitation for making sure we don’t have obviously correlated noise propagating through our quantum computer. We must also be able to prepare the initial states to within this sort of precision. Then we need to be able to measure the final states to within this sort of precision. And we have to be able to do arbitrary unitary transformations on all the qubits.

Just to interrupt you with some basic facts: the number of states we’re talking about here for a 4000 qubit computer is ~2^4000! That’s 10^1200 or so continuous variables we have to manipulate to at least one part in ten thousand. The number of protons in the universe is about 10^80. This is why a quantum computer is so powerful: you’re theoretically encoding an exponential number of states into the thing. Can anyone actually do this using a physical object? Citations needed; as far as I can tell, nothing like this has ever been done in the history of the human race. Again, interstellar space flight seems like a more achievable goal. Even Drexler’s nanotech fantasies have some precedent in the form of actually existing life forms. Yet none of these are coming any time soon either.
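Python’s arbitrary-precision integers will happily confirm the scale of the absurdity:

```python
# Exact, thanks to Python's big integers:
n_states = 2 ** 4000
print(len(str(n_states)))        # 1205 decimal digits, i.e. about 10^1204

protons = 10 ** 80
print(n_states > protons ** 15)  # True: bigger than (10^80)^15 = 10^1200
```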

There are reasons to believe that quantum error correction, too, isn’t even theoretically possible (examples here and here and here; this last one is particularly damning). In addition to the argument above, that the theorists are subsuming some actual continuous number into what is inherently a noisy and non-continuous machine made out of matter, the existence of a quantum error corrected system would mean you can make arbitrarily precise quantum measurements; effectively giving you back your exponentially precise continuous number. If you can do exponentially precise continuous numbers in a non-exponential number of calculations or measurements, you can probably solve very interesting problems on a relatively simple analog computer. Let’s say, a classical one like a Toffoli gate billiard ball computer. Get to work; we know how to make a billiard ball computer work with crabs. This isn’t an example chosen at random; this is the kind of argument allegedly serious people submit for quantum computation involving matter. Hey man, not using crabs is just an engineering problem muh Church Turing warble murble.

Smurfs will come back to me with the press releases of Google and IBM touting their latest 20 bit stacks of whatever. I am not impressed, and I don’t even consider most of these to be quantum computing in the sense that people worry about quantum supremacy and new quantum-proof public key or Zero Knowledge Proof algorithms (which more or less already exist). These cod quantum computing machines are not expanding our knowledge of anything, nor are they building towards anything for a bold new quantum supreme future; they’re not scalable, and many of them are not obviously doing anything quantum or computing.

This entire subject does nothing but eat up lives and waste careers. If I were in charge of science funding, the entire world budget for this nonsense would be below what we allocate for the development of Bussard ramjets, which are also not known to be impossible, and are a lot more cool looking.



As Dyakonov put it in his 2012 paper:

“A somewhat similar story can be traced back to the 13th century when Nasreddin Hodja made a proposal to teach his donkey to read and obtained a 10-year grant from the local Sultan. For his first report he put breadcrumbs between the pages of a big book, and demonstrated the donkey turning the pages with his hoofs. This was a promising first step in the right direction. Nasreddin was a wise but simple man, so when asked by friends how he hopes to accomplish his goal, he answered: “My dear fellows, before ten years are up, either I will die or the Sultan will die. Or else, the donkey will die.”

Had he the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.”

Further reading on the topic:

Dyakonov’s recent IEEE popsci article on the subject (his papers are the best review articles of why all this is silly):


IEEE precis on the NAS report:

https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing (summary: not good)

Amusing blog from 11 years ago noting the utter lack of progress in this subject:


“To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Lets just round that up to an even 5 trillion. Five trillion is a big number.”

Aaronson’s articles of faith (I personally found them literal laffin’ out loud funny, though I am sure he is in perfect earnest):



Optalysys and Optical computing

Posted in non-standard computer architectures by Scott Locklin on August 11, 2014

Years and years ago, when I was writing up my dissertation and thinking about what I was going to do with my life, I gave some thought to non von-Neumann computing architectures. This was more or less the dawn of the quantum computing/quantum information era as a field, when it turned from an obscure idea to a sort of cottage industry one could get tenure in.  I had the idea in my mind that the Grover Algorithm could be done on a classical optical architecture, and wrote many equations in a fat notebook I now keep in my kitchen between a cookbook and a book on Cold War Submarines.  I suppose had I written some more equations and discovered I wasn’t full of baloney, I might have made a nice academic career for myself as a designer of novel computing architectures, rather than the underemployed constructor of mathematical models for businesses and sporadic blogger I have turned into. Be that as it may, my erstwhile interests have equipped me with some modest background in how the future gizmos might work.

While quantum computing appears to be turning into a multi-decade over-hyped boondoggle of a field, there is no reason why optical computers might not become important in the near future. Years before my attempted non von-Neumann punt into the intellectual void, my old boss Dave Snoke handed me a book called “Fundamentals of Photonics” which implied optical computers might one day be very important. The book contained a few chapters sketching outlines of how future photonic computers might work. Armed with this knowledge, and the fact that a new startup called Optalysys is claiming practical optical computers are right around the corner, perhaps these endless hours of wasted time can be converted into a useful blog post.


The idea of optical computing has been around for decades. Some trace it back to Zernike’s phase contrast microscopes. Certainly by the 60s, Vander Lugt [pdf link] and company were thinking about optics in terms of signal processing and computation. It began to be a real technology in the 70s with the invention of “spatial light modulators” (SLM) of various kinds. Historical SLMs have been all kinds of weird things you run into in the optics world: Pockels cells, phototitus converters, Kerr rotation; but the main technology used is the familiar LCD. Interestingly, some MIT researchers have recently come up with LCDs that are capable of very fast switching. This could be the innovation which makes optical computing real.


Things Optalysys isn’t: This certainly isn’t an “all optical computer.” The “all optical computer” is sort of the philosopher’s stone of the optical computing world. If they had accomplished this, they’d certainly claim it, and people like me would be plagued with doubt. It is also not any kind of “quantum computer,” for the same reasons. In fact, they more or less say it isn’t even a digital computer.

What Optalysys is: From the vague descriptions on their website, this is a standard analog optical computing architecture consisting of a high resolution detector, at least one SLM, a lens and/or mirror or two, and an array of laser diodes. Presumably there has been some improvement which has made this a practicable commercial item; the MIT fast LCD might eventually be one of them. This should be seen as a sort of “math co-processor,” rather than a general purpose computer, though it does some important math.


What can it do? Matrix math, basically. Even thinking about this in the most hand-wavey manner, optical computers should be good at image processing. When you break down what is involved in image processing, matrix math and Fourier transforms come to mind. We know how to do matrix math with optical computers. The limiting aspect is how many elements you can encode and modulate quickly. An optical matrix multiplication doodad will do it in O(const) time, up to the limiting size of the matrices that can be encoded in such a machine. This is huge, as matrix multiplication is O(N^3) for square matrices of size N. There is all manner of work done on improving matrix multiplication and other forms of matrix math via statistical methods (subject of a coming blog post); the problem is very important, as O(N^3) is pretty bad on large problems. People who have studied the history of computer science know how important the Fast Fourier Transform was. It reduced the 1-d Fourier transform from O(N^2) operations to O(N log(N)), opening up vast new applications to signal processing techniques. Optical computing can in principle do Fourier transforms in O(const) [pdf link] up to some maximum encoding size. Even better, it can do the 2-dimensional version in the same O(const) time. Even the FFT is O(N^2 log(N)) for two dimensional problems, so this is a potentially significant speedup over conventional computing.
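For the curious, the lens-as-Fourier-transform trick is just the convolution theorem. A sketch of what a 4f optical correlator computes, done numerically with numpy (assuming numpy is handy; the optical version does the two transforms at the speed of light):

```python
import numpy as np

def correlator(image, kernel):
    # A 4f correlator: lens #1 Fourier-transforms the input plane,
    # the SLM multiplies by the filter's transform, lens #2 transforms
    # back. Numerically: circular convolution via the convolution theorem.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

rng = np.random.default_rng(0)
img = rng.random((256, 256))
kernel = np.zeros((256, 256))
kernel[0, 0] = 1.0  # delta function: the identity filter
print(np.allclose(correlator(img, kernel), img))  # True
```

Swap the delta for a template of the thing you are looking for and the bright spots in the output are the matches; that is optical pattern recognition in one shot.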


Most programmers’ eyes have probably glazed over at this point. Who cares, right? You care more than you think. Many of the elementary operations used by programmers have something matrix-related at their cores. Pagerank is an algorithm that has become essential to modern life. It is effectively a very efficient way of finding the most important eigenvector of a very large matrix in O(N) rather than the O(N^3) it takes to find all the eigenvalues and then find the biggest one. It is an underappreciated fact that Brin and Page turned a nice thesis topic in linear algebra into one of the great companies of the world, but this sort of thing demonstrates why linear algebra and matrix math are so important. Other database search algorithms are related to Singular Value Decomposition (SVD). SVD is also used in many scalable forms of data mining and machine learning, and I’m pretty sure it works faster on an optical computer than on an ordinary one. There are also pattern recognition techniques that only make sense on optical computers. Since nobody sells optical computers, it is not clear where they might be useful, but they might very well be helpful for certain kinds of problem; certainly pattern matching on images could use a speedup. The company is shooting for the notoriously computationally intractable field of computational fluid dynamics (CFD) first. CFD involves a lot of matrix math, including 2-d FTs in some realizations.
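Since pagerank keeps coming up: the whole trick is power iteration, i.e. repeated matrix-vector multiplication, which is exactly the operation an optical matrix multiplier would accelerate. A numpy sketch on a made-up three-page web (toy numbers, illustrative only):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    # Power iteration: repeatedly multiply by the damped link matrix.
    # Converges to the leading eigenvector in O(iters * edges) work,
    # never touching the O(N^3) full eigendecomposition.
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1.0          # guard against dangling pages
    trans = adj / out_deg                # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1.0 - damping) / n + damping * (trans.T @ rank)
    return rank

# toy 3-page web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
adj = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
print(pagerank(adj).round(3))  # page 1, with the most in-links, wins
```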

Scale: right now, Optalysys is claiming a fairly humble 40Gflops in their prototype, which is far from exceptional (laptops do about the same in a much more general way). I think a lot of people pooped their pants at the “eventual exabyte processor” claims in their press release. In principle they can scale this technology, but of course, a lot of things are possible “in principle.” The practical details are the most important thing. Their technology demonstrator is supposed to have 500×500 pixels in the SLM element. This is way too small for important problems; they might even be using an overhead projector LCD for the prototype doodad (the 20Hz frame rate kind of implies this might be the case): this is quite possible to do if you know some optics and have a good detector. How to scale a 500×500 element device to something on an exabyte scale is far from clear, but the most obvious way is to add more and finer elements, and run the thing faster. A factor of 2.5 million or more increase sounds challenging, but increasing the frame rate by a factor of a thousand and the number of elements by a factor of a few thousand seems like something that actually could be done on a desktop as they claim.

Problems to look out for:


  1. LCDs: I believe the LCD technology is fast enough to be useful, but lifespan, quality and reliability will be limiting factors. If the fast LCD gizmo only lasts a few million cycles of shining lasers through it, this will be a problem. If the pixels randomly burn out or die, well, you have to know about this, and the more that die, the worse the calculation will be. I guess at this point, I also believe it makes small enough pixels to be useful. On the other hand, if they are using an overhead projector LCD and have no plans to radically upgrade the number of pixels in some way, and the completed machine is fairly large, it probably won’t be improving to Exascales any time soon.
  2. Detectors: I’m guessing there haven’t been optical computers yet mostly because the output detectors have been too noisy for low power lasers. This is a guess; I’m not interested enough in such things to investigate deeply, but it is a pretty good guess based on experience working with things optical. So, if the output detector system isn’t very good, this won’t work very well. It will be very different from writing code on a floating point processor. For one thing, you’ll be using a lot fewer than 64 bits per number. I’m guessing, based on the fast detectors used in streak cameras, this will be something more like 16 bits; maybe fewer. This is fine for a lot of physical problems, assuming it works at 16 bits resolution on a fast time scale. If you have to sit around and integrate for a long time to encode more than 16 bits (or, say, increase the laser power used for more dynamic range), well, it might not work so well.
  3. Data loading and memory: this thing could do very fast Fourier transforms and array math even if the data load operation is relatively slow, and the “registers” are fairly narrow. They are only claiming a frame rate of 20Hz on their prototype, as I said above. On the other hand, if it’s going to do something like a petabyte pagerank type thing, loading data quickly and being able to load lots of data is going to be really important. It’s not clear to me how this sort of thing will scale in general, or how to deal with values that are inherently more like 64 bits than 16 bits. If the frame rate increases by a factor of a million, which looks possible with the MIT breakthrough, assuming their detectors are up to it, a lot of things become possible.
  4. Generality: analog computers were used for “big O” difficult calculations until relatively recently; often on much the same kinds of CFD problems as Optalysys hopes to compete in. I suppose they still are in some fields, using various kinds of ASICs and FPAAs. The new IBM neural net chip is a recent example, if I understand it correctly. The problem with them is that they are very difficult to program. If this thing isn’t generally good at matrix mathematics at least, or requires some very expensive “regular computer” operations to do general computing, you might not be gaining much.
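The detector-resolution worry in point 2 is easy to make concrete: quantize a signal to a given bit depth before transforming it and watch what happens to the error budget. A numpy sketch with made-up numbers:

```python
import numpy as np

def quantize(x, bits):
    # Model a detector/modulator that resolves only 2^bits levels
    # across the signal's full range.
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    return lo + np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo)

rng = np.random.default_rng(1)
signal = rng.standard_normal(4096)
exact = np.abs(np.fft.fft(signal))
for bits in (8, 16):
    err = np.abs(np.abs(np.fft.fft(quantize(signal, bits))) - exact).max()
    print(bits, err)  # the 16-bit error is a couple of orders smaller
```

Sixteen bits is plenty for a lot of physics; it is not plenty for everything people do with 64-bit floats.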

I’m hoping this actually takes off, because I’d love to sling code for such a machine.


Decent history of optical computing:



Their cool video, Narrated by Heinz Wolff:


 Kinds of hard problems this thing might work well on:



Edit add:

One of their patents (I was unable to find any last night for some reason), which includes commentary on the utility of this technique in problems with fractional derivatives. Fractional derivatives are the bane of finite element analysis; I ran into this most recently trying to help a friend do microwave imaging on human brains. I’ll add some more commentary as I go through the patent later this evening.