Locklin on science

The Fifth Generation Computing project

Posted in non-standard computer architectures by Scott Locklin on July 25, 2019

This article by Dominic Connor  reminds me of that marvelous artifact of the second AI winter (the one everyone talks about), “Fifth generation computing.”  I was not fully sentient at the time, though I was alive, and remember reading about “Fifth generation computing” in the popular science magazines.

It was 1982. Let’s not rely on vague recollections; here are some familiar things which happened that year: the Tylenol scare, the mandated breakup of the Bell System and the beginning of the hard fade of Bell Labs, the Falklands war, crazy blizzards, the first artificial heart installed, Sun Microsystems founded, the Commodore 64 released. People were starting to talk about personal robotics; Nolan Bushnell (Atari founder) started a personal robot company. The IBM PC had been released the previous year; by mid-year they had sold 200,000 of them, and MS-DOS 1.1 had been released. The Intel 80286 came out earlier in the year and was one of the first microprocessors with protected memory and hardware support for multitasking. The Thinking Machines company, attempting a novel form of massively parallel computing (probably indirectly in response to the 5th gen “threat”), would be founded in 1983.

Contemporary technology

The “AI” revolution was well underway at the time; expert system shells were actually deployed and used by businesses; XCON, Symbolics, the Lisp Machine guys were exciting startups. Cyc, a sort of ultimate expert system shell, would be founded a few years later. The hype train for this stuff was even more lurid than it is now; you can go back and look at old computer and finance magazines for some of the flavor of it. If you want to read about the actual tech they were hyping as bringing MUH SWINGULARITY, go read Norvig’s PAIP book. It was basically that stuff, and things that look like Mathematica. Wolfram is really the only 80s “AI” company that survived, mostly by copying 70s era “AI” symbolic algebra systems and re-implementing a big part of Lisp in “modern” C++.

Japan was challenging the industrial might of the United States at the time in a way completely unprecedented in American history. People were terrified; we beat those little guys in WW-2 (a mere 37 years earlier) and now they were kicking our ass at automotive technology and consumer electronics. The Japanese, triumphant, wanted to own the next computer revolution, which was still a solidly American achievement in 1982. They took all the hyped technology of the time -AI, massive parallelism, databases, improved lithography, Prolog-like languages- and hoped that by throwing it all together and tossing lots of those manufacturing-acquired dollars at the problem, they’d get the very first sentient machine. The stated goals:

1) The fifth generation computers will use super large scale integrated chips (possibly in a non-von Neumann architecture).
2) They will have artificial intelligence.
3) They will be able to recognize images and graphs.
4) Fifth generation computers aim to be able to solve highly complex problems, including decision making and logical reasoning.
5) They will be able to use more than one CPU for faster processing speed.
6) Fifth generation computers are intended to work with natural language.

Effectively the ambition of Fifth generation computers was to build the computers featured in Star Trek; ones that were semi-sentient, and that you could talk to in a fairly conversational way.

 

People were terrified. While I wasn’t even a teenager yet, I remember some of this terror. The end of the free market! We’d all be Japanese slaves! The end of industrial society! DARPA dumped a billion 1980s dollars into a project called the Strategic Computing Initiative in an attempt to counter this (amusingly one of the focuses was … autonomous vehicles -things which are still obviously over the rainbow). Most of the US semiconductor industry and mainframe vendors began an expensive collaboration to beat those sinister Japanese and prevent an AI Pearl Harbor. It was called the Microelectronics and Computer Technology Corporation (MCC for some reason), and it’s definitely ripe for some history of technology grad student to write a dissertation on it beyond the Wikipedia entry. The Japanese 5th gen juggernaut was such a big deal that the British (who were still tech players back then) had their own copy of this nonsense, called the “Alvey Programme” -they dumped about a billion pounds in today’s money into it. And not to be left out, the proto-EU also had their own version of this, called ESPRIT, with similar investment levels.

 

Prolog was of course the programming language of this future technology. Prolog was sort of the deep learning of its day; using constraint programming, databases (Prolog is still a somewhat interesting if over-flexible database query language), parallel constructs and expert system shell type technology, Prolog was supposed to achieve sentience. That hasn’t worked out real well for Prolog over the years: because of the nature of the language it is highly non-deterministic, and it’s fairly easy to pose NP-hard problems to Prolog. Of course in such cases, no matter how great the parallel model is, it still isn’t going to answer your questions.
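To make the “easy to pose NP-hard problems” point concrete, here is a minimal Python sketch (mine, purely illustrative; none of the names come from the 5th gen literature) of the chronological-backtracking search that is more or less Prolog’s execution model, aimed at graph 3-coloring. Posing the problem is trivial; nothing about fancy parallel hardware changes the fact that the worst-case search tree is exponential.

```python
# A toy, Prolog-flavored backtracking search in Python (hypothetical example):
# depth-first search with chronological backtracking, which is essentially
# Prolog's execution model. Posing graph 3-coloring this way is easy; in the
# worst case the search still visits exponentially many partial assignments,
# no matter how many processors you throw at it.

def three_color(graph, nodes, colors=("r", "g", "b"), assignment=None):
    """Return a valid 3-coloring of `graph` (adjacency dict) or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(nodes):          # all variables bound: success
        return dict(assignment)
    node = nodes[len(assignment)]              # next unbound variable
    for c in colors:                           # try each value, like Prolog clauses
        if all(assignment.get(nbr) != c for nbr in graph[node]):
            assignment[node] = c
            result = three_color(graph, nodes, colors, assignment)
            if result is not None:
                return result
            del assignment[node]               # backtrack on failure
    return None                                # no clause matched: fail upward

if __name__ == "__main__":
    # A 5-cycle needs 3 colors; larger, denser graphs force heavy backtracking.
    g = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(three_color(g, list(g)))
```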

 

One of the hilarious things about 5th generation computers is how certain people were about all this. The basic approach seemed completely unquestioned. They really thought all you had to do to build the future was take the latest fashionable ideas, stir them together, and presto, you have brain-in-a-can AI. There was no self-respecting computer scientist who would stand up and say “hey, maybe massive parallelism doesn’t map well onto constraint solvers, and perhaps some of these ambitions are things we have no idea how to solve.” [1] This is one of the first times I can think of an allegedly rigorous academic discipline collectively acting like overt whores, salivating at the prospect of a few bucks to support their “research.” Heck, that’s insulting to actual whores, who at least provide a service.

 

 

Of course, pretty much nothing in 5th generation computing turned out to be important, useful, or even sane. Well, I suppose VLSI technology was all right (though it was going to be used anyway), and DBMSes continue to be of some utility, but the rest of it was preposterous, ridiculous wankery and horse-puckey. For example: somehow they thought optical databases would allow for image search. It’s not clear what they had in mind here, if anything; really it sounds like bureaucrats making shit up about a technology they didn’t understand. For more examples:

“The objective stated (Moto-oka 1982 p.49) is the development of architectures with particular attention to the memory hierarchy to handle set operations using relational algebra as a basis for database systems.”
“The objective stated (Moto-oka 1982 p.53) is the development of a distributed function architecture giving high efficiency, high reliability, simple construction, ease of use, and adaptable to future technologies and different system levels.”
“The targets are: experimental, relational database machine with a capacity of 100 GB and 1,000 transactions a second; practical, 1,000 GB and 10,000 transactions a second. The implementation of a relational database machine using dataflow techniques is covered in section 8.3.3.”
“The objective stated (Moto-oka 1982 p.57) is the development of a system to input and output characters, speech, pictures and images and interact intelligently with the user. The character input/output targets are: interim, 3,000-4,000 Chinese characters in four to five typefaces; final, speech input of characters, and translation between kana and kanji characters. The picture input/output targets are: interim, input tablet 5,000 by 5,000 to 10,000 by 10,000 resolution elements; final, intelligent processing of graphic input. The speech input/output targets are: interim, identify 500-1,000 words; final, intelligent processing of speech input. It is also intended to integrate these facilities into multi-modal personal computer terminals.”
“The Fifth Generation plan is difficult and will require much innovation; but of what sort? In truth, it is more engineering than science (Feigenbaum & McCorduck 1983 p 124). Though solutions to the technological problems posed by the plan may be hard to achieve, paths to possible solutions abound.” (where have I heard this before? -SL)

The old books are filled with gorp like this. None of it really means anything.  It’s just ridiculous wish fulfillment and word salad.  Like this dumb-ass diagram:

 

There are probably lessons to be learned here. 5th Gen was exclusively a top-down approach. I have no idea who the Japanese guys are who proposed this mess; it’s possible they were respectable scientists of their day. They deserve their subsequent obscurity; perhaps they fell on their swords. Or perhaps they moved to the US to found some academic cult; the US is always in the market for technological wowzers who never produce anything. Such people only seem to thrive in the Anglosphere, catering to the national religious delusion of whiggery.

Japan isn’t to be blamed for attempting this: most of their big successes up to that point were top-down industrial policies designed to help the Zaibatsus achieve national goals. The problem here was that there was no Japanese computer Zaibatsu worth two shits with the proverbial skin in the game -it was all upside for the clowns who came up with this, no downside. Much like the concepts of nanotech 10 years ago, or quantum computing and autonomous automobiles now, it is a “Nasruddin’s Donkey bet” (aka scroll to the bottom here) without the 10 year death penalty for failure.

Japan was effectively taken for a ride by mountebanks. So was the rest of the world. The only people who benefited from it were quasi-academic computer scientist types who got paid to do wanking they found interesting at the time. Sound familiar to anyone? Generally speaking, top-down approaches to ridiculously ambitious projects, where overlords of dubious competence and motivation dictate the R&D direction, don’t work so well; particularly where there is software involved. It only works if you’re trying to solve a problem that you can decompose into specific tasks with milestones, like the moon shot or the Manhattan project, both of which had comparatively low-risk paths to success. Saying you’re going to build an intelligent talking computer in 1982 or 2019 is much like saying you’re going to fly to the moon or build a web browser in 1492. There is no path from that present to the desired outcome. Actual “AI,” from the present perspective, just as in 1982, is basically magic nobody knows how to achieve.

Another takeaway: many of the actual problems they wanted to solve were eventually solved in a more incremental way, while generating profits. One of the reasons they were trying to do this was to onboard many more people than had used computers before. The idea was that instead of hiring mathematically literate programmers to build models, if you could have machines smart enough to talk to people and read the charts and things an ordinary end user might bring to the computer with questions, more people could use computers, amplifying productivity. Cheap networked workstations with GUIs turned out to solve that in a much simpler way; you make a GUI, give the non-spergs some training, then ordinary dumbasses can harness some of the power of the computer. This still requires mentats to write GUI interfaces for the dumbasses (at least before our glorious present of shitty electron front ends for everything), but that sort of “bottom up, small expenditures, train the human” idea has been generating trillions in value since then.

The shrew-like networked, GUI-equipped microcomputers of Apple were released as products only two years after this central-planning dinosaur was postulated. Eventually, decades later, someone built a mechanical golem made of microcomputers which achieves a lot of the goals of fifth generation computing, with independent GUI front ends. I’m sure the Japanese researchers of the time would have been shocked to know it came from ordinary commodity microcomputers running C++ and using sorts and hash tables rather than non-von Neumann Prolog supercomputers. That’s how most progress in engineering happens though: incrementally[2]. Leave the moon shots to actual scientists (as opposed to “computer scientists”) who know what they’re talking about.

 

1988 article on an underwhelming visit to Japan.

1992 article on the failure of this program in the NYT.

 

[1] Some years later, 5 honest men discussed the AI winter upon them; yet the projects inexorably rolled forward. This is an amazing historical document; at some point scholars will find such a thing in our present day -maybe the conversation has already happened. https://www.aaai.org/ojs/index.php/aimagazine/article/view/494 … or PDF link here.

[2] Timely Nick Szabo piece on technological frontiersmanship: https://unenumerated.blogspot.com/2006/10/how-to-succeed-or-fail-on-frontier.html

Quantum computing as a field is obvious bullshit

Posted in non-standard computer architectures, physics by Scott Locklin on January 15, 2019

I remember spotting the quantum computing trend when I was  a larval physics nerdling. I figured maybe I could get in on the chuckwagon if my dissertation project didn’t work out in a big way (it didn’t). I managed to get myself invited to a Gordon conference, and have giant leather bound notebooks filled with theoretical scribblings containing material for 2-3 papers in them. I wasn’t real confident in my results, and I couldn’t figure out a way to turn them into something practical involving matter, so I happily matriculated to better things in the world of business.

When I say Quantum Computing is a bullshit field, I don’t mean everything in the field is bullshit, though to first order, this appears to be approximately true. I don’t have a mathematical proof that Quantum Computing isn’t at least theoretically possible.  I also do not have a mathematical proof that we can make the artificial bacteria of K. Eric Drexler’s nanotech fantasies. Yet, I know both fields are bullshit. Both fields involve forming new kinds of matter that we haven’t the slightest idea how to construct. Neither field has a sane ‘first step’ to make their large claims true.

Drexler and the “nanotechnologists” who followed him assume that because we know about the Schroedinger equation, we can make artificial forms of life out of arbitrary forms of matter. This is nonsense; nobody understands enough about matter in detail or life in particular to do this. There are also reasonable thermodynamic, chemical and physical arguments against this sort of thing. I have opined on this at length, and at this point, I am so obviously correct on the nanotech front, there is nobody left to argue with me. A generation of people who probably would have made first rate chemists or materials scientists wasted their early, creative careers following this over-hyped and completely worthless woo. Billions of dollars squandered down a rat hole of rubbish and wishful thinking. Legal wankers wrote legal reviews of regulatory regimes to protect us from this nonexistent technology. We even had congressional hearings on this nonsense topic back in 2003 and again in 2005 (and probably some other times I forgot about). Russians built a nanotech park to cash in on the nanopocalyptic trillion dollar nanotech economy which was supposed to have happened by now.

Similarly, “quantum computing” enthusiasts expect you to overlook the fact that they haven’t a clue as to how to build and manipulate the quantum coherent forms of matter necessary to achieve quantum computation. A quantum computer capable of truly factoring the number 21 is missing in action. In fact, the factoring of the number 15 into 3 and 5 is a bit of a parlour trick, as they design the experiment while knowing the answer, thus leaving out the gates that would be required if we didn’t already know how to factor 15. The actual number of gates needed to factor an n-bit number is 72 * n^3; so for 15, which is a 4-bit number, that’s 4608 gates; not happening any time soon.
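If you want to check that arithmetic yourself, it is a one-liner. The 72 * n^3 scaling is the figure quoted here and in the Emergent Chaos piece linked at the bottom of this post; treat it as a rough order-of-magnitude rule rather than gospel.

```python
# Order-of-magnitude check of the 72 * n^3 gate-count figure used in this post.
def shor_gate_estimate(n_bits):
    return 72 * n_bits ** 3

for n in (4, 1024, 4096):
    print(f"{n:>4}-bit modulus: ~{shor_gate_estimate(n):,} gates")

# 4-bit (factoring 15):            4,608 gates
# 4096-bit (RSA-sized modulus):    4,947,802,324,992 gates (~5 trillion)
```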

It’s been almost 25 years since Peter Shor had his big idea, and we are no closer to factoring large numbers than we were … 15 years ago when we were also able to kinda sorta vaguely factor the number 15 using NMR ‘quantum computers.’

I had this conversation talking with a pal at … a nice restaurant near one of America’s great centers of learning. Our waiter was amazed and shared with us the fact that he had done a Ph.D. thesis on the subject of quantum computing. My pal was convinced by this that my skepticism is justified; in fact he accused me of arranging this. I didn’t, but am motivated to write to prevent future Ivy League Ph.D. level talent having to make a living by bringing a couple of finance nerds their steaks.

In 2010, I laid out an argument against quantum computing as a field based on the fact that no observable progress has taken place. That argument still stands. No observable progress has taken place. However, 8 years is a very long time. Ph.D. dissertations have been achieved, and many of these people have gone on to careers … some of which involve bringing people like me delicious steaks. Hundreds of quantum computing charlatans achieved tenure in that period of time. According to google scholar a half million papers have been written on the subject since then.


There are now three major .com firms funding quantum computing efforts: IBM, Google and Microsoft. There is at least one YC/Andreessen backed startup I know of. Of course there is also D-Wave, who has somehow managed to exist since 1999 -almost 20 years- without actually delivering something usefully quantum or computing. How many millions have been flushed down the toilet by these turds? How many millions which could have been used building, say, ordinary analog or stochastic computers which do useful things? None of these have delivered a useful quantum computer which has even one usefully error corrected qubit. I suppose I shed not too many tears for the money spent on these efforts; in my ideal world, several companies on that list would be broken up or forced to fund Bell Labs moonshot efforts anyway, and most venture capitalists are frauds who deserve to be parted with their money. I do feel sad for the number of young people taken in by this quackery. You’re better off reading ancient Greek than studying a ‘technical’ subject that eventually involves bringing a public school kid like me a steak. Hell, you are better off training to become an exorcist or a feng shui practitioner than getting a Ph.D. in ‘quantum computing.’

I am an empiricist and a phenomenologist. I consider the lack of one error corrected qubit in the history of the human race to be adequate evidence that this is not a serious enough field to justify using the word ‘field.’ Most of it is frankly, a scam. Plenty of time to collect tenure and accolades before people realize this isn’t normative science or much of anything reasonable.

As I said last year

All you need do is look at history: people had working (digital) computers before Von Neumann and other theorists ever noticed them. We literally have thousands of “engineers” and “scientists” writing software and doing “research” on a machine that nobody knows how to build. People dedicate their careers to a subject which doesn’t exist in the corporeal world. There isn’t a word for this type of intellectual flatulence other than the overloaded term “fraud,” but there should be.

“Computer scientists” have gotten involved in this chuckwagon. They have added approximately nothing to our knowledge of the subject, and as far as I can tell, their educational backgrounds preclude them ever doing so. “Computer scientists” haven’t had proper didactics in learning quantum mechanics, and virtually none of them have ever done anything as practical as fiddling with an op-amp, building an AM radio or noticing how noise works in the corporeal world.

Such towering sperg-lords actually think that the only problems with quantum computing are engineering problems. When I read things like this, I can hear them muttering mere engineering problems.  Let’s say, for the sake of argument this were true. The SR-71 was technically a mere engineering problem after the Bernoulli effect was explicated in 1738. Would it be reasonable to have a hundred or a thousand people writing flight plans for the SR-71  as a profession in 1760? No.

A reasonable thing for a 1760s scientist to do is invent materials making a heavier than air craft possible. Maybe fool around with kites and steam engines. And even then … there needed to be several important breakthroughs in metallurgy (titanium wasn’t discovered until 1791), mining, a functioning petrochemical industry, formalized and practical thermodynamics, a unified field theory of electromagnetism, chemistry, optics, manufacturing and arguably quantum mechanics, information theory, operations research and a whole bunch of other stuff which was unimaginable in the 1760s. In fact, of course the SR-71 itself was completely unimaginable back then. That’s the point.

 

it’s just engineering!

Physicists used to be serious and bloody minded people who understood reality by doing experiments. Somehow this sort of bloody minded seriousness has faded out into a tower of wanking theorists who only occasionally have anything to do with actual matter. I trace the disease to the rise of the “meritocracy” out of cow colleges in the 1960s. The post WW-2 neoliberal idea was that geniuses like Einstein could be mass produced out of peasants using agricultural schools. The reality is, the peasants are still peasants, and the total number of Einsteins in the world, or even merely serious thinkers about physics is probably something like a fixed number. It’s really easy, though, to create a bunch of crackpot narcissists who have the egos of Einstein without the exceptional work output. All you need to do there is teach them how to do some impressive looking mathematical Cargo Cult science, and keep their “results” away from any practical men doing experiments.

The manufacture of a large caste of such boobs has made any real progress in physics impossible without killing off a few generations of them. The vast, looming, important questions of physics; the kinds that a once in a lifetime physicist might answer -those haven’t budged since the early 60s. John Horgan wrote a book observing that science (physics in particular) has pretty much ended any observable forward progress since the time of cow collitches. He also noticed that instead of making progress down fruitful lanes or improving detailed knowledge of important areas, most develop enthusiasms for the latest non-experimental wank fest; complexity theory, network theory, noodle theory. He thinks it’s because it’s too difficult to make further progress. I think it’s because the craft is now overrun with corrupt welfare queens who are play-acting cargo cultists.

Physicists worthy of the name are freebooters; Vikings of the Mind, intellectual adventurers who torture nature into giving up its secrets and risk their reputation in the real world. Modern physicists are … careerist ding dongs who grub out a meagre living sucking on the government teat, working their social networks, giving their friends reach arounds and doing PR to make themselves look like they’re working on something important. It is terrible and sad what happened to the king of sciences. While there are honest and productive physicists, the mainstream of it is lost, possibly forever to a caste of grifters and apple polishing dingbats.

But when a subject claims to be a technology, yet lacks even the rudiments of the experiments which might one day make it into a technology, you can know with absolute certainty that this ‘technology’ is total nonsense. Quantum computing is less physical than the engineering of interstellar spacecraft; we at least have plausible physical mechanisms to achieve interstellar space flight.

We’re reaching peak quantum computing hyperbole. According to a dimwit at the Atlantic, quantum computing will end free will. According to another one at Forbes, “the quantum computing apocalypse is imminent.” Rachel Gutman and Shlomo Dolev know about as much about quantum computing as I do about 12th century Talmudic studies, which is to say, absolutely nothing. They, however, think they know smart people who tell them that this is important: they’ve achieved the perfect human informational centipede. This is unquestionably the right time to go short.

Even the National Academy of Sciences has taken note that there might be a problem here. They put together 13 actual quantum computing experts who poured cold water on all the hype. They wrote a 200 page review article on the topic, pointing out that even with the most optimistic projections, RSA is safe for another couple of decades, and that there are huge gaps in our knowledge of how to build anything usefully quantum computing. And of course, they also pointed out that if QC doesn’t start solving some problems which are interesting to … somebody, the funding is very likely to dry up. Ha, ha; yes, I’ll have some pepper on that steak.


 

There are several reasonable arguments against any quantum computing of the interesting kind (aka can demonstrate supremacy on a useful problem) ever having a physical embodiment.

One of the better arguments is akin to that against P=NP. No, not the argument that “if there was such a proof someone would have come up with it by now” -but that one is also in full effect. In principle, classical analog computers can solve NP-hard problems in P time. You can google around on the “downhill principle” or look at the work on Analog super-Turing architectures by people like Hava Siegelmann. It’s old stuff, and most sane people realize this isn’t really physical, because matter isn’t infinitely continuous. If you can encode a real/continuous number into the physical world somehow, P=NP using a protractor or soap-bubble. For whatever reasons, most complexity theorists understand this, and know that protractor P=NP isn’t physical.  Somehow quantum computing gets a pass, I guess because they’ve never attempted to measure anything in the physical world beyond the complexity of using a protractor.

In order to build a quantum computer, you need to control each qubit, which is a continuous value, not a binary value, in its initial state and subsequent states precisely enough to run the calculation backwards. When people do their calculations ‘proving’ the efficiency of quantum computers, this is treated as an engineering detail. There are strong assertions by numerous people that quantum error correction (which, I will remind everyone, hasn’t been usefully implemented in actual matter by anyone -that’s the only kind of proof that matters here) basically pushes the analog requirement for perfection to the initialization step, or subsumes it in some other place where it can’t exist. Let’s assume for the moment that this isn’t the case.

Putting this a different way, for an N-qubit computer, you need to control, transform, and read out 2^N complex (as in complex numbers) amplitudes to a very high degree of precision. Even considering a classical analog computer with N oscillators which must be precisely initialized, precisely controlled, transformed and individually read out, to the point where you could reverse the computation by running the oscillators through the computation backwards, this is an extremely challenging task. The quantum version is exponentially more difficult.
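To put numbers on “exponentially more difficult,” here is a trivial, purely illustrative Python calculation of how many amplitudes you are signing up to control; nothing here is controversial, it is just powers of two.

```python
# Purely illustrative arithmetic: an N-qubit state is specified by 2^N complex
# amplitudes, every one of which has to be controlled and read out to high
# precision if you want to run the computation backwards.
import math

def amplitude_count(n_qubits):
    digits = int(n_qubits * math.log10(2)) + 1      # decimal digits in 2^N
    return f"{n_qubits:>5} qubits: 2^{n_qubits} amplitudes (a {digits}-digit number)"

for n in (10, 50, 300, 4000):
    print(amplitude_count(n))

# 50 qubits is already ~10^15 amplitudes; 300 qubits exceeds the ~10^80 protons
# in the universe mentioned below; 4000 qubits gives the ~10^1200 figure below.
```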

Making it even more concrete: if we encode the polarization state of a photon as a qubit, how do we perfectly align the polarizers between two qubits? How do we align them for N qubits? How do we align the polarization direction with the gates? This isn’t some theoretical gobbledeygook; when it comes time to build something in physical reality, physical alignments matter, a lot. Ask me how I know. You can go amuse yourself and try to build a simple quantum computer with a couple of hard coded gates using beamsplitters and the polarization states of photons. It’s known to be perfectly possible and even has a rather sad wikipedia page. I can make quantum polarization-state entangled photons all day; any fool with a laser and a KDP crystal can do this, yet somehow nobody bothers sticking some beamsplitters on a breadboard and making a quantum computer. How come? Well, one guy recently did it: got two whole qubits. You can go read about this *cough* promising new idea here, or if you are someone who doesn’t understand matter, here.

FWIIW, in the early days of this idea, it was noticed that the growth in the number of components needed was exponential in the number of qubits. Well, this shouldn’t be a surprise: the growth in the number of states in a quantum computer is also exponential in the number of qubits. That’s both the ‘interesting thing’ and ‘the problem.’ The ‘interesting thing’ because an exponential number of states, if possible to trivially manipulate, allows for a large speedup in calculations. ‘The problem’ because manipulating an exponential number of states is not something anyone really knows how to do.

The problem doesn’t go away if you use spins of electrons or nuclei; which direction is spin up? Will all the physical spins be perfectly aligned in the “up” direction? Will the measurement devices agree on spin-up? Do all the gates agree on spin-up? In the world of matter, of course they won’t; you will have a projection. That projection is in effect, correlated noise, and correlated noise destroys quantum computation in an irrecoverable way. Even the quantum error correction people understand this, though for some reason people don’t worry about it too much. If they are honest in their lack of worry, this is because they’ve never fooled around with things like beamsplitters. Hey, making it have uncorrelated noise; that’s just an engineering problem right? Sort of like making artificial life out of silicon, controlled nuclear fusion power or Bussard ramjets is “just an engineering problem.”

engineering problem; easier than quantum computers
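For the skeptical, here is a toy numpy model (my own illustration, not anything from the QC literature) of why correlated noise is the killer: one polarization-encoded qubit pushed through a chain of rotation “gates,” with either a fixed systematic misalignment or independent random jitter of the same size. The systematic error accumulates linearly in the number of gates; the random jitter only grows as its square root.

```python
# Toy model: a single polarization-encoded qubit run through k rotation gates,
# each misaligned by ~628 microradians, either systematically (correlated) or
# with independent random jitter (uncorrelated). Fidelity = cos^2(accumulated
# angular error); rotations about one axis commute, so the errors simply add.
import numpy as np

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def fidelity_after(k, delta, correlated, seed=0):
    rng = np.random.default_rng(seed)
    ideal = actual = np.array([1.0, 0.0])
    step = np.pi / 7                       # arbitrary intended rotation per gate
    for _ in range(k):
        err = delta if correlated else rng.normal(0.0, delta)
        ideal = rot(step) @ ideal
        actual = rot(step + err) @ actual
    return float(np.dot(ideal, actual) ** 2)

delta = 628e-6                             # ~0.036 degrees, the figure used below
for k in (10, 100, 1000):
    print(f"{k:>5} gates: correlated {fidelity_after(k, delta, True):.4f}, "
          f"uncorrelated {fidelity_after(k, delta, False):.4f}")
```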

 

Of course at some point someone will mention quantum error correction which allows us to not have to precisely measure and transform everything. The most optimistic estimate of the required precision is something like 10^-5 for quantum error corrected computers per qubit/gate operation. This is a fairly high degree of precision. Going back to my polarization angle example; this implies all the polarizers, optical elements and gates in a complex system are aligned to 0.036 degrees. I mean, I know how to align a couple of beamsplitters and polarizers to 628 microradians, but I’m not sure I can align a few hundred thousand of them AND pockels cells and mirrors to 628 microradians of each other. Now imagine something with a realistic number of qubits for factoring large numbers; maybe 10,000 qubits, and a CPU worth of gates, say 10^10 or so of gates (an underestimate of the number needed for cracking RSA, which, mind you, is the only reason we’re having this conversation). I suppose it is possible, but I encourage any budding quantum wank^H^H^H  algorithmist out there to have a go at aligning 3-4 optical elements to within this precision. There is no time limit, unless you die first, in which case “time’s up!”

This is just the most obvious engineering limitation for making sure we don’t have obviously correlated noise propagating through our quantum computer. We must also be able to prepare the initial states to within this sort of precision. Then we need to be able to measure the final states to within this sort of precision. And we have to be able to do arbitrary unitary transformations on all the qubits.

Just to interrupt you with some basic facts: the number of states we’re talking about here for a 4000 qubit computer is ~ 2^4000 states! That’s 10^1200 or so continuous variables we have to manipulate to at least one part in ten thousand. The number of protons in the universe is about 10^80. This is why a quantum computer is so powerful; you’re theoretically encoding an exponential number of states into the thing. Can anyone actually do this using a physical object? Citations needed; as far as I can tell, nothing like this has ever been done in the history of the human race. Again, interstellar space flight seems like a more achievable goal. Even Drexler’s nanotech fantasies have some precedent in the form of actually existing life forms. Yet none of these are coming any time soon either.

There are reasons to believe that quantum error correction, too, isn’t even theoretically possible (examples here and here and here -this one is particularly damning). In addition to the argument above that the theorists are subsuming some actual continuous number into what is inherently a noisy and non-continuous machine made out of matter, the existence of a quantum error corrected system would mean you can make arbitrarily precise quantum measurements; effectively giving you back your exponentially precise continuous number. If you can do exponentially precise continuous numbers in a non-exponential number of calculations or measurements, you can probably solve very interesting problems on a relatively simple analog computer. Let’s say, a classical one like a Toffoli gate billiard ball computer. Get to work; we know how to make a billiard ball computer work with crabs. This isn’t an example chosen at random. This is the kind of argument allegedly serious people submit for quantum computation involving matter. Hey man, not using crabs is just an engineering problem muh Church Turing warble murble.

Smurfs will come back to me with the press releases of Google and IBM touting their latest 20 bit stacks of whatever. I am not impressed, and I don’t even consider most of these to be quantum computing in the sense that people worry about quantum supremacy and new quantum-proof public key or Zero Knowledge Proof algorithms (which more or less already exist). These cod quantum computing machines are not expanding our knowledge of anything, nor are they building towards anything for a bold new quantum supreme future; they’re not scalable, and many of them are not obviously doing anything quantum or computing.

This entire subject does nothing but  eat up lives and waste careers. If I were in charge of science funding, the entire world budget for this nonsense would be below that we allocate for the development of Bussard ramjets, which are also not known to be impossible, and are a lot more cool looking.

 

 

As Dyakonov put it in his 2012 paper:

“A somewhat similar story can be traced back to the 13th century when Nasreddin Hodja made a proposal to teach his donkey to read and obtained a 10-year grant from the local Sultan. For his first report he put breadcrumbs between the pages of a big book, and demonstrated the donkey turning the pages with his hoofs. This was a promising first step in the right direction. Nasreddin was a wise but simple man, so when asked by friends how he hopes to accomplish his goal, he answered: “My dear fellows, before ten years are up, either I will die or the Sultan will die. Or else, the donkey will die.”

Had he the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.”

Further reading on the topic:

Dyakonov’s recent IEEE popsci article on the subject (his papers are the best review articles of why all this is silly):

https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing

IEEE precis on the NAS report:

https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing (summary: not good)

Amusing blog from 11 years ago noting the utter lack of progress in this subject:

http://emergentchaos.com/archives/2008/03/quantum-progress.html

“To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Lets just round that up to an even 5 trillion. Five trillion is a big number.”

Aaronson’s articles of faith (I personally found them literal laffin’ out loud funny, though I am sure he is in perfect earnest):

https://www.scottaaronson.com/blog/?p=124

 

Optalysys and Optical computing

Posted in non-standard computer architectures by Scott Locklin on August 11, 2014

Years and years ago, when I was writing up my dissertation and thinking about what I was going to do with my life, I gave some thought to non von-Neumann computing architectures. This was more or less the dawn of the quantum computing/quantum information era as a field, when it turned from an obscure idea to a sort of cottage industry one could get tenure in.  I had the idea in my mind that the Grover Algorithm could be done on a classical optical architecture, and wrote many equations in a fat notebook I now keep in my kitchen between a cookbook and a book on Cold War Submarines.  I suppose had I written some more equations and discovered I wasn’t full of baloney, I might have made a nice academic career for myself as a designer of novel computing architectures, rather than the underemployed constructor of mathematical models for businesses and sporadic blogger I have turned into. Be that as it may, my erstwhile interests have equipped me with some modest background in how the future gizmos might work.

While quantum computing appears to be turning into a multi-decade over-hyped boondoggle of a field, there is no reason why optical computers might not become important in the near future. Years before my attempted non von-Neumann punt into the intellectual void, my old boss Dave Snoke handed me a book called “Fundamentals of Photonics” which implied optical computers might one day be very important. The book contained a few chapters sketching outlines of how future photonic computers might work. Armed with this knowledge, and the fact that a new startup called Optalysys is claiming practical optical computers are right around the corner, perhaps these endless hours of wasted time can be converted into a useful blog post.


The idea for optical computing has been around for decades. Some trace it back to Zernike’s phase contrast microscopes. Certainly by the 60s, Vander Lugt [pdf link] and company were thinking about optics in terms of signal processing and computation. It began to be a real technology in the 70s with the invention of “spatial light modulators” (SLMs) of various kinds. Historical SLMs have been all kinds of weird things you run into in the optics world -Pockels cells, Phototitus converters, Kerr rotation- but the main technology used is the familiar LCD. Interestingly, some MIT researchers have recently come up with LCDs that are capable of very fast switching. This could be the innovation which makes optical computing real.


Things Optalysys isn’t: This certainly isn’t an “all optical computer.” The “all optical computer” is sort of the philosopher’s stone of the optical computing world. If they had accomplished this, they’d certainly claim it, and people like me would be plagued with doubt. It is also not any kind of “quantum computer,” for the same reasons. In fact, they more or less say it isn’t even a digital computer.

What Optalysys is: From the vague descriptions on their website, this is a standard analog optical computing architecture consisting of a high resolution detector, at least one SLM, a lens and/or mirror or two, and an array of laser diodes. Presumably there has been some improvement which has made this a practicable commercial item; the MIT fast LCD might eventually be one of them. This should be seen as a sort of “math co-processor,” rather than a general purpose computer, though it does some important math.


What can it do? Matrix  math, basically. Even thinking about this in the most hand-wavey manner, optical computers should be good at  image processing. When you break down what is involved in image processing, matrix math and Fourier transforms come to mind. We know how to do matrix math with optical computers. The limiting aspect is how many elements you can encode and modulate quickly. An optical matrix multiplication doodad will do it in O(const) time, up to the limiting size of the matrices that can be encoded in such a machine. This is huge, as most matrix multiplications are O(N^3) for square matrices of size N. There is all manner of work done on improving matrix multiplication and other forms of matrix math via statistical methods (subject of a coming blog post); the problem is very important, as  O( N^3 ) is pretty bad on large problems. People who have studied the history of computer science know how important the  Fast Fourier Transform was. It reduced the 1-d Fourier transform from O(N^2) operations to O(N log(N)), opening up vast new applications to signal processing techniques. Optical computing can in principle do Fourier transforms in O(const) [pdf link ] up to some maximum encoding size. Even better, it can do the 2-dimensional version in the same O(const) time. Even the FFT is, of course O(N^2 log(N)) for two dimensional problems, so this is a potentially significant speedup over conventional computing.
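To make the “a lens does your Fourier transform for free” point concrete, here is a small numpy sketch (mine, purely illustrative) of Vander Lugt-style matched filtering: correlate a scene against a template by taking a pointwise product in the Fourier domain. The FFT below stands in for the lens; an optical correlator would do both 2-D transforms in O(const) time.

```python
# Illustrative only: Vander Lugt-style pattern matching. Correlation is a
# pointwise product in the Fourier domain; a lens does the 2-D transform in
# O(const), while the FFT stand-in below costs O(N^2 log N) for an N x N image.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((256, 256))            # stand-in for the SLM-encoded input
scene[100:120, 60:90] += 2.0              # plant a rectangular "target"

template = np.zeros((256, 256))
template[0:20, 0:30] = 1.0                # the same rectangle, at the origin

# Matched filter: FT the scene, multiply by the conjugate FT of the template,
# inverse FT, and look for the correlation peak.
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)        # expect roughly (100, 60)
```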


Most programmers probably have glazed eyes at this point. Who cares, right? You care more than you think. Many of the elementary operations used by programmers have something matrix related at their cores. Pagerank is an algorithm that has become essential to modern life. It is effectively a very efficient way of finding the most important eigenvector of a very large matrix in O(N), rather than the O(N^3) it takes to find all the eigenvalues and then find the biggest one. It is an underappreciated fact that Brin and Page turned a nice thesis topic in linear algebra into one of the great companies of the world, but this sort of thing demonstrates why linear algebra and matrix math are so important. Other database search algorithms are related to Singular Value Decomposition (SVD). SVD is also used in many scalable forms of data mining and machine learning, and I’m pretty sure it works faster on an optical computer than on an ordinary one. There are also pattern recognition techniques that only make sense on optical computers. Since nobody sells optical computers, it is not clear where they might be useful, but they might very well be helpful for certain kinds of problems; certainly pattern matching on images could use a speedup. The company is shooting for the notoriously computationally intractable field of computational fluid dynamics (CFD) first. CFD involves a lot of matrix math, including 2-d FTs in some realizations.
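Since Pagerank came up: the whole trick is that you never diagonalize anything; you just hit a vector with the (damped) link matrix until it stops changing. A minimal power-iteration sketch (mine; the four-page graph is made up) looks like the following, and the matrix-vector product at its core is exactly the primitive an optical multiplier would accelerate.

```python
# Minimal power-iteration sketch: Pagerank boils down to repeated matrix-vector
# products against the damped link matrix, which converges to the dominant
# eigenvector without ever computing a full eigendecomposition.
import numpy as np

links = np.array([[0, 1, 1, 0],     # row i -> column j means page i links to j
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

def pagerank(adj, damping=0.85, iters=100):
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    transition = adj / out_degree              # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):                     # power iteration
        rank = damping * (transition.T @ rank) + (1 - damping) / n
    return rank / rank.sum()

print(pagerank(links).round(3))                # page 2 should come out on top
```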

Scale: right now, Optalysys is claiming a fairly humble 40 Gflops in their prototype, which is far from exceptional (laptops do about the same in a much more general way). I think a lot of people pooped their pants at the “eventual exabyte processor” claims in their press release. In principle they can scale this technology, but of course, a lot of things are possible “in principle.” The practical details are the most important thing. Their technology demonstrator is supposed to have 500×500 pixels in the SLM element. This is way too small for important problems; they might even be using an overhead projector LCD for the prototype doodad (the 20Hz frame rate kind of implies this might be the case): this is quite possible to do if you know some optics and have a good detector. How to scale a 500×500 element device to something on an exabyte scale is far from clear, but the most obvious way is to add more and finer elements, and run the thing faster. A factor of 2.5 million or more increase sounds challenging, but increasing the frame rate by a factor of a thousand and the number of elements by a factor of 1000 seems like something that actually could be done on a desktop as they claim.

Problems to look out for:

 

  1. LCDs: I believe the LCD technology is fast enough to be useful, but lifespan, quality and reliability will be limiting factors. If the fast LCD gizmo only lasts a few million cycles of shining lasers through it, this will be a problem. If the pixels randomly burn out or die, well, you have to know about this, and the more that die, the worse the calculation will be. I guess at this point, I also believe it makes small enough pixels to be useful. On the other hand, if they are using an overhead projector LCD and have no plans to radically upgrade the number of pixels in some way, and the completed machine is fairly large, it probably won’t be improving to Exascales any time soon.
  2. Detectors: I’m guessing there haven’t been optical computers yet mostly because the output detectors have been too noisy for low power lasers. This is a guess; I’m not interested enough in such things to investigate deeply, but it is a pretty good guess based on experience working with things optical. So, if the output detector system isn’t very good, this won’t work very well. It will be very different from writing code on a floating point processor. For one thing, you’ll be using a lot fewer than 64 bits per number. I’m guessing, based on the fast detectors used in streak cameras, this will be something more like 16 bits; maybe fewer. This is fine for a lot of physical problems, assuming it works at 16 bits resolution on a fast time scale. If you have to sit around and integrate for a long time to encode more than 16 bits (or, say, increase the laser power used for more dynamic range), well, it might not work so well.
  3. Data loading and memory: this thing could do very fast Fourier transforms and array math even if the data load operation is relatively slow, and the “registers” are fairly narrow. They are only claiming a frame rate of 20Hz on their prototype, as I said above. On the other hand, if it’s going to do something like a petabyte pagerank type thing, loading data quickly and being able to load lots of data is going to be really important. It’s not clear to me how this sort of thing will scale in general, or how to deal with values that are inherently more like 64 bits than 16 bits. If the frame rate increases by a factor of a million, which looks possible with the MIT breakthrough, assuming their detectors are up to it, a lot of things become possible.
  4. Generality: analog computers have been used for “big O” difficult calculations until relatively recently; often on much the same kinds of CFD problems as Optalysys hopes to compete in. I suppose they still are in some fields, using various kinds of ASICs and FPAAs. The new IBM neural net chip is a recent example, if I understand it correctly. The problem with them is the fact that they are very difficult to program. If this thing isn’t generally good at matrix mathematics at least, or requires some very expensive “regular computer” operations to do general computing, you might not be gaining much.

I’m hoping this actually takes off, because I’d love to sling code for such a machine.

 

Decent history of optical computing:

http://www.hindawi.com/journals/aot/2010/372652/

 

Their cool video, Narrated by Heinz Wolff:

 

 Kinds of hard problems this thing might work well on:

http://view.eecs.berkeley.edu/wiki/Dwarfs

 

Edit add:

One of their patents (I was unable to find any last night for some reason), which includes commentary on the utility of this technique in problems with fractional derivatives. Fractional derivatives are the bane of finite element analysis; I ran into this most recently trying to help a friend do microwave imaging on human brains. I’ll add some more commentary as I go through the patent later this evening.

http://www.freepatentsonline.com/8610839.html

 

 

Spotting vaporware: three follies of would-be technologists

Posted in Design, nanotech, non-standard computer architectures, Progress by Scott Locklin on October 4, 2010

When I was a little boy in the 70s and 80s, I was pretty sure that by the 21st century I’d drive around in a hovercraft, and take space vacations on the moons of Saturn. My idea of a future user interface for a computer was not the 1970s emacs interface that the cleverest people still use to develop software today; I’d just talk to the thing, HAL-9000 style. I suppose my disappointments with modern technological “advances” are the boyish me complaining I didn’t get my hovercraft and talking artificial brain. What boggles me is the gaping credulity with which intelligent people now treat allegedly developing future technologies.

A vast industry of professional bullshit artists has risen up to promote and regulate technologies which will never actually exist. These nincompoops and poseurs are funded by your tax dollars; they fly all over the world  attempting to look important by promising to deliver the future. All they actually deliver is wind and public waste.  Preposterous snake oil salesmen launched an unopposed blitzkrieg strike on true science and technology during my lifetime. I suspect the scientific and technological community’s rich marbling with flâneurs is tolerated because they bring in government dollars from the credulous; better not upset anybody, or the gravy train might stop flowing!

While I have singled out Nano-stuff for scorn in an article I’d describe as “well received,” (aka, the squeals of the ninnies who propagate this nonsense were sweet music to my ears), there are many, many fields like this.

The granddaddy of them all is probably magnetic confinement nuclear fusion. This is a “technology” which has been “just 20 years in the future” for about 60 years now. It employs a small army of plasma physicists and technicians, most of whom are talented people who could be better put to use elsewhere. At some point, it must be admitted that these guys do not know what they are doing: they can’t do what they keep promising, and in fact, they have no idea how to figure out how to do it.

I think there is a general principle you can derive from the story of magnetic confinement fusion. I don’t yet have a snappy name for it, so I’ll call it “the folly of plan by bureaucracy.” The sellers of such technology point out that it is not known to be impossible, so all you need do is shower them in gold, and they will surely eventually deliver. There are no intermediate steps given, and there is no real plan to even develop a plan to know if the “big idea” is any good. But they certainly have a gigantic bureaucratic organizational chart drawn up. The only time large bureaucracies can actually deliver specific technological breakthroughs (atom bombs, moon shots) is when there is a step by step plan on how to do it. Something that would fit in Microsoft Project or some other kind of flow chart. The steps must be well thought out, they must be obviously possible using small improvements on current techniques, and have a strict timeline for their completion. If any important piece is missing, or there are gaping lacunae in the intermediate steps, the would-be technology is a fool’s mission. If there is no plan or intermediate steps given: this would-be technology is an outright fraud which must be scorned by serious investors, including the US government.

To illustrate this sort of thing in another way: imagine if someone shortly after the Bernoulli brothers asked the King for a grant to build a mighty aerostat which travels at 3 times the speed of sound. Sure, there is no physical law that says we can’t build an SR-71 … just the fact that 18th century technologists hadn’t invented heavier than air flight, the jet engine, aerodynamics, refined hydrocarbons, computers or titanium metallurgy yet. Of course, nobody in those days could have thought up the insane awesomeness of the SR-71; I’m guessing a science fiction charlatan from those days might imagine some kind of bird-thing with really big wings, powered by something which is thermodynamically impossible. Giving such a clown money to build such a thing, or steps towards such a thing, would have been the sheerest madness. Yet, we do this all the time in the modern day.

A sort of hand-wavey corollary, based again on fusion’s promises (or, say, the “war on cancer”), I like to call “the folly of 20 year promises.” Bullshitters love to give estimates that allow them to retire before they’re discovered as frauds; 20 years is about long enough to collect a pension. Of course, a 20 year estimate may be an honest one, but I can’t really think of any planned, specific technological breakthrough developed by a bureaucracy over that kind of time scale, and I can think of dozens upon dozens which have failed miserably to the tune of billions of research dollars. What “20 years” means to me is, “I don’t actually know how to do this, but I wish you’d give me money for it anyway.”

A burgeoning quasi-technological field which is very likely to be vaporware is that of quantum computing. This pains me to say, as I think the science behind this vaporware technology is interesting. The problem is, building quantum gates (the technology needed to make this theoretical concept real) is perpetually 20 years off in the future. We even have a very dubious company founded, and in operation, for 11 years. I don’t know where they get their money, and they manage to publish stuff at least as respectable as the rest of the QC field, but … they have no quantum computer. Granted, many in the academic community are attempting to keep them honest, but their continued existence demonstrates how easy it is to make radical claims without ever being held to account for them.

David Deutsch more or less invented the idea of the quantum computer in 1985. It is now 25 years later, and there is still no quantum computer to be seen. I think Deutsch is an honest man, and a good scientist; his idea was more quantum  epistemology than an attempt to build a practical machine that humans might use for something. The beast only took on a bureaucratic life of its own after Peter Shor came up with an interesting algorithm for Deutsch’s theoretical quantum computers.

Now, let us compare this to the invention of modern computers by John von Neumann and company in 1945. Von Neumann’s paper can be considered a manual for building a modern computer. His paper described a certain computer architecture, one which had already been built, in ways that made its mathematical understanding and reproduction relatively simple. Most computers in use today are the direct result of this paper. I’d argue that it was engineering types like Herman Goldstine, John Mauchly and J. Presper Eckert -who actually built digital electronic computers before von Neumann’s paper- who made the computer possible. In turn, their ideas were based on those of analog computers, which have a long and venerable history dating back to the ancient Greeks. The important thing to notice here is that the theory of binary digital computers came after the invention; not the other way around.

Now, it is possible for theory to have a stimulating effect on technology: von Neumann’s paper certainly did, but it is rare to nonexistent to derive all the properties of a non-existent technology using nothing but abstract thought. The way real technology is developed: the technology or some sort of precursor gizmo or physical phenomenon comes first. Later on, some theory is added to the technology as a supplement to understanding, and the technology may be improved. Sort of like, in real science, the unexplained phenomenon generally comes first; the theory comes later. In developing a new technology, I posit that this sort of “theory first” ideology is intellectual suicide. I call this “the folly of premature theory.” Theory doesn’t build technologies: technologies build theories.

Technology is what allows us our prosperity, and it must be funded and nurtured, but we must also avoid funding and nurturing parasites. Cargo-cult scientists and technologists are not only wasteful of money, they waste human capital. It makes me sad to see so many young people dedicating their lives to snake oil like “nanotechnology.” They’d be better off starting a business or learning a trade. “Vaporware technologist” would be a horrible epitaph to a misspent life. I have already said I think technological progress is slowing down. While I think this is an overall symptom of a decline in civilization, I think the three follies above are some of the proximate causes of this failing. Bruce Charlton has documented many other such follies, and if you’re interested in this sort of thing, I recommend reading his thoughts on the matter.