Locklin on science

Optalysys and Optical computing

Posted in non-standard computer architectures by Scott Locklin on August 11, 2014

Years and years ago, when I was writing up my dissertation and thinking about what I was going to do with my life, I gave some thought to non von-Neumann computing architectures. This was more or less the dawn of the quantum computing/quantum information era as a field, when it turned from an obscure idea to a sort of cottage industry one could get tenure in.  I had the idea in my mind that the Grover Algorithm could be done on a classical optical architecture, and wrote many equations in a fat notebook I now keep in my kitchen between a cookbook and a book on Cold War Submarines.  I suppose had I written some more equations and discovered I wasn’t full of baloney, I might have made a nice academic career for myself as a designer of novel computing architectures, rather than the underemployed constructor of mathematical models for businesses and sporadic blogger I have turned into. Be that as it may, my erstwhile interests have equipped me with some modest background in how the future gizmos might work.

While quantum computing appears to be turning into a multi-decade over-hyped boondoggle of a field, there is no reason why optical computers might not become important in the near future. Years before my attempted non von-Neumann punt into the intellectual void, my old boss Dave Snoke handed me a book called “Fundamentals of Photonics” which implied optical computers might one day be very important. The book contained a few chapters sketching outlines of how future photonic computers might work. Armed with this knowledge, and the fact that a new startup called Optalysys is claiming practical optical computers are right around the corner, perhaps these endless hours of wasted time can be converted into a useful blog post.


The idea for optical computing has been around for decades. Some trace it back to Zernike’s phase contrast microscopes. Certainly by the 60s, Vander Lugt [pdf link] and company were thinking about optics in terms of signal processing and computation. It began to be a real technology in the 70s with the invention of “spatial light modulators” (SLMs) of various kinds. Historical SLMs have been all kinds of weird things you run into in the optics world: Pockels cells, phototitus converters, Kerr rotation; but the main technology used is the familiar LCD. Interestingly, some MIT researchers have recently come up with LCDs that are capable of very fast switching. This could be the innovation which makes optical computing real.


Things Optalysys isn’t: This certainly isn’t an “all optical computer.” The “all optical computer” is sort of the philosopher’s stone of the optical computing world. If they had accomplished this, they’d certainly claim it, and people like me would be plagued with doubt. It is also not any kind of “quantum computer,” for the same reasons. In fact, they more or less say it isn’t even a digital computer.

What Optalysys is: From the vague descriptions on their website, this is a standard analog optical computing architecture consisting of a high resolution detector, at least one SLM, a lens and/or mirror or two, and an array of laser diodes. Presumably there has been some improvement which has made this a practicable commercial item; the MIT fast LCD might eventually be one of them. This should be seen as a sort of “math co-processor” rather than a general purpose computer, though it does some important math.


What can it do? Matrix math, basically. Even thinking about this in the most hand-wavey manner, optical computers should be good at image processing. When you break down what is involved in image processing, matrix math and Fourier transforms come to mind. We know how to do matrix math with optical computers. The limiting aspect is how many elements you can encode and modulate quickly. An optical matrix multiplication doodad will do it in O(const) time, up to the limiting size of the matrices that can be encoded in such a machine. This is huge, as ordinary matrix multiplication is O(N^3) for square matrices of size N. There is all manner of work done on improving matrix multiplication and other forms of matrix math via statistical methods (subject of a coming blog post); the problem is very important, as O(N^3) is pretty bad on large problems. People who have studied the history of computer science know how important the Fast Fourier Transform was. It reduced the 1-d Fourier transform from O(N^2) operations to O(N log(N)), opening up vast new applications for signal processing techniques. Optical computing can in principle do Fourier transforms in O(const) [pdf link] up to some maximum encoding size. Even better, it can do the 2-dimensional version in the same O(const) time. Even the FFT is, of course, O(N^2 log(N)) for two-dimensional problems, so this is a potentially significant speedup over conventional computing.
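
To make the Fourier-plane picture concrete, here is a minimal numerical sketch (my own, not Optalysys’s) of what a classic 4f optical correlator computes in one pass of light: a forward 2-D transform, a pointwise multiply by a filter, and an inverse transform. The scene, the embedded pattern, and the sizes are all made up for illustration; numpy’s FFT stands in for the lenses, at O(N^2 log N) instead of the optical O(const).

```python
import numpy as np

# Sketch of a 4f correlator: lens -> Fourier plane filter -> lens.
# Digitally this is two 2-D FFTs and a pointwise multiply; optically,
# the transforms come "for free" in a single pass of light.

N = 512                                   # comparable to a 500x500 SLM
rng = np.random.default_rng(1)

scene = 0.1 * rng.random((N, N))          # noisy input scene on the SLM
scene[100:132, 100:132] += 1.0            # embed the pattern we will look for

template = np.zeros((N, N))
template[:32, :32] = 1.0                  # reference pattern at the origin

F_scene = np.fft.fft2(scene)              # first "lens": forward 2-D transform
F_ref = np.conj(np.fft.fft2(template))    # matched filter in the Fourier plane
corr = np.fft.ifft2(F_scene * F_ref).real # second "lens": back to the image plane

# The brightest spot marks where the pattern sits in the scene.
print(np.unravel_index(np.argmax(corr), corr.shape))   # ~ (100, 100)
```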


Most programmers probably have glazed eyes at this point. Who cares, right? You care more than you think. Many of the elementary operations used by programmers have something matrix related at their cores. PageRank is an algorithm that has become essential to modern life. It is effectively a very efficient way of finding the most important eigenvector of a very large matrix in roughly O(N), rather than the O(N^3) it takes to find all the eigenvalues and then pick the biggest one. It is an underappreciated fact that Brin and Page turned a nice thesis topic in linear algebra into one of the great companies of the world, but this sort of thing demonstrates why linear algebra and matrix math are so important. Other database search algorithms are related to the Singular Value Decomposition (SVD). SVD is also used in many scalable forms of data mining and machine learning, and I’m pretty sure it works faster on an optical computer than on an ordinary one. There are also pattern recognition techniques that only make sense on optical computers. Since nobody sells optical computers, it is not clear where they might be useful, but they might very well be helpful for certain kinds of problems; certainly pattern matching on images could use a speedup. The company is shooting for the notoriously computationally intractable field of computational fluid dynamics (CFD) first. CFD involves a lot of matrix math, including 2-d FTs in some realizations.
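
As a toy sketch of why repeated matrix-vector products matter: PageRank can be computed by power iteration, where each pass is one multiplication by the link matrix; exactly the kind of operation an optical matrix multiplier would do in a single frame. The four-page web graph and damping factor below are invented for illustration (real link matrices are, of course, enormous and sparse).

```python
import numpy as np

# Power iteration: repeatedly multiply a vector by the (column-stochastic)
# "Google matrix" until it settles on the dominant eigenvector.  Each pass
# is one matrix-vector product.

links = np.array([[0, 0, 1, 0],        # links[i, j] = 1 if page j links to page i
                  [1, 0, 0, 0],
                  [1, 1, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

M = links / links.sum(axis=0)           # normalize columns (no dangling pages here)
d = 0.85                                # usual damping factor
n = M.shape[0]
G = d * M + (1 - d) / n * np.ones((n, n))   # Google matrix

rank = np.ones(n) / n
for _ in range(50):                     # 50 matrix-vector products
    rank = G @ rank

print(rank)                             # stationary importance scores, summing to 1
```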

Scale: right now, Optalysys is claiming a fairly humble 40 GFLOPS for their prototype, which is far from exceptional (laptops do about the same in a much more general way). I think a lot of people pooped their pants at the “eventual exascale processor” claims in their press release. In principle they can scale this technology, but of course, a lot of things are possible “in principle.” The practical details are the most important thing. Their technology demonstrator is supposed to have 500×500 pixels in the SLM element. This is way too small for important problems; they might even be using an overhead projector LCD for the prototype doodad (the 20 Hz frame rate kind of implies this might be the case): this is quite possible to do if you know some optics and have a good detector. How to scale a 500×500 element device to something of exascale performance is far from clear, but the most obvious way is to add more and finer elements, and run the thing faster. A factor of 25 million or so sounds challenging, but increasing the frame rate by a factor of a thousand and the number of elements by a factor of a thousand seems like something that actually could be done on a desktop, as they claim.
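
For what it’s worth, here is the back-of-envelope arithmetic made explicit, using only the figures quoted above; it says nothing about how the machine would actually get there.

```python
# Back-of-envelope scaling check using only the numbers quoted above;
# the throughput model itself is left unspecified, this is just the ratios.

prototype = 40e9                 # claimed prototype throughput, FLOPS
target = 1e18                    # an exascale target, FLOPS
print(target / prototype)        # 2.5e7: overall gain required

frame_rate_gain = 1_000          # faster SLM switching (fast LCDs, say)
element_gain = 1_000             # more / finer SLM pixels
print(frame_rate_gain * element_gain)   # 1e6: most of the way there from those two knobs
```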

Problems to look out for:

 

  1. LCDs: I believe the LCD technology is fast enough to be useful, but lifespan, quality, and reliability will be limiting factors. If the fast LCD gizmo only lasts a few million cycles of shining lasers through it, this will be a problem. If the pixels randomly burn out or die, well, you have to know about this, and the more that die, the worse the calculation will be. At this point I also assume the pixels can be made small enough to be useful. On the other hand, if they are using an overhead projector LCD and have no plans to radically upgrade the number of pixels in some way, and the completed machine is fairly large, it probably won’t be improving to exascale any time soon.
  2. Detectors: I’m guessing there haven’t been optical computers yet mostly because the output detectors have been too noisy for low power lasers. This is a guess; I’m not interested enough in such things to investigate deeply, but it is a pretty good guess based on experience working with things optical. So, if the output detector system isn’t very good, this won’t work very well. It will be very different from writing code on a floating point processor. For one thing, you’ll be using a lot fewer than 64 bits per number. I’m guessing, based on the fast detectors used in streak cameras, this will be something more like 16 bits, maybe fewer (a toy illustration of the precision issue follows this list). This is fine for a lot of physical problems, assuming it works at 16-bit resolution on a fast time scale. If you have to sit around and integrate for a long time to encode more than 16 bits (or, say, increase the laser power used for more dynamic range), well, it might not work so well.
  3. Data loading and memory: this thing could do very fast Fourier transforms and array math even if the data load operation is relatively slow, and the “registers” are fairly narrow. They are only claiming a frame rate of 20 Hz on their prototype, as I said above. On the other hand, if it’s going to do something like a petabyte pagerank type thing, loading data quickly and being able to load lots of data is going to be really important. It’s not clear to me how this sort of thing will scale in general, or how to deal with values that are inherently more like 64 bits than 16 bits. If the frame rate increases by a factor of a million, which looks possible with the MIT breakthrough, assuming their detectors are up to it, a lot of things become possible.
  4. Generality: analog computers were used for “big O” difficult calculations until relatively recently, often on much the same kinds of CFD problems as Optalysys hopes to compete in. I suppose they still are in some fields, using various kinds of ASICs and FPAAs. The new IBM neural net chip is a recent example, if I understand it correctly. The problem with them is that they are very difficult to program. If this thing isn’t at least generally good at matrix mathematics, or requires some very expensive “regular computer” operations to do general computing, you might not be gaining much.
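
Here is the toy precision sketch promised in point 2: a 2-D FFT done in double precision versus the same transform with the input and the readout rounded to 16-bit levels. The signal, the array size, and the bit depth are all assumptions for illustration, not numbers from Optalysys.

```python
import numpy as np

# Compare a double-precision 2-D FFT against one where the "SLM" input and
# the "detector" readout are each rounded onto 2**16 evenly spaced levels.

rng = np.random.default_rng(0)
signal = rng.standard_normal((256, 256))   # invented test data

def quantize(x, bits=16):
    """Round x onto 2**bits evenly spaced levels spanning its range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

exact = np.fft.fft2(signal)
coarse = np.fft.fft2(quantize(signal))                        # quantized input
coarse = quantize(coarse.real) + 1j * quantize(coarse.imag)   # quantized readout

rel_err = np.linalg.norm(coarse - exact) / np.linalg.norm(exact)
print(f"relative error at 16 bits: {rel_err:.2e}")   # small, but nowhere near 1e-16
```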

I’m hoping this actually takes off, because I’d love to sling code for such a machine.

 

Decent history of optical computing:

http://www.hindawi.com/journals/aot/2010/372652/

 

Their cool video, Narrated by Heinz Wolff:

https://www.youtube.com/watch?v=T2yQ9xFshuc

 

 Kinds of hard problems this thing might work well on:

http://view.eecs.berkeley.edu/wiki/Dwarfs

 

Edit add:

One of their patents (I was unable to find any last night for some reason), which includes commentary on the utility of this technique in problems with fractional derivatives. Fractional derivatives are the bane of finite element analysis; I ran into this most recently trying to help a friend do microwave imaging on human brains. I’ll add some more commentary as I go through the patent later this evening.

http://www.freepatentsonline.com/8610839.html
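
For what it’s worth, the fractional derivative connection is easy to sketch numerically: in Fourier space, a fractional derivative of order α is just pointwise multiplication by (ik)^α, so a machine that does Fourier transforms cheaply makes the operation cheap. The recipe below is the standard spectral one applied to a made-up test function, not anything taken from the patent.

```python
import numpy as np

# Spectral fractional derivative: transform, multiply by (i*k)**alpha,
# transform back.  For sin(x) the half-derivative is sin(x + alpha*pi/2),
# which we use as a check.

N = 1024
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.sin(x)                                   # test signal
alpha = 0.5                                     # half-derivative

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi      # angular wavenumbers
spectrum = np.fft.fft(f)
half_deriv = np.fft.ifft((1j * k) ** alpha * spectrum).real

print(np.max(np.abs(half_deriv - np.sin(x + alpha * np.pi / 2))))   # ~1e-13
```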

 

 

Spotting vaporware: three follies of would-be technologists

Posted in Design, nanotech, non-standard computer architectures, Progress by Scott Locklin on October 4, 2010

When I was a little boy in the 70s and 80s, I was pretty sure that by the 21st century I’d drive around in a hovercraft and take space vacations on the moons of Saturn. My idea of a future user interface for a computer was not the 1970s emacs interface that the cleverest people still use to develop software today; I’d just talk to the thing, HAL-9000 style. I suppose my disappointments with modern technological “advances” are the boyish me complaining that I didn’t get my hovercraft and talking artificial brain. What boggles me is the gaping credulity with which intelligent people now treat alleged future technologies.

A vast industry of professional bullshit artists has risen up to promote and regulate technologies which will never actually exist. These nincompoops and poseurs are funded by your tax dollars; they fly all over the world  attempting to look important by promising to deliver the future. All they actually deliver is wind and public waste.  Preposterous snake oil salesmen launched an unopposed blitzkrieg strike on true science and technology during my lifetime. I suspect the scientific and technological community’s rich marbling with flâneurs is tolerated because they bring in government dollars from the credulous; better not upset anybody, or the gravy train might stop flowing!

While I have singled out Nano-stuff for scorn in an article I’d describe as “well received” (aka, the squeals of the ninnies who propagate this nonsense were sweet music to my ears), there are many, many fields like this.

The granddaddy of them all is probably magnetic confinement nuclear fusion. This is a “technology” which has been “just 20 years in the future” for about 60 years now. It employs a small army of plasma physicists and technicians, most of whom are talented people who could be better put to use elsewhere. At some point, it must be admitted that these guys do not know what they are doing: they can’t do what they keep promising, and in fact, they have no idea how to figure out how to do it.

I think there is a general principle you can derive from the story of magnetic confinement fusion. I don’t yet have a snappy name for it, so I’ll call it, “the folly of plan by bureaucracy.” The sellers of such technology point out that it is not known to be impossible, so all you need do is shower them in gold, and they will surely eventually deliver. There are no intermediate steps given, and there is no real plan to even develop a plan to know if the “big idea” is any good. But they certainly have a gigantic bureaucratic organizational chart drawn up. The only time large bureaucracies can actually deliver specific technological breakthroughs (atom bombs, moon shots) is when there is a step by step plan on how to do it. Something that would fit in Microsoft project or some other kind of flow chart. The steps must be well thought out, they must be obviously possible using small improvements on current techniques, and have a strict timeline for their completion. If any important piece is missing, or there are gaping lacunae in the intermediate steps, the would-be technology is a fool’s mission. If there is no plan or intermediate steps given: this would-be technology is an outright fraud which must be scorned by serious investors, including the US government.

To illustrate this sort of thing in another way: imagine if someone shortly after the Bernoulli brothers asked the King for a grant to build a mighty aerostat which travels at 3 times the speed of sound. Sure, there is no physical law that says we can’t build an SR-71 … just the fact that 18th century technologists hadn’t invented heavier-than-air flight, the jet engine, aerodynamics, refined hydrocarbons, computers or titanium metallurgy yet. Of course, nobody in those days could have thought up the insane awesomeness of the SR-71; I’m guessing a science fiction charlatan from those days might imagine some kind of bird-thing with really big wings, powered by something which is thermodynamically impossible. Giving such a clown money to build such a thing, or steps towards such a thing, would have been the sheerest madness. Yet, we do this all the time in the modern day.

A sort of hand wavey corollary  based again on fusion’s promises (or, say, the “war on cancer”), I like to call, “the folly of 20 year promises.” Bullshitters love to give estimates that allow them to retire before they’re discovered as frauds; 20 years is about long enough to collect a pension. Of course, a 20 year estimate may be an honest one, but I can’t really think of any planned, specific technological breakthrough developed by a bureaucracy over that kind of time scale, and I can think of dozens upon dozens which have failed miserably to the tune of billions of research dollars. What “20 years” means to me is,  “I don’t actually know how to do this, but I  wish you’d give me money for it anyway.”

A burgeoning quasi-technological field which is very likely to be vaporware is that of quantum computing. This pains me to say, as I think the science behind this vaporware technology is interesting. The problem is, building quantum gates (the technology needed to make this theoretical concept real) is perpetually somehow 20 years off in the future. We even have a very dubious company, founded and in operation for 11 years. I don’t know where they get their money, and they manage to publish stuff at least as respectable as the rest of the QC field, but … they have no quantum computer. Granted, many in the academic community are attempting to keep them honest, but their continued existence demonstrates how easy it is to make radical claims without ever being held to account for them.

David Deutsch more or less invented the idea of the quantum computer in 1985. It is now 25 years later, and there is still no quantum computer to be seen. I think Deutsch is an honest man, and a good scientist; his idea was more quantum  epistemology than an attempt to build a practical machine that humans might use for something. The beast only took on a bureaucratic life of its own after Peter Shor came up with an interesting algorithm for Deutsch’s theoretical quantum computers.

Now, let us compare this to the invention of modern computers by John von Neumann and company in 1945. Von Neumann’s paper can be considered a manual for building a modern computer. It described an architecture which had already been built, in a way that made its mathematical understanding and reproduction relatively simple. Most computers in use today are the direct result of this paper. I’d argue that it was engineering types like Herman Goldstine, John Mauchly, and J. Presper Eckert, who actually built digital electronic computers before von Neumann’s paper, who made the computer possible. In turn, their ideas were based on those of analog computers, which have a long and venerable history dating back to the ancient Greeks. The important thing to notice here is that the theory of binary digital computers came after the invention, not the other way around.

Now, it is possible for theory to have a stimulating effect on technology: von Neumann’s paper certainly did. But it is rare, verging on nonexistent, to derive all the properties of a non-existent technology using nothing but abstract thought. The way real technology is developed: the technology, or some sort of precursor gizmo or physical phenomenon, comes first. Later on, some theory is added to the technology as a supplement to understanding, and the technology may be improved. Sort of like, in real science, the unexplained phenomenon generally comes first; the theory comes later. In developing a new technology, I posit that this sort of “theory first” ideology is intellectual suicide. I call this “the folly of premature theory.” Theory doesn’t build technologies: technologies build theories.

Technology is what allows us our prosperity, and it must be funded and nurtured, but we must also avoid funding and nurturing parasites. Cargo-cult scientists and technologists are not only wasteful of money; they waste human capital. It makes me sad to see so many young people dedicating their lives to snake oil like “nanotechnology.” They’d be better off starting a business or learning a trade. “Vaporware technologist” would be a horrible epitaph for a misspent life. I have already said I think technological progress is slowing down. While I think this is an overall symptom of a decline in civilization, I think the three follies above are some of the proximate causes of this failing. Bruce Charlton has documented many other such follies, and if you’re interested in this sort of thing, I recommend reading his thoughts on the matter.
