Locklin on science

Optalysys and Optical computing

Posted in non-standard computer architectures by Scott Locklin on August 11, 2014

Years and years ago, when I was writing up my dissertation and thinking about what I was going to do with my life, I gave some thought to non von-Neumann computing architectures. This was more or less the dawn of the quantum computing/quantum information era as a field, when it turned from an obscure idea to a sort of cottage industry one could get tenure in.  I had the idea in my mind that the Grover Algorithm could be done on a classical optical architecture, and wrote many equations in a fat notebook I now keep in my kitchen between a cookbook and a book on Cold War Submarines.  I suppose had I written some more equations and discovered I wasn’t full of baloney, I might have made a nice academic career for myself as a designer of novel computing architectures, rather than the underemployed constructor of mathematical models for businesses and sporadic blogger I have turned into. Be that as it may, my erstwhile interests have equipped me with some modest background in how the future gizmos might work.

While quantum computing appears to be turning into a multi-decade over-hyped boondoggle of a field, there is no reason why optical computers might not become important in the near future. Years before my attempted non von-Neumann punt into the intellectual void, my old boss Dave Snoke handed me a book called “Fundamentals of Photonics” which implied optical computers might one day be very important. The book contained a few chapters sketching outlines of how future photonic computers might work. Armed with this knowledge, and the fact that a new startup called Optalysys is claiming practical optical computers are right around the corner, perhaps these endless hours of wasted time can be converted into a useful blog post.


The idea for optical computing has been around for decades. Some trace it back to Zernike’s phase contrast microscopes. Certainly by the 60s, Vander Lugt [pdf link] and company were thinking about optics in terms of signal processing and computation. It began to be a real technology in the 70s with the invention of “spatial light modulators” (SLM) of various kinds. Historical SLMs have been all kinds of weird things you run into in the optics world: Pockels cells, phototitus converters, Kerr rotation; but the main technology used is the familiar LCD. Interestingly, some MIT researchers have recently come up with LCDs that are capable of very fast switching. This could be the innovation which makes optical computing real.


Things Optalysys isn’t: This certainly isn’t an “all optical computer.” The “all optical computer” is sort of the philosopher’s stone of the optical computing world. If they had accomplished this, they’d certainly claim it, and people like me would be plagued with doubt. It is also not any kind of “quantum computer,” for the same reasons. In fact, they more or less say it isn’t even a digital computer.

What Optalysys is: From the vague descriptions on their website, this is a standard analog optical computing architecture consisting of a high resolution detector, at least one SLM, a lens and/or mirror or two, and an array of laser diodes. Presumably there has been some improvement which has made this a practicable commercial item; the MIT fast LCD might eventually be one of them. This should be seen as a sort of “math co-processor,” rather than a general purpose computer, though it does some important math.
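
Here is roughly what such a co-processor computes, as a toy numerical model: the SLM encodes the input image, a lens physically performs a 2-D Fourier transform in a single pass (np.fft.fft2 stands in for the physics below), a filter is applied in the Fourier plane, a second lens transforms back, and the detector reads off intensity. To be clear, this 4f matched-filter arrangement is my illustration of the generic architecture, not Optalysys’s published design.

```python
import numpy as np

def optical_correlator(scene, target):
    """Toy 4f matched-filter correlator: laser -> SLM (scene) -> lens (FT)
    -> filter SLM -> lens (FT back) -> intensity detector. Bright output
    spots mark where `target` appears in `scene`."""
    Scene = np.fft.fft2(scene)
    Filt = np.conj(np.fft.fft2(target, s=scene.shape))  # matched filter
    field = np.fft.ifft2(Scene * Filt)                  # second lens
    return np.abs(field) ** 2                           # detector sees intensity only

scene = np.zeros((64, 64))
scene[20:24, 30:34] = 1.0                         # plant a 4x4 "target" in the scene
out = optical_correlator(scene, np.ones((4, 4)))
print(np.unravel_index(out.argmax(), out.shape))  # (20, 30): target located
```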

[Figure: Optalysys optical computing concept, with multiple lasers]

What can it do? Matrix math, basically. Even thinking about this in the most hand-wavey manner, optical computers should be good at image processing. When you break down what is involved in image processing, matrix math and Fourier transforms come to mind. We know how to do matrix math with optical computers. The limiting aspect is how many elements you can encode and modulate quickly. An optical matrix multiplication doodad will do it in O(const) time, up to the limiting size of the matrices that can be encoded in such a machine. This is huge, as naive matrix multiplication is O(N^3) for square matrices of size N. There is all manner of work done on improving matrix multiplication and other forms of matrix math via statistical methods (subject of a coming blog post); the problem is very important, as O(N^3) is pretty bad on large problems. People who have studied the history of computer science know how important the Fast Fourier Transform was. It reduced the 1-d Fourier transform from O(N^2) operations to O(N log(N)), opening up vast new applications to signal processing techniques. Optical computing can in principle do Fourier transforms in O(const) [pdf link] up to some maximum encoding size. Even better, it can do the 2-dimensional version in the same O(const) time. Even the FFT is, of course, O(N^2 log(N)) for two-dimensional problems on an N×N grid, so this is a potentially significant speedup over conventional computing.
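
To make the “we know how to do matrix math with light” claim concrete, here is a numerical sketch of the classic optical vector-matrix multiplier (the old fan-out-through-a-mask arrangement, not anything Optalysys has published): a column of light sources encodes the vector, a 2-D attenuating mask encodes the matrix, and each detector integrates one column of transmitted light. The physics does all N^2 multiply-adds simultaneously, one “frame” regardless of N, which is where the O(const) comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                            # roughly the prototype's SLM resolution
M = rng.random((N, N))             # mask transmittances, values in [0, 1]
x = rng.random(N)                  # source intensities (nonnegative, like light)

# Electronic equivalent of what the light field does in one pass:
fan_out = x[:, np.newaxis] * M     # source i spreads across row i of the mask
y = fan_out.sum(axis=0)            # detector j integrates column j
assert np.allclose(y, x @ M)       # same answer as the N^2 multiply-add version
print(y[:3])
```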


Most programmers’ eyes have probably glazed over at this point. Who cares, right? You care more than you think. Many of the elementary operations used by programmers have something matrix-related at their cores. Pagerank is an algorithm that has become essential to modern life. It is effectively a very efficient way of finding the most important eigenvector of a very large matrix in O(N) per pass, rather than the O(N^3) it takes to find all the eigenvalues and then pick out the biggest one. It is an underappreciated fact that Brin and Page turned a nice thesis topic in linear algebra into one of the great companies of the world, but this sort of thing demonstrates why linear algebra and matrix math are so important. Other database search algorithms are related to Singular Value Decomposition (SVD). SVD is also used in many scalable forms of data mining and machine learning, and I’m pretty sure it works faster on an optical computer than on an ordinary one. There are also pattern recognition techniques that only make sense on optical computers. Since nobody sells optical computers, it is not clear where they might be useful, but they might very well be helpful for certain kinds of problems; certainly pattern matching on images could use a speedup. The company is shooting for the notoriously computationally intractable field of computational fluid dynamics (CFD) first. CFD involves a lot of matrix math, including 2-d FTs in some realizations.
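
For the curious, here is the power-iteration trick in miniature, on a toy four-page link graph; the 0.85 damping factor is the conventional choice, and none of this is specific to Optalysys. The point is that each step is a single matrix-vector product, exactly the operation an optical multiplier does in one pass.

```python
import numpy as np

# Toy 4-page link graph: row i lists the pages that page i links to.
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
P = links / links.sum(axis=1, keepdims=True)  # row-stochastic transitions
d, n = 0.85, len(P)                           # conventional damping factor
G = d * P.T + (1 - d) / n                     # the "Google matrix"

r = np.full(n, 1.0 / n)                       # start from the uniform vector
for _ in range(50):                           # power iteration: repeated mat-vecs
    r = G @ r                                 # one optical "pass" per step
print(r.round(3))                             # page ranks, summing to 1
```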

Scale: right now, Optalysys is claiming a fairly humble 40 Gflops in their prototype, which is far from exceptional (laptops do about the same in a much more general way). I think a lot of people pooped their pants at the “eventual exascale processor” claims in their press release. In principle they can scale this technology, but of course, a lot of things are possible “in principle.” The practical details are the most important thing. Their technology demonstrator is supposed to have 500×500 pixels in the SLM element. This is way too small for important problems; they might even be using an overhead projector LCD for the prototype doodad (the 20Hz frame rate kind of implies this might be the case): this is quite possible to do if you know some optics and have a good detector. How to scale a 500×500 element device to something at exascale is far from clear, but the most obvious way is to add more and finer elements, and run the thing faster. An increase by a factor of 2.5 million or more sounds challenging, but increasing the frame rate by a factor of a thousand and the number of elements by another factor of a thousand seems like something that actually could be done on a desktop, as they claim.
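
Some back-of-envelope arithmetic on that scaling claim (my numbers, not theirs):

```python
# Take the claimed 40 Gflop/s prototype and see what the obvious knobs buy.
base = 40e9               # claimed prototype throughput, flop/s
frame_rate_gain = 1_000   # e.g. 20 Hz -> 20 kHz with faster LCDs
element_gain = 1_000      # e.g. 500x500 -> ~16,000x16,000 pixels
scaled = base * frame_rate_gain * element_gain
print(f"{scaled:.1e} flop/s")   # ~4e16 flop/s, i.e. 40 Pflop/s
print(1e18 / scaled)            # still a factor of 25 short of an exaflop
```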

Problems to look out for:

 

  1. LCDs: I believe the LCD technology is fast enough to be useful, but lifespan, quality and reliability will be limiting factors. If the fast LCD gizmo only lasts a few million cycles of shining lasers through it, this will be a problem. If the pixels randomly burn out or die, well, you have to know about this, and the more that die, the worse the calculation will be. At this point I also assume the pixels can be made small enough to be useful. On the other hand, if they are using an overhead projector LCD and have no plans to radically upgrade the number of pixels in some way, and the completed machine is fairly large, it probably won’t be scaling to exascale any time soon.
  2. Detectors: I’m guessing there haven’t been optical computers yet mostly because the output detectors have been too noisy for low power lasers. This is a guess; I’m not interested enough in such things to investigate deeply, but it is a pretty good guess based on experience working with things optical. So, if the output detector system isn’t very good, this won’t work very well. It will be very different from writing code on a floating point processor. For one thing, you’ll be using a lot fewer than 64 bits per number. I’m guessing, based on the fast detectors used in streak cameras, this will be something more like 16 bits; maybe fewer. This is fine for a lot of physical problems, assuming it works at 16 bits resolution on a fast time scale. If you have to sit around and integrate for a long time to encode more than 16 bits (or, say, increase the laser power used for more dynamic range), well, it might not work so well. There is a numerical sketch of this point after the list.
  3. Data loading and memory: this thing could do very fast Fourier transforms and array math even if the data load operation is relatively slow, and the “registers” are fairly narrow. They are only claiming a frame rate of 20Hz on their prototype, as I said above. On the other hand, if it’s going to do something like a petabyte pagerank type thing, loading data quickly and being able to load lots of data is going to be really important. It’s not clear to me how this sort of thing will scale in general, or how to deal with values that are inherently more like 64 bits than 16 bits. If the frame rate increases by a factor of a million, which looks possible with the MIT breakthrough, assuming their detectors are up to it, a lot of things become possible.
  4. Generality: analog computers have been used for “big O” difficult calculations until relatively recently; often on much the same kinds of CFD problems as Optalysys hopes to compete in. I suppose they still are in some fields, using various kinds of ASICs and FPAAs. The new IBM neural net chip is a recent example, if I understand it correctly. The problem with them is that they are very difficult to program. If this thing isn’t generally good at matrix mathematics at least, or requires some very expensive “regular computer” operations to do general computing, you might not be gaining much.
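
To make the detector worry in item 2 concrete, here is a little numerical experiment; the bit depths are my assumptions, not anything Optalysys has specified:

```python
import numpy as np

# Do an "optically" computed 2-D power spectrum, then pretend the detector
# can only report `bits` bits of intensity, and see what survives.
rng = np.random.default_rng(1)
spectrum = np.abs(np.fft.fft2(rng.standard_normal((500, 500)))) ** 2

def detector_readout(a, bits):
    """Round to 2**bits uniform levels across the detector's full scale."""
    step = a.max() / (2 ** bits - 1)
    return np.round(a / step) * step

for bits in (8, 12, 16):
    err = np.abs(detector_readout(spectrum, bits) - spectrum).max()
    print(f"{bits} bits: worst error = {err / spectrum.max():.1e} of full scale")
# 16 bits leaves ~1e-5 of full scale; a 64-bit float carries ~1e-16.
```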

I’m hoping this actually takes off, because I’d love to sling code for such a machine.

 

Decent history of optical computing:

http://www.hindawi.com/journals/aot/2010/372652/

 

Their cool video, narrated by Heinz Wolff:

 

Kinds of hard problems this thing might work well on:

http://view.eecs.berkeley.edu/wiki/Dwarfs

 

Edit add:

One of their patents (I was unable to find any last night for some reason), which includes commentary on the utility of this technique in problems with fractional derivatives. Fractional derivatives are the bane of finite element analysis; I ran into this most recently trying to help a friend do microwave imaging on human brains. I’ll add some more commentary as I go through the patent later this evening.

http://www.freepatentsonline.com/8610839.html
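
For the curious, here is why a fast Fourier-transform engine is relevant to fractional derivatives at all: under the spectral definition, a derivative of order alpha is just pointwise multiplication by (ik)^alpha in Fourier space, so the whole computation is transform, multiply, transform back. A minimal sketch under that convention (one of several, and not necessarily the method in the patent):

```python
import numpy as np

def fractional_derivative(f, alpha, dx=1.0):
    """Spectral fractional derivative: multiply by (i*k)**alpha in
    Fourier space. Assumes f is periodic on its grid."""
    k = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)   # angular wavenumbers
    return np.fft.ifft((1j * k) ** alpha * np.fft.fft(f)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# Sanity check: alpha = 1 should reproduce d/dx sin(x) = cos(x)
print(np.allclose(fractional_derivative(np.sin(x), 1.0, dx=x[1] - x[0]),
                  np.cos(x), atol=1e-8))           # True
# A half-derivative, the sort of thing finite elements handle badly:
half = fractional_derivative(np.sin(x), 0.5, dx=x[1] - x[0])
```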

 

 

17 Responses


  1. Brian said, on August 12, 2014 at 3:19 am

    How far away ARE we from mastering this technology? Is this another fusion tease? This sure seems like disruptive technology to me. But I sell ice cream for a living so I’m not exactly sure.

    • Scott Locklin said, on August 12, 2014 at 5:49 am

      As far as I know, it has been possible to do something like this for some time.

      • Toddy Cat said, on August 16, 2014 at 11:42 pm

        So why don’t you think more has been done with it? I mean, technology that actually involves moving large chunks of metal like aircraft and space shots and flying cars has been slowing down or even stagnating since the early seventies or so, but computer technology has seemed to be the exception. Is there something here that I’m not seeing? I mean, this sounds revolutionary.

        • Scott Locklin said, on August 17, 2014 at 12:06 am

          There are practical issues alluded to in various comments and in the blog itself: detectors, generality, reliability, noise, etc. That said, they are quietly being used in some places. In my pal’s patent, for doing brain imaging. In the example mentioned in another comment, for reading labels on smokes.
          New technologies are expensive, and businesses and even governments don’t like using things they can’t treat as commodities. There’s been various interesting competitors for disk drives for decades, but we still use disk drives. Incrementalism is something you can plan on; disruptions are a big risk.

          • ippisl (@ippisl) said, on August 27, 2014 at 12:00 pm

            > There’s been various interesting competitors for disk drives for decades

            What are those competitors you speak of ?

            • Scott Locklin said, on August 27, 2014 at 9:35 pm

              Magneto-optics technologies, bubble memory, memristors, holographic memory, phase change memory, PMC memory, ferroelectrics, racetrack memory… of course, flash memory … the list of contenders is endless.

  2. Darth said, on August 12, 2014 at 12:48 pm

    I did my master’s on optical computing with computer-generated holograms (think: computing some whacky 2-D complex-valued transfer function, mapping it via some fancy math to some finite number of levels and etching it in glass using IC techniques, or to binary 0/1 values expressed on film). This was useful for fixed filters, e.g. for looking for specific known targets using correlation filters. We also used mirrors from Texas Instruments and LCDs to do this adaptively. This was around 1991-1993. My MSc advisor had a nice business in this area for the military (we did SAR stuff and target-detection filters), and for private companies (Philip Morris paid him nicely for a fast detector for various labels, tax stamps etc. on cigarette cartons; those cartons came by at an insane speed).

    But it was a dying field. I am not a fundi on quantum stuff, but for regular(?) computing you need some nonlinearity and that means a ton of power and wear, or some conversion to electrons (LCD needs to be loaded, detectors pinged, lasers modulated etc). Around that time we just started to build some one-off correlation chips using gate array logic and other simple programmable chips — and they very quickly outperformed the optics. Also, resolution and other wavelength problems meant the optical systems were bulkier and packed fewer bits/area. After some back of the envelope calculations and a 3 year projection of Moore’s law I gave up and did standard Signal Processing for my PhD. And that decision was correct.

    Bottom line — subject to a very limited knowledge of these guys — but based on where we were back in the nineties (256×256 LCDs and 2048×2048 phase holograms) — and comparing against their claims — it sure sounds like they got a hold of my adviser’s circa 1993 pitch book and technology.

    • Scott Locklin said, on August 12, 2014 at 7:09 pm

      I remember power consumption and heat dissipation being a big deal back in the day. Didn’t realize that optical computers had been used for image recognition in a commercial setting, though I guess I am not super surprised. Cool that it was funded by the evil company now known as Altria.

      Anyway, it does still get used. One of my old colleagues is using it in his research for pattern recognition on brains:
      https://www.google.com/patents/CN102202561A?

  3. parkhays said, on August 12, 2014 at 1:06 pm

    I ran into rumors of optical computing years ago from a greybeard at my company. As I recall they were looking at it for Fourier transforms for signal processing, and did so by launching a surface acoustic wave across (I think) the Fourier plane of a surface-contained optical path. No need to do this for 1D calculations any more, but pretty cool then.

  4. Seems to me that much of this ground was plowed many years ago by Dave Casasent at Carnegie Mellon. http://users.ece.cmu.edu/~casasent/

    • Scott Locklin said, on August 19, 2014 at 6:39 pm

      Yep, like I said, it’s been around for a while. That webpage brings back some memories though. I walked by his lab on the way to Kryder’s place quite a few times, I think.

  5. JesseCoole (@virtualevil) said, on November 10, 2014 at 5:12 am

    Hi Scott, I am pretty much a layman for this sort of stuff. But could you please explain to me in simple terms whether Lightwave Logic’s non-linear organic electro-optical and all-optical polymers (Perkinamine) materials would be something that could help create a breakthrough?

    http://www.lightwavelogic.com/technology/device-technology/

    Do their SLMs using these organic materials mean that they are an upgrade to LCD technology?

    I have been following this company for a while and they seem like they are on to something. Some pretty impressive people joining the company over the last 12 months. Are they on to something?

    Cheers

    Jesse

    • Scott Locklin said, on November 11, 2014 at 3:14 am

      Roughly speaking I guess you’re on the right track, though the materials are quite different from LCDs. I went to a seminar on the subject of organic electro-optics about 15 years ago. My boss at the time thought it was important enough I should go check it out.
      I have no idea if this is viable or not, but if it does what they say it does (i.e., you can do very fast switching “all optical”), it should be pretty important. It would also make the Optalysys computer work a lot better, unless I am mistaken. Though of course, that would not be the first use of such technology.

      • JesseCoole (@virtualevil) said, on November 11, 2014 at 3:41 am

        Thanks for replying Scott, I have been looking into this stuff only recently and trying to get my head around photonics and how it all works. I’m interested in this company given its CEO is a former Admiral who was in charge of signals and naval space command for a decade or more until recently. They also appear to be doing a lot of work with the University of Colorado Boulder and in particular A/Prof Alan Mickelson who I believe is a world expert in optics and photonics.

        Lightwave Logic also seem to be in a key research partnership with Boulder Non-Linear Systems, that have some pretty significant defence contracts. I agree…I would think this would be the “first use” of any technology.

        It does seem that Lightwave’s commercial interests are in speeding things up and making them more efficient for the Netflixes and Amazons of the world.

        I will be following your blog with interest, keep this sort of stuff coming. Even for us people who have never done a day of engineering!

  6. slehar said, on February 16, 2015 at 12:13 am

    Hi Scott,

    Great article on optical computing. But I think there is even more potential for it than even you suggest. Forget the Spatial Light Modulator (especially ones with discrete pixels) and switch to phase conjugation as your major computational principle. Phase conjugation is much like holography except that 1: the “recording medium” is fundamentally 3-D, not just 2-D, and 2: unlike holograms which are like static photographs, the phase conjugate mirror is like a dynamic hologram capable of being modulated on the fly. Check out my web page on

    An Intuitive Explanation of Phase Conjugation
    http://cns-alumni.bu.edu/~slehar/PhaseConjugate/PhaseConjugate.html

    Two laser beams cross within the solid volume of a nonlinear optical medium, and virtually any transparent optical medium goes nonlinear when the amplitude is great enough. The interference pattern due to the intersecting laser beams warps the glass and thus transforms the “passive” interference pattern into an “active” component that can bend or warp a third laser beam passing through that same volume.

    What kind of computation can be expected from phase conjugation? Check out this article of mine on phase conjugation for three-dimensional perceptual computation to solve the “inverse optics problem” in perception: for a given 2-D retinal image, reconstruct the 3-D world most likely to have been responsible for that image, tracing light rays backward to reverse the optical projection in the eye, and expanding the 2-D stimulus to a 3-D percept based on the Gestalt law of prägnanz (“simplicity”, “regularity”, Occam’s Razor): the geometrically simplest interpretation in 3-D is favored over less regular interpretations, using 3-D optical reconstructive processes. Check out my paper

    The Constructive Aspect of Visual Perception
    http://cns-alumni.bu.edu/~slehar/ConstructiveAspect/ConstructiveAspect.html

    that suggests how this most challenging aspect of perceptual processing could be performed by a parallel analog 3-D optical / wave based computational principle using the principles of phase conjugation.

    It is a quantum leap beyond the simple matrix multiplication in your example, involving 3-D images interacting with other 3-D images to modulate a third 3-D image.

    That’s the real potential of the optical computing paradigm. And whatever works in optical systems could also be implemented in other wave-based systems of oscillations in a 3-D medium.

    Steve Lehar

  7. Conundrum said, on February 13, 2016 at 8:18 am

    It’s an intermediate step between a classical and quantum computer, then?

  8. […] (essentially “What regularities show up in this input stream?”) 40 GFLOPS. The programmers are aiming for a 340-GFLOPS system next year, which, in view of the […]

