Locklin on science

Why quantum mechanics (and electrical engineering) uses complex numbers

Posted in physics by Scott Locklin on January 13, 2021

I make no secret of being a John Horgan fanboy. I came to similar, somewhat less pessimistic conclusions to those of his “The End of Science” on my own without being aware of it (thanks to Bill Dreiss for pointing it out). Over time I have become possibly even more pessimistic; he seems to be correct in his bracing pessimism in almost all regards and might not be pessimistic enough. I can’t imagine trying to learn quantum mechanics in detail at his age, or even at my age, without the proper didactics or mathematical background. It is a heroic thing to attempt, even for a science journalist as good as he is. It bugs me, though, that nobody sat him down and explained complex numbers to him, and why they’re useful in quantum mechanics and other places. It also bothers me that Scott Aaronson can’t figure out why quantum mechanics uses complex numbers either; mostly because he’s supposed to be smarter than me, a mere toiler in the quantitative vineyards rather than a mighty academic and alleged quantum mechanic.

Jawline of a true hero

There are a lot of confusing things about quantum mechanics. Things which rise to the level of actual mysteries in some cases. The use of complex numbers is not among these things. Complex numbers are obviously not physical; I can’t win the square root of negative one quatloos in a gambling game. You can’t measure a complex number in a physical experiment; not in quantum mechanics or anyplace else. While the solutions to the Schroedinger equation written in its usual form are complex, it is the squared modulus of these solutions which is physical, and that’s not a complex number.

Complex numbers are, however, very useful as a mathematical tool. They’re used everywhere in physics and engineering where one must represent oscillatory phenomena or rotations; as far as my dumb ass can tell, that’s one of the main things they’re for. Nobody is confused in electrical engineering when they’re first trotted out as solutions to the differential equations involved in RLC circuits. Nobody is confused in E&M class when you use complex numbers to find solutions to the Maxwell equations. Nobody blinks an eye when they model the generators of the rotations of a solid body with (complex) Pauli Matrices using the Cayley-Klein parameters. Nobody freaks out when classical optics is festooned with complex numbers. Complex numbers are a great way of keeping track of things that have amplitude and phase; that’s why we use them.
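To make the RLC point concrete, here is a minimal Python sketch (my own illustration; the component values are made up) of the standard trick: write the impedance of a driven series RLC circuit as a single complex number, and the amplitude and phase of the response fall right out.

import numpy as np

# a driven series RLC circuit: assumed, made-up component values
R, L, C = 50.0, 1e-3, 1e-9            # ohms, henries, farads
f = np.logspace(4, 8, 9)              # drive frequencies, Hz
w = 2 * np.pi * f

Z = R + 1j*w*L + 1/(1j*w*C)           # complex impedance; one line instead of a page of trig identities
I = 1.0 / Z                           # current phasor for a 1 volt drive

print(np.abs(I))                      # amplitude of the oscillating current
print(np.angle(I))                    # phase relative to the drive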

Consider the Schroedinger equation:

i \hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat H \Psi(\mathbf{r},t)

That little i is what gets people’s panties in a bunch. If the \hat H operator isn’t time-dependent (e.g. the particle in a box or any of the other classic QM 101 examples), the time-dependent piece falls out and you’re left with

E \Psi(\mathbf{r},t) = \hat H \Psi(\mathbf{r},t)

where you can proceed to not worry about it, beyond the fact that the particle in the box has a frequency term which looks like e^{-iEt/\hbar}. Oh muh gawrth, a complex number! Well, really bro, it’s just shorthand for \cos(Et/\hbar + \phi) -Euler taught us this, like 250 years ago. You know the solution to the Schroedinger equation has a time-dependent piece with a frequency proportional to the energy over the quantum \hbar, aka \Psi \sim e^{-iEt/\hbar} -that’s literally what quantum mechanics is: the realization that matter in potential wells oscillates in time with a frequency proportional to the energy divided by the quantum. We know it’s true, not because some wizard invented quantum mechanics, but because we observed this happening in the world of matter. You can write this in a different way, without the complex number to keep track of the oscillatory term. Schroedinger’s early papers actually did write it without the complex numbers. Go and check for yourself in Schroedinger’s collected papers on wave mechanics:

https://archive.org/details/in.ernet.dli.2015.211600

It’s pretty interesting as a historical document, as you can see Schroedinger putting his ideas together starting from the Hamilton-Jacobi equation. He starts talking about vibrations in terms of complex numbers in the third paper (page 41, Die Naturwissenschaften, 1926) in a sort of ad-hoc way, more or less as I did above. He did similar calculations one paper earlier without using complex numbers, just sines and cosines; but as everyone in the universe who has ever worked with complex numbers knows, working with complex exponentials is a hell of a lot easier. It isn’t until page 103 in this book (Annalen der Physik 4/81/1926) that he gets around to writing down something which is time-dependent. Behold the real Schroedinger equation: still no complex numbers:

 -\hbar^2 \frac{\partial^2}{\partial t^2}\Psi(\mathbf{r},t) = \hat H^2 \Psi(\mathbf{r},t)

Since, as everyone knows, the kinetic energy term is a second-order differential operator, you end up with a fourth-order mixed partial differential equation here. Fourth-order partial differential equations are ass to solve, and you can legit more or less just take the square root of each side and solve for \Psi and call it a day. That’s what Schroedinger did, and that’s why there are complex numbers in quantum mechanics. Go read it; it’s a beautifully reasoned piece of physics with zero mystification from the likes of Schroedinger. Anyone who tries to make this into something weird and mysterious, or who babbles on about quaternions or tessarines or octonions or whatever, doesn’t understand differential equations applied to the material world, what complex numbers are, or what physics is (this all comes from experiment, dipshits; the equations are just a model): they understand nothing.
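If you don’t want to take the square-root argument on faith, here’s a quick symbolic check (my own sketch, using the particle in a box as the example): any \Psi that satisfies the usual first order complex equation automatically satisfies the real fourth order one above, because applying i \hbar \frac{\partial}{\partial t} twice is the same as applying \hat H twice.

import sympy as sp

x, t, hbar, m, L = sp.symbols('x t hbar m L', positive=True)
n = sp.symbols('n', positive=True, integer=True)

E = n**2 * sp.pi**2 * hbar**2 / (2*m*L**2)            # particle-in-a-box energy level
psi = sp.sin(n*sp.pi*x/L) * sp.exp(-sp.I*E*t/hbar)    # the usual separable solution

H = lambda f: -hbar**2/(2*m) * sp.diff(f, x, 2)       # Hamiltonian inside the box (V = 0)

# the familiar complex first-order equation: i*hbar d/dt psi = H psi
print(sp.simplify(sp.I*hbar*sp.diff(psi, t) - H(psi)))            # 0

# Schroedinger's real fourth-order form: -hbar^2 d^2/dt^2 psi = H(H(psi))
print(sp.simplify(-hbar**2*sp.diff(psi, t, 2) - H(H(psi))))       # 0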

Look at the big forehead on Herr Dr. Professor; physiognomy is real

Even more trivially, one could simply write down the ordinary Schroedinger equation as two coupled equations, one for the real part and one for the imaginary part, and work entirely with real-valued functions: it’s what you’re actually doing anyway when you solve the complex Schroedinger equation. You’ll end up with a cosine oscillatory thing with a phase. Pretty much everybody who has ever solved any differential equation which has solutions with wiggly bits knows this.
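Here is a crude numerical sketch of that split (mine, not Schroedinger’s; hbar = m = 1, a free particle on a periodic grid, naive Euler time steps): stepping the complex equation and stepping the real/imaginary pair is literally the same arithmetic, so the physical |\Psi|^2 comes out identical.

import numpy as np

N, dx, dt, steps = 256, 0.1, 1e-4, 500
x = np.arange(N) * dx

def H(f):                                    # H = -(1/2) d^2/dx^2, periodic boundaries
    return -0.5 * (np.roll(f, 1) - 2*f + np.roll(f, -1)) / dx**2

psi0 = np.exp(-(x - x.mean())**2 + 2j*x)     # a Gaussian wave packet with some momentum

# complex form: i d(psi)/dt = H psi
psi = psi0.copy()
for _ in range(steps):
    psi = psi - 1j * dt * H(psi)

# split form: psi = u + i v, so du/dt = H v and dv/dt = -H u
u, v = psi0.real.copy(), psi0.imag.copy()
for _ in range(steps):
    u, v = u + dt * H(v), v - dt * H(u)

print(np.max(np.abs(np.abs(psi)**2 - (u**2 + v**2))))   # ~ 0: same physics either way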

There are other non-complex-number-having formulations of quantum mechanics out there if Euler’s formula rustles your jimmies that hard. You can do Bohmian pilot waves pretty much starting from the Hamilton-Jacobi equation of classical physics, ending up with the fourth-order differential equations Schroedinger came up with, using a “quantum potential” and then taking the square root. Or not taking the square root, if you’re the study-work-hard-18-hours type who enjoys fourth-order differential equations. There have been other, non-Schroedinger-respecter efforts to inadvertently or vertently de-complexify his equation. Ernst Madelung came up with an equivalent formulation of quantum mechanics based on hydrodynamics, with no overt square roots of negative one apparent. Of course virtually nobody understands hydrodynamics (even more so than quantum mechanics, if you can imagine that), and a glorified version of the diffusion equation is a lot easier to deal with, so it never really caught on. At the end of the solution, you’re still going to end up with a little wiggly term with a frequency proportional to the energy divided by \hbar. Because that’s how matter works. We know this, not because Schroedinger was a mighty Oz, all wise and powerful, with an equation from the mind of God, but from experiment. Which is pretty much the only way we ever learn anything in physics.
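For reference, Madelung’s substitution \Psi = \sqrt{\rho}\, e^{iS/\hbar} (standard textbook form, spelled out here for the curious) turns the Schroedinger equation into two entirely real equations: a continuity equation for the density \rho, and a Hamilton-Jacobi equation with the extra “quantum potential” term:

\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho \, \frac{\nabla S}{m} \right) = 0

\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0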

Again, quantum mechanics is plenty weird, but worrying about the complex numbers is moronic. It’s not just the two above who worry about it; it’s apparently a thing. People all over the place publish papers on this. Papers which presumably move them closer to tenure. There are forum questions all over, with answers that … are decidedly a mixed bag of mostly right and absolute chowderheaded confusion. I can only wonder at how this confusion arose in the first place. Do people not study functions of a complex variable any more? Do people not solve differential equations using pencil and paper any more? Some of these mystifying dimwits are allegedly working physicists or quantum information theorists (whatever the fuck that is -presumably they warm a seat in a university somewhere). Have they not ever built, like, a simple RLC circuit for tuning in AM radio waves? What is going on here? There are plenty of mysteries in quantum mechanics: complex numbers are not one of them. Making them somehow mystical, rather than an ordinary use of centuries-old mathematical tools, is mush-headed nonsense, and any jabroni who plays at this beyond the undergraduate bong-water level ought to be ashamed of themselves.

50 Responses


  1. glaucous_noise said, on January 13, 2021 at 8:33 pm

    His analogies to the Eikonal equation were particularly intriguing. I used them in a paper at one point for developing a numerical method.

    People need to read original papers more frequently.

    • Scott Locklin said, on January 14, 2021 at 12:26 pm

      Speaking of which, can you post a link to the Eikonal thing you mention?

      • glaucous_noise said, on January 14, 2021 at 3:11 pm

        I’m actually not sure which paper he wrote it in. I found it in a gem of an old book called “Electromagnetic Theory and Geometrical Optics” by Kline and Kay. I believe it is referenced in this monograph by them:

        https://archive.org/details/electromagnetict00klin/page/n1/mode/2up

        To summarize, Schrodinger asked the question “What would my wave equation be if its characteristics were given by classical mechanics?” and worked backwards.

    • Anonymous said, on January 14, 2021 at 11:05 pm

      If I understand correctly (from a different source): an eikonal approximation method is something where you propagate rays from points through a system and keep track of phase along each ray. If you pick your reference planes right, you get close to the right wavefunctions without having to solve a PDE over space. Some sort of formalization of Huygens’ method.

      • glaucous noise said, on January 15, 2021 at 4:22 pm

        Yes, that’s essentially correct, although generally it is not used to get the correct fields or wavefunctions; rather, it is used as an approximation where it is valid. For instance, it is used in Monte Carlo rendering codes in computer graphics, where wave effects are handled phenomenologically in the scattering mechanisms.

  2. guidovranken said, on January 13, 2021 at 8:36 pm

    Hi Scott, you’re one of the people I followed at HN. Sorry to see you go but I get it. I’ve been wanting to set up a new forum like HN but more open minded and maybe invite-only, though it would be hard to amass a good and active user base. I hope you keep writing your blog.

    • Scott Locklin said, on January 14, 2021 at 12:39 pm

      The problem with open, modern forums is 80% of the participants in any of them are animated human chum incapable of rational thought. People whose thoughts consist of propaganda and corporate press releases are the rule; the exceptions are the only ones worth listening to. I’ve found a few corners of the internet which are reprieves from this; they mostly have strict rules about posting on topic subjects (quantitative finance, machine tools, geopolitics), they cost money, or are places which NPC types denounce as being evil somehow and so, leave alone.

      You can probably do a HN type thing which is vastly better by keeping people discussing, like, computer stuff. It’s a reasonable thing for, say, Urbit to try doing if they can manage to refrain from saying goofy shit like “Hoon” too often.

  3. a scruffian said, on January 13, 2021 at 8:57 pm

    Schrodinger really was a great natural philosopher. True, much of the time I can only see a genius cruising high above my flight ceiling: but there’s always the impression that he’s grappling with reality, and when he descends to where I can understand him, he always gets to the heart of the matter. The efforts of Heisenberg and especially Bohr to communicate seem hopelessly muddled by comparison.

    I’d especially recommend the essays “Are There Quantum Jumps?”, and the book “My View of the World”.

    In his book on General-Relativistic cosmology — “Expanding Universes” — he’s so frank about the intractabilities and ambiguities of that paradigm that I suspect him of subtly trolling the GR community.

    • Scott Locklin said, on January 14, 2021 at 1:10 pm

      I think all the men of that era exhibit a thoughtfulness completely missing in modern “muh science” types, who seem more the types who enjoy sitting in committees and petty political fights. They were deeply cultured and actually enjoyed wrestling with ideas. Men of the mind rather than men of the defined contribution pension plan and diversity committee. Schroedinger really has a level of thoughtfulness which was well beyond even most of his contemporaries. Heisenberg definitely had a lower level of verbal intelligence than most of those guys; my late pal Marty said the same thing about the late-life Heisenberg he met. To pile on a little bit; his math wasn’t first rate either.

      Verbal intelligence isn’t everything though. Heisenberg inspired awe in his colleagues, even his elders, for his actual physical insight. He was considered by Born to be embarrassingly superior to himself (also a Nobelist mind you) in his physical insights, though Born more or less helped Heisenberg with his math. I sort of picture Schroedinger yacking with his smart friends in cafes and figuring stuff out that way. Heisenberg would figure stuff out in non-verbal thunderbolts while wandering around on lonely paths in the mountains.

      Behind all this was the very active and monstrous political conflicts of the day: we think of these guys as living quiet bookish lives (most of them had been pre-1914, which was a sort of golden age of prosperity and peace), but they were living in times even in the 1920s where Jewish scientists had their work denounced and were actively persecuted by Nazi imbeciles. Guys like Heisenberg who weren’t Jewish were involuntarily drawn into these political upheavals. I mean, WW-1 only ended 7 years prior to Schroedinger’s series of papers.

      For that matter, Hermite had only invented the polynomials which bear his name, and which Schroedinger used to quantize the harmonic oscillator 60 years previous to his great work. To give a sense of historical scale: that’s like 1960 to us. So, for novel mathematical ideas …. maybe Reed Solomon codes.

  4. DamnItMurray said, on January 13, 2021 at 9:06 pm

    I’m currently doing engineering, second year, and have yet to come by a complex-valued function or DE. I should probably start self-learning per your blog, because, considering the type of brain power in universities nowadays, one is to expect real analysis to also be dropped from the curriculum. Maybe they will be aptly replaced by more rigorous and applicable subjects, like Lithuanian feminist poetry and/or (gender)fluid mechanics.

    • Scott Locklin said, on January 14, 2021 at 1:25 pm

      I first encountered the ideas of functions of a complex variable with Fourier, Laplace and Z transforms in an early course in electrical circuit analysis. It was pretty damn the torpedoes and was significantly ahead of our levels of Calculus understanding, but we got through it quick and it definitely stuck. We then proceeded to model it all using p-spice because it’s a PITA modeling complex circuits using transforms, graphs and slide rules compared to just entering an array of components into a text file feeding some ancient fortran thing. It’s possible they just give you the modeling technology now, which would be tragic, even if you never use the transforms again. Later as I said above I got a course in functions of a complex variable which ended up making physics grad school a breeze, but even then it was far from normal; we just had a bored physics professor who wanted to teach something off the beaten path to see if he could.

      The book was one of those $8 Dover thingees; “Complex analysis and its applications” by Silverman. Schaum’s outline also helped. You could probably do it yourself with a little tutoring. To sell it a bit, the method of residues was one of Feynman’s tricks for doing complicated integrals in his head; I trolled a few people with this in grad school. The last time I looked (admittedly in the 90s), computer algebra systems couldn’t deal with many kinds of integrals which were relatively trivial using the method of residues.
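      A minimal sympy sketch of the residue trick mentioned above (my own toy example, the classic one): the integral of 1/(1+x^2) over the whole real line is \pi, and the residue theorem gives the same \pi from the single pole at z = i in the upper half plane.

import sympy as sp

x = sp.symbols('x', real=True)
z = sp.symbols('z')

# brute force: integrate along the real line
print(sp.integrate(1/(1 + x**2), (x, -sp.oo, sp.oo)))                  # pi

# residue theorem: 2*pi*i times the residue at the enclosed pole z = i
print(sp.simplify(2*sp.pi*sp.I*sp.residue(1/(1 + z**2), z, sp.I)))     # pi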

  5. Andrei Radulescu-Banu said, on January 13, 2021 at 9:15 pm

    Good stuff. A bit over my head though.

    You might like Stephen Wolfram’s https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/. It explains how to build a physical universe computationally (in fact, how to build many universes) in a way that unifies gravity and quantum mechanics. It is very accessible.

    Interviews of Wolfram are available here:
    https://www.youtube.com/watch?v=ez773teNFYA&t=2539s
    https://www.youtube.com/watch?v=-t1_ffaFXao

    • Scott Locklin said, on January 14, 2021 at 1:32 pm

      #triggered

      I hate that guy. His “new kind of science” is both horse shit and a ripoff of superior thinkers like Ed Fredkin, who never thought to hire a PR agency to promote it.

      • Andrei Radulescu-Banu said, on January 17, 2021 at 3:11 am

        Did not know about Ed Fredkin – will look him up. Thanks!

  6. Chiral3 said, on January 14, 2021 at 12:16 am

    There’d be a lot more pen and paper work without – time evolution, rotation, commutators, … – and that’s probably the biggest reason why history has filtered out other formulations. Conversely people spend a ton of time integrating real valued functions and doing all types of transformations when the complex integration would be easier.

    Related but separate is the pros and cons of using real, complex, and spinor/quaternion (or Cl() more generally).

    Like learning quant finance, I’ve come to believe that learning subjects like E&M or QM is an exercise in semiotics. I think after watching the nth mathematically unsophisticated person derive BS (using Ito and ignoring or assuming the underlying limit theorems) as either a test or interview question, I started to reflect on parts of my physics education. Not the graduate level of mechanics, stat mech, QM, and E&M, for which we all used the same text books – Jackson, Sakurai, Goldstein, etc. – and the questions were really hard. But the intro books, like Griffiths, are largely exercises in symbol manipulation (maybe not unlike the first time you solve R_ij = 0, which initially is just index gymnastics up to showing vanishing Christoffels). Particle in well, derive Ehrenfest, etc. It’s really not until around 2nd quantization or something at a graduate level that things start to click and it isn’t just doing rote calcs. At least that’s my opinion. I would think that most people with some decent basic math could plow through, say, Griffiths, if that’s your definition of learning QM later in life.

    • Scott Locklin said, on January 14, 2021 at 2:21 pm

      My undergrad jr year QM book was filled with unpleasant busy work; I’ve excised it from my library; a friend who had aced Griffiths (or whatever he used) in another school flunked when he transferred because the algebra was too much for him. No semiotics there. It was worthless educationally despite being a tough two semester course. Undergrad E&M was Reitz, Milford and Christy, which was an odd combination of unpleasant precursors to Jackson and really trivial shit. Amusingly we were deriving spherical harmonics in E&M, where it was presented as “too hard” in QM. I knew E&M quite well after this; the professor was a brutal Chinese immigrant and I had a real passion for it, as I had built a lot of electrostatic doodads and Tesla coils as a teenager. We really could have just done Jackson though. In fact when I did Jackson I found it more or less a review, with a few new things that were lacunae in Reitz/Milford/Christy. Undergrad stat mech was Reif which was harder (and vastly better) than grad school Ma, which seemed like semiotics to me and was totally worthless. Classical mechanics undergrad was Thornton-Marion, which was pretty good on Lagrange picture and the Euler angles as a text. Jennie was a hippy though, and it was definitely slowed/dumbed down: we could have done more of the text in 2 semesters. Classical mech grad school used Florian Scheck as a text; it’s a very good text, but we only did a couple of chapters out of it, and really followed Goldstein. Great course; Ezra Newman was OG, and I guess the Penrose continuity helped.

      Back to QM: grad school we got lectures from Messiah (which is by far the best QM book) and somehow our text was Gordon Baym, who I assume was friends with the professor, because it had no merits as a text otherwise -I think Messiah was out of print at that point. I later had a GF taking a course at Cal using Sakurai (from a useless background of “here’s a picture of molecular orbitals”) and couldn’t understand how anyone could learn anything from that turd. Even Baym was better than that.

      Anyway you might be right that the standard trajectory is semiotics to understanding, but in my case it was stupid grunt work, mixed with useful grunt work. Grad school was grunt work and semiotics. A real mixed bag. It was obvious that it was an ad-hoc collection of approaches, and that it left me with some strengths and not too many weaknesses; I think the grunt work is needed to actually understand things beyond hand wavings.

      • glaucous noise said, on January 15, 2021 at 4:37 pm

        My background was a physics bachelors followed by applied physics/electrical engineering PhD.

        I took all of the physics graduate courses as an undergrad.

        What I felt was missing was the dirty business of studying the phenomenology of the real world. I work in complex applications (e.g. heterostructures or semiconductor devices) and when your objective is to make something that works, you realize that the physics is in the experiments, not the textbooks. I also focus entirely on numerical modeling so I’m not an experimentalist either, and I know quite a bit of nonequilibrium statistical mechanics, many-body theory, electromagnetics, and so on.

        Let me illustrate. A friend of mine and I got into a lengthy black board discussion about exactly the topic of this blog post. We were contrasting classical and quantum mechanical path integrals (e.g. the sort you see in Onsager-Machlup vs. those you see in non-relativistic quantum mechanics). We were pretty obsessed with the complex phase factor as opposed to the real factor in the classical path integral.

        The reason we didn’t consider the obvious answers was because there is very limited understanding in the structure of the theory, which physics textbooks obsess over. Studying the structural properties of an abstract theory is like studying a logical outline of the plot of Tolstoy’s War and Peace: you’ll see patterns which may correspond to something, and you can write a PhD thesis in literature on them. But those patterns are only obvious if you read the book. Suffice it to say we got nowhere, gave up, and got drunk at the local college bar, where I tried to pick up a chick who was probably several standard deviations out of my league. Really not my best day.

        Physics textbooks also avoid real world applications because they cannot describe them in the artificial context of a course. Not so in engineering courses; you have to talk about nasty things like the oxide interface in a MOS capacitor, and you realize that modeling isn’t witchcraft where reality is captured in mystical pictograms on a page (voodoo in the way a savage might think a Polaroid has captured part of his soul), but actually more of a linguistic game where you’re describing what’s happening. That’s not to say there are no deep laws to nature, but rather that some people take those laws too seriously.

        • Scott Locklin said, on January 16, 2021 at 12:09 am

          >not to say there are no deep laws to nature, but rather that some people take those laws too seriously.

          That’s an amazingly awesome aphorism that I’d like to club every “quantum information theorist” to death with.

          I confess I had a giant inferiority complex to the theorist johnnies (for some reason; I got better grades and passed the quals earlier than almost all of them) and figured someone would give me an engineering job if I stuck close enough to matter. I mean, big success and all; I wouldn’t trade places with any of them. But in hindsight probably many of them were good at formalism rather than physical understanding. It was probably a pervasive problem, which has now passed down unto the generations. I suppose at this rate, the pyramid scheme of physics academia is doing a pretty good job of discrediting itself, so whatever residual respect it had will dissipate shortly.

          A famous HF founder once told me he only hired EEs (and almost nobody with a PhD) and that physicists are kind of retards. I was mildly offended at the time, and now I realize he was completely justified in this; I wouldn’t hire a physicist either.

          • benespen said, on January 16, 2021 at 10:39 pm

            A decade ago I used to wonder sometimes whether I had done myself a disservice by skipping physics grad school, despite all my professors telling me that I should do it. These days, I don’t care at all. It turns out I like figuring out how things work and then using that knowledge to make stuff. I probably lucked out by ignoring what everyone was telling me to do.

            Your hewers of wood and drawers of water madrassa sounds pretty awesome though. I would love to talk to some kids who went through it, I bet they would learn all kinds of useful stuff, unlike some of the kids I interview these days.

  7. Chiral3 said, on January 14, 2021 at 4:14 am

    I’ll mention one other thing on top of my original comment – thinking about this, which is in the rear view mirror for me. Complex numbers obviously make the math easier. It extends linearly what is an operator theory so things like commutators and poisson brackets are easy to write down, maybe with some loss of unitarity. But if I think about what you learn before and after QM it really bridged classical mechanics to QFT. Classical Hamiltonian mechanics is most easily written down complexified, which naturally extends to quantum. This sets up for canonical quantization by keeping everything linear and the canonical problems naturally extend to QFT. For instance, the HO is just a thing on a circle in classical phase space which is just the symmetry group for SU() in quantum. So I suppose there’s a pedagogical angle too. Someone smarter than me probably knows the whole history.

    • Scott Locklin said, on January 14, 2021 at 12:15 pm

      I probably learned in some weird order that nobody else gets; I was doing Green’s functions and path integrals before I ever quantized anything. I basically had two grown-up courses my sophomore year that put me mathematically ahead of most of my colleagues in grad school who had to work through Arfken when their brains were three or four years more ossified. I also lucked out and Jennie Traschen (a Penrose student, most famous for damning America on public access TV Sept 10, 2001) taught us a bunch of group theory junior year in a 1-credit seminar, also before I was doing any serious QM. Made spin-orbit coupling easy-peasy. Also got lucky with an experimental junior year course (probably now reserved for grad students) in applied analysis using Hilbert spaces/orthogonal functions, which should probably be given freshman or sophomore year.

      I’ve given a lot of thought to the didactics of physics (and signal processing/stats/machine learning); from when I was learning it myself it was obvious that they were giving me some baby steps bullshit to justify having another tenured position teaching the training wheels versions. My assertion is you can take a kid who knows calculus-2 (high school) and in 3 years, they’ve finished their grad school coursework. I strongly suspect, based on what age people got their PhDs 100 years ago, that this was the perfectly normal trajectory of everyone daring enough to attempt physics. Not doing this is a total waste of human potential; you’re literally wasting the most productive years of your most talented students’ lives. No wonder science in current year is shit: the curriculum is designed for stupid people. I’m not sure what stats/ML/signal processing people get taught; I assume from observation you’re better off having studied EE than just about anything else, so maybe just jumping into Kalman and Cramer-Rao and damn the torpedoes is what is needed. I have a fantasy of implementing this in some mountain Madrassa where I make the kids chop wood, carry water and build internal combustion engines when they’re not doing math.

      • Chiral3 said, on January 14, 2021 at 1:19 pm

        It’s funny you say that because, thinking back, I spent quite a bit of time noodling around with path integrals and Green’s functions and group theory, particularly Lie groups, before my formal QM education. At some point as an 11 or 12 year old I decided I was really interested in physics and just read everything I could, and decided at some point I had to learn calculus (to solve diff eqs) and linear algebra. So I had professors as an undergrad who said “I’ll sign off on x credits of independent study so you can keep working through blah blah.”

        So I shlogged through Sakurai, Jackson, and Goldstein as an undergrad, which is/was unusual (probably the reason I hiked qualifiers, but not because of any natural ability, it was just natural curiosity). I had taken complex analysis with the math department but that wasn’t helping me as I started to wade into QFT. It was as simple as all those typical Gaussian / QFT kernels before you learn all the tricks, which are really only good for solving the same problems (my point about semiotics). So I asked a math prof for help. He was a little wacky. Did important work on Riemann, functional analysis, looked homeless, had strong opinions. He agreed to meet once a week with me and he told me to bring “whatever text they are using in calculus 1”. I scratched my head at this request. So I meet with him and he says “turn to the back of the book where the tables are”. And we proceeded to solve each and every integral that was in a table in the back of the book, because they weren’t derivatives of elementary functions or they didn’t yield to the calc 1-3 tricks for functions defined over R. That was how I learned practical complex integration so I could study QFT more effectively, and it was better than all the graduate complex analysis I took for that purpose.

        But I think you bring up a good point in your post re Schrodinger Scott. I think you are talking about the physicists of that time and either the powerful simplicity of their exposition or their insanely deep physical intuition. I don’t think I have read Schrodinger’s original work. I recall Heisenberg, of course Dirac, and I spent some time reading all the original gauge theory work (Fock, London, up to Yang-Mills). I was very fortunate to have a wonderful GR teacher directly connected to Einstein and he spent 1.5 years giving me a distinctly non-modern treatment. On the side I read all of the original relativity papers. But I think the point still holds – they didn’t start wacking off with noodle math, it was all very physically motivated, very straight ahead, rooted in dimensional analysis and the desire for no jerk derivatives. An advisor to the dude that taught me GR, who I met a few times, would always start conversations with me the same way: what are you working on?…. uh huh… yes… good… ok… how would you build an experiment to verify that?

        • Scott Locklin said, on January 14, 2021 at 2:36 pm

          My complex analysis stuff was taught by a bored community college physics prof who still published GR papers, and who sawed off all the unimportant pieces and just taught us the bits that mattered; even conformal mapping got short shrift for being a trivial trick you might use once or twice in an E&M course, but he still taught it well enough I can do it to this day. Only 4 people in the class, and only 2 were serious, so it was practically a personal tutorial. Gave me a taste for what would be possible in a physics madrassa.

          My only real educational regret in grad school was not taking the team taught course in GR by Ezra Newman and Carlo Rovelli. I figured my career path was fixed at that point to experimental atomic physics, specializing in quantum dynamics, so it wouldn’t help me with anything. Hindsight it’s the one area of Real Physics I’m pretty ignorant of, and both men were absolutely spectacular educators. Worse than that; I’m interested in information geometry from my background in topological data analysis (point topology wasn’t bad to learn as an Old Dude since I went pretty deep into dynamics -muh donuts), and completely lack the mathematical machinery to put a dent in it without about 6 months of heads down on it. Please tell me information geometry is horse pookey so I can forget about it. I bought Thorne/Wheeler last year, and it just pisses me off, sitting there, taunting me for being a lazy dumbass in grad school.

          I highly recommend the linked collection of Schroedinger papers. It will remind you why physics is awesome, or was anyway. I motored through most of them in a few hours in between zoom meetings.

          Linking it again so you can open it in a tab for later: https://archive.org/details/in.ernet.dli.2015.211600

          • glaucous noise said, on January 15, 2021 at 6:45 pm

            I can give you my personal experience with information geometry and let you make of it what you will.

            In a previous lifetime, I was a PhD student working in computational molecular biophysics. To be succinct, the general physics and modeling problem was as follows: A protein is a very complex molecular system, and as a molecule it undergoes substantial conformational changes. These are associated with its function. Using physical models, can we understand how these occur? Framed physically, we would like to sample pertinent portions of the molecular distribution function. The trouble is that even with immense supercomputing resources and physically oversimplified models, this is a tremendous challenge.

            My thesis (which I later abandoned before switching to electrical engineering) at the time focused on an intuitive importance sampling technique. Our competitors ranged from the Folding-at-Home guys who seemed to think that a protein’s dynamics could be entirely modeled by a Markov process (!!!!!), up to a range of individuals who considered far more mathematically exotic techniques, including techniques that emphasized the non-Euclidean geometry of the protein phase space. Techniques employed included those related to information geometry, to Takens’ embedding theorem, to a related effort in statistical thermodynamics to “geometrize” everything (see work on so-called thermodynamic length; I believe this was pioneered by Gavin Crooks).

            What I noticed was that our intuitive importance sampling technique (it was a bit more than that, but I’ll leave it there) could compete with every fantastical technique that was being considered. The success hinged primarily not on technique utilized, but on “feature” selected… as is often the case.

            However, the geometric approach was so unintuitive and esoteric, and converting an intuitive picture of the molecular dynamics into the mathematics of non-Euclidean geometry so elusive and challenging, that our competitors had to obtain their models algorithmically. Their performance varied wildly relative to ours. Often their methods would just rediscover obvious choices such as the radius of gyration.

            In summary, and to be specific, geometry is a very clumsy, abstract language that is often less efficient than dynamical systems. Every time I have taken my work in that direction I have regretted it.

            But, for what you do, involving data analysis, many features to select from, differing objectives such as pattern recognition and so on, perhaps my personal experience is irrelevant.

            • Scott Locklin said, on January 16, 2021 at 12:29 am

              It absolutely looks like horse shit; like someone sneezed while editing a noodle theory journal. It’s even largely made in Japan, which, for all their cool wristwatches and glorious engineering, has basically kind of failed to produce any interesting native-made applied math in its long history. But I hear rumors it’s the tits at pulling signal out of weird heteroskedastic noise and unbiasing messy estimators, which is pretty much the most interesting thing in the world to me right now. Nobody I actually know has direct experience, mind you, nor can anyone point me to an example sufficiently clear I can wrap my brain around what happened or even ascertain whether it was impressive or not.

              Ideas from point topology turned out to be useful in data science. Unfortunately virtually everyone who understood this well enough to use it, works on something else now, and most of the open source software on the topic is hot garbage. But I saw/did some neat stuff with it. Most of my classifiers have some kind of metric space crap in them. In particular, “unsupervised learning” is basically all distance function oriented. If you stop to think about it; if you don’t have labels for your data, what the fuck else are you going to do? How do you define, for example, an outlier when you don’t know what normal or happy data looks like? I mean, the word “outlier” kind of suggests a solution, and that’s how most people chop the data up. Compression works too, but you use it to define a metric.
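              A minimal sketch of the “compression defines a metric” remark (zlib standing in for a real compressor; this is just the so-called normalized compression distance, nothing fancier):

import zlib

def csize(b: bytes) -> int:
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = csize(x), csize(y)
    return (csize(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(ncd(a, b), ncd(a, c))     # similar strings score low, dissimilar ones score high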

              Anyway maybe I’ll hire someone who claims to understand it and make them do tricks. Probably more direct and cheaper than learning general relativity and slogging through a bunch of books written by Japanese autists to find out.

  8. Raul Miller said, on January 14, 2021 at 6:38 am

    This writeup strikes a sympathetic chord with me.

    But… there’s also Heisenberg’s approach, which also does not use complex numbers, and I think his work deserves at least a passing mention.

    For some reason, people seem to shy away from matrix approaches. I am not sure why. We seem to love using tables of numbers, and while the math involved can get into fiddly realms, the basics are pretty straightforward.

    That said… quantum mechanics has always suffered from fiddly issues. So, there’s that, also…

    What’s interesting for many people is how you can design interesting things (antennas, computer chips, etc.) using quantum mechanics (and chemistry, and regular “large scale” mechanics, and, …) to give hints on how to improve things.

    • Scott Locklin said, on January 14, 2021 at 12:21 pm

      Heisenberg picture, of course, came first: the book I linked explicitly mentions this; Schroedinger is at pains to justify why his idea is equivalent. It’s really a beautiful series of papers, and despite my disgust for what the field is now, reading it makes me remember why physics, at least 1920s heroic era physics, is so damn cool, and what innovation, rather than gerbil-wheel science, actually looks like. If you’re a former physics guy, spend an afternoon with the book; it’s really wonderful.

      The Heisenberg approach is probably not taught first because people don’t get exposed to linear algebra until later, and infinite dimensional matrices are more difficult to work with directly. Of course, knowing some orthogonal function theory and Schroedinger makes it easier to populate the matrix elements. I think he actually does have complex numbers in there; they also show up naturally in linear algebra, though the important eigenvalues are real valued (aka Hermitian matrices, if I recall my definitions properly).
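      A one-line check of the Hermitian-matrices-have-real-eigenvalues point, with a toy matrix of my own choosing:

import numpy as np

H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -3.0]])      # equal to its own conjugate transpose, i.e. Hermitian
assert np.allclose(H, H.conj().T)
print(np.linalg.eigvalsh(H))            # complex entries, but the eigenvalues come out real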

  9. Igor Bukanov said, on January 14, 2021 at 10:14 am

    Negative numbers do not make sense in many domains, but for calculations it is useful to assume that all subtractions have a meaning, just to have simpler calculation rules. Then 3 – 5 should have the same meaning as 0 – 2, so we just call that -2. And it is fine as long as the final result is a positive number or zero.

    Then we want all divisions to make sense. When integers do not divide evenly, we just call the result of 3/5 a rational number, even though in reality this is just the pair (3, 5) with rules for how to do operations on pairs.

    Then we want all square roots of positive numbers to have a meaning. So we just call sqrt(2) a new irrational number. And nobody complains that if the result of a formula in physics is sqrt(2), it is meaningless because no result of a physical experiment can give that. At best we can record the result of a measurement as an interval between two rational numbers. But properly dealing with intervals is very messy even on computers, so we continue to use sqrt(2) as a nice shorthand, and floating point as its approximation, even if that leads to numerical instabilities which proper accounting for intervals may often address.

    Then we want the inverse of a**b to always work. So we call ln(2) a transcendental number. Then we want square roots, and in general any function, to have an inverse. Then we call sqrt(-1) an imaginary number and the pair of (real number, imaginary number) a complex number, and invent a bunch of rules for how to apply common operations to such pairs.

    Then we stop since with complex numbers all operations are reversible.

    • Igor Bukanov said, on January 14, 2021 at 10:29 am

      And the relevance in physics is that one can treat (phase, amplitude) mathematically as the polar-coordinate encoding of a complex number. So one can model a lot of phenomena involving phases and amplitudes with complex numbers.

      • Scott Locklin said, on January 14, 2021 at 12:24 pm

        Exactly. “It’s just polar coordinates, bro” kind of sums up my stream of consciousness screed.

        Horgan probably never thought about polar coordinates because he went to journalism school, but could be brought up to speed in 10 minutes with a globe. I’m not sure what the other jackasses excuses are.
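        The whole “it’s just polar coordinates” point in a few lines of Python, for anyone who wants to poke at it:

import cmath

z = complex(3.0, 4.0)
amplitude, phase = abs(z), cmath.phase(z)     # the polar form: (amplitude, phase)
print(amplitude, phase)                       # 5.0 0.927...
print(cmath.rect(amplitude, phase))           # back to (3+4j); same information either way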

  10. Anonymous said, on January 14, 2021 at 11:11 pm

    The algebra of the split-complex numbers leads to a relationship between them and hyperbolic rotations/hyperbolic trig/Lorentz boosts. There’s also a split-complex Cauchy-Riemann theorem, though I’m not sure how useful it is, since “closed curves” don’t really mean the same thing.

    I remember watching this SIGGRAPH video on geometric algebra (Grassmann/Clifford algebra type stuff) and how you could represent all these geometric processes with this abstract algebra. Pretty neat.

    Thing to keep in mind though: We represent the thing-in-the-world with math. The representation is a choice we make. It doesn’t seem legitimate to try to derive information about the world from our choice of representation of it – but that seems to be the program with people chasing compactness of representation or “mathematical beauty”.

  11. Anonymous said, on January 14, 2021 at 11:27 pm

    One thing that I think is a legitimate mystery, but the quantum mechanics seem to think is trivial (just stare hard at this page of commutators): is the way spin-1/2 particles relate to geometry. We have a mathematical object, the spinor, which “explains” the states of a fermion. (The internal state of orientation and rotation). Except, it doesn’t really explain anything. (Sure there are *just* enough DOFs to cram a relative phase and pointing direction in there, and I can do the twiddling as well as the next guy.)

    There is a perfectly obvious relation between integer spins and geometry-as-we-know it. There is also a perfectly obvious relationship between orbital-angular-momentum and macroscopic angular momentum. If you plot the probability-current of the eigenstates of an angular momentum operator, it comes right out that the blob is moving around in a way that carries angular momentum. Angular momentum is the moment of the way something is moving around. Spin interchanges with macroscopic angular momentum – and whenever anything interchanges with anything else, we expect them to be the same sort of motion, the same sort of mechanical quantity in the end.

    But asking what happens “on the surface of an electron”, or why it seems to live in some bizarro-geometry with twice as much angular space as the world we know (when what is going on “on the surface of a photon” is obvious and directly copy-pasted from Maxwell’s equations) is one of those “meaningless questions” you’re supposed to get over.

    There are many more things I consider mysterious – I may be wrong though, I’m an engineer and still learning this stuff, not a QFTer.

    • Anonymous said, on January 14, 2021 at 11:58 pm

      (and I realize this staring hard at a representation is something I warned against in the post above. Is it the right representation? How seriously can we take the derived implications?)

    • Scott Locklin said, on January 15, 2021 at 11:24 am

      Spin-1/2 is definitely one of those things which is lost in didactic treatments. Again I feel lucky I got the group theory/Cayley-Klein stuff early on which motivates it, and ties it back to generators for ordinary rotational geometry. I sort of remember asking Traschen why it was 1/2 instead of 1 and getting back something like “well, that’s just the way it is.” Putting it on its head; if matter were spin-integer, there’d be no exclusion principle and matter would be pretty damn boring.

      Probably, if you go figure out who wrote about this first back in the 1920s, it will be a model of didactic clarity the way Schroedinger’s papers are. The spin-orbit coupling math is such a bear for people to deal with that nobody really gets time to think about it.

  12. Walt said, on January 15, 2021 at 4:43 pm

    Complex numbers are a great way of keeping track of things that have amplitude and phase; that’s why we use them.

    Yes. In reality, measuring EM waves travelling through circuits is one of the few ways to visualize phase. There really aren’t many other good ways. Even at that, what is phase? It’s the number of times the wave has rotated around the unit circle relative to some time t = 0, which is when you made the measurement. You can also visualize phase in S-parameters, which are complex ratios of power waves in a two-port device, in which case the phase is relative to the input and output of the device. This is pretty confusing, even for people who work with it.

    Things on a quantum level are even tougher to measure and visualize.

    I suppose amateur astronomy might be another way of teaching kids how to visualize phase if you could set up some sort of interferometer.

    • Scott Locklin said, on January 16, 2021 at 12:13 am

      Polarized sunglasses? “Why can’t I see the HUD on the Audi?” Phase!

      • Walt said, on January 20, 2021 at 5:46 pm

        I thought polarization was the orientation of the E field relative to earth axes. H-pol sunglasses will pass only H-pol light. Maybe polarization and phase are related in ways I hadn’t considered. TBH, I haven’t worked through the math.

  13. Anonymous said, on January 15, 2021 at 5:44 pm

    I wonder if there’s such a thing as an absolute phase of a quantum wave, and we’ve just been unable to measure it because 1. measurements are disruptive events done with other particles, and 2. the “time frequency” implied by the mass of even a light particle like an electron at rest is something like 1E20 Hz.

    In the RF domain, we can look at an antenna and see the voltage climb up and down in time as the passing E-field pushes on things. In optics, the best we can do is interferometry, because we have no 10THz measuring devices. (though I may be out of date on this with all the ultra-short-pulse laser stuff)

    (I know that in standard Schroedinger quantum physics, the system as a whole has a phase, which changes when you add or subtract particles, and we can only measure the effects of relative phase through interference. I’m speculating here.)

    With water waves, or voltage waves in a circuit, or masses on springs, it’s obvious that there is a true fact of the matter what the displacement is at a given time, and that the (amplitude,phase) refers only to how it varies in space or in time. Real quantities are what you get after picking one component or the other.

    In quantum physics, it’s obvious that something “wavy” is going on. Orthodoxy states that the absolute phase is irrelevant and only relative phases are measurable and real. Is it?

    • Scott Locklin said, on January 16, 2021 at 12:33 am

      I dunno, as an undergrad we did some kind of two slit experiment thing with electrons in a physics lab. Looked like phase to me. Some people tell me I’m imagining things and nobody did this until 2010 or something, but I’m pretty sure it happened!

  14. Maynard Handley said, on January 18, 2021 at 7:46 pm

    We can do much better than this.
    Complex numbers appear in QM in at least three places (which all appear to be the same place because the structure of quantum mechanics is “multiplicative”, so three conceptually independent things look like the same thing because they are multiplied together).

    So we have
    – spinors. Here it is very clear that complex representation is purely a calculational convenience; it’s trivial to represent spinors without complex numbers, but the cost is that 3-spinors become 4 (rather than 2) component, and 4-spinors become 8 (rather than 4) component.

    – U(1) interaction (ie interaction with the EM field). Again a calculational convenience, somewhat analogous to using e^(i kx) rather than sin(kx) in vast amounts of classical physics and EE. Again this could be replaced by encoding the same info in a unit-length 2-vector which rotates around a circle. The same is true (and hooks into the same math) for electroweak interaction, based on what I already said about spinors.

    – “spreading in space” and “Born interpretation”, ie basic schrodinger equation. This is the most obvious use case, the one that’s the focus of this article.
    I’m loath to say “we can express single-electron Schrodinger without i” because that’s somewhat trivial. What about multi-electron Schrodinger, bosons, the rest of the particle zoo, i appearing in Lagrangians and path integrals?

    Rather than sweep this under the rug, I’d propose two radical statements:
    First is that all of QM (and QFT) can be understood as complex-valued measure theory. This doesn’t help with interpretation, but gives you something familiar to latch onto. Most of measure theory and stochastic processes, basically everything except the stuff that relies on “probabilities are non-negative” and suchlike can be pulled in to form a conceptual scaffold. *Everything* (even collapse of the wave function) reduces to a *single* mystery, namely complex-valued measure.
    Second is it possible that ALL the complex number stuff arises from scattering off the Higgs field? I honestly don’t know, but it’s an interesting idea that I haven’t yet falsified. That reduces the problem to something even simpler; that Higgs scattering modifies the measure in a way that *looks like* a complex valued measure, and everything later flows from that?

    None of these three points exactly say you are wrong. But, IMHO, they say that there’s more interesting stuff to be found by examining the question closely than by insisting that it can be swept away at the cost of more complex math. (Of course, yes, I agree on the nonsense of woo, and the insistence that the complex numbers mean magic, or even that they mean [insert favorite mathematical structure] is essential to QM. Except if that mathematical structure is measure theory! That’s absolutely the key to understanding QM/QFT comfortably!)

    • Scott Locklin said, on January 18, 2021 at 7:55 pm

      >First is that all of QM (and QFT) can be understood as complex-valued measure theory. This doesn’t help with interpretation, but gives you something familiar to latch onto.

      As opposed to …. polar coordinates? Nothing in QM is different from regular people looking at polar coordinates. You don’t even need to look at a globe. You could stick your arm around and point at things in the room; that’s all it is.

      >Second is it possible that ALL the complex number stuff arises from scattering off the Higgs field

      No.

      > None of these three points

      That was only two points.

  15. John Horgan said, on January 20, 2021 at 4:22 pm

    Scott, you had me at “John Horgan fanboy.” I’m pretty sure I never read those words before. Others who read my recent column on QM have tried to set me straight on complex numbers and their long pre-quantum usefulness and history. But no one does so as helpfully and wittily as you do here. Thanks!

    • Scott Locklin said, on January 20, 2021 at 4:52 pm

      New experiences as we get older. You deserve more approbation; noticing naked emperors is one of the best things one can do in current year.

      Good luck with QM. Susskind’s books are pretty good FWIIW!

  16. Free Logic said, on January 20, 2021 at 5:10 pm

    Re
    “Complex numbers are obviously not physical; I can’t win the square root of negative one quatloos in a gambling game. You can’t measure a complex number in a physical experiment; not in quantum mechanics or anyplace else”

    Integer and real numbers are not physical either, right? While you can win $1 in a lottery, you aren’t and can’t win the number 1.

    So in a sense, it is not clear what makes the worry about complex numbers special. Or do I miss something?

  17. Tony said, on January 22, 2021 at 4:37 am

    Scott, I sympathise with your frustration on this. But you didn’t contribute any new insight. People are not puzzled about why complex numbers show up in ODE’s/PDE’s etc. They’re puzzled about Born’s Rule: the mechanics of how complex numbers ‘square’ to give the probability of a particular measurement giving a particular value. Hundreds of papers are written on this, but with no clear physical picture. Hestenes for example has a physical interpretation for hidden phase variables in the rotational movement of an electron (arXiv:1910.11085v2), but how does this produce entanglement (though see section VII of his paper)?

    • Scott Locklin said, on January 22, 2021 at 12:27 pm

      Apparently people are puzzled why complex numbers show up in QM. I mentioned two high profile ones by name, and linked a dozen more.

    • Raul Miller said, on January 22, 2021 at 6:08 pm

      Born’s rule is more that the probability of a particular possibility being observed is the square of the *magnitude* of the wave function. And even that’s an oversimplified expression of the rule (and not only because “square of the magnitude of a complex number” means the same thing as “complex number times its conjugate” — it’s that I have omitted talking about the constraints describing the cases where the rule applies).

      But, anyways… it’s not actually squaring a complex number, and I expect that people confused by the idea that you square a complex number to get the probability would be right to be confused — those people probably weren’t paying enough attention to the concept.

      • Tony said, on January 23, 2021 at 8:37 pm

        Excuse my sloppiness. I hereby declare my allegiance to the Wikipedia page on POVM’s.

