Locklin on science

Poking holes in Bell’s inequality: E.T. Jaynes possibly clearing up some mysteries

Posted in physics by Scott Locklin on June 6, 2022

E.T. Jaynes is sort of the patron saint of physicists going into data science. He’s a guy I’m particularly fond of, as I spent a good part of my early grad school career studying quantum optics; the Jaynes-Cummings model was important in my first Ph.D. thesis project, which was supposed to measure a quantum breaktime. Jaynes’ book “Probability Theory: The Logic of Science” came out around the time I started grad school and ended up being a huge influence on me. Some day I’ll read Jeffreys’ books, which inspired it. I won’t go into great detail about his life and career; suffice it to say he was an important figure rather than some isolated weirdo, despite his fairly sparse wikipedia entry.

He wrote a little-commented-on paper in 1988, “Clearing up Mysteries (the Original Goal),” in which he shits on quantum mysticism from a tremendous height. This is amusing, as his thesis advisor, Wigner, is the source of a lot of the moosh-headed quantum mysticism as far as I can tell. It’s even more amusing that he’s very possibly correct, and people mostly haven’t noticed or engaged with his ideas. It’s a little bit disjointed as a paper in my opinion; it comes off as a sort of dinner conversation by a great polymath, but it’s tremendously clear and filled with hilarity:

“…A standard of logic that would be considered a psychiatric disorder in other fields, is the accepted norm in quantum theory. But this is really a form of arrogance, as if one were claiming to control Nature by psychokinesis. In our more humble view of things, the probability distributions that we use for inference do not describe any property of the world, only a certain state of information about the world. In our system, a probability is a theoretical construct, on the epistemological level, which we assign in order to represent a state of knowledge, or that we calculate from other probabilities according to the rules of probability theory. A frequency is a property of the real world, on the ontological level, that we measure or estimate. So for us, probability theory is not an Oracle telling how the world must be; it is a mathematical tool for organizing, and ensuring the consistency of, our own reasoning.”

 

Jaynes goes on, rather irritatingly, to take sides in the Bohr versus Einstein arguments about completeness in quantum mechanics. This is justified in that the meat of the argument originated in this discussion, but it still bugs the hell out of me that these dead people have any relevance to today’s thought (or that of 1988). There must have been six orders of magnitude more time spent talking about those conversations than the amount of time and thought put into the conversations, and while Bohr and Einstein were inarguably great men, they weren’t 10^6 times more clever than normies. The EPR paradox is explained. Effectively, Einstein and his collaborators postulated quantum entanglement as an example of the incompleteness of quantum mechanics, since entangled systems appear to violate special relativity. For the classic example: if you turn a photon into two quantum entangled photons using a KDP or BBO crystal, you immediately know something about the polarization state of both photons when you measure one of them, even if they’re on opposite sides of the universe. EPR’s argument was that you needed some hidden variables to explain this, or that locality doesn’t exist (which is, of course, absurd, but which appears to be doctrinal). Bell’s inequality is a formalization of this for a class of hidden variable theories (de Broglie’s hidden variable theories basically, which Bell was a partisan of). He then goes on to explain the Aspect experiments and the frothing hysteria around them from hippy-dippy “physicists” who used too much LSD in Santa Fe.
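
For concreteness, here is a minimal sketch (mine, not Jaynes’) of the numbers everyone is arguing about: quantum mechanics predicts a correlation of -cos(2(a-b)) between polarizer angles for a singlet-like photon pair, and the CHSH combination of four such correlations exceeds the bound of 2 that any Bell-local hidden variable model must satisfy.

```python
import numpy as np

# Quantum-mechanical polarization correlation for a singlet-like entangled
# photon pair measured with linear polarizers at angles a and b.
def E_quantum(a, b):
    return -np.cos(2.0 * (a - b))

# CHSH combination S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|.
# Bell-local hidden variable models give S <= 2; QM reaches 2*sqrt(2).
a, a_prime = 0.0, np.pi / 4
b, b_prime = np.pi / 8, 3 * np.pi / 8

S = abs(E_quantum(a, b) - E_quantum(a, b_prime)
        + E_quantum(a_prime, b) + E_quantum(a_prime, b_prime))

print(f"CHSH S = {S:.3f} (local bound 2, quantum max 2*sqrt(2) ~ 2.828)")
```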

 

Jaynes unfucks this by pointing out that Bell is insisting that conditional probability is somehow causal; hence the babbling about “spooky action at a distance” in the EPR case and with entanglement in general. This reasoning is false; that’s not how conditional probability works. To believe this you’d have to believe the proverbial drawing of balls from Bernoulli’s urn causes some kind of action at a distance. Aka: if you have a red marble and a black marble in your pocket, and you withdraw a marble without looking and put it on a rocket, looking at the marble on the other side of the universe doesn’t cause anything to happen or travel faster than the speed of light; it’s just conditional probability. You can talk about wave functions collapsing, but that’s really all this is.
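
A toy simulation of the marble version (my own illustration, nothing deeper than Bernoulli’s urn): the distant marble’s color is fixed the moment the pair gets split up; looking at the local one only changes what you know.

```python
import random

# Prepare a pair: one red and one black marble, split at random. The rocket
# marble's color is fixed here, long before anyone looks at anything.
def run_trial():
    pocket, rocket = random.sample(["red", "black"], 2)
    return pocket, rocket

trials = [run_trial() for _ in range(100_000)]

# Unconditionally, the distant (rocket) marble is red about half the time.
p_red = sum(r == "red" for _, r in trials) / len(trials)

# Conditioned on seeing "black" in your pocket, you now *know* the distant one
# is red -- but nothing traveled anywhere; your inference changed, not the marble.
rocket_given_black = [r for p, r in trials if p == "black"]
p_red_given_black = sum(r == "red" for r in rocket_given_black) / len(rocket_given_black)

print(f"P(rocket red)                ~ {p_red:.3f}")
print(f"P(rocket red | pocket black) ~ {p_red_given_black:.3f}")
```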

The spooky superluminal stuff would follow from Hidden Assumption (assuming conditional probability means causal influence); but that assumption disappears as soon as we recognize, with Jeffreys and Bohr, that what is traveling faster than light is not a physical causal influence, but only a logical inference.

He also points out that the hidden variable theories excluded by Bell are not the full set of hidden variable theories, but merely the ones Bell was a partisan of (de Broglie-Bohm theories), goes on to explain what hidden variable theories excluded from the Bell inequality might look like, and suggests that Herbert Walther has gizmos which might put such non-de Broglie hidden variable theories to the torture. This was a very good guess at the time; I remember Walther’s surrealistic Bohm trajectories -Walther was a great experimentalist. Sadly this class of experiments never happened. Or if it had, I suppose it would have ended up confusing epistemology and ontology as much as surrealistic Bohm trajectories ultimately probably did. He then goes on about some insanely frustrating pedantic irrelevancy about the thermodynamics of human muscle tissue, which I will ignore.

 

 

I remember reading this towards the end of my physics career and thinking it made a lot of sense, but he must be wrong somehow because it confused Einstein, Bohr, Bell, Aspect and a whole lot of other people who are/were smarter than the former auto mechanic who used to work for a guy named “Hatchet.” This paper was presented to me again a few weeks ago by my pal, the futures trader Bill Dreiss. Back in 2003, I would have needed to walk over to the physics library and use INSPEC to figure out who cited the paper; something I never did, but which wouldn’t have provided much feedback in 2003 anyway (one paper gave a passionate defense of psychokinesis). Now we have google scholar, a vast improvement over walking over to the physics library as the potential well is lower, even if the results are less complete and optimal. As it happens, most who talk about this issue in detail, at least in every paper I read carefully, seem to say that Jaynes is correct. I suppose I am also now … considerably older, and have met many great savants (and “great savants”), and I’ve noticed that even the truly great ones very often make incredibly stupid blunders -more or less due to their being members of the human race. I have also noticed that it is extremely rare that followers call their heroes on their shit, even when their heroes are egregiously, hilariously wrong. So, I think it quite possible Jaynes was correct, and entanglement is basically mystical gobbledygook that disappears in the cold light of clean probabilistic thinking.

I encourage everyone to have a look at the citing papers: there are 227 of them thus far. Quite a lot of them are citing the non-sequiturs in this paper. Some are Jaynes citing himself. I haven’t found any saying “Jaynes is quite incorrect about the EPR paradox/Bell’s inequalities, and here is where he went wrong.” Most of those I’ve read thus far who actually address this issue seem to think Jaynes is correct and quantum entanglement is just Bernoulli’s urn. One which was breezily negative, by Richard D. Gill, appeared not to have read Jaynes’ paper properly, and is generally a reddit-tier emotional deboonking of various things which make him uncomfortable; feel free to read it and contradict me. Gill’s an interesting person at least, but part of his career is kind of reddit man in nature. This may be unfair, but I thought his deboonk was perfunctory and lame.

 

A couple of workers wrote a 2008 paper making even stronger assertions than Jaynes, asserting more or less that Bell flubbed formal probability theory. They have a cute gedanken example from engineering demonstrating an EPR-like “paradox” using table-top electronics. F.H. Froehner wrote an amusing thing back in the 90s, very much in the Jaynes Bayesian sensibility, where he asserts both the truth of Jaynes’ assertions about entanglement and the class of non-Bell hidden variable theories. Money quote: “quantum mechanics looks much like an error propagation formalism for uncertainty-afflicted physical systems that obey the classical laws of motion.” A Viennese by the name of Svozil wrote a lovely dyspeptic rant called “Quantum Hocus Pocus” in 2016, which is an absolutely delightful takedown of all kinds of quantum nonsense touted by various EU science initiatives. This includes a reference to his classic “Staging quantum cryptography with chocolate balls.” He agrees with Jaynes’ criticisms of interpretations of the Bell inequality. Whether or not Svozil is correct in all details, I certainly appreciate his dyspeptic thumos. Thumos is something sorely missing both in contemporary science and in modern life in general.

A.F. Kracklauer wrote two papers examining Jaynes’ assertions, the 2017 “Entanglement vs Correlation in Quantum Theory” and “Bell’s Theorem: Loopholes vs. Conceptual Flaws”. Taken together they are better than Jaynes’ original paper, as he doesn’t digress as much, explains better, and provides some historical background as to how this snarl of confusion got started in the first place. Marian Kupczynski also wrote a couple of articles in favor of Jaynes’ views: “Entanglement and Quantum Nonlocality Demystified” sets up a classical Bell experiment to show us what’s really going on here, and “Is the Moon There If Nobody Looks? Bell Inequalities and Physical Reality” has Jaynes listed as a reference and is broadly consistent with the argument, though he never actually cites him (I assume a faulty BibTeX stuck the paper in there).

I’m no longer any kind of physicist, but I enjoy reading such presentations and encourage people who know better than me to look into it. Probability theory is still pretty mooshy today; the level of understanding was quite low from Bell’s time back to when physicists started talking about probability in Maxwell’s time. It is entirely possible that a lot of the quantum weirdness is just probability theory weirdness, and whether or not Jaynes is entirely correct in this instance, the probability part of quantum mechanics is a really good place to probe around looking for weaknesses. Of course I’m also biased towards such “flipping over the card table” ideas, but it seems to me that, as there are entire fields with thousands of workers whose alleged subject is based around quantum entanglement, someone should be able to explain to me why Jaynes is wrong here. I realize they’re busy, possibly getting steaks for statisticians, but surely one of them has the time to dispel the ignorance of bit twiddling wretches such as myself.

88 Responses


  1. Derek P said, on June 6, 2022 at 12:05 pm

    Thanks for the pointer, I’ll check out the paper. On the rough topic, anyone have any idea what happened to Motl’s blog?

    • Scott Locklin said, on June 6, 2022 at 12:09 pm

      I’m afraid I don’t follow Motl or any other noodle theorists, though he is an amusing fellow and I hope he’s doing well.

      • mitchellporter said, on June 9, 2022 at 9:49 am

        He shut it down because his blog was monetized, Google Ads kept insisting that he remove pages about climate and Covid and probably other topics, and he didn’t want to keep handing out unappreciated wisdom for free. He also wanted to make a point. It’s a huge loss to public discussion of physics.

  2. Frank said, on June 6, 2022 at 12:50 pm

    “Jaynes book “Probability theory the logic of science” came out around the time I started grad school and ended up being a huge influence on me. ”

    Likewise!

  3. Hassaan said, on June 6, 2022 at 12:50 pm

    Great post Scott. Will have to poke into all the material alluded to.

  4. Igor Bukanov said, on June 6, 2022 at 2:37 pm

    Besides Bell’s inequality there are the Greenberger-Horne-Zeilinger (GHZ) cases. Those are much harder to explain using just conditional probabilities.

    On the other hand both GHZ and Bell’s inequalities can still be explained by a local theory if one allows backward causation. The neat thing about that is that such backward causation on a quantum level avoids paradoxes related to time travel, as one cannot perform the relevant measurement without destroying the state.

    • Scott Locklin said, on June 6, 2022 at 2:46 pm

      GHZ…. I never studied this, but one of the first things to come up in google scholar is by Stapp, which is extremely not encouraging.
      https://journals.aps.org/pra/abstract/10.1103/PhysRevA.47.847

      I haven’t looked to see if Jaynes wrote more extensively on non-Bohmian hidden variables: I think he did. Anyway I encourage you to have a look at the paper and see how it might work with GHZ.

      • Igor Bukanov said, on June 6, 2022 at 6:37 pm

        I think what E.T. Jaynes proposed was what was called the common origin explanation. The paper is rather dismissive of Bell’s split in (12). Yet it is the essence of the requirement that distant measurement apparatus can be made independent despite the common origin.

        And I also fail to see how this can explain a thought experiment by David Mermin, see https://vdocuments.net/mermin-epr-paradox.html?page=1

        The author challenged the reader to produce an explanation that did not violate the independence assumption while using a local state. So far only two ways to explain this have been proposed.

        The first was that the measurement apparatuses did not detect all particles. That violated the requirement of the thought experiment, but could not be ruled out for the real setups. I read claims that more sensitive recent experiments ruled this out, but in those the independence of the detectors could not be ensured, as I understood. Plus the idea of not detecting everything does not work for generalizations of David Mermin’s thought experiment to GHZ, as those are not based on statistical reasoning. On the other hand, experimental verification of those with truly independent measurement devices has not been done either.

        The second explanation of the thought experiment was backward causation. To be precise, the idea was that the state of a local patch inside its future light cone could not be determined from the state of the patch at the initial moment t. Rather, the boundary conditions in the future were also necessary.

  5. Chiral3 said, on June 6, 2022 at 3:09 pm

    Haven’t thought about this stuff in years. I recall, not so much after finishing QM (Griffiths undergrad, Merzbacher/Sakurai grad), but QFT (we used Viki Weisskopf notes for half the first semester, which was awesome), where days were as long as the integrals. A model is not necessarily indicative of what’s actually happening, even if it has great predictive power. I suppose this could be an argument for naturalness and a prelude to the noodle dorks. Maybe it’s just as simple as QM isn’t the best thing to explain some of the things that come out of QM, which we’ve seen with less irksome theories, like GR. Nobody physically interprets renormalization but it works. I suppose this is the point with probability theories and even adjacent headaches, like the 2nd law, which have really not resolved much in decades but still allow people that deal with matter to do cool shit. Like the “macro” quantum guys out in CA that are detecting the gravitational signature of gopher farts underground. Amazing and practical.

  6. Abelard Lindsey said, on June 6, 2022 at 3:28 pm

    Thank you for writing this article, Scott. I really do appreciate it. Someone has to reply to all of this quantum woo-woo, and your posting is exactly what needs to be said.

    • Scott Locklin said, on June 6, 2022 at 4:47 pm

      Hey, I’m not sure Jaynes got it right, and I fully admit I’m not really qualified to adjudicate the thing. I think he’s right because I basically can’t see how he’s wrong, but I’m very willing to be convinced otherwise if some quantum wunderkind shows up and proves it.

      Either way I think it’s an extremely valuable conversation to have, somewhere beyond the redditor event horizon.

      • Abelard Lindsey said, on June 7, 2022 at 7:56 pm

        Something about the quantum woo woo simply rubs me the wrong way. Especially when you have guys like Jack Sarfatti claiming that the “hippies saved physics” when they did no such thing. You will note that ALL of our recent and current technological developments that came out of theoretical physics came about as a result of pre-1960s physics. So, you cannot say the hippies “saved” physics. If anything, they helped screw it up.

        Kudos for keeping an open mind about it, however.

  7. Brutus said, on June 6, 2022 at 5:13 pm

    Thanks for the references. It always seemed to me that mystical theories with “spooky action at a distance” were contradicted when Nelson was able to derive the Schrödinger equation from assuming a Brownian motion aether. If quantum theory can be derived from classical models, the mysticism has to go.

    • Scott Locklin said, on June 6, 2022 at 5:47 pm

      Yeah I remember Nelson’s thing. For those who don’t:

      Click to access 1966_ENelson_Derivation_of_SchrodEq_from_NewtMech.pdf

      There were a few problems with it: I think it doesn’t work if you’re not O(2) in momentum in the Hamiltonian; I’ll just guess this is a solvable problem, along with spin, relativity and all that other stuff he ignores. More importantly, the stochastic motion of small particles implies this quantum colloidal aether -seems like you should be able to see that in the large. He does have a weird hand-wavey test which I think can be translated to observing electron positions in a microwave cavity, but it’s so vague and ill-posed he doesn’t say useful things like “build cavity A, if this theory is correct it will do Z which disagrees with quantum result Y.” Which of course is very typical with this sort of thing.

      I don’t remember where I came across it; maybe reading Carver Mead’s book.

      • mitchellporter said, on June 9, 2022 at 9:52 am

        Nelson’s theory, like Bohm’s theory, requires nonlocality to describe multi-particle situations, precisely because of the need to account for entanglement (empirically, nonfactorizable joint probability distributions).

        • Scott Locklin said, on June 10, 2022 at 12:52 pm

          Nelson’s theory is kind of a non-sequitur to the Jaynes discussion; it’s just another pilot wave thing that works less well than Bohm’s thing. I think it’s local, but whatever: who cares, it doesn’t work. The two interesting things from “Clearing up Mysteries” are that

          1) Entanglement is kind of a LARP; conditional probability exists and isn’t causal. Psychokinesis isn’t real, and maybe it’s just urns. Nothing mysterious about urns if you’re a Bayesian (and you should be; Frasian at least).

          2) Bell’s inequality leaves out important classes of hidden variable theory

          As I said, the literature, when it engages with these ideas at all, engages with 1, and mostly seems to think he was right about that. 2 is also interesting, but sparse enough that nobody’s managed to engage with it at all (I might have missed something; it was a few hundred references I skimmed through).

          There’s still some subjectivity to Bayesian probability; you have weirdos like Savage turning probability and statistics into some sort of subjective quantum weirdness. Jaynes seems to think Jeffreys solved all that; it’s been decades since I read Jaynes’ probability book and I have never read Jeffreys, so he might be right. I don’t think it matters much though; it has to be something like this (aka Bayesian probability is basically right, even if it doesn’t satisfy philosophers and mathematicians).

          • mitchellporter said, on June 14, 2022 at 12:10 am

            Bell inequalities and other bounds are meant to quantify how much correlation you can have in a world of local causality. The only potentially contentious assumption has been that all the properties of the physical system prior to measurement are statistically independent of the choice of which measurement to make. Questioning this may sound innocuous, but it means e.g. that if we choose to make a series of measurement choices based on Moby Dick translated into a Unicode bitstring, the measured system must have had a bias towards precisely that bitstring already lurking among its hidden variables. That’s the objection to “superdeterminism” a la Hossenfelder, and if this kind of questioning is what Jaynes is doing from equation 11 forwards, then the same objection applies.

            These problems arise in the first place because multi-particle wavefunctions are ontologically ‘polylocal’; the individual probability amplitude is the amplitude for there to be a particle at x1 *and* another particle at x2, etc. Bohm’s and Nelson’s theories are polylocal in this way too, I think; they involve (deterministic or stochastic) motion that is local in *configuration space*… Maybe you can mimic this by monkeying with the rules in other ways, e.g. two oppositely directed arrows of time (Feynman-Wheeler absorber theory), networks of micro-wormholes (similar to Maldacena and Susskind’s “ER=EPR”), micro-patches of non-orientable space-time (Mark Hadley; similar to Feynman-Wheeler). But I have essentially zero hope that local causal interactions among objects conventionally localized in space-time can explain quantum mechanics.

            • rademi said, on June 14, 2022 at 12:28 am

              The issue with “locality” is, roughly speaking, trying to characterize what we might characterize as the “size” and “density” of a photon or perhaps “its” contribution to the history of “its” light cone.

              This tends to raise confounding vocabulary issues, because the words we like using descriptively here (and their associated memories and experiences) aren’t specifically tailored all that much to discussions of electromagnetism.

              That’s why we get quips like “God does not play dice” when talking about characterizations of the probability that a “magnetic effect” might pull one way or another depending on incredibly minute details of relative position. And these difficulties are why we try so repeatedly to distinguish between our models of such systems (and the necessary lack of detail in those models) and the physical characteristics that we’re trying to learn about.

              Put differently, “The only potentially contentious assumption” is itself a potentially contentious assumption.

            • Scott Locklin said, on June 14, 2022 at 3:52 pm

              You should read Savage. He’s basically applying what you said to non-quantum probability theory everywhere; meaning people with urns are doing non-local psychokinesis.

  8. DSJ said, on June 6, 2022 at 7:57 pm

    Scott, I always appreciate your attempts to fight back against our civilization-scale efforts to achieve peak bullshit. (downright sisyphean but I presume it’s as cathartic to write as it is to read)

    As opposed to COVID nonsense, this is one of those ‘old school obfuscations’ that most tend to ignore. Sort of tells you just how irrelevant we have made deep physics in current year.

    Anyway, enough of that – back to tune my quarkoscope and gluon-ray tubes!

    • Scott Locklin said, on June 6, 2022 at 10:32 pm

      On the contrary, billions, perhaps tens of billions, have been invested and thousands of people work on things which assume hundreds of thousands of elements can be “quantum entangled” -despite the fact that nobody knows how to do this … or, if Jaynes was correct, whether it even means anything physical.

      Look at this clown from the Hudson institute (megadeath inc) calling two famous quantum computing critics traitors more or less for doubting overt frauds.

      https://www.forbes.com/sites/arthurherman/2022/05/19/waging-war-on-quantum/?sh=5c9cfe8e1889

  9. rademi said, on June 6, 2022 at 8:45 pm

    I noticed something odd, reading the pdf of the 1988 Jaynes paper.

    On page 11, I was staring at equation 15 and trying to figure out how it made sense. And then, I clicked on the page and it changed (the minus signs I had been staring at changed into equal signs.)

    So now I am wondering how many of the bad takes I’ve been noticing lately are a consequence of rendering problems…

  10. Cameron B said, on June 7, 2022 at 2:08 am

    I’d love to contribute to the discussion but my quantum knowledge is incompetent (however, the disdain in “clearing up mysteries” is the fantasy for every non-crack-smoking liberal arts publisher).

    Regarding your previous post, approximately how many pages of the George Kennan book examined Christianity?

    • Scott Locklin said, on June 7, 2022 at 2:41 pm

      I don’t remember it being important in his diaries, though he was a committed Christian. If you mean Around the Cragged Hill, there is a chapter where he makes a declaration of faith and tries to back it up with observations about life.

      • Cameron B said, on June 11, 2022 at 12:31 am

        Excellent, thanks. And, if you didn’t already know, Netflix released a final Norm special.

  11. Mongoose said, on June 7, 2022 at 2:35 am

    I’m also very fond of Jaynes’ work, and because of this I was quite excited when I happened across this paper at the same time I was getting into the meat of QM and starting to worry about things like Bell’s theorem. Unfortunately I came to the conclusion that it is Jaynes who is mistaken. I think his mistake is understandable as he spent the latter half of his career pointing out problems with how people use probability, so it’s natural that he would jump at what may seem at first glance to be dodgy probabilities.

    The problem comes from the fact that conditional probability IS causation. However the causation is not necessarily from one event to another. In the case of Bernoulli’s urn the conditional probability comes from the fact that the colours of the marbles have a common cause. In Bell’s theorem however, any common cause can be considered part of the hidden variables because this is the fundamental assumption: that the observed correlations are due to a common cause in the past light cones of the measurements. I’m not sure how well established the subject of causal modelling was at the time of Jaynes’ writing (the main text in the field, Pearl’s book on causality, was published in 2000) so this may explain the confusion.

    As someone else has mentioned, there is a stronger version of Bell’s theorem using GHZ states that doesn’t require any statistical argument.

    My favourite explanation of Bell’s theorem is Tim Maudlin’s lecture on the topic titled ‘What Bell Did’, and I highly recommend watching it if you have the time: https://www.youtube.com/watch?v=Vg5z_zeZP60

    But the real killer for Jaynes’ argument is actually discussed in his paper, and that’s Steve Gull’s version of Bell’s theorem. https://www.mrao.cam.ac.uk/~steve/maxent2009/ This basically boils down to “If there is a classical model that reproduces EPR statistics, make it.” There are no physical assumptions, just computer programming. Gull’s takeaway is that some form of retrocausal/teleological dynamics is required, as this violates the assumption in Bell’s theorem that the hidden variables don’t depend on the measurement settings. Jaynes in his paper writes that he expects this ‘teleological spook’ to be ‘exorcised’ at some point, however this is yet to happen, and in fact the idea is growing in popularity to the point that Sabine Hossenfelder organised a conference around it.

    On a related note, Rob Spekkens started a research program a while back in the spirit of Jaynes’ work, the idea being to attempt to derive QM from classical statistical mechanics by adding in something like an uncertainty principle. In doing so ‘unscrambling the omelette’ of ontology and epistemology as Jaynes would say. They’ve made some progress, but there’s also a paper by Leslie Ballentine that argues against the interpretation of all probabilities in QM as being information theoretic in origin: https://link.springer.com/article/10.1007/s10701-016-9991-0

    • Scott Locklin said, on June 7, 2022 at 8:49 am

      I’m not sure what I’m supposed to be looking at; hopefully it isn’t this: https://www.mrao.cam.ac.uk/~steve/maxent2009/images/bell.pdf

      Edit add:
      The lecture is pretty good for history (I have Zurek’s compilation of the historical papers), but it’s the standard thing that Jaynes was speaking against. I also completely disagree that conditional probability is causation. Urns bro. Of course causation can be modeled with conditional probability, that doesn’t make them the same thing any more than “taking the mean” is equivalent to, say, height.

      • Mongoose said, on June 8, 2022 at 1:44 am

        The results of an EPR experiment either have a common cause in terms of hidden variables or they do not. If not, then the observed correlations do not have an explanation.

        The most general factorisation of the probability for results ‘A’ and ‘B’ under measurement settings ‘a’ and ‘b’ and hidden variables ‘L’ as Jaynes states is P(AB|abL) = P(A|BabL)*P(B|abL)

        (Note that this is assuming that the choice of measurement settings are independent from one another as well as the hidden variables. This assumption is violated by retrocausal/superdeterministic approaches, but Jaynes thinks it is a reasonable assumption so we will run with it.)

        Now the problem for Jaynes is that Bell writes P(AB|abL) = P(A|aL)*P(B|bL) under the assumption that the conditional probability P(A|BabL) implies a non-local causal relation between A and {B,b}. Jaynes correctly points out that a conditional probability does not necessarily imply causation. However the issue is that whatever common causes do exist between A and B in the EPR scenario, if they are local then they can be described by the hidden variables. So either the more general factorisation is redundant or it is necessary because there are non-local causes. In the case of the balls from the urn that are removed and then moved apart before being looked at, it is correct to write P(AB|abL) = P(A|aL)*P(B|bL) because which ball is which colour is completely specified by the local hidden variables.

        I think some confusion arises because we tend to think of the probabilities as subjective. If I am person B, and know that I share a state |00>+|11> with person A, and then I measure in the {|0>,|1>} basis and get the result ‘1’ I can immediately update my probability distribution for A’s results as a function of their choice of measurement – because I know that their state must now be |1>. But there is no cause or communication here, simply an update in my own information.

        This is the wrong way to think about a Bell test though. Instead consider this: Person C repeatedly makes the same state |00>+|11> and sends a mode each to persons A and B, who then choose measurements to perform and then send their choice of measurement and their results to another person D. However C does not tell A and B what the state is or that it is entangled, and as a consequence to A and B their results are meaningless random numbers and they do not know that their results are correlated. It is then D’s job to explain the correlations he sees between the results of A and B. D has two hypotheses: either the correlations are from a non-local interaction, or they have a local common cause in which case he can use the distribution P(AB|abL) = P(A|aL)*P(B|bL) (again this is because if the results are caused by the hidden variables, conditioning on {B,b} will not change the probability distribution for A). Bell’s theorem shows that the latter cannot replicate the correlations that are predicted by QM and have been found experimentally.
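
        (A quick numerical illustration of D’s problem, sketched in Python and not taken from any of the papers above: enumerate every deterministic local assignment of outcomes and check that none of them reaches the CHSH value quantum mechanics predicts for that state.)

        ```python
        import itertools

        # A deterministic local model: the hidden variables fix A's outcome for each
        # of her two settings and B's outcome for each of his, independently of the
        # far side. Any local stochastic model is a mixture of these, so its CHSH
        # value cannot exceed the deterministic maximum.
        best = 0
        for A1, A2, B1, B2 in itertools.product([+1, -1], repeat=4):
            S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
            best = max(best, abs(S))

        print("max CHSH over local deterministic strategies =", best)  # prints 2
        print("quantum prediction for the |00>+|11> state    = 2*sqrt(2) ~ 2.83")
        ```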

        We’re then left with a few options. Non-local hidden variables a la Bohm. Violating the statistical independence assumption. Or denying that the correlations require explanation. There is a camp of physicists who call themselves ‘Qbists’ and maintain the third position. It is basically the Copenhagen interpretation with extra steps. Ironically though, they don’t shut up about Jaynes while simultaneously promoting the mind-projection-fallacy and things like “physics is information” that he argued so strongly against.

        Unfortunately that’s the only writing I could find by Gull about his argument that stumped Jaynes. I don’t know if the original conference paper has been lost. Richard Gill has written up a more formal version of the argument: https://www.mdpi.com/1099-4300/24/5/679

        • Igor Bukanov said, on June 8, 2022 at 7:25 am

          As I wrote in another comment, there are more options that can explain the observed experiments. The first is that not everything is registered by B and C. If the probability to detect depends on the hidden state, then it is possible to explain results classically. But proposed schemes required quite substantial loss and as I understand the recent experiments got enough sensitivity to rule this out.

          The second option is the backward causation. Ordinary forward causation implies that from the state at the initial moment one can deduce the future of the system or, as the relevant equations are symmetrical in time, its past, as long as the future and past are inside the light cones.

          Backward causation effectively states that this is not true. The full description of the system requires more information than its state at the initial moment. For example, to describe the state inside the future light cone in addition to the initial conditions some knowledge of conditions on the light cone boundaries may be required.

          • Mongoose said, on June 8, 2022 at 1:04 pm

            Backwards causation is my personal view. This gets around Bell’s theorem by violating the assumption of independence between the hidden variables and the choice of measurement. However it’s worth pointing out that this doesn’t remove the non-locality from QM. Rather it provides an explicit mechanism for non-local interactions, but it doesn’t appear to conflict with relativity in principle. Anyone interested in this should read Huw Price’s book on the topic.

    • a scruffian said, on June 18, 2022 at 1:42 am

      But the real killer for Jaynes argument is actually discussed in his paper, and that’s Steve Gull’s version of Bell’s theorem. This basically boils down to “If there is a classical model that reproduces EPR statistics, make it.”

      Were you listening to the Dude’s story Donny? https://arxiv.org/ftp/arxiv/papers/1205/1205.4636.pdf — Bell ineq. violated with marbles colliding.

      Apparently it was also done in 1992 by some guys in Brazil. You put a lightbulb which flashes at random times in a rotating tube. Spinning on the ends of the tube are two polarizers at a fixed angle relative to each other. Outside the tube are two more polarizers fixed in the lab and then intensity detectors.

      Einstein-Podolsky Correlation for Light Polarization
      Mizrahi, Salomon S. ; Moussa, Miled H. Y.

      Considering a classical source of light (macroscopic), we propose an experiment, based on the principles of the Einstein-Podolsky-Rosen-Bohm correlation, for which one expects to obtain the same polarization correlation coefficient as the one predicted by the quantum theory, when photons are counted in coincidence. The results of a numerical simulation give good ground to believe that the conjectured experiment is reasonable. So, one may argue that the property of light called polarization, that is manifest at any level — microscopic and macroscopic — and which has a precise description in both, the quantum and the classical theories, leads to coincident results under correspondingly similar experimental procedures. Therefore the EPRB correlation is a consequence of that property of light, independently whether it is viewed as constituted by photons or by electromagnetic waves.

      This is recapitulated in the second Kracklauer article linked in OP: https://www.degruyter.com/document/doi/10.1515/phys-2017-0088/html

      Pirate link: https://libgen.lc/edition.php?id=48590541

  12. glaucous_noise said, on June 7, 2022 at 12:55 pm

    I’m currently in the quantum optics world and the density of spooky bullshit, mediocre papers, and vague, ambiguous results and objectives is disturbing. My background is engineering solid state physics/optoelectronics (e.g. simulating transistors, quantum effects in lasers etc), so the culture is utterly alien. My theory peers behave like decadent aristocrats whose sole purpose in consuming taxpayer funds is edifying their intellectual fetishes.

    Entanglement really does look to me like nothing more than a fancy correlation. Ironically, Wigner’s poorly known contribution of the Wigner transform and its associated equation is what led me to this conclusion more than anything else. The transform takes a wavefunction and transforms it into a scalar function in phase space. If it is a mixed state, an oscillating density in phase space appears which is identical to the cross correlation one obtains by applying the transform in signal processing. This is called “quantum coherence”, “quantum interference”, or “quantum entanglement” depending on the situation (they are all identical, save that the latter occurs when the field in question depends on two or more degrees of freedom).
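
    (If anyone wants to see that cross term without trusting me: here is a crude numerical Wigner function for a superposition of two Gaussian wave packets, my own toy sketch with hbar = 1, not from any of the literature cited here. The two packets give two positive blobs; the region between them oscillates and goes negative.)

    ```python
    import numpy as np

    # Toy Wigner function, hbar = 1, for an (unnormalized) superposition of two
    # Gaussian wave packets centered at x = +3 and x = -3.
    def psi(xpts, a=3.0):
        return np.exp(-(xpts - a) ** 2 / 2.0) + np.exp(-(xpts + a) ** 2 / 2.0)

    y = np.linspace(-8.0, 8.0, 801)

    def wigner(xv, p):
        # W(x, p) = (1/pi) * integral dy  psi*(x + y) psi(x - y) exp(2 i p y)
        integrand = np.conj(psi(xv + y)) * psi(xv - y) * np.exp(2j * p * y)
        return float(np.real(np.trapz(integrand, y))) / np.pi

    # x = +3 sits on one of the "classical" blobs; x = 0 sits on the cross term,
    # which oscillates in p and goes negative -- the "quantum coherence" piece.
    for p in (0.0, 0.5, 1.0):
        print(f"p = {p:.1f}   W(3, p) = {wigner(3.0, p): .3f}   W(0, p) = {wigner(0.0, p): .3f}")
    ```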

    The equation is more interesting. Only when the classical potential experiences a turning point does this oscillation in phase space appear. Whenever the scalar Wigner function is “split” or “sheared” in phase space, extending its volume in such a way as to violate the uncertainty principle, the oscillation rears its ugly head to ruthlessly enforce it. Otherwise, for a harmonic or less potential, it’s just the classical Liouville equation.

    More or less everything propagates classically except for these nonlocal oscillating regions.

    The spectral properties of the entanglement are also interesting, as they pretty much entirely recapitulate conservation laws.

    Your post increased my respect for Jaynes; I have always reviled him for being the progenitor of the “information is physical” and “Boltzmann entropy is Shannon entropy” horseshit that has polluted countless areas of applied physics, from quantum information to biophysics, but it seems he was sharper than I had thought.

    • Scott Locklin said, on June 7, 2022 at 2:33 pm

      Very interesting; I never noodled with the Wigner transform. Maybe on a rainy day.

      Information theory is just another form of probability theory; a binary probability theory. I was kind of mind-blown when I saw the connection reading some Herman Haken book (Synergetics turned out to be kind of bullshit), but now it’s second nature. Information theorists obviously have different priorities from people who do statistical physics, sort of like signal processors and machine learners (all of which are basically a kind of probability theory). It’s all a piece of the elephant somehow.

      Anyway Rovelli has some idea where QM is just Information theory on some level; I was in the room with only a dozen other people when he unveiled it in the 90s. It didn’t quite work out, but he’s got some extension of it I can’t be arsed to look at. Wouldn’t surprise me if he ultimately figures it out; he’s really a first rate creative mind.

  13. Darth Vader of Internets said, on June 7, 2022 at 2:28 pm

    Definitely would be interesting if more mathematicians tried to place causality more firmly into probability. Judea Pearl took a stab at it — I looked at his approach as part of trying to order financial effects beyond just doing the old Granger-causality tests (always pained me Granger got his name attached to something totally *duh* — I assume it is because economists cannot look up anything except by first attributing it to some random person). Pearl’s stuff wasn’t that practically useful and has some serious problems especially as regards assuming there is a way to implicitly tweak the underlying Spaghetti monster’s noodles but — the man took a stab at it — kudos. If anyone on this thread has some comments on how his stuff relates to hidden-variable results referenced here I would be interested.

    • Scott Locklin said, on June 7, 2022 at 2:37 pm

      I thumbed through it, but it hurt my brain too much. I don’t have much patience any more with things I can’t turn into tools.

      His various PGM doodads are pretty helpful but I’m not sure they’re directly related to his more recent books.

      • Darth Vader of Internets said, on June 7, 2022 at 2:59 pm

        Yes. Ironic that his main operator is ‘do(x)’ and when you try to apply it your first question is ‘do(WHAT exactly?)’ Not a very readable book or author, for sure.

        Information theory I like a lot. I came to it as an EE but was luckily taught that it is the probability framework you get when you replace Riemann/Stieltjes integration for computing event probabilities with Lebesgue integration across symbols, and the main results and definitions all fall out.

        Again, outside coding theory, though, not as practically useful as one may think — for example, when given a set of actual data from two financial time series consider the task of computing their joint information without assuming some generating distribution form. Turns out this is actually quite hard — the obvious approaches are almost always some form of density estimation plus secondary computation and are awful on limited data. Some work has been done by looking at co-occurrence of symbols etc. but this is still a very unexplored area. Or maybe it is just never going to be useful on problems without zillions of samples.

        • Scott Locklin said, on June 7, 2022 at 3:19 pm

          I dunno I use something like that as a feature selector; you quantize and look at what changes when the other thing changes. I think dcor is more or less that with more MIT marketing or whatever. Works great when things aint linear or are incredibly noisy. I shouldn’t tell you this, Dr. “I work with incredibly noisy things” but it’s not like it’s some big secret or anything.
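
          Something like this toy version, if you want the flavor (a rough sketch of the quantize-and-count idea, not anything I actually run in anger): bin both series and estimate mutual information from the joint histogram.

          ```python
          import numpy as np

          def quantized_mutual_info(x, y, bins=8):
              """Crude mutual information estimate: quantize both series and count."""
              joint, _, _ = np.histogram2d(x, y, bins=bins)
              pxy = joint / joint.sum()
              px = pxy.sum(axis=1, keepdims=True)
              py = pxy.sum(axis=0, keepdims=True)
              nonzero = pxy > 0
              return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

          # Toy check: a noisy nonlinear relationship shows up in the MI even though
          # the linear correlation is roughly zero.
          rng = np.random.default_rng(0)
          x = rng.normal(size=20_000)
          y = np.abs(x) + 0.3 * rng.normal(size=20_000)
          print("corr(x, y) ~", round(float(np.corrcoef(x, y)[0, 1]), 3))
          print("MI(x, y)   ~", round(quantized_mutual_info(x, y), 3), "nats")
          ```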

          Also “aggregating experts” is a neat trick, straight from Thomas Cover book. Well by way of “Perdition Burning and Flames” anyway. I’ll blerg on that one day.

          • chiral3 said, on June 7, 2022 at 4:50 pm

            I read Pearl’s book and found it laborious. He makes a simple point, which I agree with, that we dabble in maths that are divorced from causality, but I was reminded of the Hitchens quote “Everyone has a book in them and that, in most cases, is where it should stay” in that it could have been shortened to an op-ed. Not to say that mindlessly pumping Ax=b with data isn’t useful, but it’s become an albatross on research in a world brimming with data.

            Likelihood of realistically producing massive amounts of qubits to do real problems aside, and speaking of Ax=b, I wrote down the quantum version some years ago, A large, A sparse, as in a practical sense, and it just seemed silly to solve the most ubiquitous problem that way. I don’t know much about cryptography, but it seems prudent for NIST et al to move to lattice methods or something similar anyway as part of some future-proofing scheme. I can’t listicle all the use cases for a QC but I’d be interested in how many of them the QC solution is actually a benefit for over classical. Factoring a 2 digit number with the answer known a priori to overcome noise seems like a meatball of a tollgate to satisfy investors.

        • danielgarvey00 said, on June 26, 2022 at 5:16 pm

          Most of the time in ML, people use variational lower or upper bounds.

    • rademi said, on June 7, 2022 at 5:57 pm

      “something totally *duh*” often winds up being a lot better than many alternatives.

  14. nater said, on June 7, 2022 at 8:01 pm

    I am pretty sure that I am starting to understand what you are talking about. Thank you for breaking it down.

  15. Thor said, on June 8, 2022 at 1:33 am

    I have ascended to a state of mind where the whole world is mystical, so now there is no meaningful distinction between myth and saga anymore. I can only talk in riddles, or make the clear appear like an enigma. It is pragmatic and unworkable all at once, because you start to forge unacceptable truths, and such truths will not be accepted by others, no matter your eloquence; Persuasion quickly becomes homeomorphic to intimidation.

    > The spooky superluminal stuff would follow from Hidden Assumption (assuming conditional probability means causal influence); but that assumption disappears as soon as we recognize, with Jeffreys and Bohr, that what is traveling faster than light is not a physical causal influence, but only a logical inference.

    Apart from causality, which is as real as systems are real — though whether the left domino block hits the right domino block, or the right attracts the left, depends not on the blocks or their configuration, but on the vantage point of the observation — there is always another confounder: we live in a turbulent lossy stochastic system, but to decide what is reality, we ask the Wheelerian yes-no binary question; not how God meant to communicate it, but how human monkeys manage to hunt for food and not starve: if vast majority of tribe says the saber tooth tigers are that-a-way, such tigers will never be found in the opposite direction, for nobody goes there.

    So we want to know whether we got either of Bell’s Balls, the red one is not edible, the black one is, so we make a binary decision, and those who chose correctly, procreated and evolved, as it paid to be certain in decision-making as an evolutionary strategy. For this sin of our ancestors, we suffer the Curse of the Grayface, where things either are or are not, and not not things are things too. Seems rather mystical too if you read it like that, but these are hard unescapable rules. For even by observing one of Bell’s Balls, we cannot be a 100% certain we are looking at a black or red ball. Some demon lab assistant could have messed with our measuring stick. Our binary schema of “things are or are not, but not at once”, a sort of misplaced learned object-permanency, dictates what we get from reality (by which Wheelerian question we ask, and get an answer for, we are allowed to say meaningful things about some aspect of an object). This illusion is just as real as for the realists who struggle with entanglement. We may have taken too much of that Santa Fe LSD, and see double or nothing at all.

    Finally, the other ball, if assumed to be “red”, now is not suddenly “black” by logical inference, much like finding a Black Swan does not exclude the possibility of red swans. Light may work different in that part of the universe, so any observer there views and computes “red” as “black”. Also the probability of going through a portal which changes matter or time, with a preference for balls, is worth at least 1/10**6 %. We are back at probabilities and intuition, broken all of post-Hilbert mathematics, and may just as well substitute causal entanglement with probabilistic ball measuring. We end up with a fuzzy causality, subjective even, and the pragmatic question of: can humans visit all possible universes or not? If you care to join me, we then live in a world where 1+1 may not equal 2 for 100% of all measurements, by all means and purposes a mystical world. But at least we are outside of Plato’s cave, and do not bother with the illusion of boolean logic dictating our realities. Why follow an illusion, when you can make your own, and see more clearly?

    • Thor said, on June 8, 2022 at 1:20 pm

      The only real (mathematical) tool that we humans have is the power of imagination. It makes everything, including hammers, look like nails.

      Imagination is the only thing which may violate the speed of light without a divine speeding ticket. No particles, waves, causal influences, or logical inferences can travel faster than the speed of light, but we may keep playing the game of relativity in our imagination forever.

      Imagine a particle in your mind traveling at the speed of light. You can then imagine another particle slowly but surely overtaking it. But this rule is only broken in your mind, and does not paint a realistic picture of reality. These problems with spooky action at a distance are problems of the power of imagination, and not a problem of puzzle pieces not fitting together.

      Logical inference is just some common agreed upon imaginative rules, so we can play solipsism as a multi-player game without crashing due to a buffer overflow.

      • Thor said, on June 8, 2022 at 6:46 pm

        > Of course I’m also biased towards such “flipping over the card table” ideas, but it seems to me as there are entire fields with thousands of workers whose alleged subject is based around quantum entanglement, someone should be able to explain to me why Jaynes is wrong here. I realize they’re busy, possibly getting steaks for statisticians, but surely one of them has the time to dispel the ignorance of bit twiddling wretches such as myself.

        As long as the string theorists are ramen noodle profitable, they will keep themselves and others busy. They are acting out an economic necessity; by pointing out the bullshit, we may look for the valuable cows who must be around somewhere. You probably would not have written this blogpost if string theorists were all proven right, and then I would not have spent an evening falling down the rabbit hole of Dreiss’s Fourth Law of Thermodynamics. To fall down a hole, you need a hole to fall into, and perhaps these thousands of workers are digging them. Good for all of us. I’ve been chasing your white rabbit on neglected machine learning ideas for a while now and it is great fun, if anything. Thanks.

    • rademi said, on June 8, 2022 at 6:13 pm

      Yeah… mysticism, and similar approaches (aka “religious nonsense”) should be accepted when they are correct and rejected when they are observably false.

      More specifically: when we are dealing with extremely complex systems, we should not let terminology be a distraction. Instead, we should understand what’s going on (which typically requires practical efforts).

      That said, the weakness of logic is that it gives garbage results when you feed it garbage. Probabilistic and statistical approaches inherit this weakness.

  16. Frank said, on June 9, 2022 at 9:58 am

    I am not a physicist and don’t know much about Quantum theories at all. My maths is rusty and I’ve forgotten basic formulas from school. I don’t think I can remember the quadratic equation solver formula. Regardless I wanted to understand Quantum stuff a bit more so I picked up a book introducing it and tried to slowly slog through all the math.

    I managed to get through most of the book, on QM specifically, and I pretty much understood it. But I found it very dissatisfying. It describes the probability vectors and how if you measure an electron in a field it will have a probability of having an orientation depending on what angle you measure it again etc… It’s all about how to calculate the probabilities of these things happening. But nowhere does it go into WHY it happens.

    This idea of “there’s no underlying mechanism from the perspective of QM and we don’t need to know why even if there is one” seems to be attached to anything I read about the field. I found it all very antithetical to the spirit of pulling things apart and trying to understand deeper. It all seems to come across as a barrier to entry, an attempt to say “look no further!” But that’s just my uninformed and naive opinion.

  17. anonymous said, on June 12, 2022 at 1:38 pm

    Thanks for writing this. It’s one of the things that bugs me about quantum mechanics. 100 years, and we still have no clear idea of what entanglement really is.

    IMO, the state of a quantum description of nature has no business looking so much like a joint probability distribution if there isn’t some kind of purely epistemological “our-subjective-knowledge-of-the-underlying-state” component to it. Joint probability, probability in general, is what it is for *logical* reasons, not mechanical reasons. Nature does what it does for mechanical reasons. The two shouldn’t be munged up together to the extent that members of my department are entertaining solipsism as an answer.

    One thing that occurred to me about the Bell paper back when I read it (I need to re-read it) is that when they switched from their quantum description of the problem to their “classical” contrasting description, they switched from a wave-mechanics picture of how light travels through polarizing filters to a “little-bullets-of-light” picture (which isn’t really the classical picture of a light wave.) So a joint distribution over what? If it’s a distribution over the state of a wave system, we should have a probability distribution over a polarization state and amplitude state. A “photon” is allowed to half (or whatever proportion) travel through a polarizing filter. Discreteness probably doesn’t happen until you get to the PMT.

    Another thing that occurs to me: How wide physically is a “photon”? People will talk about the “natural line width” as the smallest amount of broadening in a spectral line and associate some timescale with that: The timescale ends up being one cycle of whatever wave we’re talking about. Real broadening ends up being larger than that due to other influences (collisions etc), which operate at higher, or comparable frequencies.

    But when an atom emits light semi-classically, it does so like any other antenna: Charge distributions oscillate over several periods and spit out a wavetrain before settling down. This timescale (I’m forgetting the name, we covered it in laser physics class) is much longer than any of these other timescales, and makes me wonder how large the “photons” are. Do they extend over tens of wavelengths? cm? Meters, to the point where the scale of your laboratory apparatus becomes important? Maybe something transactional is going on, similar to feedback from your optics bench into a laser cavity during the “measurement” of a photon.

    • rademi said, on June 12, 2022 at 6:00 pm

      It’s not really meaningful to talk about the “width” of a photon. We can talk about the wavelength of a photon, but we have pretty good reason to believe that photons are electromagnetic in character, which gives us a rough idea of their structure — and that structure is not a rigid sphere.

      Rigid spheres are a convenient oversimplification, in some contexts, and oversimplifications are fun. But …

    • Mongoose said, on June 16, 2022 at 4:43 am

      The particle picture of photons is nonsense and doesn’t actually explain the quantum properties of light. Its historical motivations (black-body radiation, photo-electric effect) have semi-classical explanations, and the things that quantisation is needed to explain (HOM interference, squeezing etc) don’t make sense in a particle picture. Lamb complained about this in his paper ‘Anti-Photon’ which I strongly recommend.

  18. anonymous said, on June 12, 2022 at 1:49 pm

    whoops, sorry. That’s not natural linewidth. Typing before coffee. Smallest delta-E about a line probably does end up comparable to the atomic “ring-down-time”.

  19. anonymous said, on June 12, 2022 at 2:06 pm

    “Typical atomic state lifetimes are 1E-8 sec, or 6E-8 eV” … so meters. A semiclassical “photon” is meters wide. Nothing with a line-width so narrow can be anything smaller! There’s physics in them thar facts.
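
    Quick back-of-envelope check of those numbers, assuming the usual delta-E ~ hbar/tau and length ~ c*tau:

    ```python
    # Rough check: energy width from the lifetime, and the length of the
    # emitted wavetrain, using hbar/tau and c*tau respectively.
    hbar_eVs = 6.582e-16   # eV * s
    c = 3.0e8              # m / s
    tau = 1e-8             # s, typical atomic state lifetime

    print(f"energy width     ~ {hbar_eVs / tau:.1e} eV")   # ~ 6.6e-08 eV
    print(f"wavetrain length ~ {c * tau:.1f} m")           # ~ 3.0 m
    ```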

    • Scott Locklin said, on June 12, 2022 at 3:04 pm

      The only length I can think of which corresponds to line width is the spectrometer dispersion length. Otherwise, visible light is still much larger in wavelength (600-400nm) than the size of an atom (0.3nm). Which is kind of the point of QM; it’s weird something that small pops out photons that big.

      • anonymous said, on June 12, 2022 at 3:27 pm

        The frequency/timescale I was thinking of was the Rabi frequency from the 2-level-atom models, which is pretty directly related to Einstein A/B coefficients/electric dipole moments/etc.

        I suppose what I’m getting at is that the “little bullets of light” picture of a photon is treacherous, maybe only useful from an energy/momentum accounting perspective. Between the time an atom has started and finished a transition, the leading edge of that radiation should be all the way across the optics bench (or have made a complicated round-trip through the whole setup.)

        And we know from the narrowness of the line how many coherent cycles in a wave must be contributing.

        • anonymous said, on June 12, 2022 at 3:40 pm

          Also, (semiclassically, since that’s how I think, still learning QFT) a whole gob of wave-crests has to sweep by an atom before it can be completely kicked from the ground state to an excited state. Absorption and emission events probably *can’t* be thought of as isolated local things on an atomic scale.

          So what’s going on in our amplifier/PMT/single-photon-counter type devices? Not little bullets of light hitting a sensitive surface!

          • Scott Locklin said, on June 12, 2022 at 4:33 pm

            I dunno man photoelectric effect seems pretty real at this point.

            • anonymous said, on June 15, 2022 at 12:31 pm

              There is a semiclassical account of the photoelectric effect, but I want to revisit the math before dumping a wall of text. IIRC, the behavior was that the rate at which bound-free transitions occur is proportional to the amplitude of the light, but with a strong frequency dependence. In the two-level atom model, only frequencies within the linewidth obtain significant transition amplitudes within the Rabi cycle.
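              For what it’s worth, the textbook two-level-atom result I have in mind (standard Rabi flopping with detuning, not anything specific to that paper) is

              P_e(t) = \frac{\Omega^2}{\Omega^2 + \Delta^2}\,\sin^2\!\Big(\tfrac{1}{2}\sqrt{\Omega^2+\Delta^2}\,t\Big), \qquad \Omega = \frac{d\,E_0}{\hbar}, \quad \Delta = \omega - \omega_0,

              so the field amplitude E_0 enters through the Rabi frequency Ω, and the transition probability is strongly suppressed once the detuning Δ exceeds Ω (roughly, the linewidth).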

      • rademi said, on June 12, 2022 at 6:14 pm

        If you think about the time/space equivalences, the “length” of an atom along the “time dimension” is … big.

  20. a scruffian said, on June 12, 2022 at 7:33 pm

    Another money quote from the Froehner 1998 paper, p. 646:

    The spin-statistics relationship is widely believed to be inexplicable without relativity and quantum field theory. Here it appears, however, as a nonrelativistic consequence of the two angular periodicities allowed by the Riesz-Fejer theorem for wave functions in ordinary space.

    NICE

    • Scott Locklin said, on June 13, 2022 at 9:04 am

      All these papers are great; sarcastic and … almost in disbelief at how the entire scientific community overlooked a bunch of obvious stuff for fear of looking dumb.

      • Sprewell said, on June 14, 2022 at 5:24 pm

        Isn’t that evidence that nobody really understands what’s going on? I blame the over-mathematization of Physics, a la Hossenfelder, though of course the experimental results are weird too.

        Maybe Digital Physics (I’m a skeptic, though I wouldn’t rule it out), or at the very least moving everything including theory to open-source computational modeling instead, presents a way out.

        • Frank said, on June 14, 2022 at 8:31 pm

          I also can’t help feeling that the maths puts up too much of a barrier to entry, and that it is in the interests of the organisations involved to keep it an ivory tower.

          That said, I am hopeful that GPT and the like might be able to change it all. Perhaps we could conduct some experiments taking nuggets of QM / QFT / etc and asking GPT to ELI5 it and see what happens. If we could get a bunch of QM ELI5 in English but in a rigorous, useful form, not just some hand-wavy crap, we might open the doors a bit and get someone with a fresh look at things in.

          • Scott Locklin said, on June 15, 2022 at 10:31 am

            GPT is basically KNN, so if you train it on my blerg it will hate nanotech and quantum computing and tell you that quantum mysticism is probably bullshit but I dunno yet.

            It’s amusing that most “AI” chatbots that impress anybody seem to include lots of anon Honne training data, rather than just Tatemae NPC gibberings. Making them all incredibly racist.

            • anonymous said, on June 15, 2022 at 1:05 pm

              For a while a few years back I was reading stuff from an internet subculture called the Transhumanists. Fun as a sort of sci-fi imagination exercise. Also the affirmation of technological progress was attractive. Why would I have become an engineer/scientist if I didn’t want to extend the reach and power of man with technology? If we could do X (no matter how wild the change from historic circumstances), why wouldn’t we? How might we do X? (Though there is a vacuum of serious/practical details on the “how” part.)

              A more attractive worldview (on the surface) than many where men are seen as having all the agency of pieces of driftwood being blown about by impersonal vast historic forces. (Hi Spengler!)

              AI though is where they jump the shark. (Or it’s symptomatic of how they jump the shark in general.) Here, if you read them carefully, you can see them shift from “and then we will be as Gods” to being cultists of a sort of external god. They don’t so much want to extend the reach of men until we have the power and clarity of thought (achieved through whatever tools) to encompass superhuman problems. Instead they seem to want to delegate thinking to some other agency (usually some sort of superhuman AI.) Rather than the computer as a tool to help them think, they want the computer to do their thinking for them. Their utopian vision, in the end when it is made explicit, involves men being minded like children (or swept aside) and the robot gods having all the agency and power. Our brains would be vestigial organs, and we’d be doing the equivalent of the monkey hitting the cocaine button.

              I’d rather see a world where agency is distributed and made more potent. Prometheus didn’t give men a free lunch, he taught them how to light fire.

              Anyway, your description of “let a computer understand physics for us”, when getting a computer to unambiguously apprehend anything at all is a trick on the order of that first campfire in the stone-age, sort of triggered the memory of that dichotomy.

              • anonymous said, on June 15, 2022 at 1:23 pm

                What you’re proposing seems more on the order of “let’s look at a bunch of propositions lightly scrambled to the point where it might lead us in new directions.”

                I suppose I’m reacting more to ideas like the “automated math proofs” or some of Wolfram’s odder ideas where the computer is supposed to pull physics out of Platonic space merely by being incomprehensibly mathy.

                That was the thing that did it for me about the transhumanists: they seem to worship incomprehensible complexity. Their robot god will be too subtle and powerful, and what it would do with the world would leave us helpless to understand the resulting mess. As opposed to making men capable of understanding the world in all its complexity. Rather than developing subtle techniques to do things, which requires experience and understanding, build a robot god (some assembly required) that generates a bunch of magitech that you then wouldn’t have to understand. Just mash the cocaine button.

        • anonymous said, on June 15, 2022 at 12:39 pm

          My main point of skepticism about any sort of “digitalization” of physics is that requiring almost any sort of grid, no matter how randomized and fine in one reference frame, would develop anisotropies under a Lorentz boost to some other reference frame. Lorentz symmetry seems to require continuity of space.

          Even ignoring that, it might be possible to “hide” a rotational anisotropy, but you would find it eventually at a high enough energy / small enough spatial scale. So far we haven’t. People refer to the Planck length a lot, but that is just us combining a few constants to get a lengthscale at which we know that QFT descriptions of matter-waves don’t make sense, because the gravity due to the waves would significantly distort space.
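          For reference, the constant-juggling is just

          \ell_P = \sqrt{\hbar G / c^3} \approx 1.6\times10^{-35}\ \mathrm{m},

          a lengthscale assembled from ħ, G and c, not something any experiment has come close to probing.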

          (First) Quantization seems to occur within the context of a continuum theory, and doesn’t seem to be about discreteness. It seems to be a consequence of resonant modes of a wave theory. Second quantization is bolted on because of encountering countable particles, but it’d be weird if it didn’t end up being some mechanism related to first quantization, since they use the same Planck’s constant.

  21. Igor Bukanov said, on June 18, 2022 at 9:08 am

    I think the reason E.T. Jaynes is wrong is that Bell’s reasoning about factorization of P(AB|abλ) as P(A|aλ)P(B|bλ), see (14) in the paper, should be applied to the values of a,b,λ not at the moment of the measurement t1, but at the moment of the state preparation t0 of the measurement devices. At the moment the measurement is taken, Jaynes is clearly right: due to the common history accumulated since the state preparation, the independence assumption behind P(A(t1)B(t1)|a(t1)b(t1)λ(t1)) does not hold, and proper Bayesian inference should be applied.

    But what Bell was arguing was that P(A(t1)B(t1)|a(t0)b(t0)λ(t0)) = P(A(t1)|a(t0)λ(t0))P(B(t1)|b(t0)λ(t0)).

    If the measurement apparatuses at t0 are separated by a space-like interval from the system, then this factorization is essentially the definition of independence of events.

    In the example with balls drawn from an urn and sent far away, the independence assumption is trivially violated at t0. A proper example would be for a device to flash light when it receives a ball of a particular color, with the trigger color decided randomly at the moment when the device is already separated from the urn by a space-like interval.

    If one then draws the balls from the urn and sends them to the devices A and B, one should expect flashing/non-flashing to coincide, on average, 50% of the time. If one instead observes, say, that when device A receives the black ball and device B the white one they both flash 100% of the time, but when A receives the white ball the flashing still coincides only 50% of the time, then clearly something fishy is going on.
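    A minimal sketch of that setup (my own toy Monte Carlo, not from any paper): each device picks its trigger colour at random in advance and flashes only if the ball it receives matches. The flash/no-flash coincidence rate comes out at 50% however you condition on which ball A got, which is why a 100% coincidence conditional on A receiving black really would be fishy.

      import random

      def trial():
          # one black and one white ball drawn from the urn, one sent to each device
          balls = ['black', 'white']
          random.shuffle(balls)
          ball_a, ball_b = balls
          # each device chose its trigger colour at random, in advance,
          # while space-like separated from the urn
          trig_a = random.choice(['black', 'white'])
          trig_b = random.choice(['black', 'white'])
          coincide = (ball_a == trig_a) == (ball_b == trig_b)
          return ball_a, coincide

      n = 100_000
      results = [trial() for _ in range(n)]
      overall = sum(c for _, c in results) / n
      when_a_black = [c for b, c in results if b == 'black']
      print(f"coincidence rate, all trials:       {overall:.3f}")                                # ~0.5
      print(f"coincidence rate, A received black: {sum(when_a_black) / len(when_a_black):.3f}")  # ~0.5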

    • Scott Locklin said, on June 18, 2022 at 7:11 pm

      Bell is definitely saying conditional probability is causal somehow; I read his original series of papers. That’s definitely not true!

      You should check out the Brazool experiment above where they made a classical Aspect experiment.


      • Igor Bukanov said, on June 18, 2022 at 9:21 pm

        In the original paper https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf Bell discussed the justification for factoring the probabilities in a single sentence:

        The vital assumption [2] is that the result B for particle 2 does not depend on the setting of the magnet for particle 1, nor A on b.

        Then the description of [2] contains a citation:

        [2] – “But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system S2 is independent of what is done with the system S1 , which is spatially separated from the former.” A. EINSTEIN in Albert Einstein, Philosopher Scientist, (Edited by P. A. SCHILP) p. 85, Library of Living Philosophers, Evanston, Illinois (1949).

        Taken together, this clearly refers to the state of the measurement devices when they were separated by a space-like interval from each other and from the photon source, in which case the factoring of the joint probability into individual terms is, by definition, the notion of independence.

        Jaynes refers to the probability at the measurement time, when the devices can no longer be treated as independent.

        As for Brazool, the paper starts with this correction:

        This is an update of the article published in 2012. In particular we explain shortly that in the experiments discussed in the last section of this paper the inequality which is violated it is not Bell inequality.

        Then the paper uses similar reasoning to Jaynes’, arguing that at the moment of the experiment one cannot factor the joint probability distribution. I.e. they made the same mistake as Jaynes.

        • rademi said, on June 18, 2022 at 10:25 pm

          Unless we are talking about spatial separation which crosses an event horizon, there’s going to be a light cone which contains both A and B.

          • Igor Bukanov said, on June 18, 2022 at 11:36 pm

            The measurement devices can use relict radiation with photons coming from the opposite directions as the source of randomness to decide which experiment to make.

            • rademi said, on June 18, 2022 at 11:52 pm

              There’s still going to be a light cone which contains the sources of that relict radiation.

              Is any of this likely relevant in practical contexts? Maybe, maybe not.

          • Mongoose said, on June 19, 2022 at 12:47 am

            In Bell’s theorem it is assumed that the hidden variables can be described fully in the immediate past light cones of A and B, so that the overlapping region is irrelevant. If you are a strong determinist you can abandon this assumption though. Gerard ’t Hooft does this in his cellular automata model.

    • Mongoose said, on June 19, 2022 at 12:44 am

      Bell’s factorisation is valid because the point of conditioning on the hidden variables is to remove any classically explainable correlations. In the case of pulling balls out of urns the hidden variables are just the colours of the balls. In this case, Jaynes’ objection to Bell’s factorisation is equivalent to making the argument that even if you knew the colour of both A and B’s balls prior to their measurements, learning what B’s measurement outcome is will give you information about A’s outcome. This is obviously not true – you already have as much information as possible before either measurement.
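      Concretely, with λ labelling which ball went to which person (two equally likely cases), the marginal outcomes are perfectly anti-correlated, yet conditional on λ everything factorises trivially:

      P(A{=}\mathrm{black},\,B{=}\mathrm{black}) = 0 \;\neq\; P(A{=}\mathrm{black})\,P(B{=}\mathrm{black}) = \tfrac14, \qquad P(A,B\mid\lambda) = P(A\mid\lambda)\,P(B\mid\lambda),

      because, given λ, each outcome is already fixed and the other person’s measurement adds nothing.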

      Of course, if you think that conditional probabilities aren’t related to causes one could argue that because no information is transmitted from A to B by B’s choice of measurement there’s no non-locality, just strong correlations that have no cause. But this goes against the whole spirit of Jaynes’ paper, as you are now in the position of denying that quantum mechanics is in need of deeper explanations.

  22. anonymous said, on July 1, 2022 at 8:10 pm

    Here is another interesting paper:
    https://aapt.scitation.org/doi/10.1119/1.2735628

    Einstein had a semiclassical explanation (though he probably didn’t think of it that way at the time) for the Planck blackbody spectrum that can be extended to understand the transition between line emission and blackbody emission in atmospheres.

    It sort of hinges on understanding that the emission and absorption of light by atoms is *not* some instantaneous process at a discrete frequency, and only that frequency. Emission and absorption of light is a damped resonance process: there are dynamics to the event, there *must be* for these out-of-band emissions/absorptions to make sense. (Even so far out of band as to be all the way across the spectrum, over long enough optical depths.)

    • anonymous said, on July 1, 2022 at 8:16 pm

      That wasn’t the paper. That was a comment by some dork denying the point of the paper.
      Paper is here: https://aapt.scitation.org/doi/10.1119/1.1819931

      • anonymous said, on July 2, 2022 at 3:01 am

        Also, I spoke too soon. The critic is right about something, and the original paper is screwing something up.

  23. Minh said, on July 10, 2022 at 5:05 am

    I’m one of those guys who’s sorely unimpressed with (actually sick and tired of ) the over-hyped and underwhelming QM. So it’s really refreshing when really smart and common-sense people like Edwin Jaynes reinforce what I’ve been thinking all along: “QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built.”

    I first came across Jaynes reading some of his thermodynamic papers, and was immediately impressed by his clarity and simplicity. That’s the sign of true intelligence, and his writing in this EPR discussion is highly reflective of his great intellect, esp. his interpretation of Bohr’s statement on pg. 13 which I agree completely with. He can be misguided by getting into rubbish entropy discussions, but it doesn’t change the fact he’s otherwise very smart and sharp.

    Obviously the Bell inequality is only as good as its assumptions, some of which are untenable, as clearly pointed out in some of the papers cited in the blog. In “Bell’s Theorem: loopholes vs. conceptual laws”, they point out that several classical systems also exhibit Bell violation. So does this mean they’re irreal or superluminally non-local? It just means the Bell violation has nothing to do with entanglement; it is due to his wrong assumptions about the hidden variables’ independence, which are debunked on pages 5-6 of the same paper (and in other papers too). More importantly, since Bell made assumptions about hidden variables, he had no freakin clue how many of them there were in total, or what their interrelationships were, so by necessity his inequality, at its very best, only accounts for the set of situations covered by the hidden variables he had in mind, not the unknown ones. This is just common sense, and it alone drives home point (2) of Jaynes’ paper: “The class of Bell theories does not include all local hidden variable theories”. So it’s absolutely ridiculous that the entanglement fanatics use this violation as proof of QM’s nonlocality and irreality, a classic case of confusing absence of evidence with evidence of absence.

    Other papers also point out that the BI cannot even be proven, due to the absence of a common probability space. This point also strikes home for anyone with common sense and a little understanding of physics and probability. Not to mention that the experiments designed to prove/disprove BI stray too far from the ideal experiments, and therefore their conclusions have little to do with anything the BI specifies. Plus, on a physical level, since QM has nothing to say about how much the measurement apparatus affects the measurement outcome, there’s no way they can verify entanglement. Measuring A automatically means B in the other device according to nonlocality, but how much of A and B is contributed by each device’s specific attributes, and how is that accounted for in the entanglement maths? Not at all.

    All of this clearly indicates to me that those who continue to believe that BI violation is proof of superluminal entanglement are either misinformed/uninitiated or are true believers, in which case one had better stop discussing, because nothing can sway a true believer.

    I myself also have a big beef with superposition in QM. As you know from my email, the Uni I work for is supposed to be world-leading in QC — I’m IT, not academic btw — so once in a while, due to me doing the infrastructure support work, I get to interact with the QC people. Last year I complained to them that QM’s superposition is total BS; it’s just a bunch of math symbols inspired by the Fourier transform, with no correspondence in reality, so von Neumann’s projection that all values exist at once (the source of QC magic and superpower) is total mathematical garbage with no underlying physics. So what were they building? Turned out the QC they’re building is nothing of the kind. They just entangle (in their words) two phosphorus atoms in a qubit, and depending on different values of spin orientation/polarization, they can encode different values. This looks to me like modulation in signal processing, e.g. QPSK. There’s no such thing as entanglement of hundreds, let alone thousands/millions of atoms. So entanglement, on an engineering level, is just a fancy word for interaction, while superposition as envisioned by von Neumann and popularized by idiotic die-hards (including the Many-Worlds interpretation advocated by fuckwits like David Deutsch) is the stuff of science fiction, not reality. It also means a QC of a million qubits is just a distant dream, most likely never to be realized, and anything based on von Neumann superposition (incl. the Shor algo if I understood correctly) won’t come to be even if a QC is physically built to spec.

    I think deep down, the problem with QM is that it’s simply not a physical theory. It’s a bunch of math, some of it conflicting (particle- and wave-based maths), and then its creators tried hard to interpret the symbols and force-fit physical meaning into them, creating all sorts of open problems that have no answer, even after almost 100 years. In “Heisenberg, Models, and the Rise of Matrix Mechanics”, Einstein said this to Heisenberg (pg 185):

    “Possibly I did use this kind of reasoning [diese Art von Philosophie], but it is nonsense all the same. Perhaps I could put it more diplomatically by saying that it may be heuristically useful to keep in mind what one has actually observed. But, on principle, it is quite wrong to try founding a theory on observable magnitudes alone. In reality the very opposite happens. It is the theory which decides what we can observe.”

    https://www.jstor.org/stable/pdf/27757370.pdf?refreqid=excelsior%3A9c419ac0ce7b131328096a05a94d5fdd&ab_segments=&origin=&acceptTC=1

    It shows clearly that, as flawed as he was, Einstein was much more of a MAN and a PHYSICIST than the QM bunch (the same can be said about him vs the current crop of so-called physicists). This is also the viewpoint I stand by: physics starts with the phenomena and ends with the mechanics (explanation). Everything else, including the math, is just supplement.

    QM was the antithesis of all of this, what resulted when physics got invaded by positivist mathematicians. Math was turned into an end, not a means. When this tradition continued, which it did, we got shit like QFT, SUSY, and string theory. Dirac was truly sick of QFT and RGT. Like all true physicists, he understood well that just because you can force-fit some mathematical sleight of hand like RGT and manage to calculate some agreeable results, it doesn’t mean the theory isn’t out of whack. Dirac hence considered his life a failure. This is an excerpt from the book “The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius”:

    “So he asked Dirac directly whether it would be a good idea to explore high-dimensional field theories, like the ones he had presented in his lecture. Ramond braced himself for a long pause, but Dirac shot back with an emphatic ‘No!’ and stared anxiously into the distance. Neither man moved, neither sought eye contact; they both froze in a silent stand-off. It lasted several minutes. Dirac broke it when he volunteered a concession: ‘It might be useful to study higher dimensions if we’re led to them by beautiful mathematics.’ Encouraged, Ramond saw an opportunity: doing his best to sound understanding, he invited Dirac to give a talk on his ideas at Gainesville any time he liked, adding that he would be glad to drive him there and back. Dirac responded instantly: ‘No! I have nothing to talk about. My life has been a failure!’”

    • rademi said, on July 10, 2022 at 5:54 am

      There are definitely issues with relativistic gravitational theory — is that what you meant by RGT?

      However, I don’t see those issues as having much of anything to do with quantum mechanics.

      What am I missing?

      • Minh said, on July 10, 2022 at 11:53 pm

        No Rademi, by RGT I meant Renormalization Group Theory :)) .

    • Scott Locklin said, on July 11, 2022 at 3:33 pm

      I mean, nobody has been able to come up with something which contradicts quantum mechanics yet. That’s not nothing! Of course entanglement …. people can’t even agree what it is, and experiments are almost entirely lacking. I think if it’s right it will be something like what Igor above talks about. I have an email chain Bill sent me from the Perimeter Institute where someone was saying something along those lines.

      The moosh headed BS, of course, we can all live without.

  24. Dave said, on July 13, 2022 at 6:00 am

    I’ve always been meaning to read Jaynes’ book. However, at the same time, another blog I dip into occasionally really laid into it: https://www.logicmatters.net/2007/07/02/godel-mangled/
    Now, I don’t have the time to read both of them to make up my own mind; back to the salt mines, aligning regen cavities I suppose.

    • rademi said, on July 13, 2022 at 11:48 am

      Jaynes had a point, in the sense that the concept of mathematical triviality underlies Godel’s theorem.

      But Godel’s theorem also … roughly speaking, constructed a way of serializing theorems into numbers and thus provides a way of making theorems about numbers also be theorems about theorems. That’s a very nerdy sort of thing, and it doesn’t really say a lot about theorems. But it casts light on theorems in the context of that concept of triviality (paraphrasing heavily here): because numbers are infinite and proofs involve a finite number of theorems, there’s always the potential for more theorems, and unless we are using a trivial system there’s always a need for further proofs if we want to deal with those theorems.

      Smith’s objection, if I understand right, is that there’s a stronger constraint which doesn’t depend on Godel’s encoding to represent other theorems. And, roughly speaking that seems like an accurate description of the nature of theorems — you need some starting axioms to base the theorem on. But I don’t think that the existence of Godel’s encoding of theorems is sufficient to prove that that’s the case — other encodings could exist. (Then again, it’s been a long time since I’ve read anything by Godel, and I might have forgotten something important that he had said…)

    • Scott Locklin said, on July 13, 2022 at 12:08 pm

      In this case, the blerg nerd is missing the point. In fact, he misquotes and gets the page wrong as well. Jaynes is repeating (and he specifically states he is repeating) a formally incorrect Fisher/Jeffreys hand-wavy description of Goedel applied to probability theories, and asserting that, for all practical purposes, stuff like Bayesian probability gets you something extra, despite being more or less subject to Goedel because its components are formally subject to Goedel. That’s why we use such things in machine learning instead of Prolog. This is a practical point, beyond Goedel’s spergeloid kvetching that Whitehead’s Principia Mathematica isn’t bootstrap extensible.

      I looked to see if the author was elsewhere an enthusiast for “AI”, as that would have been a great irony if so. He’s apparently the author of weighty tomes on Goedel, which I guess makes sense.

      Stuff Jaynes said afterwards:

      “When Gödel’s theorem first appeared, with its more general conclusion that a mathematical system may contain certain propositions that are undecidable within that system, it seems to have been a great psychological blow to logicians, who saw it at first as a devastating obstacle to what they were trying to achieve.
      Yet a moment’s thought shows us that many quite simple questions are undecidable by deductive logic. There are situations in which one can prove that a certain property must exist in a finite set, even though it is impossible to exhibit any member of the set that has that property. For example, two persons are the sole witnesses to an event; they give opposite testimony about it and then both die. Then we know that one of them was lying, but it is impossible to determine which one.
      In this example, the undecidability is not an inherent property of the proposition or the event; it signifies only the incompleteness of our own information. But this is equally true of abstract mathematical systems; when a proposition is undecidable in such a system, that means only that its axioms do not provide enough information to decide it. But new axioms, external to the original set, might supply the missing information and make the proposition decidable after all.
      In the future, as science becomes more and more oriented to thinking in terms of information content, Gödel’s result will be seen as more of a platitude than a paradox.”

      I think this is right. The way Goedel is used by certain people, it’s the sort of argument that you might be poisoned, so you should starve yourself to death. Jaynes is against this, just as he is against quantum surrealism (I still hold out the possibility he may be wrong here; probably in the way Igor points out) as another sort of Nihilism.

      To take an example from engineering/mathematical history, people used crap that was basically just like wavelets long before the formalism of wavelets was developed by mathematicians. Goedel types probably whined and cried about this, saying it wasn’t formally correct somehow; the engineers kept using them until Stéphane Mallat and friends figured out why they work. I think Goedel types can be useful, but some of them are nihilists who present silly proofs as to why you can’t do a thing to people who are actively involved in doing the impossible thing. Like goons who whine about NP-hard problems being formally basically impossible when we have perfectly good tools like simulated annealing to get us through the day.
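      To make the simulated-annealing point concrete, a bare-bones sketch on a toy travelling-salesman instance (nothing clever, just geometric cooling and 2-opt moves; it happily ignores the worst-case complexity results and still returns a usable tour):

        import math, random

        random.seed(0)
        cities = [(random.random(), random.random()) for _ in range(30)]

        def tour_length(order):
            return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                       for i in range(len(order)))

        order = list(range(len(cities)))
        best = tour_length(order)
        temp = 1.0
        for step in range(200_000):
            temp *= 0.99997                                                  # geometric cooling schedule
            i, j = sorted(random.sample(range(len(order)), 2))
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]     # 2-opt: reverse a segment
            delta = tour_length(candidate) - tour_length(order)
            # always accept improvements; accept uphill moves with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / temp):
                order = candidate
                best = min(best, tour_length(order))

        print(f"tour length after annealing: {best:.3f}")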

      This sort of debate probably dates back to the ancient Greeks; Sextus Empiricus and his lot at least. Perhaps there is a deeper reason why Jaynes is wrong here, but I can’t see it. Looks to be just spergeloid pedantry by someone who is so deep into Goedel he probably mistrusts his breakfast. Even logician formalists are required to read surrounding passages to determine what the author was saying; this one should have. I mean, it’s a whole fucking book on why probability theory is the right way to think about the world; why doesn’t the Goedel nerdoid think this might be relevant in the paragraph he quotes?

      • Sprewell said, on July 18, 2022 at 5:17 pm

        Speaking of wavelets, whatever happened to them? There was a great deal of research interest in them back when I was in grad school decades ago, but I hear of almost no practical use nowadays, only some niches like medical imaging or specialist video codecs. The one big attempt at a broad standard, JPEG 2000, failed; too early, perhaps?

        It’s not a field I pay much attention to, but there have been several new image and video codecs in those years, none of them wavelet-based. Anyone know what went wrong with wavelets?

        • Scott Locklin said, on July 19, 2022 at 10:26 am

          Wavelets have simply become part of the signal processing toolbox, a more practical Fourier transform. Same story with empirical mode decomposition/Hilbert–Huang transform, or, say, mel-cepstral filters. I use them all the time. Paper-writers probably don’t talk about them any more for the same reasons they don’t talk excitedly about Fourier transforms: they’re no longer new and someone’s probably written a paper about it already.
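          Typical toolbox usage, assuming the PyWavelets package (pywt) is what’s on hand: decompose a noisy signal, soft-threshold the detail coefficients, reconstruct.

            import numpy as np
            import pywt  # PyWavelets

            # noisy test signal
            t = np.linspace(0, 1, 1024)
            signal = np.sin(2 * np.pi * 5 * t) + 0.4 * np.random.randn(t.size)

            # multilevel discrete wavelet transform with a Daubechies-4 wavelet
            coeffs = pywt.wavedec(signal, 'db4', level=5)

            # soft-threshold the detail coefficients, leave the approximation alone
            denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, 0.3, mode='soft') for c in coeffs[1:]]
            denoised = pywt.waverec(denoised_coeffs, 'db4')

          Same spirit as windowed Fourier filtering, just with better localisation in time.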

  25. anonymous1331 said, on July 31, 2022 at 6:25 pm

    I’m having trouble seeing what Jaynes’ point is. I’ll preface this with saying that I’m just a layperson working in finance with an interest in physics, so I would be genuinely interested in what I might be misunderstanding here.

    In Jaynes’ paper, he says “for me QM is not a physical theory, only an empty mathematical shell in which a future theory may, perhaps, be built.” He says, “I want to know” what is happening.

    I agree with Jaynes that probability is logical inference and happens at the epistemological level. Therefore if we want a “future theory” (using Jaynes’ words) on the ontological level, such a physical theory shouldn’t have probabilities [because probabilities are on the epistemological level as they are a tool for inference]. This “future” “physical” theory must have a mathematically deterministic description.

    It’s very reasonable to want to seek such a theory. For example if we see a coin empirically land about 50% of the time when flipped, on an epistemological level we could simply assign 50% as our degree of belief that a next, similar coin flip lands on heads. But we could also reasonably seek a deeper, deterministic theory of the coin flip.

    So turning to Bell’s 1964 paper, Bell begins by analyzing a class of deterministic theories where A and B are deterministic functions of (a,lambda) and (b,lambda) respectively. This is very reasonable: why would we expect A to be a deterministic function of (a,lambda,b)? With these deterministic functions, this theory is so far entirely operating on the ontological level.

    But going back to QM: QM involves probabilities and thus operates on the epistemological level.
    To connect Bell’s deterministic hidden variable theory to QM and check if they make inferences that are consistent with each other, Bell then needs to regard lambda as an unknown quantity with a probability distribution. The cool thing is that after carrying out the integrals and math, Bell finds that this theory is incompatible with the inferences of QM.
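    To be explicit about the setup I mean (paraphrasing Bell’s 1964 paper from memory): the hidden-variable correlation is

    E(a,b) = \int d\lambda\,\rho(\lambda)\,A(a,\lambda)\,B(b,\lambda), \qquad A, B = \pm 1,

    and Bell shows that any correlation of this form must satisfy |E(a,b) - E(a,c)| \le 1 + E(b,c), which the quantum singlet prediction E(a,b) = -a\cdot b violates for suitable settings.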

    Jaynes claims: “It appears to him [Bell] that the intentions of EPR are expressed in the most general way by writing” {insert equation (12) from the Jaynes paper}. I don’t see where this is present in the Bell paper: Bell’s main expression of the EPR intention in his paper is the existence of deterministic A(a,lambda) and B(b,lambda).

    In addition, I don’t understand the point Jaynes is making about his equation (14). If A and B are deterministic functions of (a,lambda) and (b,lambda) respectively [again, the key assumption Bell makes] doesn’t (14) clearly follow from that? P(A,B | a,b,lambda) would be a probability density with all its mass centered on (A(a,lambda),B(b,lambda)).
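    In other words, for deterministic A(a,\lambda) and B(b,\lambda) the joint distribution is just a product of delta functions, which factorises automatically:

    P(A,B\mid a,b,\lambda) = \delta\big(A - A(a,\lambda)\big)\,\delta\big(B - B(b,\lambda)\big) = P(A\mid a,\lambda)\,P(B\mid b,\lambda).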

    • rademi said, on July 31, 2022 at 9:25 pm

      From my perspective, a key issue with QM is that it (deliberately) fails to distinguish between the measurement equipment and the phenomena being observed.

      But this means that the terminology and concepts which are “uniquely QM” also fail to make this distinction.

      So, it’s not just that we’re talking about probability, but that we’re talking about probability where the equipment is a fundamental part of the system being measured.

      This allows us to encode a lot of information in a way that can be used to test other claims, but only by people who adequately understand how to produce equipment which corresponds to the reported statistics.

      If we want to reason about physical systems, we either need to factor out the aspects of QM which are specific to the equipment, or we need to limit our reasoning to systems which involve similar equipment.

      Fortunately, the investment into different approaches to measurement has been fairly significant. So, there’s a lot there.

