Locklin on science

Sabine Hossenfelder’s Lost in Math

Posted in Book reviews, history, physics by Scott Locklin on May 16, 2021

My pal Bill Dreiss suggested I have a look at this one the other day. I had seen Frau Dr. Professor gabbling on in various videos circulating among the nerdetariat; never really listened, as I don’t have time for podcast-type entertainments, and certainly not on the subject of high energy physics, which is a field I am mostly contemptuous of. She seemed fairly sensible; an earnest and apparently well-meaning person who is disillusioned with the direction of high energy physics. She also seemed a little late to the party; I came to virtually the same conclusion on my own while I was still in grad school, just looking at the behavior of high energy guys in physics departments. Woit and Lee Smolin wrote fairly convincing takedowns of noodle theory back in 2006, though I suppose they didn’t go after the larger enterprise of High Energy Theory as a field. Apparently her book made people in my old physics department REALLY MAD. In fact, one of the last things my late thesis advisor Chuck Fadley said to me before checking out for the great unknown was a strong recommendation that I read this book. I had clean forgotten, but I remember being in the hotel in Hamburg and being confused (post sauna and beer) why he was trying to get me to read some German lady’s pop science book, and chalked it up to his illness. Sorry Chuck: you were right, I should have read it when you said.

My old boss: almost always right

Hossenfelder describes the group madness of the high energy physics community. It really is a sort of mass hysteria; literally tens of thousands of presumably high-IQ people are gripped by it, and it’s accelerated and intensified by the internet, which makes peer pressure and communication practically instant. It’s also encouraged by the pyramid-scheme nature of generating new PhDs with nothing better to do. Consider the average high energy theorist; they are effectively doing the kind of work they did in grad school: working on “cutting edge” problems in the latest woo. The average paper, I’d say, represents approximately the kind of mindset and work effort of a couple dozen (90s-era, pre-solutions-websites) JD Jackson homework problems. It’s not that this stuff is easy; neither were JD Jackson problems. I’m just saying these guys are grinding the proverbial organ box like trained monkeys rather than, you know, being curious and thinking about things. Their “field” is a sort of shared delusion about what people should be working on, based on what everyone else they know is working on, and ridiculous hero worship of the 1920s theoretical physics community. And “aesthetics”, and various quasi-philosophical views about how their godless universe will conspire to be “aesthetic” and “natural” to them for some reason.

One thing she hammers on is the idea that “aesthetics” is bullshit (something I mentioned in 2009). Most of physics isn’t particularly aesthetic or beautiful. Physics is weird and often surprising. Frankly, so is mathematics, biology, human nature, the appearance of the universe: just about everything is weird and surprising. The only people who think the world around them isn’t weird and surprising are navel gazers who don’t get out much. In particular, the idea that certain physical constants would be conveniently sized for perturbation theory (aka “naturalness“) is just fucking insane. La Hossenfelder talks about it, but doesn’t emphasize how crazy it is; it’s rather like assuming your checkbook will always have 00 in the cents column because it’s more convenient for you that way. Or expecting useful hash functions to produce lots of 00s on randomly generated inputs. This numerology assumes that the universe will conspire to make itself understandable with current-year fashions in the mathematical tools used by the clown car that is contemporary theoretical high energy physics.
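
To put numbers on the hash analogy (a toy check of my own, not anything from the book): a decent hash ends in even one 00 byte only once in 256 tries, and every additional zero byte costs another factor of 256. Expecting lots of them is expecting a conspiracy.

import hashlib, os

# how often does a good hash of random input end in a 00 byte?
trials = 100_000
hits = sum(1 for _ in range(trials)
           if hashlib.sha256(os.urandom(16)).hexdigest().endswith("00"))
print(f"fraction ending in '00': {hits / trials:.5f} (expected ~{1 / 256:.5f})")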

The book has an odd style (I assume it’s a translation) and is a combination of editorializing, layman didactics, and interviews with important figures in the field. Some of the choices of didactic effort are peculiar; explicitly calling out matrices as tables of numbers while describing perturbation techniques and various group theories as some kind of woo. As the interviews go on, you get the feeling that physicists and cosmologists are not great people. They’re kind of … losers. She never exactly says this, but on some inner emotional level she must be thinking it, because that’s how all her interlocutors come off: sweaty-palmed, lost, over-grandiose, goofball nerds.

And let’s face it: postwar theoretical high energy physics has been a big basket of failure. Sure, we have electroweak theory from the early 50s, which was verified in the 70s. What has come of it? It’s been 70 years. 70 years after Maxwell we had television. 70 years after quantum mechanics we had Pentium chips. It’s been 70 years, guys; where’s my electroweak technology? Really, we all know there will never be a technological implication of electroweak theory. Which makes it not some great achievement of humanity: it makes it irrelevant to the point of being a sort of theology for nerds. The field of theoretical high energy physics (and frankly the experimental part) is, in every respect, a failure rather than something for humans to be proud of. While it is difficult to master the mathematics involved, that’s not much of an argument that it should be worthy of respect. Respectable branches of physics make predictions and produce results in the physical world. High energy theory ought to be about as respectable as any other cult: at least those people handing out flowers in the airport are giving people beautiful flowers. For all I know, the indoctrination of airport-flower-people involves similarly difficult mental gymnastics. The very difficulty of the indoctrination is arguably what makes them so reluctant to give it up: career as sunk cost fallacy.

The various interviews are filled with dispiriting LARPing (Live Action Role Playing, for those of you who don’t speak /chan). From the twittering astrophysicist wearing a NASA insignia from the 1960s, back when NASA wasn’t the DMV for rockets, to the various famous wrinkly-brained scientists play-acting at profundity as if they were at the head of a successful, relevant and respected profession, rather than the fools who helped lead a failed intellectual enterprise into a ditch. Hossenfelder’s book exposes the whole squalid enterprise for the LARPy failure that it is. The people involved in it aren’t great savants anyone should be paying attention to; they’re losers. Like most losers, they’re only dimly aware of their failings; if losers were self-reflective, they’d probably find a way to, like, not be losers.

There were a few bits and pieces I was only dimly aware of: the latest experiments demonstrating what losers the theorists are, the odd Witten-victim condensed matter guy developing some goofy qubit-based cosmology. My favorite such thing was actually from her blog: pointing out that the one time the noodle theorists thought they could make a useful calculation involving measurements in the world of matter, they failed. That’s ridiculously damning; for all the alleged brainpower put in service of noodle theory, they failed utterly in their attempts to be, you know, actual scientists making testable predictions.

String theorists’ continuous adaptation to conflicting evidence has become so entertaining that many departments of physics keep a few string theorists around because the public likes to hear about their heroic attempts to explain everything. Freeman Dyson’s interpretation of the subject’s popularity is that “string theory is attractive because it offers jobs. And why are so many jobs offered in string theory? Because string theory is cheap. If you are the chairperson of a physics department in a remote place without much money, you cannot afford to build a modern laboratory to do experimental physics, but you can afford to hire a couple of string theorists. So you offer a couple of jobs in string theory and you have a modern physics department.”

Women named Sabine were at the beginnings of great things in the past. Her politely bullyciding high energy theorists into non-existence would be a great boon both to physics and the human race. It’s a shame more people didn’t listen to Phil Anderson back in the day, or John Horgan’s numerous educated-outsider criticisms, but if this Sabine woman manages it, I, for one, will be grateful. What makes physics powerful is the combination of mathematics with experiment: nature holds you accountable for your successes and your blunders, not your dimwitted nerd friends on the tenure committee. The High Energy clowns play-acting as Pauli- or Einstein-like figures are not physicists; they’re at best live action role players or overt mathematical mountebanks occupying seats which would be better issued to tribologists, fluid mechanists or optical physicists. If we can nudge the profession back to, you know, things like the scientific method and testing things involving matter (quantum computers definitely don’t count), maybe humanity will get somewhere.

Well played, Dr. Hossenfelder

85 Responses


  1. maggette said, on May 16, 2021 at 3:20 pm

    It is nice to see that in the department of “physicists who try to do or appreciate real science and (god forbid) even applied science and engineering”, game recognizes game :).

    She is high in my books just because she somehow got Eric “the intellectual dark web” Weinstein and many of his incel fanboys on their heels… because she just doesn’t match his “gated institutional narrative” and still has the audacity to not be in awe of his work.

    • Scott Locklin said, on May 16, 2021 at 5:41 pm

      Wait, people respect Eric Weinstein? Anyway, good on her for administering spankings to him.

      Miz Sabine obviously has a lot of problems with credulity; her most recent blog post lists the same 5 quantum computing technologies that were “coming real soon” in the late 90s. But the book she wrote took considerable intellectual courage, and so she deserves kudos for it.

      • r 618 said, on June 20, 2021 at 12:16 pm

        > Wait, people respect Eric Weinstein?
        many do;
        hardly surprising

        Frau Hossenfelder is an instrumentalist, as she put it – and that’s OK: somebody has to measure what a theory – including Eric’s – predicts, after all ^^

  2. npc_man said, on May 16, 2021 at 4:15 pm

    Sabine tries to take the untenable position that skepticism of Climate Change is “unscientific”, almost crazy, and must be crushed (she comments quite forcefully against such skepticism), but skepticism of string theory and other high energy physics is totally legit and must be encouraged. I don’t understand why she or Woit take such definite stances on Climate Change when they can see their own field is basically suffering a mass delusion.

    • Scott Locklin said, on May 16, 2021 at 5:26 pm

      Sounds very Teutonic. There are obvious reasons why they don’t do this, none of which have anything to do with the truth or falsity of their opinions. You’d have to be REALLY interested to take a public stand against the mainstream here, as there are an awful lot of people whose bread is buttered by the mainstream.

    • maggette said, on May 16, 2021 at 6:34 pm

      Because they are entirely different things.

      A) One has models which actually predict things quite successfully. The other doesn’t.
      B) One persists even though a trillion-dollar lobby fights it.

      Yes. Whatever your filter bubble might make you believe, the ensemble of models is quite accurate and has good in-sample and acceptable out-of-sample predictive power. The data is downloadable from the IPCC websites (even though it is hard to sort out). Just check them.

      The models are quite good. Way better than anything economists do.

      And way better than noodle theory. Because there is data….and predictions!

      “Climate change denialists” have yet to come up with any models that predict anything remotely right. There aren’t any.

      Even the simplistic models from Exxon and Shell were quite good:
      http://www.climatefiles.com/exxonmobil/1982-memo-to-exxon-management-about-co2-greenhouse-effect/
      http://www.climatefiles.com/shell/1988-shell-report-greenhouse/

      It is very easy to have a definite stand on human driven climate change: because it’s good science. A couple of attention whores fighting the data is no reason for a scientific debate.

      • SZ said, on May 17, 2021 at 12:22 am

        Thanks for the resources. FWIW, those who are “skeptical” of climate change are, in general, skeptical because everything “mainstream” (aka establishment) *has* to be wrong. Just for the heck of it. Dissent is good, but has a cost: one fails to see one’s own biases.

        • Altitude Zero said, on May 17, 2021 at 4:34 pm

          There’s skepticism of the problem, and then there’s skepticism of the Establishment “solutions” being offered. Any alleged “solutions” to the global warming problem that do not include the word “nuclear” are not being made in good faith, and people are right to be skeptical. There are plenty of examples of this in other fields – for example, the Capitalist boom and bust cycle exists, but many of the solutions put forward (aka Communism, Keynesianism, etc) are worse than the original problem. The manifest hysteria and bad faith of many Warmists does not help, IMHO

          • Scott Locklin said, on May 17, 2021 at 5:42 pm

            I’ve mostly cultivated ignorance of the subject, but read some 80s stuff that seems reasonable; altering atmospheric chemistry too much seems unwise.

            Of course, listening to Climate Karen or Al Gore or whoever thinks California wildfires are a direct result of this is just fake, and I don’t blame people for scoffing at it.

            • maggette said, on May 18, 2021 at 12:40 pm

              Couldn’t agree more. They take a position that is sound from a scientific and risk-management perspective, and add a lot of nonsense and propaganda to it.

              After thinking a bit about it, reading a couple of papers and playing around with the data a bit… I am pretty much in line with the ICCP position, but I think the Al Gore movie is pure propaganda and does the whole thing a disservice.

              Countering Heartland Institute lies with super-woke-nonsense lies, and mixing solid claims with “every flood and wildfire is because of climate change” fiction, is bad science (or no science), a bad strategy, and morally wrong.

              • maggette said, on May 18, 2021 at 12:41 pm

                Typo: meant IPCC

          • Joel A Seely said, on May 27, 2021 at 9:48 pm

            Also, it must include “population control”; otherwise any changes are stop-gaps at best.

      • npc_man said, on May 20, 2021 at 2:55 am

        I don’t know where you’ve got the impression that climate science is good science. I am currently a PhD student in machine learning, but I spent a year in climate science, and the situation is much worse than in the overhyped field of ML/AI. Do you even read the papers published in Nature on climate science lately? Here, let me give you a heading: “Protect rights and advance gender equality to mitigate climate change” – 6th May 2021.

        I’ve also read through the papers of Michael Mann (paleoclimate – the hockey stick graph) to see what the fuss was about, and it is obviously wrong. The fact that the field cannot disown him says a lot about the intelligence of these scientists. “Short-centering” is nonsense; you have to be mathematically illiterate not to see the problem. It has not been used anywhere outside his papers when doing Principal Components Analysis, and when I first read it I thought it was an honest mistake. Then, when he started defending it, I realised he thinks climate scientists are idiots or cowards, and he’s probably right.

        My general feel is that these people are idiots; I always felt like the smartest person in the room in climate science, which is not the case in AI/ML. In AI/ML most students competed in the Putnam or competitive coding, or they came from hedge funds/tech companies (a big pay cut btw), and would generally be really successful out in the corporate world if they chose not to be in AI research. I get the opposite impression in climate science: a bunch of doofuses who would never even pass a Google interview, much less get into hedge funds or other highly selective firms. As a final example, there are 1 IMO gold medalist and 1 silver medalist in my AI PhD program (at CMU); I don’t think there’s ever been an IMO gold medalist who’s gone into climate modelling. (Physics has a lot of them, which shows the tragedy of the field.)
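
        For the curious, the “short-centering” objection is easy to demonstrate with a toy sketch (my own illustration; the stick_index diagnostic is invented for the demo, and none of this is Mann’s data or code): center pure red noise on only the last stretch of the record before doing PCA, and the leading PC reliably comes out hockey-stick shaped.

        import numpy as np

        def red_noise(rng, n_years, n_series, phi=0.9):
            # AR(1) proxies containing no climate signal whatsoever
            x = np.zeros((n_years, n_series))
            for t in range(1, n_years):
                x[t] = phi * x[t - 1] + rng.standard_normal(n_series)
            return x

        def leading_pc(data, center_slice):
            # the centering choice is the only thing that varies below
            centered = data - data[center_slice].mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[0]

        def stick_index(pc, cal):
            # hockey-stick-ness: |calibration-period mean - overall mean| in sd units
            return abs(pc[-cal:].mean() - pc.mean()) / pc.std()

        n_years, n_series, cal = 600, 70, 100
        full, short = [], []
        for seed in range(20):
            x = red_noise(np.random.default_rng(seed), n_years, n_series)
            full.append(stick_index(leading_pc(x, slice(None)), cal))
            short.append(stick_index(leading_pc(x, slice(-cal, None)), cal))
        print(f"mean stick index, full centering:  {np.mean(full):.2f}")
        print(f"mean stick index, short centering: {np.mean(short):.2f}")

        On persistent red noise the short-centered runs should score markedly higher, which is the McIntyre–McKitrick complaint in miniature.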

        • maggette said, on May 21, 2021 at 7:26 am

          Hi,
          I am no expert in climate science. I read 2 books about it, worked through a couple of papers and IPCC reports, and read a lot of blogs from the “scepticism” camp (there isn’t much there I can take seriously, to be honest). I talked to a couple who work at Jülich (the German climate science hub). Since you apparently worked in climate science, and I never felt I was the smartest dude in any room, you probably know more about it than I do.

          Still:
          – we have a phenomenon (quite a drastic temperature rise)
          – all the obvious forcings (as you well know if you worked in climate science) are not explaining it: it should be getting cooler right now.
          – there are simple models (like the Exxon paper I posted) as well as more complex ones. The complex models are nothing more than the (rather successful) weather models; they just skip some boundary conditions.
          – these models, as an ensemble, make predictions. These predictions are short term (max 30 years out of sample).
          – these predictions are good. The same models work quite well in hindcast as well.

          The couple from Jülich I mentioned earlier: she was part of the German Math Olympiad team, has a PhD in Physics and a BA in EE… and not even a LinkedIn profile, since the couple sold a SaaS startup for 8 million EUR (with no investors).

          I am really, really not seeing the point that the number of (or lack of) IMO medalists is somehow important for the quality of research. And some stupid gender-studies papers? I am sure as fuck that I can point you to millions of stupid papers regarding “AI”. There were a lot of incredibly smart Russian people working on quantitative finance back in the day, and most of their research was pure nonsense. Smart goes where the money goes.

          It’s simple: you point me to a “sceptic” model that survives my out-of-sample test. Actually, right now I am even willing to accept a model that works well in hindcast and explains anything about the rise we have right now. Then I will reconsider my position.

          I honestly think that being smart has nothing to do with a position on climate science. Jim Simons is absolutely somebody who defends and finances climate research. Rob Mercer laughs about it. From what I hear, both are smart.

          • npc_man said, on May 26, 2021 at 7:53 am

            Apologies for the delay in my responses. You claim that climate science has no problem with a multitude of stupid people, because you have met smart people and because other sciences have their share of crazy papers. I would point out that most other sciences, unlike climate science, have almost 0 crazy papers in top journals. Find me a crazy paper in the Journal of Machine Learning Research: https://jmlr.org. This is a problem unique to climate science. Second, I don’t doubt you met smart people in climate science, but then you read stuff like Michael Mann being inducted into the American Geophysical Union. This is only possible if the rest of the researchers are (A) dumb (mathematically illiterate), or (B) complicit with Mann. My bet is on A, due to my personal experience in this field.

            Since you want to talk about modelling: I too dealt with modelling in my year there. I couldn’t help but notice most models were North-South homogeneous (poles don’t exist), East-West homogeneous (not true in reality), and do not account for geography in any manner. My work was on accounting for the Himalayan mountains in the climate model, a task that turned out fruitless.

            But rather than enumerating problems with models, let us examine the science itself, to understand when we can trust the models and when we can’t. The climate is a complex phenomenon; it has a hundred different factors working simultaneously. For example, an increase in CO2 will cause an increase in the greenhouse effect, with its logarithmic relation that we understand perfectly. A decrease in solar input we also understand perfectly. So we need to code all these relationships, give each of them their appropriate weight based on their real-world values (amount of CO2, solar power, etc.), and then simulate the model to get some understanding of the climate. If the climate is reasonably affected by 200 factors, how many do we need to model to get the climate to a reasonable accuracy? Perhaps 100, perhaps 150. So where are we actually in our models? We know there are probably hundreds of factors that control the climate, though no one has gone through the exercise of painstakingly noting them down. In our models, however, we code at most 5. I’ve never seen a model with accurate ice-albedo feedbacks, nor models with long-term ocean current effects (like the Gulf Stream), nor any model with cloud formation based on aerosol effects, which we know is one of the most important factors regarding climate.

            In fact we generally code two or three effects (the greenhouse effect, water-vapour feedback and radiative feedback); we only measure carbon dioxide, add that as a fixed quantity to our model, and tune the parameters for the rest of the effects to fit the existing climate. So we get stuff like wildly different water vapour concentrations and radiative forcing compared to the real world, the same carbon dioxide, and yet the same temperature in the current year in our models. If we tune these parameters we can hilariously make our models predict our current temperatures since 1980 and then show a sharp fall from 2050 into an ice age. Needless to say my guide wasn’t amused when I showed him this. But don’t misunderstand me: just ensuring that this sharp fall does not occur is not enough to save these models. Saving these models requires coding at least 100 of these effects, measuring their values, implementing them and seeing if the results make sense. To give you an analogy: we are trying to model the US economy with a currency supply that matches the real-world value, steel and oil production that need not match actual values, but a GDP that must match the US GDP since 1990 – and then we use this model to predict future GDP. Climate is at least as complex as an economy; this whole situation seems ridiculous. You want alternative models? Take the existing models and change their parameters. There are, in my experience, hundreds of different combinations of parameters that can match the world temperature in the recent past, and at least one will show some crazy change in climate after, say, 2050. Hector is a good place to start experimenting: https://github.com/JGCRI/hector . It has many ways to show a sharp fall after 2100; I think there is a paper on this. But realise my objection is not to this single model (which is simple and not used in the literature; I gave it as an example since it is publicly available) – my objection is to the methodology that all these models use.

            Why do all climate scientists do this, if it’s obvious that these climate models are nonsense? Hard to say. My answer is: because some of them are idiots (like my guide) and don’t realise the faults of their methods even if you explain them; some of them are mendacious, like Michael Mann; some of them are too invested in their career to see otherwise (these people generally angrily flip out if you point it out to them); some of them mix science with politics (something becoming more common by the day in climate science). Regardless of the reason, this field is a complete mess with garbage output. My preference would be if really smart people from CS and physics joined this field and were given free rein to straighten it out (cancelling papers that are nonsense, firing useless climate scientists) and restart the field to see what conclusions they would then reach.
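
            The parameter-tuning complaint is easy to make concrete with a toy sketch (every number and the run/f_aer construction below are invented for illustration; this is not any published model): a zero-dimensional energy-balance model in which two quite different (feedback, aerosol) parameter pairs hindcast almost identically and then diverge badly out of sample.

            import numpy as np

            # toy zero-dimensional energy-balance model: C * dT/dt = F(t) - lam * T
            years = np.arange(1880, 2101)
            co2 = 290.0 * np.exp(0.004 * (years - 1880))      # invented smooth CO2 rise
            f_co2 = 5.35 * np.log(co2 / 290.0)                # standard logarithmic forcing form
            f_aer = np.minimum(f_co2, f_co2[years == 2020])   # aerosols track CO2, then plateau

            def run(lam, a, C=8.0):
                # lam: feedback strength, a: aerosol scaling; both "tuned", not measured
                T = np.zeros(len(years))
                for i in range(1, len(years)):
                    F = f_co2[i] - a * f_aer[i]
                    T[i] = T[i - 1] + (F - lam * T[i - 1]) / C
                return T

            T1 = run(lam=1.30, a=0.2)   # low sensitivity, weak aerosol masking
            T2 = run(lam=0.65, a=0.6)   # double the sensitivity, hidden by stronger aerosols

            hist = years <= 2020
            rms = float(np.sqrt(np.mean((T1[hist] - T2[hist]) ** 2)))
            print(f"hindcast RMS gap: {rms:.3f} K")                   # nearly indistinguishable
            print(f"2100 warming: {T1[-1]:.2f} K vs {T2[-1]:.2f} K")  # very different futures

            Both runs match the same “observed” past because the extra sensitivity in the second is cancelled by extra aerosol forcing over the historical period; once the aerosol term stops growing, the two futures pull apart.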

  3. Chiral3 said, on May 16, 2021 at 4:48 pm

    I’ve been familiar with Sabine for a long time. As much as I agree, I also wonder why people spend so much time criticizing the insanity versus just doing research and advancing the chains. Like that IT guy from Columbia. It’s crazy internet-age stuff, this need for cult-of-personality, not so different than RBG, Hawking, Fauci, Joe Rogan, et al. Kaku and Greene I don’t think have published a paper in 30 years, yet that doesn’t stop them from being regarded as experts by the lay audience they so ambitiously try to court. I can only assume that someone has a tattoo of Kaku. I once read a book on martyrdom that started with a chapter on Mother Teresa. Really? After some thought it’s not so far fetched. After some more thought it really becomes apparent that there’s more going on with people, and all these scientists I was so enamored with through my twenties are biased and vain just like mortals. We spoke about Kevles recently… Serge Lang, bigger than life, an intellect beyond reproach,…, and totally batshit nuts in the spergy way people can draw hard lines between right and wrong. A long time ago someone smarter than me said “buff books are bad for science”. He was right. There’s no point. Buff books (BHoT) altered university funding programs and career paths.

    • DamnItMurray said, on May 16, 2021 at 7:03 pm

      What’s a buff book? English is my second language.

      • chiral3 said, on May 17, 2021 at 12:07 am

        I’ll take a swag at the etymology: to buff something out is to take a chamois (from hide) and rub it till it is polished and shines. A “buff”, as in “movie buff”, “car buff”,…, is someone that knows a lot about the subject, and is derived from the original meaning “to polish”. I mean it in the pejorative sense, which was common in the generation before me, as in the dude in the Quanta mag comments who chastises someone with “yeah but SU(2) covers SO(3)” and follows it with an arxiv.org reference. A good example of a buff book would be A Brief History of Time, which produces a person that will argue the merits of string theory online.

  4. Scott Locklin said, on May 16, 2021 at 5:34 pm

    > buff books are bad for science

    This is 100% true. In fact it’s the sort of thing that got me interested in science for mostly the wrong reasons. It’s also why I didn’t want to read some German lady’s pop science book.

    La Hossenfelder has very obviously made a career for herself as a sort of freelance physics educator. I doubt she’ll find much wrong with the rest of the subject despite its abundant and pervasive problems; what would she have to talk about? It’s tough rehashing the old stuff for the n-th time, though I suppose she could just crib from Gamow books. This is fine; arguably a better lifestyle than publishing silly physics papers nobody will ever read.

    Lang seems kind of nuts, but then, he was a number theorist. Physicists (and chemists) used to be chosen as hard-headed people to staff technocratic roles. This was a fair thing to do at some point. But even the JASON committee got a whole bunch of things wrong back in the 60s, and who knows how debauched their membership is these days.

  5. anonymous said, on May 16, 2021 at 6:33 pm

    “(quantum computers definitely don’t count)” – While other people might not think of them this way, wouldn’t a quantum computer be a sort of experimental test of entanglement and how far it goes? At one extreme, you basically have many-worlds: the entire configuration space is real. At the other, maybe continued inability to handle noise, and limits to how much information you can cram into the state of the system, say something about what entanglement really is.

    I still occasionally entertain the idea that we’re staring cross-eyed at some sort of statistical mechanics: It’s very strange to me that the state of a quantum system looks like it’s distributed over some sort of configuration space, because a classical configuration space is what it is for logical reasons that have to do with what we’re talking about when talking about information, and an orthodox quantum system must be ontological.

    Any world where quantum computers actually work would basically have to be a many-worlds world.

    • Scott Locklin said, on May 17, 2021 at 9:05 am

      You’re talking about something that doesn’t and probably won’t ever exist, like nanotech. That’s pretty much my entire theme here: modern “physicists” never talk about physical things; they’re just wanking imaginary glass bead games.

    • Chiral3 said, on May 17, 2021 at 11:50 am

      Another interesting aspect of physics research is the effect that where/who you study with determines trajectory. If I recall, SH was one of those early LQG people centered around PI in Canada. The graph emanating from the Ashtekar/Penn State crew sweeps up a bunch of these people (Smolin, Dreyer, …) and seems to be pitted against the graph that originates with Texas (Polchinski, Weinberg, …).

    • Mongoose said, on May 26, 2021 at 9:39 am

      “Any world where quantum computers actually work would basically have to be a many-worlds world.”

      This isn’t the case. The theory of quantum computing doesn’t depend on any particular interpretation of QM, despite what David Deutsch might like to think.

      You are right though that quantum computing is a test of how far you can push entanglement, and there have been suggestions that the reason why it hasn’t yet been successful is because we don’t understand entanglement e.g. https://arxiv.org/abs/1301.7351 (note that this isn’t an interpretation of QM, it’s a correction to it)

      Of course the simpler reason that QC hasn’t worked is that it just doesn’t scale. This should be obvious because if quantum effects did scale then we’d see them all the time and they wouldn’t be counter-intuitive.

  6. Rickey said, on May 16, 2021 at 6:58 pm

    Beauty, elegance and aesthetics mostly originate from understanding and appreciation, and are not an inherent quality of the subject being examined. For example, a mechanical engineer would find a modern internal combustion engine “beautiful” since they realize all of the effort that went into the design and manufacturing, whereas a layman would just see a hunk of metal. Kepler’s laws of planetary motion or the periodic table are elegant since they created order from what was formerly chaos. In any scientific field: discover something new that creates understanding, and the beauty and elegance will follow. It is not the other way around.

  7. Frank said, on May 17, 2021 at 5:16 am

    Dr. Hossenfelder’s mouth is remarkably symmetrical.

    • Altitude Zero said, on May 17, 2021 at 3:57 pm

      Yeah, she’s actually kind of cute, or at any rate a lot better looking than Ed Witten…

  8. glaucous noise said, on May 17, 2021 at 3:05 pm

    High energy is funding-starved and considered near-suicide for physics students (I completed my PhD in 2020, so I’m quite fresh in my understanding of the climate; the push was overwhelmingly towards ML/data “science”).

    So, the system works, albeit agonizingly slowly.

    Sclerosis in computing seems in part due to the fact that engineers have been able to juice out a teeny bit more performance without really doing anything seriously hard (e.g. going beyond CMOS). The propaganda around AI and quantum computing seems like it will continue the charade for a bit longer, but international competition to be top dog in computing will cause a backlash sooner or later, when upstart nations blow their ability to unseat the US by wasting money on AI/quantum computing.

    What astounds me is how classical physicists, with their appalling track record, have taken so long to completely implode. I can’t imagine the UK is too happy about how their graphene investments have turned out, nor that the EU technocrats have an easy time defending the use of taxpayer money on the LHC.

    • chiral3 said, on May 17, 2021 at 3:23 pm

      AI/ML… nauseating. So what does your immediate future hold? You just finished; what options did you have?

      • glaucous noise said, on May 17, 2021 at 3:30 pm

        Got a job at a national lab. Work on simulation codes for solid state electronics/optoelectronics.

        Since quantum “computing” is the topic du jour I find myself working on devices with nonlinear effects which can all be repurposed as “entangled pair sources”.

        So, real science disguised as fundable science.

        • chiral3 said, on May 17, 2021 at 3:41 pm

          That’s fantastic, congratulations. If you stay away from finance, crypto, and universities it should be a blast.

          • glaucous noise said, on May 17, 2021 at 3:58 pm

            Thanks, yeah I intend to milk quantum “computing” as long as possible.

            I think the colossal funding for quantum computing is an opportunity for more than just grifters and parasites. It is a good opportunity to work on desperately needed fundamentals. Which does not mean black board pontification, it means models that actually conform to reality.

            I can also say that the current funding environment is going to be even more colossal than it currently is without a doubt, for anybody who can take advantage of it.

    • Scott Locklin said, on May 17, 2021 at 8:05 pm

      My thesis advisor (pictured above) had a sort of big party when he knew his time left was limited, and all the active researchers were asking me, an actually successful data scientist, what I thought of “deep learning”, which they were all doing sort of moronic things with. None of them liked the answers. I thought it was just some dumb fad. Anyway, I just thought they were being weird; interesting to learn that physics departments are repurposing themselves as shitty feeder programs for data science.

      Congratulations on getting done, and I wish you big success.

      • glaucous noise said, on May 17, 2021 at 8:13 pm

        Thanks, it was a real party during COVID.

        That is exactly correct; the overwhelming majority of theory PhDs from physics are pumped into data science positions. What’s hilarious is they usually have to retrain for several years before getting hired.

        The experimental guys do far better.

        I did my PhD in applied physics/engineering doing computational stuff, so I was fine.

        Only losers do a PhD in goofy physics, as you say.

  9. Walt said, on May 17, 2021 at 10:18 pm

    I can’t think of a field that isn’t moribund nowadays.

    • Altitude Zero said, on May 18, 2021 at 1:23 am

      Well there’s OnlyFans and Critical Race Theory, but that’s about it.

    • Scott Locklin said, on May 18, 2021 at 11:19 am

      Cryptography is pretty interesting. I think ZKP systems a la STARKs, PLONK, etc. could be as revolutionary as something like Reed Solomon codes.
      View at Medium.com

      Rockets and batteries are slowly improving, mostly by Musk building type-1 organizations around their improvement. There are more revolutionary things which could take place here, or lots of little improvements could sum up to something really big, like the green revolution. After all, hydrocarbons are still vastly better energy storage systems than the best batteries; batteries as good as an equivalent mass of diesel (preferably with a comparable safety profile) would be a really big deal.

      Someone could probably do the same thing for aircraft design. We’ve basically been flying around in turbofan B-47s for the last half century; that can’t be the only way to move people and material around in the air. I mean, “why change” is a valid question here, but the slow failure of Boeing and Lockheed might provide opportunities for big changes.

      I also think there are areas of classical statistics and machine learning which border on revolutionary, but which very few people have noticed. It will probably require a Tukey to pull it all together into a new field. I write about them here on occasion. Even something like boosting hasn’t really fully played out yet: a very revolutionary technology that has already had a huge impact on the field well beyond dweeb learning.
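
      To unpack the boosting remark: a bare-bones AdaBoost on decision stumps (the textbook algorithm, sketched as an illustration; the toy diagonal-boundary dataset is made up) shows the core trick, which is reweighting the examples each weak learner gets wrong and stacking up the weighted votes.

      import numpy as np

      def stump_fit(X, y, w):
          # best single-feature threshold rule under example weights w
          best = (0, 0.0, 1, np.inf)                 # (feature, threshold, sign, error)
          for j in range(X.shape[1]):
              for t in np.unique(X[:, j]):
                  for s in (1, -1):
                      pred = s * np.sign(X[:, j] - t + 1e-12)
                      err = w[pred != y].sum()
                      if err < best[3]:
                          best = (j, t, s, err)
          return best

      def adaboost(X, y, rounds=25):
          w = np.full(len(y), 1.0 / len(y))          # start with uniform weights
          ensemble = []
          for _ in range(rounds):
              j, t, s, err = stump_fit(X, y, w)
              alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
              pred = s * np.sign(X[:, j] - t + 1e-12)
              w *= np.exp(-alpha * y * pred)         # upweight the mistakes
              w /= w.sum()
              ensemble.append((alpha, j, t, s))
          return ensemble

      def predict(ensemble, X):
          votes = sum(a * s * np.sign(X[:, j] - t + 1e-12) for a, j, t, s in ensemble)
          return np.sign(votes)

      rng = np.random.default_rng(1)
      X = rng.standard_normal((300, 2))
      y = np.sign(X[:, 0] + X[:, 1])    # diagonal boundary no single stump can fit
      model = adaboost(X, y)
      print("training accuracy:", (predict(model, X) == y).mean())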

      Cheap genetics tests are also producing very interesting results in anthropology and archaeology.

      No idea what’s going on in physics, but quantum optics (or, just “optics”) looked like it could be a big deal in the 90s. Materials science also makes incremental progress; maybe something cool could happen here.

      Astronomy: I think there’s lots going on here. Could be even more going on if we shot more telescope stuff into space.

      Crispr could produce big changes; looks really neat. I suppose it could also be the end of humanity.

      I strongly suspect there are drug-like substances hiding in everyday supplements which could improve the human condition in ways comparable to antibiotics. There is zero incentive to discover and mass produce them beyond the ordinary snake-oil industry; it would have to be done by a government, and it might require Tukey-type statistical breakthroughs. I’m thinking of humble things like glucosamine, which can have miraculous effects on quality of life. Various vitamins as treatments for disease, including chronic disease, could be a real game changer (menaquinone-7 might be effective for arteriosclerosis, vitamin D for respiratory infections, etc, etc).

      I assume there are other fields grinding forward in places where people are trying to solve practical problems or can make money. People have pointed out that lots of small changes to automobiles have made them safer; it’s true, and this shouldn’t be discounted. Fracking was like this; lots of little changes, long term engineering ideas; nobody noticed it creeping up on us until it had implications of geopolitical importance. One thing all these fields have in common: they mostly don’t have PR.

      Maybe this comment could be fleshed out into a blerg.

      • Chiral3 said, on May 18, 2021 at 12:44 pm

        Yeah, this could be a blerg, and I could say so much on this subject. One thing that is missing today is the required gestational period, the characteristic timescale required for innovation. It can’t all be “rapid”. Vapid innovation is real. I can’t stand sci-fi, mainly because the writers are poor writers, but I seem to recall a Neal Stephenson book called Anathem, methinks, and it was about this race of beings, similar to humans, that got jobs for the betterment of their planet based on scientific needs. Some projects lasted thousands of years and, despite your involvement, would never be completed in your lifetime. Which brings us to quarterly earnings, marketing, 3D printing, nanotechnology, regression (I mean AI). Type-1 companies have no gestational period. They are gobbled up – culture, IP and all – and forced to hardcore commodify against artificial time scales without a safeword. Not to say there isn’t small scale innovation, but it makes me think we snuff out large scale innovation as a behavior. In the 1950s everyone was facing the same direction when they showed up to work at, say, Bell Labs. We (the US) were a country of ego-centric individuals all aligned on the big picture. The collective unconscious took in Ayn Rand and JD Salinger. Today we are a country of group-centric collaborators misaligned on the big picture. A quick peek at the latest best-selling non-fiction list will tell you everything you need to know about the zeitgeist.

        Things are working out
        https://pbs.twimg.com/media/EvBqQ99UcAAQuwh?format=jpg&name=900×900

      • Walt said, on May 18, 2021 at 4:11 pm

        Rockets and batteries are slowly improving, mostly by Musk building type-1 organizations around their improvement. There are more revolutionary things which could take place here, or lots of little improvements could sum up to something really big, like the green revolution. After all, hydrocarbons are still vastly better energy storage systems than the best batteries; batteries as good as an equivalent mass of diesel (preferably with a comparable safety profile) would be a really big deal.

        Batteries still do, and likely always will, disappoint. I am learning similar lessons on the nickel-iron battery front. We may have to face the fact that gasoline has been the ultimate transportation fuel, and the economists’ picture of universal substitutability may not apply.

        Someone could probably do the same thing for aircraft design. We’ve basically been flying around in turbofan B-47s for the last half century; that can’t be the only way to move people and material around in the air. I mean, “why change” is a valid question here, but the slow failure of Boeing and Lockheed might provide opportunities for big changes.

        Judging by the F-35, airframe design is going backward. I got through the part of Coram’s book where the bureaucrats turned Boyd’s F-16 into another compromise and was filled with nausea. The financial and bureaucratic incentives to do things poorly to make the most money are impossible to overcome, thus there is no incentive to innovate. Rutan’s A-10 replacement was designed in the ’90s.

        I strongly suspect there are drug-like substances hiding in everyday supplements which could improve the human condition in ways comparable to antibiotics. There is zero incentive to discover and mass produce them beyond the ordinary snake-oil industry; it would have to be done by a government, and it might require Tukey-type statistical breakthroughs. I’m thinking of humble things like glucosamine, which can have miraculous effects on quality of life. Various vitamins as treatments for disease, including chronic disease, could be a real game changer (menaquinone-7 might be effective for arteriosclerosis, vitamin D for respiratory infections, etc, etc).

        The FDA has gone backwards in its ability to distinguish good drugs from bad. In the ’90s, legislation eliminated the requirement for new drugs to demonstrate better effectiveness and lower side effects than existing treatments. Now drugs only have to be compared to a placebo, and it’s much easier to design studies that make drugs look better than a placebo, hence SSRIs. In 2016, legislation was passed that allowed drug manufacturers to withhold raw trial data from the FDA. We obviously can’t trust BigPharma to give us safe new treatments. How would BigPharma patent and profit from what you describe? Is it profitable to cure an illness?

        I assume there are other fields grinding forward in places where people are trying to solve practical problems or can make money.

        The problem is that companies have figured out how to make money without innovating.

        • Scott Locklin said, on May 19, 2021 at 8:26 am

          I didn’t say anything about pharma companies or the FDA; I said there are definitely helpful things in OTC supplements and probably elsewhere. It doesn’t take much imagination to figure how people can learn about them, and an enterprising young scientist might make a nice career playing Sean Connery in Medicine Man at the local GNC. Removing the profit incentive would probably produce better results than the present rentier capitalism situation. It sounds crazy but people really used to make/discover new drugs for the benefit of humanity in general.

          Batteries are shit compared to gazzuline, but that fact indicates there’s more possibilities for improvement there. They’ve certainly improved a lot in the last 20 years. Mostly because someone tried to improve them.

          Rutan had an A-10 replacement? Links or it never happened.

          • Walt said, on May 19, 2021 at 11:42 pm

            https://www.scaled.com/portfolio/ares/

          • anonymous said, on May 20, 2021 at 4:44 pm

            Hey, I saw that one at the LA County airshow. It’s built for low-and-slow, but IIRC it was lighter than the A-10 by far – it couldn’t possibly have been as armored. Maybe it was just the scale-model demo. Also, I don’t think the government is capable of buying something from someone until they’ve been eaten alive by Lockheed or Boeing (see the Predator).

            • Walt said, on May 20, 2021 at 5:22 pm

              I thought the armor of the A-10 was only around the cockpit, to keep the pilot from being killed. Survivability is a different concept: can the aircraft get back after being shot up? However, the aircraft will not survive if the pilot dies, so you need both. I think the ARES could be both armored and survivable at a lighter weight.

              The quality and success of an aircraft seems to be inversely proportional to its cost.

              • Scott Locklin said, on May 21, 2021 at 8:07 am

                Hard agree on that. F-111, F-35: golden dodo birds. Hilariously being repurposed to do stuff an armored biplane could do better.

                The exception is the F-15 (or the Su-27 family). But there might have been a cheaper way to achieve similar goals. Or maybe air superiority is inherently expensive; who knows.

                • Walt said, on May 21, 2021 at 5:58 pm

                  Pierre Sprey et al think air superiority should only cost $30 million/aircraft:

                  americas-defense-meltdown-2008.pdf (PDF)

                  I’m not sure we really need all of these expensive electronics on them either. For example, when you turn on your own radar, you give away your position to anyone with a modern RWR.

      • asciilifeform said, on May 18, 2021 at 6:28 pm

        ZKP is an epic scam of perhaps-unprecedented complexity and cynicism: in particular, it is very likely NSA & co.’s flagship attempt to re-fight (and win) the 1990s “key escrow wars” via subterfuge, rather than force.

        AFAIK, all ZKP schemes proposed to date fall into one or both of the following categories:

        1) Intrinsically master-keyed bombs — where users are required to believe, strictly on faith, that a “trusted” (though performed by potentially hostile strangers!) computation had taken place; and in particular, that a certain intermediate value — inevitably generated and able to play the role of a master key — had been destroyed (i.e. retained nowhere) at the end of the computation. Somehow we are expected to ignore the fact that such a destruction is physically-impossible to prove.

        2) Obscurantist bugman complexity soup, often enough with a generous helping of (1): sitting on top of half a dozen or more unproven number-theoretical assumptions (not always explicitly catalogued by the proponents of the scheme in question); a multi-megabyte blob of pointeristic implementation code; extensive (and usually to a large extent unknown) weak-key spaces; and the utter impossibility of fitting the entire program “into your head” (whereas e.g. RSA can be explained to an intelligent schoolboy within an hour).

        Add to this unabashed “quantumism”, where hucksters ask us to replace a simple and well-understood scheme (RSA) with egregiously complex ellipticisms, accompanied by highly questionable proofs of “equivalently strong” yet shorter keys; and all of this pushed via a FUD industry based around the notional properties of an imaginary wunderwaffen “to be built any day now, but More Studies Needed, more grantola please!”
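
        The master-key worry in (1) fits in a few lines of toy code (a Pedersen-style commitment over a deliberately laughable group; parameters are illustrative, not any production scheme): whoever kept the secret x linking the two generators can open a commitment to any value whatsoever.

        p = 2**127 - 1                 # Mersenne prime; work in the multiplicative group mod p
        g = 3
        x = 2**61 - 1                  # the "toxic waste": prime, and invertible mod p - 1 here
        h = pow(g, x, p)               # public generator, secretly g^x

        def commit(m, r):
            return (pow(g, m, p) * pow(h, r, p)) % p

        m, r = 42, 999
        C = commit(m, r)

        # C = g^(m + x*r) mod p, so knowledge of x allows opening C to any other message:
        m2 = 13
        r2 = ((m + x * r - m2) * pow(x, -1, p - 1)) % (p - 1)
        print(commit(m2, r2) == C)     # True: same commitment, two different "committed" values

        The binding property evaporates for whoever retained x; that, in miniature, is the shape of the objection to “trusted” setup ceremonies.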

        • Scott Locklin said, on May 19, 2021 at 8:54 am

          I don’t think anyone claims elliptic curves are “post quantum.” They claim they have better probabilistic properties than RSA. I’m generally suspicious of new magic curves myself, but you can always just use bigger RSA keys.

          FWIW, STARKs, Spartan, Virgo, Bulletproofs, Hyrax, Aurora and others don’t require trusted setup. There’s a lot happening here, and I assume some of these people know what they’re talking about. Even trusted setup isn’t so bad if your organization participates. Beats trusting DNS or whatever.

          I’m not a cryptographer, but can certainly see the possibilities here. It’s not actually the privacy piece which is interesting or important: it’s the compression.

          • asciilifeform said, on May 19, 2021 at 5:44 pm

            > I don’t think anyone claims elliptic curves are “post quantum.”

            Until very recently — USG’s NSA and NIST claimed just that. (Then backpedalled. Without explanation.)

            > There’s a lot happening here, and I assume some of these people know what they’re talking about.

            This statement may well be true, but not in a way the prospective user of ZKP would like it to be. Concretely, “what they’re doing” is quite likely permanently opaque to even an intelligent user on account of unbounded, egregious proliferation of (ever-changing! to add insult to injury) moving parts. What I’m seeing is a mathematically-flavoured shell game, not unlike Official academi-economics. Or, for that matter, “strings”.

            And I’m not fully convinced that “they know what they’re doing” for that matter. People who genuinely grasp a subject historically spent at least *some* brain cycles on making it approachable to the intelligent amateur. Think of e.g. Feynman’s physics schoolbook. But — in contrast — does, e.g. this strike you as the product of people who want to improve your understanding of a subject? Or possibly the opposite?

            • Scott Locklin said, on May 19, 2021 at 10:02 pm

              Do you have citations on the NIST claim? I’ve never seen that. Post quantum is of course completely irrelevant. There’s never going to be post-quantum ciphers because there’s never going to be a quantum computer.

              Some ZKP papers are well written. Some are not. Similarly, a lot of early Reed Solomon stuff looked like moon math (frankly, a lot of recent assholes make it seem harder than it is), until you just read Berlekamp or Golay or whoever, whose brains weren’t made of Swiss cheese.

              If you really think you know ZKP gizmos are baloney and can prove it, I can get you multimillion dollar bug bounties; all legal and paid for by people with zkrollups and other such zkp based doodads: https://immunefi.com/learn/
              I can personally guarantee they’ll pay up too.

              • asciilifeform said, on May 22, 2021 at 12:24 am

                Re: NIST: https://pomcor.com/2016/02/09/nsas-faqs-demystify-the-demise-of-suite-b-but-fail-to-explain-one-important-detail/

                > Some ZKP papers are well written.

                Can I persuade you to link to a few ? Genuinely interested in the subject (or rather, such a part as constitutes something usable.)

                > I can personally guarantee they’ll pay up

                Can link to some concrete instances of these folks paying such a bounty ? (with details of “for what” and “to whom”) ? Esp. when the “to whom” wasn’t in their selfsame mafia?

                • Scott Locklin said, on May 22, 2021 at 8:13 am

                  I don’t think that link says anywhere that the NSA claimed elliptics were post-quantum. Just “don’t bother switching from Diffie-Hellman, since you’d have to switch again.” I also don’t trust the NSA, for reasons listed in there.

                  I think this is a decent review article on SNARKs:
                  https://arxiv.org/abs/1906.07221
                  Bulletproofs:
                  https://crypto.stanford.edu/bulletproofs/

                  Don’t have a good one for the myriad others; Eli’s review article (on Medium, linked above, cf. “Cambrian”) of the different flavors may contain some. I’m assuming Israeli cryptographer dudes might have …. certain connections …. but it’s not in their best interest to attempt to commercialize a backdoored STARK in a blockchain (mind you, it’s used for compression, not for privacy). Verifiable computation on exchanges is huge.

                  Re: bug bounties:
                  I’m an investor in those folks; they pay out on a regular basis and if they didn’t pay someone I know, I’d have the CEO in a headlock and give him a noogie until he coughed it up. It’s not a scam, and wouldn’t work if it was.

              • asciilifeform said, on May 22, 2021 at 12:25 am

                Tried 3 times to post the link, but can’t seem to get past your spam filter?

      • StagSlag said, on May 19, 2021 at 5:10 pm

        What are your thoughts on ocean exploration, Scott?

        Also, please flesh out this comment into a blog post. Would love to read it.

        • Scott Locklin said, on May 19, 2021 at 5:15 pm

          I assume submarine people already know neat things we don’t. Ocean is big though!

      • The Serpent and the Rainbow said, on May 20, 2021 at 12:04 am

        Could you elaborate on the type of statistical breakthroughs that might be necessary to unlock the potential of regular supplements? The first thing that came to mind were all of the projects by various big name labs and drug companies to use deep learning for drug discovery, but they all were shamefully swept under the rug after they accomplished exactly nothing, and I doubt they were what you have in mind.

        Tangentially (or maybe not), I’d love to read a post from you that elaborates on Topological Data Analysis, which is what you’re presumably alluding to on the statistics and machine learning front. More generally, I’d also definitely like to hear you expand on the rest of your comment, it’s just that you’ve had my interest piqued by the TDA stuff for a while now.

        • Scott Locklin said, on May 21, 2021 at 8:37 am

          Deep learning is mostly put to dumb purposes. “Hey this computationally expensive curve fitting technique works well on German traffic signs and some spergs taught them to play video games and Google bought them; let’s abandon all rational thought and concentrate on this because the newspaper says it’s …. the future…”

          Most of the fundamental problems in statistics and probability date back to the 1950s. How do you combine models to know more about things? Bayesianism, frequentism, information geometry, Dempster-Shafer theory, the work of Vovk and company using algorithmic information theory or game theory, TDA, signal processing, De Finetti and Savage’s subjectivism, “the scientific method”: all of these bring various opinions based on branches of mathematics to bear on the question, and some of them produce results. None of these make a lot of sense or can replicate what humans with calculators can do; the best results from all of them are a sort of slightly better calculator for human operators. Really what you want is something that works as well as, say, a good coach training a bunch of athletes. Anyway, there is much to be done here, and it’s possible new insights here could lead to better tools. Old tools might also work for drug discovery.

  10. mitchellporter said, on May 19, 2021 at 8:27 am

    “In particular, the idea that certain physical constants would be conveniently sized for perturbation theory (aka “naturalness“) is just fucking insane. La Hossenfelder talks about it, but doesn’t emphasize how crazy it is; it’s rather like assuming your checkbook will always have 00 in the cents column because it’s more convenient for you that way. Or expecting useful hash functions to produce lots of 00s on randomly generated inputs. This numerology assumes that the universe will conspire to make itself understandable with current-year fashions in the mathematical tools”

    Naturalness is the opposite of hoping for a conspiracy. A natural theory is one where its viability doesn’t depend on tuning the coefficients in special ways.

    The only reason naturalness is under attack is that the Higgs showed up without there being other new particles. The Higgs mass is sensitive to virtual contributions from other heavy particles, and the assumption was that it’s kept light by new symmetries which make such contributions cancel out. But those new symmetries imply new particles, which aren’t being seen.

    So perhaps some new philosophy is called for. These days there is an interest in the circumstances under which the coefficients in perturbation theory *are* unnatural. One might also be interested in approaches to physics in which new heavy states beyond the standard model simply don’t exist. That’s a problem because even if there’s nothing else, there ought to be virtual micro black holes making the Higgs heavy, but maybe there’s something about quantum gravity that nullifies that.

    But the point is, if you have standard model plus gravity as conventionally understood, the lightness of the Higgs indicates a miraculous cancellation among all the virtual contributions to the Higgs mass. Without some further explanation, this requires an umpteenth decimal-place tuning of Higgs couplings.
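    Some back-of-envelope arithmetic for how severe that cancellation is, using the textbook one-loop estimate that quantum corrections to the Higgs mass-squared scale like the cutoff squared; the numbers are illustrative only:

      import math

      m_higgs = 125.0    # GeV, observed Higgs mass
      Lambda = 1.22e19   # GeV, Planck scale taken as the cutoff

      # One-loop estimate: delta_m2 ~ Lambda^2 / (16 pi^2)
      delta_m2 = Lambda**2 / (16 * math.pi**2)  # ~ 9.4e35 GeV^2
      observed_m2 = m_higgs**2                  # ~ 1.6e4  GeV^2

      # The bare mass term and the corrections must cancel to roughly:
      print(f"one part in {delta_m2 / observed_m2:.1e}")  # ~ 6e31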

    I think Hossenfelder doesn’t dismiss the problem, but she thinks there are better problems to focus on, that should yield progress more immediately. That’s fair, but too many people are just going, ha ha, the stupid particle physicists assume the numbers have to be close to 1 for no reason.

    • Scott Locklin said, on May 19, 2021 at 9:17 am

      I disagree: naturalness is the assumption that the universe believes in the mathematical tools in current vogue because they had some (probably spurious) successes in the past. There are other branches of math which could be brought to bear on these problems.

      “Virtual black holes” IMO, are a black hole. Can you do an experiment? No? Then you’re a mathematical theologian!

      I continue to maintain that Anderson's statement that "more is different" is more reasonable than quantized gravity or other such high energy "philosophers' stones." There is no more reason to believe there is a quantum theory of gravity than there needs to be a quantum theory of steam engines. Generally speaking, high energy physics is asking the wrong questions, which is why it's basically irrelevant, and has been irrelevant for three quarters of a century now. Studying emergent properties of matter (say, taking the standard model and actually trying to predict the properties of atoms with it), rather than making the standard model more "unify-ey" because some smart German refugee with good PR had an intuition, seems like a better approach to me.

      • glaucous noise said, on May 19, 2021 at 9:23 am

        They’ve been trying to get basic properties of protons from QCD for quite a while. A lot of your tax dollars go to GPU bakeries which essentially convert QCD into a molecular dynamics problem via a math wank and then crank n’ spank for a few years.

        Several decades in and they are still orders of magnitude off if I’m not mistaken, although I have not paid attention to this topic for a while now.

        • glaucous noise said, on May 19, 2021 at 9:24 am

          er, not “a lot of your tax dollars”, a drop of a speck of your tax dollars

          • chiral3 said, on May 19, 2021 at 10:16 am

            I am dated, but the QCD crews were always hogging NERSC/LLNL computer time. I was doing plasma and fluid calcs and it always seemed the lattice QCD dudes got all the nodes.

            • glaucous noise said, on May 19, 2021 at 10:29 am

              I think they were overrun and routed off the field by the biodorks in the 90's and 2000's doing molecular dynamics on proteins, and are now probably an even more minute contribution to the total, what with all the machine learning and AI grandmeisters training deep networks.

              • chiral3 said, on May 19, 2021 at 12:01 pm

                Alright, alright, let's not get crazy… that was the 90's, the last decade music or movies were good, when we were too young to be scared of AIDS and too old to be scared of sex, and when copying UNIX prompts from a message board constituted hacking. The protein folding guys were up-and-coming then. I remember interviewing at Schrodinger before I went to finance; that was probably around 2000. Pre-GPU, when Beowulf clusters were the poor man's supercomputer on the east coast; the west coast for some reason liked Macs (AppleSeed clusters) and reverse Polish calculators. Spent most of the time doing memory management on the IBM clusters. But, yes, I could see the bio guys knocking the LQCD guys in the 00's just like they knocked the fusion dudes in the 90's.

      • mitchellporter said, on May 19, 2021 at 2:14 pm

        “Can you do an experiment?”

        The basic experiment here was done in 2012 when the mass of the Higgs boson was determined. That’s data and it’s a problem for any theory where that mass is unnatural.

        But on the topic of quantum gravity, let me point out that the Higgs boson mass was *correctly predicted in 2009* by an alternative approach to quantum gravity, asymptotic safety. (It’s “alternative”, but it has good pedigree in that it stems from an idea due to Steven Weinberg.)

        https://arxiv.org/abs/0912.0208

        An acquaintance in Brazil (Daniel Rocha) described this as the first ever successful prediction of quantum gravity. 🙂

    • anonymous said, on May 20, 2021 at 12:20 pm

      Correct me if I’m wrong, but isn’t the sequence of events for why the Higgs boson was added into the standard model something like this?:

      1. The weak force is weird – it experimentally breaks certain symmetries and weak decays don’t respect conservation laws that are otherwise respected (lepton numbers/baryon numbers of various sorts).
      2. The model they came up with for the weak force is also weird: you have to postulate left-handed and right-handed helicity versions of every fermion, and the weak force only acts on one helicity and not the other. This isn't Lorentz invariant, because for massive particles a Lorentz boost will change one helicity into the other.
      3. So they decided every fermion was massless and must move at the speed of light, so that the helicities could be preserved. They added the Higgs mechanism to re-add mass into the picture: instead of a mass, all these particles now have a "Higgs charge." Some combination of the Higgs charge and the Higgs mass is the particle mass.

      Is that roughly the chain of reasoning here?

      • anonymous said, on May 20, 2021 at 12:23 pm

        If so, how did they make any sort of prediction for the Higgs mass at all, since it seems you have the freedom to dial the mass and the Higgs charges independently to explain the particle masses?

      • anonymous said, on May 20, 2021 at 12:32 pm

        Also, one other thing: I see a lot of stuff I can't quite translate yet as to why a theory of quantum gravity is difficult, but isn't one fairly glaring reason that in a gravity theory, energy has an absolute, observable effect on the world?

        This makes some of the mathematical tricks that have been baked into QFT unusable, right? You can't just arbitrarily transform your gauges if those background field values have some absolute meaning (some associated energy) to them. You also have to clean up what you mean by all these various vacuum energies and zero-point energies, because there certainly aren't any gravitational effects from them!

        • chiral3 said, on May 20, 2021 at 1:27 pm

          If you want an interesting / entertaining view into how analytically continuing a train of thought fails, check out Feynman's Lectures on Gravitation. I wish more physicists made their failed thinking available. If you believe that naturalness is a trap then, as heretical as it sounds, so may be symmetry.

      • glaucous noise said, on May 20, 2021 at 12:34 pm

        “chain of reasoning”

        Reasoning? That sounds more like an intern explaining his spaghetti code.

        I’ve always had the sneaking suspicion that the Standard Model is Ptolemaic spaghetti code.

        • anonymous said, on May 20, 2021 at 12:43 pm

          I’m still trying to learn this stuff, so I may be representing it badly.

          • glaucous noise said, on May 20, 2021 at 1:44 pm

            You’re doing a fine job.

            You're just in the unenviable position of having to communicate an intern's Ptolemaic spaghetti code on the internet.

  11. Scott Locklin said, on May 19, 2021 at 5:29 pm

    Yeah, that QG don’t matter!

    I really look at the 1973 Gargamelle experiment as being good enough for electroweak theory. FWIIW one of 't Hooft's students is complaining on LinkedIn that I was mean to Miz Sabine (on the contrary; that's as flattering as I get). From the looks of him, the poor bastard can't get a job as a data scientist despite a Nobel-signed Ph.D. and a couple years of "certifications" in python or whatever.

  12. mitchellporter said, on May 21, 2021 at 2:12 am

    I’ll respond to the three questions from @anonymous together…

    A. Motivation for the Higgs:

    The weak interaction was originally modeled by Fermi as a contact interaction between nucleons, electron, and neutrino. The Higgs field was introduced in order to give mass to bosonic mediators of the interaction (now known as W and Z particles), by providing the necessary extra degrees of freedom to otherwise massless Yang-Mills fields. Out of the numerous people who contributed to the working-out of this idea, Higgs was the one who mentioned that the unused components of such a field would also show up, as new stray particles. “The Higgs boson” is actually the leftover part of the Higgs field, the other components being the “Goldstone bosons” that are subsumed into the W+, W-, and Z.

    That the Higgs field could also create massive *fermions* by binding two Weyl spinors into a Dirac spinor via a Yukawa-type interaction term… the history of that idea is much, much more obscure. The Higgs mechanism for bosons was definitely the center of attention; you can read about its history at nobelprize.org. But who first realized that couplings to the Higgs field could also explain fermion mass? I have no idea. I think there's a precedent in Schwinger…

    Regarding the handedness of fermions, that possibility is implicit in the Dirac equation. It was the experimental discovery of parity violation in the weak force, plus the apparent masslessness of the neutrino, which made Weyl spinors directly relevant for particle physics.

    The fermion masses arise from a combination of the Higgs-fermion Yukawa coupling (this is what you are calling the "Higgs charge" of the fermion) and the vacuum expectation value of the Higgs field (not the mass of the Higgs boson). These interactions exist because both the fermion fields and the Higgs field carry electroweak charges ("weak isospin," the weak-force difference between e.g. electron and neutrino, and "hypercharge," the more primordial charge which in combination with weak isospin gives rise to electric charge).

    There are Higgs-Yukawa couplings not just between the left- and right-handed components of the various familiar Dirac fermions, but also between e.g. the left-handed part of the electron and the right-handed part of the muon. This accounts for the changes in lepton and quark flavor which, as you observe, are also part of the weak interaction.
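    As a one-line version of that: the fermion mass is m_f = y_f * v / sqrt(2), where y_f is the Yukawa coupling and v ~ 246 GeV is the Higgs vacuum expectation value. A quick sketch with rough textbook Yukawa values, so the outputs are only approximate:

      import math

      # Fermion mass from the Higgs-Yukawa coupling: m_f = y_f * v / sqrt(2).
      # v is the Higgs field's vacuum expectation value, not the boson mass.
      v = 246.0  # GeV

      yukawas = {          # rough Standard Model values
          "electron": 2.9e-6,
          "muon":     6.1e-4,
          "top":      0.94,
      }

      for name, y in yukawas.items():
          print(f"{name}: {y * v / math.sqrt(2):.4g} GeV")
      # electron ~ 5e-4 GeV, muon ~ 0.106 GeV, top ~ 163 GeV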

    B. Predicting the mass of the Higgs boson:

    There were some a priori lower and upper bounds on its mass, but the mass of the Higgs boson is indeed undetermined by the standard model, since it derives from the self-couplings of the Higgs field, and those are free parameters to be determined by experiment.

    In the end, the Higgs mass has turned out to lie at one of those bounds, the vacuum stability bound. Along with the existence of the Higgs, this is the real discovery of the LHC. The 2009 paper which anticipated this outcome derived it from a quantum-gravity hypothesis that’s unpopular since it contradicts string theory and also says there are no further particles to discover, so it has not been embraced.

    C. Role of vacuum energy in quantum gravity:

    This is definitely an issue (one of many). The most common idea is that vacuum energy *does* gravitate, but that out of anthropic necessity the contributions to it almost cancel out, with dark energy being the harmless remainder. Weinberg used this argument to predict dark energy of the observed magnitude in 1988, ten years before it became the new consensus in cosmology thanks to supernova measurements.

    • anonymous said, on May 21, 2021 at 11:13 am

      Thanks for the reply. I’ll have to think about it.

      "The most common idea is that vacuum energy *does* gravitate, but that out of anthropic necessity the contributions to it almost cancel out, with dark energy being the harmless remainder. Weinberg used this argument to predict dark energy of the observed magnitude in 1988, ten years before it became the new consensus in cosmology thanks to supernova measurements."

      This sort of reasoning drives me up a wall: I don't understand how it's possible to arrive at any conclusion, much less a predictive one, by assuming that one poorly understood thing, vacuum energy (which would otherwise blow up your model, if it even yielded a finite magnitude without the imposition of arbitrary cutoffs), cancels with another poorly understood thing (???), leaving a third poorly understood thing (dark energy) that just so happens to correspond with future measurements. This shouldn't be possible, and it makes me suspicious. A prediction should follow from some sort of model derived from some sort of physical principles. Conspiracies of cancellation seem nuts to me. Why does this work at all?

      • mitchellporter said, on May 22, 2021 at 3:47 am

        I am not asserting as a matter of fact that this is how things are. There may be a simple non-anthropic explanation for the size of dark energy, or there may even be no such thing as dark energy; the supernova redshifts might be due to something else.

        But the argument is something like this:

        If you have too much vacuum energy we couldn’t exist, because the universe would expand too fast for material structure to form.

        There’s no other anthropic constraint on its value.

        We know empirically that it’s not negative, because the universe is expanding, so space is either flat or positively curved.

        Therefore we should expect that the vacuum energy is a small positive value somewhere between zero and the anthropic bound. The size of the dark energy meets this expectation.

        Weinberg made this argument in 1988 at a time when everyone assumed that vacuum energy was simply zero. A few years after dark energy became a pillar of the new cosmological orthodoxy (“Lambda CDM”), Bousso and Polchinski showed how this could work in string theory, by describing a universe in which different domains develop vacuum energies sampled from a distribution of possible values dense enough for Weinberg’s argument to work.

        About the contributions to vacuum energy almost cancelling out… The total vacuum energy is a sum of partial vacuum energies. In a field theory those are the zero-point energies of the fundamental fields, in string theory they come from string-theory entities like branes and fluxes and the curvature of the extra dimensions. These contributions can be positive or negative.

        The point of the anthropic argument is that these partial vacuum energies can be heterogeneous – large and small, positive and negative – but we will see them summing almost to zero, not for any deep physical or mathematical reason, but simply because there are no observers in regions of high total vacuum energy.

        It's a little like supply and demand. The intricate global supply chains that produce consumer goods exist not because wishes magically come true, but because the market has enough combinatorial variety to organize itself to satisfy them. Analogously, observers "demand" a low vacuum energy in order to exist, and so the basic fields (or whatever) "supply" it.
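        Here is a toy Monte Carlo of that selection effect, with made-up numbers that have nothing to do with real cosmology: sum many heterogeneous contributions, then condition on the total landing inside a narrow "anthropic window":

          import numpy as np

          rng = np.random.default_rng(0)
          n_universes, n_terms = 1_000_000, 100

          # Each "universe" sums 100 positive-and-negative vacuum-energy
          # contributions of order one (arbitrary units).
          total = np.zeros(n_universes)
          for _ in range(n_terms):
              total += rng.uniform(-1, 1, n_universes)

          bound = 1e-3  # observers only exist inside this window
          habitable = total[np.abs(total) < bound]

          # Survivors sit at roughly the scale of the bound itself: small by
          # selection, not because of any dynamical cancellation.
          print(len(habitable), float(np.abs(habitable).mean()))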

    • anonymous said, on May 21, 2021 at 11:22 am

      Also, I could keep cranking out questions all day, but I’m sure you have finite time, and Scott’s comment section has finite pixels:

      Does the presence of a particle at a particular energy level strongly determine where it slots into a given model in terms of interactions, or are the interactions independent of the mere existence of the particle? Could the massive particle discovered at the LHC be something other than a Higgs boson, something that interacts differently? I suppose a particle that didn't interact with the set we have at all would never be created by dumping energy into the known set, so the frequency of occurrence has something to do with the strength of the coupling to *something*. Beyond that, though, what can we tell from experiment?

      • mitchellporter said, on May 22, 2021 at 4:42 am

        Such new particles are discovered in a particular context. You have a model of e.g. what happens when two protons collide head-on at high energies – an energy-dependent probability distribution over the different kinds of shrapnel that you get – and then you test the model, by colliding two protons a zillion times, at increasing energies.

        One day you see that at a given energy, you're getting events in your distribution of outcomes that deviate from the model. Suppose you want to explain this in terms of the decays of a new heavy particle. Already there are constraints on its nature. To begin with, the only inputs were two protons. So the heavy particle was produced by something that was produced by something that was produced… by the interaction of two protons. And similarly, the new events were caused by something that was caused by something that was caused… by the immediate decay products of the heavy particle. So absolutely, there are constraints on the new particle's interactions: its interactions must allow it to have the required place in the causal chain between what goes in and what comes out.

        By all reports, the new particle behaves exactly like a standard model Higgs boson – which is an identity with highly specific implications. Once it’s formed, half the time it should decay to a pair of bottom quarks, about a fifth of the time it should decay to a pair of W bosons, and so on. At this point hardly anyone is trying to explain the data differently, and the main interest lies in probing the particle’s properties – is it composite, could there be new unknown particles among its decay products, and so on.
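        For reference, a toy decay count using the commonly quoted standard model branching ratios for a 125 GeV Higgs; the values are rounded from memory and approximate:

          import numpy as np

          # Rough SM branching ratios for a 125 GeV Higgs boson.
          branching = {
              "b bbar":      0.58,
              "W W*":        0.21,
              "gluons":      0.08,
              "tau tau":     0.06,
              "c cbar":      0.03,
              "Z Z*":        0.026,
              "gamma gamma": 0.002,
              "other":       0.012,
          }

          modes = list(branching)
          probs = np.array(list(branching.values()))
          probs /= probs.sum()  # normalize away rounding slop

          # Simulate a batch of decays and recover the table from "data".
          rng = np.random.default_rng(0)
          draws = rng.choice(modes, size=100_000, p=probs)
          for m in modes:
              print(f"{m:12s} {np.mean(draws == m):.3f}")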

  13. Montius said, on May 21, 2021 at 2:37 am

    Scott,

    I remember reading that you are a fellow Lex Fridman non-respecter (he puts me to sleep as well), but as a BJJ respecter you might be interested in this conversation he has with John Danaher. I listened to it to hear what Danaher had to say about grappling, but they also discussed machine learning/AI and other topics interesting to a layperson like me. Interested to know what you think about their discussion.

    Be warned: it is a long podcast.

    • Scott Locklin said, on May 21, 2021 at 6:02 am

      I don’t have time (or inclination) to listen to podcasts; only Caribbean Rhythms. Danaher’s a beast though.

      • Montius said, on May 24, 2021 at 2:33 am

        I hear you on that one. I don’t normally listen to Fridman at all either, but I just made an exception for Danaher. Really good insights into training and conceptually thinking about jiu jitsu.

        Caribbean Rhythms is the best podcast and the only one I will pay money to listen to. Enjoyed BAP’s shoutout to you on the Nietzsche episode a few weeks back.

        • glaucous noise said, on May 27, 2021 at 1:48 pm

          argh I just started BJJ, first time I’ve done martial arts in more than a decade

          my… ribs… ugggh

  14. chip said, on May 21, 2021 at 9:55 am

    When you talk about physics I wonder what your attitude is of pure math as it exists now. That’s an example of a group of people who are culturally scientists, but who basically spend their entire careers doing the sort of abstract, not-materially-realizable work that you find rightfully distasteful in physics. Does the generally increased level of rigor make a difference to you? Are their actions excusable since they are at least honest about their work’s applicability?

    There are many strong examples of times when topics from pure math have resulted in great new technologies. But group theorists, topologists, etc. don’t do what they do because of this, and the world they aim to understand is pretty much just made up. Your favorite “Counterexamples in Topology” is a good example of a vast amount of math that will probably never be relevant to technology.

    Dyson maybe commented on this in the “birds and frogs” talk. Sometimes I do feel that the “birds” are treated as the only “real” mathematicians today.

    • Scott Locklin said, on May 21, 2021 at 6:01 pm

      Well, math is a branch of physics and most of them have forgotten this, but otherwise I don't have too much of a problem with them. They almost all study various useless things and enjoy bragging about how useless their field is. Zero imposture there, unlike the high energy physics guys.

      Of course, some mathematicians remember they’re a branch of physics; like Arnold:
      https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html

      Applied math guys are often very good, and at this point EE/signal processing or applied math are the two most useful majors for general numerate roles.

  15. Matthew Cory said, on August 12, 2021 at 10:45 pm

    Einstein might get revenge on the entire field. They ignored his dismissal of the geometrization of GR and condescended to his views on QM. It turns out non-Kolmogorov probability was at work in Bell's argument (you can even break Bell inequalities in classical optics). QTT (quantum trajectory theory) was confirmed in a recent experiment published in Nature, and a lot of hydrodynamic quantum analogs have been worked out. Andrei Khrennikov has even constructed a classical random field theory that is a local 'realist' model for QM. Physics is a house of cards.

