Locklin on science

Saving the world and “passion” is bullshit

Posted in five minute university by Scott Locklin on April 16, 2024

There are two things I hear from tech people which really, really harsh on my mellow. One is that you should follow your passions, and the other is that your passion has something to do with saving the world or making the world a better place.

You work to make money, not make the world a better place. Making the world “a better place” may be a side effect of your business (it’s arguably a side effect of most businesses outside of private equity and internet advertising), it might be a brand for your business, but if it’s what defines your business, your business will probably fail because you’re not thinking about business, you’re thinking about something else. For example, the Davos-spawn company “Better Place.” It was obvious on inspection this would fail, and even if it succeeded via regulatory capture it would make the world more dystopian. Electric cars continue to be a stupid idea for general use. There simply ain’t enough electricity in the fragile power grid, and most of it comes from fossil fuels, so converting automobile use to electrical is out of the question. Putting that aside, using exchangeable battery packs owned by some giant firm is shady, a liability nightmare, and insane. They burned a billion dollars and shipped batteries for 1400 cars, which ended up being useless without a company to support them.

There was no business case for “Better Place.” There was no physical or economic need for this complex and ridiculous scheme. This wouldn’t have helped the environment at all: it is impossible for this hare-brained scheme to do so for simple thermodynamics reasons. But it was something that caught the imagination of people who wanted to make the world a better place. You could have made the world a better place working at SpaceX or even Google. If you like electric cars, Tesla made numerous real breakthroughs, and was relentless about making money (IMO their cars suck). You could also have made the world a better place by piling up giant heaps of money and donating it to worthy causes. Or if you’re incapable of this, you could volunteer at the soup kitchen, or help bums kick their drug habits: that is actually helping, rather than nerd fantasies about saving the planet like Mr. Spock. Nebulous pieties and bullshit do not make the world a better place. Get your nebulous pieties from religion like a normal human being; your personal vision of “making the world a better place” is almost certainly false because you’re a dumb monkey more susceptible to mass media programming with obvious falsehoods than in any other era of human history.

Muh passion

You can have fun at work, you should be interested in your work and in lowering the entropy of the universe, you can do work that gives you energy and satisfaction if you’re lucky, you can even be obsessed with work, but you should not be passionate about your work. Save that for the bedroom. “Follow your passions” is, as Scott Adams put it, a nostrum that successful people use. It’s a distraction from how they became successful, which is ruthless competition, fierce struggle, hard work and luck. It also hides the fact that successful people are generally luckier, smarter, better looking and harder working than their competitors, and may have been born with other advantages. Nobody wants to hear that. They want to hear some happy bullshit about following your passions. I might believe that Oligarch Bezos was passionate about selling crap on the internet at some point: it’s easy to be passionate when you’re winning. Lots of people followed their passions into the poor house. His biggest success was EC2 API shit, which I 100% guarantee was nerd rage (or admiration) at how service APIs worked at DE Shaw, rather than any passionate desire to build really modular service APIs.

Imagine investing with someone who is actually following their passions! Adams uses the example of the sports memorabilia nerd opening a sports memorabilia shop as a terrible investment. Let’s make it more obvious: would you invest in a coke dealer who is passionate about high quality cocaine? Would you invest in a brothel run by a sex addict? You want to invest in someone who wants to make money, has a clear path to doing so, and has the drive and talent to pull it off. Dental practices are good investments; nobody is passionate about dentistry, but everyone loves money and healthy teeth. People with 7 kids to feed are good investments. People working on their passions as an avocation are numskulls.

Just following my passions

I get this silly happy talk from Silicon Valley dunderheads and their victims all the time. People ask me about my passions as if that’s what motivates me. I’ve helped with a few things that people were doing as passion projects (which even made the world a better place). Nobody made money on these projects, and whatever good my efforts did for them would have been better spent making money and donating it to charity, or simply giving it to a street bum who will use it on drugs. This sort of thing doesn’t go anywhere: actually selling things would have been better for the state of the human race.

These sorts of things are American Upper-Middle class nostrums. Like all American Upper-Middle class nostrums, they’ve confounded concepts of personal success, making money and status anxiety. The UMC is ridiculously status sensitive in a way other social classes are not, because they realize their position is precarious, and are terrified of being normal schlubs; this is one of the things which makes them so ridiculous. They tell themselves nonsense like this to grub up the status pole with their wives’ boyfriends. It’s rubbish and everyone should laugh this sort of nonsense to scorn.

If you want to get all weeb about it, the Japanese have a concept called Ikigai. As far as I can tell it’s their version of what Aristotle talks about in his various Moralia, which nobody reads any more, but everyone should. I think this concept is what silly con valley dorks are reaching for when they say they follow their passion or save the world or whatever, but because they’re using different words, people don’t parse them right. Stupid upper middle class American class-neurotic nostrums: these are just confusion.

 

 

Against the nerds

Posted in Locklin notebook, Progress by Scott Locklin on March 28, 2024

One of the delusions of modern times is that we need a nerd clerisy to help us run things. We’re presently at the end of the post-WW-2 order (or the post-broadcasting order), sort of nervously contemplating what happens next. There has been an active clerisy of nerdoids in place since the 1930s before the war: FDR implemented this idea of a nerdoid clerisy in its current form. Herbert Hoover offered a different nerdoid clerisy -he was an excellent engineer and administrator, and was more effective than what replaced him. FDR’s nerds were arguably a failure from the start: FDR’s clerisy put the “Great” in “Great Depression.”

You know who didn’t have a Great Depression? Knuckle dragging anti-intellectual fascisti, that’s who: people who were sans clerisy. Literal beer hall philosophers. The US clerisy did take credit for winning WW-2. Whether or not they did anything useful is questionable: the Russians did most of the actual fighting. The US had the foresight to build nukes and ramp up military production to help the Russians kill our enemies (and themselves -an important unspoken goal of WW2) for us. Some of this plan was executed by various kinds of bureaucrat-nerd, but few to none of the important decisions were made by such people, who probably supported the communists on principle. Nukes would have been built without Oppenheimer, but probably wouldn’t have without the mostly unsung Leslie Groves who did all of the important leadership work, including hiring Oppenheimer.

I bet Groves gave Oppie noogies when out of range of the cameras: the weak should fear the strong

Groves was 0% nerd race; he was an Army engineer, a type of cultured thug who has existed since the late Stone Age. He had zero tolerance for nerdoid bullshit, and you can see how hard he mogs Oppie in the photo above. Groves is the type of man leaders have relied on for all of human history, and quite a few centuries before. Groves, to put it in American terms, was more of a Captain of the Football team than he was a nerd. The same can be said of other technical work done in radar. Nerds had little to do with American victory in a “calling the shots” sense. They helped: but only because they were told what to do and kept under strict control by the Captain of the Football team. Subsequently, nerds and their bureaucracies flourished in the US, essentially cargo-culting what happened in WW-2, leaving out the all-important urgency and accountability to the Captain of the Football team who mercilessly bullycided them into producing results on a timeline, as is correct and proper.

After American victory, nerds proliferated like cockroaches, and to this proliferation was attributed a lot of the postwar American financial and industrial dominance. That dominance is more readily attributed to the fact that America held the world’s gold and the only functioning factories that hadn’t been bombed to cinders. The proliferation of nerds and nerd institutions was a result of prosperity, not a driver of it. New nerds can be made more easily than we make them now, should we happen to need more; the sciences did better when a Ph.D. was unnecessary, or amounted to a brief apprenticeship. Compare the present system, where science nerds aren’t even paper-productive until almost their 30s, and are often still kissing ass and publishing bullshit papers to get tenure in their 40s.

A historical example of astounding governmental success: the East India Company (the US was modeled after it; the flag anyway). None of the men in it were nerds. All of them were Leslie Groves types. British gentlemen, while often superbly educated in the classics and in technical fields, were not nerds. The British elite were known by continentals to be anti-intellectual. Then you go look at the situations where nerds run everything: Weimar Germany, current year, any random 1000 years of shitty Chinese history, peak Gosplan Soviet times. Nerd leadership isn’t good. Nerds belong in the laboratory. If they’re not in the laboratory they should be bullycided. Even when in the lab they need to be held accountable for producing good results; nerds will always tell you some bullshit story about their fuckups. That includes bureaucrat nerds pushing bits and paper. Do something useful with matter or GTFO. Nerds like Robert McNamara come up with failson do-everything products like the F-111, the golden dodo-bird of its time. Not-nerds who put their ass on the line like John Boyd come up with the F-16; after almost 50 years, still the backbone of Western air forces.

The same is true in tech leadership. Most of the leaders of nerds who matter are not really nerds, even if they fake it for the troops. Zuck does Brazool Jiu Jitsu and kills goats: he ain’t doing leetcode pull requests. Elon was a street fighter before he developed his interest in payment systems and rockets, and his personal life is more like Andrew Tate’s than that of a nerd. The nerds who founded Google and kept it an engineering company in its early days hired a womanizing chad to make it a useful company, and speaking of Larry types, Larry Ellison is both a womanizing saleschad and a lunatic jet pilot rather than a nerd. Look at the most prominent actual nerd entrepreneur in recent history: Sam Bankman Fiend. Archetypical nerd; he even worked at uber-nerdy Jane Street and had filthy sex orgies with other ugly nerdoids. Nerds need to be bullycided. It’s good for them, and good for the organizations they work for.

Being intelligent isn’t the same as being a nerd. Though nerdism is touted as a sort of definition of intelligence: it isn’t. Being a nerd is being a disembodied brain; a king of abstraction. Being a nerd is a lifestyle open to obvious stupidians. Even when they’re bright, nerds lack thumos; they have a hard time operating outside the nerd herd. If something is declared “stupid” the nerd won’t give it a second thought. If other nerds like a thing, or are declared “expert,” even the 200 IQ nerd will go along with it, because being a nerd is his identity. This is why the football star is superior to the nerd: his life isn’t made of abstractions -it’s made of winning, which is something that happens when you’re right, not when you do the proper nerd-correct thing to sit at the nerd table in high school. Right now there are probably a hundred thousand nerds trying to predict the stock market with ChatGPT (aka autocomplete). That’s what a nerd does: acts on propaganda as if it is real information. Chad either exploits a bunch of ChatGPT specialists and flips it as a business to a greater fool, or invents a new branch of mathematics to beat the market the way Ed Thorp did.

Objectivity is another thing the nerd lacks. Nerds are masters of dogma. They’re good at putting dogma into their brains: that’s in one sense what “book learning” is -you have a sort of resonator in your noggin that easily latches into patterns. People who are good at tests are good at absorbing propaganda. They’re bad at noticing the thing they absorbed is propaganda; that takes another personality type. One that nerds associate with “stupid people” who bullied them in high school. You know, the ones who should be their bosses.

 

Nerds fall in love with their ideas, even when they’re wrong. Architecture astronauts, mRNA enthusiasts, marxists and other schools of economics, diet loons, snake-oil pharmaceutical salesmen, “experts” in most fields -these are ideologies whose people can’t course-correct without losing face. Since being “smart” is all a nerd has, they stick with shitty ideas even unto their actual deaths. Actually intelligent people play with ideas, consider where they might be useful and where they might break down. Ideas are like wrenches; they’re not useful in every situation, and you have to pick the right one for the job. You have to put down the wrong wrench and pick up a screwdriver sometimes. That’s why you need a General Groves to manage the nerds: your legions of shrieking nerd wrench-enthusiasts can be helpful in putting together a car, but they need to be bullycided into not using a wrench to install rivets or screws. The other useful management technique is to pair them with machinists who will make fun of them for trying to use a wrench for everything: the China Lake approach.

It’s OK to be a nerd; nerds can serve a purpose. We can even admire the nerd if he’s actually capable of rational thought. It’s not OK to give nerds leadership positions. You need people who played sports or who killed people for a living, or otherwise interacted with matter and the real world. The cleric doesn’t order the warrior in a functioning society; it’s the other way around.

Examples of nerd failure:

https://gaiusbaltar.substack.com/p/why-is-the-west-so-weak-and-russia

https://archive.is/20240107140058/https://www.bloomberg.com/opinion/articles/2024-01-07/2024-elections-in-taiwan-eu-uk-us-and-elsewhere-threaten-democracy

https://collabfund.com/blog/the-dumber-side-of-smart-people/

 

Post-quantum gravity

Posted in physics by Scott Locklin on March 18, 2024

The classic conundrum for the last 100 years of wannabe Einsteins is quantizing gravity. I came to the conclusion some time ago that this pursuit is …. aesthetics-based at best. Most of the low energy “High Energy” theorists who claim to work on this are numskulls who should be bullycided by experimentalists, not least because the pursuit is fundamentally retarded. As someone put it (maybe Phil Anderson? I can’t locate the quote), there is no more reason for gravity to have a quantum theory than for steam engines to have one. They’re both macroscopic phenomena. Gravity in particular is very macroscopic, being much weaker than the other forces of nature.

Something happened last December which as far as I know hasn’t happened in my lifetime: a couple of guys suggested plausible outlines of a theory which resolves the issue and suggested an experiment to put it to the test. Amusingly the main protagonist here, Jonathan Oppenheim, has a bet with my near (and now exceedingly famous: too bad I slept late that year) Gravity professor, Carlo Rovelli, at 5000 to 1 odds that there is no quantum gravity. This is an important enough paper that even the scientific publishers have made it available in cleartext rather than making everyone go through the ridiculous conga dance of looking it up on arxiv or sci-hub. Hopefully this is one of the last gasps of those gate-keeping parasites. Ever notice how science only started to suck with “peer reviewed journal articles?” Some of us certainly have!

Our accepted classical theory of gravity, aka General Relativity, posits that gravity is a sort of geometric phenomenon. Quantizing a geometry is an interesting idea, but you need the continuum to have quantum mechanics and physics involving differential equations in general, so people resort to loops and noodles or whatever. Hence a lot of the trouble with quantizing gravity. This is a pretty good argument in itself that gravity can’t be quantized and is Oppenheim’s point of departure for his ideas.

The theory article is reasonably well written and clearly argued. No hiding behind equation forests here, though there are equation forests. He starts off with “why not go against the consensus that gravity can be quantized and see what happens” to which I say, “just so.” Very Zwicky-like mindset. He gives a history of why people worry about this issue from a theoretical standpoint, and the various paradoxes which come about taking standard views of gravity and quantum mechanics and assuming gravity is non-quantum. His theoretical starting point is something I was dimly aware of called Lindblad operators; a formulation of quantum mechanics where one attempts to model the quantum system in conjunction with its classical environment (including the off-diagonal elements). Essentially you model quantum mechanics as a density matrix (which makes more sense than a wave function, as it has better classical analogs) coupled to some kind of Markovian jumping bean thing which couples the classical environment to the observable variable:

$$\dot{X} = \frac{i}{\hbar}[H, X] + \sum_i \gamma_i \left( L_i^\dagger X L_i - \frac{1}{2}\left\{ L_i^\dagger L_i, X \right\} \right)$$

Editorial on this idea: the Schroedinger and Heisenberg pictures suffer from wave functions diffusing through all of spacetime, with collapse of the wave function sort of added on afterwards. This model is more satisfying in that measurement is baked into the thing. You could argue that it’s an ad-hoc addition, but it gives a more correct answer than the Schroedinger picture (which is itself an ad-hoc Hamilton-Jacobi add-on), even if it is considerably harder to teach in introductory classes. Of course it also kind of does away with absurdities like large scale quantum entanglement if you take it completely seriously. This thing isn’t even unitary!
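For concreteness, here is a toy numerical sketch of my own (not from Oppenheim’s paper) of the density-matrix counterpart of the Heisenberg-picture equation above: a single qubit with a pure-dephasing Lindblad operator L = σz. The trace (total probability) is preserved while the off-diagonal coherence terms decay, which is exactly the “measurement baked in” behavior; the Hamiltonian, coupling strength, and crude Euler integrator are all arbitrary choices for illustration.

```python
import numpy as np

# Pauli-z; Hamiltonian H = (omega/2) * sigma_z with hbar = 1, omega = 1.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sz
L = sz          # single Lindblad (dephasing) operator
gamma = 0.5     # decoherence coupling strength (arbitrary)

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H,rho] + gamma*(L rho L^+ - (1/2){L^+ L, rho})."""
    comm = H @ rho - rho @ H
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return -1j * comm + gamma * diss

# Start in an equal superposition: maximal off-diagonal coherence.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

dt, steps = 1e-3, 4000   # crude Euler integration out to t = 4
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print(abs(np.trace(rho)))   # stays ~1: the evolution is trace-preserving
print(abs(rho[0, 1]))       # coherence decays roughly as 0.5*exp(-2*gamma*t)
```

Note the non-unitarity: a unitary evolution would preserve |ρ01|, while here it is driven to zero, i.e. the superposition classicalizes without any collapse postulate bolted on.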

He beats this thing into various shapes which show that it doesn’t violate other kinds of field theoretical work, making normie field theorists happy, and recovering all the usual stuff we learn in our “modern” physics coursework (aka stuff which is only 75-100 years old). Then he squeezes this idea into the ADM formalism (a sort of toy GR formalism developed to quantize gravity) to develop a “post-Quantum” gravity, which is basically classical gravity with a sort of random noise popcorn machine hooked up to it which is left over from Lindbladian quantum picture.

I’m more or less filtered by his math and the various wankings about renormalization groups and gauge invariance and whatever else makes theorists happy, but my retard view is that he simply takes the Lindblad picture seriously and assumes this is the correct way of looking at quantum mechanics. The paper hasn’t been up long but it already has 37 citations as of my writing this.

The experimental paper gives an overview of the theory, then cites various experiments which bound the decoherence coupling constant -aka the weight of the Markovian popcorn machine term in the Newtonian limit, aka away from relativistic masses where we can use stuff like torsion balances to measure masses and look for excessive noise. They also suggest tests in LIGO, and have an unpublished paper suggesting other ideas for testing the thing.

The paper works through and attempts to dismiss various objections by quantized gravity advocates that a quantum gravity would produce similar outcomes. This is almost certainly where the idea’s attack surface lies. Most of it is impenetrable to me as I’ve never spent more than a few minutes even wondering about such things, but it’s the most obvious theoretical attack on the idea. I guess the other angle of attack is that General Relativity isn’t correct; this only works if you have a theory of gravity involving space-time rather than something like the old Newton’s laws. We know of course that Newton wasn’t quite right in high gravity cases, but the amount of actual experimental evidence for GR is pretty thin; as far as I know it’s all observational. We’re pretty sure it’s OK though.

The proposed experiments to narrow things down: precision measurements of masses using torsion balances as in the Cavendish experiment and other precision gravity measurements. The idea is you should be able to measure this Markovian popcorn thing as noise. How to distinguish this from other kinds of noise? Well, you can’t, but you’d be surprised at how well a good experimentalist can get rid of noise, which would home in on other limits for such a theory. Also one can bound the theory by making large scale (massy) quantum objects and observing their quantum effects for decoherence. These experiments are very cool stuff and ambitious experimental people should be working on them. But Cavendish style experiments (or MEMS doodads): those are things that any sperdo with a machine shop can make and get an answer from. If you’re clever you can move the needle here, and it will look like something a Victorian gentleman could have built (though it will probably have some laser interferometers in it).

One of the fun consequences of all this is it may eventually dispel the phantom of quantum entanglement and force quantum computards to look for productive work, regardless of the excellent Dyakonov argument against such things, which is unrelated to and independent of problems with large scale quantum entanglement. If the Lindblad view is the more correct way of looking at quantum mechanics, it’s likely that you can’t couple lots of things together in an entangled state. I’m not sure what the record is for quantum entanglement; I strongly suspect it’s something like 2 otherwise distinct objects. Quantum computers require thousands or millions of classically distinct things to be quantum entangled somehow. Of course there are also other arguments against “macroscopic” entanglement and decoherence. This decoherence thing isn’t something he mentions in the paper, so if he didn’t mean it, don’t blame this statement on him: but that’s more or less what Lindblad is, so I’m pretty sure that’s accurate.

Of course because Oppenheim is a theorist, he has to go after dork matter, and most people are just talking about that. I guess one of the side effects of this idea is it has some MOND-like consequences. It would be interesting to compare his results to the Gerson Otto Ludwig idea I mentioned before. At a glance, Ludwig’s idea produces somewhat different results, but I slept late for that course as I continue lamenting. Ludwig’s scholarship and engagement with the observational data are considerably more detailed: there may be a way to simply tease out some different consequences, but this doesn’t even rise to the level of hobby for me, so I leave that to others.

I don’t really have an informed opinion on the topic. If I had to guess or make a bet, Oppenheim isn’t right, in part because he’s a former noodle theorist who came across this idea in some black hole information “paradox” gooning session. Otherwise I’m sure he’s a fine human being and I support his efforts with all my military and naval power. I don’t even understand general relativity well enough (having skipped Carlo’s class) to know much about black holes: it’s just obvious that about 99% of black hole physics papers are unfalsifiable piffle. You can hide a lot of nonsense in singularities. Anyway, this is mere bigotry on my part; I hope he’s onto something. Also, it would be weird if gravity had some Markovian popcorn machine hooked up to it, though it certainly would have escaped notice thus far.

What Physicists Have Been Missing

https://www.quantamagazine.org/the-physicist-who-bets-that-gravity-cant-be-quantized-20230710/

https://phys.org/news/2023-12-theory-einstein-gravity-quantum-mechanics.html

More suspect machine learning techniques

Posted in five minute university, machine learning by Scott Locklin on March 14, 2024

Only a few weeks after “Various Marketing Hysterias in Machine Learning,” someone took a big swing at SHAP. Looks like the baseball bat connected, and this widely touted technique is headed for Magic-8 ball land. A few weeks later: another baseball bat to SHAP. I have no doubts that many people will continue using this tool and other such flawed tools. People still write papers about Facebook-Prophet, and it’s just a canned GAM model. Even I am a little spooked at how this went: I was going to fiddle with SHAP, but the python-vm contraptions required to make it go in a more civilized statistical environment were too much for me, so I simply made dissatisfied noises at its strident and confused advent (indicative of some kind of baloney), and called it a day. Amusingly my old favorite xgboost now has some kind of SHAP addon in its R package. Mind boggling, as xgboost comes with importance sampling which tells you exactly which features are of importance by using the goddamned algorithm in the package!
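If you want a model-agnostic importance measure without SHAP’s machinery, plain permutation importance gets you most of the way: shuffle one feature at a time and see how much the error degrades. A minimal numpy-only sketch of my own (synthetic data, ordinary least squares as the model; not tied to SHAP or xgboost):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 4
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=n)   # only feature 0 matters

# Fit ordinary least squares once.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = lambda Xm: np.mean((Xm @ beta - y) ** 2)
base = mse(X)

# Permutation importance: shuffle one column at a time, measure the MSE increase.
importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp) - base)

print(np.argmax(importance))   # 0: the only informative feature stands out
```

The point of the sketch: no game-theoretic scaffolding, no approximations of approximations, and you can sanity-check the answer against how the data was generated.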

This little SHAP escapade reminds me of a big one I forgot: t-SNE. This is one I thought should be cool because it’s all metric-spacey, but I could never get it to work. I should have taken a hint from the name: t-distributed stochastic neighbor embedding. Later a colleague at Ayasdi (names withheld to protect the innocent) ran some tests on our implementation and effectively proved its uselessness: it’s just a lame random number generator. This turkey was developed in part by neural eminence grise Geoff Hinton -you know, the guy making noise about how autocomplete is going to achieve sentience and kill us all. I think this is why it initially got attention; and it’s not a bad heuristic to look at a new technique when it is touted by talented people. Blind trust in the thing for years though, not so good. At this point there is a veritable cottage industry in writing papers making fun of t-SNE (and its more recent derivative UMAP). There are also passionate defenses of the thing, as far as I can tell because the results, though basically random, look cool and impress customers. There have always been dimensionality reduction gizmos with visualization like this: Sammon mapping, Multidimensional Scaling (MDS), PaCMAP, Kohonen maps, autoencoders, GTMs (PGM version of Kohonen maps), Elastic maps, LDA, Kernel PCA, LLE, MVU, things like IVIS, various kinds of non negative matrix factorization, but also …. PCA. Really you should probably just use PCA or k-means and stop being an algorithm hipster. If you want to rank order them: start with the old ones. Stop before you get to anything dating after ~2005 or so, when the interbutts became how people learn about things: aka through marketing hysterias. I’ve used a number of these things, and in real world problems …. I found Kohonen maps to be of marginal hand wavey utility: the t-SNE of its day I guess -almost totally forgotten now; also Kernel PCA, LLE, MDS.
I strongly suspect Sammon mapping and MDS are basically the same, and that LDA (Fisher linear discriminants, though Latent Dirichlet seems to work too, it’s out of scope for this one) is probably a better use of my time to fiddle with.
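“Just use PCA” is genuinely a few lines of linear algebra: center the data, take an SVD, project onto the top components. A minimal sketch of my own, on synthetic 3-D data that is really 1-D, so you can verify the answer:

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD.
    Returns the k-dimensional scores and the per-component variances."""
    Xc = X - X.mean(axis=0)                     # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, s**2 / (len(X) - 1)

rng = np.random.default_rng(1)
# 500 points along a line in 3-D, plus a little isotropic noise.
t = rng.normal(size=(500, 1))
X = t @ np.array([[2.0, 1.0, -1.0]]) + 0.01 * rng.normal(size=(500, 3))

scores, var = pca(X, 2)
print(var[0] / var.sum())   # ~1.0: the first component explains nearly everything
```

Unlike the stochastic gizmos, this is deterministic, interpretable (the components are actual directions in feature space), and the explained-variance ratio tells you honestly whether the reduction threw anything away.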


I suspect t-SNE gets the air it does because it looks cool, not because it gives good answers. Rather than being relentlessly marketed, it sold itself because it easily produces cool looking sciency plots (that are meaningless) you can show to customers so you look busy.

Data science, like anything with the word “science” in the name, isn’t scientific, even though it wears the skinsuit of science and has impressive sounding neologisms. It’s sort of pre-scientific, like cooking, except half the techniques and recipes for making things are baloney that only work by accident when they do work.

Hilarious autism which I almost agree with

Some older techniques from the first or second generation of “AI” are illustrative as well. Most nerds have read Godel Escher Bach and most will go into transports about it. It’s a fun book, exposing the reader to a lot of interesting ideas about language and mathematical formalisms. Really though, it’s a sort of review of some of the ideas current in “AI” research in Hofstadter’s day (Norvig’s PAIP is a more technical review of what actually was being mooted). The idea of “AI” in those days was that one could build fancy parsers and interpreters which would eventually somehow become intelligent; in particular, people were very hot on an idea called Augmented Transition Networks (ATNs), which Hofstadter gabbles on about endlessly. As I recall the ATN approach fails on inflected languages, meaning if you could make a sentient ATN, this would imply that Russians, Ancient Greeks and Latin speaking Romans are not sentient, which doesn’t seem right to me, Julian Jaynes notwithstanding. The idea seems absurd now, and unless you’re using lisp or json relatives (json is a sort of s-expression substitute: thanks Brendan), building a parser is hard and fiddley, so most people never think to do it.

Some interesting things came of it; if you use the one-true-editor, M-x doctor will summon one of these things for you. Emacs-doctor/eliza is apparently a fair representation of a Rogerian psychologist: people liked talking to it. It’s only a few lines of code; if you read Winston and Horn (or Paul Graham’s fanfic of W&H) or Norvig you can write your own. People laugh at it now for some reason, but it was taken very seriously back in the day, and it still beats ChatGPT on classic Turing Tests.
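The “few lines of code” claim is close to literal. Here is a toy Rogerian responder in the ELIZA style -my own miniature with made-up rules, not Weizenbaum’s actual script or the emacs doctor’s rule set- showing the whole trick: pattern-match the input, reflect the pronouns, and hand the statement back as a question.

```python
import re

# Rogerian trick: reflect the patient's statement back as a question.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {}."),
]

def reflect(phrase):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECT.get(w, w) for w in phrase.lower().split())

def doctor(line):
    """Return the first matching rule's response, or a neutral prompt."""
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."

print(doctor("I am afraid of my compiler"))
# → "How long have you been afraid of your compiler?"
```

That’s the whole mechanism: no model of the world, no statistics, just string rewriting -which is what makes its conversational plausibility so funny.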

Back then it was mooted that this sort of approach could be used to solve problems in general: the “general problem solver” was an early attempt (well documented in PAIP). There’s ancient projects such as Cyc or Soar which still use this approach; expert system shells (ESS -not to be confused with the statistical module for the one true editor) more or less. This is something I fooled around with on a project to give me an excuse for fiddling in Lisp. My conclusion was that an internal wiki was much more useful and easier to maintain than an ESS.  These sorts of fancy parsers do have some utility; I understand they’re used to attempt to make sense of things like health insurance terms of service (health insurance companies can’t understand their own terms of service apparently: maybe they should make a wiki), mathematical proof systems, and most famously, these approaches led to technology like Maple, Maxima, Axiom and Mathematica. Amusingly the common lisp versions of the computer algebra ESS idea (Axiom and Maxima) kind of faded out, though Maple and Mathematica both have a sort of common lisp engine inside of them, proving Greenspun’s law, which is particularly apt for computer algebra systems.

Other languages were developed for a sort of iteration on the idea; most famously Prolog. All of these ideas were trotted out with the Fifth Generation Computing project back in the 80s, the last time people thought the AI apocalypse was upon us. As previously mentioned, people didn’t immediately notice that it’s trivial to make an NP-hard query in Prolog, so that idea kind of died when people did realize this. I dunno, constraint solvers are pretty neat; it’s too bad there wasn’t a way to constrain them to not make NP-hard queries. Maybe ChatGPT or Google’s retarded thing will help.

yes, let’s ask the latest LLM for how to make Prolog not produce NP-hard constraint solvers

The hype chuckwagon is nothing new. People crave novelty and want to use the latest thing, as if we’re still in a time of great progress, such as when people were doing things like inventing electricity, quantum mechanics, airplanes and diesel engines. Those were real leaps forward, and the type of personality who was attracted to novel things got big successes using the “try all the new things” strategy. Nowadays, we have little progress, but we have giant marketing departments putting false things into people’s brains. Nerds seem to have very little in the way of critical faculties to deal with this kind of crap. For myself, I’ve mostly ignored toys like LLMs and concentrate on …. linear regression and counting things. Such humble and non-trendy ideas work remarkably well. If you want to get fancy: regularization is pretty useful and criminally underrated.
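The regularization point in one picture: ridge regression is one extra term in the normal equations, and it’s what keeps plain linear regression from blowing up on correlated features. A minimal numpy sketch of my own, with deliberately near-collinear synthetic data:

```python
import numpy as np

def ridge(X, y, lam):
    """Solve (X^T X + lam*I) beta = X^T y: least squares with an L2 penalty."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)        # nearly collinear copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

ols = ridge(X, y, 0.0)   # lam=0: plain OLS, wildly unstable coefficients
reg = ridge(X, y, 1.0)   # small penalty: sane coefficients near (0.5, 0.5)

print(np.round(reg, 2))  # roughly [0.5 0.5]: the signal split evenly and stably
```

With lam=0 the two coefficients explode to huge opposite-signed values (the near-singular X^T X amplifies noise); the penalty shrinks them back to a sensible, stable answer. One line of linear algebra, no marketing department required.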

 

Yeah, OK, we have genuinely useful stuff like boosting now, also conformal prediction, both of which I think are genuine breakthroughs in ways that LLMs are not. LLMs are like those fiber optic lamps they used to sell to pot heads in the 70s at Spencers Gifts. Made of interesting materials which would eventually be of towering importance for data transmission, but ultimately pretty silly. Most would-be machine learning engineers should probably stick with linear regression for a few years, then the basic machine learning stuff: xgboost, k-means. Don’t get fancy; you will regret it. Definitely don’t waste your career on things you learned about from someone’s human informational centipede. Don’t give me any crap about “how can all those smart people be wrong” -they were wrong about nanotech, fusion, dork matter, autonomous vehicles, string theory and all the other generations of “AI” that didn’t work either. Active workers in machine learning can’t even get obvious stuff like SHAP and t-SNE (and before these, prophet and SAX and GA and fuzzy logic and case based reasoning) right. Why should you believe modern snake oil merchants on anything?

Current year workers who are fascinated by novelty aren’t going to take it to the bank: you’re best served in current year by being skeptical and understanding the basics. The Renaissance came about not because those great men were novelty seekers: they were men of taste who appreciated the work of the ancients and expanded on them. So it will be in machine learning and statistics. More Marsilio Ficino, less Giordano Bruno.