How to be a technology charlatan
I’ve mentioned many times that I do not think technology is advancing in a serious way. By “a serious way” I mean something like what happened between 1820 and 1970. That kind of progress is apparently over. What we have now in the way of technology is 1970s DARPA-funded technology made available to the masses and leavened with javascript. Also atrocities like electric cars, and frippery like my car using radar to make up for having shitty visibility. Against all evidence, all historical perspective, we still have people trying to sell us the idea that… right around the corner … is some kind of miraculous new thing which will increase human power over nature and fuel the next burst in economic productivity. In the 1990s and early 2000s it was supposed to be nanotech. Nanotech has finally been laughed out of existence among serious people as an actual technology; even its inventor seems to have abandoned it. Allegedly serious people in the mid-to-late 2010s thought “AI” was just around the corner. After all, deep neural nets were able to identify human-cropped German traffic signs slightly better than K-nearest neighbors, and slightly better than (apparently astigmatic) humans. As a result of this and some improvements in GPS, my car is now able to tell me, with perhaps 90% accuracy, what the traffic signs I can see with my own eyeballs say.
We’re again going through such a mass hysteria, with featherheads thinking LLMs are sentient because they’re more interesting to talk to than their redditor friends. Contemporary LLMs are, in effect, a language model of the ultimate redditor. People think this despite the fact that since the 2010-2015 AI flip-out, which everyone has already forgotten about, there hasn’t been a single new profitable company whose business depends on “AI.” It’s been over 10 years now: if “AI” were so all-fired useful, there would be more examples of it being used profitably. So far, the profits all go to data centers, NVIDIA, and nerds who know how to use PyTorch. A decade after the airplane was invented there was an entire industry of aircraft manufacturers, and airplanes were being used productively in all kinds of places. You’d figure if “AI” were important, it would be used profitably by a single solitary AI-oriented firm somewhere. As far as I can tell, it’s only used to goof off at work.

A hero for our time
There is a certain kind of charlatan out there who deals in science fiction horse shit, like LARPing that LLMs are actual AI. Science fiction ideas make people feel important. They think we are moving ourselves into the future the way we did in the last century, when we did stuff like invent airplanes and refrigerators. The idea is that their gaseous codswallop is somehow going to help society sort out all the problems associated with these new technologies. Just like all those important blathering contributions that sorted out the side effects of inventing airplanes and refrigerators.
Of course, other than some advances in applied mathematics, we do not presently move ourselves into the future in any useful sense: mostly things just get older and more difficult. For example, the US, a country allegedly much more wealthy than in 1969, has a hard time sending human beings into low earth orbit, and is so far a mere 10-20 years behind schedule in sending up another mission to the moon. We also have a harder time keeping the lights on than in the past; this despite the bet that electric cars will be the new mass transport technology. Yet, money is made on online ad platforms, so we have nincompoops who think they know something about “technology” because they won a VC lottery ticket on selling a shabbier, more intrusive version of the yellow pages.
There used to be something called the Center for Responsible Nanotechnology. At this point their website is lol, as it is still predicting nanotech right around the corner in 2015 or so. But at one time, it was an actively maintained website with some kind of organization behind it. There were rich ninnies who were worried about the science fiction fairy tales of us being turned into grey goo by nanotechnology, and I assume they and various WEF lizard types funded this ridiculous thing. Bill Joy, the no-goodnik who blessed humanity with the Java ecosystem, was worried, for example, that nanotech would destroy us all. I have always thought of this as some kind of exotic projection for inflicting a lousy programming language on the human race: java is its own sort of “grey goo.” I’m not a psychologist, and I could be wrong. It is now 2023, and I believe this towering grey goo fear has dissipated to the point where nobody gives a damn about “responsible nanotechnology.” This is too bad; some chemicals are unhealthy. If we had serious people worried about irresponsible actually-existing nanotechnology, perhaps they’d save us from nasty agricultural and environmental chemicals. Glyphosate, BPA, PFAS, and rubbish like atrazine are awful. Nobody wants those things in their bloodstream; they do no good for humanity and they should be banned. Just looking at the fluoride stare of the latest generation ought to be enough evidence for this sort of nanotech irresponsibility.

A hero for our time
Now that the “bring ourselves into the future” meme has switched from nanotech to AI to quantum computards back to “AI,” we have various centers for “responsible AI” and “open AI.” There are “singularity institutes,” “future of humanity institutes,” “Machine Intelligence Research Institutes” and “singularity universities” which postulate some kind of “AI” is going to get so damn smart, it will program itself to be even smarter in a sort of intellectual perpetual motion machine.
Just as with large scale quantum entangled forms of matter, nobody has the slightest idea how to do this. Consider the fact that the “autonomous vehicles” meme is finally dying a deserved death. We’re probably not much closer to truly autonomous vehicles than when Ernst Dickmanns invented the field back in the 1980s. Some things are much easier now than back then (machine vision, LIDAR), but the fundamental problem remains. Yet, despite the preposterous failure of autonomous vehicles, reddit man informs me that “AI” is right around the corner because muh chatGPT. If it is, I’d like to see chatGPT park his car for him.
The trajectory of actual machine learning “AI” technology is pretty straightforward and not very interesting or science-fictioney. The actual future societal implications of machine learning seem to be a government-corporate surveillance dystopia, with public-private witch hunt partnerships for political control. Jobs and manufacturing will continue to be outsourced from the West (the main “AI” which are taking jobs: Aliens and Immigrants) to increase the power of the oligarchs. It’s been the obvious trajectory for decades now, and shows no signs of abating. Hell, if I were paranoid, I’d assume the spooks invented dystopian crap like Facebook in anticipation of the civil unrest resulting from deindustrialization.

A hero for our time
The key to the technology charlatan’s career is the intersection of marketing, reddit nerds and fear. Marketing, aka virtually 100% of the “news” you consume, drives the hype. Put out a glorified autocomplete trained on redditors and reddit man will see a kindred spirit. He’ll assume he can be replaced by this contrivance because he can’t tell the difference: reddit man has never displayed much capacity for independent thought. He considers himself clever; after all, he is filling up Reddit with text in an attempt to … well, who knows why reddit man does what he does. Reddit man drives the hysteria because muh progress and muh technology. Finally, you get the Harry Potter fanfic author opining, telling one of the guys who invented Deep Learning that he doesn’t know what he’s talking about.
I’m all for educated amateurs making contributions to science and technology -as long as they are actual contributions. Wasting the time of one of the few great inventors of our time with word salad is not an actual contribution. I’m pretty sure you could replace the contributions of people like Eliezer Yudkowsky with GPT-2; Robin Hanson, with an unsuccessful 1950s-era science fiction short-story writer who has a day job in a record shop; Nick Bostrom, probably with a secretly racist lutefisk merchant with high Reddit karma who takes LSD and goes to discotheques. Yudkowsky is already obsolete, so his terror is perhaps justified. Hanson, any day now, might be replaced by an LLM. There are so many people like Bostrom it’s not worth the electricity to replace him. But by and large the idea that clowns like this are taken seriously makes me wish the Rooskies would nuke us: an actual existential threat driven entirely by stupidity. We in baizuo-land live in a profoundly stupid culture, so our only chance of ridding ourselves of these morons is by calling them racist or rapey or rapey racist pedophiles. Since that doesn’t seem to be working, how about we simply notice that these are extremely online dimwits who understand little and consistently say stupid things?
It is funny to watch Western Civilization writhe as its false god of progress fails. The West has had an ideology of historical progress since Christianity took over the Roman Empire. The original idea was that the Savior would return soon, bringing an end to the existing order in favor of some paradisiacal and just future. Eventually this historical progress concept mutated into ideas of scientific and technological progress: our present Faustian civilization in the West. Since actual progress in technology broke down some time in the 1970s, we have a lot of post-Christians who think, as a matter of faith, that Faustian-tier improvements are still happening. They point to their nerd-dildos as evidence of progress, rather than evidence that they’ve been psyoped into carrying around a sulfurous machine which is essentially the slave-shackle of the emerging dystopia. Periodic hysterias over amusing toys like chatGPT or imaginary nonsense like nanotech or quantum computing are basically a sort of millenarian cult. So are all the social crazes like transgender toddlers, equalism and gay everything. If we can’t have new technological transformations creating real technological and societal change, we must make “social progress.” This is the sort of social progress which leads straight to the abattoir. Millenarian cult leaders should at all times and in all places be ignored. These aren’t people warning of real dangers: they’re clowns who have a bad model of reality. They’re certainly not making anything better with their deluded speculations. Taking them seriously is like taking representatives of Aum Shinrikyo seriously.
Technological and scientific blind spots
It was said of Henri Poincare that he was a “conqueror, not a colonist.” He was the type to make new contributions in disparate areas rather than laboring along in some well established area for his whole life. Poincare made contributions in fluid mechanics, number theory, group theory, E&M, differential equations, quantum mechanics and celestial mechanics, and without exaggeration he invented special relativity, most of topology and chaos theory, all the while working for the French bureau of standards and mining -and he died at 58. While one can’t realistically aspire to the greatness of Poincare, one can aspire to be a conqueror as Poincare was, in a small way. Poincare manfully walked into the darkness. Because he was a genius, he could consistently pull gold out of the muck and confusion. He actually wrote about how he did it; lessons almost completely forgotten today, written with a mental clarity and elegance of phrase -also forgotten.
People read science and technology papers for different reasons. Redditors and other bugmen read papers to win arguments on the internet, as if “peer review” were some kind of magical phlogiston which confers truthity. I don’t read science papers as if I’m reading something by an “authority.” I assume the authors of most papers know something I don’t, otherwise I wouldn’t read them, but that doesn’t mean I think they know what they’re talking about or have any particular accuracy in describing reality. Unless you’re dealing with something very specific, like what the infrared spectrum of neon is, you’re looking for ideas which might approximate reality rather than the thing itself.
Consider the way brains work. I have no idea how they work. Nobody else does either. Obviously the literature contains lots of interesting details which are at least partially true, and observations which I wouldn’t know anything about unless I read them from an expert. But “the experts” are far from authoritative. An idea like messenger RNA brains could very well be true; there are a number of indications that it might be. These indications have been around for longer than I have been around, so the present Hebbian model of how your noodle works is effectively just fashion. At some point people may figure it out, but for now, taking anything an “expert” says about how your noggin works in toto is about as likely to be right as taking the word of a medieval philosopher. If I were a researcher interested in figuring things out, would I go dig in the Hebbian view like everyone else, or would I fool around with something wacky like messenger RNA? You ain’t gonna find gold where others ain’t finding it, and I’m a gambler. It seems to me that pursuing crazy ideas, even if you don’t actually believe them, is more useful than hewing the familiar line.
I look for weird stuff; science is often weird. A fun one that has been making the rounds of meme-land is “Bread and Other Edible Agents of Mental Disease” by Peter Kramer and Paola Bressan. Basically a long rationalization for gluten intolerance, it speculates that a lot of insanity is caused by grain consumption, with some actual evidence. I have no idea who the authors are, but they seem to regularly have interesting ideas like this. Example subjects include sexual imprinting on eye color, the biology of home-wreckers, the idea that mammals are giant super-organisms rather than individual organisms, the internal clock guided by mitochondrial metabolism (and why monks live so long), how the Ebbinghaus illusion is perceived by sperdos and what it means, how infection threat relates to various kinds of sociability, and why mental illness is gendered and how it relates to mitochondria: these are just a few of the weird papers by this pair of researchers. You can tell some of it is thoughtful and oriented towards normal life. These sorts of ideas are fascinating to me. It isn’t fair to call most of them “science,” I would say; many of them are sort of idea generators which could lead to new science. Once you have a bunch of interesting hypotheses like these, you can design experiments to rule out falsely correlated observations. I find all of these ideas by the two authors above to be vastly more interesting and potentially productive than the hokey just-so stories of “sociobiology” and its offshoots such as “evolutionary psychology.” I don’t know what to call what they’re doing, but I like it.
There are good reasons not to adopt this sort of conqueror/colonist thing for technological approaches to solving problems. Engineers wanting to use the latest woo or reinvent the wheel is another sort of curse of our time -unless you’re trying to do something entirely new that nobody has succeeded at yet. “AI” for example: everyone seems convinced, for no good reason, that neural approaches are going to immanentize the eschaton. They even think, also for no good reason, that these approaches are going to defeat Google, an idea which is as laughably insane as Tensorflow driving a car. If you’re interested in making technological progress in “AI,” it seems chasing after the bullshit that everyone else is doing is a fool’s mission. Works fine for grifters though.
There are all manner of relatively unexplored ideas out there that could push the needle on “AI,” or at least machine learning. The basket of tricks associated with topological data analysis looks promising for signal processing and data science: I got a few useful tricks from my experience at Ayasdi, and there are likely many more waiting out there. I’ve also always said that if you ported graphical model primitives a la Daphne Koller and her marvelous book to GPUs (which they are eminently well suited for) and threw 10,000 grad students over a decade at the subject, you’d leapfrog whatever the latest neural atrocity is. This already sort of happened, in that neural approaches were considered ridiculous in the 90s and early 00s for the same reasons they’re still ridiculous today: nobody knows what’s going on inside them, and they require too much compute for what they accomplish compared to other approaches. The only company that actually profits from neural approaches to “AI,” as far as I can tell, is NVIDIA.
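The graphical-model point is easy to make concrete: the core primitive of sum-product inference is batched matrix arithmetic, which is exactly what GPUs are built for. Here is a minimal NumPy sketch on a hypothetical toy problem (a chain-structured model with made-up random potentials; this is my illustration, not anything taken from Koller's book verbatim):

```python
import numpy as np
from itertools import product

# Toy chain-structured Markov random field with made-up potentials.
# The forward message update is one matrix-vector product per edge --
# a primitive that ports trivially to GPUs.
rng = np.random.default_rng(0)
n_nodes, n_states = 5, 3
phi = rng.random((n_nodes, n_states)) + 0.1                # unary potentials
psi = rng.random((n_nodes - 1, n_states, n_states)) + 0.1  # pairwise potentials

# Sum-product forward pass: message from node i into node i+1.
msg = np.ones(n_states)
for i in range(n_nodes - 1):
    msg = (phi[i] * msg) @ psi[i]
    msg /= msg.sum()               # normalize for numerical stability

# Belief (marginal distribution) at the last node.
belief = phi[-1] * msg
belief /= belief.sum()

# Sanity check against brute-force enumeration of all 3^5 configurations.
brute = np.zeros(n_states)
for cfg in product(range(n_states), repeat=n_nodes):
    p = np.prod([phi[i, cfg[i]] for i in range(n_nodes)])
    p *= np.prod([psi[i, cfg[i], cfg[i + 1]] for i in range(n_nodes - 1)])
    brute[cfg[-1]] += p
brute /= brute.sum()
assert np.allclose(belief, brute)
```

Add a leading batch dimension to `phi`, `psi` and `msg` and essentially the same update runs over thousands of chains at once, which is the whole point of putting it on a GPU.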
Similarly, something like tokamaks or laser inertial confinement have been around for half a century now without ever producing what they claim, which is fusion power above break-even -to where you could think about building a power plant. I don’t know why one would go into these fields now, other than to get a sciencey-looking bureaucratic job. I think on the timeline of a human lifetime they’re almost certainly not going to pay off: you have about as much chance of success going into alchemy. There are weird old ideas that occasionally get revived, but nobody has dumped into them even tiny fractions of the resources dumped into tokamaks, laser inertial confinement, or even the old Stellarator approach. Maybe there is a good reason for this, I’m not a plasma physics guy after all, but I bet it’s just people herding, cattle-like, into the safest, most popular directions. Directions which have been a failure for 50 years. I was excited for a while about Tri-Alpha: any serious effort needs to be aneutronic, and it seemed like a substantively different approach. It would be amazing if they succeeded, and it is less bugman than working on a tokamak in current year, but they have been at it for an awfully long time. There aren’t any historical precedents for success on a timeline that long. There must be other approaches; certainly cold fusion weirdos seem more productive than more fiddling around with giant lasers (not that giant lasers aren’t really cool).
I am going to remind everyone (again) that there is no good reason to assume that nuclear fusion can be harnessed and controlled at scales convenient to human beings. The existence of suns is not a good reason, as the power density of the Sun is approximately that of a dung-hill; the only reason this form of fusion energy is useful to us is that the Sun is so damn big. The existence of hydrogen bombs is also not a good reason to assume this, for reasons that should be obvious (non-obviously: most of the energy released from an H-bomb is from neutron-enhanced fission). Let me remind you that “controlled nuclear fusion” assumes multiple things at once: 1) that you can do above break-even nuclear fusion in a controlled way, and 2) that this can happen at energy densities which are not so high as to be potentially uncontrollable or dangerous, and not so low as to be practically without value. I call this the “Goldilocks theory of controlled nuclear fusion.” There is no reason to believe that it is true, or that the universe will cooperate with us in making this possible. It’s a sort of anthropocentric idea to think that the kinds of energy densities we’re interested in will be both self-sustaining and well behaved.
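The dung-hill comparison checks out with textbook numbers. A back-of-envelope sketch (the solar luminosity, radius and mass are standard values; the 100 W / 70 kg human figure is a rough resting-metabolism assumption):

```python
from math import pi

# Standard textbook values for the Sun.
L_sun = 3.85e26   # luminosity, watts
R_sun = 6.96e8    # radius, meters
M_sun = 1.99e30   # mass, kg

V_sun = (4.0 / 3.0) * pi * R_sun**3
watts_per_m3 = L_sun / V_sun   # average power density: roughly 0.3 W/m^3
watts_per_kg = L_sun / M_sun   # specific power: roughly 2e-4 W/kg

# A resting human runs ~100 W on ~70 kg.  Per kilogram, you out-produce
# the Sun by a factor of several thousand; the Sun only matters because
# there is an absurd amount of it.
human_w_per_kg = 100.0 / 70.0
ratio = human_w_per_kg / watts_per_kg
```

Averaged over its volume, the Sun doesn't manage even one watt per cubic meter, which is indeed dung-hill territory.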
I figure if you want to be a colonist or a bureaucrat, that’s fine, and such people are still needed. But you only have one life to live, and there are plenty of colonist bureaucrats. Technology and the sciences need tinkerers and conquerors. You don’t get big payoffs hewing to the road others are on. Everyone else is looking for the keys under the street lamp; you need to look elsewhere.
He that can live alone resembles the brute beast in nothing, the sage in much, and God in everything. -Baltasar Gracian