Locklin on science

On cultures that build

Posted in econo-blasphemy by Scott Locklin on June 19, 2020

I tire of the Andreessen spurred discussion of “cultures that build.” I agree with the sentiment; I do miss the America that could make stuff.

I am annoyed that numskulls refuse to face the actual fact of the matter. The historical entity which built most of the stuff you see around you no longer exists. That civilization is dead. Full stop; the end. In fact, the predominant social energy of the moment, backed by most of the mainstream organs of respectable thought, most government agencies, virtually all corporations and collectives, and right thinking people everywhere is to wipe out any remaining historical reminders of that civilization because of muh feels. For example:

People who dislike the idea of tearing down statues are so thoroughly politically vanquished they can’t prevent the destruction of statues of the historical founder of the country. Pardon me if I laugh at the concept of becoming a “culture that builds” at this present moment in time. US culture and its colonial offspring are now cultures of destruction; both at home and abroad. Virtually all organs of US power are organized to not only prevent building things; they’re organized to destroy things.

see a pattern here?

I would say that the chances of the US becoming “a culture that builds” are about the same as those of the present day municipality of Venice becoming a powerful trade and naval empire in the Adriatic and Bosphorus. The knowledge is gone. The cultural capital is gone; the society that produced those kinds of productive people hasn’t existed in decades. The physical ability to do this is gone; thanks to the globalization our genius economists told us was inevitable, the US lacks the factories, mines and shipyards required to build things. The human material that would actually do the building is gone: dimwit MBAs destroyed the skilled working classes, atomized their communities, continue to demonize and demoralize them, and utterly destroyed the kind of basic low-level education and social cohesion required to have a productive workforce.

Our technocrats (aka you lot and the morons you went to college with) are themselves typically no longer capable of working with matter, preferring more profitable and more fashionable masturbatory financialized nonsense that doesn’t pollute the environment. Instead of building Project Pluto, modern American technocrat and managerial types prefer making dopamine rat mazes such as Facebook, imbecile glass bead games like “quantum information theory” or abstract quasi-religious bullshit such as… woke collitch culture and its sinister city-burning, cancel-culture Jacobin offspring.

In fact, one of the main things the US produces at the moment is the type of people who think “cultures that build” are so horrible that visible reminders of them need to be removed from the public square. We don’t produce many innovators, but we produce plenty of people who think the remaining builders should be persecuted and made to apologize for having the temerity to excel. We’ve created a managerial caste so psychologically fragile they can’t even abide images of success. What are they going to do when they’re asked to do something difficult, like invent the transistor, discover DNA, or even skirt San Francisco zoning laws?

Let me posit this, fellow builders of things. Politically speaking, the kind of changes required for the country to go back to its past of building and inventing cool things will involve, at minimum, dealing with the kinds of loathsome barbarians tearing down statues and burning cities. Those people have to be prevented from interfering with both built structures and the present day builders of things. There are a lot of them, and they have a lot of free time on their hands to get up to mischief.

Not only that; a productive future will involve active persecution of the evil dimwits responsible for making chimping barbarians think it’s OK to burn it all down. There are a lot of their lot too, and they’re generally comfortably ensconced in schools, foundations, non-profits, government bureaucracies, large corporations, entertainment complexes and other such places of institutional power. These are the people who would implement any government or societal policy. You have to either change their minds or get them out of the way somehow.

These bozos would be pretty easy to deal with if we had the political will to do so. I’m not even talking physically, though there is that; most are noodle-armed vegans or two twinkies from a heart attack. Many of these mentally ill assclowns are so hysterical they actually require trigger warnings to get through the day. You could probably take away their antidepressants and they’d all have to check themselves into the booby hatch. This alone would probably double US economic output. Just removing crazy people from positions of responsibility instead of promoting them would be an enormous help. 

Every historical example of a society turning in a productive direction (I dunno, post-Revolution France, or Deng-era China) involved defanging tin-pot Robespierres before anything good happened. Removing statue-toppling city burners and their encouragers and enablers as active dangers to the rest of society is table stakes for making a society of builders. The more serious issue is the MBA types who think it’s just fine to ship middle class jobs to the third world, or import new helot worker classes to destroy the bargaining power of local labor because “muh free markets.” These people are sharks, they’re wreckers, and it is they who have weaponized the “woke culture” of the left to prevent the actual left (as opposed to numskulls who think overturning a statue helps anything) from raising their taxes.

None of them are interested in investing money in productive directions; they’re all about pyramid schemes and looting the remaining human and physical capital. These fuckers are burning the proverbial furniture to warm themselves. They’ll have to go, and they won’t go easy because they have all the loot and no loyalties beyond their bank accounts. That includes almost everyone in Andreessen’s shitty industry (reminder: “VC” means “toilet” in Russian): almost none of them are interested in investing in things involving innovation or matter. They’d rather invest in garbage which skirts hotel and taxi laws or become sneaker loan sharks, making everyone else more miserable in the process by socializing the costs. 

The society we have right now is a result of the people that compose it. Outcomes won’t change until you at least change the minds of the people in charge of running its day to day operations. Are you willing to ship NPR reporters, Goldman Sachs bankers, Ford Foundation grant administrators, pornographers, Booz Allen Hamilton consultants, mid-level tech managers, 99.8% of venture capitalists, and all the 3rd assistant secretaries of education to a potato picking Gulag in North Dakota? Are you willing to at least get them fired so they have to get jobs at Burger King, and put your supposedly waiting-in-the-wings non-kakistocrats in charge of their bureaucracies? To be honest, me neither; that’s probably why we can’t have nice things. We’ve built our cages out of iPhones, Twitter, Prozac and people obsessed with their feels and the doings of their crotches. You won’t get any more Edisons or Wozzes or Bardeens in America as long as hysterical imbeciles and demonic looters are preeminent and people who actually lower the entropy of the universe, past, present and future, are demonized.

It’s over; the US has had a remarkable run as a place where regular people could have a nice life, and exceptional people could make exceptional contributions. “Vanished under night’s helm as if it had never been.” Genap under nihthelm, swa heo no wære. Acting like some minor tweak in policy is going to reverse this is laughably insane. Policy fiddling is a ghost dance; trying to bring back 1945 in America, when we had a competent and productive civil service, nuclear lightning in our hands, our enemies vanquished at our feet, a largely virtuous and almost fanatically united society, sitting on top of the stock of the world’s capital with a host of giant new high technology factories. That reality and that America are long gone. It has run down the curtain and joined the choir invisible; it is bleedin’ demised. That society isn’t pining for the fjords; it’s pushing up the daisies. I’m standing in front of you with a dead parrot society.

I realized it was too late about 7-8 years ago, and organized my life around my exit strategy. The country is too far gone into kakistocracy, and the remaining decent people are too deluded about the root causes and their potential remedies to ever change things. If you’re still in the US, you live in an evil empire of chaos and destruction, and the best of you are probably serving the worst ends of it.

 You can cower under your desks with home-made diapers on your faces hoping some member of a productive society invents a vaccine for the Chinese Lung Butter or whatever phantom (and entirely inflicted by our kakistocrat mandarins) terror of the moment afflicts you. Those N95 factories aren’t coming back, let alone Bell Labs type innovations; even if you wish really really hard. 


Automotive memories

Posted in fun, manhood by Scott Locklin on April 10, 2020

When I was a teenage kid in the 80s, my hometown had youth car culture. If you don’t know what this is, check out the old George Lucas movie American Graffiti for a 1950s version of it. People driving up and down the strip, occasionally racing, getting crappy food, hanging out, getting into fisticuffs in parking lots, playing hide the salami in the back seat of the car parked behind the Zayres department store. Vidya games sucked in those days, and our parents yelled at us if we talked on the phone for too long (twisted pair, yo). I guess there was cable TV, but the novelty kind of wears off. The closest thing to a wholesome pastime in my boring suburban home town was driving around aimlessly, blowing giant holes in the ozone layer, giving everyone brain damage and creating acid rain in Canada with our stinky “still uses tetraethyl lead” old automobiles. I’m sure there are youthful tittering pustules now gasping in horror at the environmental destructiveness of it all: great; have fun furiously thumb-twiddling your outwage on your nerd dingus. I pity the new generation of human soybeans.

When you’re a working class teenage kid in a podunk suburb of a 3rd tier city, unless you have rich parents or are a drug dealer, you’re not driving a new car. You’re driving something 10 to 25 years old. In the 80s, on the East Coast, this also meant you were driving something with gaping rust-holes in it; possibly with “bondo” patches. I remember one of my buddies drove this giant 2-door Buick with a “Fred Flintstone” hole in the floor. He would occasionally drive over puddles when he had someone he didn’t like in the back seat.


The menagerie of cars we drove in those days really was something, and nothing like the things people drive now. One of the cool things about them was they were all “hackable.” You could work on your own car, and in fact, those old cars were meant to be fiddled with. At minimum, you had to fiddle with the carb(s), the voltage regulator and the distributor of an old car. Sky’s the limit for fiddlin’; swapping out an engine or transmission was a project which could be accomplished by one or two people in an afternoon, even using shitty equipment. Less if you had a real garage with lifts to work in. Most of my youthful colleagues liked fiddling with automobiles. Some of them went on to become engineers and scientists as a result.

The one that got away: Starfire with 10.5:1 ultra high compression pistons, and alas a cracked frame

The car of dreams for a young guy was something like a Hemi Cuda, Boss Mustang, Firebird or Chevelle. A two door “compact” car of its day with sporty styling and a 7+ liter displacement “big block” engine in it producing upwards of 400 horsepower. Modern automakers started making these again a few years back, to cater to my generation; with even more preposterous horsepower numbers as routine equipment. Nobody actually owned one of these, but they might have owned one with a smaller engine in it (I had a couple of Barracudas) and done an engine transplant. That’s just redneck aspirational engineering though. The really cool ones in hindsight were the various kinds of “cigar butts” we got our hands on. Cars that were beat to shit, but had some kind of cool motor or other quality to them.

Satan’s Buick

The Buick I mention above was one of those. It was a two door, which considering how bloody long and boat-like it was, was pretty funny. It only had a 350 in it, but it was a Buick three-fiddy, which meant it had some decent guts to it; often beating newer IROC-Zs (preferred middle class jocko automobile; it looked fast, but the smog system of the day made it a real dog) in a race between stoplights. It also had the most preposterous boat-like suspension; when it was raining, and we were driving it hard on the baloney-skin little 14″ tires, it would occasionally smoothly slide sideways over 4″ curbs without anyone in the car noticing.


New cars were hilarious in those days; particularly US compact cars. I remember one dude whose girlfriend was a middle class girl who owned a Chevy Chevette she more or less bought new. What a trash fire that thing was. Lousy handling, 50-odd horsepower, and the fine engineering qualities we associate with Detroit in the 1980s. It was insanely bad, constantly breaking down, and she probably dated my pal because he was a mechanic. US technology of the day couldn’t figure out how to build a car with decent performance, gas mileage and emissions qualities. This is why everyone who had a choice ended up driving Japanese cars. The smog system on cars in those days was an unholy spaghetti of vacuum hoses and valves which rarely (if ever) worked properly.


the car that made the Yugo look good

I had this thing called an AMC Concord at one point; in principle this sort of car in 4WD form was the origin of the “crossover vehicle.” In actuality mine was an ordinary rear wheel drive. Someone’s older brother bought the thing, handed it down to his bro, who sold it to me when he upgraded to something people wouldn’t make fun of him for driving. It was basically an economy car of the late 70s and early 80s; it had a straight-6 engine, and unlike the Chevettes was a fairly comfy ride. There are various stories I could tell about my antics with the thing, involving quarts of vodka, offroad adventures with dead deer and sleazy women, but the operative story was how poor I was when I was driving this contraption. For some reason I didn’t think I could afford antifreeze for the thing in the winter (probably $50 I’d rather spend on gas). I’d just keep the thing from freezing by driving it around all the time, which is more or less what I did anyway. Seemed reasonable, as I worked a lot when I wasn’t plumbing the mysteries of Calculus. It actually worked almost the entire winter, until I slept in on a cold day and the engine block froze. I figured the thing was kaput, so I sold it to the local junkyard for $200 and bought another cigar butt with the proceeds. After the spring thaw I saw it in the junkyard I was picking over for parts for my new cigar butt, a Dodge Dart. Laughing, I stuck my key in it and it fired right up. The block was sturdy enough, I guess; same one they used in Jeeps until fairly recently.

The Dodge Dart and Plymouth Valiant were the ultimate cigar butt cars. They were “compact cars” of their day; a Dart actually weighed under 3,000 lbs with a driver in it. The standard engine was this thing called a “slant-6,” a really antiquated inline-6 design with such a large (4.125 inch) piston stroke that it had to be put in the engine compartment at a thirty degree angle. You could have put it in a Studebaker or a Packard sticking straight up and down, but in 60s and 70s contemporary cars, the hoods weren’t so tall and cavernous.

dat slant-6

The thing was bulletproof. This came from a couple of interesting design decisions. Originally it was designed to use a futuristic aluminum block, so the castings were made thick to compensate for aluminum’s weakness. To save money, Chrysler cast most of the blocks in iron using the same thick-walled design, even though iron is stronger and could have been cast thinner. That was a spectacular waste of material from a planned obsolescence point of view, but a huge win for those who owned one. The engine also used giant crankshaft journals; the bearings which keep the engine together. The small bore combined with the long stroke kept things torquey and fuel efficient. And for some reason they used a forged crankshaft, which is ridiculous overkill on an economy motor that makes 125 horsepower. It also ran really well, with good rolling torque, mostly because of the intake manifold design. In those pre-fuel-injection days, that was usually the limiting thing about your engine: getting the fuel from the carb jets (basically reversed spray can nozzles) to the combustion chambers over the pistons. The design of the slant-6 intake manifold actually came from Chrysler’s experience with cross-ram Max Wedge manifolds; the much cooler looking 7 liter high performance engines that came before the legendary 426 Hemi. This, combined with weird antiquated things like solid lifters (something that hadn’t been standard since the early 60s), made this weird atavism virtually indestructible.
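The bore-and-stroke talk is just geometry. A quick sketch recovers the familiar displacement number; the 4.125″ stroke is from the text, while the 3.40″ bore for the 225 cubic inch version is an assumed figure for illustration:

```python
import math

def displacement_cu_in(bore_in, stroke_in, cylinders):
    """Total swept volume in cubic inches: pi/4 * bore^2 * stroke * cylinders."""
    return math.pi / 4 * bore_in ** 2 * stroke_in * cylinders

# 4.125" stroke from the text; 3.40" bore is an assumed figure for the 225.
print(round(displacement_cu_in(3.40, 4.125, 6)))  # → 225
```

Long stroke with a modest bore is exactly the recipe for low-rpm torque the paragraph describes.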

This motor, plus the decently designed carriage of Chrysler A body cars gave the reputation of “only cockroaches and dodge darts will survive the apocalypse.” I’ve had a couple of them, again, you pay a few hundred bucks and drive them until the tires fall off.



While I probably should have worked on my calculus a few years earlier than I did instead of screwing around with hoopty mechanics, the type of thinking and practical experience you’d get from such things was pretty helpful. Putting together anything mechanical in the atomic physics world was pretty trivial after working on weird borked up cars in backyard garages. More to the point; debugging things on these old cars was a great lesson in fixing anything mechanical or electronic. If you can make a ratty old engine purr by fiddling with the carbs and dwell angle on some distributor points, you can make a complex scientific apparatus work.

Andreessen-Horowitz craps on “AI” startups from a great height

Posted in investments by Scott Locklin on February 21, 2020

Andreessen-Horowitz has always been the most levelheaded of the major current year VC firms. While other firms were levering up on “cleantech” and nonsensical biotech startups that violate physical law, they quietly continued to invest in sane companies (also hot garbage bugman products like Soylent). I assume they actually listen to people on the front lines, rather than what their VC pals are telling them. Maybe they’re just smarter than everyone else; definitely more independent minded. Their recent review of how “AI” differs from software company investments is absolutely brutal. I am pretty sure most people didn’t get the point, so I’ll quote it, emphasizing the important bits.


They use all the buzzwords (my personal bête noire: the term “AI” when they mean “machine learning”), but they’ve finally publicly noticed certain things which are abundantly obvious to anyone who works in the field. For example, gross margins are low for deep learning startups that use “cloud” compute. Mostly because they use cloud compute.


Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies 🏭

In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill.

AI, it turns out, is pretty demanding:

  • Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it’s tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as “data drift”).
  • Model inference (the process of generating predictions in production) is also more computationally complex than operating traditional software. Executing a long series of matrix multiplications just requires more math than, for example, reading from a database.
  • AI applications are more likely than traditional software to operate on rich media like images, audio, or video. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues – an application may need to process a large file to find a small, relevant snippet.
  • We’ve had AI companies tell us that cloud operations can be more complex and costly than traditional approaches, particularly because there aren’t good tools to scale AI models globally. As a result, some AI companies have to routinely transfer trained models across cloud regions – racking up big ingress and egress costs – to improve reliability, latency, and compliance.

Taken together, these forces contribute to the 25% or more of revenue that AI companies often spend on cloud resources. In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model.

This is something which is true of pretty much all machine learning with heavy compute and data problems. The pricing structure of “cloud” bullshit is designed to extract maximum blood from people with heavy data or compute requirements. Cloud companies would prefer to sell the time on a piece of hardware to 5 or 10 customers. If you’re lucky enough to have a startup that runs on a few million rows worth of data and a GBM or Random Forest, it’s probably not true at all, but precious few startups are so lucky. Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don’t buy hardware.
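The rent-versus-buy point is easy to make concrete with a break-even sketch. Every number below is a made-up illustration, not a quote of any real cloud or hardware pricing:

```python
def breakeven_hours(cloud_rate_per_hr, hardware_cost, running_cost_per_hr=0.0):
    """GPU-hours after which buying the card beats renting it.
    Ignores resale value, opportunity cost, and ops labor."""
    return hardware_cost / (cloud_rate_per_hr - running_cost_per_hr)

# Assumed figures: $3.00/hr rented GPU vs a $9,000 owned card
# costing $0.30/hr in power and hosting to run yourself.
hours = breakeven_hours(3.00, 9000, 0.30)
print(round(hours))  # → 3333, i.e. under five months of 24/7 training
```

Any startup retraining continuously blows past a break-even like that almost immediately, which is the point: heavy, steady compute loads are exactly what cloud pricing is designed to gouge.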

In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.

Beyond what they’re saying about the size of deep learning models, which is doubtless true for interesting new results, admitting that the computational power of GPU chips hasn’t exactly been growing apace is something rarely heard (though more often lately). Everyone thinks Moore’s law will save us. NVIDIA actually does have obvious performance improvements that could be made, but the scale of things is such that the only way to grow significantly bigger models is by lining up more GPUs. Doing this in a “cloud” you’re renting from a profit making company is financial suicide.
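The 300,000x-vs-4x comparison implies wildly different doubling rates, which two lines of arithmetic make plain. The roughly six-year span (2012 to around when the a16z piece was written) is an assumption:

```python
import math

def doubling_time_months(total_growth, span_years):
    """Months per doubling implied by a total growth factor over a span."""
    return span_years * 12 / math.log2(total_growth)

# Assumed ~6-year span from 2012.
print(round(doubling_time_months(300_000, 6), 1))  # training compute: → 4.0 months
print(round(doubling_time_months(4, 6), 1))        # GPU transistors:  → 36.0 months
```

A four-month doubling against a three-year doubling is why "just wait for better chips" doesn't work, and why the only move left is ganging up more GPUs.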


Gross Margins, Part 2: Many AI applications rely on “humans in the loop” to function at a high level of accuracy 👷

Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups.

First: training most of today’s state-of-the-art AI models involves the manual cleaning and labeling of large datasets. This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn’t end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process – usually not counting core engineering resources – and suggests ongoing development work exceeds typical bug fixes and feature additions.

Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real time. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers. More and more startups are adopting this approach as the capabilities of modern AI systems are becoming better understood. A number of AI companies that planned to sell pure software products are increasingly bringing a services capability in-house and booking the associated costs.

Everyone in the business knows about this. If you’re working with interesting models, even assuming the presence of infinite accurately labeled training data, the “human in the loop” problem doesn’t ever completely go away. A machine learning model is generally “man amplified.” If you need someone (or, more likely, several someones) making a half million bucks a year to keep your neural net producing reasonable results, you might reconsider your choices. If the thing makes human-level decisions a few hundred times a year, it might be easier and cheaper for humans to make those decisions manually, using a better user interface. Better user interfaces are sorely underappreciated. Have a look at LabVIEW, Delphi or Palantir’s offerings for examples of highly productive user interfaces.
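The "easier and cheaper for humans" claim is itself just arithmetic. A sketch with entirely made-up numbers shows how lopsided it gets at a few hundred decisions a year:

```python
def manual_is_cheaper(decisions_per_year, minutes_per_decision,
                      loaded_hourly_wage, annual_model_upkeep):
    """True when paying humans per decision beats the model's carrying cost."""
    manual_cost = decisions_per_year * minutes_per_decision / 60 * loaded_hourly_wage
    return manual_cost < annual_model_upkeep

# Assumed figures: 300 decisions a year at 20 minutes each, $150/hr loaded,
# vs one half-million-dollar-a-year engineer babysitting the neural net.
print(manual_is_cheaper(300, 20, 150, 500_000))  # → True ($15,000 vs $500,000)
```

With those assumptions the manual process costs about 3% of the model's upkeep; the model only wins at volumes several orders of magnitude higher.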


 Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.


Software which solves a business problem generally scales to new customers. You do some database back end grunt work, plug it in, and you’re done. Sometimes you have to adjust processes to fit the accepted uses of the software, or spend absurd amounts of labor adjusting the software to work with your business processes: SAP is notorious for this. Such cycles are hugely time and labor consuming. Obviously they must be worth it at least some of the time. But while SAP’s woes are well known (to the point of causing bankruptcy in otherwise healthy companies), most people haven’t figured out that ML oriented processes almost never scale like a simpler application would. You will be confronted with the same problem as with SAP: there is a ton of work done up front, all of it custom. I’ll go out on a limb and assert that most of the up front data pipelining and the organizational changes which allow for it are probably more valuable than the actual machine learning piece.


In the AI world, technical differentiation is harder to achieve. New model architectures are being developed mostly in open, academic settings. Reference implementations (pre-trained models) are available from open-source libraries, and model parameters can be optimized automatically. Data is the core of an AI system, but it’s often owned by customers, in the public domain, or over time becomes a commodity.

That’s right; that’s why a lone wolf like me, or a small team, can do as good a job or better than some firm with 100x the head count and $100m in VC backing. I know what the strengths and weaknesses of the latest woo are. Worse than that: I know that, from a business perspective, something dumb like Naive Bayes or a linear model might solve the customer’s problem just as well as the latest gigawatt neural net atrocity. The VC-backed startup might be betting on its “special tool” as its moaty IP. A few percent difference on a ROC curve won’t matter if the data is hand-wavey and not really labeled properly, which describes most data you’ll encounter in the wild. ML is undeniably useful, but it is extremely rare that a startup has “special sauce” that works 10x or 100x better than something you could fork in a git repo. People won’t pay a premium over in-house ad-hoc data science solutions unless it represents truly game changing results. The technology could impress the shit out of everyone else, but if it’s only getting 5% better MAPE (or whatever), it’s irrelevant. A lot of “AI” doesn’t really work better than a histogram via “group by” query. Throwing complexity at it won’t make it better: sometimes there’s no data in your data.
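A "histogram via group by" really is a predictive model: count outcomes per group, predict the majority. A minimal sketch, using toy made-up churn data, shows the whole baseline in a dozen lines:

```python
from collections import Counter, defaultdict

def groupby_baseline(rows):
    """A 'histogram via group by': per group, predict the most common label."""
    hist = defaultdict(Counter)
    for group, label in rows:
        hist[group][label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in hist.items()}

# Toy, made-up data: churn outcomes grouped by customer segment.
rows = [("smb", "churn"), ("smb", "churn"), ("smb", "stay"),
        ("enterprise", "stay"), ("enterprise", "stay"), ("enterprise", "churn")]
print(groupby_baseline(rows))  # → {'smb': 'churn', 'enterprise': 'stay'}
```

If the gigawatt neural net can't beat this on held-out data, the startup's moat is the histogram.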


Some good bullet points for would be “AI” technologists:

Eliminate model complexity as much as possible. We’ve seen a massive difference in COGS between startups that train a unique model per customer versus those that are able to share a single model (or set of models) among all customers….

Nice to be able to do, but super rare. If you’ve found a problem like this, you better hope you have a special, moaty solution, or a unique data set which makes it possible.

Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.

This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. Really, they’re talking about cheap overseas labor when lizard man fixers like Yang regurgitate the “AI coming for your jobs” meme; AI actually stands for “Alien (or) Immigrant” in this context. Yes, they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing and the army of Ph.D.s required to make it work don’t really mix well with those limited domains, which have a limited market.

Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars.

In other words: you probably can’t build a brain in a can that can solve all kinds of problems; you’re probably going to be a consulting and services company. In case you aren’t familiar with valuations math: services companies are worth something like 2x yearly revenue, where software and “technology” companies are worth 10-20x revenue. That’s why the WeWork weasel kept trying to position his pyramid scheme as a software company. The implications here are huge: “AI” raises done by A16z and people who think like them are going to be at much lower valuations. If it weren’t clear enough by now, they said it again:

To summarize: most AI systems today aren’t quite software, in the traditional sense. And AI businesses, as a result, don’t look exactly like software businesses. They involve ongoing human support and material variable costs. They often don’t scale quite as easily as we’d like. And strong defensibility – critical to the “build once / sell many times” software model – doesn’t seem to come for free.

These traits make AI feel, to an extent, like a services business. Put another way: you can replace the services firm, but you can’t (completely) replace the services.

I’ll say it again since they did: services companies are not valued like software businesses are. VCs love software businesses: work hard up front to solve a problem, print money forever. That’s why they get the 10-20x revenue valuations. Services companies? Why would you invest in a services company? Their growth is inherently constrained by labor costs and weird addressable market issues.
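To make the valuation gap concrete, here is a trivial back-of-envelope sketch. The revenue figure and the 15x software multiple are my own illustrative assumptions (15x sits inside the 10-20x range cited above); the point is only how far apart the same revenue lands under the two multiples.

```python
# Back-of-envelope valuation math. Numbers are illustrative assumptions,
# not anyone's actual figures: $10M/year revenue, 2x services multiple,
# 15x software multiple.
def implied_valuation(annual_revenue, revenue_multiple):
    """Value a company as a simple multiple of yearly revenue."""
    return annual_revenue * revenue_multiple

revenue = 10_000_000  # hypothetical $10M/year in revenue

services_value = implied_valuation(revenue, 2)   # services firms: ~2x
software_value = implied_valuation(revenue, 15)  # software firms: 10-20x

print(f"Services valuation: ${services_value:,}")
print(f"Software valuation: ${software_value:,}")
print(f"Repricing haircut: {software_value / services_value:.1f}x")
```

Same company, same revenue: getting reclassified from “software” to “services” knocks roughly an order of magnitude off the price tag, which is why the WeWork-style positioning games happen.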

This isn’t exactly an announcement of a new “AI winter,” but it’s autumn and the winter is coming for startups who claim to be offering world beating “AI” solutions. The promise of “AI” has always been to replace human labor and increase human power over nature. People who actually think ML is “AI” think the machine will just teach itself somehow; no humans needed. Yet that’s not the financial or physical reality. The reality is, there are interesting models which can be applied to business problems by armies of well trained DBAs, data engineers, statisticians and technicians. These sorts of things are often best grown inside a large existing company to increase productivity. If the company is sclerotic, it can hire outside consultants, just as they’ve always done. A16z’s portfolio reflects this. Putting aside their autonomous vehicle bets (which look like they don’t have a large “AI” component to them), and some health tech bets that have at least linear regression tier data science, I can identify only two overtly data science related startups they’ve funded. They’re vastly more long crypto currency and blockchain than “AI.” Despite having said otherwise, their money says “AI” companies don’t look so hot.

My TLDR summary:

  1. Deep learning costs a lot in compute, for marginal payoffs
  2. Machine learning startups generally have no moat or meaningful special sauce
  3. Machine learning startups are mostly services businesses, not software businesses
  4. Machine learning will be most productive inside large organizations that have data and process inefficiencies



Shitty future: Bugman design versus eternal design

Posted in Design by Scott Locklin on February 4, 2020

I was yacking with nerds recently on the reason why some people enjoy owning  mechanical wristwatches. In the finance business or any enterprise sales org, wearing a mechanical wristwatch is well understood, like wearing a nice pair of leather shoes or a silk necktie. Tastes may differ, but people in that milieu understand the appeal. In tech, other than a small subculture  of people who wear the Speedmaster moon watch (because we all wanted to be astronauts), and an even smaller subculture who wear something like the Rolex Milgauss (some of us work around big atom-smashing magnets), the mechanical wristwatch is mostly a source of confusion.

You can dismiss it as an expensive status symbol (many things are; nice cars, nice bags, nice nerd dildo, nice anything), but the continued existence of the mechanical wristwatch is more than that. The wristwatch became popular after WW-1, and was a necessary piece of equipment in the time of the last great explorers, from the Everest and Polar expeditions to Jacques Cousteau‘s undersea adventures to the Moon landing. The association with this now historic, but still golden era continues to sell wristwatches.

The geared mechanical clockwork itself is ancient: we have no idea where/when it was invented, but we know the ancient Greeks had such mechanisms. While there is no evidence for or against it, it is possible that gear trains predate recorded civilization. The geared mechanical clock, like the pipe organ and the Gothic cathedral, is a defining symbol of Western Civilization. Division of the day into mechanically measured hours unrelated to the movements of the sun is a symbol of the defeat of the tyranny of nature by human ingenuity and machine culture.

As a piece of technology, wristwatches probably peaked around 1970 when quartz watches became a thing. Quartz watches are undoubtedly more accurate, and at this point you could probably stick a microdot which syncs to GPS atomic clocks anywhere. But the psychological framework, and the association with the last human earthbound age of adventure and exploration, remains. Watchmakers continue to innovate; my daily beater by Damasko contains a bunch of technology you usually only see in an experimental physics Ph.D. thesis (saw them all in mine anyway): ceramic bearings, martensitic steel, preferentially etched silicon springs, viton o-rings. None of this is necessary to build a good watch; it is just a tribute to the art of mechanical things and the creativity and artistry of the craftsman.

There is still much to be said for the mechanical wristwatch as a useful object. Whether it is self winding or manual, it doesn’t require batteries or plugging into USB ports, and it might keep track of any number of useful things. It’s also routine to make new ones waterproof. While quartz has more accuracy, for most purposes (including orbital mechanics navigation), mechanical watches are accurate enough it doesn’t matter. If it does matter, you can buy a hybrid quartz/mechanical self winding springdrive.  There is also the aspect of durability: if you take good care of them and avoid mishaps, most well made watches will continue to be serviceable without a major overhaul for … centuries. People hand them down to their grandchildren.

I expect there to be mechanical wristwatches made for as long as some remnant of Western Civilization continues to exist, if only to sell luxury products to the Chinese.  It’s a fundamental art form; a physical embodiment of the spirit of Western Faustian civilization.

I do not expect goofy innovations like the present form of “smart watches” to be around for as long. Smart watches are bugman technology.  They tell time … and do all kinds of other crap you don’t need such as informing you when you have email/slack updates, saving you the towering inconvenience of reading them a half second later on your phone or laptop. When you dump $600 on one of these goofy things, you can’t even expect it to be around in 20 years to give to the kids you (as a bugman) will never have, let alone 100 or 200 years as a $600 watch might. It isn’t because new “smart watches” have amazing new features which obsolete the old ones: it’s because the connectors and case will physically wear out and the operating system for your phone won’t support old models.

The difference between mechanical watches and smart watches is a useful test case from which to generalize this sort of value judgement. Consumerist capitalism has committed many great sins. I could put up with most of them if they could get engineering aesthetics right. The world we live in is ugly. Bugman engineering is one of the forms of ugliness which makes life more unpleasant than it needs to be.

Bugman devices are festooned with unnecessary LED lights. Whether it is a smoke alarm, a computer monitor switch, keyboard, power strip, DVD player, radio: you virtually never need an LED light to tell you that some object is hooked up to power. Especially objects which stay on all the time, like a smoke alarm or monitor. If you must have an indicator of activity, place a mechanical button on the object that makes a noise when you press it while the power is on. Nobody is going to notice when one light among the sea of stupid little lights in a room has gone out. The time when it was “futuristic” to have little LEDs all over your refrigerator or toaster is long past. Just stop it.

Bugman designed appliances have digital clocks you must set. There is no reason for your oven, blender, microwave, refrigerator, dish washer or water dispenser to know what time it is. Power does go out on occasion (all the time in “futuristic” shit holes like Berkeley), and nobody wants to tell their stove what time it is. If you must have a clock, make it a mechanical clock with hands you can easily move rather than navigating a 3 layer menu of membrane switches to set digits.

Bugman devices don’t use mechanical switches; they’re not “futuristic” enough. Capacitive switches are terrible and never work right. Touch screens on your car’s entertainment system are a horror. Membrane switches on your appliance or anything else are a planned obsolescence insult unless you are operating in a flammable or underwater atmosphere, which is the only legitimate reason to use membrane switches.

Bugman devices are besmirched with extraneous software and are networked when they don’t have to be. Being able to control your light bulb over wifi or bluetooth is almost never necessary. It is wasteful, a security nightmare and aesthetically disgusting. And no, I don’t want my stove to be on the internet so its clock knows what time it is.

Bugman devices and services use invasive phone applications for payment instead of credit cards. If your device is hooked up to the internet enough to talk to a cell phone, it’s hooked up to the internet enough to use a credit card, crypto currency or paypal. Bugmen don’t mind the security and privacy nightmare of loading new executables on their nerd dildo phones.

Bugman devices complexify life and make people work rather than making their lives better. Every password, clock, networked device, app you have to manage, every battery you have to charge, change or replace is making your life worse. Bugmen don’t care though; it helps fill the emptiness.

Bugman software substitutes software for actual experiences. Not all video games or online entertainment are bugman, but most VR applications or immersive social games (looking at you, Guitar Hero) are. Bugman sexuality; well, I bet they’re excited about sex robots.

Juicero: an internet-equipped, phone-interfacing, centrally planned/distributed subscription juice machine that costs $700, instead of a manual juicer that costs $10 and lasts multiple lifetimes.

Peloton: an internet-equipped exercise bicycle that costs $2000 plus subscription, as opposed to a $500 bike and some competitive friends.

Soylent is bugman food. It even looks like something the actual bug-man in the classic “The Fly” movie would eat. Hell, the bugmen in the media are trying to get us to eat actual bugs.

Many images and ideas from the excellent (arguably NSFW) “Shitty Future” twitter feed.