Locklin on science

Andreessen-Horowitz craps on “AI” startups from a great height

Posted in investments by Scott Locklin on February 21, 2020

Andreessen-Horowitz has always been the most levelheaded of the major current year VC firms. While other firms were levering up on “cleantech” and nonsensical biotech startups that violate physical law, they quietly continued to invest in sane companies (also hot garbage bugman products like Soylent). I assume they actually listen to people on the front lines, rather than to what their VC pals are telling them. Maybe they’re just smarter than everyone else; they’re definitely more independent-minded. Their recent review of how “AI” differs from software company investments is absolutely brutal. I am pretty sure most people didn’t get the point, so I’ll quote it, emphasizing the important bits.

https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-its-different-from-traditional-software/

They use all the buzzwords (my personal bête noire: the term “AI” when they mean “machine learning”), but they’ve finally publicly noticed certain things which are abundantly obvious to anyone who works in the field. For example, gross margins are low for deep learning startups that use “cloud” compute. Mostly because they use cloud compute.

 

Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies 🏭

In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill.

AI, it turns out, is pretty demanding:

  • Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it’s tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as “data drift”).
  • Model inference (the process of generating predictions in production) is also more computationally complex than operating traditional software. Executing a long series of matrix multiplications just requires more math than, for example, reading from a database.
  • AI applications are more likely than traditional software to operate on rich media like images, audio, or video. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues – an application may need to process a large file to find a small, relevant snippet.
  • We’ve had AI companies tell us that cloud operations can be more complex and costly than traditional approaches, particularly because there aren’t good tools to scale AI models globally. As a result, some AI companies have to routinely transfer trained models across cloud regions – racking up big ingress and egress costs – to improve reliability, latency, and compliance.

Taken together, these forces contribute to the 25% or more of revenue that AI companies often spend on cloud resources. In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model.

This is something which is true of pretty much all machine learning with heavy compute and data problems. The pricing structure of “cloud” bullshit is designed to extract maximum blood from people with heavy data or compute requirements. Cloud companies would prefer to sell the time on a piece of hardware to 5 or 10 customers. If you’re lucky enough to have a startup that runs on a few million rows worth of data and a GBM or Random Forest, it’s probably not true at all, but precious few startups are so lucky. Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don’t buy hardware.
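
To make the point concrete, here is a back-of-envelope comparison of renting a single cloud GPU around the clock versus buying a comparable box. The numbers are illustrative assumptions, not quotes from any provider:

    # Rough cost sketch: one rented cloud GPU running 24/7 vs. one purchased workstation.
    # All figures are illustrative assumptions (circa-2020 ballpark), not actual price quotes.
    CLOUD_RATE = 3.00                              # $/hour for a single-GPU on-demand instance (assumed)
    HOURS_PER_YEAR = 24 * 365
    cloud_per_year = CLOUD_RATE * HOURS_PER_YEAR   # ~$26,000 per GPU-year, before storage and egress

    BOX_PRICE = 15_000.00                          # one-time price of a comparable GPU workstation (assumed)
    POWER_PER_YEAR = 0.5 * HOURS_PER_YEAR * 0.12   # ~500 W drawn continuously at $0.12/kWh

    print(f"cloud, first year:  ${cloud_per_year:,.0f}")
    print(f"bought, first year: ${BOX_PRICE + POWER_PER_YEAR:,.0f}")
    # The rental passes the purchase price well inside the first year; multiply by a rack of
    # GPUs and the 25%-of-revenue figure above stops looking surprising.

A real comparison has to include hosting, admins, and depreciation (the comment thread below chews on exactly that), but the first-order arithmetic is not close.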

In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.

Beyond what they’re saying about the size of deep learning models (doubtless true for interesting new results), the admission that the computational power of GPU chips hasn’t exactly been growing apace is something you rarely hear (though more often lately). Everyone thinks Moore’s law will save us. NVIDIA does have obvious performance improvements it could make, but the scale of things is such that the only way to grow significantly bigger models is by lining up more GPUs. Doing this in a “cloud” you’re renting from a profit-making company is financial suicide.
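
Run the quoted numbers and the conclusion falls out; a crude sketch, assuming per-GPU capability tracks the quoted transistor count:

    # Crude arithmetic on the figures quoted above (assumptions, not measurements).
    compute_growth = 300_000      # growth in training compute for state-of-the-art models since 2012
    per_chip_growth = 4           # rough growth in per-GPU capability over the same period
    print(compute_growth / per_chip_growth)   # ~75,000x more chip-hours needed
    # Whether you buy that as chips or as hours, somebody pays for ~75,000x more silicon time;
    # renting it at retail only makes the gap worse.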

 

Gross Margins, Part 2: Many AI applications rely on “humans in the loop” to function at a high level of accuracy 👷

Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups.

First: training most of today’s state-of-the-art AI models involves the manual cleaning and labeling of large datasets. This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn’t end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process – usually not counting core engineering resources – and suggests ongoing development work exceeds typical bug fixes and feature additions.

Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real time. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers. More and more startups are adopting this approach as the capabilities of modern AI systems are becoming better understood. A number of AI companies that planned to sell pure software products are increasingly bringing a services capability in-house and booking the associated costs.

Everyone in the business knows about this. If you’re working with interesting models, even assuming the presence of infinite accurately labeled training data, the “human in the loop” problem doesn’t ever completely go away. A machine learning model is generally “man amplified.” If you need someone (or, more likely, several someones) making a half million bucks a year to keep your neural net producing reasonable results, you might reconsider your choices. If the thing makes human-level decisions a few hundred times a year, it might be easier and cheaper for humans to make those decisions manually, using a better user interface. Better user interfaces are sorely underappreciated. Have a look at LabVIEW, Delphi, or Palantir’s offerings for examples of highly productive user interfaces.

 

 Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.

 

Software which solves a business problem generally scales to new customers. You do some database back-end grunt work, plug it in, and you’re done. Sometimes you have to adjust processes to fit the accepted uses of the software, or spend absurd amounts of labor adjusting the software to work with your business processes: SAP is notorious for this. Such cycles are hugely time- and labor-consuming. Obviously they must be worth it at least some of the time. But while SAP is notorious (to the point of causing bankruptcy in otherwise healthy companies), most people haven’t figured out that ML-oriented processes almost never scale like a simpler application would. You will be confronted with the same problem as with SAP: there is a ton of work done up front, all of it custom. I’ll go out on a limb and assert that most of the up-front data pipelining and the organizational changes which allow for it are probably more valuable than the actual machine learning piece.

 

In the AI world, technical differentiation is harder to achieve. New model architectures are being developed mostly in open, academic settings. Reference implementations (pre-trained models) are available from open-source libraries, and model parameters can be optimized automatically. Data is the core of an AI system, but it’s often owned by customers, in the public domain, or over time becomes a commodity.

That’s right; that’s why a lone wolf like me, or a small team, can do as good or better a job than some firm with 100x the head count and 100m in VC backing. I know what the strengths and weaknesses of the latest woo are. Worse than that: I know that, from a business perspective, something dumb like Naive Bayes or a linear model might solve the customer’s problem just as well as the latest gigawatt neural net atrocity. The VC-backed startup might be betting on their “special tool” as its moaty IP. A few percent difference on a ROC curve won’t matter if the data is hand-wavy and not really labeled properly, which describes most data you’ll encounter in the wild. ML is undeniably useful, but it is extremely rare that a startup has “special sauce” that works 10x or 100x better than something you could fork in a git repo. People won’t pay a premium over in-house ad-hoc data science solutions unless it represents truly game-changing results. The technology could impress the shit out of everyone else, but if it’s only getting 5% better MAPE (or whatever), it’s irrelevant. A lot of “AI” doesn’t really work better than a histogram via “group by” query. Throwing complexity at it won’t make it better: sometimes there’s no data in your data.
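
Here is roughly what that baseline comparison looks like in practice; the data set, column names, and features below are hypothetical, the pattern is the point:

    # Sketch: the "histogram via group-by" baseline vs. a fancier model on the same data.
    # File name, columns, and features are made up for illustration.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_percentage_error

    df = pd.read_csv("sales.csv")
    train, test = df.iloc[:-1000], df.iloc[-1000:]

    # Baseline: a lookup table of per-segment averages, i.e. a "group by" query.
    lookup = train.groupby("segment")["revenue"].mean()
    baseline = test["segment"].map(lookup).fillna(train["revenue"].mean())

    # The "AI": gradient boosting on whatever numeric features are lying around.
    feats = ["price", "ad_spend", "month"]
    model = GradientBoostingRegressor().fit(train[feats], train["revenue"])
    fancy = model.predict(test[feats])

    print("group-by baseline MAPE:", mean_absolute_percentage_error(test["revenue"], baseline))
    print("boosted model MAPE:    ", mean_absolute_percentage_error(test["revenue"], fancy))
    # If the second number isn't dramatically better, there is no moat in the model.

If the fancy model only shaves a few points off the baseline, the customer can get most of the value from a SQL query and a spreadsheet.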

 

Some good bullet points for would-be “AI” technologists:

Eliminate model complexity as much as possible. We’ve seen a massive difference in COGS between startups that train a unique model per customer versus those that are able to share a single model (or set of models) among all customers….

Nice to be able to do, but super rare. If you’ve found a problem like this, you better hope you have a special, moaty solution, or a unique data set which makes it possible.

Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.

This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. Really, they’re talking about cheap overseas labor when lizard man fixers like Yang regurgitate the “AI coming for your jobs” meme; AI actually stands for “Alien (or) Immigrant” in this context. Yes they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn’t really mix well with those limited domains, which have a limited market.

Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars.

In other words: you probably can’t build a brain in a can that can solve all kinds of problems; you’re probably going to be a consulting and services company. In case you aren’t familiar with valuations math: services companies are worth something like 2x yearly revenue, whereas software and “technology” companies are worth 10-20x revenue. That’s why the WeWork weasel kept trying to position his pyramid scheme as a software company. The implications here are huge: “AI” raises done by A16z and people who think like them are going to be at much lower valuations. If it weren’t clear enough by now, they said it again:

To summarize: most AI systems today aren’t quite software, in the traditional sense. And AI businesses, as a result, don’t look exactly like software businesses. They involve ongoing human support and material variable costs. They often don’t scale quite as easily as we’d like. And strong defensibility – critical to the “build once / sell many times” software model – doesn’t seem to come for free.

These traits make AI feel, to an extent, like a services business. Put another way: you can replace the services firm, but you can’t (completely) replace the services.

I’ll say it again since they did: services companies are not valued like software businesses are. VCs love software businesses: work hard up front to solve a problem, then print money forever. That’s why they get the 10-20x revenue valuations. Services companies? Why would you invest in a services company? Their growth is inherently constrained by labor costs and weird addressable-market issues.

This isn’t exactly an announcement of a new “AI winter,” but it’s autumn, and the winter is coming for startups who claim to be offering world-beating “AI” solutions. The promise of “AI” has always been to replace human labor and increase human power over nature. People who actually think ML is “AI” think the machine will just teach itself somehow; no humans needed. Yet that’s not the financial or physical reality. The reality is, there are interesting models which can be applied to business problems by armies of well-trained DBAs, data engineers, statisticians and technicians. These sorts of things are often best grown inside a large existing company to increase productivity. If the company is sclerotic, it can hire outside consultants, just as it’s always done. A16z’s portfolio reflects this. Putting aside their autonomous vehicle bets (which look like they don’t have a large “AI” component to them) and some health tech bets that have at least linear-regression-tier data science, I can identify only two overtly data-science-related startups they’ve funded. They’re vastly more long cryptocurrency and blockchain than “AI.” Despite saying otherwise, their money says “AI” companies don’t look so hot.

My TLDR summary:

  1. Deep learning costs a lot in compute, for marginal payoffs
  2. Machine learning startups generally have no moat or meaningful special sauce
  3. Machine learning startups are mostly services businesses, not software businesses
  4. Machine learning will be most productive inside large organizations that have data and process inefficiencies

 

 

46 Responses


  1. asciilifeform said, on February 21, 2020 at 8:17 pm

    > Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don’t buy hardware.

    This is doubly astonishing in light of the fact that “cloudism” produces an exhaust product of dirt-cheap yet entirely-usable (Moore’s “law” has been dead for nearly a decade now) surplus irons.

    But — purchasing surplus gear, dusting the fans, pricing bulk DC rack space, physically hoisting the gear — I suspect is seen as “too much like work.” The fast-money idiots aren’t out to do work, they’re out to collect buckets full of freshly-printed VC dough without smudging their pressed shirts.

    > That’s right; that’s why a lone wolf like me, or a small team can do as good or better a job than some firm with 100x the head count and 100m in VC backing

    Even 10,000 salaried chair-warmers rarely add up to equal just *one* pair of hands attached to a head that actually gives a damn. While “100m in VC backing” is typically spent as described e.g. here.

    • sightline said, on February 22, 2020 at 2:31 am

      > But — purchasing surplus gear, dusting the fans, pricing bulk DC rack space, physically hoisting the gear — I suspect is seen as “too much like work”.

      This, so much this. If your startup says “we need a quarter of our B round to build out a datacenter to host our solution” you will have an immensely harder job raising money. Especially when the Board sees how many people you need to throw at that particular problem vs paying Amazon.

      Source: I raised money for a startup that used a quarter of its B round to build out space in a colo.

      • asciilifeform said, on February 22, 2020 at 4:55 am

        > If your startup says “we need a quarter of our B round to build out a datacenter to host our solution” you will have an immensely harder job raising money.

        Seems logical that the VCs prefer the 100m to get spent “at the company store” — i.e. at cloudism providers in which they own a stake, rather than at a colo house. Thereby they not only immediately get a good portion of the dough right back, but help to inflate valuations, as was famously described by PG:

        “By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were excited about the Internet. One reason they were excited was Yahoo’s revenue growth. So they invested in new Internet startups. The startups then used the money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth for Yahoo, and further convinced investors the Internet was worth investing in. When I realized this one day, sitting in my cubicle, I jumped up like Archimedes in his bathtub, except instead of “Eureka!” I was shouting “Sell!””
        ( “What happened to Yahoo” )

        > Especially when the Board sees how many people you need to throw at that particular problem vs paying Amazon.

        Maybe I am an ignoramus, but why would this require a great many people? (Were there 10,000s of machines in this company?)

        • Scott Locklin said, on February 22, 2020 at 3:55 pm

          The other thing that gets me: 1 dude in a data center with 10 or even 100 machines, even if you put him up in the Ritz-Carlton, seems cheaper than orchestration with 4-5 EC2 devops guys, even if they have free EC2 instances. I’m sure it boils down to some idiotic accounting nonsense (aka stuff like the devops guys count towards baseline valuation of the company, and the hardware doesn’t).

          • GS said, on February 25, 2020 at 7:48 pm

            1 dude in a datacenter cannot support the same technologies as 5 devops guys in front of AWS. You have apples mixed in with your oranges.

          • sightline said, on February 25, 2020 at 8:31 pm

            > The other thing that gets me: 1 dude in a data center with 10 or even 100 machines, even if you put him up in the Ritz-Carlton, seems cheaper than orchestration with 4-5 EC2 devops guys, even if they have free EC2 instances.

            No disagreement here – at small numbers, the overhead of cloud providers is *much* higher than just running a rack or two, BUUUUUT:
            1. At 1000+ machines you’re starting to talk about real overhead costs: paying people to think hard about network architecture, matching compute and storage, all that devops stuff, etc., etc.;
            2. Scaling is more “chunky”, i.e. you can’t just spin up servers for a day or week to try something (or do a major PoC or whatever), you have to scale people more proportionately with server count, and yes, employee count absolutely is a metric people look at in valuation; and
            3. Costs ratchet up but not down: if you buy 1000 servers for 2 million bucks you’re more or less stuck with them, whereas with a pivot on EC2 you can get rid of your costs super easily, assuming you haven’t gone hog wild with reserved instances.

            > Maybe I am an ignoramus, but why would this require a great many people? (Were there 10,000s of machines in this company?)

            We were doing real-time data collection and analysis on 100k+ customer servers, so yeah, we were facing down the barrel of at least 1000s of machines at scale.

  2. pindash91 said, on February 21, 2020 at 10:25 pm

    Curious what you think of things like transfer learning, popularized by the FastAI people, as a way to actually make progress inside a corporation without massive compute and massive data.

    • Scott Locklin said, on February 22, 2020 at 2:00 am

      “how’s that working out for you?”

      • pindash91 said, on February 27, 2020 at 4:10 am

        The issue is that my area of professional work is around stat arb market making – which you covered in a post quite well, I might add – and mostly we deal with tabular data. I have yet to see large models built on the class of problems I work on, probably because you would just be a small hedge fund if you had such a model. Fast AI’s tabular API leaves much to be desired for time series. On the other hand, playing with the notebooks on Colab, I could get some pretty cool stuff working on image classification problems.

    • Douglas K said, on February 27, 2020 at 12:01 am

      had to go look up that specialized meaning: “Transfer learning (TL) is a research problem in machine learning (ML).”
      Research problem means it’s not anywhere near implementation. Like AI and autonomous cars, it would doubtless change everything, but we’re not getting any of those in the foreseeable future.
      Also, the whole problem of AI is finding a way to transfer learning, just like the dumb humans do. Solve TL and AI gets a lot closer. That suggests to me it’s a very hard problem.

      • Scott Locklin said, on February 27, 2020 at 1:23 am

        There’s a little stuff done here; you can rent/buy models that do stuff, and train them to do slightly different stuff. Sort of like there has been some progress with image recognition using deep learning. Neither means the AI apocalypse is near.

        Man Colorado looks comfy. Makes me wish I moved there back in 2011 when I had the oppos.

      • pindash91 said, on February 27, 2020 at 4:05 am

        Not quite true. You are right about the general case, but transfer learning is easily done for simple cases: you can grab a Colab notebook from fastai and get world-class results on your sub-problem of image classification by starting with the MNIST dataset and retraining the last few layers to classify species of cats or types of cars.

        • lozhida said, on February 28, 2020 at 7:55 pm

          Do you know of many companies that need a solution to detect types of cats or cars?

          • pindash91 said, on February 28, 2020 at 8:18 pm

            I think the example is quite indicative. If you are trying to do a project of automatically labeling x-rays, it works pretty well, and then you can have a human follow up. Jeremy Howard gives an example of predicting sentiment on movie reviews from IMDB using an NLP net trained on Wikipedia. Again, this isn’t magic, but a network that can predict the next word of Wikipedia implicitly embeds a model of the world, which helps when training on the much smaller set of IMDB reviews. I don’t think my comment was that radical. I was simply saying that there are techniques for being productive without a lot of hardware.
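
            For the curious, the image version of this pattern is only a few lines once you start from an ImageNet-pretrained backbone. A minimal sketch (the image folder and the five-class head are hypothetical stand-ins):

                # Minimal fine-tuning sketch with torchvision: freeze a pretrained backbone,
                # train only a small new head on the task-specific labels.
                import torch
                import torch.nn as nn
                from torchvision import datasets, models, transforms

                model = models.resnet18(pretrained=True)          # ImageNet-pretrained backbone
                for p in model.parameters():
                    p.requires_grad = False                       # freeze the backbone...
                model.fc = nn.Linear(model.fc.in_features, 5)     # ...and learn only a new 5-class head

                tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
                data = datasets.ImageFolder("labeled_images/", transform=tfm)
                loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

                opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
                loss_fn = nn.CrossEntropyLoss()
                for x, y in loader:                               # one pass is enough for a sketch
                    opt.zero_grad()
                    loss_fn(model(x), y).backward()
                    opt.step()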

      • pindash91 said, on February 27, 2020 at 5:17 am

        Here is an example: https://youtu.be/qqt3aMPB81c

  3. Rickey said, on February 22, 2020 at 5:35 am

    Just like AI is not the solution to every problem, I work with persons who think that using software is the solution to every problem, even when a manual method is more efficient and practical. For example, we use ArcGIS to plot maritime boundaries from databases and also create our own areas. They will spend all afternoon using Arc Toolbox clipping, joining, etc. tools to create areas to avoid territorial seas or other prohibited zones when you could spend 15 minutes doing it manually. Also, their final product needs to be manually refined, since our customers want to see the raw data (e.g. a text file of the lat-long coordinates in a specific format) rather than just a proprietary Shapefile, and don’t want to deal with 600 coordinates to avoid the territorial seas around an island when 16 will suffice. Don’t even get me started on our hydrographers who plot areas using seconds to five decimal places because that is the default setting for the software. I calculated that to be approximately the thickness of a fingernail. Even differential GPS will “only” give you an accuracy of a few cm.

    1 minute = 60 sec = 1 NM = 6000 ft
    1 sec = 100 ft
    .1 sec = 10 ft
    .01 sec = 1 ft
    .001 sec = .1 ft = 1.2 in
    .0001 sec = .12 in
    .00001 sec = .012 in
    .012 in x 25.4 mm/in = .3 mm

    • Scott Locklin said, on February 22, 2020 at 3:50 pm

      Deep blasphemy here: I’ve always maintained that stuff like CAD very often gets in the way of engineering success. People screwing around with solid designer instead of bits of graph paper and a 2 minute chat with a machinist can turn a task of a few minutes into a multi-day or week adventure of goofing off in front of a computer.

      • John Doe said, on March 12, 2020 at 10:04 am

        I happen to enjoy week-long adventures of goofing off in front of a computer on company time and will therefore continue to use AutoCAD.

  4. MadRocketSci said, on February 24, 2020 at 2:52 pm

    “For reasons that make no sense to me, most of them don’t buy hardware.”

    WHY don’t they buy their own hardware? I’m rather aghast that these people would willingly decide to pursue a new way to be a serf, and let a rentier hold all their data and product hostage. It makes NO SENSE. I thought all these IT geeks prided themselves on being able to do their own sysadmin stuff?

    If you can only afford half the compute-nodes, will your model train a little slower? Maybe, but you can train it whenever you want, tweak it, move your own data around, etc without paying rent. Is hardware not cheap?

    • Scott Locklin said, on February 24, 2020 at 3:56 pm

      I think it started with accountants. You don’t have to depreciate EC2 instances on tax forms. Now it’s just “common knowledge” – aka something totally false and imbecilic but accepted as the obvious truth. Like much modern “knowledge.”

      • Y.T. said, on February 24, 2020 at 11:48 pm

        Mostly agree with the article, but this take is off. You have to expense EC2 instances, which is even worse from an accounting perspective. The actual amount of depreciation on this hardware is very high — that is, the dollar cost of driving the new car off the lot is extremely high. There isn’t a market for heavily used GPUs. A colleague in data center construction told me that in Q3 of last year there was $100B/quarter going into new data centers. Someone is depreciating that. Renting is way better than buying. Except that the actual hardware is expensive. If you MUST use expensive hardware, then you’re giving your value to the hardware makers. It’s as simple as that.

        Scott, do reach out. We do ML in a multi-threaded array language you love, happy to chat.

        • Sigmund Waite said, on February 25, 2020 at 6:22 am

          Uh, Scott, when I have Firefox magnify your Web pages enough to be able to read the words, the lines of text are MUCH wider than my screen, so that I have to use the horizontal scroll bars side to side for each line.

          Please fix this with the standard HTML control that, with magnification, keeps the number of pixels of the screen width constant and re-flows the text to fit in the screen width.

          This is a very old story: Web pages formatted with HTML can be easy to read with all the length they want, but the width is in short supply. Length is plentiful; width is in short supply.

          Also use larger and darker default fonts.

          On to your post:

          From the AI/ML — hold on while I upchuck from the claims of intelligence and learning — you scooped out and commented on only about 100 ml of the solar-system-sized flaming, bubbling, reeking, sticky, toxic sewage that is AI/ML.

          Since you are/were a physicist: the AI/ML crowd has taken the work of Newton, Maxwell, Einstein, and Schroedinger back to some corrupted version of Ptolemy and added flim-flam scam hype, and just now I feel like being kind.

          Our culture is awash in brilliant work in science, pure and applied math, and analysis. Then along comes AI/ML with just empirical fitting to huge volumes of data: back to Ptolemy, but worse.

          AI/ML has gone through spring, summer, fall, and AI winter several times. In one of those times I was in an AI group in a world-famous lab. We published lots of papers, etc. I looked at the work — based on some work at MIT — and just upchucked.

          The problem we were solving was real-time monitoring of server farms and networks for unexpected problems.

          Well, for that problem and a lot more problems that AI/ML is trying, the work essentially must fit in the framework of statistical hypothesis testing, with rates of false alarms and missed detections, power of a test, a trivial test, etc.

          Well, the AI we were doing said nothing about rates of either false alarms or missed detections, and the empirical non-linear fitting still can’t do that well, or at all.

          So while in that group I put my feet up, popped open a cold diet cola, reviewed some of my grad school and other work, had some ideas, was clear on assumptions, did some theorems and proofs, wrote some code to do what the theorems needed, got some data, and published a paper.

          So, net, I blew the AI approaches of the time out of the water, doors blown off, and apparently the same holds for those since, now, and for the foreseeable future — upchuck.

          So, the needed paradigm is just some good math, maybe somewhat new, with assumptions, theorems, and proofs all appropriate for some specific class of problems. Ptolemy didn’t do that, but Newton did. AI is going back to silly, inferior versions of Ptolemy.

          My work in anomaly detection was appropriate for the class of real problems I was addressing, but various parts of applied math, engineering, medicine, quality control, etc., have long done excellent work with rates of false alarms and detections known, etc. AI/ML has taken these bottles of excellent wine and filled them with reeking, toxic sewage.

          For the A16Z essay: I read it some days ago and saw that A16Z missed a much bigger point than that AI is not similar to the older software Silicon Valley invested in, or, still much worse, that AI is sewage — if the technical work were good, the business problems A16Z found would take care of themselves.

          For good technical work, it is just essential to evaluate the work, at the beginning, just on paper, with the assumptions, derivations, theorems and proofs, algorithms, etc. The unique, world-class, all-time grand champions of such evaluations are the US Federal Government, in particular NSF, NIH, DARPA, ONR, etc. Silicon Valley — goose egg, flat 0, not even up to a grade of F, a long way from even a gentleman’s C, and hopeless for doing well.

          Being able to evaluate technical work is where A16Z and the rest of Silicon Valley fail miserably, and that failure is so fundamental and so debilitating that the problems A16Z mentioned are worse than baby talk, drool, or poop.

          Finally, for the problems computer science is addressing and the progress it is attempting, the work is being done in the wrong departments — computer science has picked its low-hanging fruit and is now stuck without good methodology for the future.

          We need computers. Right. So, for how to use these computers, Silicon Valley goes to computer science. Wrong!

          For making progress on such problems, what computer science can contribute that is useful is old; what is new, due to the lack of good methodology, is useless.

          Outrageous, arrogant incompetence.

        • TC said, on March 1, 2020 at 9:28 pm

          Ok YT, I’ll bite… I code in APL and k, so I’m always interested in array languages.


      • sightline said, on February 25, 2020 at 10:55 pm

        See my response above. I think it really started with a) fast scalability, b) reducing initial cash outlay (i.e. you may pay more in the long run, but your first X customers are going to be much cheaper because you can add servers as you need them, not based on anticipated demand), and c) VC clustering behavior (“Well, if Netflix/Dropbox/etc is doing it, why do you need to build your own?” At some point it is easier to acquiesce than argue.)

        And per YT, EC2 instances would be pure COGS, the same as depreciation on your servers. If you had friendly accountants, you MIGHT be able to exclude that server depreciation from your EBITDA, but you’re getting killed on cash burn either way.

        • Scott Locklin said, on February 26, 2020 at 12:18 am

          I know how it works for running a web service thing. It doesn’t make any sense for fitting neural nets like Andreessen was talking about. Even thinking about doing this indicates a brain-dead “because this is how we do it for other things” reaction.

          • sightline said, on February 26, 2020 at 1:08 am

            It makes sense to me from a real-world path-dependency POV, although I admit that fitting neural nets is not my domain.

            If you start with your small proofs of concept in the cloud (because it’s easy), and move to your medium size training sets for your early customers (my a, b, and c above), you’re really invested in the company structure, employee knowledge, and technology for the cloud by the time you get to the really big stuff. So at that point it DOESN’T make any sense, but you’re so deep into it that you don’t have the time, expertise or investor appetite to say WHELP WE’RE BUILDING OUT INFRASTRUCTURE.

            • sigterm said, on March 2, 2020 at 5:26 pm

              I would need real-world details behind this argument.

              It is not out of the ordinary to use any cloud as a simple provider of virtual machines and VLANs, populated by something like Ansible. Run your Ansible scripts again on a homegrown set of servers and switches, transfer your data, and you’re back into business. The transfer might be easier if you pick a datacentre with a good pipe to your cloud provider’s.

              The employee knowledge and technology stays relevant, and maybe needs to be augmented with two hardware guys.

              Using services specific to your cloud provider might seem cheap, but personally I was surprised by the complexity of, say, AWS. Both AWS and the people needed to make AWS work for you are not cheap.

  5. Ryan said, on February 28, 2020 at 6:48 am

    “Gross Margins, Part 1:” is why I do everything in C++ now: you write the code once and run it a million times. C++ is way faster and uses a fraction of the memory vs Python, and compute time and memory cost money.

    • maggette said, on March 2, 2020 at 3:38 pm

      “C++ is way faster and uses a fraction of the memory vs Python”
      For everything that is computationally expensive, Numba or Cython + Python is just fine.

      I still don’t like using Python for production software projects…but this has nothing to do with performance.

  6. AJMoosie said, on March 3, 2020 at 12:32 am

    I recall you writing about Chile’s semi-automated luxury communism under Allende. By some miracle, have you read Fully Automated Luxury Communism: A Manifesto by Aaron Bastani? If so, what should the takeaway be? Is any of it remotely plausible or is it utter nonsense?

  7. darkdervish said, on March 3, 2020 at 6:12 am

    One of the best clear-and-real articles I’ve read over the past year. I encourage people who do know the recipes to demystify most of the AI junk being pumped around conferences and the internet.

    There is no intelligence in AI right now. Just algorithms and grid placement over millions of training cycles, completely unaware of the idea of the expected result and unready for any change.

  8. pagefaeries said, on March 9, 2020 at 11:21 pm

    https://www.linkedin.com/pulse/new-ai-business-model-ben-lamm/

    Reads like a prospectus for an Enron special financing vehicle.

  9. mrm said, on March 25, 2020 at 1:34 pm

    This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. Really, they’re talking about cheap overseas labor when lizard man fixers like Yang regurgitate the “AI coming for your jobs” meme; AI actually stands for “Alien (or) Immigrant” in this context. Yes they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn’t really mix well with those limited domains, which have a limited market.

    Thank you for stating this. I’ve often said the same thing, and don’t understand why it’s not a more common pushback to this robot-utopia BS. Your robot has a name and makes $1.50/hr, assuming he’s not literally slave labor. GM isn’t in Mexico because they’re famed roboticists.

  10. mrm said, on March 25, 2020 at 1:39 pm

    There was this sci-fi/fantasy dystopia I read about as a child, where the main characters discover a civilization that tries to pretend it’s a highly automated, highly technological utopia, but the gears and the machinery and the industrial workings are nowhere in sight. In reality, they’ve just done a horrifyingly good job of hiding their slaves.

    I think this was written in the 70s or earlier – I wonder what they knew then about where our world has ended up now?

    • asciilifeform said, on April 10, 2020 at 8:52 pm

      Lem’s “Futurological Congress”.

  11. mrm said, on March 25, 2020 at 1:50 pm

    I am currently an engineer in a country that doesn’t need engineers because they’ve dismantled all their industry. We don’t need scientists, because you can’t squeeze a magic 100x return out of careful study of the real world next quarter.

    Where we are today reminds me a lot of a description I’ve read of Antonine Rome: the labor of free men had a market value of 0, because the slaves in the provinces could do anything you could do, for free, for their masters, the lords of the colonial latifundia. Ownership concentrated into the hands of the slaveholders, because you couldn’t compete with free. Politicians would support the remaining freemen through patronage, provided they promised their votes and support in mad political street-battles between factions. This is where our ruling class wants to take us. They try to sell it to me as desirable; I view it as justification for violent revolution, and I’m not too particular about the ideological details.

  12. Paul said, on March 28, 2020 at 4:39 pm

    Hi Scott,

    I am a big fan of yours. I (20, German) am thinking about studying physics and would like to know if you would study physics if you were 18 again. Do you think you have to be a genius to study physics? I did an IQ test and got a 112, so a little bit above average. I have great respect for the mathematics in physics; that’s why I am asking.

    Thank you!

    Paul

  13. DeflAition – Piekniewski's blog said, on April 13, 2020 at 5:16 pm

    […] AI companies should be viewed more like software startups or rather service companies. Another blogger Scott Locklin took A16Z post apart and did an excellent job of stating out loud some of the things written between the lines in the […]

  14. Leon said, on April 30, 2020 at 3:22 pm

    Hi Scott,

    I responded to your post here: https://avoidboringpeople.substack.com/p/we-wanted-her-instead-we-got-tinder

    with main takeaways being:

    – Not only are AI companies less profitable, but we should expect that margin profile to last for a while
    – An AI company is more similar to a services company and hence should get a lower valuation multiple than a software company, closer to 2x than 10x rev
    – We should start to see lower valuations than expected for late stage AI company private rounds

    Let me know if I mischaracterised any of your points.

    • Scott Locklin said, on April 30, 2020 at 4:27 pm

      Nope; you’re on the money. Nobody should give a shit what I think. I’ve always thought this, as a worker in the business, as have most other sane long term investors who want to get long “AI” somehow. I figure I stripped away some of the investor happy talk that A&H need to put in there to be considered respectable citizens.

  15. […] mins (+16 mins) – Andreessen-Horowitz craps on “AI” startups from a great height: this is a pretty contrarian take but I found it valuable (given the focus on this inside MS and […]

  16. Ranko Mosic said, on January 5, 2021 at 10:24 am

    “Putting aside their autonomous vehicle bets (which look like they don’t have a large “AI” component to them)”
    Self-driving cars are all about AI. a16z is a major investor in Waymo.

  17. newempire said, on January 20, 2021 at 3:43 am

    Great blog, m8. Great to see someone who actually knows what they’re talking about. I got absolutely trashed on Reddit by Elon Musk fanboys for casually joking that AI companies are not profitable and have no future as an isolated startup domain.

  18. […] Andreessen-Horowitz craps on “AI” startups from a great height […]


