Locklin on science

“AI” and the human informational centipede

Posted in fraud, stats jackass of the month by Scott Locklin on September 2, 2017

Useful journalism about technology is virtually nonexistent in the present day. It is a fact little commented on, but easily understood. In some not too distant past, there were actually competent science and technology journalists who were paid to be good at their jobs. There are still science and technology journalists, but for the most part, there are no competent ones actually investigating things. The wretches we have now mostly assist with press releases. Everyone capable of doing such work well is either too busy, too well paid doing something else, or too cowardly to speak up and notice the emperor has no clothes.

Consider: there are now 5 PR people for every reporter in America.  Reporters are an endangered species. Even the most ethical and well intentioned PR people are supposed to put the happy face on the soap powder, but when they don’t understand a technology, outright deception is inevitable. Modern “reporters” mostly regurgitate what the PR person tells them without any quality control.

The lack of useful reporting is a difficulty presently confronting Western Civilization as a whole; the examples are obvious and not worth enumerating. Competent full-time reporters with both the ability to debunk fraudulent tech PR bullshit and a mandate to do so: I estimate that approximately zero of these exist in these United States at the moment.

What happens when marketing people at a company talk to some engineers? Even the most honest marketing people hear what they want to hear, and try to spin it in the best possible way to win the PR war and make their execs happy. Execs read the “news,” which is basically marketing releases from their competitors. They think this is actual information, rather than someone else’s press release. Hell, I’ve even seen executives ask engineers for capabilities they heard about from reading their own marketing press releases, and be confused as to why these capabilities were actually science fiction. So, when you read some cool article in TechCrunch on the latest woo, you aren’t actually reading anything real or accurate. You’re reading the result of a human informational centipede, where a CEO orders a marketing guy to publish bullshit which is then consumed by decision makers, who pay for investments in technology which doesn’t do what they think it does.


How tech news gets made

Machine learning and its relatives are the statistics of the future: the way we learn about the way the world works. Of course, machines aren’t actually “learning” anything. They’re just doing statistics. Very beautiful, complex, and sometimes mysterious statistics, but it’s still statistics. Nobody really knows how people learn things and infer new things from abstract or practical knowledge. When someone starts talking about “AI” based on some machine learning technique, the berserker rage comes upon me. There is no such thing as “AI” as a science or a technology. Anyone who uses that phrase is a dreamer, a liar or a fool.
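To make the point concrete: what gets marketed as a machine “learning” is, underneath, parameter estimation. A minimal sketch, on entirely made-up data, of the statistics hiding under the buzzword:

```python
import numpy as np

# "Learning" a linear relationship is ordinary least squares: a statistical
# estimate with a closed form. Synthetic data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 200)

# The "model" the machine "learns" is just the least-squares solution.
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(slope, intercept)  # recovers roughly 2 and 1
```

Swap the linear model for a deep net and the estimate loses its closed form, but it is still curve fitting, not cognition.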

You can tell when a nebulous buzzword like “AI” has reached peak “human information centipede”: when oligarchs start being afraid of it. You have the famous example of Bill Joy being deathly afraid of “nanotech,” a previously hyped “technology” which persists in not existing in the corporeal world. Charlatan thinktanks like the “Center for Responsible Nanotechnology” popped up to relieve oligarchs of their easy money, and these responsible nanotech assclowns went on to … post nifty articles on things that don’t exist.

These days, we have Elon Musk petrified that a near relative of logistic regression is going to achieve sentience and render him unable to enjoy the usufructs of his toils. Charlatan “thinktanks” dedicated to “friendly AI” (and Harry Potter slashfic) have sprung up. Goofball non-profits designed to make “AI” more “safe” by making it available as open source (think about that for a minute) actually exist. Funded, of course, by the paranoid oligarchs who would be better off reading a book, adjusting their exercise program or having their doctor adjust their meds.

Chemists used nanotech hype to drum up funding for research they were interested in. I don’t know of anything useful or interesting which came out of it, but in our declining civilization, I have no real problem with chemists using such swindles to improve their funding. Since there are few to no actual “AI” researchers existing in the world, I suppose the “OpenAI” institute will use their ill gotten gainz to fund machine learning researchers of some kind; maybe even something potentially useful. But, like the chemists, they’re just using it to fund things which are presently popular. How did the popular things get popular? The human information centipede, which is now touting deep reinforcement networks as the latest hotness.

My copy of Sutton and Barto was published in 1998. It’s a tremendous and interesting bunch of techniques, and the TD-gammon solution to Backgammon is a beautiful result for the ages. It is also nothing like “artificial intelligence.” No reinforcement learning gizmo is going to achieve sentience any more than an Unscented Kalman filter is going to achieve sentience. Neural approaches to reinforcement learning are among the least interesting applications of RL, mostly because it’s been done for so long. Why not use RL on other kinds of models? For example, this guy used Nash equilibrium equations to build a pokerbot using RL. There are also interesting problems where RL with neural nets could be used successfully, and where an open source version would be valuable: natural language, anomaly detection. RL frameworks would also help matters. There are numerous other online approaches which are not reinforcement learning, but potentially even more interesting. No, no, we need to use RL to teach a neural net to play freaking vidya games and call it “AI.” I vaguely recall in the 1980s, when you needed to put a quarter into a machine to play vidya on an 8-bit CPU, the machines had pretty good “AI” which was able to eventually beat even the best players. Great work guys. You’ve worked really hard to do something which was doable in the 1980s.
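For flavor, here is roughly what tabular reinforcement learning of the Sutton and Barto vintage amounts to. The toy corridor below is my own invented example, but the update rule is the classic Q-learning one (Watkins, 1989). Note the total absence of anything resembling sentience:

```python
import random

# Tabular Q-learning (Watkins, 1989) on a toy problem: a five-state corridor,
# actions step left (-1) or right (+1), reward 1.0 for reaching the far end.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0
        # TD update: nudge Q(s,a) toward reward + discounted best next value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

# The "learned" greedy policy: +1 (go right) in every state, unsurprisingly.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

TD-gammon bolted a small neural net onto exactly this sort of update; the vidya-game bots bolt on a much bigger one.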

“The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. This is a step towards building AI systems which accomplish well-defined goals in messy, complicated situations involving real humans.”

No, you’ve basically just reproduced TD-gammon on a stupid video game. “AI systems which accomplish well-defined goals in messy … situations” need to have human-like judgment and use experience from unrelated tasks to do well at new tasks. This thing does nothing of the sort. This is a pedestrian exercise in what reinforcement learning is designed to do. The fact that it comes with an accompanying marketing video (one which probably cost as much as half a year of grad-student salary, on which the money would have been better spent) ought to indicate what manner of “achievement” this is.

Unironic use of the word “AI” is a sure tell of dopey credulity, but the stupid is everywhere, unchecked and rampaging like the ending of Tetsuo the Iron Man.

Imagine someone from smurftown took a data set relating spurious correlations in the periodic table of the elements to stock prices, ran k-means on it, and declared himself a hedge fund manager for beating the S&P by 10%. Would you be impressed? Would you tout this in a public place? Well, somebody did, and it is the thing which finally caused me to chimp out. This is classic Price of Butter in Bangladesh stupid data mining tricks. Actually, price of butter in Bangladesh makes considerably more sense than this. At least butter prices are meaningful, unlike spurious periodic element correlations to stock returns.
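If the failure mode isn’t obvious, it is cheap to demonstrate (with plain correlations rather than his k-means, but the disease is the same): sift through enough meaningless predictors and some will fit past returns by pure chance. Everything below is synthetic noise:

```python
import numpy as np

# 1000 meaningless "signals" vs 250 days of random "returns": the best
# in-sample correlation looks like alpha, and means nothing. All synthetic.
rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.01, 250)        # fake daily returns: pure noise
junk = rng.normal(0.0, 1.0, (1000, 250))    # fake predictors: also pure noise

corrs = np.array([np.corrcoef(sig, returns)[0, 1] for sig in junk])
best = np.abs(corrs).max()
print(f"best in-sample |correlation| among 1000 junk signals: {best:.2f}")
```

Expect something around 0.2 purely by chance; an out-of-sample test would promptly demolish it, which is why the touts never run one.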

This is so transparently absurd, I had thought it was a clever troll. So I looked around the rest of the website, and found a heartfelt declaration that VC investments are not bets. Because VCs really caaaare, man. As if high rollers at the horse races never took an interest in the digestion of their favorite horses and the superfluous flesh on their jockeys. Russians know what the phrase “VC” means (туалет). I suppose with this piece of information it still could be a clever Onionesque parody, but I have it on two degrees of Erdős and Kevin Bacon that the author of this piece is a real Venture Capitalist, and he’s not kidding. More recently: “Superintelligent AI will kick ass” and “please buy my stacked LSTMs because I said AI.” Further scrolling on the website reveals one of the organizers of OpenAI is also involved. So, I assume we’re supposed to take it seriously. I don’t; this website is unadulterated bullshit.


Gartner: they’re pretty good at spotting things which are +10 years away (aka will probably never happen)

A winter is coming; another AI winter. Mostly because sharpers, incompetents and frauds are touting things which are not even vaguely true. This is tragic, as there has been real progress in machine learning, and potentially lucrative and innovative companies based on it will now never happen. As in the first AI winter, it’s because research is being driven by marketing departments and irresponsible people.

But hey, I’m just some bozo writing in his underpants, don’t listen to me, listen to some experts:





Edit Add (Sept 5, 2017):

Congress is presently in hearings on “AI”. It’s worth remembering Congress had hearings on “nanotech” in 2006.


“By 2014, it is estimated that there could be $2.6 trillion worth of products in the global marketplace which have incorporated nanotechnology. There is significant concern in industry, however, that the projected economic growth of nanotechnology could be undermined by either real environmental and safety risks of nanotechnology or the public’s perception that such risks exist.”

Edit Add (Sept 10, 2017) (Taken from Mark Ames):


26 Responses


  1. Mark Plus said, on September 2, 2017 at 11:54 pm

    This apocalyptic nonsense about AI and the singularity can cause people to make bad decisions about their lives. For example, one of the singularity’s true believers, a professor of economics at Smith College named James Miller (apparently he’s not good enough to teach economics to men), goes around telling people they should not save for retirement because AI will enable them to “live forever” or upload their minds or whatever in about 20 years.

    The guy also wrote a book about Eliezer Yudkowsky’s greatness which reads like he has a boner for this “Friendly AI” doomsday-prevention scammer.

    Of course if you look at Yudkowsky through the lens of atomized individualism, you might plausibly see him as a visionary thinker. But if you’re more woke about human biodiversity, another interpretation of Yudkowsky’s behavior suggests itself.

    • Scott Locklin said, on September 3, 2017 at 2:20 am

      I guess Professor Miller won’t mind if we distribute his pension to people who heeded this advice.
      Yudkowsky is just the loudest carnival barker in the field at present. There are plenty of idiots talking about this who should be laughed at.


  3. pucenoise said, on September 3, 2017 at 1:51 am

    There is no will to work on hard problems, which is why technological progress is slowing down and immense resources are wasted on succulent turds like modern “artificial intelligence.”

    Take non-equilibrium statistical physics, my thesis topic of choice. Working on it is probably an academic death sentence because it is not sexy and because it is very challenging to produce sexually arousing bullshit inspired by it. However it is clearly an area of tremendous importance. In fact, I had to switch to an engineering department to work on it, because physics departments are broke and enamoured with ex-string-theorist welfare projects like topological insulators.

    The issue, I think, is that we have been unwilling as a society to eat the catastrophic corrective that is a financial collapse; instead, we swept it under the rug. Presently the dead walk among us, those bloated, sclerotic zombie companies like IBM and GE, or the soon to be zombified Intel. Meanwhile, the young upstarts, rather than being based upon real technological breakthroughs like the transistor, are reshuffling old innovations to produce useless products, like Twitter, Facebook, Snapchat, and of course, the spoiled princess of the bunch, Google.

    Large companies devote much time to simply manipulating their stock. A cardiac arrest of the system may be the only way to provide much needed oxygen and sunlight to more important sectors of the economy and science.

    • Scott Locklin said, on September 3, 2017 at 2:41 am

      Non-equilibrium stat mech is the type of difficult thing that tenured types should be thinking about. Most tenured types simply work on whatever they wrote their dissertations on. No skin in the game. No pride.

      The education system is a vast bubble; tenured goofballs, usurious scientific publishers, purveyors of scientific fraud, imaginary subjects relating to degenerates’ misfiring neurono-libidinal peccadilloes, economists, tenured HR administrators; the world has entirely too many of these. I’m hoping it explodes in my lifetime.

      • pucenoise said, on September 3, 2017 at 7:29 pm

        I concur wholeheartedly.

        Hopefully a better system will emerge so that people who want to make a legitimate contribution will have a chance…

    • PRCD said, on September 3, 2017 at 10:44 pm

      The name of the game in tech is acquiring other companies to boost stock prices, in the belief that the whole is worth at least the sum of the parts. Lip service is paid to innovation, but tech executives can’t think past the next quarterly results, let alone long-term.

      We should ask ourselves, however, what innovations are actually needed? We can now communicate on stuff Star Trek never imagined. We suffer not for lack of tech but from government regulations and financialization that make small-scale enterprise unprofitable. For example, farming has become large-scale and capital-intensive because government price fixing has eroded margins, forcing larger farmers to absorb smaller ones and buy more equipment. A large chunk of the profits the legitimate tech industry produces is skimmed off by NY financial companies. Meanwhile, Asia continues to develop whatever industries they steal from us as we short-sightedly offshore our manufacturing, and now engineering, to them.

  4. Mark Plus said, on September 3, 2017 at 5:59 pm

    “Funded, of course, by the paranoid oligarchs who would be better off reading a book”

    I keep seeing stories about how Elon Musk is a voracious reader, however. Perhaps he should stop reading transhumanist fantasies like Nick Bostrom’s and concentrate on textbooks about real things like, oh, astronautics and battery technology.

    • Scott Locklin said, on September 3, 2017 at 6:11 pm

      Sutton’s book on rocket science is pretty easy reading. Ignition is actually fun. Reading the periodic table of the elements would help him with batteries.

      I used to amuse myself with Bill Gates’ reading list (Business Adventures was actually very good; heard about it from a friend who noticed it there). Most of it is schlock.

      • Rod Carvalho said, on September 21, 2017 at 2:10 pm

        I became acquainted with Vaclav Smil’s work via Bill Gates’s reading list. Smil’s talks are interesting.

  5. fpoling said, on September 3, 2017 at 7:09 pm

    Thanks for the links! The second one ends with a note about the efficiency of gradient descent in neural networks. I remember reading praise for gradient descent on optimization problems in a book from the eighties. Do we have anything else, or should I expect to read more praise for the method in 30 years, in yet another application?
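    (The method in question really is that old and that simple: steepest descent goes back to Cauchy, 1847, and fits in a few lines. A toy quadratic, minimized by hand:)

```python
# Steepest descent on f(x, y) = (x - 3)^2 + 10 * (y + 1)^2:
# repeatedly step opposite the gradient until it stops moving.
def grad(x, y):
    return 2 * (x - 3), 20 * (y + 1)

x, y, lr = 0.0, 0.0, 0.05
for _ in range(500):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy

print(round(x, 3), round(y, 3))  # prints: 3.0 -1.0, the minimum
```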

    • pucenoise said, on September 3, 2017 at 7:32 pm

      I switched from physics to computer science for my first attempt at a PhD, and wound up in a machine learning course at one of the top 5 machine learning schools.

    I was stunned to discover that many CS students have limited training in fuddy-duddy 19th-century math, and were absolutely enamoured with gradient descent, support vector machines, and other techniques that can be understood with roughly a single course in vector calculus.

    Not that I have any problem with this (SVMs in particular are a noble and effective tool), but it seems that a large percentage of the machine learning community is oblivious to more modern mathematical methods, although you can see some very clever people at the fringes using quite neato mathematics (see diffusion manifold learning and other PDE/kernel methods, topological data analysis, etc.).
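    (The SVM point can be made concrete: a linear SVM is just the hinge loss plus an L2 penalty, minimized by that same 19th-century subgradient descent. A rough Pegasos-style sketch on made-up separable data:)

```python
import numpy as np

# Linear SVM by subgradient descent on the regularized hinge loss:
#   minimize  lam/2 * ||w||^2 + mean(max(0, 1 - y_i * (w.x_i + b)))
# Toy data: two Gaussian blobs, labels -1 and +1. All numbers invented.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(+2, 1, (50, 2))])
y = np.array([-1.0] * 50 + [+1.0] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1
for _ in range(200):
    viol = y * (X @ w + b) < 1          # points violating the margin
    # subgradient: the hinge contributes -y_i * x_i for violators only
    gw = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(X)
    gb = -y[viol].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

accuracy = float(np.mean(np.sign(X @ w + b) == y))
print(f"training accuracy: {accuracy:.2f}")
```

    One course in vector calculus covers everything above; the kernel trick adds some linear algebra, not magic.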

  6. Mark Plus said, on September 4, 2017 at 5:59 pm

    Here we go:

    Elon Musk says AI could lead to third world war


  7. Mark Plus said, on September 8, 2017 at 4:03 pm

    The Seven Deadly Sins of Predicting the Future of AI


  8. Mark Plus said, on September 9, 2017 at 9:12 pm

    Scott, any thoughts on Jerry Pournelle’s passing?

    As I recall, Pournelle promoted Gerard K. O’Neill’s ideas back in the 1970’s, and then he played a role in creating President Reagan’s Star Wars boondoggle in the 1980’s.

    Also, I doubt his science fiction has aged well. He might have been influential in some ways, but I just don’t have the impression that his legacy amounts to all that much; and he endorsed the kinds of technological mirages you have criticized on your blog.

    • Scott Locklin said, on September 10, 2017 at 12:04 am

      I’m a big admirer of Pournelle and the spirit he represents. Unfortunately my only brush with him was his comment on my “myths of technological progress” article making the argument that we have progress because iphones.
      I haven’t read his or any science fiction in years, but always enjoyed his and Larry’s stuff when I did.

    • benespen said, on September 11, 2017 at 10:49 pm

      I think Pournelle’s books have started to look more plausible than they did 15 years ago. Pournelle started writing sci-fi in the 70s set in the early 2000s, and he was disappointed to live to see that the technology he wrote about never materialized. However, the world he described in books like The Mercenary is disturbingly close to what we are getting. When I read some of his CoDominium books in the early 2000s, I felt like they were a throwback to the politics of the 1970s, with out-of-control student protests and centrist establishmentarians banding together to fend off radicals of the left and right. Now I feel like he was on to something.

      • Toddy Cat said, on September 12, 2017 at 4:02 pm

        Yes, and Pournelle did predict the Internet, private space exploration, something like iPhones, and the personal computer revolution. No, he didn’t get everything right, but when examined in the context of the early-mid 1970’s, he did a lot better than most “Futurists” of the time, and in a much more entertaining way.

        Also, some of Pournelle’s tech enthusiasm has to be placed in the context of his times. Pournelle was born in 1933, when almost all planes were still biplanes, and the fastest that any human had ever travelled was about 250 miles per hour. By the time he was 36, humans had nuclear power, had landed on the moon, and had travelled five times the speed of sound. Had things kept progressing at that rate, Pournelle’s predictions would have actually looked pretty conservative. Why they didn’t is another question, but predictions of a Moon base, a Mars mission, and large permanent space colonies didn’t look at all unreasonable in the late 1960’s.

        By the way, rumor has it that Pournelle’s work is very popular in China. It would certainly be ironic if Jerry Pournelle, an American patriot if ever there was one, helped China to become the first great spacefaring country…

  9. pucenoise said, on September 10, 2017 at 7:40 pm

    Hey look, Google’s GENIUS deepmind AI taught itself how to walk!

    Wow that really changed my mind! I’m sure a quick search on the internet wouldn’t come up with an older, equally overrated algorithm which accomplished more or less the same thing years ago!

  10. maggette said, on September 11, 2017 at 11:43 am

    It is true. Real journalism is probably close to extinction. Journalism on science and technology (and “technology” may include a scientific or near-scientific view of social issues, like economics) even more so.

    As pathetic as it may sound, it is probably the duty of bloggers like Locklin to give some insight.

    On AI and the AI winter:

    To me it is not only a problem of journalists and governments… the companies and their leadership themselves are basically incapable of making sound decisions in that field.

    In my experience, “that stuff with big data,” predictive analytics, BI, and machine learning (supervised, unsupervised, reinforcement) are all more or less the same thing in the heads of many CEOs/CTOs and strategy consultants.

    At present I am actually contributing to destroying the reputation of ML. I am in a highly dysfunctional project (budget, expectations, etc.) with a big, big firm on big data and machine learning. And not for the first time.

    I observed the following cycle in big companies:
    1) Big technology firm or service company (IBM, CSC, SAS) shares highly dishonest PR material on their capabilities, their software stack, and machine learning in general.
    2) Journalist publishes the PR material without even understanding it.
    3) CEO/CTO/strategy consultant reads that stuff while trying to look important, sitting on an airplane reading the Financial Times.
    4) Big meetings get scheduled, big projects get “planned,” huge budgets allocated, PowerPoint slides with unrealistic expectations presented to everyone…
    5) Project does not deliver and is over budget, not adding any business value.
    6) Go back to 1).

  11. Evil Superintelligent AI said, on September 19, 2017 at 7:16 pm

    This was a refreshing read. Wrt Congressional hearings over AI and nanotech: remember they also banned “undetectable plastic firearms” when Glocks came on the market in the ’80s.

    Scott, a question for you from a budding technologist: any thoughts on which areas of physics research are likely to actually result in useful technologies in the near future? I’m looking to join a research group, but I’m not really far enough along in my studies that I trust my BS detector is properly calibrated. I’m an old guy who already pissed years of my life into the sucking maw of the government, so I’d like to avoid doing it again working on something that will perpetually be really cool in 20 years.

    • Scott Locklin said, on September 19, 2017 at 9:03 pm

      Well, various kinds of device physics are closest to being useful to someone. I am far enough away from that world now that I can’t really make any intelligent recommendations here. There was some research on aluminum-to-silicon interconnects that seemed like a good idea 15 years back, but I don’t know that anything ever came of it (aluminum is pretty reactive). Someone told me in the 90s to study “photonics,” but nothing really came of that.

  12. maggette said, on September 20, 2017 at 7:39 am

    Another unbearable AI hype story is collapsing: from what I hear, Swiss Re is not at all happy with how IBM Watson is performing.

  13. mitchellporter said, on September 20, 2017 at 11:07 am

    640 IQ ought to be enough for anybody.
