Locklin on science

Ten predictions for 2030

Posted in Progress by Scott Locklin on January 27, 2020

Here are a few predictions I'll make for what happens in the next 10 years in science and technology.

 

  1. Computing and tech: Moore's law (and Kryder's law) will continue to fail. Computers in 2012 are about the same as computers in 2020 as far as I can tell; I'm typing this on a vintage 2011 laptop which is still actually faster than most 2020 laptops. I keep looking for server hardware that significantly (10x) beats a 2012-era Xeon box; please let me know if you can find any. Compiler engineers will become rare and in demand. The proliferation of programming languages will continue, with no clear winners. Operating systems will look much the same as they do now, with the possible exception of ideas from mainframe days influencing modern time-sharing "clouds." No major changes to computer UI; no virtual reality, no mass adoption of "wearables." Zero-knowledge proof (ZKP) systems will become key pieces of technological infrastructure. Socially, the internet will continue to grow as an educational alternative to increasingly useless universities. First-mover effects will keep most existing tech companies in their present positions.
  2. Autonomous vehicles: There will be no Level 4, let alone Level 5, autonomous vehicles in 2030. "Driver" will still be the most common US job description. The whole thing was always a preposterous LARP, obviously so from the get-go; cruise control with collision avoidance and GPS is not autonomous driving. Rideshare companies will become federated taxicabs or fail. The "gig economy" of underpaid servants will continue to substitute for manufacturing jobs for non-college graduates in America.
  3. The "AI" apocalypse: There will be zero profitable companies based entirely around Deep Learning style machine learning. By 2030 there may be one large, profitable company besides Fair Isaac based on classic machine learning techniques such as boosting. Some older area of machine learning, such as probabilistic graphical models (PGMs), will come back into fashion as GPU frameworks for them are developed. No major disruptions to blue- or white-collar jobs.
  4. Data science: Significant breakthroughs in classical statistics will happen. Ideas such as sequential aggregation of experts or topological data analysis may form the nucleus of these breakthroughs. The new ideas will generally be ignored by people who aren't in hedge funds. Arguably this is already ongoing, and it is one of the few things I'm excited about for the future.
  5. Privacy: A high-profile murder or kidnapping will result from the use of commodified adtech tracking-as-a-service. It's pretty trivial to track individuals based on their tracking bullshit now. The actual value of this for advertising is much lower than it is for intelligence work in most situations. While lawsuits are the main thing protecting people from this in the West, other civilizations have fewer scruples, and the internet is a global system, one with various only marginally regulated payment technologies. Cheap Chinese cameras and other "IoT" garbage, hacked or backdoored, will play a role in a significant political scandal.
  6. Quantum computing: The actual use case of quantum computing, aka factoring very large integers into primes, will not be achieved. The largest number legitimately factored by a quantum computer will be on the order of 1000. Not 1000 bits; something on the order of the number 1000, i.e. a 10-11 bit number. By "legitimately" I mean including all the gates needed to perform the Shor algorithm: about 72 * n^3 for an n-bit number, which works out to on the order of 100,000 gates (see the sketch of this arithmetic just after this list). RSA will continue to be used. "Quantum information theorists" will join sociology majors as a common form of training for underemployed baristas.
  7. Transportation/Energy infrastructure: no significant changes. Electric cars will not become more than 5% of the US fleet. Electric aircraft efforts will fail. Large-scale passenger train efforts in the US will continue to fail due to bureaucratic sclerosis. No flying cars. No reusable spacecraft (single-digit-percent likelihood that something like Skylon works and is funded). "Renewables" will continue to grow in popularity without appreciably changing the US (or world) energy infrastructure. Some environmental disaster may be attributable to a renewable energy technology. No increased adoption of fission in the West. Imbeciles in 2030 will predict fusion breakthroughs by 2050.
  8. Engineering/manufacturing/mining: no major changes to manufacturing technology. Solid (3D) printing will continue to be a fringe and prototyping technology rather than a driver of new manufacturing techniques. Nanotech will continue in its nonexistence outside of bad science fiction plots. Fracking will grow in importance. Undersea mining will either create an ecological disaster or (hopefully) be banned before it does.
  9. Agriculture: agricultural productivity will continue to grow. Obesity and diabetes will become a worldwide problem.
  10. Scientific breakthroughs: none in physics, chemistry, astronomy, earth sciences. Some potential in biology (bioinformatics), medicine, paleontology and archaeology (via genetic and imaging technologies).
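
A quick back-of-the-envelope sketch of the gate-count arithmetic in prediction 6. The 72 * n^3 figure is the rough estimate quoted above (published resource estimates for Shor's algorithm vary); the snippet below is just that arithmetic, nothing more:

    # Shor's algorithm gate count, per the estimate above: roughly 72 * n^3
    # gates to factor an n-bit number.
    for n_bits in (10, 11):
        gates = 72 * n_bits ** 3
        print(f"factoring a {n_bits}-bit number (up to ~{2 ** n_bits}): ~{gates:,} gates")
    # 10 bits: ~72,000 gates; 11 bits: ~95,832 gates -- i.e. on the order of
    # 100,000 gates just to legitimately factor a number around 1000.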

One of my firm beliefs as a technologist is that progress has slowed down over the last 50 years. Had microelectronics not come along, this would be abundantly obvious; most of the external technological world looks pretty much like it did 50 years ago. Now we are in a situation where even microelectronics are not obviously getting much better, a fact roundly ignored by most people who should know better. The disasters of the coming years will mostly come from a lack of technological breakthroughs, and a lack of political recognition of this fact by the managerial classes, who are living in a fantasy world where Whiggism is as rational an ideology as it was 100 years ago. I assume this will result in continued "ghost dance" type crazes among managerial-class types in the West. The present social unrest being experienced across the West is a result of this gross slowdown in technological progress. "The establishment" which has been in place across the West since 1945 is a technocratic elite. Its role was to increase productivity and distribute the fruits of this to the citizens of its countries. No productivity increase, no fruits to distribute: the social unrest will continue until new forms of social organization are found.

 

Speaking of which, I'm not terribly optimistic about new forms of social organization. While there are a few bright areas in terms of decentralized systems, by and large the increase in surveillance technology and nationwide witch hunts via "social media" do not bode well. Countries are converging on a sort of "social credit" system, which is plain old classism, abetted by surveillance technologies and enforced in the West via witch hunts and private enterprise. It's a fragile and foolish system, to say nothing of stupid and evil.

I could be wrong; we could spin up a sort of Manhattan Project towards some achievable goal. A real one, not the Potemkin bureaucratic-spoils nonsense that passes for such efforts in the current year. There could be a large war which causes increased investment in technology. It doesn't seem likely though; the US can coast on its supremacy in military technology, the financial system and control of the seas for another 20 years at least, and the turds in charge will continue to think this some virtue of theirs, rather than the virtues of their grandparents.

Links to other people’s predictions:

https://futurebase.com/timeline/2030

https://www.quantumrun.com/future-timeline/2030

https://www.weforum.org/agenda/2016/11/8-predictions-for-the-world-in-2030/

https://www.news.com.au/lifestyle/real-life/wtf/time-traveller-man-from-2028-makes-specific-predictions-for-the-future/news-story/c410773ecf14d3b34df9deeeb5153b7a

A couple of dumb sugar plum fairy predictions about 2020 (salted with a few good ones):

https://www.discovermagazine.com/mind/what-youll-need-to-know-in-2020-that-you-dont-know-now

https://www.wired.com/2000/07/the-year-2020-explained/

https://blogs.chicagotribune.com/news_columnists_ezorn/2010/01/2020.html

https://paulbuchheit.blogspot.com/2010/01/10-predictions-for-world-of-january-1.html

https://www.businessinsider.com/stratfor-predictions-for-the-next-decade-2010-1/china-doomed-1

 

90 Responses


  1. glaucous_noise said, on January 27, 2020 at 11:08 pm

    Not sure I agree with your assessment on physics in general. Part of the problem is that I see nano/quantum information as just marketing, fighting for oxygen for chem/physics in our idiocracy where money is pointlessly funnelled into nonsense like cancer research, so the popular claims are at best aspirational.

    For instance, the head of one of the big name corporate quantum computing initiatives told me that he felt QC is much closer to IBM SSEC than it is to a mature technology, sort of a Frankenstein that might find use as a co-processor in a few decades.

    In the guise of this research, a lot of slow but good work is being done on advanced materials or things like single photon sources which could all have incremental, niche applications which matter in the long run.

    I do still feel pessimistic and will probably leave the field after I graduate, but the outlook is 1% more optimistic than you're making it out to be.

    • Scott Locklin said, on January 27, 2020 at 11:17 pm

      I assume stuff like what you say (except for QC) will continue to mature in physics. I have a former roomie who has made a nice career for himself in wide bandgap semiconductors for example. Those things won’t create new industries or new ways of looking at the world the way physics has done in the past. The fashionable stuff right now all appears to be “glass bead games.” Most clever and ambitious people get out of the field when they realize it’s a lot of wanking, and the job opportunities are terrible. Anyway, good luck!

      Also; latest Horgan is pretty good:

      https://blogs.scientificamerican.com/cross-check/whats-wrong-with-physics/?amp

      • Cameron Braun said, on February 23, 2020 at 1:49 am

        Scott, in the article you linked Search says “I actually feel sorry for people that do not understand the laws of physics in their full mathematical glory because they are missing out on something that is truly divine.” I have no physics training but nevertheless have long since been unable to shake the feeling that his sentiment is correct. I don’t have the time to blindly slog through scores of books to achieve his understanding, but as a source I trust, could you expedite the process by recommending a few pertinent materials that would allow me to achieve a respectable understanding of “the laws in their full mathematical glory”?

        On a side note, if you ever publish a book, I’ll be on the wait list.

        • Scott Locklin said, on February 23, 2020 at 2:37 am

          I dunno, Maxwell's equations are pretty cool. Everything else: he's exaggerating. Stuff like topology is cooler than physics; even the physics with topology in it.

          • Cameron Braun said, on February 26, 2020 at 12:37 am

            Thanks Scott. You should consider posting a list of book recommendations.

  2. CLF said, on January 27, 2020 at 11:31 pm

    I'm not convinced data science will see lots of progress. The serious "data scientists" are charlatans promulgating their own messed-up religion of analytics, which they claim only they, as the high priesthood, can practice. I guess I'm technically one of those people, but I at least only call myself a business statistician. Really, data science is just a bunch of SQL queries to extract data, some cross tabs, and maybe generalized OLS with some MLE for the real sophisticates. Business rarely wants the real hardcore methods and instead just uses simple analysis to backstop already-extant methods and decisions.

    • Scott Locklin said, on January 27, 2020 at 11:37 pm

      I guess I should have used the words “classical statistics” more. I’ve worked (and continue to work) with the two ideas I listed above, and they sure look like the beginnings of something really revolutionary to me.

      Most of industrial data science is, of course, just as you describe it.

      • CLF said, on January 27, 2020 at 11:43 pm

        It's severely disappointing as a field, but at least one can get paid. I dearly hope some progress can be made in classical statistics, since most of the field seems to be a bit moribund. At least all the development from the '50s-'80s can now be exploited via decent hardware and broader knowledge amongst practitioners.

        • Scott Locklin said, on January 28, 2020 at 12:08 am

          On the bright side of things: knowing a lot of classical stats, even in the absence of new results, is powerful and puts you head and shoulders above most of the people in DS.

          • CLF said, on January 28, 2020 at 12:16 am

            That’s what I beat into my young employees. It’s a fun career mixed with terror and imposter syndrome when doing the hard work. Thanks for the great article Scott.

            • maggette said, on January 28, 2020 at 8:54 am

              I am a machine learning engineer, data scientist, data/software engineer and scientific programmer by trade. I have used, and still use, a mix of classical statistical modelling, optimization methods, machine learning, control theory, simulations and numerical techniques to solve industrial and engineering problems.

              My customers are either big, slow-moving ships that have a lot of money to burn, or small "hidden champions".

              For the big companies, my latest conservative estimate is: of all machine learning projects, roughly 75% are failures. 40% don't even achieve their technical goals; 35% achieve their technical goals but fail to make sense from a commercial perspective. Like you said: lots of these projects are just a rebranded version of business intelligence anyway (which can add value, like accounting or sales does).

              The most satisfying projects have been the ones for the "hidden champions". Their innovation process is different. It's not management heads with a "strategic vision". It's more "Guys, I just got a call from our main customer about product X, and if we are not able to reduce costs by 4% and cut the failure rate by at least 12% this quarter, we are dead by the end of the year!"

              Still: 75% fail. But they fail fast, and everybody agrees on the failure (which does not happen in the big-company projects, where embarrassing stuff gets sold as an achievement).

              I do not agree that knowing classical statistics will put you ahead by much. In NLP and image processing, classical statistical models are just not competitive anymore. And chances are, even for many simple regression or classification problems, I XGBoost the shit out of your Generalized Method of Moments. Worst case, it performs just as well. I know it hurts. I would also argue that for real-world problems, information theory, control theory and solid CS/software engineering skills are equally important.

              I get it: the AI hype is crazy and the incompetence is sometimes unbearable. I myself just saved one of my (big company) customers from hiring a start-up that wanted to solve the problem at hand with deep reinforcement learning, even though I could show in that meeting, on a whiteboard, that we were actually talking about a quadratic programming problem! So Excel Solver, not GPU!

              But some of the progress in ML is real, and I have implemented it myself in real-world applications that saved a lot of money. It didn't "change the world" or whatever.

              So here are my more optimistic predictions:
              A) There will be a single programming language/compiler/library ecosystem that is a great-to-acceptable fit for both numerical and scientific parallelized HPC computing and real-world multi-platform software engineering. The language will have built-in autodiff support and natural implementations of DL and RL training, as well as GPU support and a powerful library for probabilistic programming. No more wrappers around C++ code. It will have a strict type system. Different solvers for optimization problems, and maybe even stuff like LAPACK, will have natural implementations in that language. Apache Spark, Apache Flink and Pulsar will be improved and rewritten in this language, as will scikit-learn, TensorFlow and PyTorch. The language will be used for systems programming as well as for mobile, embedded or cloud applications. I predict it will be based on an LLVM compiler, or an improved version of it.

              B) People will understand that ML, when used, is only part of a bigger software solution. Data engineers will get a bit more recognition. There will be more hybrid approaches combining complex ML models, information geometry, classical statistics, probabilistic graphical models and other techniques!

              C) As an extension of B), but worth its own point: there will be a comeback of GOFAI-style symbolic AI methods as part of the hybrids in B). I will even throw a name out there: these systems will be called "intelligent systems" :).

              D) People will realize that “real world meat space” engineering is at least as important as software engineering.

              E) People will realize that incremental progress is still progress.

              F) I will become a billionaire and build Skylon myself, because it’s awesome!

              • Scott Locklin said, on January 28, 2020 at 6:16 pm

                It's rare that I come across an NLP problem where you need anything better than Naive Bayes or its semisupervised equivalents. I did use latent Dirichlet allocation once; it seemed pretty good. Never shipped it though. I've literally never come across an actual IRL image recognition problem that would make anyone a dollar, but, for example, I'd reach for Viola-Jones (aka boosting) and eigenfaces type ideas before dweeb learning, if only because you don't need to buy or rent special hardware to fit things.
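
                To make the boring baseline concrete, here is a minimal sketch of the sort of Naive Bayes text classifier referred to above, using scikit-learn. The toy documents, labels and pipeline choices are all invented for illustration; only the general technique (bag-of-words features fed to multinomial Naive Bayes) is the point.

                    from sklearn.feature_extraction.text import TfidfVectorizer
                    from sklearn.naive_bayes import MultinomialNB
                    from sklearn.pipeline import make_pipeline

                    # toy training data, made up for the example
                    docs = ["refund my order", "package never arrived", "item broken on arrival",
                            "love the product", "works great, thanks", "five stars"]
                    labels = ["complaint", "complaint", "complaint", "praise", "praise", "praise"]

                    # unigram+bigram tf-idf features into multinomial Naive Bayes
                    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
                    model.fit(docs, labels)
                    print(model.predict(["item arrived broken, want a refund"]))  # -> ['complaint']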

                Though you didn't say so exactly, you're sorely mistaken if you think classical stats skills are somehow obsolete. It's true that xgboost is an amazingly useful piece of technology, but if you're dealing with noisy data, or very large data, you're going to get a lot farther with classical techniques. You're also going to understand a lot more about why you might be right, and where you might be making an error, especially if you know some of the "new statistics." If nothing else, understanding how to write different loss functions for xgboost is going to get you better results where the last bit of ROC curve counts.
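
                As a minimal sketch of that custom-loss point: xgboost lets you pass your own objective via the obj= argument to xgb.train, which expects a function returning the gradient and Hessian of the loss. The asymmetric loss and the synthetic data below are invented purely for illustration.

                    import numpy as np
                    import xgboost as xgb

                    def asymmetric_squared_error(preds, dtrain):
                        # squared error that penalizes under-prediction 3x harder than over-prediction
                        y = dtrain.get_label()
                        resid = preds - y
                        w = np.where(resid < 0, 3.0, 1.0)   # heavier weight when we predicted too low
                        return 2.0 * w * resid, 2.0 * w     # gradient, hessian

                    rng = np.random.default_rng(0)
                    X = rng.normal(size=(1000, 5))
                    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.3, size=1000)
                    dtrain = xgb.DMatrix(X, label=y)

                    booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                                        num_boost_round=200, obj=asymmetric_squared_error)
                    print(booster.predict(dtrain)[:5])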

                I'd be curious where you think optimistic prediction A is going to come from. Everything I see in language and compiler design is hot garbage. Academics in this area are all old, or are clueless mountebanks like the Julia dudes. The continued use of C and Fortran ought to give anyone making such predictions pause. Mind you, I'm always interested in new ideas that might pan out, but this rant originated with a pal and me complaining about what wanking hot garbage something like Haskell, and computer "science" and tech in general, is.

                • asciilifeform said, on January 28, 2020 at 6:38 pm

                  Re: language/compiler design, clueless mountebanks, etc.: I found that good old Ada in fact delivers where the current-day “safe language” mountebanks merely promise; and with a written paper standard, and without any Haskellian “hair shirts”. Bonus: the resulting compiled binaries closely correspond algorithmically to the source, which permits proper auditing; additionally, Ada does not need heavy runtimes/garbage-collectors, which permits deployment to inexpensive embedded machinery.

                  • Scott Locklin said, on January 28, 2020 at 6:48 pm

                    I had considered spending the time to learn SPARK-Ada for exactly the reason you say. People make big claims about Rust as a C replacement, but whenever I look at it my eyes bleed. As you say, Ada exists and has existed for a very long time. I assume the older generation worked a lot of the bugs out. Why fool with something like Rust? I’m willing to bet the culture is better as well.

                    • asciilifeform said, on January 28, 2020 at 6:52 pm

                      The “culture” of Ada is in fact very different — there is scarcely any public source, aside from the compiler’s (GNAT). But this turns out to be a win — you’re reprieved from wading through an ocean of unreadably-“open” filth, and forced to confront the job of writing your entire system properly, from the ground up.

                • maggette said, on January 29, 2020 at 9:26 am

                  Sure. You know me. Of course I think stats skills are important. And I do think that a solid understanding of Bayesian and frequentist statistics is important (by the way, the "war" between these two viewpoints is childish; a practitioner will always use both). To this day I have never shipped a solution that was ML-only. There are a lot of important topics where, IMHO, you can't do anything with ML, like extreme-events stuff or working with small data sets. Like you said, at least have an understanding of what is going on in "new stats".

                  For example, I am pretty pissed that, of all the drum-beating about "human pro-level breakthroughs," I have the feeling nobody talks about Pluribus and multi-player poker (https://www.cs.cmu.edu/~noamb/papers/19-Science-Superhuman.pdf). That's actually Monte Carlo search and game theory.

                  Like I said in B): to create "intelligent systems," I think a proper mix of statistical techniques, ML (including deep learning and reinforcement learning), signal processing, information theory, simulations, optimization and meta-heuristics, game theory, and probably more hands-on, hard-coded "intelligence" and algorithms (like GOFAI), plus lots of CS, will help to write solutions that really do something.

                  Regarding A): yeah, probably wishful thinking. Like you or Jeremy Howard, I think we took a wrong turn somewhere in the 50s and 60s when it comes to languages (APL and stuff), but if I have to live in a C-like world (and I do: after writing this post I will switch back to my editor window and write Scala code :)), I think Lattner is walking somewhat in the right direction (https://en.wikipedia.org/wiki/Chris_Lattner). The end of Moore's Law is a real thing. The next step has to be something that is easy to develop for parallelization and on multiple platforms, embedded and mobile included.

                • Daddy Botox said, on January 29, 2020 at 3:16 pm

                  Hey Scott, can you expand on why you don’t like Haskell?

                  I was a static typing zealot (ML/Haskell/Coq) for some time, but then I let it go. I found no evidence that static type checking results in "better" software, except maybe for CompCert, a (mostly) formally verified C compiler written in Coq and proven to not have middle-end bugs (it can't fuck up a correct C program, as almost all passes are proven to preserve program semantics). In general I think that software is a buggy shitfest, but I don't think it's possible to formally prove all software, because the cognitive load of doing so is insane.

                  https://en.wikipedia.org/wiki/CompCert

                  https://www.absint.com/compcert/

                  "The striking thing about our CompCert results is that the middle-end bugs we found in all other compilers are absent. As of early 2011, the under-development version of CompCert is the only compiler we have tested for which Csmith cannot find wrong-code errors. This is not for lack of trying: we have devoted about six CPU-years to the task. The apparent unbreakability of CompCert supports a strong argument that developing compiler optimizations within a proof framework, where safety checks are explicit and machine-checked, has tangible benefits for compiler users."

                  • Scott Locklin said, on January 29, 2020 at 9:58 pm

                    OCaml has long been on my list of things to try more seriously, for the type system and the obvious compromises made for usability. Wish it had a threading model which was worth two shits, but ultimately I'll probably learn medieval German before spending any time on OCaml (or F#).

                    The proof systems like Coq are of course of interest, in particular for stuff like smart contract formal proofs, but at some point a man has to delegate crap like this; I can't be an expert in everything, and this looks like something where you have to be an expert to take advantage of it. Hadn't heard of CompCert; cool. I wonder how much this causes problems with performance/optimization.

                    Haskell: one of my pals became quite expert in it during a period of protracted unemployment, and declared it lacking in real-world utility and applicability. I've also noticed that the open source industrial projects in it I have come across are generally worthless. They usually work in some sense, but break down mysteriously under real-world load. I prefer not to name names here, as negotiations are underway and all, but most of them end up recoding their goofy Haskell thing in something boring like Java, which has the benefit of actually functioning properly. Haskell seems like the type of programming language which appeals to programmers who enjoy huffing their own farts and showing the world how special and unique they are. Yes, I'm saying this as a J enthusiast, but the J world is very different: a simple, time-tested programming model which has real-world applications, and a community that is pragmatic and conservative rather than novelty-seeking weebs.

                    When I come across Haskell-based things, it usually seems to be in some skunkworks well outside of production, where the local boffins said, "I know, let's do this in Haskell!" Let's face it: if you have an IQ above that of celery, most of programming is boring. We always think there must be a better way, and enjoy mastering complex tools which promise much, even if they never deliver, if only for the ego gratification of not being a nodejs or Fortran bro. I'm done with that; if a tool can't obviously deliver, I don't care about it. Novelty seeking in software for its own sake: there are better things to do with my time, like reading cartoons or keeping up to date with the latest 4chan memes; I literally get more out of this than messing around with "new" ideas in programming languages. This is probably material for an extended rant about software engineers being too precious for their own good. Dopes will sing the praises of weird shit like monads and not write unit tests or test the network stack for black holes.

                • EccentricOrbit said, on February 10, 2020 at 5:57 pm

                  I'd be rather interested in that rant about programmers and their obsessions: I can't figure out *what* is driving their fanaticism sometimes. It's like they all like to screw with each other by taking something straightforward and making it baroque and overcomplicated. I've fought people over pointers before: they're a useful and necessary language construct that makes the CPU obey the will of the programmer, and therefore they're deemed "dangerous" and must be wrapped in a tower of duct tape and abstractions and never used. WTF? They don't require superhuman skills to use.

                  • glaucous_noise said, on February 10, 2020 at 9:15 pm

                    I think the issue is that typical meat-sack, trench-manning, front-line infantry programmers are not very bright, because what they do is not very hard. Making a website to sell people snacks and have some brain-dead debt serf drive it to them for an artificially investor-subsidized delivery fee is not hard if you're only really working on the front end (although the actual backend routing algorithms require actual engineers, whom I would not term programmers; they're actual scientists).

                    A lot of your workplace social progress depends on what your peers and boss think of you, not on your actual output, in many of these inflated positions, and so there is a tendency to take what is a rudimentary task and render it as complex as possible so that your manager believes you're a genius. This is the origin of many needlessly convoluted object-oriented structures, for instance.

  3. rademi said, on January 27, 2020 at 11:36 pm

    This looks somewhat plausible, but I’m not so sure about some of the details.

    For example: GPUs and other forms of quantitative increases still represent growth. (We’ve got problems, though, with people not understanding each other, and some other nonsense.)

    Also, I've got 64GB of RAM on my laptop. The thing has occasional problems with cooling and power stability, so it's not a total win, and part of the reason for these problems has to do with the character of outsourced production.

    But also, I wonder what you mean by "reusable spacecraft"? Technically, NASA's shuttle was reusable, and people are working on bringing the costs down. Anyway, I do not understand what you are trying to say there.

    Generally speaking, though, we had our politics structured around a very bad economic theory (the "efficient market hypothesis"), we've exported most of our factories to other countries, and now we've got a lot of very bored and grumpy people with nothing much to do. And the consequences of this situation are what I think you're describing?

    From my point of view: we have a lot of the wrong habits, and we've been training our managers poorly (not to mention the "reproducibility crisis" of peer-reviewed science publishing). There's a lot of inertia there, and we need to turn both our economy and our politics around. Observations of the sort which lead to the predictions you've posted here are a start, but it's not going to be easy.

    • CLF said, on January 27, 2020 at 11:45 pm

      The replication crisis is truly sobering.

    • Scott Locklin said, on January 27, 2020 at 11:47 pm

      I don’t know what the solutions are, though I am certain what we’re doing now isn’t helpful, and anyone trying different things will likely be crushed. The social class of mandarins who organize our society are more interested in protecting their sinecures than fixing anything. People overtly state they no longer believe in merit as something useful in hiring decisions: you can’t fix anything with that attitude. This makes technology something like interpretive dance.

      "Reusable spacecraft": I guess I mean reusable spacecraft cheap enough to operate that I can go to LEO for the price of a flight to Australia. Skylon has potential … if it works. And of course, going to space on the cheap means you can do a lot of interesting engineering. The Shuttle was a piece of crap that should have been abandoned immediately; it never worked, and costs spiraled well past those of disposable launch vehicles.

      • asciilifeform said, on January 28, 2020 at 12:43 am

        The Shuttle — initially conceived as an orbital nuke dispenser — was repurposed as a techno-prop in the project of bankrupting the USSR. And, from this POV, “worked” quite well.

        • Scott Locklin said, on January 28, 2020 at 5:56 pm

          I think the original shittle design really was supposed to make heavy lift cheaper, but congressional limitations hampered a really reusable design (aka launch from an aircraft) and somehow they overlooked the preposterous labor involved in keeping it safe. I know the Soviets were perplexed enough by the whole thing they built a better one, but like many American wins, I’m pretty sure that was an accident.

          • rademi said, on January 28, 2020 at 7:04 pm

            I suspect that all useful progress started as accident.

  4. asciilifeform said, on January 28, 2020 at 12:21 am

    #1 — the personal computer, as actually experienced by the user, is ever more of a bucket of lag with each ever-more elephantine “updated” OS. Even outside of the MS zoo. The spiraling bloat has managed to wipe out most of the theoretical gain from cheap RAM, SSD, fat net pipes, special-purpose co-processors, for virtually all “everyday” applications. Even for number-crunching, I am purchasing 2012-era irons for massively-parallel computations and have no intention to stop, it is overwhelmingly the biggest “bang for the buck” available on the market. (And — as an added bonus — skips the NSAware rampant in the post-2012 northbridges.)

    #2 — The most energy-economical “self-driving car” is clearly no car, i.e. dispensing with the delusion that transporting a mass of office plankton to/from cube farms every day is somehow an economic necessity. I don’t expect this “car” to catch on, however, the human-zoo keepers aren’t about to dispense with the familiar universal-busywork philosophy.

    #3, #4 are boondoggles, in the finest 1980s traditions. Will add up to the same fat 0 as previous iterations of same.

    #5, #6 “products”, without exception, are vehicles for peddling NSAware (“privacy-enhancing” broken-at-birth cryptosystems, “end-to-end”-which-aren’t crypto-telegraphs, continued iterations of the cipher-with-masterkey SSL circus, artfully-trapdoored “post-quantum alternatives to RSA”, elementary honeypots in the spirit of e.g. TOR, etc.) to “politically-active” ignoramuses. Observe that to this very day you and I still cannot buy, e.g. a telephone which runs on a one-time-pad — even though they’ve existed since the 1940s. “Because It Would Be Wrong”(TM)(R).

    #7 will be an interesting “slow train wreck” as the plutonium plundered by the “civilized world” from the USSR’s corpse is finally exhausted (last I knew, the production capability for it, at least in USA, has evaporated.)

    • Scott Locklin said, on January 28, 2020 at 5:53 pm

      I've considered learning enough ipotato bugman coding to create an OTP app. I assume they can be rooted by state actors though. If you create a reasonably secure piece of hardware and stick a Zener diode in it to generate random noise (for true paranoids), you can share keys with physical proximity for a real ansible. I assume such things could be created for something like $10. I guess I have better things to do than troll the NSA, but it should be possible, and it would probably cause some problems.

      • asciilifeform said, on January 28, 2020 at 6:14 pm

        Funnily enough, I’ve been trolling NSA for years — and still not gassed. See below.

        Inexpensive, fully-auditable TRNG is a solved problem. (I solved it. Linked item was my own design; several hundred units sold; partner and I disagreed re: what to do next, we parted ways. But the full source is public, anyone who wants to, can re-print.)

        I did not use zener or geiger counter as these require high-voltage supplies — which inevitably include oscillators, which in turn inevitably leak periodicity into the output.

        For the record, the idea of crypto-anything as a shitphone “app” is laughable. Dedicated iron, sans shitware, is the ticket. What’s not clear is whether anyone would buy: the unwashed seem to prefer idiot trendisms advertised in glossy magazines, rather than the real deal. (The “real deal” will never get advertised other than in “crackpot” WWW, for very obvious reasons.)

        • Scott Locklin said, on January 28, 2020 at 6:41 pm

          I had a Zener diode making random bits in my TRS-80 Color Computer; maybe it was no good somehow, but it was pretty cheap hardware. I assume you can Santha–Vazirani (or whatever randomness-extractor) away any line noise.
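
          (For what it's worth, the simplest member of that extractor family, the von Neumann debiaser, a cruder cousin of what Santha–Vazirani treat formally, fits in a few lines; the biased toy source below is made up for illustration.)

              import random

              def von_neumann_debias(bits):
                  # Look at non-overlapping pairs from a biased-but-independent source;
                  # emit the first bit of each (0,1) or (1,0) pair, drop (0,0) and (1,1).
                  out = []
                  for i in range(0, len(bits) - 1, 2):
                      if bits[i] != bits[i + 1]:
                          out.append(bits[i])
                  return out

              # toy source: a coin biased 80/20 still yields roughly unbiased output bits
              biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
              clean = von_neumann_debias(biased)
              print(len(clean), sum(clean) / len(clean))   # far fewer bits, mean near 0.5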

          Anyway the idea would be: bring two hardware RNG thingees together like the Wonder Twins ("powers activate!"), exchange a few gigs of random bits, and have the ability to ansible infinite text messages or a work week's worth of phone calls. I suppose the difficulty is managing the toxic waste; making sure your counterpart burned the bits afterwards. There may be a zero-knowledge-proof way of doing this by now for all I know. I'm no cryptographer; I only take an interest in the use of cryptanalytic techniques for predicting things.
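
          (A minimal sketch of the pad-sharing scheme described above, assuming both sides already hold the same dump of random bytes; here the pad is faked with os.urandom, where in the scheme above it would come from the shared hardware RNG. Encryption and decryption are the same XOR, and the security rests entirely on the pad being truly random, at least as long as the traffic, and never reused.)

              import os

              def otp_xor(message: bytes, pad: bytes) -> bytes:
                  # XOR against never-reused pad bytes; applying it twice recovers the message
                  assert len(pad) >= len(message), "pad exhausted: stop transmitting"
                  return bytes(m ^ p for m, p in zip(message, pad))

              pad = os.urandom(64)                          # stand-in for the pre-shared pad
              ciphertext = otp_xor(b"meet at the usual place", pad)
              plaintext = otp_xor(ciphertext, pad)
              print(plaintext)
              # both parties must then destroy (overwrite) the consumed pad bytes --
              # the "toxic waste" problem mentioned above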

          • asciilifeform said, on January 28, 2020 at 6:50 pm

            Some time, for "target practice," apply your predictor tools to the output of various mainstream RNGs — you're in for quite a few surprises. Baking a proper RNG is nontrivial if you're actually serious (for instance, you need to consider aging effects; Zeners do not age gracefully).

            Re: OTP — we actually considered this as a product; the difficulty turned out to be in an unexpected place — it is very tricky to reliably send modulated data over modern-day telephony (of whichever type, GSM or the ubiquitous VOIP garbage masquerading as land line these days.) Whereas if you make it a pure VOIP box, you’re stuck with centralized points of failure (the extant net, as available to typical consumer, makes truly peer-to-peer connectivity virtually impossible.)

        • rademi said, on January 28, 2020 at 8:13 pm

          I am pretty sure that what we are seeing is abuse of the “NSA brand name”, and not actual NSA activity.

          I remember, when I read some of the stuff that Snowden dumped onto the internet, that it indicated the NSA mostly just liaised with the FBI for domestic issues (which makes sense — that's how bureaucracies operate — they mostly just pass the buck to whoever is supposed to face the problems and hopefully has the time and energy to do so).

          But I don’t think many people actually tried to read much of that mess. (Which is probably why that data dump was a mistake.)

          • asciilifeform said, on January 28, 2020 at 8:19 pm

            Yes, the linked item was “an abuse” of their trademark, I perpetrated it.

            This is the first time I saw anyone confuse it, however briefly, with the original, however.

            • rademi said, on January 29, 2020 at 5:14 pm

              I didn’t mean the linked item – I meant the “nsaware” items mentioned.

              • rademi said, on January 29, 2020 at 5:20 pm

                (I mean… nsa people themselves probably think that nsaware stuff is kind of funny — this kind of craziness totally distracts from whatever they’re doing, and as long as their people aren’t actually involved, anyone trying to track this stuff down isn’t going to be bothering them except in a vague existential sense. It’s still annoying to have to deal with, though, and for those of us outside the nsa, it’s using that “brand” to annoy us.)

  5. Javier Acuna said, on January 28, 2020 at 6:50 am

    Hi Scott,

    I’m happy to see that you’re posting again.

    Here is a link to interesting predictions made by Rodney Brooks, inventor of the Roomba: https://rodneybrooks.com/blog/ He is also skeptical about driverless cars but much more optimistic about electric cars.

    Cheers,

  6. Rohan Jayasekera said, on January 28, 2020 at 8:04 am

    Fun story about old hardware being more cost-effective than new: back in 1975 or possibly slightly earlier (source: https://rogerdmoore.ca/INF/ERInstallationHistory.htm by the late great Roger D. Moore), the APL timesharing company I.P. Sharp Associates increased its computing power by replacing its pair of IBM 370/145 mainframes with a pair of IBM 360/75 mainframes. That's right, they went back an entire generation. The 75s were very powerful and, because they were old gear, were available at a relatively low price. I had wondered whether they might be less reliable, but the uptime stats published by the company (I was a mere user at the time, in high school, before later becoming an employee) just continued to improve as the system software, which was 100% written by the company with nothing from IBM, kept improving in what today would be called a DevOps environment. Hardware downtime was exceedingly rare; someone once admitted to me that the reason the system had crashed was that he'd tripped over a cable that he'd left exposed.

  7. tuhdo said, on January 28, 2020 at 1:00 pm

    If you can make use of all available cores, a 3970X with 32 cores / 64 threads at 4GHz can give more than a 10x speedup vs. an old Xeon.

    It depends on your tasks though.

    • Scott Locklin said, on January 28, 2020 at 5:43 pm

      Sticking more cores on silicon isn't interesting; it just makes the compiler's or programmer's job more difficult. Compilers are shit at taking advantage of this, unless you code in Fortran or type "make -j" a lot. The reality is, for any kind of real-world work, the latest from Intel or AMD performs a lot like the 2012-era machine, with a much lower cost of ownership on the older hardware.

      (they don’t go back to Sandy Bridge, but as it was generally better than Ivy Bridge…)
      https://cpu.userbenchmark.com/Compare/Intel-Core-i5-9600K-vs-Intel-Xeon-E3-1270-V2/4031vsm11264

      • asciilifeform said, on January 28, 2020 at 6:18 pm

        On top of this, recent x86 iron thermally downclocks when you occupy all of the cores.

      • tuhdo said, on January 28, 2020 at 10:42 pm

        UserBenchmark is not a good site for measuring CPU performance. For example:

        – Upgrading from an intel core i7 2600k: https://www.anandtech.com/show/14043/upgrading-from-an-intel-core-i7-2600k-testing-sandy-bridge-in-2019/6

        For specific tasks, with improved AVX instructions, the performance doubled. For general-purpose tasks, taking the gaming benchmarks as examples, the per-core performance improvement is around 20% on average, surely higher when overclocked to 5GHz, while consuming less power and having more cores.

        But ever since Sandy Bridge, Intel had no competition until recently, so sure, your Xeon is still good for now. I too still buy a lot of Sandy Bridge hardware for the price.

        Having more cores is definitely worth it, as your operating system is already a well-threaded piece of software that can distribute tasks among cores quite well. Currently, most programs are still single-threaded, but thanks to the OS you can simply use the cores by opening more programs and letting the OS handle it. It is useful if you want to open many heavyweight programs at once, like a cluster of Windows 10 VMs.

        Software was bottlenecked by Intel's quad-cores and minor generational improvements for so long. As more cores are pushed to the mainstream, given time, software will follow suit. Parallelization should not be a job only for compilers, but for programmers as well. It is now a good time for programmers to start learning proper multi-core programming.

        • Scott Locklin said, on January 29, 2020 at 12:23 am

          Sure, I use the cores I have access to sometimes, and AVX is nice when you have a language that can take advantage of it (still rare… literally 8 years later), but a 2x or 10x computation speedup on a single thread goes a LOT farther. They can't do this. Barring huge breakthroughs, we're not getting a 10x improvement next decade. Maybe not even 2x. That has huge consequences, and people aren't talking about it.

          To say nothing of the fact that many algorithms are inherently single threaded, and people tried the multicore meme with the transputer and never really got it to work outside of embarrassingly parallel computations.

          • asciilifeform said, on January 29, 2020 at 12:43 am

            I have found that the various “fancy” SIMD instructions are not necessarily a win, even when used in carefully hand-written assembly.

            Presently I suspect that they tie up ALU pipe and are more or less a straight scam, i.e. a clever scheme to sell new irons in the face of a thoroughly dead&buried Moore’s Law. (And yes, it is possible to write routines where they appear to win — it’s what benchmark writers are paid for…)

            • Scott Locklin said, on January 29, 2020 at 4:30 am

              I’m assured that the J updates to the interpreter got considerable speedups from fiddling with AVX; which is pretty funny as it already regularly beat compiled code in Rust, D and so on. The J team is really exceptional though. And of course, an array oriented language is going to get the most mileage out of something like AVX.

              https://code.jsoftware.com/wiki/Scripts/Simple_AVX_Benchmark

              You might have a look at their diffs.

              • asciilifeform said, on January 29, 2020 at 5:21 am

                IMHO the question of “do SIMD instructions win?” can only be resolved by comparing logically-equivalent and carefully hand-tightened asm routines, for a concrete problem, “with vs. without” — in a compiler, the appearance of “SIMD wins” can just as easily result from a back-end which spits comparatively poor SIMD-less code as from any intrinsic win of SIMDism.

                Now, it is possible that my hands grow from my arse; but in my experience thus far, SIMDistic routines in fact invariably eat more clock cycles than a thought-out conventional equivalent. On my particular irons. (2011-2013 era AMD Opterons.)

                • Scott Locklin said, on January 29, 2020 at 6:16 am

                  5x on matrix multiplies and array finds looks pretty good to me. That's about 90% of where my code lives. There's more where that came from; LAPACK with tuned BLAS will always do better (probably another 10x), but it has to be tuned (using custom ATLAS BLAS) to individual machines. Of course, the mere fact that code is such garbage that you need something like ATLAS to get your money's worth kind of says it all: software (and hardware) is shit.
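
                  (As a crude illustration of how much of the win lives in the tuned library rather than in the silicon, here is a toy comparison of a naive triple-loop matrix multiply against numpy's BLAS-backed one. Sizes are invented for illustration, and the exact ratio depends entirely on which BLAS numpy was built against.)

                      import time
                      import numpy as np

                      def naive_matmul(a, b):
                          # textbook triple loop: no blocking, no vectorization, no tuned BLAS
                          n, m, p = a.shape[0], a.shape[1], b.shape[1]
                          c = np.zeros((n, p))
                          for i in range(n):
                              for j in range(p):
                                  s = 0.0
                                  for k in range(m):
                                      s += a[i, k] * b[k, j]
                                  c[i, j] = s
                          return c

                      a, b = np.random.rand(200, 200), np.random.rand(200, 200)
                      t0 = time.time(); c1 = naive_matmul(a, b); t_naive = time.time() - t0
                      t0 = time.time(); c2 = a @ b; t_blas = time.time() - t0
                      print(f"naive {t_naive:.2f}s, BLAS-backed {t_blas:.4f}s, same result: {np.allclose(c1, c2)}")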

                  • asciilifeform said, on January 29, 2020 at 4:29 pm

                    If juggling matrices routinely — have you tried GPUism?

                    • Scott Locklin said, on January 29, 2020 at 9:16 pm

                      A little bit; it doesn't help when you're doing matrix jiggles on hyoooge (or even merely beeeeeg) data sets (aka most of machine learning and quantitative finance), due to memory bus latency issues, at least without heroic interventions. It also doesn't help if you have to write a bunch of code to do tricky things like machine learning or stats algos in modified PDP-11 assembler, aka Cuda/C.

                      I'd be in the market for a high-level language that allowed me to suck the juices of the many cores for the corner cases where they are useful for smaller data and worth the time to code up something custom. GPUs have fairly slow clocks but lots of cores and good internal memory latency (there effectively isn't any latency last I checked, which is probably half the advantage of the things; a giant shared cache is wonderful, and giant shared caches are probably where classic CPUs will eventually move), so they're appropriate for dweeb-learning type tasks. Generally speaking I'm not so interested in that (like everything popular: overrated), and neither Theano nor Torch really meets my needs for a higher-level language that targets GPUs. J would be perfect for the obvious reasons, and an individual made some preliminary explorations towards this. A complete R runtime which did the same thing might be even better, despite the horror that is R as a language. I know Haskell and Java have a little work on this also, but I'm not investing further in either set of tooling, for reasons I am sure you can understand.

                      At the end of the day, doing map reduce on a standard CPU (or many) and hitting the gym on long computations hits the sweet spot of “fast enough” and “I don’t have to spend all my time coding up ephemeral bullshit on the GPU.”

  8. Jeremy Malloch said, on January 28, 2020 at 11:25 pm

    I see you omitted conformal prediction from your data science description – are you still bullish on it?

    • Scott Locklin said, on January 29, 2020 at 4:15 am

      I include Vovk and company's ideas in with the sequential aggregation of experts ideas; they're actually somewhat related. Other stuff with juice: information geometry, Dempster-Shafer (so I'm told; I haven't deep-dived), lots of new results in numerical linear algebra, maybe some of the active learning stuff, and some other things I won't mention.
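
      (For readers wondering what "sequential aggregation of experts" looks like concretely, here is a minimal sketch of the textbook exponential-weights forecaster from the prediction-with-expert-advice literature that the term refers to. The toy experts, the squared-error loss and the learning rate are all invented for illustration.)

          import numpy as np

          def exp_weights_forecast(expert_preds, outcomes, eta=0.5):
              # Combine expert predictions online, exponentially down-weighting
              # each expert by its squared error after every round.
              n_rounds, n_experts = expert_preds.shape
              w = np.ones(n_experts) / n_experts
              combined = np.empty(n_rounds)
              for t in range(n_rounds):
                  combined[t] = w @ expert_preds[t]            # weighted-average forecast
                  losses = (expert_preds[t] - outcomes[t]) ** 2
                  w *= np.exp(-eta * losses)
                  w /= w.sum()
              return combined

          # toy usage: three "experts" of varying quality predicting a sine wave
          rng = np.random.default_rng(0)
          truth = np.sin(np.arange(200) / 10.0)
          experts = np.column_stack([truth + rng.normal(0, s, 200) for s in (0.1, 0.5, 2.0)])
          print("combined MSE:", np.mean((exp_weights_forecast(experts, truth) - truth) ** 2))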

  9. Cameron B. said, on January 30, 2020 at 5:45 am

    It’s great to see you back, Scott. Years ago a friend – and Baylor College of Med. researcher – championed a shirt emblazoned with “2030” (or some nearby year) on the front. People would ask what it meant and he’d spread his gospel. Apparently at the current rate the entire US population will be obese by 2030. So if nothing else we’ll be more attractive by 2030.

    • Bryce said, on January 30, 2020 at 6:20 pm

      This is part of the reason US military supremacy is at an end. Sure, our tech works against other state powers, but we can’t find men who are qualified to serve. Too many young men are either physically unfit, have a criminal record, are too stupid, or have other options. It definitely doesn’t help that the Left is declaring our rights illegal and everyone an American so that there is no point in serving to defend the nation.

    • Scott Locklin said, on January 30, 2020 at 8:54 pm

      #national_goals achieved!

  10. veilwar said, on February 19, 2020 at 2:20 pm

    In regards to reusable spacecraft, surprised that you put the percentage so low, and make no mention of SpaceX. The Falcon has already achieved first stage reusability, and I’d think that SpaceX has at least a shot at making Starship work on the basis of their demonstrated capabilities.

    • Scott Locklin said, on February 19, 2020 at 2:28 pm

      As I have already stated, “first stage reusability” is a LARP; it doesn’t work if you actually use the full heavy lift capability of the rocket. And the rest of it is vaporware. Face it: the US can’t do manned space flight without the Russians. Probably won’t be able to for the rest of our lives, barring some bloody revolution involving the death of millions of government bureaucrats.

      • Raul Miller said, on February 19, 2020 at 4:31 pm

        I’m less than convinced here.

        I do see several advantages for the Russians, some of which will persist. And they do currently have a significant lead on us. But the biggest one — that we bought into the "efficient market hypothesis" — seems to be evaporating.

        That said, it might be that you are aware of some issues which I am not? (That in itself is not convincing, of course, it just means that I might be convinced at some later point.)

        • Scott Locklin said, on February 19, 2020 at 11:24 pm

          Well, there's the fact that nobody has built a serious space capsule in the 20-odd years that we've known this needs doing. I mean, Musk talks about it, builds things that look cool. Boeing … we all know about Boeing. We might build something that gets us back to LEO, at 10-100 times the cost of just hitching a ride with the Russians, but thinking about nonsense like the 2024 return-to-the-moon goal: I don't see that ever happening. Humans may return to the moon; North Americans might also: I don't think US citizens will.

          • Raul Miller said, on February 20, 2020 at 12:46 am

            Cost is not in and of itself a reasonable metric for most activities, and especially for most things involved in space travel.

            The assumption we make is that there’s a fixed relationship between cost and value. And, we assume, that it’s competition which makes that be the case.

            But, this only holds for tightly regulated contexts. Absent adequate regulations, and adequate participation, you see massive spikes in economic activity. Bursting bubbles is the sort of thing where regulation partially smoothes the market.

            Cost winds up being an approximate measure of how much time people are spending on a subject, but it says very little about what they are doing, what their motivations are, what their skill sets are, and so on. But, also, in the context of Russia, you’re not seeing cost as a measure of time invested, because Russia doesn’t run on dollars.

            Anyways… cost is highly variable except where regulations dampen that.

            That said, right now, you are right, at least sort of — the USA no longer has much of any industrial capacity to speak of. And that drives up our costs in a variety of areas and cripples our management vision across the board. This is a problem that can be fixed, but it will take some time, and the efforts will meet some resistance.

            And, *that* said, "we" also need to become a lot more suspicious about some of our problems. For example, SpaceX's crewed launch system blew up on the pad, and there wasn't enough monitoring gear on site to make an adequately informed decision about what the causes were. So now that program is delayed, and NASA is annoyed with them. See also: cost is highly variable…

            • Scott Locklin said, on February 20, 2020 at 2:20 pm

              Let’s put it this way, as I drink my morning coffee, I drink it from a Skylab cup my parents bought for me back in 76 or something. It’s extremely well made; the logo I think is actually baked into the ceramic. It even has an aesthetic slight curve to it that was probably difficult to achieve. Not only is the US incapable of building something like Skylab and launching it into space (remember the ISS is basically a product of the 1970s era technology of the US Space Shuttle), it’s incapable of building the freaking coffee cup.

              I think it’s well beyond industrial capacity; the US has destroyed its human capital as well. The people who could build such things are generally not well employed; guys I know in Aerospace are pretty miserable and only really persist out of optimism for the mission.

              • Raul Miller said, on February 20, 2020 at 4:12 pm

                Yeah, I blame Reagan for essentially shutting us down.

                And, yes, without the experiences that come with industry, people (and the nation) become incompetent in those areas.

                That said, people who aren’t willing to put up with misery to get where they think they need to go are not to be trusted. So the guys you know, in Aerospace, sound like the good guys.

              • glaucous_noise said, on February 20, 2020 at 4:45 pm

                Yep, it's horribly depressing. PhD student in EEE here, finishing my last year, and the astronomical difficulty of finding a job in either a government or industrial position that isn't pointless or pillaged by financial engineers is demoralizing.

                The good news is a nightmare freight train of a recession is barrelling towards the US at the moment and will probably trigger a global reset. From my vantage point the Europeans have made far smarter scientific investments and if they don’t implode too (not a foregone conclusion) there may be hope yet.

  11. OB said, on February 24, 2020 at 6:54 am

    Any predictions on how climate change (and ever larger natural disasters) will affect scientific progress, and tech and finance industries?

    • Scott Locklin said, on February 24, 2020 at 3:53 pm

      Being wrong about predictions isn’t interesting.

      • glaucous noise said, on February 27, 2020 at 4:00 pm

        I predict that the coronavirus will cause a 3,000 point drop in the Dow Jones between Feb 24 and Feb 27

  12. gbell12 said, on February 27, 2020 at 6:10 am

    Hi Scott – Have you looked at Tim Morgan’s work and thesis that declining ERoEI (or ECoE) is the cause of the malaise we see in the West (and in productivity)?

    https://surplusenergyeconomics.wordpress.com/professional-area/

    • Scott Locklin said, on March 9, 2020 at 12:43 pm

      Yep; I saw that; not quite what I had in mind, but it is very possible that this becomes a pervasive societal problem in the West. Which, FWIW, is one of the reasons why I work at Brave.

      • bobbybabylon said, on March 10, 2020 at 8:16 am

        Can you explain why compiler engineers will become rare and in demand? I just began studying parsing and combinators myself – basic stuff right now like baby Lisps in C and whatnot.

        • Raul Miller said, on March 10, 2020 at 3:13 pm

          I think I see a similar problem, but not quite the same.

          Here's an issue: it's relatively easy to put together a new, partial implementation of a compiler, especially when you also get to make up the language (a toy sketch at the end of this comment shows just how little it takes).

          But retaining existing systems is much, much more difficult. Fixes require more study. It’s fairly trivial to incorporate bad ideas. And, you need quite a lot of insight and experience to make anything better.

          Meanwhile… the underlying equipment varies, based on the manufacturer and designer. And the standards that mostly get taught are almost invariably aimed at other issues (which assume a working infrastructure). And then there’s shenanigans… (Malware hasn’t gone away, but we do seem to be getting better at ignoring it, for example.)

          It's not impossible, but this combination of factors puts a heavy burden on the people who try to keep things on track.
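
          To make the contrast concrete, here is roughly what a "new, partial implementation" costs: a toy compiler for a made-up expression language, sketched in Python (everything here is invented for illustration). It tokenizes, parses, emits code for a pretend stack machine, and runs it:

            # A toy "compiler" for a made-up expression language: source text ->
            # parse tree -> stack-machine code. Purely illustrative.
            import operator
            import re

            def tokenize(src):
                # numbers and single-character operators only; anything else is skipped
                for num, op in re.findall(r"(\d+)|([+\-*/()])", src):
                    yield ("num", int(num)) if num else ("op", op)

            def parse(tokens):
                toks = list(tokens) + [("end", None)]
                pos = [0]

                def peek():
                    return toks[pos[0]]

                def eat():
                    pos[0] += 1
                    return toks[pos[0] - 1]

                def expr():    # expr := term (('+' | '-') term)*
                    node = term()
                    while peek() in (("op", "+"), ("op", "-")):
                        node = (eat()[1], node, term())
                    return node

                def term():    # term := factor (('*' | '/') factor)*
                    node = factor()
                    while peek() in (("op", "*"), ("op", "/")):
                        node = (eat()[1], node, factor())
                    return node

                def factor():  # factor := number | '(' expr ')'
                    kind, val = eat()
                    if kind == "num":
                        return ("num", val)
                    if (kind, val) == ("op", "("):
                        node = expr()
                        eat()  # the closing ')'
                        return node
                    raise SyntaxError(f"unexpected token {val!r}")

                return expr()

            def compile_expr(node):
                # emit postfix instructions for a trivial stack machine
                if node[0] == "num":
                    return [("push", node[1])]
                op, left, right = node
                return compile_expr(left) + compile_expr(right) + [("binop", op)]

            def run(code):
                ops = {"+": operator.add, "-": operator.sub,
                       "*": operator.mul, "/": operator.truediv}
                stack = []
                for instr, arg in code:
                    if instr == "push":
                        stack.append(arg)
                    else:
                        b, a = stack.pop(), stack.pop()
                        stack.append(ops[arg](a, b))
                return stack.pop()

            print(run(compile_expr(parse(tokenize("2*(3+4) - 5")))))  # -> 9

          Writing this takes an afternoon. None of the hard parts (real targets, decades of accumulated fixes, other people's code, standards, deprecations) show up in a toy, which is the point being made above.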

          • bobbybabylon said, on March 20, 2020 at 6:24 am

            >But retaining existing systems is much, much more difficult. Fixes require more study. It’s fairly trivial to incorporate bad ideas. And, you need quite a lot of insight and experience to make anything better.

            That makes perfect sense. In a sad way. So we are going to need people skilled in maintaining gcc and llvm/clang. Are the compiler devs dying/retiring out? And what about AOT/JIT for scripting languages?

            • Raul Miller said, on March 20, 2020 at 1:50 pm

              Compiler devs do die out — old age takes its toll (and occasionally, other problems take their toll). llvm/clang has an advantage over gcc there: its developers are newer and likely younger. So that's one part of the picture.

              Another part of the picture is that you need a culture of people who prefer maintenance activities to preserve older efforts. But you also need goals for them, some of which are very tough and forward looking (or they’ll get bored), some of which are very easy and achievable (or they won’t make progress).

              And then there's the problem of spurious deprecations. More and more, I've been seeing deprecations announced with no surrounding reasoning, apparently no rational thinking. If you deprecate something without an obvious replacement, what you're really doing is chasing people away. And if you get enough people to follow your lead, you've succeeded in crippling some part of a user base.

              (This (spurious deprecation) isn't so much a gcc/clang issue, but a nearby example, in the context of OpenGL, is the deprecation of OpenGL 1.0. The motivation there almost makes sense (OpenGL ES — for mobile phones — needs to be a subset of OpenGL as a whole). But without replacements for all the OpenGL 1.0 books and reference works that are designed to support the same ideas in OpenGL ES, what this accomplishes is deprecating OpenGL students, and a withering of the community. Sadly, this isn't the only context where I've seen "deprecation" being abused (really: someone being lazy); it's just the first example that comes to mind for me.)

              Short form: entropic issues hit everywhere, and coping with them takes effort, motivation, coordination and planning.

      • bobbybabylon said, on May 14, 2020 at 10:35 pm

        Is this more in line with what you meant?

        https://nakedsecurity.sophos.com/2020/05/14/woman-stalked-by-sandwich-server-via-her-covid-19-contact-tracing-info/

        I’ll see if I can get the people at my work to look into Brave.

  13. Previsões para 2030 - Slonik said, on April 22, 2020 at 2:14 pm

    […] Ten predictions for 2030 […]

  14. gmachine1729 said, on May 13, 2020 at 1:27 pm

    I appreciate seeing someone in the English-language press be so realistic about how we've reached a bottleneck in science and technology, and who also thinks "computer scientists" are full of shit. Many American "computer scientists" are essentially scientifically illiterate. And most software engineers nowadays don't even realize that software engineering isn't actually real engineering; it's more like a trade, and different from typical trades, which involve people doing physical, hands-on operation and building of things. Software engineering is quite an anomaly: a craft (as opposed to engineering) practiced in a virtual computer world.

    I especially liked your https://www.takimag.com/article/the_myth_of_technological_progress/

    I disagree, though, that American military supremacy will remain for 20 years. My take is that, militarily, Russia has caught up a great deal and even surpassed America in many ways over the last decade or two. It was pretty much on par with America throughout the Cold War, only to decline some in the 90s. There is also China's rise. Also, Americans are becoming less and less competent over time, America is not very good organizationally, and then there are the idiots in America's decision-making class.

    Since I am fluent in Chinese (and also know some Russian), I can tell you that the English-language press is much more "full of shit" than the press in other languages.

    I am annoyed by the likes of Stephen Hawking and Brian Greene and Michio Kaku. The truth is that physics, especially fundamental physics, is already a solved problem. The subatomic particle problem was pretty much solved by the end of the 70s, and that stuff didn't really lead to any technological applications either. Most of the physics that actually affects our everyday lives in the form of technology is classical mechanics, electromagnetism, and statistical mechanics. Quantum effects really only manifest at the subatomic level. I believe even with neutrons, which are roughly 1800 times heavier than electrons, you can ignore quantum for practical purposes. Even in something like the Manhattan Project, relativity (which is even more useless for practical purposes, beyond using E = mc^2 for some scattering calculations) and quantum only played a peripheral role, from what I gather. Most of it is still classical physics of a very "applied" nature. I know almost nothing about semiconductors, but even there, the electronic energy states and conduction bands, which involve some quantum mechanics, are only one small part of it.
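
    For a sense of the scales involved here, a back-of-envelope sketch (standard constants only; 300 K is just a convenient reference temperature): the thermal de Broglie wavelength h / sqrt(2*pi*m*kB*T) is the usual rule of thumb for when quantum effects start to matter relative to interatomic spacings.

      # Back-of-envelope: thermal de Broglie wavelengths at room temperature.
      # Quantum behaviour matters roughly when this length is comparable to
      # the spacing between particles (about 0.1-0.3 nm in a solid).
      import math

      h  = 6.626e-34   # Planck constant, J*s
      kB = 1.381e-23   # Boltzmann constant, J/K
      T  = 300.0       # reference temperature, K

      masses = {
          "electron": 9.109e-31,   # kg
          "neutron":  1.675e-27,   # kg, roughly 1840 electron masses
      }

      for name, m in masses.items():
          lam = h / math.sqrt(2.0 * math.pi * m * kB * T)
          print(f"{name:8s}: thermal de Broglie wavelength ~ {lam * 1e9:.2f} nm")

      # electron: ~4.3 nm   neutron: ~0.1 nm (about one atomic spacing)

    At room temperature an electron's wavelength spans tens of atomic spacings while a thermal neutron's is roughly one, which is why electrons in a solid are unavoidably quantum objects, and also why thermal neutrons diffract off crystals.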

    I even once heard Michio Kaku say that we need more physicists. That's just fucking ridiculous, man; there's a glut of them who can't find jobs. I also get annoyed when people try to get "philosophical" about physics. It doesn't do any good for somebody who wants to actually understand physics, which is more about correctly applying objective physical principles and the mathematics behind them, and matching up with the experimental evidence/data. Science and science fiction are completely different things. And putting an Einstein quote like "imagination is more important than knowledge" in a high school math class in America is a terrible way to mislead young kids who don't actually know anything. Being good at science is about thinking coherently and pertinently about natural phenomena, not imagining a bunch of spurious relationships and "weird superfluous things", which is ever more common in students who lack "historical grounding". There's a really unhealthy, misguided "look towards the future" culture and mentality in America.

    • glaucous_noise said, on May 13, 2020 at 3:11 pm

      “english language press is more full of shit than in other languages”

      Care to qualify this statement? The English-language press is probably the largest in the world, and the Chinese-language press is dominated (I assume) by the Chinese Communist Party, so I'm not buying that they aren't on par in terms of bullshit. The scale of the American-centric, English-language media dwarfs all others, so I would guess you have some kind of sampling bias; bullshit is a universal favorite of humanity no matter what language they speak, and I've never found the arguments that Americans are definitively worse in this regard compelling.

      I definitely am on board with what you're saying about computer "scientists"; any discipline that has to append the word "science" to its name is probably an oxymoron (try it for yourself: political "science", social "science", military "science", pole dancing "science").

      I am a semiconductor engineer, and while I see where you are coming from regarding quantum, and why you may have drawn the conclusions you have, they are not really correct. First, it is crucial to note how poorly we understand chemical and biochemical systems at the single-molecule or single-reaction scale; a fair amount of that can be chalked up to how poorly understood quantum mechanics is. Regarding semiconductors, there is nothing peripheral about band theory or field theory. They are too difficult to use in modelling in many cases, but they are nevertheless needed in research. For new materials, the interpretation of experiments is impossible without solid state physics, and modeling (e.g. tight binding) is ubiquitous. When studying transport in new materials, interpreting the results requires a sophisticated grasp of graduate-level topics in scattering theory and phonons.
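
      To make "tight binding" concrete for anyone reading along, here is a minimal sketch (one dimension, one orbital per site, hopping energy and lattice spacing invented purely for illustration). Even this toy model gives you a band, a bandwidth, and an effective mass at the band edge, which is the kind of number experiments get interpreted against:

        # Minimal 1D tight-binding band: one orbital per site, hopping t, spacing a.
        # All numbers invented for illustration.
        import numpy as np

        hbar = 1.0546e-34          # J*s
        q    = 1.602e-19           # joules per eV
        a    = 3.0e-10             # lattice spacing, m (invented)
        t    = 1.0 * q             # hopping energy, 1 eV (invented)
        eps0 = 0.0                 # on-site energy

        k = np.linspace(-np.pi / a, np.pi / a, 1001)   # first Brillouin zone
        E = eps0 - 2.0 * t * np.cos(k * a)             # single-band dispersion, J

        bandwidth_eV = (E.max() - E.min()) / q         # = 4t, so 4 eV here

        # Effective mass at the band bottom: m* = hbar^2 / (d^2E/dk^2) = hbar^2 / (2 t a^2)
        m_star = hbar**2 / (2.0 * t * a**2)
        m_e    = 9.109e-31
        print(f"bandwidth      = {bandwidth_eV:.1f} eV")
        print(f"effective mass = {m_star / m_e:.1f} electron masses")

      Real band-structure work uses many orbitals in three dimensions and fits to experiment or first-principles calculations, but the basic objects (dispersions, bandwidths, effective masses) are the same.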

      Because semiconductor devices are vastly more complex than lone materials, some of the physics is simplified, as engineering goals are not the same as scientific ones. However you cannot disregard the body of research that goes into the materials which precede the device research. When judging new III-V materials or materials such as GaN, engineers rely upon the work done by applied physicists and chemists in determining band gaps, effective masses, and phonon properties, the latter of which are incomprehensible without band theory.

      Finally, computer modelling has had a profound impact on aerospace, mechanical, chemical, and electrical engineering (computational fluid dynamics for the first two, transport modeling of various kinds for the third, and computational electromagnetics for the last). Computer models are also present in semiconductor TCAD, where they use models of scattering derived from band theory (and of course, fudge factors and creativity too), and transport kernels derived from Boltzmann's equation. However, quantum corrections have been used for decades to account for many problems (for instance, the fact that without them, the density at the oxide-semiconductor interface diverges). Notably, these corrections are based upon Bohmian mechanics, not conventional Copenhagen mechanics. Indeed, the vast majority of electrical engineers are subconscious Bohmians, although they are not very philosophical and tend to frown upon the woo that comes out of the physics department. A good example is tunneling time. It has been known since at least the 80s that tunneling is not instantaneous. With Bohmian formulas (and thinking) you can obtain good-enough models easily. With Copenhagen "thinking" (if you want to call it that), you can write many papers and obtain no formulas or results.
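
      For anyone curious what the "Bohmian" correction actually looks like, the basic object is the quantum potential Q = -(hbar^2/2m) (d^2 sqrt(n)/dx^2) / sqrt(n), built from the carrier density n. A minimal sketch with an invented Gaussian density profile (production density-gradient models use a calibrated, scaled form of this term, coupled self-consistently to Poisson and the transport equations):

        # Bohm quantum potential for a given 1D carrier density profile n(x):
        #   Q(x) = -(hbar^2 / 2m) * (d^2 sqrt(n) / dx^2) / sqrt(n)
        # The Gaussian profile below is invented purely for illustration.
        import numpy as np

        hbar = 1.0546e-34            # J*s
        m    = 0.26 * 9.109e-31      # effective mass, ballpark for Si conduction electrons
        q    = 1.602e-19             # joules per eV

        x  = np.linspace(-10e-9, 10e-9, 2001)       # 20 nm window
        dx = x[1] - x[0]
        n  = 1e24 * np.exp(-(x / 3e-9) ** 2)        # invented density profile, m^-3

        s   = np.sqrt(n)
        d2s = np.gradient(np.gradient(s, dx), dx)   # second derivative of sqrt(n)
        Q   = -(hbar ** 2 / (2.0 * m)) * d2s / s    # Bohm quantum potential, J

        print(f"Q at the density peak: {Q[len(x) // 2] / q * 1e3:.0f} meV")

      Roughly speaking, a term of this form is what keeps the simulated carrier density from piling up unphysically right at an oxide interface in density-gradient simulations.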

      Then there is a quantum frontier in nanoelectronics, which is a multi-million dollar business these days. The biggest TCAD company in the world acquired a many-body transport code from a big university for something like $10 million, if I recall. Small peanuts by some standards, for sure, but there is a broad awareness that quantum effects emerge at very small scales, even at high temperatures, although I myself hold reservations about how badly new models are needed. In other words, to extend computer modelling down to the nanoscale, you need to account for quantum effects, either by proving they are irrelevant or by including them.

      Finally, there are niche and explicitly quantum devices. I wouldn't call lasers niche per se, but they cannot be understood, or even conceptualized, without quantum mechanics. SQUIDs are widely used in military and medical applications and rely explicitly on quantum interference. Countless proposed devices are piggybacking on the quantum computing hype (although everyone I know in engineering is highly pessimistic about quantum computing).

      Hopefully this long-winded reply gives you a sense of the relevance of quantum to applications. Personally, I take my PhD advisor's position: "Some of us think Bohr had his head up his ass!" I think that explains quantum's problems.

      • glaucous noise said, on May 13, 2020 at 4:13 pm

        also, I want to add one more thing.

        Many engineers I've met think quantum is irrelevant because it is not transparent to them where it is applied. But the same can be said about non-Markovian, nonlinear, or non-equilibrium physics. Practically all engineering takes place in an approximately Markovian, approximately linear, approximately equilibrium world. Obviously we make things that are wildly non-equilibrium/non-Markovian/nonlinear, like jets and transistors, but we have hardly any ability to control the precise properties of these systems. A transistor is just a sledgehammer that slams electrons against a rail; the fact that engineers ever got away with using simple diffusion equations is (a) no longer true, according to many engineers I know in industry who use sophisticated Monte Carlo and hydrodynamic models, and (b) a consequence of how ludicrously crude our technology was in the 50s by comparison to today.
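
        For reference, the "simple diffusion equations" in question are things like dn/dt = D d^2n/dx^2, which you can integrate with a few lines of explicit finite differencing. A minimal sketch with invented numbers (real TCAD couples this to Poisson's equation, drift, recombination, scattering models, and the quantum corrections discussed above):

          # Explicit finite-difference solution of the 1D diffusion equation
          #   dn/dt = D * d^2 n / dx^2
          # for an invented pulse of carriers. Numbers are illustrative only.
          import numpy as np

          D  = 36e-4              # diffusion coefficient, m^2/s (ballpark for electrons in Si)
          L  = 2e-6               # 2 micron domain
          nx = 201
          dx = L / (nx - 1)
          dt = 0.4 * dx ** 2 / D  # explicit scheme needs dt <= dx^2 / (2 D)

          x = np.linspace(0.0, L, nx)
          n = np.exp(-((x - L / 2) / 0.05e-6) ** 2)    # normalized carrier pulse

          for _ in range(500):
              lap = (np.roll(n, -1) - 2.0 * n + np.roll(n, 1)) / dx ** 2
              lap[0] = lap[-1] = 0.0                   # crude fixed ends
              n = n + dt * D * lap

          rms = np.sqrt(np.sum(n * (x - L / 2) ** 2) / np.sum(n))
          print(f"rms pulse width after {500 * dt * 1e12:.1f} ps: {rms * 1e6:.2f} um")

        The pulse just spreads like sqrt(D t), which is about all this level of modelling can say; hence the Monte Carlo and hydrodynamic machinery in modern tools.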

        To your point though, it is remarkable how little we understand about the four non’s: nonlinear, nonclassical, non-Markovian, and non-equilibrium. But quantum is just one area of extreme physics that engineering barely has any control over; you could say the same about the other three.

        The absence of these topics is more a consequence of how absurdly difficult they are to understand and engineer than of their irrelevance. They are a major cause, along with politics, of the slow progress of technology.

        • Raul Miller said, on May 13, 2020 at 5:51 pm

          Eh… to some degree.

          Moore’s law was a rule of thumb which turned out to be useful from an organizational perspective. That it has been useful for as long as it has is surprising and perhaps even disturbing.

          But, also, computers could be at least an order of magnitude faster than what we currently have. We can see this by comparing the switching speeds on computer busses with the switching speeds used in telecommunications.

          There are also some other problems in the way, though: architectural (in the computer sense), organizational, political, economic, structural, historical, even medical, among others. And I do not know how all of those will play out. On the other hand, the outfits that screw this up too badly wind up on the bad end of those realms…

      • gmachine1729 said, on May 14, 2020 at 4:11 am

        Thanks for the reply. I actually don't know quantum mechanics well. I mean, I know how the Schrodinger equation works, have seen textbook examples such as quantum tunneling, and know of the Born-Oppenheimer approximation. I know Planck's radiation law and relativistic mechanics more deeply, and, in not terribly great detail, the Pauli exclusion principle and Bose-Einstein and Fermi-Dirac statistics. In case people here don't believe me, I'll post some expository physics I wrote (in Chinese):

        Derivation of Lorentz transformations and mass energy equivalence
        https://gmachine1729.livejournal.com/164658.html
        https://gmachine1729.livejournal.com/165999.html

        Classical gauge theory
        https://gmachine1729.livejournal.com/163812.html

        Derivation of 3D sound wave equation
        https://gmachine1729.livejournal.com/163152.html

        I don't know anything about semiconductor physics; I tried to learn a bit about it, but realized my background in classical physics was not good enough, so I learned more of that instead. In particular, I don't know anything in detail about diffusion, which is heavy on statistical mechanics. So of course, I am not exactly qualified to judge. But I am somewhat qualified to: I clearly know a good amount of classical physics well. So I gather that quantum mechanics, while important for semiconductors, is only one part of the picture. You need some quantum mechanics for a more precise/accurate understanding of atoms and molecules, which is important for the materials side of semiconductors. But it does feel like the core of it is still classical physics and chemistry, with some incorporation of quantum elements when appropriate/useful.

        You say that quantum mechanics is poorly understood. I disagree. That stuff has been repeatedly demonstrated in the lab: the Schrodinger equation predicts energy spectra, and in the lab the detected spectra are the same as predicted. Of course, the Schrodinger equation can only be solved analytically for the hydrogen atom, and even that one is kind of a mess. For molecules, you have to use stuff like Hartree-Fock and other numerical methods. For materials, you obviously cannot use quantum mechanics entirely from first principles, which would be too complicated. I think it is more that an individual human has a clear upper bound on the level of complexity his brain can manage.
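
        That "predicts energy spectra" point is easy to reproduce numerically, at least for toy problems. A minimal sketch (the harmonic oscillator rather than hydrogen, purely for convenience, in units where hbar = m = omega = 1): discretize -(1/2) psi'' + (1/2) x^2 psi = E psi on a grid, diagonalize, and the lowest eigenvalues land on the textbook values E_n = n + 1/2.

          # Finite-difference eigenvalues of the 1D Schrodinger equation for the
          # harmonic oscillator, in units hbar = m = omega = 1.
          import numpy as np

          N  = 1000
          x  = np.linspace(-10.0, 10.0, N)
          dx = x[1] - x[0]

          # Hamiltonian: kinetic part -(1/2) d^2/dx^2 as a tridiagonal matrix,
          # plus the harmonic potential (1/2) x^2 on the diagonal.
          diag = 1.0 / dx ** 2 + 0.5 * x ** 2
          off  = np.full(N - 1, -0.5 / dx ** 2)
          H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

          print(np.linalg.eigvalsh(H)[:5])   # ~ [0.5, 1.5, 2.5, 3.5, 4.5]

        Molecules and materials are harder for exactly the reason given above: the same equation, but far too many interacting particles, hence Hartree-Fock and friends.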

        The English-language media is way more full of shit than the Chinese media. As for the Chinese-language media, some of it is directly party-run; most isn't, though the party can shut an outlet down anytime, not that it would really care to outside the more extreme cases, such as blocking some media from Taiwan and Hong Kong and Falun Gong. People in China don't have weird cults of personality around Jack Ma and Pony Ma and internet billionaires, unlike in America, where there's so much bullshit in the press about how Bill Gates and Steve Jobs are simply way smarter and more visionary, and how it's all because of this American culture of innovation. People are generally more realistic about science and engineering, about what's possible and what's not. In contrast, in America, people in general, even some in science and engineering, live under delusions and spurious relationships.

        • glaucous_noise said, on May 14, 2020 at 4:04 pm

          Ask yourself what the essential property of a semiconductor is; pick just one. If you picked “a gap in the energy spectrum that results in two species of carriers”, you’ll stop trying to argue that the core of semiconductor physics is classical. After the core material science, once you’re talking about diodes and MOS capacitors, you can get away with classical, but an engineer who does not understand solid state physics won’t get far with the fundamentals. Heck, part of the engineering of modern transistors at Intel involved straining the bands strategically to carefully tune the hole effective mass; good luck doing that with only classical physics.

          As far as quantum mechanics being understood is concerned, ask yourself this: does anybody ever really solve Maxwell’s equations in the real world (e.g. when designing a radar system)? They don’t. Instead, they make countless simplifications of the problem. Now, for quantum problems, how do you intuitively simplify the problem? Because it is not an intuitive theory, engineers generally do not know how to do this, and rely on purely mathematical approaches. That doesn’t work well in the real world. You have to be able to intuitively simplify problems. So, the fact that quantum agrees with a bunch of contrived experiments in a lab is cool, but does not qualify it as an understood theory.

  15. LM said, on May 14, 2020 at 11:17 pm

    I guess my question here then to you is: why exactly are we so stuck?

    It seems we are either up against some hard limits, or there has been some shift in thinking or policy that is blocking progress. Some elaboration on this would be highly appreciated and interesting.

    • Raul Miller said, on May 15, 2020 at 9:41 pm

      Offshoring is the opposing trend. By shipping too much of the critical work overseas and dedicating ourselves to producing salespeople, we’ve lost contact with too many of the people with the critical skills.

      This problem did not develop overnight — and will not be quick to solve, either.

      • Scott Locklin said, on May 17, 2020 at 4:19 pm

        There are very good research studies by people like Norbert Wiener and Burton H. Klein (underappreciated hero of mine) on how to build innovative organizations which make technological breakthroughs. It's all fairly simple and common sense: do what people used to do when we made such breakthroughs. Manufacturing stuff abroad is not on the list of things to do; in fact, you want engineers and craftsmen in the same goddamned room, just like they used to be in Aerospace companies, Bell Labs and so on. Anyway, subject for another article, and the eventual book. Big heresy for quarterly bean counters and usurers who look at companies like Boeing as concentrations of wealth to be looted.

        One of the things that makes software companies reasonably successful is that they're often able to monetize individual innovations into useful businesses. Of course, most of the innovation in software engineering happened in the past, and people perpetually reinvent the wheel and bikeshed things that don't matter, but it still kind of works.

        • Jeremy Malloch said, on May 22, 2020 at 1:06 am

          This is the first time I've seen you mention a book; super excited to hear you might be writing one.

        • CS said, on October 7, 2020 at 5:40 pm

          > Manufacturing stuff abroad is not on the list of things to do; in fact, you want engineers and craftsmen in the same goddamned room, just like they used to be in Aerospace companies, Bell Labs and so on.

          In semiconductors, the breakdown of this is leading to some really bad engineering. Silos are probably a bigger problem, but the layers of abstraction in VLSI design these days leave "designers" completely clueless about what they're physically building, how much area it takes up, how much power it consumes, etc.

          • glaucous noise said, on October 7, 2020 at 10:37 pm

            Don't worry, I have friends at all of the big semiconductor companies, and they've told me that their managers hired a bunch of machine learning and data engineers to handle back-end-of-line/layout and other stuff. They tell me that designers are culturally now regarded as inferior to software programmers and CS PhDs, a huge advance for the backwards, dated, and ineffective semiconductor industry as far as I'm concerned. If you think about it, all of the problems you mention are ultimately due to the fact that humans are fallible. However, neural networks are infallible, and I have it on good authority that they can solve all problems.

            The future of the semiconductor industry shines brightly, a beacon of hope for us all.

            • Raul Miller said, on October 8, 2020 at 1:49 am

              I think that that kind of staffing arrangement is not about innovation but about survivability.

              Conceptually, when it eventually falls apart, there might be one or more motivated individuals with sufficient experience to do some useful innovating.

            • CS said, on October 8, 2020 at 1:46 pm

              You jest (I think?), but I could see machine learning/data, done correctly, helping a lot with layout and such. Not so sure it’ll help without strong designers, but hey, we’ll see.

  16. […] Ten Predictions for 2030: “Electric cars will not become more than 5% of the US fleet…Large scale passenger train efforts in the US will continue to fail due to bureaucratic sclerosis. No flying cars. No reusable spacecraft (single digit percent likelihood something like skylon works and is funded). ‘Renewables’ will continue to grow in popularity without appreciably changing the US (or world) energy infrastructure. Some environmental disaster may be attributable to a renewable energy technology. No increased adoption of fission in the West. Imbeciles in 2030 will predict fusion breakthroughs by 2050.”  Locklin on Science […]

  17. LK-99 said, on August 1, 2023 at 1:51 pm

    > Scientific breakthroughs: none in physics, chemistry, astronomy, earth sciences

    This post hasn’t aged well with the discovery of LK-99.

