Locklin on science

The Fifth Generation Computing project

Posted in non-standard computer architectures by Scott Locklin on July 25, 2019

This article by Dominic Connor reminds me of that marvelous artifact of the second AI winter (the one everyone talks about), “Fifth generation computing.” I was not fully sentient at the time, though I was alive, and remember reading about “Fifth generation computing” in the popular science magazines.

It was 1982. Let’s not rely on vague recollections of what happened that year; here are some familiar things from it: the Tylenol scare, the mandated breakup of the Bell System and the beginning of the hard fade of Bell Labs, the Falklands war, crazy blizzards, the first artificial heart installed, Sun Microsystems founded, the Commodore 64 released. People were starting to talk about personal robotics; Nolan Bushnell (Atari founder) started a personal robot company. The IBM PC had been released the previous year; by mid-year they had sold 200,000 of them, and MS-DOS 1.1 had been released. The Intel 80286 came out earlier in the year and was one of the first microprocessors with protected memory and hardware support for multitasking. The Thinking Machines company, attempting a novel form of massively parallel computing (probably indirectly in response to the 5th-gen “threat”), would be founded in 1983.

Contemporary technology

The “AI” revolution was well underway at the time; expert system shells were actually deployed and used by businesses: XCON was configuring VAX orders at DEC, and Symbolics and the other Lisp Machine companies were exciting startups. Cyc, a sort of ultimate expert system shell, would be founded a few years later. The hype train for this stuff was even more lurid than it is now; you can go back and look at old computer and finance magazines for some of the flavor of it. If you want to read about the actual tech they were harping as bringing MUH SWINGULARITY, go read Norvig’s PAIP book. It was basically that stuff, and things that look like Mathematica. Wolfram is really the only 80s “AI” company that survived, mostly by copying 70s-era “AI” symbolic algebra systems and re-implementing a big part of Lisp in “modern” C++. 
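
For a flavor of what that era’s “expert system shell” technology amounted to, here is a minimal sketch in Prolog (an invented example, nothing to do with the real XCON knowledge base or any actual product): you encode domain knowledge as facts and if-then rules, and the inference engine chains through them to answer queries.

    % Minimal, hypothetical expert-system-style rule base (illustrative only).
    % Facts plus if-then rules; Prolog's engine does the backward chaining.
    draws_amps(disk_controller, 12).
    draws_amps(serial_card, 2).
    draws_amps(memory_board, 4).

    % Rule: any component drawing more than 10 amps needs the large power supply.
    needs(Component, large_power_supply) :-
        draws_amps(Component, Amps),
        Amps > 10.

    % ?- needs(C, large_power_supply).
    % C = disk_controller.

Commercial shells dressed this up with fancier rule syntax, explanation facilities and knowledge-base editors, but the core mechanism was roughly this deep.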

Japan was out-competing the industrial might of the United States at the time in a way completely unprecedented in American history. People were terrified; we beat those little guys in WW-2 (a mere 37 years earlier) and now they were kicking our ass at automotive technology and consumer electronics. The Japanese, triumphant, wanted to own the next computer revolution, which was still a solidly American achievement in 1982. They took all the hyped technology of the time (AI, massive parallelism, databases, improved lithography, Prolog-like languages) and hoped that by throwing it all together and tossing lots of those manufacturing-acquired dollars at the problem, they’d get the very first sentient machine. The ambitions ran roughly as follows:

1) The fifth generation computers will use super large scale integrated chips (possibly in a non-Von-Neumann architecture).
2) They will have artificial intelligence.
3) They will be able to recognize images and graphs.
4) Fifth generation computers aim to be able to solve highly complex problems, including decision making and logical reasoning.
5) They will be able to use more than one CPU for faster processing speed.
6) Fifth generation computers are intended to work with natural language.

Effectively the ambition of Fifth generation computers was to build the computers featured in Star Trek: ones that were semi-sentient, and that you could talk to in a fairly conversational way.


People were terrified. While I wasn’t even a teenager yet, I remember some of this terror. The end of the free market! We’d all be Japanese slaves! The end of industrial society! DARPA dumped a billion 1980s dollars into a project called the Strategic Computing Initiative in an attempt to counter this (amusingly, one of the focuses was … autonomous vehicles, things which are still obviously over the rainbow). Most of the US semiconductor industry and mainframe vendors began an expensive collaboration to beat those sinister Japanese and prevent an AI Pearl Harbor. It was called the Microelectronics and Computer Technology Corporation (MCC for some reason), and it’s definitely ripe for some history-of-technology grad student to write a dissertation going beyond the Wikipedia entry. The Japanese 5th-gen juggernaut was such a big deal, the British (who were still tech players back then) had their own copy of this nonsense, called the “Alvey Programme”; they dumped about a billion pounds in today’s money into it. And not to be left out, the proto-EU also had their own version of this, called ESPRIT, with similar investment levels.


Prolog was of course the programming language of this future technology. Prolog was sort of the deep learning of its day; using constraint programming, databases (Prolog is still a somewhat interesting, if over-flexible, database query language), parallel constructs and expert-system-shell type technology, it was supposed to achieve sentience. That hasn’t worked out real well for Prolog over the years: because of the nature of the language it is highly non-deterministic, and it’s fairly easy to pose NP-hard problems to Prolog. Of course in such cases, no matter how great the parallel model is, it still isn’t going to answer your questions.
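
To make the NP-hard point concrete, here is a tiny illustrative sketch (mine, not anything from the 5th-gen literature): subset-sum posed to Prolog as naive generate-and-test. The engine happily backtracks through candidate subsets until one hits the target, which is exponential in the length of the list in the worst case, and no amount of parallel non-Von-Neumann hardware changes that asymptotically.

    % Subset-sum by generate-and-test; an illustrative sketch only.
    % subset/2 nondeterministically selects a sublist; backtracking can
    % enumerate all 2^N of them in the worst case.
    subset([], []).
    subset([X|Xs], [X|Ys]) :- subset(Xs, Ys).
    subset([_|Xs], Ys)     :- subset(Xs, Ys).

    subset_sum(Numbers, Target, Subset) :-
        subset(Numbers, Subset),
        sum_list(Subset, Target).    % sum_list/2 as in SWI-Prolog

    % ?- subset_sum([3, 34, 4, 12, 5, 2], 9, S).
    % S = [3, 4, 2] ;    (further solutions on backtracking)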


One of the hilarious things about 5th generation computers is how certain people were about all this. The basic approach seemed completely unquestioned. They really thought all you had to do to build the future was take the latest fashionable ideas, stir them together, and presto, you have brain-in-a-can AI. No self-respecting computer scientist would stand up and say “hey, maybe massive parallelism doesn’t map well onto constraint solvers, and perhaps some of these ambitions are things we have no idea how to solve.” [1] This is one of the first times I can think of an allegedly rigorous academic discipline collectively acting like overt whores, salivating at the prospect of a few bucks to support their “research.” Heck, that’s insulting to actual whores, who at least provide a service.


Of course, pretty much nothing in 5th generation computing turned out to be important, useful, or even sane. Well, I suppose VLSI technology was all right, but it was going to be used anyway, and DBMSs continue to be of some utility, but the rest of it was preposterous, ridiculous wankery and horse-puckey. For example: somehow they thought optical databases would allow for image search. It’s not clear what they had in mind here, if anything; really it sounds like bureaucrats making shit up about a technology they didn’t understand. For more examples:

“The objective stated (Moto-oka 1982 p.49) is the development of architectures with particular attention to the memory hierarchy to handle set operations using relational algebra as a basis for database systems. “
“The objective stated (Moto-oka 1982 p.53) is the development of a distributed function architecture giving high efficiency, high reliability, simple construction, ease of use, and adaptable to future technologies and different system levels.”
“The targets are: experimental, relational database machine with a capacity of 100 GB and 1,000 transactions a second; practical, 1,000 GB and 10,000 transactions a second. The implementation of a relational database machine using dataflow techniques is covered in section 8.3.3.”
“The objective stated (Moto-oka 1982 p.57) is the development of a system to input and output characters, speech, pictures and images and interact intelligently with the user. The character input/output targets are: interim, 3,000-4,000 Chinese characters in four to five typefaces; final, speech input of characters, and translation between kana and kanji characters. The picture input/output targets are: interim, input tablet 5,000 by 5,000 to 10,000 by 10,000 resolution elements; final, intelligent processing of graphic input. The speech input/output targets are: interim, identify 500-1,000 words; final, intelligent processing of speech input. It is also intended to integrate these facilities into multi-modal personal computer terminals.”
“The Fifth Generation plan is difficult and will require much innovation; but of what sort? In truth, it is more engineering than science (Feigenbaum & McCorduck 1983 p 124). Though solutions to the technological problems posed by the plan may be hard to achieve, paths to possible solutions abound.” (where have I heard this before? -SL)

The old books are filled with gorp like this. None of it really means anything. It’s just ridiculous wish fulfillment and word salad. Like this dumb-ass diagram:

[diagram not reproduced]

There are probably lessons to be learned here. 5th-gen was exclusively a top-down approach. I have no idea who the Japanese guys were who proposed this mess; it’s possible they were respectable scientists of their day. They deserve their subsequent obscurity; perhaps they fell on their swords. Or perhaps they moved to the US to found some academic cult; the US is always in the market for technological wowzers who never produce anything. Such people only seem to thrive in the Anglosphere, catering to the national religious delusion of whiggery.

Japan isn’t to be blamed for attempting this: most of their big successes up to that point were top-down industrial policies designed to help the Zaibatsus achieve national goals. The problem here was that there was no Japanese computer Zaibatsu worth two shits with the proverbial skin in the game; it was all upside for the clowns who came up with this, no downside. Much like the concepts of nanotech 10 years ago, or quantum computing and autonomous automobiles now, it is a “Nasruddin’s Donkey bet” (aka scroll to the bottom here) without the 10-year death penalty for failure.

Japan was effectively taken for a ride by mountebanks. So was the rest of the world. The only people who benefited from it were quasi-academic computer scientist types who got paid to do wanking they found interesting at the time. Sound familiar to anyone? Generally speaking, top-down approaches to ridiculously ambitious projects, where overlords of dubious competence and motivation dictate the R&D direction, don’t work so well; particularly where there is software involved. It only works if you’re trying to solve a problem that you can decompose into specific tasks with milestones, like the moon shot or the Manhattan project, both of which had comparatively low-risk paths to success. Saying you’re going to build an intelligent talking computer in 1982 or 2019 is much like saying you’re going to fly to the moon or build a web browser in 1492. There is no path from that present to the desired outcome. Actual “AI,” from the present perspective just as in 1982, is basically magic nobody knows how to achieve. 

Another takeaway was that many of the actual problems they wanted to solve eventually got solved in a more incremental way, while generating profits along the way. One of the reasons they were trying to do this was to onboard many more people than had used computers before. The idea was that instead of hiring mathematically literate programmers to build models, you could have machines smart enough to talk to people and read the charts and things the ordinary end user might bring to the computer with questions; then more people could use computers, amplifying productivity. Cheap networked workstations with GUIs turned out to solve that in a much simpler way: you make a GUI, give the non-spergs some training, and then ordinary dumbasses can harness some of the power of the computer. This still requires mentats to write GUI interfaces for the dumbasses (at least before our glorious present of shitty Electron front ends for everything), but that sort of “bottom up, small expenditures, train the human” idea has been generating trillions in value since then.

The shrew-like networked, GUI-equipped microcomputers of Apple were released as products only two years after this central-planning dinosaur was postulated. Eventually, decades later, someone built a mechanical golem out of microcomputers which achieves a lot of the goals of fifth generation computing, with independent GUI front ends. I’m sure the Japanese researchers of the time would have been shocked to know it came from ordinary commodity microcomputers running C++ and using sorts and hash tables rather than non-Von-Neumann Prolog supercomputers. That’s how most progress in engineering happens though: incrementally[2]. Leave the moon shots to actual scientists (as opposed to “computer scientists”) who know what they’re talking about. 


1988 article on an underwhelming visit to Japan.

1992 article on the failure of this program in the NYT.


[1] Some years later, 5 honest men discussed the AI winter that was upon them; yet the projects inexorably rolled forward. This is an amazing historical document; at some point scholars will find such a thing in our present day; maybe the conversation has already happened. https://www.aaai.org/ojs/index.php/aimagazine/article/view/494 … or PDF link here.

[2] Timely Nick Szabo piece on technological frontiersmanship: https://unenumerated.blogspot.com/2006/10/how-to-succeed-or-fail-on-frontier.html

26 Responses


  1. clf28264 said, on July 25, 2019 at 6:56 pm

    I deal with this sort of technical wankiness daily at work in 2019, but thankfully I am the boss and mostly end up rolling over my developers and their “new ideas!” that are retreads of stuff from 50 years ago. It’s a persistent issue caused by the disgusting culture present in software development and engineering that borders on a self-anointed priesthood. Stop telling me the future is around the corner or we need to do x; your code sucks and you’re unable to test it yourself. Most of the problems we solve today could have been solved years ago, it just was cheaper to automate via a human than to engineer a solution. Further, until we stop living in the computing paradigm of the 50’s and 60’s (which is frankly amazing and incredible!) where the “sophisticates” talk up the future and not the people doing real work, these stupid fads will continue. I for one welcome the nuclear AI winter that is on the horizon! (Disclosure: I am trained as a statistician and worked as a financial quant)

  2. Anonymous said, on July 25, 2019 at 8:53 pm

    Kinda reminds me of open source, how it was hyped: many eyes, revolution, freedom, blah blah. Kinda symbolic where it ended: lots of buggy quirky ill-designed software that is effectively closed-source because no one understands all that shit (except hackers), which is mostly a take-it-or-leave-it deal because there is no meaningful infrastructure to fork maintenance. OTOH lots of social activity^W^W mutual masturbation, people proudly calling themselves ‘hackers’, support of big biz. Yikes! But the basic idea is solid, just not being developed much. Take any promising tech like Lisp, Forth, Smalltalk, even Unix — always the same pattern: a few folks with a brain sort of saw the light but once a ‘community’ formed the core ideas got ossified and innovation shifted to the periphery.

    Honestly, I don’t have much of a gripe about it. It can be infuriating when you have to deal with the fallout but otherwise it’s just the baseline of the human condition. What I do see a problem with is when people who get the general idea of what’s happening pose questions like ‘What do we do about it? How do we change that?’ This idea of ‘we’ has to be shot in the head and buried deep. There’s no we, there’s a small bunch of smart folks and huge bunch of dumb folks and the former better disentangle themselves from the latter. It always saddens me when e.g. scientists bitch about funding but instead of talking about a path towards self-sufficiency they talk about reforms.

  3. Cameron said, on July 26, 2019 at 5:29 am

    Another great entry. Welcome back!

  4. maggette said, on July 26, 2019 at 8:27 am

    As always, great content…

    Regarding the problems with a top-down approach in technology development, it will be interesting to see how the “China Experiment” works out.

    IMHO they are not top-down only; they have several concepts to incubate smaller companies and let them work. And even in their top-down approach, they don’t develop everything themselves but acquire knowledge/technology on the market (for example they bought the German robotics company Kuka).

    But still, they build whole cities that concentrate on single aspects of tech and economy, similar to the stuff the Soviets did. A very central-planning flavour of R&D efforts and economic planning. Will be interesting to see if they can get any technological breakthroughs this way.

    I don’t agree with everything, but still interesting:

    • Scott Locklin said, on July 26, 2019 at 3:02 pm

      China’s top down approach has been interesting. As he points out, copying things works really well, especially when you have a moat, like, say, Chinese language and a totalitarian government willing to enforce their idea of intellectual property. They could probably make progress in autonomous vehicles, because driving there is insane anyway, and of course, making cars work more like trains by altering city infrastructure would have worked in the 70s. I expect they won’t do this though. There’s certainly nothing interesting coming out of their research labs that I’ve seen, though perhaps I have a high bar for this sort of thing. You’d think by now there would be something from a Chinese lab which is worth … say … ReLu or something.

      • fpoling said, on July 26, 2019 at 6:02 pm

        That reminds me of the Soviet Union copying the B-29 after WWII and later the IBM 360. The government was so sure that they could spy and copy Western tech at will that they abandoned most home-grown computer tech. Which led to disaster when they tried and failed to copy the 80286 CPU, which would have required copying the whole technological culture to work.

        • Scott Locklin said, on July 26, 2019 at 8:27 pm

          They did pretty good with rockets, information theory, radar (still ahead of us), lead cooled reactors, and a few kinds of material science. But yes, copying, you know your probability of success is closer to 1 as someone else did it already.

          • fpoling said, on July 27, 2019 at 9:52 am

            Soviet Union did not copy US rockets. Surely they started with German ones, but from there it was home-grown stuff. This is pretty much like the US. With computers the policy from the seventies or so was just to copy each new generation of US tech, abandoning attempts to develop home-grown stuff. And it turned out that at some point the price of copying became too high to bear. It is interesting to speculate about an alternative history where https://en.wikipedia.org/wiki/BESM-6 (the Russian version of the article has a lot more details) would not be abandoned in favor of borrowing from IBM and DEC.

            It is also interesting that it was Taiwan and South Korea, not China, who were able to mass-produce more technologically advanced chips than the US. As compared with China, the governments there are not that authoritarian and they have not copied US tech on massive scales.

          • Toddy Cat said, on July 30, 2019 at 12:37 pm

            The Soviets did pretty well with industrial-age technology (T-34, AK-47, Sturmovik, SS-6, etc.) but really faltered as the information age got going in the 1960’s, forcing them to rely more and more on Western technology stolen by the T Directorate. The utter paranoia of the Soviet regime when it came to any sort of “unauthorized” information flow (access to Xerox copiers and even electric typewriters was severely restricted for quite a while) almost certainly had something to do with this. We’ll see if this has any impact on China, as Xi’s regime becomes more and more restrictive with regard to this.

  5. fpoling said, on July 26, 2019 at 5:54 pm

    I am not sure that it was that obvious that the Manhattan project would succeed. The required speed of assembling the critical mass was very uncertain and it was not that obvious that a practical nuclear bomb was possible. With plutonium, if not for its phase transition under pressure, it would have been outside technical feasibility.

    The Moon landing on the other hand is a good example of top-down engineering project. There were multiple paths to success including assembling things in orbit with smaller rockets and all components can be developed and tested in parallel.

    • Scott Locklin said, on July 26, 2019 at 8:25 pm

      Manhattan project had a lot of risk in it for sure, but … centrifuges, calutrons and plutonium were all decent punts with obvious mechanisms we knew at least could work. Which approach can you identify as being anywhere near stronk AI, aka “brain in a can will drive your Audi, do your day job?” There’s no point A to point B; it’s vastly more terra incognita than controlled break-even fusion, and we haven’t done a hell of a lot with the latter either.

  6. asciilifeform said, on July 30, 2019 at 4:31 pm

    AI-flavoured vapour played approximately the same role in ’80s “n-th gen computer” efforts as the El Dorado legend played in the age of conquistadors. I.e. a social engineering hack to loosen up the royal purses. This time however the gambit failed, and now appears that we’re stuck with 1970s-era computing (i.e. buffer overflows, Unix, buggy CPU archs with 35 years of layered legacy crud for which the — necessarily incomplete — docs take up entire bookcase, multi-gigabyte unreadably-“open” sources, and other such “joys”.)

    • Scott Locklin said, on July 30, 2019 at 11:01 pm

      Please make us a less shitty computing system. I’m actually thinking of drinking Chuck Moore kool-aide and building a bunch of dumb forth machines. Really all I want to do most of the time is smash arrays together without too many cache stalls.

  7. […] The Fifth Generation Computing Project 2 by o_nate | 0 comments on Hacker News. […]

  8. I C Things said, on August 1, 2019 at 3:13 am

    This article reminded me of all the hype I heard and read about as a teenager during the late 70’s and early 80’s of what the world would be like circa 2020. I am still waiting for my commercial fusion reactors, holographic computer memory, flying cars, underwater cities, single stage to orbit reusable launch vehicles, etc. Except for fusion and holographic memory, which I thought were mostly engineering issues, my BS detector pegged at the other ones. Most persons are poor drivers on controlled roads, so even if flying cars could be commercially viable, I would not want some moron flying a one ton vehicle over my house, which is vastly more complicated to operate safely. Most persons do not look at the legal aspects of new technology. I always wondered why anyone would spend the great expense to construct habitable structures underwater when we have more land than we know what to do with. The physics of multi-stage rockets to reach Earth orbit and beyond was developed in the 1930’s. Single stage to orbit research programs and the grossly inefficient Space Shuttle were merely done for their “cool” factor. None of the commercial space launch companies are even entertaining those ideas. Bottom line, whenever I see a new technology being hyped, I put the financial, technical, engineering/manufacturing, social and legal aspects into my BS filter.

  9. 8men said, on August 1, 2019 at 7:50 am

    perl is king. the only decent thing that came out in the last 20 years.

  10. bob said, on August 1, 2019 at 2:06 pm

    A comment and a recollection, from one who was a bit more “there” than you (active AI researcher/practitioner at the time):

    What looks like obvious (and hilarious?) wankery from nearly 40 years on was perhaps not quite so obvious at the time – and I’d maybe even venture that 40 years from now some things that seem ground-breaking and insightful at the moment will be just as hilarious and mockable. You’re correct that there was a lot of budget protection/exploitation going on, and even then it was clear that this was not going to solve what’s now referred to as artificial general intelligence, but I don’t think it was as mendacious and scammy as you seem to cast it.

    As for the recollection, I was at the AAAI conference at which DARPA announced the Strategic Computing Initiative. As part of that announcement, it was made clear that the plan was for large defense contractors to play the primary role – despite the fact that new ideas and innovations were usually to be found in academia and small businesses. When asked why this was, the DARPA PM said, in effect “we have to pay the BigCos to get smart about things we think are important, and this is a way to do that.”

    • Scott Locklin said, on August 2, 2019 at 11:49 am

      I dunno, the “engineering by buzzword stew” and “panic by the establishment that the sky is falling” is a pretty good tell. Modern AI research is also basically nonsense, brought about by way too much hype for some modest but real results, just as it was back then. Same with “Quantum Computing.” The way top down R&D works on real things, you plan something out in detail and plan around the gap pieces you don’t know how to do. The gap pieces have to be reasonably small, and if the gap is wide enough, you have to have several possible approaches. All great top-down engineering achievements look like this. ICBM research, supersonic jets, atom bombs, digital computers: they all happened this way. 5thgen computing, there was no way to bridge the gap from VAX-11 era computing to Captain Kirk’s computer. It was just a wish, like nanotech or something.

      I dunno what DARPA does now, other than fund dumb python development, but I do know they were interested in maintaining research and engineering infrastructure back when there was an obvious geopolitical rival.

      • asciilifeform said, on January 27, 2020 at 11:27 pm

        I did a tour of duty in a typical “too big to fail” subcontractor salt mine, and in the process once sat, as a muzzled observer, on a (non-secret) DARPA committee. It’s a straight “cut the freshly-printed moolah cake” affair, exactly like e.g. Putin’s “nanotech incubator”, or Zimbabwe’s “oil from solid rock” ministry. With a superficial coating of “first-world” veneer.

  11. 8men said, on August 1, 2019 at 8:31 pm

    the low hanging fruits have been pulled off the tree. Now progress is much more difficult. Probably most progress was a one time, one shot, short story of the 20th century. We thought it could go on like a line forever.

    Actually i think nature, the way we humans and our mind are configured to matter, our environment and so on can only manipulate so much and not much more. the 3 body problem never really solved. differential equations never really solved except special cases etc.

    Probably the only way forward is to invent things that no longer have to make any sense: maybe something like art. Science can only answer so many questions past which there are no other answers. We can invent anything. Surrealism anyone ?

    • Scott Locklin said, on August 1, 2019 at 9:42 pm

      You should read Charles Murray’s “Human Accomplishment” – the early-mid 20th century already saw a drastic slowdown in human progress compared to the decades before. And that was the era of the invention of the digital computer, nuclear energy, flight to hypersonic flight into space, and antibiotics. Before that, the invention of stuff like steam engines and calculus, vaccination, vitamins and the germ theory of disease was an absolute cataclysm; insanely more important. And before that, well, exercise for you; think about what happened in 1820-1860. 1860-1900. 1900-1940. 1940-1970. Almost literally nothing important has happened since 1970 except for improvements in lithography.

      I am comfortable with the statement, “our civilization is declining.” Pretending it isn’t, and that some glorious new future of AI or quantum computing or cold fusion or whatever is upon us is what bothers me. We’ve lost sight of how to make progress, or even educate people. Or maybe you’re right and the fruit is off the vine. Stop pretending; deal with the new reality, do your best with it instead of wasting your life on nanotech or whatever.

      Also, perl :thumsup:

      • 8men said, on August 3, 2019 at 10:12 am

        to be fair, I wouldn’t knock down all “progress” in the last 20 years.
        youtube (listen to all the albums of the world) blogs facebook smartphones whatsapp especially google maps street view lets you see places you would have never gone to etc. are all very interesting developments, but they were made by combining all previous stuff together in a clever way. (mostly done between 2000 – 2010, 2010 to 2020 hasn’t brought much novelty at all it seems)

        Basic science and discoveries are probably close to the end of the road. Higgs boson and gravity waves, confirm theories but are invisible and the closest thing to almost nothing and so far removed from anything game changing, higss boson exists in a nanosecond in a very complex machine etc.

        I think the paradigm of science and a common world are done with. Now we can manipulate circuits in minds and brains, do really far out things but then the whole concept of science disappears. The whole concept of community sharing a common world can be put in discussion.

        Maybe the future could be so many atomized beings living in their own make believe world or something like that…

        The best we can do is make the normal things work and work well: in poor countries make decent streets and decent homes decent public like transportation, simple and low tech but decent: this would do much more to alleviate proverty than all the BS ideology. (and also in the rich world).

        Rich world make decent low costing homes transportation low cost health care education, the simple things the normal things.

        But normal things aren’t the next big thing, the next startup the next unicorn, this BS ideology that we will all become bill gates…


  13. MadRocketSci said, on August 19, 2019 at 1:19 pm

    Hello. Thank you for your writing! Thanks also for taking the piss out of some of the manic hype in our world today. (Esp. that recent AI post) You seem like someone it would be very interesting to get to know, and clearly have a very broad understanding of a lot of interesting science and technology. Short post for now, got to get to work.

    • Madrocketsci said, on August 19, 2019 at 1:25 pm

      PS: I’m probably more of an optimist wrt certain technologies than you, but skeptical perspectives are important. Been burned before. There’s a lot of spin out there. Can’t say much more on a phone keyboard.

  14. MadRocketSci said, on August 26, 2019 at 1:55 pm

    You seem like someone who is incredibly well read, and who is knowledgeable in many subjects. I’ve been through decades of school, but I feel like I haven’t been able to retain anywhere near the detail that I would have liked. (Broad and blurry outlines, where to find things again if I need to remember, but not the details.)

    How do you study? How do you go about retaining and organizing the information that you’ve drawn on in your articles and blog-posts?

    What sort of things do you read?

