Locklin on science

Not all programmers are alike: a language rant

Posted in Clojure, Design by Scott Locklin on September 12, 2012

I came across this video presentation the other day. It’s an hour-long, weird-assed advocacy for Clojure by a guy (“Uncle Bob”) who has used OO programming for most of his professional life. This entire tirade is probably useless to anyone who has not watched it already. Since I’m annoyed that I spent a time-sliced hour of my life listening to it, I don’t recommend you listen to it either. This is possibly my most useless, therapeutic “I can’t believe he said that” WordPress post of all time.

http://skillsmatter.com/podcast/agile-testing/bobs-last-language/wd-4946

This guy gives an amusing talk, but he’s wrong in countless ways, and I have to talk about it. His premise started out reasonably well; there really hasn’t been much progress in language design over the years. Even so, I grew progressively more angry while listening. He posits that Clojure could be “the last programming language.” While I am a fan and advocate of Clojure, I emphatically disagree with him.

  • He bags on graphical languages as not being a new programming paradigm likely to influence the future of coding. Such languages are already very good and widely used in fields dealing with data acquisition, control and analysis (Labview, Igor). Labview beats the snot out of anything else for building, say, custom spectrometer control and data acquisition systems. I know, because I had to do this. I’ve seen mooks try to do the same thing in C++ or whatever, and laugh scornfully at the result. It’s worth noticing that such graphical languages are also very easy to write in a mature Lisp with GUI hooks; you can find a nice one in the source code for Lush (packages/SN2.8/SNTools/BPTool if you’re interested; I’ve seen them in Common Lisp as well). Interface designers are bad at making them, but some day there will be more of these. Why there are not more people automating the dreary-assed LAMP/Rails stack with graphical languages, I don’t know. Probably because such drudges don’t know how to write a graphical language. This would actually be a very good Clojure application, once someone writes a native GUI for Clojure which compares to, say, Lush’s Ogre system (which, like everything else in Lush, is a small work of genius).
  • Programming paradigms are indeed useful for keeping idiots out of trouble, but a language is more useful if you can break paradigms, or switch to other paradigms when you need to. Sure, most weak-brained people who are over-impressed with their own cleverness shouldn’t try to break a paradigm, but sometimes you have to. I mean, macros are almost by definition broken paradigms, and that’s where a lot of Lisp magic happens. If you look at things that succeed, like C or C++ (or to a lesser extent, OCaML), there is a lot of paradigm breaking going on. Clojure is mostly functional, and partially parallel, but the ability to drop back into Java land is paradigm-breaking gold.
  • He thinks Clojure is an OO language. If you’ve hurt your eyes staring at C++, Java and UML for most of your career, Forth or APL probably looks object oriented. I am sure you could write some OO style code in Clojure, but it would be breaking the programming paradigm, which he considers bad. I don’t consider that bad (though it will break parallelism); hell, I had a stab at array programming in Clojure. JBLAS array programming in Clojure is not a great fit, but the ability to do things like this is one of the things that makes Clojure useful (there’s a small interop sketch just after this list).
  • Anyone who thinks, like this fella does, that garbage collected virtual machines are always a good idea has never done serious numerics, data acquisition or real time work, which is half of what makes the world go around. Most people who consider themselves programmers are employed effectively selling underpants on the internet using LAMP. Therefore most people think that’s what programming is. To my mind, that’s not as important to civilization as keeping the power on and the phone company running. Sure, some of the power and phone company run on virtual machines (Erlang is awesome, though slow): a lot of it doesn’t, and won’t ever be, as long as we’re using Von Neumann architectures and care about speed. Virtual machines are generally only optimized for what they are used for. People brag about how fast the JVM is; it’s not fast. Not even close to what I consider fast. For some things it is damn slow. Example: my ATLAS based matrix wrappers beat parallel Colt on the JVM by factors of 10 or more. And that’s with the overhead of copying big matrices from Clojure/Java. And that’s after the JVM dudes have been working on array performance for … 20 years now? R* and kd-trees are preposterously slow on the JVM compared to the old libANN C++ library, or naive kd-tree implementations. Factors of 100k to 1E6. I may be wrong, but I’m guessing trees confuse the bejeepers out of the JVM (if some nerd becomes indignant at this assertion: you’re only allowed to comment if you have a kd-tree or R* tree running on the JVM within a factor of 100 of libANN for sorts and searches on dimensions > 5 and 100k+ rows). Sure, the JVM is modestly good at what it ends up being used for. What if I do other things? So don’t tell me “the last programming language” won’t have a compiler. A proper “last programming language” would work like OCaML: with a compiler when you need it, and a bytecode VM when you don’t.
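
To make the JBLAS point above concrete, here is roughly what calling it from Clojure looks like. This is a minimal sketch, assuming the jblas jar is on the classpath; it is not my actual wrapper code:

    ;; plain Java interop against JBLAS, no wrapper layer
    (import '[org.jblas DoubleMatrix Solve])

    (let [a (DoubleMatrix/rand 3 3)   ; random 3x3 matrix
          b (DoubleMatrix/rand 3 1)   ; random column vector
          x (Solve/solve a b)]        ; solve a x = b via LAPACK
      (.mmul a x))                    ; multiply back; should reproduce b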

Of course, there will never be a “universal language.”  Some languages are very good for specific purposes, and not so good in general. Some are  useful because they have a lot of legacy code they can call. All languages have strengths and weaknesses. Some languages are vastly more powerful than others, and can’t be used by ordinary people. Human beings have hierarchies in their ability to program, just as they have hierarchies in their abilities to play basketball, chess or run. Part of it is personal character, lifestyle and willingness to take it to the next level. Part of it is innate. Anyone who tells you otherwise is selling something.

There is also the matter that “programming” is an overly broad word, kinda like “martial arts.” A guy like “Uncle Bob” who spends his time doing OO whatevers has very little to do with what I do. It’s sort of like comparing a guy who does Tai Chi to a guy who does Cornish Wrestling; both are martial arts, but they’re different. My world is made of matrices and floating point numbers. His ain’t.

As for Clojure: it’s a very good language, but the main reason it is popular is the JVM roots, and the fact that Paul Graham is an excellent writer. The JVM roots make it popular because there are many bored Java programmers. They also make it more useful because it can call a bunch of useful Java. Finally, Clojure fills a vast gaping void in the Java ecosystem for a dynamically typed interactive language that can seamlessly call Java code that Java programmers already know about. REPL interactivity beats the living shit out of eclipse, even if you never do anything Lispy.
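
A few throwaway lines at the REPL show what I mean by seamless. Nothing here is exotic, just plain JDK classes; results are shown in the comments:

    (.toUpperCase "selling underpants on the internet")
    ;;=> "SELLING UNDERPANTS ON THE INTERNET"

    (import 'java.util.ArrayList)
    (doto (ArrayList.) (.add "java") (.add "clojure"))
    ;;=> an ArrayList containing ["java", "clojure"]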

IMO, there are better designed lisps; Common Lisp probably is one (parts of it, anyway). On the other hand, design isn’t everything: Clojure is more useful to more people than Common Lisp. Consider the differences between lein and ASDF. Lein as a design is kinda oogly; it’s basically a shell script which Does Things. Yet, it works brilliantly, and is a huge win for the Clojure ecosystem. Common Lisp’s native ASDF is probably very well designed, but it is practically useless to anyone who isn’t already an ASDF guru. ASDF should be taken out back and shot.
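
For the unfamiliar, an entire lein project definition can be as small as this (a hypothetical project; the name and version numbers are placeholders):

    ;; project.clj -- all lein needs to fetch dependencies, build, and hand you a REPL
    (defproject underpants-analytics "0.1.0"
      :description "hypothetical example project"
      :dependencies [[org.clojure/clojure "1.4.0"]
                     [org.jblas/jblas "1.2.3"]])

Run “lein deps” and “lein repl” and you are working; no ASDF archaeology required.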

Clojure won’t be the last language. I forecast a decent future for Clojure. It will be used by Java programmers who need more power, and Lisp programmers who need useful libraries (it’s unbeatable for this, assuming you do the types of things that Java guys do). I will continue to invest in it, and use it where it is appropriate, which is lots of different places. I’ll invest in and use other tools when that is the right thing to do.

Foreshadowing: I’ve been playing around in APL land, and have been very impressed with what I have seen thus far.

86 Responses

  1. Scott Murphy said, on September 12, 2012 at 10:47 pm

    All this talk of Clojure, I kind of miss Lisp. I have been working with Haskell a lot and really like it too. But it is very short on parens and macros (Template Haskell sucks).
    Also, the purity gets old sometimes… That is why monads became arrows and arrows implemented FRP. Which is so close to imperative prog it is sort of silly.

    I am also not convinced FP doesn’t just swap one set of programming pitfalls for another.

    • Scott Locklin said, on September 12, 2012 at 11:00 pm

      Never found a reason to experiment with Haskell. It seems to be gaining in popularity though. Is there an elevator pitch for looking at it?

      I’m sure FP has pitfalls; probably mostly unknown or being currently discovered, which is one of the pitfalls of ideas that haven’t been fully explored yet.

      • Scott Murphy said, on September 14, 2012 at 10:10 pm

        Really strong typing that is still mostly easy to work with.
        Polymorphism that is useful (really).
        Libraries written by people who really focus on code quality. (For the most part.)
        Very fast for a language with dynamic programming abilities.

        Killer Apps: “yampa (FRP)”, Yesod (web)

        No fanboy but I do like it.

        • Scott Locklin said, on September 14, 2012 at 10:26 pm

          I’m guessing the type system proves type safety at run time the way it does in OCaML? That could be helpful. I kind of liked OCaML though, if I had to write something like that.

          • Scott Murphy said, on September 16, 2012 at 3:24 pm

            Yeah they both have that ML look. I haven’t used OCaML but I suspect you are right.

  2. asciilifeform said, on September 12, 2012 at 11:04 pm

    Quicklisp has replaced ASDF, and is reasonably automatic.

    • Scott Locklin said, on September 12, 2012 at 11:16 pm

      It certainly took them long enough!

      BTW, your post on Engelbart’s violin is glorious:
      http://www.loper-os.org/?p=861

      • asciilifeform said, on September 13, 2012 at 12:22 am

        I am glad to hear that you enjoyed it! Any comments/observations?

        • Scott Locklin said, on September 13, 2012 at 12:31 am

          Well, I just read it. I’ll say something smart over on your blog later. I am not often inspired by things I read in nerd-blogs, but that one was truly glorious.

    • didi said, on September 13, 2012 at 2:19 am

      Quicklisp uses ASDF underneath and does it beautifully.

      You can, for instance, create your project using ASDF locally and use Quicklisp to load it. It will automatically fetch any library your project depends on from the repositories (as long as there is one there).

  3. Dan Gackle said, on September 13, 2012 at 1:13 am

    Please write about what you see in APL. I’d like to read that.

  4. mimokrok said, on September 13, 2012 at 1:45 am

    > Clojure fills a vast gaping void in the Java ecosystem for a dynamically typed interactive language that can seamlessly call Java code that Java programmers already know about

    Groovy filled this gap several years before Clojure came out.

  5. Rohan Jayasekera said, on September 13, 2012 at 2:15 am

    I’m fascinated that you’re now discovering APL, which is a very old language from the 60s. I started learning it in 1971 (at the age of 13; it was my first programming language) and used it professionally in the mid-70s through the early 90s. Among other things I co-wrote a global banking risk management system that was used by various banks (including Europe’s largest bank) … and since response times for trading systems must be short, it was a prime example of what everyone said nobody should ever do in APL or any other “slow” language, yet APL allowed two programmers to write the guts of a demanding system in just a few months while still meeting the contractually-required response times.

    Speaking of Paul Graham, iirc when he co-wrote Viaweb they kept it a secret that they did it in LISP, afraid that they’d be copied. If I ever meet him I’m going to tell him that they need not have bothered. We were not only open about our use of APL, we were downright evangelical about it (we not only did applications but also sold SHARP APL system software in competition with IBM). Yet as far as I could tell nobody who felt threatened by us adopted it. You may not be surprised by this, given your comments above about variations among programmers and their abilities.

    As APL’s inventor Ken Iverson better understood what he’d done back in 1962 (e.g. the syntactic treatment of higher-order functions), he initially came up with ways to “explain” things that were somewhat forced but permitted backward compatibility. Eventually he found that the constraints were just too onerous, and created a new language called J that you should also have a look at if you haven’t already.

    If you’d like to talk to a veteran about APL, feel free to email me at 1 (the digit not the letter) at sympatico dot ca.

    • Scott Locklin said, on September 13, 2012 at 2:56 am

      I don’t want to give away my impending blog post in the comments section, but one of the great strengths of the APL ecosystem so far is the user community. Lispy people are a preposterously prickly and unhelpful bunch in comparison.
      I’ve fiddled with Kona/K3 and Q, but I am blown away by J. Recommended to me by an old Futures trader who has been writing old school APL since the dark ages of punched cards and teletype.

      • asciilifeform said, on September 13, 2012 at 3:12 am

        It may be worth considering how the Lisp programmers came by their prickliness. Partly it is from the habit of not needing or wanting anybody else’s help to do something – as described here. The use of a programming system which maximally empowers the individual encourages this kind of psychology.

        But part of it definitely stems from the incessant stream of ignorant criticisms leveled at the Lisp community by outsiders: witness the rabid idiots pushing for the removal of macros, dynamic typing, CLOS, the handy LOOP, the compiler’s presence at run-time, and other nice things treasured by those of us who actually earn their living with Common Lisp systems and wouldn’t trade them for anything presently available.

        • Scott Locklin said, on September 13, 2012 at 3:15 am

          Whatever the reason, it doesn’t afflict APL guys.
          I suspect APL types have a large overlap with applied mathematicians, who tend to be a good natured and self-deprecating lot. They’re also paid well, as they have a large and mission critical foothold in the finance business. I know being paid well makes me more agreeable.

          • asciilifeform said, on September 13, 2012 at 3:26 am

            The APL community is protected from idiocy by an impenetrable force-field: distance from the poisonous mainstream “programming culture.” As exemplified by heathen pits like Hacker News and the like.

            APL and Common Lisp both thrive in fields where well-paid, thinking people are hired to actually solve non-trivial problems, rather than churn out code. Fields in which employment ads are never seen, or if they are, will never stoop to mentioning a programming language: high finance, military R&D, oil and gas exploration, and the like. The projects happen soberly and behind firmly-closed doors, and the salivating hordes of social-networking lemmings never hear of them. From which they naturally conclude that APL, Common Lisp, ADA, etc. are long dead and buried.

            • Scott Locklin said, on September 13, 2012 at 3:50 am

              Lisp is pretty sparse on the ground compared to APL in the finance world. I’m starting to grasp why. Kx systems gets away with charging $100k for their APL variant language, Q. There are at least a thousand people using it, which is huge IMO. The jobs are pretty sweet; very highly paid, but they often involve making Q talk to crap like Visual Basic for pointy headed trader orcs.
              Of course, Lisp is a much more general solution, used in all kinds of amazing places, but for number crunching through order books, Q/APL is a more practically useful solution.

              • asciilifeform said, on September 13, 2012 at 12:08 pm

                This is not a comment about APL, but: while $100K sounds like a princely sum to programmers who have never used anything but free programming systems, it is chump change in the world of serious tools. A laboratory robot that pours liquids from one vial to another can cost $300K. A tow truck or passenger bus often costs $200K.

                • Scott Locklin said, on September 13, 2012 at 7:34 pm

                  Sure, but when was the last time you spent $100k on a mere language implementation? One of an obscure, mostly dead language that looks like line noise! That’s for a 4-core machine too, so imagine the bills for a whole bank!

                  • asciilifeform said, on September 13, 2012 at 7:52 pm

                    Evidently, someone believes that he is getting $100k/core of value out of a language implementation. Unlike the “freetard” crowd, I find this at least believable. It is in fact possible to get this kind of return on investment out of a programming system (like Symbolics Genera, when it was available) which (if used as prescribed) boosts the effective IQ of a developer and results in a more versatile, understandable, and maintainable product. Not to mention bringing unthinkably-complicated ideas into the realm of the plausible.

                    Though a somewhat more likely scenario is the “golden toilet”, where costs rocket upward due to the buyer’s unimpeded access to large sums of Other People’s Money.

                    • Scott Locklin said, on September 13, 2012 at 7:57 pm

                      Believe it or not, it is kind of worth that for its ability to deal with large time series. I’m considering scraping together my loose change as well, but I’ll probably just write something in a similar dialect. I still consider it remarkable, because they have to train people to use it, which is another $100k a head or so, when they can find people capable of it, which is probably another $50k or $100k in hiring and recruiter time!

          • Dan Gackle said, on September 13, 2012 at 3:37 am

            The prickly (and one can often drop the penultimate ‘l’) character of Lisp programmers is something I’ve often observed, and I’ve known it to turn at least one gifted programmer off of Lisp. Interestingly, he recently came back to it via Clojure. He says that Clojure is better community-wise. Have you not found that?

            Community makes a big difference. I doubt there’s anything about Lisp per se that lends itself to prickishness; I think it’s more likely that something got “in the water” culturally, and hysteresis did the rest. That alone is a good argument for a cultural reset, which Clojure may be.

            Your observation about APLers is very interesting. APL’s level of abstraction is so high that it actually touches the mythical declarativey line across which the thing becomes accessible to non-programmers. In APL’s case, the “non-programmers” are mathematically sophisticated, of course. Nevertheless it’s a different mentality than with most programming – maybe even most functional programming. I remember when a professor introduced me to APL years ago, he described it as “the mathematicians’ favorite language”. And he started showing me cool things you could do with it – none of which resembled what you’d think of as a normal computer program, especially back then. It’s what they used to call a problem-oriented language.

            • asciilifeform said, on September 13, 2012 at 3:49 am

              I am not the least bit surprised to hear that APL wins among mathematicians; human thought does not gain at all from being shoehorned into the ASCII character set.

              Where would mathematics be if chalkboards and paper were as limited in their display capabilities as the dumb terminals which mainstream programming systems still, for some unknown reason, continue to emulate?

              Even the finest human working memory has a finite “cache size”, and the use of specialized symbols in place of verbose text helps to make better use of it. “The right notation is worth 80 IQ points,” and so forth.

              • Scott Locklin said, on September 13, 2012 at 4:14 am

                While A+ and various commercial ones continue to use funky symbols (which I can appreciate), the more modern ones (K, Q and J) use a notation that looks a lot like line noise. It still works. Your brain gets used to it. It is incredibly concise, and it works a lot like an abbreviated Lisp with right-to-left evaluation instead of s-expressions.
                It was designed specifically to do math on matrices, so it has a lot of win in situations where the problem maps onto math on matrices (most numerics stuff). I’m surprised to find out it does a lot of other things pretty well also. I think all languages influence how you think. A language where you can express a whole program in one line lends itself to disciplined thinking; that much is clear from some of the results I have looked at.
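
                The canonical J example is the arithmetic mean, written +/ % # (sum, divided by, count). Spelled out the long way in Clojure, that one-liner is roughly:

                    ;; J's  +/ % #  (mean), written out the long way
                    (defn mean [xs] (/ (reduce + xs) (count xs)))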

            • Scott Locklin said, on September 13, 2012 at 4:10 am

              I haven’t interacted much with the Clojure community; just the Common Lisp guys -the basics of Clojure are pretty easy once you set up an environment. As for CL, while grizzled old badasses like Richard Fateman were approachable and helpful, as were evangelist/educators like Peter Seibel, if I needed to figure out where an iterator was, or I had any complaints about the hyperspec being unclear, or the lack of numerics libraries or some SLIME thing being broken, I was generally denounced as a no good skunk and told to go back to the Matlab lemonade stand where I belong. I think there was some kind of bunker mentality defensiveness thing going on. It seemed like a lot of the community was also involved in building Yet Another Common Lisp, rather than extending one of the more useful ones with libraries which could help mortals like me. Obviously anyone who writes a language or a chunk of one is a smart guy, but it still cheeses me off that The World’s Best Language sucked at simple things like providing transparent libraries. The attitude towards n00bs seemed overtly hostile, rather than, “holy smoke dude; look at this cool thing!”

              A friend of mine describes Common Lisp as the ruins of a forgotten civilization. I figure that makes the present day inhabitants of the ruins hostile barbarians, punctuated by the occasional Roman Nobleman like Dr. Fateman. No offense to asciilifeform; some of my best friends and all. I just like the analogy.

              The Lush guys, I exclude from this assessment. They were very nice people, and very patient and helpful, especially considering all the dumb questions they got from n00bs, and the fact that they were very busy doing other things.

              • asciilifeform said, on September 13, 2012 at 5:17 am

                The “Common Lisp as the ruins of a forgotten civilization” analogy is spot-on. Take, for instance, the fact that CL treats characters as logically-distinct entities from bytes (or even small integers.) A character can be assigned attributes such as glyph type, font, emphasis, and so forth. This is an artifact left over from the glorious Symbolics systems, where said character could be sent directly to an object in the windowing system and automagically display correctly using said attributes. Whereas nowadays the character-type is an endlessly-maligned feature, that was at last grudgingly forgiven (by some, though not all) when Unicode was introduced and every other programming system had to be clumsily retrofitted with it. The “C” notion of strings is laughably inadequate in quite a few ways, and yet it is the cultural default today – partly because the infrastructure that rewards “civilized” behaviour in characters and strings spent so many years languishing in ruins (arguably it is mostly in ruins still.)

                Likewise, paths in Common Lisp are logically-distinct entities from mere strings: they can be meaningfully-decomposed into various parts on the fly, and re-assembled to one’s liking with substituted values for said parts. To the denizen of modern programmer culture, where MS-Windows and Unix filesystems are all that is, was, or could ever be – it seems like a pointless frill.

                So the barbarians pick apart the stones of Roman roads and baths, to build their dour little hovels. Just as proponents of “modern” Lisps seem keen on introducing the “C-world” idiocies of characters-as-unsigned-bytes, paths-as-strings-of-unsigned-bytes, and strings-as-null-terminated-blobs-of-goo into the Lisp world. As well as the far more destructive “it is OK for a program to crash upon encountering an error condition” idiocy.

            • asciilifeform said, on September 13, 2012 at 4:16 am

              The unfortunate aspect of “cultural resets” is that, like all revolutions, they are tremendously destructive, discarding many valuable things due to “guilt by association with the Ancien Régime” – and sometimes from sheer novelty-for-novelty’s-sake.

              Clojure threw out quite a few babies with the bathwater, including not only arguably-minor features such as reader macros, but the far more important Common Lisp condition system. Clojure barfs Java stack traces where CL would offer a sane condition-and-restart scenario. In my attempts to experiment with Clojure, I could not help but barf right along with it. CL conditions make errors meaningful – and often correctable in a reasonable and automatic way at run-time: something I have yet to encounter in any other programming system’s exception-handling mechanisms.

              I understand that Java interoperability makes it suitable for certain applications (although I never understood why ABCL was never hyped for these applications…) but it truly bothers me that Clojure is being hailed as a “modern Lisp.” In every single respect where it differs from Common Lisp as a language, rather than socially, I find it to be a serious step backwards.

              I’ve seen this before, with NewLisp and its tacit re-introduction of 1970s mistakes in Lisp design (dynamic scope, bringing the Funarg Problem back from the grave, and the like.) All for the sake of the coveted cultural reboot. The results of such reboots inevitably resemble the stately mansions abruptly turned into communal housing for the rabble during the French and Russian revolutions; once the drunks and beggars have the run of the place, everything about it that drew envious glances from outsiders seems to vanish without a trace.

              • Scott Locklin said, on September 13, 2012 at 4:43 am

                I used to use the ABCL IDE instead of emacs, back when it was only half functioning. I built a few cool things in it using Norvig’s book, but my employer at the time was not interested in embedding it into the rest of their code (which was Java and Matlab). Never fiddled with the Java bits.

                I won’t disagree with you about the stacktrace barfs you get out of an unhappy Clojure environment; that, and the lack of native profiling facilities, is offensive and disgusting. I guess the thing is, to the legions of Java guys out there, these are features. I never investigated the Java interop in ABCL (I wasn’t interested at the time), but in Clojure it is dirt simple. This solved several important problems for me, and it is the main reason I chose to use it. That, and the fact that a lot of other people are using it. I know you like Lisp all the way down, but I got problems to solve that are already half done in Java. That was my philosophy with Lush as well. FWIIW, when I was using it, ABCL also barfed Java stack traces most annoyingly.

                The exception handling in Clojure is a bad design. If I came to rely on having a good one like in CL or OCaML, I’d be pissed too. I guess I’ve never spent time with a good one though, so I haven’t missed it yet. The error sauce is pretty weak in Lush, which is what you’d expect when you have access to pointers.

                FWIIW, your phrase “Clojure is the False Lisp, which Reeketh of the Cube Farm” cracks me up. It does smell a bit like a carpety cube farm, but it works pretty well for what it is. Beats writing Java.

              • fogus said, on September 13, 2012 at 3:21 pm

                > Common Lisp condition system.

                A condition system in Clojure is a library — a few in fact. Granted, none provide a full-CL-like capability and none are integrated in the language proper (i.e. its stock REPL).

                > In every single respect where it differs from Common Lisp as a language,
                > rather than socially, I find it to be a serious step backwards.

                It’s unfortunate that you’re unable to find a single advantage to any of its differences, but then again Common Lisp is in all likelihood the ultimate language. It’s understandable that someone satisfied with Common Lisp has no desire to use Clojure – or anything else for that matter.

                > All for the sake of the coveted cultural reboot.

                I have very strong doubts that Clojure was created for the purpose of creating a new Lisp culture. Instead, I think it was created to solve the problems that its creator was trying to solve. The culture of Clojure just happened.

  6. vemv said, on September 13, 2012 at 3:10 am

    That one “guy” turns out to be the author of Clean Code, a must-have for journeyman devs lately.

    Clojure *does* distill a subset of OO… http://clojure.org/protocols

    >> Therefore most people think that’s what programming is.

    In fact programming (and cognition, in general) is all about abstraction. There will always be a case for not-GC langs, but they’ve become a mere optimization.

    When Robert talks about ‘the last language’, he obviously is making a tongue-in-cheek, deliberately controversial claim. But truth is Clojure’s is one of the saner sets of defaults yet.

    • Scott Locklin said, on September 13, 2012 at 3:19 am

      As I said, Clojure is pretty good. Python has a better claim as a “last language” IMO, as it’s more friendly and suited to people of differing skill levels.
      I’d rather use Clojure, personally, but I think Python is a lot more “lasty” than Clojure is.

  7. Scott Burson said, on September 13, 2012 at 5:23 am

    Ascribing Clojure’s success to Paul Graham’s writing skills, without ever mentioning Rich Hickey, seems odd and risks misleading readers unfamiliar with the situation. I suppose what you’re saying is that Graham’s writings have helped to evangelize Lisp in general, and Clojure has benefitted from that, which I guess could be true to some extent. But I’m sure Clojure’s success has far more to do with Hickey’s elegant, bold, and creative language design.

    • Scott Locklin said, on September 13, 2012 at 5:35 am

      I don’t think it is much of a stretch to ascribe some of Clojure’s success to Graham’s essays. Hickey himself was surprised at how popular it got. Obviously, this wasn’t everything, otherwise we’d all be using Arc instead.

      • asciilifeform said, on September 13, 2012 at 12:18 pm

        Well, Arc turned out to be a mere Scheme with shortened operator names – so that Paul Graham could use Vi with less wrist pain… Hence the nearly-total lack of interest, especially from those of us who have mastered non-crippled editors.

        • Scott Locklin said, on September 13, 2012 at 8:48 pm

          PG uses vi?
          Thanks a lot; I think you have ruined one of my programmy heroes for all time. I wonder if a chording keypad works for emacs keystrokes?

          • asciilifeform said, on September 13, 2012 at 11:04 pm

            Emacs is infinitely-programmable, so the answer is yes. I’ve personally used it with a set of pedals (a cheap Kinesis unit.)

            • Scott Locklin said, on September 13, 2012 at 11:07 pm

              I’m picturing your development environment as looking like a pipe organ.

  8. er guiri de lamiga de la prima esa said, on September 13, 2012 at 12:27 pm

    Great writing. I dabbled a bit in Clojure, and while I thought it was funny, it never felt as good as LISP or Scheme. I used LISP when I started in NLP, but C surely beats the hell out of it performance-wise.

    Forget Uncle Bob. He has made a good point or two in his life, but some of his stuff is insufferable. E.g., in the clean code book he claims that no function should have more than two lines. He then applies it to a very, very simple example, and manages to make the code unreadable. Do you think he then moderates his view? No way.

    • Scott Locklin said, on September 13, 2012 at 7:53 pm

      Lol; maybe Uncle Bob should be using APL, where this is actually possible. Of course, it’s all unreadable.

    • Robert Martin said, on September 13, 2012 at 9:52 pm

      Two lines is good. Four or five aren’t bad. More than that is questionable. The real rule is that you should not be able to extract one function from another.

      Am I moderate? No. Was the code unreadable? Of course not.

      I’d be happy if you forgot me. Since you haven’t, I suggest you quote me a bit more accurately.

  9. etrading said, on September 13, 2012 at 12:55 pm

    Are you familiar with the LMAX Disruptor work? They made Java go fast, but they needed custom collections and CAS-based lock-free concurrency to do it.

    • Scott Locklin said, on September 13, 2012 at 7:38 pm

      I did look at their whitepapers; interesting project. It is, of course, a different kind of fast than other kinds of fast or “fast in general.”

    • John Flanagan said, on September 17, 2012 at 4:38 am

      I have, in fact, looked quite closely at their work. My current job has, as its centerpiece, a lockfree shared memory message passing system which I built from scratch. It isn’t quite as fast as Disruptor (~300 ns to pass a message rather than ~50 ns), but I had a different set of requirements which mandated a different design.

      My two crucial requirements:
      – A slow consumer can’t be allowed to cause the producer to block.
      – Writers and readers are separate processes that can die ungracefully at any point.

      Between these requirements, I wasn’t able to figure out a safe way to use a ring buffer. Instead I had to use singly linked lists, which is the chief reason for my system being 250 ns slower. Less cache locality, more pointer chasing. Lockfree singly linked lists are dead easy to implement, and have the advantage that every list write or read is a single go/no-go operation, so either end can die at any time without leaving the queue in an inconsistent state.

      I found the lockfree stuff to be the interesting part of Disruptor. The other stuff, while impressive work, is an example of the contortions that Java forces you to go through to avoid garbage collection. You see those kinds of memory management backflips over and over in every Java system that needs to approximate real-time performance.

      In the fullness of time, I may revisit the ring buffer concept for my system, but I have more important things to do right now, like get the rest of the trading system written so we can actually start making money.

      • JohnOS said, on September 17, 2012 at 8:06 am

        I’m looking at implementing a variation on Disruptor too. In C++, using STL collections to get memory contiguousness for cache locality, and using boost::atomic. RingBuffer seems right for things like order flow, but not for per-instrument data like market data. There we can use a fixed size structure, and apply the typical mkt data design pattern of Q depth of one: if a new tick arrives before the previous one was consumed, simply overwrite the stale tick with the new one.

        • JohnOS said, on September 17, 2012 at 8:07 am

          And thanks for the 250ns vs 50ns data – very interesting to see the cache locality effect quantified like that…

          • John Flanagan said, on September 17, 2012 at 2:07 pm

            However, at that level of performance, you start to be at the mercy of Amdahl’s Law. There’s no way to do any useful PROCESSING of a message in 50 ns (and, for that matter, it probably took well over 50 ns to assemble the message before enqueueing it), so the message processing time becomes the dominant factor for both latency and throughput. Which is admittedly a nice place to be, if you compare it to, say, socket based IPC, where the queue time is likely to be vastly larger than the message processing time.

            • asciilifeform said, on September 17, 2012 at 2:32 pm

              > There’s no way to do any useful PROCESSING of a message in 50 ns

              This might be the time to break out the FPGAs.

              • John Flanagan said, on September 17, 2012 at 3:54 pm

                Less useful than you would think. 50 ns is not a lot of time. Even on a top end 3.4GHz CPU, it’s only about 170 clocks. Disruptor is impressive because it’s shuttling a message between two threads in about 170 clocks.

                High end FPGA clock speed is only around 500 MHz. Which means that 50ns would only give you about 25 clocks to do anything with. And if you somehow had a Disruptor type thing running on the FPGA, it probably would be taking a lot longer than 50ns to be delivering a message, due to the lower clock rate, so you’d already be in the hole by however much longer the message took to be delivered to you due to the slow clock.

                FPGAs can compete with general purpose CPUs on throughput only when they can either massively parallelize a problem, and/or can massively pipeline the problem. If the problem is massively parallelizable, then you also could be just throwing more general purpose CPUs at it. And if the problem is massively pipelineable, that doesn’t mean that you will get good LATENCY from the FPGA, because pipelining improves throughput, not latency. And the slow clock rate means your latency probably isn’t going to be very impressive.

                • asciilifeform said, on September 17, 2012 at 4:13 pm

                  Don’t be fooled by the slow clocks. An x86 CPU accomplishes very little with each tick of the clock (progressively less in the past decade, as well, due to the love of marketing hucksters and their chumps for high clock rates. Perverse incentives, perverse results.)

                  FPGAs are interesting because they open up huge swaths of “design space” that were previously out of reach to mere mortals. For instance, you can junk the whole concept of CPUs, threads and message-passing. Go with a straight-dataflow paradigm, where all operations are part of a dependency graph (and if your chip is large enough, exist at all times as physical objects which wait for their inputs to become available, and signal their successors within picoseconds of their output becoming ready.)

                  If you are in a line of work where bits must move as quickly and reliably as is physically-possible, you need to stop thinking in terms of processes and threads and start thinking in terms of flip-flops and transmission lines. And design your own machine architecture. I am told that houses of HFT and other high-finance have been buying up top-of-the-line FPGA hardware. Not being an insider, I don’t know for certain what they are doing; but I suspect that HFT specialists understand that general-purpose PC hardware is a crock of shit, plain and simple, that cannot be relied upon to display text in a word-processor in real time, much less perform operations with sub-millisecond hard-real-time constraints.

                  • John Flanagan said, on September 17, 2012 at 4:33 pm

                    Speaking as an insider, I can tell you that most HFT firms playing around with FPGAs are doing so because of slick-talking FPGA marketing hucksters. The more that perverse incentives change, the more they stay the same. 🙂

                    I might have to dive into that design space eventually, but I would need to be bumping my head on the performance ceiling of conventional hardware first, and I’m not there yet.

                    • Scott Locklin said, on September 17, 2012 at 6:09 pm

                      I keep hearing from outsiders who are certain that FPGA is used in HFT, but everyone I know in the business says exactly what you say. The only guy I know who might actually end up doing something real is a professor type who is building a fast network adaptor.
                      FPGA snake oil has been around for a long time; they tried to sell LBL on “Matlab FPGA supercomputers” at one point. It’s a neat idea in principle, but in practice…

                    • asciilifeform said, on September 17, 2012 at 6:13 pm

                      > I keep hearing from outsiders who are certain that FPGA is used in HFT, but everyone I know in the business says exactly what you say.

                      To be fair, if you were secretly using FPGAs to rake in the HFT cash, this is exactly what you would want your opponents to believe. This could even be one of the reasons for the flood of snake oil.

                    • Scott Locklin said, on September 17, 2012 at 6:36 pm

                      Considering I’ve heard from, I dunno, a few dozen actual HFT types on the subject, including one who worked on a failed pilot program at one of the big houses, and I belong to forums with 100’s more, you’d think one would fess up. Nope: lots of outsiders who are very sure about FPGA secret sauce though. Because the guys selling FPGA sauce are all on the outside giving talks about why their technology is great.
                      Never even saw a single job ad relating to it. And I have seen HFT job ads relating to very abstruse mathematical tools.
                      Edit add: I did hear from one guy who uses a bank of FPGA’s to calculate risk exposure, which isn’t quite the same thing as trading with them.

        • John Flanagan said, on September 17, 2012 at 2:22 pm

          You will probably find that approach unworkable in a Disruptor style queue for market data because:
          1) You can’t tell when a reader has already progressed past that tick, and would thus miss your update (which is bad)
          2) You can’t tell when a reader is currently processing that tick, which means you could potentially write over the prior record when he has only read part of it, making the tick inconsistent (which is also bad).

          Those issues may be avoidable, but only at the cost of additional atomic instructions on both the read and write ends, which would significantly impair the performance.

          If you are writing a latency sensitive tick consumer, you should do so with the design requirement that it reads the ticks as fast as the ticks come in (modulo some reasonable amount of queueing depth). If you can’t process the ticks as fast as they come in, then your latency will be crap anyway.

          There are definitely applications which could reasonably benefit from coalescing of ticks, but those apps should either be designed to listen to a throttled feed (which does the coalescing on their behalf), or listen to their price stream on a separate thread, and coalesce internally to feed the slow thread.

  10. Ephicks said, on September 13, 2012 at 3:14 pm

    > Factors of 100k to 1E6. I may be wrong, but I’m guessing trees confuse the bejeepers out of the JVM (if some nerd becomes indignant at this assertion: you’re only allowed to comment if you have a kd-tree or R* tree running on the JVM within a factor of 100 of libANN for sorts and searches on dimensions > 5 and 100k+ rows)
    Seriously? I can’t let you say that!!
    The only possible explanation I am able to think of is that you have performed the Java tests on a swapping machine!
    I have done the following test with ANN and a Java code (developed for astronomical applications):
    – kd-tree of 1_000_000 uniformly distributed points in dimension 6
    – 100_000 k-nn queries (with k=10) from uniformly distributed points
    Here are the ANN command lines:
    – data_size 1000000 dim 6 gen_data_pts distribution uniform
    – build_ann split_rule standard shrink_rule none
    – query_size 100000 dim 6 gen_query_pts distribution uniform
    – epsilon 0 near_neigh 10 run_queries standard
    ANN Results:
    –> kd-tree creation: 2.36s, queries exec time: 15.36s (1.536e-4s by query)
    With Java code (not multi-threaded):
    –> kd-tree creation: 2.14s, queries exec time: 25.20s (2.52e-4s by query)
    I expected ANN to be faster than Java at kd-tree creation. Its code may still be optimized.
    For queries on kd-trees, C is often ~2x faster than Java (array bounds check?).
    We are very far from 100k to 1E6!!
    (Ubuntu 10.04, Java7, 4GB RAM, Intel E8400)

    • Scott Locklin said, on September 13, 2012 at 7:51 pm

      I got excited for a minute, then I looked at your numbers. You’re right about where the Clojure kd-tree on github is; in fact, that one might be a bit faster (it’s a beautifully written piece of code). I’m perpetually confused by numbers like this, because in Lush + libANN, I get build and query times MUCH faster than this. Yes, factors of 100k to 1M (less on the build, but it’s the query which is important to me). The high end of that estimate is a bit of a trick, as I’ve got a custom query which just grabs “nearest leaves” without calculating the distances (this works for most cases where your NN isn’t weighted), but something is weird here.
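
      For reference, a naive exact kd-tree (the kind I mean by “naive”) is only a screenful of Clojure. This is a from-scratch sketch for illustration, not the github library mentioned above, and not tuned for speed:

          ;; build: median split along cycling axes; points are vectors of numbers
          (defn build-kdtree
            ([points] (build-kdtree points 0))
            ([points depth]
             (when (seq points)
               (let [k      (count (first points))
                     axis   (mod depth k)
                     sorted (vec (sort-by #(nth % axis) points))
                     median (quot (count sorted) 2)]
                 {:point (nth sorted median)
                  :axis  axis
                  :left  (build-kdtree (subvec sorted 0 median) (inc depth))
                  :right (build-kdtree (subvec sorted (inc median)) (inc depth))}))))

          ;; squared euclidean distance
          (defn dist2 [a b]
            (reduce + (map (fn [x y] (let [d (- x y)] (* d d))) a b)))

          ;; exact nearest neighbour: descend the near side, then check the far
          ;; side only if the splitting plane is closer than the best so far
          (defn nearest [tree target]
            (when tree
              (let [{:keys [point axis left right]} tree
                    delta (- (nth target axis) (nth point axis))
                    [near far] (if (neg? delta) [left right] [right left])
                    best  (or (nearest near target) point)
                    best  (if (< (dist2 point target) (dist2 best target)) point best)]
                (if (< (* delta delta) (dist2 best target))
                  (if-let [cand (nearest far target)]
                    (if (< (dist2 cand target) (dist2 best target)) cand best)
                    best)
                  best))))

          ;; (nearest (build-kdtree [[2 3] [5 4] [9 6] [4 7] [8 1] [7 2]]) [9 2]) ;=> [8 1]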

      • Ephicks said, on September 14, 2012 at 9:00 am

        I have compared the comparable: exact searches on a kd-tree with leaf size of 1, in C and in Java.
        To compare two languages, it is not fair to use different algorithms, like a Java kd-tree with exact searches
        and approximate searches in a different data structure in C (as is possible with ANN).
        Moreover, in my tests a query is performed in ~0.25ms. A factor of 100k leads to 2.5 nanoseconds per query.
        A 3GHz processor has a clock cycle of ~1 nanosecond. I’ll let you draw the conclusion.

        • Ephicks said, on September 14, 2012 at 9:14 am

          Sorry, a 3GHz processor has a clock cycle of ~0.3 nanoseconds. It does not change the conclusion.

        • Scott Locklin said, on September 14, 2012 at 10:35 pm

          Somehow I missed that you were querying 100k points. That’s pretty good, though I still win using plain old libANN kd-trees (I was getting 20us or so per query on a 2GHz Core Duo; less with the leaves trick). I don’t suppose you’d share your source?
          One place I have yet to look for good kd-trees is the Weka library. If they can do something like what you are getting, the idea becomes usable.
          Thanks, anyway, for demonstrating that it is in principle possible on the JVM to do reasonably well at trees.

  11. cc said, on September 13, 2012 at 6:31 pm

    ” Some languages are vastly more powerful than others, and can’t be used by ordinary people”
    — Please expand on this. What language? Have you used it and what for?

  12. winestock said, on September 13, 2012 at 7:10 pm

    In case you’re wondering why you’re getting more than your usual number of comments, then blame me. I submitted your article to Hacker News. The comments on APL are icing on the cake. That conversation between you and asciilifeform on Common Lisp and jobs behind closed doors is icing on top of another layer of icing.

    • Scott Locklin said, on September 13, 2012 at 7:41 pm

      You should send some traffic the way of asciilifeform if you don’t already; his blog is great, and he is a true keeper of the sacred flame. I’m just a plumber who solves problems.
      While I appreciate the linkage, I thought this was the most meaningless thing I’ve written this year. It was an emotional reaction to something fairly meaningless. Damn internets.
      When I write about APL, though (next post, or maybe one later: I’ve only been fiddling with APL for a few days), that’s something that people should read, because I will unleash great nerd secrets.

      • winestock said, on September 13, 2012 at 8:30 pm

        I have Loper on my feed reader, but asciilifeform has asked that his essays not be posted to Hacker News, anymore; he is viewed as a crank over there. I only post to Hacker News and my Twitter account, so I can’t spread his words any further.

        There’s a line from the Unix-Haters Handbook that describes him: “the hopeless dream-keeper of intergalactic space.” If he’s reading this, then I know that he’ll get the reference. As you said, he’s the keeper of the sacred flame. I quoted him respectfully in my essay and I’ll do so again when I write some more.

        I’ll spread your APL article as soon as I see it.

  13. Robert Martin said, on September 13, 2012 at 10:04 pm

    Cool your jets, Scott; as another poster said, the talk was somewhat tongue-in-cheek.

    The real issue I was trying to get people to think about was simply this: Have we seen every kind of language we are likely to see?

    I think the answer to that is difficult to determine. I think it’s entirely possible that in the last 60 years we’ve examined virtually every type of language there is. One line of evidence suggesting this is the fact that most “new” languages are actually old. Clojure, F#, Scala, Ruby, Python, Java, C# are great examples. There’s nothing particularly new about any of them.

    Oh, there are some clever refinements, and cute little tricks in each of those languages. But there’s nothing of the ground-shaking newness of Prolog or Forth, or Haskell, or Erlang.

    Now, if we have seen every kind of language there is, including graphical languages, then it might just be time for us to do what every other industry in our position has done: consolidate. Maybe, just maybe, we should prune the tree of languages down to just a few.

    And if we do that, then (and here’s where we get really tongue-in-cheek) might it be possible to reduce that number down to one? Other industries have done this! Chemistry used to have a huge menagerie of notations and nomenclatures. Biology too. And mathematics used to be a free-for-all.

    All these fields benefited immensely from such consolidation. We might too. Imagine if there were just one language. Your resume would say: “programmer”. The code published in articles would all be in the same language. The systems you worked on would all be in the same language.

    If we did reduce our languages down to just a few, maybe just one, what language would that be? If I were voting right now, I’d vote for Clojure, for all the reasons I described in my talk. If the vote is taken 50 years from now, the language would likely _not_ be Clojure. But I’d wager good money that it would be some lisp variant.

    • Scott Locklin said, on September 13, 2012 at 10:33 pm

      Hey Bob; thanks for entering the lion’s den. I usually send an email to the fellow I’m bagging on, but it got Hacker-Newsed too quickly. Funny, as I said on the top, this is just me venting: nothing to see here, move along, but everyone loves a fight.

      I agree with you that we’ve explored much of the space of possible languages, and that nothing new has happened in a long time. This doesn’t mean that we’re done; it just means the heroic era of computer science that gave us all these languages is done, just as the heroic era of space travel that gave us Neil Armstrong is done. I also agree that language profusion is annoying, if only because it forces me to be a polyglot.

      The problem is, people need/want/can deal with different things. Languages which win aren’t always best, even at what they do.

      I’ve already cast my vote for Python as the most “lasty” language presently available. It does most things that need doing, and can be happily and efficiently used by people in a wide spectrum of skill levels. I don’t think Python is ideal: I’d rather use Clojure, but I think Python is a lot more “lasty” than Clojure is. If only because most people freak out when confronted with emacs and a whole lot of parentheses, more than they do with white space. If you want to make Clojure “last” based on aesthetics, you should pick Common Lisp. If libraries and utility, well, Python wins.

      I don’t think there will be a last language: people use computers for too many different things. You look at physics and math: they also have many mutually unintelligible notations. It’s hard for an atomic physicist to read a solid state paper, and almost impossible to read a general relativity or string theory paper.

      Finally: go build something in Labview. Graphical languages definitely have a lot more to offer. It’s not very “programmy” but it allows people to solve complicated problems using a computer in a very efficient way. There is no reason the whole LAMP stack doesn’t have a graphical language to make it go; interfacing with hardware is much more difficult than pasting a database to a browser. It would make most programming that happens today obsolete. I think it only hasn’t happened yet because people who do this kind of programming are unaware that it is pretty easy to build a graphical language.

      • Dan Gackle said, on September 14, 2012 at 1:01 am

        Do spreadsheets count as a graphical language? I think there is room for innovation there. Spreadsheets are basically REPLs. The language is limited, but could be generalized (for example, to include subroutines, or types). People already do a lot with spreadsheets and could in principle do much more.

        • Scott Locklin said, on September 14, 2012 at 1:10 am

          I’d count them as such. Doubly so with companies like “Advantage for Analysts” where you can draw flowcharts in your spreadsheet. Don’t think they sell it any more though, from a quick look at their website. Used to be an inhouse tool for doing structured finance at an investment bank.

          • Dan Gackle said, on September 14, 2012 at 1:22 am

            I must say that yours is the first credible defense of graphical programming that I’ve heard in a long time. Most people who go in for that sort of thing naively underestimate what’s involved. History is littered with absurdly oversimplified boxes-and-lines programming environments. The demos always show the same thing: boxes with 2 in them feeding into a box with + in it and 4 coming out the other end. The trouble, of course, is that this doesn’t scale to anything remotely complex by programming language standards.

            Nevertheless, the existence of productive graphical environments like Labview counts for a lot – as you point out. My take on this is that they aren’t good for general-purpose programming and the trick is to carve out, in each case, the domain that the tool *is* good for. I know nothing of Labview but it seems like this argument applies to it. It definitely applies to spreadsheets, which are not a general-purpose programming tool (hence the need to bolt on VB scripts and such). What hasn’t got a lot of attention is how interesting these subsets of general-purpose computation can be. For one thing, they are much simpler, while still being powerful enough for many purposes. It’s no coincidence that they tend to be favored by domain experts/modelers who are not professional programmers.

            (I suppose I should add that I’m working on an attempt to make spreadsheets more computationally powerful.)

            • Scott Locklin said, on September 14, 2012 at 1:41 am

              I figure National Instruments’ continued existence, and the fact that 3/4 of the Advanced Light Source runs on Labview, rather indicates it’s a useful idea. Critics have never used it, or have only used something fairly bad like Simulink (which is bad).
              See if you can’t get a demo from A4A. Their tool is pretty good. Also written in a non-standard language, which I won’t reveal, in case it’s a trade secret.

              • asciilifeform said, on September 14, 2012 at 5:25 pm

                I suspect that what people actually like about programming systems such as LabView and spreadsheets is the rapid OODA loop, rather than the graphics. Programming should resemble the experience of shaping a ball of clay, where every action you take results in immediate and meaningful feedback, and the idiocy of having to “run” your creation before it comes alive is dispensed with. A minimum of hidden state lets you maintain an accurate mental model of your work at all times. Whereas the computer as we know it today is a sewer of uncertainty.

                • Dan Gackle said, on September 14, 2012 at 6:29 pm

                  Yes. Spreadsheets are REPLs and the (hundreds of) millions of spreadsheet users have the same kind of interactive computing experience that Lisp, Smalltalk, APL, Forth, etc., programmers enjoy. A spreadsheet user told me, “I like to get things into Excel so I can play with the data.” That phrase, “play with data”, captures how people compute with spreadsheets and why many are so passionate about them. I find it interesting that spreadsheets – hardly a niche technology – have so close an affinity with the classical ‘elite’ languages that would appear to be at the opposite end of the spectrum. It suggests that the spectrum itself (the way we think about how to categorize programming languages) is wrong.

                  People often remark that spreadsheets are a functional programming tool, and one can see why: spreadsheet formulas are pure expressions. But I think this analogy is overrated. It fails to take into account the spreadsheet UI, which certainly allows you to mutate the values of cells and trigger side effects. Indeed these side effects (recalculation) are the essence of the liveness of the spreadsheet.

                  Perhaps the closest affinity is to Smalltalk, with its emphasis on total liveness of both data and code. (It’s no coincidence that Alan Kay has written about and been keenly interested in spreadsheets for 30+ years.) But there are also clear links both to Lisp – because the formula language, though infix, is an expression language that can be made more powerful by becoming more Lisp-like, e.g. by adding lambdas – and to APL, because the cells of most spreadsheets are organized into arrays that share a single computation (i.e. the same formula) and are piped into other arrays that reference them.

                  What I want to do (I’d like to say “what I’m doing”, but the proof of that can only be in the pudding) is develop these latent qualities of spreadsheets into an implementation that realizes their power, but that remains familiar and comfortable to spreadsheet users.
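
                  As a purely illustrative rendering of those affinities (hypothetical names throughout, not a description of any existing spreadsheet), here is a small Clojure sketch in which formula cells are pure expressions, the formula language admits lambdas, one formula is shared APL-style by a whole column, and recalculation after an edit is the only side effect:

                    ;; Input cells hold values; :retail is one formula shared by the
                    ;; whole column, written with an explicit lambda over :prices.
                    (def sheet
                      (atom {:prices [10 20 30]
                             :markup 0.25
                             :retail (fn [s] (mapv (fn [p] (* p (+ 1 (:markup s))))
                                                   (:prices s)))}))

                    ;; Recalculation: evaluate every formula cell against the current
                    ;; input values (formulas in this sketch reference inputs only).
                    (defn recalc [s]
                      (into {} (map (fn [[k v]] [k (if (fn? v) (v s) v)]) s)))

                    (recalc @sheet)                  ; :retail => [12.5 25.0 37.5]
                    (swap! sheet assoc :markup 0.5)  ; "edit a cell" through the UI
                    (recalc @sheet)                  ; the dependent column updates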

                • Scott Locklin said, on September 14, 2012 at 10:46 pm

                  I think the graphics are important for another reason: you don’t have to memorize anything. It’s all right there in front of you. If you can draw a flowchart, you can make the computer do stuff. At some point, it’s not even programming, though it accomplishes things that previously only programming could do. I mean, are people doing CAD “doing programming?”
                  I think a lot of the process of doing programming is fetishized as well. Some guy on HN was whining that Labview doesn’t have version control that works like text-language VC. The desire for such a thing is not focusing on the problem: Labview solves the problem without catering to a process that makes conventional programmers and pointy-headed manager types happy. Solving problems, not ritualistically following a process.

                  I think it’s a very useful technology that is underappreciated. Time will tell if I’m right or not. Meanwhile, people happily and efficiently solve big problems in Labview that would suck otherwise.

                  FWIW, I haven’t found source control doodads in the J environment yet either. I haven’t figured out why this is so, but it will be interesting to see how it is handled.

                  • asciilifeform said, on September 14, 2012 at 10:55 pm

                    It isn’t the graphics per se, it is the explorability. The latter isn’t limited to, nor necessarily flows out of, programming systems where the user must drag boxes and arrows around with a mouse. The Symbolics machines had excellent explorability, because it was possible to click on any symbol or graphical construct and be taken to: its source code; a well-organized catalogue of every place where said construct occurs in the system; and all relevant documentation, complete with listings of other relevant concepts. Note also that no clear boundary existed between the “operating system” and the user’s code. Everything existing on the machine at a given time was explorable in this way. It is very much possible for a largely text-based programming system to be explorable, though I know of none available today of which this can be said.
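
                    (For what it’s worth, a largely text-based system can get at least part of the way there from a prompt. Here is a minimal Clojure illustration using only the stock clojure.repl namespace; it is not a claim that this approaches what the Symbolics machines offered:)

                      (require '[clojure.repl :refer [doc source apropos dir]])

                      (doc frequencies)     ; the docstring for a var, at the prompt
                      (source frequencies)  ; its source code, as shipped in clojure.core
                      (apropos "freq")      ; every loaded var whose name matches
                      (dir clojure.repl)    ; a catalogue of a namespace's public names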

                    • Scott Locklin said, on September 14, 2012 at 11:04 pm

                      It’s certainly true in Labview; most of the device drivers for hardware are understandable internally by clicking through them. You also get useful information about types and such by hovering over wires and interfaces, or by stepping through the algorithm and looking at what’s going on in all the pieces in “debug” mode.

                      Nonetheless, the graphics *do* make it easier to make the computer roll over and do tricks. It’s easier to take in a complex algorithm in a screen of flow charts than in a giant page of text. It’s also easier to know how and where to make changes, even when the guy who wrote it is kind of an obscurantist idiot.

                      You should write one! Even if it’s only interesting to you in a “how to” sense, it might give you some ideas. CLIM must be good enough to do it by now.

                    • asciilifeform said, on September 15, 2012 at 1:35 am

                      There appears to be a maximum depth set for these threads, so this will go here:

                      > It’s easier to take in a complex algorithm in a screen of flow charts than a giant page of text.

                      I can’t speak for everyone, but when I program, I like to be able to make use of the rather hefty chunk of my brain that evolved as a language co-processor. Language provides compact abstractions in a way that is difficult to beat with graphics, except for inherently visual tasks (the motion of mechanical parts, etc.).

                      I have never met a visual programming system that I didn’t instantly loathe. Perhaps if I ever see a diagrammatic representation of an algorithm that doesn’t immediately strike me as merely a bloated version of its textual representation, I will reconsider. On the condition, I should add, that said system doesn’t force me into endlessly repetitive, idiotic click-and-drag motions: something they all seem to have in common.

                      Note that chip designers have the option of using schematic editors, but almost never use them for modern VLSI-scale work: hardware-description languages dominate. This is not an accident.

                    • Scott Locklin said, on September 15, 2012 at 2:38 am

                      Labview doesn’t exist to implement algorithms. It exists to make complex hardware function with a minimum of fuss. That’s the point; it ain’t coding. It’s doodling something your computer understands. Aesthetically you’d hate it, but from a productivity standpoint, you can’t beat it. Solving the problem, not following the familiar pattern of solving a problem. You can solve things in a day that would take a month or a year the standard way, and solve them in a way that can be easily modified by a child without breaking anything.
                      Writing code to do this sort of “scientific hardware control” would be like writing code to make an image in Photoshop: possible (people still do it), but a titanic waste of time. IMO, virtually all code (LAMP, cloud, whatever) is a titanic waste of time that could be made simpler using visual tools like this, obsoleting entire fields of programmy drudgery.
                      Most “programmers” would be better off doing something else. If they’re smart, they should think about more important things than selling underpants on the internets. If they’re not, they should be counting fruit in a grocery store. I’d rather hire one retard who can draw doodles than have to deal with a team of 100 code monkeys and managers.

                    • Dan Gackle said, on September 15, 2012 at 3:14 am

                      In Labview, does the graphical logic tend to model physical instruments that are being worked with in, say, a lab? If so, that’s a big deal: it means that there’s a physical correlate to the computation and explains why that domain is a good fit for visual representation. I don’t believe that general-purpose programming can be done this way; we use textual and symbolic notation for a reason. But if I understand you, you’re not arguing that it can, so much as that there are large domains currently done the symbolic way that could more easily be done a visual way. I’m skeptical, unless those domains are modeling something visual (or at least physical) to begin with. It would be interesting to see a working example of, say, a nontrivial CRUD web app done this way. I assume that’s the kind of program you’re talking about when you say LAMP stack? Do you think something like HN or Reddit, for example, could be hacked together more easily in a good visual programming environment? (Personally, I doubt it.)

                    • Scott Locklin said, on September 15, 2012 at 3:33 am

                      Some of the things in Labview are physical, and a wire leading to a gizmo looks kind of the same in Labview as it does in reality. Most are not. It’s a decent data analysis package as well. As is Igor.

                    • asciilifeform said, on September 15, 2012 at 1:31 pm

                      > Some of the things in Labview are physical, and a wire leading to a gizmo looks kind of the same in Labview as it does in reality.

                      Consider the rat’s nest of wires at the back of your computer. I, for one, would have a far easier time with them if they could be magically converted into an editable, textual list of connections and then back, every time I reach for one. My patience for “help the mouse find the cheese” children’s mazes ran out when I was five. Schematics are great when the system in question is simple enough to avoid producing a rat’s nest with intersecting wires of identical appearance, but this is not often the case in large designs.

                    • Scott Locklin said, on September 15, 2012 at 11:31 pm

                      The “wire” interface would work a lot better if the wires automatically shortened and neatened themselves, were colour-coded, and told you what they carry when you hover over them, as they do in Labview.
                      Labview works better than text for what it does. I’ve seen the results of both; anyone who doesn’t use Labview for what it is good for is being deliberately obtuse, even if they have a really good excuse.

              • Snickers said, on July 4, 2015 at 10:16 pm

                Simulink is bad, really? Tell that to the major automotive and aerospace firms that rely on it to write the software in their products.

                • Scott Locklin said, on July 5, 2015 at 1:22 am

                  Compared to Labview, it’s very bad. Probably beats the hell out of writing code to do the same thing though.

                  • Snickers said, on July 7, 2015 at 1:38 am

                    What is so bad about Simulink, in comparison to Labview?

    • Alpheus said, on March 29, 2018 at 2:16 am

      As a mathematician, I find the notion that mathematics isn’t a free-for-all kind of funny. Sure, it looks settled if you’re studying college algebra or calculus. But when you get to the fringes, the notation can get pretty wild.

      Even in something as well-established as basic calculus, rather than settle down on a single notation, we teach students two or three different notations. And then physics and engineering introduce their own.

      So I don’t see us settling down to a single language, although if we did, I would wish we settled on something sensible, like Common Lisp, Forth, or Smalltalk. Those languages, incidentally, I find *very* attractive because of their simplicity, which in turn gives them a power far out of reach of a typical Algol-based language. (And these languages, along with J, stumbled onto a very important idea: the precedence of operators we see in mathematics is for chumps. In computer programming, precedence merely gets in the way of understanding the problem, to the point that when you work in a language that has it, you’re better off neutralizing it by putting everything in those dreaded parentheses that anti-Lispers seem to choke on so much…)
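
      To put the precedence point in code: in an infix language the reader has to remember that 2 + 3 * 4 groups as 2 + (3 * 4), not (2 + 3) * 4; in prefix notation the grouping is the expression itself, so there is nothing to remember. A trivial Clojure illustration:

        (+ 2 (* 3 4))   ; => 14, the grouping that infix precedence would give you
        (* (+ 2 3) 4)   ; => 20, the other reading, and just as explicit here

      J and Smalltalk take the opposite route (strict right-to-left and strict left-to-right evaluation of operators, respectively), but the effect is the same: no precedence table standing between the reader and the expression.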

