Locklin on science

Wolfram Alpha, Semantic Web, and back to the pre AI winter future

Posted in semantic web, Wolfram Alpha by Scott Locklin on May 20, 2009

The latest nerd buzz has been about Stephen Wolfram’s entry into the search engine business. I’ll draw the conclusion before giving you the meat, in case you want the executive summary: it kind of sucks.

This isn’t a big surprise. Most things suck. As my friend Philip says, simple solutions are often better. I remember when we worked together, he was fond of pointing this out in more specific cases, for example: “you have to get up pretty early in the morning to beat linear regression!”

So, what is it? Basically, Wolfram took the Mathematica engine and added a cheap natural language interface to it. Online Mathematica is pretty helpful, as it’s still one of the most powerful computer algebra systems in the world. The natural language interface? Well, it is an extremely cheap natural language interface, not even up to the very mediocre standards of the Ask Jeeves search engine, which was the first popular natural language engine hooked up to the web. As far as I can tell from fiddling with it, Wolfram added the mathematical equivalent of M-x doctor in emacs. This is a type of code with a long and hoary history of niftiness, but fundamental uselessness. It’s also a type of code with an old history of being used in Mathematica-type systems: for example, the last commercial version of Macsyma (I think released in 1998 or so) had a very advanced natural language interface. These sorts of things are easy to write in functional programming languages; they’re sort of what Lisp and ML type languages were invented for in the first place. If you want some classic examples of how this works, you can look at the source code for M-x doctor in emacs (in /usr/share/emacs/lisp/play/) or go look at Peter Norvig’s book, Paradigms of Artificial Intelligence Programming, whose programs are available online. Have a look at Eliza and Mycin. They’re both what used to be called “expert system shells.” What they really are is interpreters where it is easy to update the rules, or to have the program write new rules for itself.
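
To give a concrete flavor of how little machinery this takes, here is a minimal sketch of the Eliza idea, done in Python rather than the original Lisp. The rules are invented for illustration; the point is that “teaching” the thing means appending to a data table rather than writing new code.

```python
# Minimal Eliza-style pattern matcher: a rule table plus a matcher.
# Illustrative sketch only; the rules below are made up.
import random
import re

# Each rule is (regex pattern, list of response templates).
# "{0}" gets filled with whatever the wildcard group captured.
RULES = [
    (r".*\bI need (.*)",  ["Why do you need {0}?",
                           "Would it really help you to get {0}?"]),
    (r".*\bI am (.*)",    ["Why do you say you are {0}?",
                           "How long have you been {0}?"]),
    (r".*\bbecause (.*)", ["Is that the real reason?"]),
    (r".*",               ["Please tell me more.",
                           "Why do you say that?"]),  # catch-all rule
]

def respond(utterance):
    """Return a response from the first rule whose pattern matches."""
    for pattern, responses in RULES:
        m = re.match(pattern, utterance, re.IGNORECASE)
        if m:
            groups = m.groups()
            reply = random.choice(responses)
            return reply.format(*groups) if groups else reply

if __name__ == "__main__":
    print(respond("I need a better natural language parser"))
```

That’s essentially the whole trick: no understanding anywhere, just pattern substitution over a rule table, which is why it’s nifty and fundamentally useless at the same time.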

Expert system shells were hot shit in the pre-AI winter days. If you read “Advances in Computers” (my lodestone for this sort of historical context: I read the whole damn thing in the LBNL library while avoiding writing my dissertation), you can see all the excitement from the time when people first wrote such things, dating from around 1957 when they invented the first one, the “General Problem Solver.” Most of early AI research up until the AI winter (early 1980s) consisted of riffs on this basic theme. One of the great inventions which came out of this sort of thing was, in fact, the computer algebra system (CAS), of which Wolfram is the foremost vendor at present. While a CAS does a lot more than the primitive expert system shells did, it is effectively a subset of the old “General Problem Solver.”
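
If you’ve never seen one, the core loop of these shells is embarrassingly small. Here’s a toy sketch in Python of forward chaining over if-then rules; the medical-ish rules are made up for illustration, and real shells like Mycin layered certainty factors and backward chaining on top of this.

```python
# Toy forward-chaining rule engine, in the spirit of the old expert
# system shells: the "program" is a data structure of rules, so adding
# knowledge means appending to a list. Rules here are invented.

RULES = [
    ({"has_fever", "has_cough"}, "maybe_flu"),
    ({"maybe_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Apply rules until no new conclusion can be drawn (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions hold and it adds
            # something new to the fact base.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
# -> the fact base now includes "maybe_flu" and "see_doctor"
```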

So, it’s no surprise that Wolfram was able to cobble together a natural language system that understands very simple mathematical commands in a sort of pidgin English. I guess the real “breakthrough” of Wolfram Alpha is that it uses data found on the web. What would be really impressive is if he had something like an augmented transition network (ATN to AI nerds) to parse data he found online and place it in context. Briefly, an ATN was a pre-AI winter technique used to parse grammars which work like the English language. The place you’re most likely to have heard of it is in Gödel, Escher, Bach by Hofstadter, wherein he makes the now hilarious claim that ATNs will eventually become powerful enough to form a sort of Sentient AI. This is hilarious because ATNs are useless on languages which are unlike English in sentence structure. So, if you could build a Sentient Computer Program (an idea which itself seems hopelessly funny now) using only ATNs, as Hofstadter thought we might one day, it would imply that people who speak a declined language like Latin or Russian are not sentient. Putting aside the ethnic jokes this makes possible, there are all kinds of other parsing problems which humans easily solve and which ATNs haven’t got the remotest chance with. One example is parsing HTML. I mean, we don’t parse HTML directly unless we’re HTML nerds, but our browsers easily turn it into stuff we can read and make sense of. ATNs can’t help us do this, as the language structure of HTML doesn’t map onto ATNs any better than Arabic does. I’m guessing, since Wolfram is a smart guy, he must have something like an ATN for some kinds of data-bearing HTML. If he can get it to work properly, this would be an important breakthrough. It obviously doesn’t work right yet, and if he does have something like an ATN to help parse information found in HTML, it probably requires lots of human intervention.
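
For the curious, here’s a stripped-down sketch in Python of the transition network idea underneath ATNs: little state machines over word categories which can call each other recursively. The toy grammar and lexicon are invented, and real ATNs add registers and tests on top of this; notice that the networks hard-code English-style subject-verb-object order, which is exactly the limitation I’m complaining about.

```python
# Toy recursive transition network: each "network" walks a sequence of
# word categories, and networks can call each other (NP inside S).
# Grammar and lexicon are invented for illustration.

LEXICON = {
    "the": "DET", "a": "DET",
    "dog": "N", "cat": "N",
    "chased": "V", "saw": "V",
}

def parse_np(words, i):
    """NP network: DET then N. Returns next position, or None on failure."""
    if i + 1 < len(words) and LEXICON.get(words[i]) == "DET" \
            and LEXICON.get(words[i + 1]) == "N":
        return i + 2
    return None

def parse_vp(words, i):
    """VP network: V, then a recursive call into the NP network."""
    if i < len(words) and LEXICON.get(words[i]) == "V":
        return parse_np(words, i + 1)
    return None

def parse_s(words):
    """S network: NP then VP, and the whole sentence must be consumed."""
    i = parse_np(words, 0)
    if i is not None:
        i = parse_vp(words, i)
    return i == len(words)

print(parse_s("the dog chased a cat".split()))   # True
print(parse_s("chased the dog a cat".split()))   # False: wrong word order
```

Feed it a free-word-order language, where case endings rather than position carry the grammar, and this whole approach falls apart.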

“Semantic Web” is the sort of “next big thing” for solving this problem from the other end. The idea of “semantic web” is to solve the problem by phrasing web data in ways which computers (rather than people armed with browsers) can understand more easily. I have always been confused by the idea of “semantic web.” The problem with getting everyone in the world to adopt your standard is one of motivating them. HTML is a worldwide standard because it solves lots of problems. Semantic web only solves the problems of search engine engineers; it doesn’t solve any content creator problems, so I can’t see why any of them would take the trouble to use it. The only types of content creators who would want to use something like this are basically advertisers, who are pretty much useless to search engines. In fact, advertisers steal money from search engines if they appear in an unsponsored search! I mean, that’s how Google makes money! There are of course niche applications of semantic web enabling technologies; it could be very useful for internal databases. But I suspect simple HTML tags and ordinary search engines will work just as well for internal databases. So, to solve the problem of how to make computers able to think about information on the web, you need the right kind of parsing engine for natural language HTML processing.
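
Reduced to a toy, the semantic web idea looks something like this sketch in Python: publish facts as subject-predicate-object triples, and a “query” becomes a trivial lookup instead of a natural language parsing problem. The triples and the query helper are invented for illustration; the real standards are RDF and SPARQL.

```python
# The semantic web idea boiled down: facts as machine-readable triples.
# Illustrative sketch; real systems use RDF stores and SPARQL queries.

TRIPLES = [
    ("Mathematica", "madeBy", "Wolfram Research"),
    ("WolframAlpha", "builtOn", "Mathematica"),
    ("Wolfram Research", "foundedIn", "1987"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given fields (None = wildcard)."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What is WolframAlpha built on?" needs no language parsing at all:
print(query(s="WolframAlpha", p="builtOn"))
```

Which makes the incentive problem obvious: the triples only exist if the content creator bothers to write them, and the benefit accrues to someone else’s search engine.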

Does Wolfram have this “HTML parsing special sauce?” Evidently not yet. There are forms where you can submit data to the thing, Wikipedia style; this is probably how most data gets loaded. Maybe he never will grow special HTML parsing sauce. The problem is actually much harder than teaching computers to read and understand books in natural languages, which they are still largely incapable of. Context is hard. Still, it’s a valiant effort, and a pleasant throwback to a set of largely forgotten technologies. Why were these technologies forgotten in the first place? Mostly: K&R invented C, and Intel invented useful commodity microprocessors. There are tons of useful things you can do with C and a commodity microprocessor; these old AI techniques are not among them. They required much higher-level computer languages, and the hardware to support such things. This form of AI also made a sort of unfortunate detour into technologies like Prolog, which made it really easy to ask a computer for solutions to NP-hard problems without realizing you’re asking it for something it can’t feasibly compute. Finally, there was a serious AI software bubble which popped in the 80s. There were many AI startups which promised big business the world. They failed in that economic apocalypse because they were largely unable to deliver on their promises. PC-style machines and the C programming language made real improvements in business productivity that all the Lisp-AI propeller heads were unable to match with the tools they were using at the time. As such, much “AI” research since the 1980s has looked a lot like signal processing and statistics: fields which map much better onto procedural C and limited-memory microprocessor machines. Most of the “AI” technologies from before 1982 were forgotten and abandoned.

I used to think I could code up an expert system shell for something useful at work. I like forgotten technologies, and I like Lisp. The last time I had this thought, I was plagued by support questions from people I worked with, and considered writing an expert system shell to answer their questions. Why didn’t I follow through with it? Well, it’s back to Philip’s saying. The simple solution is generally hard to beat with a technologically advanced one. I put together a searchable wiki for support questions instead. Sure, it would have saved me seconds a day if I had all that content loaded into an expert system shell, but it probably would have taken me months to build the tailored expert system shell and make it work. And it might not have worked, whereas the wiki worked and was useful immediately. So, you have to give Wolfram some credit for reviving some neat technologies. Minus points for not hiring Philip as a consultant beforehand.

Fun Wolfram Alpha Easter Eggs which show its “Eliza” intestines:
http://mashable.com/2009/05/17/wolfram-easter-eggs/
http://mashable.com/2009/05/17/better-wolfram-easter-eggs/
Fun observations (to be updated as I make more of them):

  1. Alpha doesn’t parallelize in any useful ways: when you do a query, you get popped to one of a couple hundred servers on a farm, presumably each running identical instances of Mathematica + the language parser.
  2. A speech pathologist relative once pointed out that profoundly brain-damaged people are still capable of “cocktail talk” or “small talk”; this can often surprise doctors, as much of social interaction is apparently small talk. The fact that this is so, and that my emacs editor had a creditable Rogerian psychotherapist coded up in it, gave me misanthropic ideas for helping profoundly brain-damaged people reenter society in high-paying jobs. Sort of like “Being There.”
  3. Cyc is probably the most impressive expert system shell yet written. Unfortunately, it doesn’t seem to parse the web. Probably because this is a really hard problem.
  4. Why doesn’t he wire a standard search engine up to the thing for things Alpha doesn’t recognize (which is the larger subset of questions I have asked it)? A friend of mine wrote a search engine for things pertaining to his project with a very small engineer head count. Search qua search is actually pretty easy! If nothing else, partner with someone else’s search engine for those non numeric questions!
  5. Since Alpha doesn’t do regular search … are they looking to become some kind of Wikipedia for data? That would also be incredibly useful. But they have not made this decision in any obvious way yet. If they want to be the Wikipedia of data + data engine, they should probably be more overt about that, and cut out the HAL 9000 jokes.