Locklin on science

Quantum computing as a field is obvious bullshit

Posted in non-standard computer architectures, physics by Scott Locklin on January 15, 2019

I remember spotting the quantum computing trend when I was  a larval physics nerdling. I figured maybe I could get in on the chuckwagon if my dissertation project didn’t work out in a big way (it didn’t). I managed to get myself invited to a Gordon conference, and have giant leather bound notebooks filled with theoretical scribblings containing material for 2-3 papers in them. I wasn’t real confident in my results, and I couldn’t figure out a way to turn them into something practical involving matter, so I happily matriculated to better things in the world of business.

When I say Quantum Computing is a bullshit field, I don’t mean everything in the field is bullshit, though to first order, this appears to be approximately true. I don’t have a mathematical proof that Quantum Computing isn’t at least theoretically possible.  I also do not have a mathematical proof that we can make the artificial bacteria of K. Eric Drexler’s nanotech fantasies. Yet, I know both fields are bullshit. Both fields involve forming new kinds of matter that we haven’t the slightest idea how to construct. Neither field has a sane ‘first step’ to make their large claims true.

Drexler and the “nanotechnologists” who followed him assume that because we know about the Schroedinger equation, we can make artificial forms of life out of arbitrary forms of matter. This is nonsense; nobody understands enough about matter in detail or life in particular to do this. There are also reasonable thermodynamic, chemical and physical arguments against this sort of thing. I have opined on this at length, and at this point, I am so obviously correct on the nanotech front, there is nobody left to argue with me. A generation of people who probably would have made first-rate chemists or materials scientists wasted their early, creative careers following this overhyped and completely worthless woo. Billions of dollars squandered down a rat hole of rubbish and wishful thinking. Legal wankers wrote legal reviews of regulatory regimes to protect us from this nonexistent technology. We even had congressional hearings on this nonsense topic back in 2003 and again in 2005 (and probably some other times I forgot about). Russians built a nanotech park to cash in on the nanopocalyptic trillion dollar nanotech economy which was supposed to happen by now.

Similarly, “quantum computing” enthusiasts expect you to overlook the fact that they haven’t a clue as to how to build and manipulate the quantum coherent forms of matter necessary to achieve quantum computation. A quantum computer capable of truly factoring the number 21 is missing in action. In fact, the factoring of the number 15 into 3 and 5 is a bit of a parlour trick, as they design the experiment while knowing the answer, thus leaving out the gates required if we didn’t know how to factor 15. The actual number of gates needed to factor an n-bit number is 72 * n^3; 15 is a 4-bit number, so that’s 4,608 gates; not happening any time soon.
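To make the scaling concrete, here’s a back-of-the-envelope sketch in Python of that 72 * n^3 gate count (the same rule of thumb quoted in the Emergent Chaos post linked at the bottom; treat it as a rough scaling argument, not an authoritative resource estimate):

```python
def shor_gate_estimate(n_bits, constant=72):
    """Rough count of quantum gates needed to factor an n-bit number,
    using the 72 * n^3 scaling quoted in the text."""
    return constant * n_bits ** 3

for n in (4, 1024, 4096):  # 15 is a 4-bit number; RSA moduli run 1024-4096 bits
    print(f"{n:5d}-bit number: ~{shor_gate_estimate(n):,} gates")
# 4 bits -> 4,608 gates; 4096 bits -> 4,947,802,324,992 gates (about 5 trillion)
```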

It’s been almost 25 years since Peter Shor had his big idea, and we are no closer to factoring large numbers than we were … 15 years ago when we were also able to kinda sorta vaguely factor the number 15 using NMR ‘quantum computers.’

I had this conversation with a pal at … a nice restaurant near one of America’s great centers of learning. Our waiter was amazed and shared with us the fact that he had done a Ph.D. thesis on the subject of quantum computing. My pal was convinced by this that my skepticism is justified; in fact he accused me of arranging it. I didn’t, but I am motivated to write to prevent future Ivy League Ph.D.-level talent from having to make a living by bringing a couple of finance nerds their steaks.

In 2010, I laid out an argument against quantum computing as a field, based on the fact that no observable progress has taken place. That argument still stands. No observable progress has taken place. However, 8 years is a very long time. Ph.D. dissertations have been written, and many of these people have gone on to careers … some of which involve bringing people like me delicious steaks. Hundreds of quantum computing charlatans achieved tenure in that period of time. According to Google Scholar, a half million papers have been written on the subject since then.


There are now three major .com firms funding quantum computing efforts; IBM, Google and Microsoft. There is at least one YC/Andreessen-backed startup I know of. Of course there is also D-Wave, who has somehow managed to exist since 1999; almost 20 years, without actually delivering something usefully quantum or computing. How many millions have been flushed down the toilet by these turds? How many millions which could have been used building, say, ordinary analog or stochastic computers which do useful things? None of these have delivered a useful quantum computer which has even one usefully error corrected qubit. I suppose I shed not too many tears for the money spent on these efforts; in my ideal world, several companies on that list would be broken up or forced to fund Bell Labs moonshot efforts anyway, and most venture capitalists are frauds who deserve to be parted with their money. I do feel sad for the number of young people taken in by this quackery. You’re better off reading ancient Greek than studying a ‘technical’ subject that eventually involves bringing a public school kid like me a steak. Hell, you are better off training to become an exorcist or a feng shui practitioner than getting a Ph.D. in ‘quantum computing.’

I am an empiricist and a phenomenologist. I consider the lack of one error corrected qubit in the history of the human race to be adequate evidence that this is not a serious enough field to justify using the word ‘field.’ Most of it is, frankly, a scam. Plenty of time to collect tenure and accolades before people realize this isn’t normative science or much of anything reasonable.

As I said last year:

All you need do is look at history: people had working (digital) computers before Von Neumann and other theorists ever noticed them. We literally have thousands of “engineers” and “scientists” writing software and doing “research” on a machine that nobody knows how to build. People dedicate their careers to a subject which doesn’t exist in the corporeal world. There isn’t a word for this type of intellectual flatulence other than the overloaded term “fraud,” but there should be.

“Computer scientists” have gotten involved in this chuckwagon. They have added approximately nothing to our knowledge of the subject, and as far as I can tell, their educational backgrounds preclude them ever doing so. “Computer scientists” haven’t had proper didactics in learning quantum mechanics, and virtually none of them have ever done anything as practical as fiddling with an op-amp, building an AM radio or noticing how noise works in the corporeal world.

Such towering sperg-lords actually think that the only problems with quantum computing are engineering problems. When I read things like this, I can hear them muttering mere engineering problems.  Let’s say, for the sake of argument this were true. The SR-71 was technically a mere engineering problem after the Bernoulli effect was explicated in 1738. Would it be reasonable to have a hundred or a thousand people writing flight plans for the SR-71  as a profession in 1760? No.

A reasonable thing for a 1760s scientist to do would have been to invent materials making heavier-than-air craft possible. Maybe fool around with kites and steam engines. And even then … there needed to be several important breakthroughs in metallurgy (titanium wasn’t discovered until 1791), mining, a functioning petrochemical industry, formalized and practical thermodynamics, a unified field theory of electromagnetism, chemistry, optics, manufacturing and arguably quantum mechanics, information theory, operations research and a whole bunch of other stuff which was unimaginable in the 1760s. In fact, the SR-71 itself was, of course, completely unimaginable back then. That’s the point.


it’s just engineering!

Physicists used to be serious and bloody minded people who understood reality by doing experiments. Somehow this sort of bloody minded seriousness has faded out into a tower of wanking theorists who only occasionally have anything to do with actual matter. I trace the disease to the rise of the “meritocracy” out of cow colleges in the 1960s. The post WW-2 neoliberal idea was that geniuses like Einstein could be mass produced out of peasants using agricultural schools. The reality is, the peasants are still peasants, and the total number of Einsteins in the world, or even merely serious thinkers about physics is probably something like a fixed number. It’s really easy, though, to create a bunch of crackpot narcissists who have the egos of Einstein without the exceptional work output. All you need to do there is teach them how to do some impressive looking mathematical Cargo Cult science, and keep their “results” away from any practical men doing experiments.

The manufacture of a large caste of such boobs has made any real progress in physics impossible without killing off a few generations of them. The vast, looming, important questions of physics; the kinds that a once in a lifetime physicist might answer -those haven’t budged since the early 60s. John Horgan wrote a book observing that science (physics in particular) has pretty much ended any observable forward progress since the time of cow collitches. He also noticed that instead of making progress down fruitful lanes or improving detailed knowledge of important areas, most develop enthusiasms for the latest non-experimental wank fest; complexity theory, network theory, noodle theory. He thinks it’s because it’s too difficult to make further progress. I think it’s because the craft is now overrun with corrupt welfare queens who are play-acting cargo cultists.

Physicists worthy of the name are freebooters; Vikings of the Mind, intellectual adventurers who torture nature into giving up its secrets and risk their reputation in the real world. Modern physicists are … careerist ding dongs who grub out a meagre living sucking on the government teat, working their social networks, giving their friends reach arounds and doing PR to make themselves look like they’re working on something important. It is terrible and sad what happened to the king of sciences. While there are honest and productive physicists, the mainstream of it is lost, possibly forever to a caste of grifters and apple polishing dingbats.

But when a subject which claims to be a technology lacks even the rudiments of experiment which may one day make it into a technology, you can know with absolute certainty that this ‘technology’ is total nonsense. Quantum computing is less physical than the engineering of interstellar spacecraft; we at least have plausible physical mechanisms to achieve interstellar space flight.

We’re reaching peak quantum computing hyperbole. According to a dimwit at the Atlantic, quantum computing will end free will. According to another one at Forbes, “the quantum computing apocalypse is imminent.” Rachel Gutman and Schlomo Dolev know about as much about quantum computing as I do about 12th century Talmudic studies, which is to say, absolutely nothing. They, however, think they know smart people who tell them that this is important: they’ve achieved the perfect human informational centipede. This is unquestionably the right time to go short.

Even the National Academy of Sciences has taken note that there might be a problem here. They put together 13 actual quantum computing experts who poured cold water on all the hype. They wrote a 200 page review article on the topic, pointing out that even with the most optimistic projections, RSA is safe for another couple of decades, and that there are huge gaps in our knowledge of how to build anything usefully quantum computing. And of course, they also pointed out that if QC doesn’t start solving some problems which are interesting to … somebody, the funding is very likely to dry up. Ha, ha; yes, I’ll have some pepper on that steak.



There are several reasonable arguments against any quantum computing of the interesting kind (aka the kind which can demonstrate supremacy on a useful problem) ever having a physical embodiment.

One of the better arguments is akin to that against P=NP. No, not the argument that “if there was such a proof someone would have come up with it by now” -but that one is also in full effect. In principle, classical analog computers can solve NP-hard problems in P time. You can google around on the “downhill principle” or look at the work on Analog super-Turing architectures by people like Hava Siegelmann. It’s old stuff, and most sane people realize this isn’t really physical, because matter isn’t infinitely continuous. If you can encode a real/continuous number into the physical world somehow, P=NP using a protractor or soap-bubble. For whatever reasons, most complexity theorists understand this, and know that protractor P=NP isn’t physical.  Somehow quantum computing gets a pass, I guess because they’ve never attempted to measure anything in the physical world beyond the complexity of using a protractor.

In order to build a quantum computer, you need to control each qubit, which is a continuous value, not a binary value, in its initial state and subsequent states precisely enough to run the calculation backwards. When people do their calculations ‘proving’ the efficiency of quantum computers, this is treated as an engineering detail. There are strong assertions by numerous people that quantum error correction (which, I will remind everyone, hasn’t been usefully implemented in actual matter by anyone -that’s the only kind of proof that matters here) basically pushes the analog requirement for perfection to the initialization step, or subsumes it in some other place where it can’t exist. Let’s assume for the moment that this isn’t the case.

Putting this a different way, for an N-qubit computer, you need to control, transform, and read out the 2^N complex (as in complex numbers) amplitudes of the N-qubit state to a very high degree of precision. Even considering an analog computer with N oscillators which must be precisely initialized, precisely controlled, transformed and individually read out, to the point where you could reverse the computation by running the oscillators through the computation backwards: this is an extremely challenging task. The quantum version is exponentially more difficult.
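To put a number on “exponentially more difficult,” here is a minimal sketch, in plain Python, of how much classical storage it takes just to write down all 2^N amplitudes; the 16 bytes per amplitude is my assumption, and the point is the scaling, not the constants:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Classical storage for all 2^N complex amplitudes, assuming
    16 bytes per complex number (two 64-bit floats)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50, 100):
    print(f"{n:4d} qubits -> {float(statevector_bytes(n)):.3e} bytes")
# ~30 qubits is already ~17 GB, 50 qubits ~18 PB; 100 qubits exceeds any
# storage that exists, and factoring needs thousands of qubits.
```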

Making it even more concrete; if we encode the polarization state of a photon as a qubit, how do we perfectly align the polarizers between two qubits? How do we align them for N qubits? How do we align the polarization direction with the gates? This isn’t some theoretical gobbledeygook; when it comes time to build something in physical reality, physical alignments matter, a lot. Ask me how I know. You can go amuse yourself and try to build a simple quantum computer with a couple of hard coded gates using beamsplitters and polarization states of photons. It’s known to be perfectly possible and even has a rather sad wikipedia page. I can make quantum polarization-state entangled photons all day; any fool with a laser and a KDP crystal can do this, yet somehow nobody bothers sticking some beamsplitters on a breadboard and making a quantum computer. How come? Well, one guy recently did it: got two whole qubits. You can go read about this *cough* promising new idea here, or if you are someone who doesn’t understand matter, here.

FWIIW, in the early days of this idea, it was noticed that the growth in the number of components needed was exponential in the number of qubits. Well, this shouldn’t be a surprise: the growth in the number of states in a quantum computer is also exponential in the number of qubits. That’s both the ‘interesting thing’ and ‘the problem.’ The ‘interesting thing’ because an exponential number of states, if possible to trivially manipulate, allows for a large speedup in calculations. ‘The problem’ because manipulating an exponential number of states is not something anyone really knows how to do.

The problem doesn’t go away if you use spins of electrons or nuclei; which direction is spin up? Will all the physical spins be perfectly aligned in the “up” direction? Will the measurement devices agree on spin-up? Do all the gates agree on spin-up? In the world of matter, of course they won’t; you will have a projection. That projection is in effect, correlated noise, and correlated noise destroys quantum computation in an irrecoverable way. Even the quantum error correction people understand this, though for some reason people don’t worry about it too much. If they are honest in their lack of worry, this is because they’ve never fooled around with things like beamsplitters. Hey, making it have uncorrelated noise; that’s just an engineering problem right? Sort of like making artificial life out of silicon, controlled nuclear fusion power or Bussard ramjets is “just an engineering problem.”

engineering problem; easier than quantum computers


Of course at some point someone will mention quantum error correction which allows us to not have to precisely measure and transform everything. The most optimistic estimate of the required precision is something like 10^-5 for quantum error corrected computers per qubit/gate operation. This is a fairly high degree of precision. Going back to my polarization angle example; this implies all the polarizers, optical elements and gates in a complex system are aligned to 0.036 degrees. I mean, I know how to align a couple of beamsplitters and polarizers to 628 microradians, but I’m not sure I can align a few hundred thousand of them AND pockels cells and mirrors to 628 microradians of each other. Now imagine something with a realistic number of qubits for factoring large numbers; maybe 10,000 qubits, and a CPU’s worth of gates, say 10^10 or so (an underestimate of the number needed for cracking RSA, which, mind you, is the only reason we’re having this conversation). I suppose it is possible, but I encourage any budding quantum wank^H^H^H algorithmist out there to have a go at aligning 3-4 optical elements to within this precision. There is no time limit, unless you die first, in which case “time’s up!”

This is just the most obvious engineering limitation for making sure we don’t have obviously correlated noise propagating through our quantum computer. We must also be able to prepare the initial states to within this sort of precision. Then we need to be able to measure the final states to within this sort of precision. And we have to be able to do arbitrary unitary transformations on all the qubits.

Just to interrupt you with some basic facts: the number of states we’re talking about here for a 4000 qubit computer is ~ 2^4000 states! That’s 10^1200 or so continuous variables we have to manipulate to at least one part in ten thousand. The number of protons in the universe is about 10^80. This is why a quantum computer is so powerful; you’re theoretically encoding an exponential number of states into the thing. Can anyone actually do this using a physical object? Citations needed; as far as I can tell, nothing like this has ever been done in the history of the human race. Again, interstellar space flight seems like a more achievable goal. Even Drexler’s nanotech fantasies have some precedent in the form of actually existing life forms. Yet none of these are coming any time soon either.
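That arithmetic (2^4000 versus 10^80) is easy to check exactly with Python’s arbitrary-precision integers; the 10^80 proton count is the usual order-of-magnitude estimate, nothing more precise:

```python
amplitudes = 2 ** 4000                 # basis states of a 4000-qubit register
print(len(str(amplitudes)))            # 1205 decimal digits, i.e. ~10^1204
print(amplitudes > (10 ** 80) ** 15)   # True: bigger than the proton count to the 15th power
```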

There are reasons to believe that quantum error correction, too, isn’t even theoretically possible (examples here and here and here -this one is particularly damning). In addition to the argument above that the theorists are subsuming some actual continuous number into what is inherently a noisy and non-continuous machine made out of matter, the existence of a quantum error corrected system would mean you can make arbitrarily precise quantum measurements; effectively giving you back your exponentially precise continuous number. If you can do exponentially precise continuous numbers in a non-exponential number of calculations or measurements, you can probably solve very interesting problems on a relatively simple analog computer. Let’s say, a classical one like a Toffoli gate billiard ball computer. Get to work; we know how to make a billiard ball computer work with crabs. This isn’t an example chosen at random. This is the kind of argument allegedly serious people submit for quantum computation involving matter. Hey man, not using crabs is just an engineering problem muh Church Turing warble murble.

Smurfs will come back to me with the press releases of Google and IBM touting their latest 20 bit stacks of whatever. I am not impressed, and I don’t even consider most of these to be quantum computing in the sense that people worry about quantum supremacy and new quantum-proof public key or Zero Knowledge Proof algorithms (which more or less already exist). These cod quantum computing machines are not expanding our knowledge of anything, nor are they building towards anything for a bold new quantum supreme future; they’re not scalable, and many of them are not obviously doing anything quantum or computing.

This entire subject does nothing but eat up lives and waste careers. If I were in charge of science funding, the entire world budget for this nonsense would be below what we allocate for the development of Bussard ramjets, which are also not known to be impossible, and are a lot more cool looking.


As Dyakonov put it in his 2012 paper:

“A somewhat similar story can be traced back to the 13th century when Nasreddin Hodja made a proposal to teach his donkey to read and obtained a 10-year grant from the local Sultan. For his first report he put breadcrumbs between the pages of a big book, and demonstrated the donkey turning the pages with his hoofs. This was a promising first step in the right direction. Nasreddin was a wise but simple man, so when asked by friends how he hopes to accomplish his goal, he answered: “My dear fellows, before ten years are up, either I will die or the Sultan will die. Or else, the donkey will die.”

Had he the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.”

Further reading on the topic:

Dyakonov’s recent IEEE popsci article on the subject (his papers are the best review articles of why all this is silly):

https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing

IEEE precis on the NAS report:

https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing (summary: not good)

Amusing blog from 11 years ago noting the utter lack of progress in this subject:

http://emergentchaos.com/archives/2008/03/quantum-progress.html

“To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Lets just round that up to an even 5 trillion. Five trillion is a big number.”

Aaronson’s articles of faith (I personally found them literal laffin’ out loud funny, though I am sure he is in perfect earnest):

https://www.scottaaronson.com/blog/?p=124


On the Empire of the Ants

Posted in brainz, information theory by Scott Locklin on July 2, 2013

The internet is generally a wasteland of cat memes and political invective. Once in a while it serves its original purpose in disseminating new ideas. I stumbled across Boris Ryabko‘s little corner of the web while researching compression learning algorithms (which, BTW, are much more fundamental and important than crap like ARIMA). In it, I found one of the nicest little curiosity driven  papers I’ve come across in some time. Ryabko and his coworker, Zhanna Reznikova, measured the information processing abilities of ants, and the information capacity of ant languages. Download it here. There was also a plenary talk at an IEEE conference you can download here.


In our degenerate age where people think cell phone apps are innovations, it is probably necessary to explain why this is a glorious piece of work. Science is an exercise in curiosity about nature. It is a process. It sometimes involves complex and costly apparatus, or the resources of giant institutes. Sometimes it involves looking at ants in an ant farm, and knowing some clever math. Many people are gobsmacked by the technological gizmos used to do science. They think the giant S&M dungeons of tokamaks and synchro-cyclotrons are science. Those aren’t science; they’re tools. The end product; the insights into nature -that is what is important. Professors Ryabko and Reznikova did something a kid could understand the implications of, but no kid could actually do. The fact that they did it at all indicates they have the child-like curiosity and love for nature that is the true spirit of scientific enquiry. As far as I am concerned, Ryabko and Reznikova are real scientists. The thousands of co-authors on the Higgs paper; able technicians I am sure, but their contributions are a widow’s mite to the gold sovereign of Ryabko and Reznikova.

Theory: ants are smart, and they talk with their antennae. How smart are they, and how much information can they transfer with their antennae language? Here’s a video of talking ants from Professor Reznikova’s webpage:

Experiment: to figure out how much information they can transfer, starve some ants (hey, it’s for science), stick some food at random places in a binary tree, and see how fast they can tell the other ants about it. Here’s a video clip of the setup. Each fork in the path of a physical binary tree represents 1 bit of information, just as it does on your computer. Paint the ants so you know which is which. When a scout ant finds the food, you remove the maze, and put in place an identical one to avoid their sniffing the ant trails or the food in it.  This way, the only way for the other ants to find the fork the food was in is via actual ant communication. Time the ant communication between the scout ant and other foragers (takes longer than 30 seconds, apparently). Result: F. sanguinea can transmit around 0.74 bits a minute.  F. polyctena can do 1.1 bits a minute.
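The bookkeeping behind those figures is simple enough to sketch: each left/right fork is one bit, so the rate is maze depth over contact time. The depth and time below are made-up illustration numbers, not Ryabko and Reznikova’s data:

```python
def ant_bit_rate(maze_depth_forks, contact_minutes):
    """Bits per minute: each left/right fork of the binary maze is one bit
    of the scout's message, so rate = depth / communication time."""
    return maze_depth_forks / contact_minutes

# Hypothetical illustration: food at the end of a 6-fork branch, roughly
# 8 minutes of antenna contact before the foragers set off.
print(ant_bit_rate(6, 8.0))   # 0.75 bits/minute, the F. sanguinea ballpark
```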


Experiment: to figure out if ants are smart, see if they can pass on maze information in a compressed way. LRLRLRLRLRLR is a lot simpler in an information theoretical sense than an equal length random sequence of lefts and rights. Telephone transmission and MP3 players have this sort of compression baked into them to make storage and transmission more efficient.  If ants can communicate directions for a regular maze faster than a random one, they’re kind of smart. Result: in fact, this turns out to be the case.
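You can get a feel for why LRLRLR… counts as “simpler” with any off-the-shelf compressor; here’s a quick sketch using Python’s zlib as a crude stand-in for information-theoretic complexity (my illustration, not anything the authors ran):

```python
import random
import zlib

regular = "LR" * 128                                      # LRLRLR..., 256 turns
random.seed(0)
noisy = "".join(random.choice("LR") for _ in range(256))  # 256 random turns

for name, seq in (("regular", regular), ("random", noisy)):
    size = len(zlib.compress(seq.encode()))
    print(f"{name:8s}: {len(seq)} turns -> {size} bytes compressed")
# The regular route compresses far better than the random one; the ants
# likewise transmit regular routes faster than random ones.
```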

Experiment: to find out if ants are smart, see if they can count. Stick them in a comb- or hub-shaped maze where there is food at the end of one of the 25 or more forks (you can see some of the mazes here). The only way the poor ant can tell other ants about it is if he says something like “seventeenth one to the left.” Or, in the case of one of the variants of this experiment, something more like “3 over from the one the crazy Russian usually puts the food in.” Yep, you can see it plain as pie in the plots: ants have a hard time explaining “number 30” and a much easier time of saying, “two over from the one the food is usually in.” Ants can do math.


The power of information theory is not appreciated as it should be. We use the products of it every time we fire up a computer or a cell phone, but it is applicable in many areas where a mention of “Shannon entropy” will be met with a shrug. Learning about the Empire of the Ants is just one example.

People in the SETI project are looking for  alien ham radios on other planets. I’ve often wondered why people think they’ll be able to recognize an alien language as such. Sophisticated information encoding systems look an awful lot like noise. The English language isn’t particularly sophisticated as an encoding system. Its compressibility indicates this. If I were an alien, I might use very compressed signals (sort of like we do with some of our electronic communications). It might look an awful lot like noise.
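Here’s a small illustration of both points, with zlib standing in for whatever encoding a cleverer species might use; the sample sentence is a placeholder:

```python
import zlib

english = ("a scout ant that finds food in a binary maze comes back and tells "
           "the other foragers which way to turn at each fork, and the colony "
           "reaches the food without ever smelling the trail").encode()

once = zlib.compress(english)
twice = zlib.compress(once)
print(len(english), len(once), len(twice))
# The raw English shrinks noticeably because it is redundant; the compressed
# stream barely changes, because to a statistical test it already looks like
# noise. That is the SETI recognition problem in miniature.
```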

We have yet to communicate  with dolphins. We’re pretty sure they have interesting things to say, via an information theoretical result called Zipf’s law (though others disagree,  it seems likely they’re saying something pretty complex). There are  better techniques to “decompress” dolphin vocalizations than Zipf’s law: I use some of them looking for patterns in economic systems. Unfortunately marine biologists are usually not current with information theoretical tools, and the types of people who are familiar with such tools are busy working for the NSA and Rentech. Should I ever make my pile of dough and retire, I’ll hopefully have enough loot to strap a couple of tape recorders to the dolphins. It seems something worth doing.
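For the curious, the Zipf check mentioned above amounts to counting how often each “word” occurs, ranking the counts, and looking at the slope on a log-log plot; a slope near -1 is the usual language-like signature. A toy sketch on placeholder tokens (the dolphin work does this with whistle types, not English words):

```python
import math
from collections import Counter

def zipf_points(tokens):
    """(log rank, log frequency) pairs for a rank-frequency plot; a roughly
    straight line with slope near -1 is the usual Zipf signature."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    return [(math.log(rank), math.log(freq)) for rank, freq in enumerate(freqs, start=1)]

words = ("the ant tells the other ant where the food is "
         "and the other ant goes to the food").split()
for log_rank, log_freq in zipf_points(words):
    print(f"{log_rank:5.2f} {log_freq:5.2f}")
```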

The beautiful result of Ryabko and Reznikova points the way forward. A low budget, high concept experiment, done with stopwatches, paint and miniature plastic ant habitrails, produced this beautiful result on insect intelligence. It is such a simple experiment, anyone with some time and some ants could have done it! This sort of “small science” seems rare these days; people are more interested in big budget things, designed to answer questions about minutiae, rather than interesting things about the world around us. I don’t know if we have the spirit to do such “small science” in America any longer. American scientists seem like bureaucratized lemmings, hypnotized by budgets, much like the poor ants are hypnotized by sugar water. An experiment of this Rube Goldberg nature could only be done by a nation of curious tinkerers; something we no longer seem to have here.

Dolphin language could have been decoded decades ago. While it is sad that such studies haven’t been done yet, it leaves open new frontiers for creative young scientists today. Stop whining about your budget and get to work!

Mormon nuclear fusion

Posted in Design, energy by Scott Locklin on July 2, 2013

Most of you have never heard of Philo T. Farnsworth. Philo T. Farnsworth is famous for never getting credit for inventing the Television machine.  I never thought Television was particularly interesting (either as a device, or in any other way), though I have to admit, the Television machine is a pretty impressive accomplishment for a 14 year old farm-boy Mormon. Even more impressive was his successful attempt to build a fusion machine.


Farnsworth, like all good inventors, took a workmanlike approach to nuclear fusion. Thousands of morons (as opposed to Mormons) in the scientific establishment have been trying for literally decades, to the tune of hundreds of billions of dollars, to achieve what Farnsworth did, using what amounts to a pile of junk. His solution is still considered pretty good, and if it were given a fair trial, it might even beat the billion dollar efforts out there in achieving break-even (aka as much fusion energy out as was put in). The Navy recently revived the idea in the form of “Polywell Fusion.” It’s so simple, anyone can build a Farnsworth Fusor in his basement; there are websites devoted to hobbyist efforts. Kids regularly build these things for science fair projects. That’s how dumb and easy they are. The most complicated thing about them is the vacuum pump they use.


The “big science” buffoons use magnetic confinement; a copy of a Soviet idea that never went anywhere. You end up with a giant toroidal machine, with megawatts of energy going to keep the fusion plasma contained in place. Farnsworth’s idea just used some rings of metal to more or less passively keep the ionized fluid in place. It’s such a simple device, you could construct one out of TV and refrigerator parts, with the electrostatic rings made of old coat hangers. Such machines are used commercially as neutron sources, as they produce lots of fusion reactions (though nowhere near breakeven thus far).


Farnsworth was probably the last great American inventor. I’d like to think there will be great inventing men to come after him, but I’m pretty sure it won’t happen here any more, as the continuity is gone. Independent inventing men like Edison, Tesla and the Maxim brothers are part of America’s tradition; Farnsworth was the last of the great ones. Now we think of men as inventors when they write some crap piece of software. Farnsworth was uneducated by modern lights; only a few years of college. He was an actual farm boy, and he thought of television while ploughing the fields. TV is a rastering process, like ploughing fields. Yet, he invented all manner of machines, as well as being an accomplished mathematician.

Why won’t there be any more like him? The tinkering mentality is gone. Guys from the midwest  in the early 20th century were tinkerers who fixed things because they had to in those days. You can’t really understand physical reality by screwing around with CAD and computer models. You can only understand physical reality by, well, tinkering with it. My pal Rodrigo recently sent me a Tom Wolfe essay (about Intel’s Bob Noyce, primarily) which illustrates the point, and also demonstrates why modern bureaucratic space flight is such a galloping failure:

The engineers who fulfilled one of man’s most ancient dreams, that of traveling to the moon, came from the same background, the small towns of the Midwest and the West. After the triumph of Apollo 11, when Neil Armstrong and Buzz Aldrin became the first mortals to walk on the moon, NASA’s administrator, Tom Paine, happened to remark in conversation: “This was the triumph of the squares.” A reporter overheard him; and did the press ever have a time with that! But Paine had come up with a penetrating insight. As it says in the Book of Matthew, the last shall be first. It was engineers from the supposedly backward and narrow-minded boondocks who had provided not only the genius but also the passion and the daring that won the space race and carried out John F. Kennedy’s exhortation, back in 1961, to put a man on the moon “before this decade is out.” The passion and the daring of these engineers was as remarkable as their talent. Time after time they had to shake off the meddling hands of timid souls from back east. The contribution of MIT to Project Mercury was minus one. The minus one was Jerome Wiesner of the MIT electronic research lab who was brought in by Kennedy as a special adviser to straighten out the space program when it seemed to be faltering early in 1961. Wiesner kept flinching when he saw what NASA’s boondockers were preparing to do. He tried to persuade Kennedy to forfeit the manned space race to the Soviets and concentrate instead on unmanned scientific missions. The boondockers of Project Mercury, starting with the project’s director, Bob Gilruth, an aeronautical engineer from Nashwauk, Minnesota, dodged Wiesner for months, like moonshiners evading a roadblock, until they got astronaut Alan Shepard launched on the first Mercury mission. Who had time to waste on players as behind the times as Jerome Wiesner and the Massachusetts Institute of Technology…out here on technology’s leading edge?

Just why was it that small-town boys from the Middle West dominated the engineering frontiers? Noyce concluded it was because in a small town you became a technician, a tinker, an engineer, and an inventor, by necessity.


Of course, Farnsworth was hounded by scumbags for most of his life. David Sarnoff, the evil weasel who founded NBC and an early patent troll, attempted to sue Farnsworth into penury. He ultimately failed in this endeavor, though the mind reels at the injustice of a towering genius like Farnsworth having to pay any attention to such nonsense. Who knows what wonders Farnsworth may have come up with had he been free to pursue his interests, rather than being tied up in pointless patent disputes with sleazeballs?

Consider Philo Farnsworth the next time someone tells you we live in an era of scientific progress. Where are our Philo Farnsworths today? They certainly aren’t laboring in a make work program in some government lab, nor do they seem to be inventing anything particularly interesting.



http://www.rexresearch.com/farnsworth/fusor.htm#advanced

http://www.philotfarnsworth.com/

http://www.farnovision.com/chronicles/tfc-intro.html

https://www.neco.navy.mil/synopsis_file/N6893609C0125%20_Redacted_JA.pdf (the navy can’t update their security certs, apparently).

BTC bubbles

Posted in econophysics by Scott Locklin on April 17, 2013

Not surprisingly, Bitcoin prices are well described by the log periodic power laws describing the dynamics of bubbles. A reminder of what an LPPL model looks like; here is a simple one:

\log(p(t)) = A + B(t_c - t)^\beta + C(t_c - t)^\beta \cos( \omega \log(t_c-t)+\phi)

I didn’t profit from this. I thought of applying LPPL to the BTC bubble well before the crash, during a bullshit session with a friend, but I didn’t run the analysis until after. I have better things to do with my time than play with weird monopoly money, and the “exchanges” presently offering shorts are not even close to useful. I also think anyone who trades on LPPL is basically gambling. The most interesting parameter, t_c, is the hardest to fit, and, well, with all those parameters I could fit a whole lot of elephants. Just the same, it is a useful enough concept to justify further research. No, I won’t be telling the world about that research on my blog. A man’s got to eat, after all. Doing bubble physics costs money.

If you don’t know about LPPL models, click on these two helpful links. The “hand wavey” idea is: if the price is formed by market participants looking at what other market participants are doing, as with Dutch tulips, pets.com, and market prices in various eras, the price is an irrational bubble which will eventually burst. This isn’t an original idea: Charles Mackay was talking about it 180 years ago. The original idea is mapping this behavior onto an Ising model, running some renormalization group theory on it, and fitting the result to get a forecast of bubble burstings. Sornette, Ledoit, Johansen, Bouchaud and Freund did it and told the world about it; may the eternal void bless them with healthy returns for being kind enough to share this interesting idea with us.

Here’s a plot of BTC close prices from MtGox (via quandl), with the LPPL model fit 10 days before the bubble pop. I wasn’t real careful with the fit; no unit root tests were done, no probabilistic estimates were made and no Ornstein-Uhlenbeck processes were taken into account. This is just curve fitting. The result is compelling enough to talk about. As you can see, with these parameters, the out of sample top is fit fairly well. Amusingly, so is the decline.

[plot: LPPL model fit to MtGox BTC close prices, 10 days before the bubble pop]

What can we learn from this? You can see a “fair value” of around $20/BTC due to be hit in a few weeks, with perhaps a full mean reversion to $10/BTC.  BTC doesn’t seem to have a helpful “anti-bubble” decay; if anything, it is decaying faster than expected so far (it is possible I mis-fit the \omega). The fit parameters for this version of the model tell us a few interesting things about the herding behavior which you can read about in Sornette’s book.
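For anyone who wants to poke at this sort of thing themselves, here is a bare-bones sketch of the curve fit using scipy. The price series below is a synthetic placeholder (substitute your own closes), the starting guesses are arbitrary, and, as noted above, this ignores unit root tests, noise models and everything else a careful fit needs:

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, A, B, C, tc, beta, omega, phi):
    """Log-periodic power law for log-price, per the formula above."""
    dt = np.maximum(tc - t, 1e-8)   # keep the power and log defined while the optimizer wanders
    return A + B * dt**beta + C * dt**beta * np.cos(omega * np.log(dt) + phi)

# Placeholder data: a day index and a synthetic rising log-price series.
t = np.arange(200.0)
log_price = np.log(np.linspace(15.0, 230.0, 200))

p0 = [6.0, -0.5, 0.05, 220.0, 0.5, 8.0, 0.0]   # rough starting guesses; tc just past the data
params, _ = curve_fit(lppl, t, log_price, p0=p0, maxfev=20000)
print(dict(zip(["A", "B", "C", "tc", "beta", "omega", "phi"], params)))
```

If the optimizer complains, tighter starting guesses or bounds on t_c, \beta and \omega are the usual fix; the point here is only the shape of the exercise, not a tradeable fit.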

I don’t have any strong opinions about using BTC as a currency. I think most of its enthusiasts  are naive and do not understand the nature of money and what it is good for. I do think BTC would work a lot better as a store of value with a properly functioning foreign exchange futures market. There are no properly functioning BTC futures exchanges at present; just an assortment of dreamers and borderline crooks cashing in on hype. This is more of an engineering and legal problem than it is an inherent problem with using BTC as a currency. The way things are presently set up, without shorts, any extra media attention will result only in people buying the damn things. Without the ability to easily short them, price discovery is impossible, and herding behavior is the rule. It ain’t a market without shorts. It’s a bubble maker. Shorts don’t guarantee there will be no bubbles; we see plenty in shortable markets, but a lack of shorts will virtually guarantee future BTC bubbles.