Locklin on science

Andreessen-Horowitz craps on “AI” startups from a great height

Posted in investments by Scott Locklin on February 21, 2020

Andreessen-Horowitz has always been the most levelheaded of the major current year VC firms. While other firms were levering up on “cleantech” and nonsensical biotech startups that violate physical law, they quietly continued to invest in sane companies (also hot garbage bugman products like soylent). I assume they actually listen to people on the front lines, rather than what their VC pals are telling them. Maybe they’re just smarter than everyone else; definitely more independent minded. Their recent review of how “AI” startups differ from ordinary software investments is absolutely brutal. I am pretty sure most people didn’t get the point, so I’ll quote it, emphasizing the important bits.

They use all the buzzwords (my personal bête noire: the term “AI” when they mean “machine learning”), but they’ve finally publicly noticed certain things which are abundantly obvious to anyone who works in the field. For example, gross margins are low for deep learning startups that use “cloud” compute. Mostly because they use cloud compute.

Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies 🏭

In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill.

AI, it turns out, is pretty demanding:

• Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it’s tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as “data drift”).
• Model inference (the process of generating predictions in production) is also more computationally complex than operating traditional software. Executing a long series of matrix multiplications just requires more math than, for example, reading from a database.
• AI applications are more likely than traditional software to operate on rich media like images, audio, or video. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues – an application may need to process a large file to find a small, relevant snippet.
• We’ve had AI companies tell us that cloud operations can be more complex and costly than traditional approaches, particularly because there aren’t good tools to scale AI models globally. As a result, some AI companies have to routinely transfer trained models across cloud regions – racking up big ingress and egress costs – to improve reliability, latency, and compliance.

Taken together, these forces contribute to the 25% or more of revenue that AI companies often spend on cloud resources. In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model.
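The margin arithmetic buried in that quote is worth spelling out. A toy sketch, with illustrative numbers (the 25% cloud figure is from the quote; the SaaS cost load and labeling spend are my own hedged assumptions, not a16z’s):

```python
# Margin arithmetic implied by the quote: cloud at 25%+ of revenue, plus
# roughly 10-15% of revenue on data labeling (cited later in the piece),
# versus a traditional SaaS cost structure. All figures are illustrative.
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

revenue = 10_000_000  # hypothetical ARR

saas_cogs = 0.15 * revenue         # assumed hosting/support load for plain SaaS
ai_cogs = (0.25 + 0.12) * revenue  # cloud spend + human labeling spend

print(f"SaaS-style gross margin: {gross_margin(revenue, saas_cogs):.0%}")  # 85%
print(f"AI-startup gross margin: {gross_margin(revenue, ai_cogs):.0%}")    # 63%
```

Twenty-plus points of gross margin is the difference between a software multiple and a services multiple, which is the whole point of the a16z piece.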

This is something which is true of pretty much all machine learning with heavy compute and data problems. The pricing structure of “cloud” bullshit is designed to extract maximum blood from people with heavy data or compute requirements. Cloud companies would prefer to sell the time on a piece of hardware to 5 or 10 customers. If you’re lucky enough to have a startup that runs on a few million rows worth of data and a GBM or Random Forest, it’s probably not true at all, but precious few startups are so lucky. Those who use the latest DL woo on the huge data sets they require will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don’t buy hardware.

In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.

Beyond what they’re saying about the size of Deep Learning models which is doubtless true for interesting new results, admitting that the computational power of GPU chips hasn’t exactly been growing apace is something rarely heard (though more often lately). Everyone thinks Moore’s law will save us. NVIDIA actually does have obvious performance improvements that could be made, but the scale of things is such that the only way to grow significantly bigger models is by lining up more GPUs. Doing this in a “cloud” you’re renting from a profit making company is financial suicide.
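The 300,000x-versus-4x gap is even starker annualized. A back-of-envelope, taking the quoted figures at face value over the roughly seven years from 2012:

```python
# Annualized growth implied by the a16z numbers: model compute up ~300,000x
# since 2012 (~7 years), NVIDIA transistor counts up ~4x over the same span.
compute_growth, transistor_growth, years = 300_000, 4, 7

per_year_compute = compute_growth ** (1 / years)
per_year_transistors = transistor_growth ** (1 / years)
print(f"compute demand grows ~{per_year_compute:.1f}x per year")       # ~6.1x
print(f"transistor counts grow ~{per_year_transistors:.2f}x per year") # ~1.22x
# The gap between 6x and 1.2x per year gets closed by buying more GPUs --
# which is exactly what renting them from a cloud at a markup makes ruinous.
```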

Gross Margins, Part 2: Many AI applications rely on “humans in the loop” to function at a high level of accuracy 👷

Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups.

First: training most of today’s state-of-the-art AI models involves the manual cleaning and labeling of large datasets. This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn’t end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process – usually not counting core engineering resources – and suggests ongoing development work exceeds typical bug fixes and feature additions.

Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real time. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers. More and more startups are adopting this approach as the capabilities of modern AI systems are becoming better understood. A number of AI companies that planned to sell pure software products are increasingly bringing a services capability in-house and booking the associated costs.

Everyone in the business knows about this. If you’re working with interesting models, even assuming the presence of infinite accurately labeled training data, the “human in the loop” problem doesn’t ever completely go away. A machine learning model is generally “man amplified.” If you need someone (or, more likely, several someones) making a half million bucks a year to keep your neural net producing reasonable results, you might reconsider your choices. If the thing makes human-level decisions a few hundred times a year, it might be easier and cheaper for humans to make those decisions manually, using a better user interface. Better user interfaces are sorely underappreciated. Have a look at Labview, Delphi or Palantir’s offerings for examples of highly productive user interfaces.
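The “man amplified” structure is easy to sketch: anything the model isn’t confident about gets punted to the expensive humans, and the fraction of traffic that lands in the human queue is what eats the margin. A toy illustration (the threshold, names, and scores are all invented):

```python
# Sketch of human-in-the-loop triage: the model keeps what it's confident
# about; everything else lands in a human review queue. The gross-margin
# question reduces to what fraction lands in the queue.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    source: str  # "model" or "human"

def triage(score: float, threshold: float = 0.9) -> Decision:
    """Route a model confidence score: auto-decide outside the gray zone,
    otherwise punt to an (expensive) human reviewer."""
    if score >= threshold:
        return Decision("accept", "model")
    if score <= 1 - threshold:
        return Decision("reject", "model")
    return Decision("needs_review", "human")

scores = [0.97, 0.55, 0.03, 0.72, 0.99, 0.40]
decisions = [triage(s) for s in scores]
human_share = sum(d.source == "human" for d in decisions) / len(decisions)
print(f"human review share: {human_share:.0%}")  # prints "50%"
```

Tightening the threshold cuts the human share but raises the error rate; that trade-off, not the model architecture, is usually where the business lives or dies.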

Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.

Software which solves a business problem generally scales to new customers. You do some database back-end grunt work, plug it in, and you’re done. Sometimes you have to adjust processes to fit the accepted uses of the software, or spend absurd amounts of labor adjusting the software to work with your business processes: SAP is notorious for this. Such cycles are hugely time- and labor-consuming. Obviously they must be worth it at least some of the time. But while SAP’s reputation is well known (to the point of causing bankruptcy in otherwise healthy companies), most people haven’t figured out that ML-oriented processes almost never scale like a simpler application would. You will be confronted with the same problem as with SAP: there is a ton of work done up front, all of it custom. I’ll go out on a limb and assert that most of the up-front data pipelining, and the organizational changes which allow for it, are probably more valuable than the actual machine learning piece.

In the AI world, technical differentiation is harder to achieve. New model architectures are being developed mostly in open, academic settings. Reference implementations (pre-trained models) are available from open-source libraries, and model parameters can be optimized automatically. Data is the core of an AI system, but it’s often owned by customers, in the public domain, or over time becomes a commodity.

That’s right; that’s why a lone wolf like me, or a small team, can do as good or better a job than some firm with 100x the headcount and 100m in VC backing. I know what the strengths and weaknesses of the latest woo are. Worse than that: I know that, from a business perspective, something dumb like Naive Bayes or a linear model might solve the customer’s problem just as well as the latest gigawatt neural net atrocity. The VC-backed startup might be betting on its “special tool” as moaty IP. A few percent difference on a ROC curve won’t matter if the data is hand-wavey and not really labeled properly, which describes most data you’ll encounter in the wild. ML is undeniably useful, but it is extremely rare that a startup has “special sauce” that works 10x or 100x better than something you could fork in a git repo. People won’t pay a premium over in-house ad-hoc data science solutions unless it represents truly game-changing results. The technology could impress the shit out of everyone else, but if it’s only getting 5% better MAPE (or whatever), it’s irrelevant. A lot of “AI” doesn’t really work better than a histogram via a “group by” query. Throwing complexity at it won’t make it better: sometimes there’s no data in your data.
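The “histogram via group by” baseline is worth writing down, because it’s the thing the gigawatt model has to beat. A toy sketch (the column names and rows are invented for illustration):

```python
# The "group by" baseline: predict the historical rate for each category.
# If the fancy model can't beat this, there's no data in your data.
from collections import defaultdict

rows = [  # (segment, converted) -- toy data
    ("email", 1), ("email", 0), ("email", 1), ("email", 1),
    ("ads", 0), ("ads", 0), ("ads", 1),
    ("organic", 1), ("organic", 1),
]

totals = defaultdict(lambda: [0, 0])   # segment -> [conversions, count]
for segment, converted in rows:
    totals[segment][0] += converted
    totals[segment][1] += 1

baseline = {seg: conv / n for seg, (conv, n) in totals.items()}
print(baseline)  # {'email': 0.75, 'ads': 0.333..., 'organic': 1.0}
```

Ten lines, no GPUs, and on hand-wavey, badly labeled data it is often within a few points of anything you can train.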

Some good bullet points for would be “AI” technologists:

Eliminate model complexity as much as possible. We’ve seen a massive difference in COGS between startups that train a unique model per customer versus those that are able to share a single model (or set of models) among all customers….

Nice to be able to do, but super rare. If you’ve found a problem like this, you better hope you have a special, moaty solution, or a unique data set which makes it possible.

Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.

This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. Really, they’re talking about cheap overseas labor when lizard man fixers like Yang regurgitate the “AI coming for your jobs” meme; AI actually stands for “Alien (or) Immigrant” in this context. Yes they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn’t really mix well with those limited domains, which have a limited market.

Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars.

In other words: you probably can’t build a brain in a can that can solve all kinds of problems; you’re probably going to be a consulting and services company. In case you aren’t familiar with valuation math: services companies are worth something like 2x yearly revenue, where software and “technology” companies are worth 10-20x revenue. That’s why the wework weasel kept trying to position his pyramid scheme as a software company. The implications here are huge: “AI” raises done by A16z and people who think like them are going to be at much lower valuations. If it weren’t clear enough by now, they said it again:

To summarize: most AI systems today aren’t quite software, in the traditional sense. And AI businesses, as a result, don’t look exactly like software businesses. They involve ongoing human support and material variable costs. They often don’t scale quite as easily as we’d like. And strong defensibility – critical to the “build once / sell many times” software model – doesn’t seem to come for free.

These traits make AI feel, to an extent, like a services business. Put another way: you can replace the services firm, but you can’t (completely) replace the services.

I’ll say it again since they did: services companies are not valued like software businesses are. VCs love software businesses; work hard up front to solve a problem, print money forever. That’s why they get the 10-20x revenues valuations. Services companies? Why would you invest in a services company? Their growth is inherently constrained by labor costs and weird addressable market issues.

This isn’t exactly an announcement of a new “AI winter,” but it’s autumn and the winter is coming for startups who claim to be offering world beating “AI” solutions. The promise of “AI” has always been to replace human labor and increase human power over nature. People who actually think ML is “AI” think the machine will just teach itself somehow; no humans needed. Yet, that’s not the financial or physical reality. The reality is, there are interesting models which can be applied to business problems by armies of well trained DBAs, data engineers, statisticians and technicians. These sorts of things are often best grown inside a large existing company to increase productivity. If the company is sclerotic, it can hire outside consultants, just as they’ve always done. A16z’s portfolio reflects this. Putting aside their autonomous vehicle bets (which look like they don’t have a large “AI” component to them), and some health tech bets that have at least linear regression tier data science, I can identify only two overtly data science related startups they’ve funded. They’re vastly more long cryptocurrency and blockchain than “AI.” Despite having said otherwise, their money says “AI” companies don’t look so hot.

My TLDR summary:

1. Deep learning costs a lot in compute, for marginal payoffs
2. Machine learning startups generally have no moat or meaningful special sauce
3. Machine learning will be most productive inside large organizations that have data and process inefficiencies

Kickstarter: muppet graveyard part 2

Posted in fraud, investments by Scott Locklin on March 1, 2013

Perhaps people think I engage in hyperbole about Kickstarter projects. No, I merely speak the obvious truth. It is a place of fraud and deception, a place which takes advantage of well meaning nerds who don’t think critically. Remember my five criteria for a perfect Kickstarter marketing pitch? Let’s review.

1. Make it hardware related. Most internet dorks know nothing about hardware and are acutely aware of  and embarrassed by their lack of interaction with the real world. This is how stupid  ideas like solid printing get traction. Keyboard warriors want to work in meatspace, but they don’t know how. For a small donation, they can be hardware hackers!
2. Make it “open source.” Keyboard muppets luuuurve open source, as it gives them “free” toys to play with. It doesn’t matter if it costs money, it doesn’t matter if it actually functions; what matters is that it is freeeeeeee.
3. Make it related to their nerd-dildo (aka their “smart phone”). Modern techno-muppets have a relationship with their nerd-dildo not unlike that between Gollum and his precious. Polishing the nerd dildo and giving it even more power … tapping into the love affair between a nerd and his dildo strikes powerful emotional chords.
4. Make noises about a super great prototype which will be distributed via junky open source rep-rap solid printing.
5. Make it related to some fashionable moral crusade. If this were a mere gadget, only the most devoted Gollum would care, but keyboard warriors are going to save the goddamned planet with their open-source nerd dildo!

My next example embodies at least three of the five points. It is a piece of hardware. It is supposed to power an iphone. And it is supposed to save the environment. What is it? A Stirling engine which powers an iphone using the energy from a coffee cup. Behold, the Epiphany onE Puck!

[IMAGE REDACTED FOR LAWSUIT THREAT]

Quote from kickstarter site:

The idea behind the Epiphany onE Puck is to use a stirling engine powered solely by heat disparities, such as a hot or cold drink, a candle, ice, etc. These heat sources will provide enough power to the stirling engine to fully charge your cell phone battery. There’s nothing new about Stirling engines – they were invented in the early 1800s – but thanks to modern materials and modern electronics, we are able to put them to use in ways that weren’t previously possible.

So, now the new question is, How can a small device that powers my cell phone change the world?

Well, the fact of the matter is, it won’t change the world. It also explicitly violates the laws of thermodynamics, so it won’t do anything but line the pockets of the people pitching it. How do I know this? Well, several ways.

The first way is common sense. It’s obvious this won’t work if you have ever looked at a small gamma Stirling engine like this one. There used to be a home made coffee machine powered gamma Stirling in the lab. It was made by a skilled machinist who built scientific apparatus every damn day, and it made just enough mechanical energy to overcome friction and turn over; and this from a very hot coffee machine. There are others that actually do work on just a coffee cup; they don’t produce much more useful work than is required to overcome friction either. Small home made Stirlings are fragile things that end up using graphite pistons to overcome friction; it is a big achievement to make a little one that turns over at all. Check the model engine builder groups if you don’t believe me.

The second way is knowing about practical Stirling engines that do useful work. The ones that are actually efficient use complex mechanical tricks to extract as much energy as possible. One of the main necessities is a better working fluid than air, so you end up with lots of pressurized seals and such, to keep in compressed helium or whatever they use. The very best ones are completely sealed and connect to the dynamo via magnet. They also require extremely efficient regenerators; this one pretty obviously has no regenerator. The efficient ones are always much larger than a coffee cup, to fit in all the necessary mechanical junk. The model shown in the pitch is a toy gamma Stirling with none of these advanced features. One that never actually functions on video, mind you: an LED lighting up doesn’t impress me. Therefore, they fail at this project on inspection. I once had an idea to cool a beer can with a hand-made, hand-powered reverse Stirling cycle cooler. It can theoretically be done, but the design so far is intensely complex. Having gone through this exercise myself, I know by looking at their proposal that they haven’t. Example from history: Philips spent decades making a 200 watt Stirling engine which, well, go look at it. It is hugely complex, and ultimately failed because it was too costly to manufacture.

The third way is to do math. How much energy is needed to charge a cell phone? Batteries in them hold 1200mAH at 3.7 volts, for about 4400mWH. They use around 60mW when they’re suspended, which is why you need to keep your dumb phone hooked up to a charger all the time. But anyway, is there 4.4 watt-hours in a cup of coffee? That’s 4.4 times 860 calories, or roughly 3800 calories. A calorie is conveniently the amount of energy required to raise 1 gram of water by 1 degree C. So, their 6 ounce coffee mug, or 177 grams, at a generously hot 70 degrees C against a generously cool 20 degrees in your room, yields 177 × 50 = 8850 calories of usable heat. So, all we have to do is get out around 40% of the energy in a hot cup of coffee to do this! I’m all excited. OK, how? Stirling engines? Let’s forget about all the practicalities of designing one, and just use the most theoretically efficient heat engine: the Carnot cycle. The efficiency of a Carnot cycle is

$efficiency = 1 - \frac{T_c}{T_h}$

$T_h$ and $T_c$ must be in absolute temperature, Kelvin. So, what is the maximum possible efficiency of a heat engine at these temperature differentials?

$efficiency = 1 - \frac{273+20}{273+70} = 0.15$

Is 0.15 enough? No, and even that number flatters the gadget. $T_h$ is an exponentially decaying function of time, even without the Stirling engine sucking energy out of it. Integrating over time (an exercise for the reader; that’s enough LaTeX for you), you get an average efficiency number closer to 0.08. Real Stirling engines, using the maximum of tuning and technical innovations, achieve 0.5 times Carnot on the heat pumped into them: now we’re down to 0.04. This is leaving out the fact that the design they are using is at best 5% of Carnot, probably significantly less than 1%. What’s left in our calculation? Oh, actually, a Stirling engine can’t magically suck all the heat out of a coffee cup: most of it is radiated to the world. Call that a generous 10%. We’re down to 0.004 of the coffee’s heat coming out as work — a few tens of calories, a few tens of mWH, around 1% of a phone charge on the most charitable assumptions. Adding back in friction, dynamo inefficiencies and real-world gamma Stirling efficiency, the real result is pretty much zero.
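For the skeptical, here is the whole chain of estimates in one place, recomputing the inputs from scratch (1200 mAh × 3.7 V battery; 177 g of coffee at 70 C against a 20 C room, counting only heat above room temperature):

```python
# Back-of-envelope for the coffee-cup Stirling charger.
phone_mwh = 1200 * 3.7               # 1200 mAh at 3.7 V -> 4440 mWh battery
phone_cal = phone_mwh / 1000 * 860   # 1 Wh = 860 cal -> ~3800 cal

coffee_cal = 177 * (70 - 20)         # usable heat above room temp: 8850 cal

carnot = 1 - (273 + 20) / (273 + 70) # ideal ceiling at these temps, ~0.15
time_avg = 0.08                      # cup cools: Carnot averaged over time
real_stirling = 0.5                  # good engines reach ~half of Carnot
heat_captured = 0.10                 # most heat radiates away, not into engine

work_cal = coffee_cal * time_avg * real_stirling * heat_captured
print(f"Carnot ceiling: {carnot:.2f}")
print(f"work out: {work_cal:.0f} cal = {work_cal / 860 * 1000:.0f} mWh")
print(f"fraction of a phone charge: {work_cal / phone_cal:.1%}")
```

Roughly 35 calories, about 40 mWh, about 1% of a charge — before friction, dynamo losses, and the fact that a cardboard-grade gamma Stirling is nowhere near half of Carnot.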

There is a small chance I’m wrong about this one. Maybe they have some very innovative technique which actually can extract significant energy from a coffee cup. Should they make one that does something more than light up an LED, I will apologize profusely for saying nasty things. But it sure fails the sniff test from where I am standing.

Why do I bother? I hate it when people are paid for stupid technological shit. It robs the credulous and makes people who do real things look bad. Stirling engines are cool; I hope serious people continue to work on them. I just wish muppets would leave them alone. If you want a real Stirling engine, send money to the guys at Sunpower. They’re actual experts who can get shit done.

Kickstarter: muppet graveyard

Posted in fraud, investments by Scott Locklin on January 22, 2013

If another person sends me a kickstarter proposal, Lord Humongous help me, I’ll go light the nitwits who founded it on fire. I’m sure someone reading will say, “you mean ‘bad kickstarter proposal'” but that’s uselessly tautological: I have never seen a kickstarter proposal which wasn’t on the short bus. Mind you, I’m all for capitalism, the arts and  charity, but kickstarter is a place where all socially and technologically inept proposals go to … needlessly gather internet attention that would otherwise be more productively spent on cat memes. Just because it is on the internet and you need … technology … to see it, doesn’t mean it isn’t completely imbecilic.

The most successful kickstarter proposals I’ve seen seem to embody everything that is wrong with modern life. People who wallow in self righteous moral certitude will fund monumentally stupid ideas. Are you a professional vaginal kvetcher, worried about the tremendous social injustice of video games not having enough female characters which appeal to your personal neurono-libidinal peccadilloes? Do mean, nasty, pimply faced video game players make you cry when they laugh at your impostures? No need to do anything productive and hilarious, such as attempting to write a feminist video game: idiots will give you money to further whine about it in public.  The givers get to marinate in their superior state of enlightenment over pimply faced video game players who think feminist princesses are silly. The taker gets to continue her project of publicly proving the futility of a modern liberal arts education.

Subsidies for dyspeptic feminist dunderheads are probably the best use for kickstarter. More hilarious and offensive are ding dongs who think they can build environment-saving spectrometers out of cardboard and bits of DVD, and want you to pay for their “researches.” The pitch is a model of kickstarter imbecilities, and should be preserved in amber for its sheer perfection in catering to the tastes of the modern day techno-muppet. Let me break it down:

1. Make it hardware related. Most internet dorks know nothing about hardware and are acutely aware of  and embarrassed by their lack of interaction with the real world. This is how stupid  ideas like solid printing get traction. Keyboard warriors want to work in meatspace, but they don’t know how. For a small donation, they can be hardware hackers!
2. Make it “open source.” Keyboard muppets luuuurve open source, as it gives them “free” toys to play with. It doesn’t matter if it costs money, it doesn’t matter if it actually functions; what matters is that it is freeeeeeee.
3. Make it related to their nerd-dildo (aka their “smart phone”). Modern techno-muppets have a relationship with their nerd-dildo not unlike that between Gollum and his precious. Polishing the nerd dildo and giving it even more power … tapping into the love affair between a nerd and his dildo strikes powerful emotional chords.
4. Make noises about a super great prototype which will be distributed via junky open source rep-rap solid printing.
5. Make it related to some fashionable moral crusade. If this were a mere gadget, only the most devoted Gollum would care, but keyboard warriors are going to save the goddamned planet with their open-source nerd dildo!

I might support such a thing if I thought it were possible or doable. Why not arm environmentalists with a bunch of spectrometers, and have them go hunt for pollution of various kinds? At least they’d be basing their ideas on something resembling science, and lowering the preposterous levels of chemical pollution is something all sane people can get behind. The problem is: the “engineering” on this gizmo is pathetic. It is some kind of refugee from a Make magazine project; it is abundantly obvious that nobody with a passing acquaintance with optics, let alone spectroscopy, was involved in this project. In fact, the principal is a media guy without even remedial physics to qualify him to build spectrometers. Not that this is a horrible thing; many self-taught amateurs have made important contributions to engineering and science. The thing is, amateurs need to know shit first. This guy seems to know nothing.

In a past life, I dabbled with spectrometer design. I knew enough about it that Zeiss (greatest, oldest and most careful optics company in the world) nearly hired me straight out of college to work on semi-spectroscopic optics that heal people’s eyeballs. If I weren’t unnaturally honest, I’d probably be in Jena, laboring in lucrative obscurity, pulling 6 week vacations and waiting for my Krauty pension to kick in. As such, I have a few (very rusty) bona fides in spectrometer design and can explain in layman’s terms why this idea is completely retarded.

There are a couple of ways to do spectroscopy, all of which involve light interference. The one being used here utilizes a diffraction grating. A diffraction grating is, more or less, an optical gizmo with lines etched into it, which are similar in dimensions to the wavelengths of light which are of interest. When the wave front of light hits the grating, it bounces off in different path lengths, dictated by the grating dimensions. The resulting interference pattern reflects different wavelengths of light from the grating at different angles. So, red light will reflect off the grating at a different angle from blue light, because red has a longer wavelength than blue. It’s not important that you understand this, though college physics will suffice. The important thing to remember: different wavelengths of light, different angles. Here’s a useful infographic I stole from a real optics company:

The way a spectrometer with a reflective diffraction grating works, you take a small spot of light of many wavelengths, illuminate the grating, and the grating reflects the different wavelengths of light to different angles. To turn this into a spectrum, you need to detect the light at the different angles; use the grating equation to get the answer, and voila, you are a spectroscopist. Otherwise, you’re just looking at rainbows. What good is it? Well, different kinds of atoms and chemicals absorb light at different wavelengths; you see lines in the resulting spectrograph on your detector. Like this:

The light into the contraption needs to be small in physical dimension, otherwise, you won’t be able to distinguish one wavelength from the other. Remember, you have to distinguish things, otherwise these lines will overlap. You can generate the light all kinds of ways; by burning interesting shit, sticking it into an electrical discharge or by passing some other kind of  white light through something semi-transparent which absorbs distinguishing lines; whatever. The spectrometer needs to be rigid; if anything moves inside it, you’re going to be integrating a jittery blur, rather than building up a nice sharp line on the detector. The grating spectrometers I’ve used are often bolted to giant pieces of granite to avoid this sort of noise. Also, oh yeah, your grating has to be perfect, or it won’t have any ability to resolve the sharp little lines. You can see why in the grating equation; it depends on the grating ruling, d; if it varies, you get smeared out lines. If it scatters light, or has an imperfect optical figure, it will distort the image on the detector, making for blurry lines, assuming you can see any lines at all. Oh yeah, it helps if your detector is perfect as well, or at least very big, so you can resolve tiny little lines. If you have some shitty 980 pixel wide camera like in an iphone, well, you had better be able to move the detector versus the diffracted image through lots of different angles if you want it to be able to resolve thin lines.
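The grating equation the paragraph above leans on is simple enough to compute with. For normal incidence it reduces to $d \sin\theta_m = m\lambda$; taking the nominal DVD track pitch of 740 nm as the groove spacing (my assumption about their “grating”):

```python
# Grating equation at normal incidence: sin(theta_m) = m * lambda / d.
# d = 740 nm is the nominal DVD track pitch -- the "grating" in question.
import math

d = 740e-9  # groove spacing in meters
m = 1       # first diffraction order

for name, wavelength in [("blue", 450e-9), ("red", 650e-9)]:
    theta = math.degrees(math.asin(m * wavelength / d))
    print(f"{name} ({wavelength * 1e9:.0f} nm): {theta:.1f} deg")
```

Blue comes off near 37 degrees and red near 61: that angular spread is the spectrum. It also shows why jitter in d smears the lines — the diffraction angle depends directly on the groove spacing, and a DVD’s “grooves” are neither uniform nor continuous.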

How do they solve all these problems? Well, they use a piece of DVD for a grating, and a piece of cardboard for the rest of the “spectrometer.” I’m not exaggerating: go look at it. They have a slightly better one which doesn’t work with phones, but it’s also made of cardboard.

As you might guess, an old DVD makes a shitty diffraction grating. The lines on the DVD grating are not even; they’re not even really lines; more like dots and dashes. If they were perfect or even vaguely useful, physicists would use them for diffraction gratings, because they’re a lot cheaper than ones you get from Richardson or Zeiss.

There are other Ph.D. thesis worthy matters wrong with this thing, such as calibration, integration time, polarization, scattering; it’s not even worth going over these things. These objects will never do what they’re supposed to do, which is perform as spectrometers. All these things will ever do is make rainbow patterns on a camera. That is not spectroscopy. That is looking at rainbow patterns on cameras. Go look at their results! I defy anyone to point to any result of theirs and characterize it as anything but looking at rainbow patterns; something you could do more effectively with the common prism: $7.99 at Edmund Scientific. Less for a whole spectrometer with much better resolution! It gets worse. Imagine you could build a good spectrometer out of all this junk; one which achieves their claimed resolving power of 200. Congratulations; you have just spent a lot of time and energy building something you could purchase for a few hundred dollars. Without shopping around, I found a really awesome one, designed by people who are not walking, grinning tomatoes, with much better sensitivity, resolving power, software and bandwidth, calibrated by real optical engineers, brand spanking new and with intelligent technical support, for a grand total of $2k. How much money is your time worth? If I wanted a mini spectrometer, I’d get a job at McDonald’s and purchase one that is guaranteed to work. I mean, I could actually build a really badass visible light mini spectrometer in my workshop, but … why?
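For perspective on that claimed resolving power of 200: resolving power is R = λ/Δλ, so here is what R = 200 buys you at an assumed reference wavelength of 550 nm (the wavelength choice is mine, not the article’s):

```python
# Resolving power R = lam / dlam: the smallest wavelength difference
# a spectrometer can separate, relative to the wavelength itself.
R = 200          # the kit's claimed resolving power
lam = 550.0      # assumed reference wavelength, nm
dlam = lam / R
print(dlam)      # 2.75 nm: anything closer than this blurs into one line

# For comparison, splitting the sodium D doublet (589.0 vs 589.6 nm),
# a classic beginner's target, already needs about five times that:
R_sodium = 589.3 / 0.6
print(round(R_sodium))   # ~982
```

So even if the cardboard device hit its claimed spec, it could not separate the most famous pair of lines in amateur spectroscopy.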

Oh yeah, we’re saving the environment with our cardboard cutout spectrometers. Right. Are visible light grating spectrometers useful for environmental remediation? No, they are not. If you want something like that, you need a much more powerful spectrometer. Best bet is to use a mass spectrometer, which is another sort of spectrometer. Second best, and distant relative, maybe an FTIR. Finally, for a couple hundred bucks, the amateur environmentalist can buy a useful spectrophotometer and do Real Things, rather than jerking off with costly open source nonsense that will never work.

“Kickstarter the startup” is probably a great idea. The way the world presently works, people will fork out money for good intentions and bullshit that sounds cool. Kickstarter ideas… A functioning market would allow me to short things.

Investments for dummies

Posted in investments by Scott Locklin on November 20, 2010

Since I have a weblog, a trading skunkworks and occasionally work for people in the quantitative finance domain, I’m occasionally asked by friends and acquaintances about investments. “Should I buy gold?” “What do you think of investing in company X?” or my favorite, “where should I put my money?”

The fact of the matter is, I don’t know the answer to these questions, and compared to most people, I probably am an expert on such things. I’d say, in reality, very few people in the world really know the answers to these questions, and if they do, they’re not going to be telling you. To really understand why, consider what you’re investing in when you buy a stock.

When you buy a unit of stock, you’re buying a legal contract entitling you to part of the profits of a corporation. What is a corporation? It’s a legal arrangement for providing goods and services to the public, and providing some vaguely defined way of sharing the profits with the owners. The owners being the people who own stock in the company. The owners are protected from legal risk incurred by the actual agents of the corporation. In other words, if a Lockheed executive tries to bribe a congressman and actually gets into trouble for it, the shareholders won’t go to jail. This is socially useful in that the shareholders can’t be expected to be accountable for the tens of thousands of Lockheed employees. While shareholders are shielded from legal liability, they’re not protected against the financial shenanigans of the agents of the corporation. This is something that people rarely think about: if the corporation they’re invested in is manned by criminals, they probably won’t realize any returns. Even assuming the agents of the corporation are honest, that doesn’t mean they’re not dumb, or at least optimizing a utility which isn’t aligned with that of the owners. For example: many companies will incur massive debts; debts which could eventually bankrupt the company. Accounting systems are also a bone of contention. While most American companies are reasonably honest, the way that the accounting is done is hugely relevant to how a company is valued.

There are a couple of ways ordinary humans think about stocks. They may think the idea behind the company which issued the stock is a good one, see the stock going up in price, and so buy into the trend. They may actually know something about the company: perhaps they notice lots of other people lining up to pay $4 for a cup of sugary caffeine water at the local coffee house, and so see it as a good investment. That’s all well and good, but if you don’t know about the company’s plans, the intimate details of its accounting methods, and who is running the joint, you really don’t know anything about it. If you’re buying on the trend, well, that can work too, but unless you’re willing to sit around and white knuckle the trend to its ultimate conclusion and time it well enough to sell at the top, you are just gambling. Not that there is anything wrong with that.