Wednesday, December 21, 2011
SPRUCE - A Way of Thinking About Software
Analyzing and comparing software can be a complex task and I needed a way to break it up into components to avoid being overwhelmed by the details. These six top-level categories help keep me organized: Security, Performance, Reliability, Usability, Community and Economy. I call it Spruce to make it easy to remember. It works equally well when thinking about operating systems, languages, frameworks and individual applications.
A brief summary of Spruce:
Security - Protection of sensitive data through passwords and encryption is the visible part. The invisible part that is hard to measure is how much exploitable surface area is exposed to an attacker. That may not be obvious at first, and it generally takes experience to develop a sense for the size of the risk. There is overlap with reliability with regard to attack-resistance.
Performance - We're concerned with the resources it requires relative to its alternatives. How well does it scale as the problem size increases and what trade-offs are unavoidable to achieve scale (ex: consistency vs. availability)? There can be overlap here with economy if it requires expensive hardware to achieve reasonable performance.
Reliability - This is about attack-resistance, fault-tolerance, error-correction and recovery. How gracefully does it deal with hardware/power failures, incorrect input and outright data corruption? There is overlap with security with regard to dealing with attacks. Can it keep running even under adverse conditions, or does it go down every time the wind shifts direction? Has it been battle-tested, or are you the brave pioneer? If redundancy is required, there is overlap with economy.
Usability - This is considered from the point-of-view of the user or programmer as appropriate. I'm concerned with documentation, user-experience and API design. How well does it adapt to problems the original developer did not anticipate? Is it a pleasure to use or does it make you regret your career path?
Community - This is anyone who can provide you with help and enhance the usefulness of the product. It ranges from support from the original developer to a vibrant third-party community pushing the tech forward. Is it easy to get answers to questions and solve problems? How often is it mentioned on Stack Overflow and GitHub? Can you find developers who are eager to work with it or do they consistently forget to return your calls when you tell them the name of the underlying tech?
Economy - We're interested in the total cost of ownership relative to its alternatives. The visible parts are licensing fees, support contracts and hardware requirements. The invisible parts are the impact it has on other decisions. If it turns out you made the wrong decision how expensive is it to correct the mistake?
Engineering is all about trade-offs, so it's rare that any tool excels in all of these areas. Reliability may be emphasized over performance or economy. Community might trump everything else. The key thing is simply to be aware of what the trade-offs are and to make them consciously.
Sunday, December 11, 2011
Quicksort with Hungarian Folk Dance
This is brilliant (and finally explains the music that plays from my computer every time I call a sort function!)
Wednesday, September 28, 2011
Better Bit Mixing - Improving on MurmurHash3's 64-bit Finalizer
Austin Appleby's superb MurmurHash3 is one of the best general purpose hash functions available today. Still, as good as it is, there are a couple of minor things about it that make me uneasy. I want to talk about one of those things today.
Here is the 64-bit finalizer from MurmurHash3:

    UInt64 MurmurHash3Mixer( UInt64 key )
    {
        key ^= (key >> 33);
        key *= 0xff51afd7ed558ccd;
        key ^= (key >> 33);
        key *= 0xc4ceb9fe1a85ec53;
        key ^= (key >> 33);
        return key;
    }
The goal of a finalizer (sometimes called a "bit mixer") is to take an intermediate hash value that may not be thoroughly mixed and increase its entropy to obtain both better distribution and fewer collisions among hashes. Small differences in input values, as you might get when hashing nearly identical data, should result in large differences in output values after mixing.
Ideally, flipping a single bit in the input key results in all output bits changing with a probability of 0.5. This is called the Avalanche effect. Cryptographically-secure hashes are willing to spend an impressive number of computer cycles getting this probability close to 0.5. Hash functions that don't have to be cryptographically-secure trade away some avalanche accuracy in favor of performance. MurmurHash3 gets close to 0.5 with a remarkable economy of effort: Just two multiplies, three shifts and three exclusive-ors. Austin reports MurmurHash3 avalanches all bits to within 0.25% bias.
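To make the avalanche criterion concrete, here is a minimal sketch of one way to measure that bias: flip each input bit in turn, count how often each output bit flips, and report the worst deviation from 0.5. This is only an illustration (it is not the harness used for the results below), and it assumes a standard uint64_t in place of the UInt64 above.

    #include <stdint.h>
    #include <stdio.h>
    #include <math.h>

    /* Same finalizer as above, written against stdint.h. */
    static uint64_t MurmurHash3Mixer(uint64_t key)
    {
        key ^= key >> 33;
        key *= 0xff51afd7ed558ccdULL;
        key ^= key >> 33;
        key *= 0xc4ceb9fe1a85ec53ULL;
        key ^= key >> 33;
        return key;
    }

    int main(void)
    {
        const uint64_t trials = 100000;     /* low-entropy inputs: counting numbers */
        static uint64_t flips[64][64];      /* flips[i][j]: input bit i flipped output bit j */

        for (uint64_t n = 0; n < trials; n++) {
            uint64_t base = MurmurHash3Mixer(n);
            for (int i = 0; i < 64; i++) {
                uint64_t diff = base ^ MurmurHash3Mixer(n ^ (1ULL << i));
                for (int j = 0; j < 64; j++)
                    flips[i][j] += (diff >> j) & 1;
            }
        }

        double worst = 0.0;
        for (int i = 0; i < 64; i++)
            for (int j = 0; j < 64; j++) {
                double bias = fabs((double)flips[i][j] / trials - 0.5);
                if (bias > worst)
                    worst = bias;
            }
        printf("worst-case avalanche bias: %f\n", worst);
        return 0;
    }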
The overall structure of the finalizer (shifts and multiplies) was designed by Austin. The constants were chosen by "a simple simulated-annealing algorithm" (his description). The part that troubles me is that the simulation was driven with random numbers. That doesn't seem right. After all, if the inputs were truly random, what would be the point of mixing? It seems to me that a good mixer should be able to take a series of, say, counting numbers and produce output that is virtually indistinguishable from random.
First question: Could I build a better mixer by training the simulation on low-entropy inputs such as counting numbers and high-likelihood bit patterns? Here are the results for MurmurHash3 and the top 14 mixers my simulation found:
Mixer | Maximum error | Mean error |
---|---|---|
MurmurHash3 mixer | 0.005974634384355 | 0.000265202074121 |
Mix01 | 0.000791851287732 | 0.000190873655461 |
Mix02 | 0.000885192071844 | 0.000197317606821 |
Mix03 | 0.000828116139837 | 0.000191444267889 |
Mix04 | 0.000991926125199 | 0.000199204707592 |
Mix05 | 0.000932171539344 | 0.000195036565653 |
Mix06 | 0.000819874127995 | 0.000199337714668 |
Mix07 | 0.000805656657567 | 0.000194049828213 |
Mix08 | 0.000906415252337 | 0.000191828700595 |
Mix09 | 0.000927020281943 | 0.000194149835046 |
Mix10 | 0.000877774261186 | 0.000194585176663 |
Mix11 | 0.000955867323390 | 0.000194238221367 |
Mix12 | 0.000932377589640 | 0.000193450189656 |
Mix13 | 0.000789996835068 | 0.000190778327016 |
Mix14 | 0.000800917500758 | 0.000191264175101 |
The maximum error tends to be about seven times lower. Mean error is lower too but only slightly. The answer, then, is yes, it does get better results with low-entropy keys. But that leads to the second question: Does training on low-entropy keys result in better or worse performance with high-entropy keys? To find out, I tested it with 100 million cryptographic-quality random numbers:
Mixer | Maximum error | Mean error |
---|---|---|
MurmurHash3 mixer | 0.000212380000000 | 0.000040445117187 |
Mix01 | 0.000177410000000 | 0.000040054211426 |
Mix02 | 0.000179150000000 | 0.000039797316895 |
Mix03 | 0.000170070000000 | 0.000040068117676 |
Mix04 | 0.000185470000000 | 0.000039775007324 |
Mix05 | 0.000192510000000 | 0.000039626535645 |
Mix06 | 0.000195660000000 | 0.000040216433105 |
Mix07 | 0.000193810000000 | 0.000039834248047 |
Mix08 | 0.000196590000000 | 0.000039063793945 |
Mix09 | 0.000174280000000 | 0.000039541943359 |
Mix10 | 0.000181790000000 | 0.000039569926758 |
Mix11 | 0.000181140000000 | 0.000039501779785 |
Mix12 | 0.000183920000000 | 0.000039622690430 |
Mix13 | 0.000175610000000 | 0.000039437180176 |
Mix14 | 0.000182000000000 | 0.000040092158203 |
Yes, we can do slightly better than MurmurHash3 even on random test keys.
On average this is a small but measurable improvement that comes at no performance cost. For the worst-case, low-entropy keys that concern us the most, it provides a significant improvement.
Here's a handy tip: A great use for a mixer like this is to "purify" untrustworthy input hash keys. If a calling function provides poor-quality hashes (even counting numbers!) as input to your code, then running them through one of these mixers ensures they do no harm.
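As a sketch of that idea (the hash-table details here are hypothetical; the purifier uses the Mix13 parameters from the table below):

    #include <stddef.h>
    #include <stdint.h>

    /* "Purify" a caller-supplied hash using the Mix13 parameters listed below. */
    static uint64_t Purify(uint64_t key)
    {
        key ^= key >> 30;
        key *= 0xbf58476d1ce4e5b9ULL;
        key ^= key >> 27;
        key *= 0x94d049bb133111ebULL;
        key ^= key >> 31;
        return key;
    }

    /* Map an untrusted hash (even a counting number) to a bucket index.
       Assumes bucket_count is a power of two. */
    static size_t BucketFor(uint64_t caller_hash, size_t bucket_count)
    {
        return (size_t)(Purify(caller_hash) & (bucket_count - 1));
    }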
Here are the parameters for the mixers I tested:
Mixer | Shift 1 | Multiply constant 1 | Shift 2 | Multiply constant 2 | Shift 3 |
---|---|---|---|---|---|
MurmurHash3 mixer | 33 | 0xff51afd7ed558ccd | 33 | 0xc4ceb9fe1a85ec53 | 33 |
Mix01 | 31 | 0x7fb5d329728ea185 | 27 | 0x81dadef4bc2dd44d | 33 |
Mix02 | 33 | 0x64dd81482cbd31d7 | 31 | 0xe36aa5c613612997 | 31 |
Mix03 | 31 | 0x99bcf6822b23ca35 | 30 | 0x14020a57acced8b7 | 33 |
Mix04 | 33 | 0x62a9d9ed799705f5 | 28 | 0xcb24d0a5c88c35b3 | 32 |
Mix05 | 31 | 0x79c135c1674b9add | 29 | 0x54c77c86f6913e45 | 30 |
Mix06 | 31 | 0x69b0bc90bd9a8c49 | 27 | 0x3d5e661a2a77868d | 30 |
Mix07 | 30 | 0x16a6ac37883af045 | 26 | 0xcc9c31a4274686a5 | 32 |
Mix08 | 30 | 0x294aa62849912f0b | 28 | 0x0a9ba9c8a5b15117 | 31 |
Mix09 | 32 | 0x4cd6944c5cc20b6d | 29 | 0xfc12c5b19d3259e9 | 32 |
Mix10 | 30 | 0xe4c7e495f4c683f5 | 32 | 0xfda871baea35a293 | 33 |
Mix11 | 27 | 0x97d461a8b11570d9 | 28 | 0x02271eb7c6c4cd6b | 32 |
Mix12 | 29 | 0x3cd0eb9d47532dfb | 26 | 0x63660277528772bb | 33 |
Mix13 | 30 | 0xbf58476d1ce4e5b9 | 27 | 0x94d049bb133111eb | 31 |
Mix14 | 30 | 0x4be98134a5976fd3 | 29 | 0x3bc0993a5ad19a13 | 31 |
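Each of these shares the xor-shift/multiply skeleton of the MurmurHash3 finalizer, reading the columns as shift, multiply constant, shift, multiply constant, shift (the MurmurHash3 row matches the code at the top of the post under that reading). As a sketch, any row can be plugged into a generic helper:

    #include <stdint.h>

    /* Generic xor-shift/multiply finalizer: one table row supplies
       (shift1, constant1, shift2, constant2, shift3). */
    static uint64_t Mix(uint64_t key,
                        unsigned s1, uint64_t c1,
                        unsigned s2, uint64_t c2,
                        unsigned s3)
    {
        key ^= key >> s1;
        key *= c1;
        key ^= key >> s2;
        key *= c2;
        key ^= key >> s3;
        return key;
    }

    /* Example: Mix01 from the table above. */
    static uint64_t Mix01(uint64_t key)
    {
        return Mix(key, 31, 0x7fb5d329728ea185ULL,
                        27, 0x81dadef4bc2dd44dULL,
                        33);
    }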
Tuesday, September 27, 2011
About Me
I have co-founded four startups, assembled and managed teams of developers and delivered products that millions of people use every day.
Some bits of trivia:
Remember that pinball game that shipped with Windows from Windows 95 through XP? That was made by me and my co-founders at my first startup, Cinematronics. We were later acquired by Maxis.
My second startup, Eclipse Entertainment, created the first browser-based 3D graphics engine. It was acquired by WildTangent.
I built a compressor called Quantum and for a few years in the 90s it was the best-performing lossless data compressor. It was the first compressor with a workable solution to the optimal parsing problem and typically outperformed PkZip by 20%-30%. Borland, Microsoft and Novell licensed it. Microsoft used it for their .CAB files (it knocked two diskettes off the size of Windows) and almost all of their products were compressed with Quantum.
I created World Class Chess, the first commercial chess program to offer a user-configurable opening book library. I made it configurable so my chess-playing friends would populate the opening book for me. That plan backfired in an amusing way: they tended to enter whatever obscure openings they were studying at the time. As a result, the program liked to play openings it couldn't understand, and when it exhausted its book it proceeded to waste time rearranging its pieces to match its own ideas about pawn structure, mobility and king safety. (If you're a programmer you might enjoy thinking about how to represent a chess opening book library in just 8 bits per move. It used to be important to fit it in as little space as possible.)
I created Launch, a Windows shell that replaced the dreaded Program Manager. It sold well for a few years and had a community of enthusiastic users.
I created Savant, the first Scrabble game that was fast enough to do a tree search in the end-game. Its middle-game was weaker than the best program at the time but it made up for it with near-perfect end-game play. Computers are faster now and the best Scrabble games today do Monte-Carlo simulations at all stages of the game.
I built and shipped (in Encarta) a natural-language search engine at Microsoft. Its accuracy outperformed our best keyword-based search engine at the time.
Optimizing for performance is one of my passions. You can read about a couple of my programs at the following links:
There Ain't No Such Thing As The Fastest Code
http://downloads.gamedev.net/pdf/gpbb/gpbb16.pdf
It's a Wonderful Life
http://downloads.gamedev.net/pdf/gpbb/gpbb18.pdf
I wrote much of Borland's Turbo C run-time library. I often think back to what a great experience it was to be a part of that team. The stars were aligned for Borland in those days. It was magical and, in many ways, you could say that I've spent much of my career attempting to recreate that magic. I've come close but have never been totally successful. Such things are rare. I was too young to appreciate it at the time.
My interests are focused around machine learning, natural language processing, information theory, high-performance computing and scalability.
I lived and worked in Japan for two years though I've pretty much forgotten what little Japanese I once knew.
I enjoy games and puzzles: Go (Weiqi/Weichi/Baduk), Chess and Scrabble.
In 2002 I took a year off and rode a motorcycle around the world and had some adventures. I was interrogated by the Russian secret police and told them everything I knew in great detail. They grew bored and let me go.
Some things I believe: Working with good people is soul-enriching. Mentoring younger people is a way of repaying those who mentored me. Shipping is hard but real artists ship. Nothing can replace perseverance.