Airport searches.

For the record, my personal bugbear is the privacy implications of the new "advanced imaging" machines appearing at airports. But then again, I've distrusted supermarket loyalty cards and freeway FastLane programs, too -- my experience is that data which is collected will eventually escape your control (if you had any to begin with), and that no one is willing to offer a believable privacy pledge for such data (it would have to have 3rd-party audits for compliance, for example).

All that said, I just read a very reasonable article discussing the health impacts of the new scanners. I'm not going to play the alarmist card — ironically, the risk of injury from passing through a scanner is most likely about the same as the risk of injury in a terrorism-related incident, which is to say extremely small — but it's prudent to admit honestly where risks are unknown, and to call out the lie when deceptive arguments are used.

So let's not get all "ooh, scary, radiation" about this — quantum-physically speaking, everything is radiation at some wavelength — but it's worth keeping the risks in mind so you can make your own decisions, especially if you'd otherwise feel peer-pressured to "just do it". I'm just glad that as a nation we've apparently decided that this is where we're going to step up and draw the line, libertarians and liberals together. Contemplating our freedoms progressively and silently eroding away one by one is a far more worrying prospect.


Words With Pirates

I've caught the Words With Friends bug. Worse: the Words With Pirates subtype. (It's all the tile-placement and strategy fun of Words with less of the tedious racking your brain for obscure words; a more relaxing variant for when I don't want to think so hard.)

Today I discovered that I didn't actually understand the full word-creation rules for Words With Pirates. For the benefit of other similarly unenlightened folk, here's the accepted word list as a regular expression:

^((g|gy|h|hy|y)?ar+(gh?)?|(har+){2,3})!?$

In more verbose format, these are all the words:

ar arg argh gar garg gargh gyar gyarg gyargh har harg hargh harhar harharhar hyar hyarg hyargh yar yarg yargh

In addition, an exclamation mark is always allowed at the end, and any non-zero number of r's may be substituted for any r.
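As a sanity check, the rules above can be packed into a single regular expression and tested against the verbose list (the pattern here is my own reconstruction, not anything official from the game):

```javascript
// My reconstruction of the pirate dictionary as one regex:
// an optional prefix (g/gy/h/hy/y), "ar", an optional "g"/"gh" suffix,
// or 2-3 repetitions of "har"; every r may be repeated, and a trailing
// "!" is always allowed.
const pirate = /^(?:(?:g|gy|h|hy|y)?ar+(?:gh?)?|(?:har+){2,3})!?$/;

const words = ('ar arg argh gar garg gargh gyar gyarg gyargh har harg ' +
               'hargh harhar harharhar hyar hyarg hyargh yar yarg yargh')
  .split(' ');
const allMatch = words.every((w) => pirate.test(w));
```

Note that the pattern accepts 'yarrrgh!' but correctly rejects 'harharharhar', matching the oddity described below.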

Note that the words 'har', 'harhar', and 'harharhar', with optional exclamation marks at the end, are allowed. This is a little unusual, since the 15x15 grid should allow 'harharharhar' and 'harharharharhar' to also be played -- but these are not in the dictionary. Nevertheless, these oddball forms will probably prove useful to those stuck with excess a's.

For the benefit of the obsessive, there are 15 A's (worth 2 points), 9 G's (worth 3 points), 6 H's (worth 5 points), 28 R's (worth 1 point), 3 Y's (worth 10 points), and 3 !'s (worth 10 points), for a total of 64 tiles.


CoffeeScript and TurtleScript

Via reports on the OSCON 2010 Emerging Languages Camp, I recently discovered CoffeeScript, a very interesting "dialect" of JavaScript. The original idea for CoffeeScript seems to be to clean up the syntax of JavaScript, while preserving direct correspondence as much as possible. Over time, it seems to have grown more and more "neat features" which have increasingly-hairy desugarings into JavaScript — but the JavaScript translation of a CoffeeScript program is still very readable.

This has some relationship to my TurtleScript project, which also sought to clean up JavaScript to some degree. In TurtleScript I've tried to pare down as much of the language as possible, using the smallest number of syntactic forms and JavaScript features as possible, with the smallest possible parser/compiler/runtime system, inspired by Crockford's Simplified JavaScript.

CoffeeScript has some very elegant simplifications: for example, everything is an expression, which reduces the statement/expression syntactic distinction in C-like languages. Making the body of a function into an expression removes unnecessary return statements. Making for and while into expressions is a cute means to introduce (the power of) array comprehensions without additional syntax. CoffeeScript also has a nice way to express functions both with and without lexical this binding (a JavaScript skeleton in the closet).
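For a flavor of the desugaring, the CoffeeScript comprehension `squares = (x * x for x in [1..5])` translates to JavaScript roughly as follows (a hand-written approximation, not the compiler's exact output):

```javascript
// Approximate JavaScript desugaring of the CoffeeScript:
//   squares = (x * x for x in [1..5])
// The comprehension becomes an immediately-invoked expression that
// accumulates results -- so even the loop has a value.
var squares = (function () {
  var results = [];
  for (var x = 1; x <= 5; x++) {
    results.push(x * x);
  }
  return results;
})();
```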

Unfortunately, the full CoffeeScript language includes many, many syntactic forms — even more than full JavaScript. Some of these can trivially be moved into a library, at a slight cost in compactness. While having varied syntax that precisely and concisely expresses programmer intent is certainly admirable, it makes the language harder to learn and, in my opinion, less elegant. For a scalable, learner-friendly language system I like something like Newspeak more: a language whose goal is to grow smaller, not larger.

Adopting an IFRAME

Warning: heavy geek content.

Google recently rolled out a new Gmail feature: if you pop open a new little compose or chat window, and then close your main window, the small child window doesn't go away.

This is a pretty esoteric little tweak, but the underlying functionality is quite profound: in order to make this feature work, the entire document context (all the JavaScript responsible for running Gmail's UI) needs to get teleported from the main window out into the child window. For speed and memory reasons, you don't want to run a copy of the main code in the child window — no, you want to wait until just before you close the main window and then instantly teleport the guts of the main window over to the child. How's Google doing that?

In the browser model, separate windows are usually separate security contexts, safely sandboxed from each other. So this magic teleportation trick is also creating a wormhole between the different security domains. My curiosity was piqued, but Google (both blog and search engine) was strangely silent on how their new trick was pulled off.

Here's the story I eventually discovered. Google had long been thinking about a way to make this feature work. Their initial proposal was something called GlobalScript (or sometimes, "SharedScript"). This proposal met resistance and a new simpler solution was found: the "Magic IFRAME".

The guts are hidden inside bug 32848 in WebKit's bug tracker. You put all of your application's good stuff inside an <iframe> element — we've guessed that already. You pass your child window a copy of that IFRAME, something like:

function doSomethingCoolThatNeedsANewWindow() {
    // open the child; it can reach the shared <iframe> via window.opener
    var childWin = window.open('child.html');
    childWin.onload = function() {
        // ... child window takes over the UI here ...
    };
}

Now, the crux: when your main window is about to close, you just adoptNode the shared <iframe>:

// adoptNode in this case does removeChild() internally
// but does not unload the content
childWin.document.adoptNode(iframe);   // 'iframe' is the shared <iframe>
childWin.document.body.appendChild(iframe);

Voila! This used to completely unload and reload the iframe, erasing the existing application context, but with this one small hack Chrome (and now Safari, too) suppresses the reload and allows the untouched <iframe> context to teleport over to the child.

This doesn't work on Mozilla yet. And it's a pretty ugly hack, tweaking the behavior in just this one narrow case. (And Mozilla is a little bit cranky about adoptNode() being used without a prior importNode().) But this hack also allows work-arounds for two other long-standing iframe-reparenting "bugs". It suggests that <iframe> is now the <module> tag Crockford wanted, especially now that the <iframe> can be sandboxed in various ways as well. By putting each piece inside an <iframe> you can now build a robust module system for browser JavaScript.

Congratulations, OLPC!

Today OLPC announced a partnership with Marvell to develop the XO-3. This is great news — according to my little birdies this will put development efforts at OLPC on substantially more solid financial footing. And software development for new tablet computers is non-trivial! Here's hoping that OLPC, which led the netbook movement, can similarly spur the nascent tablet computer market. So far tablets are seen as content consumers, with all "real" content creation (i.e. not just jotting notes or making minor edits) being done on separate "real" computers. OLPC's vision should insist on fully productive machines which allow kids to create and contribute, not just passively consume. (In particular, the killer app for kids on XO laptops to date is making movies and telling stories.)

In addition to funding further software development, the Marvell partnership ought to give OLPC the muscle to continue pushing forward the hardware state of the art. A lot of the reality of modern electronics manufacturing depends on guaranteeing enough production volume to sustain the interest of suppliers and their engineers. For low volumes you instead get "least effort" solutions from your partners, which result in higher costs and poorer results. So I'm cautiously optimistic that the "capture" of OLPC resources for Marvell's high volume "first world" tablet efforts will in the end be a means to accomplishing the XO-3 goals for "the rest". But care and caution are warranted. OLPC is not large enough to do multiple things at once; its resources and attention are dissipated easily.


Baker's (Square Dance Concepts)

In a previous post I introduced the "Baker's" square dance concept. By analogy to "Baker's dozen", it was proposed that "Baker's" meant to do 13/12 of the call it modified.

But let's reconsider: a "baker's half dozen" isn't 6 1/2 eggs, it's 7. Let's redefine the "Baker's" concept to mean "one more part" of the call. (Hence the title of this post: "one more square dance concept".)

So a "Baker's Square Thru" is a square thru 5. A "Baker's Eight Chain 3" is an eight chain 4. A "Baker's Right and Left Grand" goes 5 hands around. Note that this is different from "do the last part twice"; we continue the alternation of parts in the base call.

Let's push this a little further to include calls with arithmetic sequences. "Baker's Remake the Wave" would end with 4 quarters by the left. "Baker's Quarter Thru" would be a remake the wave. "Baker's Three Quarter Thru" is a reverse order remake.

Slightly more controversial: "Baker's Swing the Fractions" would end with zero quarters by the left (would that still set a roll direction?). "Baker's Touch By 1/4 By 1/2" from a single file column of 6 would have the very outsides touch 3/4. Sue Curtis suggests that "Baker's Reverse Echo As Couples Trade" (from a tidal one-faced line) would be "trade, as couples trade, as couples as couples trade", or equivalently "trade, couples trade, couples of 4 trade".

This definition for "Baker's" seems a lot more fun. Do any other square dance callers have concept-worthy last names?

SDR 0.6

Another holiday, another SDR release. Version 0.6 (now on github!) comes with definitions for almost all of the Mainstream square dance program, excepting thars, stars, and that pesky allemande left. The basic abstract syntax tree representation of calls has been refactored, building on an implicitly-typed expression representation — clearly we're approaching fulfillment of Greenspun's tenth rule. On the REPL front of this quest, SDR now includes PMSD, a text UI incorporating an interactive JavaScript context. The test case transcripts probably provide the best overview of its use.

The SDR web frontend now supports bigon, hexagon, and octagon dancing. There's also a square resolver based on Dave Wilson's ocean wave method. Since the previous release, I've also reimplemented square breathing using the Cassowary constraint solver. Expanding the square to make space for inserted tandems, couples, etc. is formulated as a linear programming problem, optimized so that the resulting formation is compact. Resolving overlaps is formulated as a mixed integer program, which allows for either-or constraints: if dancers overlap on both the X and Y axes, either the X overlap must be resolved, or the Y overlap must be resolved, or the dancers have to form a star.
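The either-or constraints use the standard "big-M" disjunction trick from integer programming; in my notation (not SDR's actual formulation), with s the minimum separation, M a constant larger than any possible overlap, and b a 0/1 decision variable:

```latex
% resolve the X overlap OR the Y overlap (one constraint may be relaxed):
\begin{align*}
  d^x_{ij} &\ge s - M\,b_{ij} \\
  d^y_{ij} &\ge s - M\,(1 - b_{ij}) \\
  b_{ij} &\in \{0, 1\}
\end{align*}
% d^x_{ij}, d^y_{ij} are the (suitably linearized) separations of dancers
% i and j along each axis; when b_{ij} = 1 the X constraint is vacuous,
% and when b_{ij} = 0 the Y constraint is.
```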

Although there's plenty of core engine work left to do, I've started to shift back to working on the game UI. Hooking the newly improved dance engine up with the Sphinx voice recognition engine, tuning the recognizer, and generally putting all the bits together is expected to be the focus of the next (0.7) release.

Hunting with Google Wave

We used Google Wave this past weekend during a practice for January's Mystery Hunt. It was an interesting experience, but Wave needs better group support before it's really usable for large-group collaboration. Further, wave threads with lots of discussion start becoming unwieldy: what usually ends up evolving is a collaboratively-edited document/summary/link forest on top, and lots of discussion and comments trailing off below. This forces you to keep scrolling between the "useful stuff" at top and the end of the comment thread, rapidly vanishing off below. The terrible scrollbar implementation doesn't help. This seems like a basic UI problem — and one that wikis have to some degree, too: anyone who's done any collaborative editing of a wiki knows the temptation to insert little inline comments, which spawn discussions that threaten to overwhelm the text. I suspect the solution is some sort of split view or toggle so you can keep an eye on both the discussion and the document at once — think of the little chat window integrated into a collaboratively-edited Google Spreadsheet, but with history and linking and all that wave goodness.

So, as presently constituted, the web Google Wave app is Not Quite There Yet (there are also some instability, compatibility, and slowness issues, but they seem more straightforward to solve). But the underlying Wave technology remains very exciting, and I believe it's something Sugar should build on, as I've written before.

Chrome OS, litl, and OLPC
Well, Google just announced Chrome OS, to be available next Christmas. Some parts of it are very similar to what you can already put under the tree this Christmas from litl (actually, you can buy one right now) — and other parts are familiar from my time at OLPC. For example, the legacy-free BIOS and signed bootloader are what OLPC shipped two years ago on the XO-1. The XO-1 was not an "always connected" device, though — in fact the opposite: it assumed connectivity was infrequent and low-bandwidth. At OLPC, the signed code scheme was part of a theft-deterrence system, crucial for OLPC's target third-world deployments. It wasn't very popular among our first-world users, and I'm not actually sure why Google implemented it for Chrome OS. Phone company lock-in (due to largely misplaced concerns about maintaining the integrity of their networks) is why phone apps are locked down with signature schemes; this isn't necessary (and verges on Evil) for what's intended as a general-purpose computing device.

The somewhat oddly-technical (and often slightly-wrong) middle section of the Chrome OS presentation veered off at one point about filesystem partitions (!) and how having a read-only partition is a novel feature of their OS. Read-only partitions are one of the oldest security mechanisms — my boot floppy had the write-protect notch taped over back when I was booting DOS — and a near-universal feature of thin-client deployments (which Chrome OS certainly is). OLPC maintained a mostly-read-only root, but primarily to extend the lifetime of the flash disk (flash lifetime was not touched on in the Chrome OS presentation). Litl mounts a read-only root with a writable unionfs on top, which actually works much better in practice: improved maintainability because all the various Linux system daemons can still "do their thing" as they expect, but you can wipe the top partition whenever you like to return to a clean state (at litl we do this on every update). (If you haven't hacked the lower levels of a Linux distribution, you'd probably be surprised at how many system daemons assume they can write various places in the root partition, and you really don't want to maintain hacked versions of all of them.) Since ChromeOS gave an Ubuntu shout-out at one point, I strongly suspect the unionfs scheme is actually what they are doing as well — and, for that matter, what all the other Ubuntu Mobile downstreams are doing. Not new.

The emphasis on a read-only root partition is rather misleading from a security standpoint (as much of the middle portion of the presentation was). If you're not storing your files locally, it doesn't mean that you suddenly have no security concerns. It just means you have different security concerns. Cross-site scripting attacks give a malicious website access to your Google account through your web browser: these sorts of things are the malware for a WebOS. You have a different attack surface, but a vulnerability in your browser or flash plugin still gives access to private data. Mentioning that they encrypt the data on disk seems to be pure snake oil: your browser has access to the unencrypted data, and that's your most vulnerable surface anyway.

Overall, Chrome OS is a nice validation of some of the cloud-computing ideas we've been working on at litl, and it's always nice to see more legacy-free hardware (like the OLPC XO-1 and the litl webbook), but the presentation was oddly underwhelming. They're not really "reimagining the PC" — they're just removing all the applications on the PC except for Chrome. You still interact with "the PC" the same way you currently interact with Chrome. For reimagination, watch the videos at litl.

litl's technical secrets revealed!

Update: Lucas has written up some additional technical details — and he mentions our update system, which was one of the first bits I worked on when I arrived. We heavily dog-food the updater, using buildbot to push out the latest bits to developers every night.

Update 2: Welcome, slashdot! The videos at give a good idea of what the UI is like.

Update 3: Non-technical press coverage: Wired, WSJ, Xconomy, CrunchGear

Update 4: More words: dignacio on litl's server side, J5, twitter

litl launches today, so I can finally talk a bit about the cool technologies behind our software stack.

On the server side, litl is a cloud computer, built on Google App Engine, Amazon S3, and Django — all of which are fantastic technologies. All machine data is stored in the cloud, so you can have a gorilla stomp on your litl, pick up another one, log on and instantly recreate your environment. (Since we developers are always abusing our prototype hardware, we've tested this a lot!)

On the client side, the litl software (naturally, code-named "big") is built on gjs, which is the smallest possible wrapper around JavaScript necessary to make it a large-scale programming language. I've really enjoyed programming in JavaScript, which might seem odd to people who (a) have had frustrating experiences with global variables and crazy incompatibilites trying to make JavaScript work on the web, and (b) know that I'm a static types and compiler-bondage sort of guy. So I'll spend a little time here talking about gjs.

From a language standpoint gjs adds just one feature: a minimal module/namespacing mechanism built on a single top-level definition: the name "imports". Modules are imported using (typically constant) definitions, such as:

const StringUtil = imports.stringUtil;

s = StringUtil.sprintf("%d", 3);

The dynamic stringUtil property of the imports object is an object whose properties are the top-level definitions in the file stringUtil.js, found on the import path. Subdirectories are additional dot-separated components, as in Java package names; imports.util is a dynamic object representing modules found in a util directory on the path. You may need to compare this to the namespacing mechanism in the abandoned ES4 proposal to appreciate how small and elegant this is.

Further, this module system integrates with the GObject system via GObject introspection annotations for library code. This allows easy integration with libraries written in C or any other introspectable language. For example:

const Gtk = imports.gi.Gtk;
Gtk.init(0, null);

let w = new Gtk.Window({ type: Gtk.WindowType.TOPLEVEL });
w.connect('destroy', Gtk.main_quit);

let button = new Gtk.Button({ label: "Hello, world!" });
button.connect('clicked', function() { w.destroy(); });
w.add(button);
w.show_all();
Gtk.main();

The gjs system is built on the SpiderMonkey JavaScript engine, the one used in Firefox, so JavaScript execution benefits from all the JIT and performance work done upstream. Further, it means that we can code in JavaScript 1.8, Mozilla's dialect of JavaScript with lots of bells and whistles (mostly borrowed from Python):

gjs> function range(n) { for (let i=0; i<n; i++) yield i; }
gjs> [i*2 for (i in range(5))]

(In a later post perhaps I'll describe how you can use the yield expression to build a continuation-based system for writing asynchronous callbacks for UI code in a natural manner.)
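The core trick, sketched here in modern generator syntax (an illustration of the technique, not litl's actual code): a driver resumes the generator each time an asynchronous operation completes, so callback-heavy code reads as straight-line code.

```javascript
// Drive a generator that yields "operations": each yielded value is a
// function taking a callback, and the driver feeds the callback's
// result back into the generator via next().
function run(genFunc) {
  const gen = genFunc();
  function step(result) {
    const { value: op, done } = gen.next(result);
    if (!done) op(step);  // resume the generator when the op calls back
  }
  step(undefined);
}

const results = [];
run(function* () {
  // each yield suspends until the (here: immediate) callback fires; in
  // real UI code the operation would be a timer, I/O, an animation, etc.
  const a = yield (cb) => cb(1);
  const b = yield (cb) => cb(a + 1);
  results.push(a, b);
});
```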

Overall, JavaScript as a system programming language feels a lot like Lisp must have for the programming generation before mine: minimal syntax, very powerful and orthogonal core abstractions, and (dare I say it) not much type-checking or busy-work to get in your way. (gjs is not a homage to Sussman, but it should be!) JavaScript is a programming language for those who know what they're doing and aren't afraid of a little functional abstraction. (Now if only there was a way to disable semicolon insertion!)

OK, enough about gjs. The litl software stack is based on Canonical's distribution of Ubuntu for mobile/embedded devices, and the Canonical folks have been great partners. It's been a joy to get changes integrated upstream, and Canonical has done a lot of excellent work accommodating their distribution to litl's unique needs. On the UI side, we use X11 and some GTK widgets, but implement our own (very simple) window manager. Most of the actual look and feel is done with Clutter (hardware-accelerated 3d animated effects, whee!), and we have a Flash-based API for external developers. We also have hardware-accelerated h.264 video.

Regardless of the technical fun, though, I have to say that the best thing about working at litl is its management: developing with all the other rockstars here is made the more fun by the knowledge that management will help ensure that our goals are realistic and that we'll be able to hit our targets, with time left over to polish before release. It's just much more fun to code when you know you'll be proud of the end result.

Echo^n Square Thru A(5)

[Square Dance primer for computer scientists: square dance calls are (recursive) sequences of basic actions: walk forward, turn around, do a square dance call. "U-turn back" is a square dance call, which means simply "turn around". Some calls take numeric arguments: "square thru N" for N=1 means "pull by"; "pull by" is another square dance call which is roughly "step past the person you're facing". For N>1, "square thru N" means "pull by, turn in, mirror square thru (N-1)". The word "mirror" in that definition is a square dance concept. It is a function which takes a square dance call and transforms it, yielding square dance calls or actions. "Mirror" means to simply do the call as if you were looking in a mirror, exchanging right and left. (If you've ever tried to comb your hair looking into the display from a webcam, which is not mirrored the way you expect it to, you know how hard it can be to do well-practiced actions mirror-wise. "Mirror" is a simple square dance concept.) So the definition of square thru N I gave above is recursive, defined in terms of a transformation of the call itself. The concept "twice" takes a call and merely means to repeat the call twice.

Concepts can take multiple arguments, including arguments which are themselves concepts, not just calls. (Some would call these supercalls or meta-concepts, but we'll gloss over this for now.) Which brings us to the subject of today's post: the ECHO concept.]

"ECHO <concept> <call>" means do <concept> <call> then do <call> — like you were hearing the last part of the phrase echoed back from a distance. So, "ECHO mirror square thru 4" means do a "mirror square thru 4" and then a "square thru 4". "Echo twice u-turn back" means u-turn back three times.

Most dancers have a bit of a formal grammar in their heads, something like:

call: <simple_call> | <concept> <call> | <metaconcept> <concept> <call> ;
simple_call: square thru <number> | u-turn back | pull by | ... ;
concept: mirror | twice | ... ;
metaconcept: echo | ... ;

Examples of valid utterances:
"mirror square thru 4", "echo mirror square thru 4", "echo twice u-turn back".
Some invalid utterances:
"square thru 4 mirror", "mirror square", "echo mirror".
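That grammar is small enough to sketch as a recursive-descent parser (the token handling and names here are mine, purely illustrative):

```javascript
// Tiny recursive-descent parser for the call grammar above.
const CONCEPTS = ['mirror', 'twice'];

function parse(tokens) {
  const [head, ...rest] = tokens;
  if (head === 'echo') {               // metaconcept: echo <concept> <call>
    const [concept, ...callToks] = rest;
    if (!CONCEPTS.includes(concept)) {
      throw new Error('echo must be followed by a concept');
    }
    return { echo: concept, call: parse(callToks) };
  }
  if (CONCEPTS.includes(head)) {       // concept: <concept> <call>
    return { concept: head, call: parse(rest) };
  }
  if (head === 'square') {             // simple_call: square thru <number>
    return { call: 'square thru', n: Number(tokens[2]) };
  }
  return { call: tokens.join(' ') };   // u-turn back, pull by, ...
}
```

Note that `parse('echo echo twice u-turn back'.split(' '))` throws: "echo" is not a concept, so this strict grammar rejects the utterance.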

Now here comes the fun part: how do you parse the expression "echo echo twice u-turn back"?

It doesn't parse according to the grammar above: you have to mentally curry an argument to make "echo twice" into a concept (since when provided with a concept the remaining part is a 'function taking a call', aka a concept). You end up with something like "echo (echo twice) u-turn back", which would mean, "echo twice u-turn back, u-turn back". Thus, "echo twice u-turn back" means do the u-turn back 3 times; "echo (echo twice) u-turn back" thus means to do the call 4 times, and "echo (echo (echo twice)) u-turn back" is 5 times. Adding another echo adds another u-turn back. Bo-ring.

Because we're adventurous types, let's throw the grammar out the window and treat echo as a simple macro-expander, which grabs the shortest possible phrase following it as its argument. Now we'll parse "echo echo twice u-turn back" as "echo (echo) twice u-turn back", that is, "echo twice u-turn back, twice u-turn back". That's 5 u-turn backs. The next one, "echo echo echo twice u-turn back" becomes "echo (echo) (echo (echo) twice u-turn back)", which is 8. Continuing, "echo (echo) (echo (echo) (echo twice u-turn back))" is 13. Adding more echos yields subsequent terms in the Fibonacci sequence. Now we're talking!
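The counts are easy to check mechanically (a sketch of the macro-expander reading; the function name is mine):

```javascript
// u-turn backs danced for "echo^n twice u-turn back" under the
// macro-expander reading: one echo gives 3, and each additional echo
// adds the two previous counts -- the Fibonacci recurrence.
function uturnBacks(echoes) {
  let prev = 2, cur = 3;   // "twice u-turn back" = 2; one echo = 3
  for (let i = 1; i < echoes; i++) {
    [prev, cur] = [cur, prev + cur];
  }
  return cur;
}
```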

The sequence constructed above grows in duration as an exponential function of the length of the utterance, but we already know of another such construction: the humble "square thru N" also grows exponentially with its input, since the value of its numeric argument "999...9" is also exponential with the number of digits uttered. Let's say that the goal is to allow the square dance caller time to enjoy a nice beverage and sporting event on TV while the dancers work out the implications of a short call. Exponential is okay, but we could certainly do better.

Some readers might be aware of the "baker's function", which naturally involves multiplying everything by 13/12. One might imagine "Baker's Square Thru 6" called using this function as a concept; this is equivalent to "square thru 6 1/2 times". (Computer scientists: define "square thru 1/2" as "do the first half of a pull by" then you can continue using the recursive definition above.)

But, for packing the most dancing into the fewest words — super-exponentially! — I submit "Ackermann's Square Thru 5". (Use f(n)=A(n,n) as computer scientists do.) Guy Steele has suggested that the caller might use "Ackermann's Square Thru Your Couple Number", finish the tip calling for two couples, and then use three-couple figures for the rest of the night. Perfect for the lazy caller who wants to do 25% less.
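For the curious, the function itself is tiny (this is the standard two-argument definition, nothing caller-specific about it):

```javascript
// Two-argument Ackermann function; the concept uses f(n) = A(n, n).
// f(1) = 3, f(2) = 7, f(3) = 61 -- and f(4) is already a tower of
// exponentials far beyond any evening of dancing.
function ack(m, n) {
  if (m === 0) return n + 1;
  if (n === 0) return ack(m - 1, 1);
  return ack(m - 1, ack(m, n - 1));
}
```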

(Credits: Justin Legakis discovered the Fibonacci connection. Guy Steele had most of the good ideas here. Bill Ackerman is the recipient of the eponymous super-exponential concept. I just wrote it all up.)

Flash Filesystems

Dave Woodhouse in a recent article calls shenanigans on hardware translation for flash devices. I agree: flash memory is a Horse Of A Different Color, and trying to gussy it up to look like a rotating disk of magnetized rust is using a bad and leaky abstraction that will only end in tears. But I don't think the engineers' better judgment will prevail: the use of hardware translation is driven by Windows/DOS compatibility concerns, since Microsoft (to my knowledge) has shown no interest in writing a new filesystem for flash drives. OLPC used a raw flash device in the XO-1, but in their follow-on had to switch to a hardware-translation device because market/scale economics were making those devices cheaper and cheaper while the original raw flash device was (a) not increasing in volume (aka, getting relatively more expensive), (b) not increasing in size (no one wanted to make new ones), and (c) getting discontinued (aka, impossible to buy). The best one can hope for is that a raw interface be offered in addition to the standard "Windows-compatible" one, for specialized embedded or high-performance applications — but the chicken and egg problem applies: until there are compelling gains, these interfaces won't be purchased in sufficient volumes to yield reasonable prices, and no one is writing the optimized filesystems because you can't find reasonably-priced flash devices to run them on. The end result is likely to be that Worse Is Better, and we'll be left with another set of legacy chains. Given enough time and transistors, the hardware may eventually grow Heroic Measures to work around the bad abstraction (see: the x86 instruction set).

If your filesystem is large enough and the amount of data being rewritten small enough, the flash problems "probably" won't bite you until after the obsolescence of the device — flash storage doesn't have to be good, it just has to be "not too bad" until, say, 3 years after purchase. Like non-removable rechargeable batteries that slowly degrade over time, you'll find your filesystem slowly degrading — one more reason to eventually buy a new one, and I've never known manufacturers to be especially sorry about that. Heroic Measures may never be needed/taken.

Leaving amateur market economics (and despair), let's revisit a cryptic and probably overlooked paragraph in my olpcfs proposal:

Implementations tuned for flash memory can use Adaptive Packed Memory Arrays for efficient CAS. This is an open research topic, but there is some evidence that a high-performance scalable flash-based filesystem can more profitably be implemented using cache-oblivious B-Trees and APMAs, rather than more "traditional" filesystem structures.

Here I'm trying to build a little bridge between filesystem designers and functional data structure researchers, two groups who probably rarely sit down together for a beer. I think Packed Memory Arrays are the "B-trees of flash filesystems": a better way to build an on-disk index given the peculiar characteristics of flash memory. Just as BeOS demonstrated that your filesystem could be "B-trees all the way down", I believe you could build a compelling filesystem using PMAs as the primary data structure. Ultimately, I suspect that the development strategy Dave Woodhouse describes — small incremental changes to existing filesystems whose philosophies are "close enough" — will probably prevail over a ground-up PMA rewrite. Incremental improvements and shared codebases are the Right Strategy for successful open source projects: you get more developers and testers for those parts which aren't flash-specific, and you've already gotten yourself out of the Cathedral with some working code to kick things off.

But if anyone's interested in attempting a clean slate design, PMAs are my advice. Maybe you'll come up with something so wonderful it will make a compelling book (and inspire folks like me), even if ultimately you don't win in the marketplace.

(But maybe you'll win! Flash storage and your optimized filesystem will prevail, and one day we'll think of rotating media in the same way we now think of core memory, floppy disks, tape drives, and the x86 instruction set... er, hmm.)


JDoctest 1.5 released
I've released JDoctest 1.5, with support for integrating JavaScript doctests of your Java code into the JUnit test framework. I use this to implement continuous testing.

Waiting for Godot/the 87 bus

A: What time?
B: Should be here at 11:30
A: 2, 3 minutes. (spits)
A: Look. Here.
B: Davis at 11:22. But that's down there.
A: 2, 3 minutes.
B: Yeah. 11:30.
A: Maybe look, you see yet?
B: No, no, I can't see past the construction.
A: 2, 3 minutes. No more.
A: It's hot. Inside my house, 90% humidity. Outside: 100.
A: How old are you? 26? 28?
B: No, 32. (looks down street)
A: You see?
B: No, not yet.
A: 2, 3 minutes. (spits)
A: Maybe 10.
A: You married?
B: No.
A: Maybe is better.
A: You're 26?
B: 32.
A: I was 28.
A: You see yet?
B: No. I can't see. (shrugs)
A: 2, 3 minutes. (spits)


Sugar waves: time to get started!
While I was abroad, it seems that Google released their wave server/federation code. So you can now get started tackling some of the projects I outlined in my previous post on Waves and Sugar: getting a basic federation server running on the school server and/or writing a server running locally on the XO which exports the content of the Journal/filesystem for collaboration. I'm hoping someone other than me thinks this is an exciting idea, and runs with it!

OLPC kerfuffle

It seems I missed a nice little Slashdot/Negroponte/Krstić OLPC/Sugar row while I was away in Europe savoring the efficient intercity rail. While Tomeu and OLPCnews chose to blame particular words or phrases ("Sugar", "$100 laptop"), both J5 and gregdek seem to think the Real Problem was the fact that OLPC wouldn't upstream its patches.

I beg to disagree. As I wordily tried to explain in a comment to gregdek's post, I think Ivan is mostly right here: OLPC tried to do "seven new things" (as Mary Lou explained to me when I was hired) -- and "new things" end up costing a lot of debugging and development time, in one of the Iron Laws Of Writing New Code And Making New Hardware. But another problem was just Picking The Wrong Partners. With the exception of the display (one of the few unalloyed successes of the XO hardware), most of our hardware and software partners were working at cross purposes. Red Hat didn't really want to build an embedded OS product, "mesh networking" to Marvell meant household networks between your TV and your stereo with maybe 10 participants, the Geode was an orphaned offering from AMD, the display and flash NAND controller was an unloved one-off, etc. Success is found by aligning your partners' interests with your own.

At Litl our OS partner has many other embedded-systems clients, and has developed the toolsets to handle forks and customization without all the angst I'd grown used to at OLPC. We just say, "we need feature X turned on/off" or "set to Y" or just "somehow Z needs to happen" and it's done. We're not fighting, we're not destabilizing their core OS, we don't waste time with elaborate justifications why This Is The Right Thing To Do. If the change is appropriate to upstream, they upstream it, and maybe the work benefits the other embedded-systems clients. There's no drama, because We All Want The Same Thing.

I think "upstreaming" is entirely the wrong hero for OLPC here. If anything, I think OLPC's best hope is to continue to aggressively innovate -- no one buys a computer because its code has been upstreamed. Unfortunately, I don't think OLPC has enough current funding to do anything but follow the upstream, so maybe this current round of praise for Fedora-ization is just the old cliché: making a virtue of necessity.

SDR 0.5

Over the 4th of July weekend, I released SDR 0.5, now with a GWT/GAE frontend you can check out at It's still incomplete -- breathing and collision resolution need to be hooked up, it doesn't do "centers X sides Y" yet, and it's missing lots of call definitions -- but at least there's something tangible to play with that doesn't require mucking around in a javascript shell. Progress!

(Oh, and if you click '+' and then 'Add to Home Screen' when looking at in Safari on your iPhone, you get a cute "there's an app for that" mobile version.)

Google Wave and Sugar: what's next?

So, yesterday I posted an entry discussing how Google Wave actually implements the collaborative vision Sugar (and OLPC) were working towards. (It's a shame we didn't have better contacts with Google while I was at OLPC; Google was actually on OLPC's board, and beta-ing Waves would have been a very fruitful partnership.)

If you agree with the premise: what should the next steps in a Waves-ification of Sugar be? Eventually Google promises to release "the lion's share" of its source code, for both server and client, so getting the google server installed on the school server is one task — but not one which can be done immediately. Implementing Network Principles is another necessary precondition, in order to use Wave's DNS-based participant naming system ("" or whatever). That's something which can be done now. What else?

Eventually, when the source code drops, making the waves client work offline would be important, since Waves (and embedded gadgets) basically replace Write and Sugar's bulletin board. Waves edit XML trees, so porting existing activities to use XML-based file formats will go far in integrating them into a new Wave World Order. I haven't seen any demo of a waves-based drawing activity/gadget (tuxpaint is a favorite of most kids), so Waves Paint would be a fun project if you want to start playing with the Waves extension APIs.

More controversially, work on Waves-enabling a next-gen Journal could be interesting. As predicted by proponents of the Journal for some time, the "wave of the future" (so to speak) is filesystem-independent storage. Waves in Google's demo are titled and searched like email messages, not as "documents" in a filesystem hierarchy. However, we had repeated requests to unify Journal storage and traditional filesystems, for (a) better interoperability with existing systems, and (b) sneakernet collaboration. In my mind, this would mean exporting a waves-like view of an existing filesystem, as I proposed for the Journal, where directories are interpreted as slightly-special tag markers. One could imagine implementing a "Wave Server" based around this idea, in effect using the filesystem as the wave database. With the magic of Wave Federation, these "filesystem" waves can interoperate with other wave clients and servers. This standalone file-based server might also serve as the basis for "one child under a tree" wave editing. (For that matter, a robust sneakernet implementation of the Wave Federation Protocol would also be very useful!)
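A tiny (purely hypothetical) Python sketch of the directories-as-tags interpretation:

```python
import os

def entries_with_tags(root):
    """Walk `root`, yielding (filename, tags) pairs where the tags are
    simply the names of the directories on the path down to the file."""
    for dirpath, _dirs, files in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        # A file at photos/2009/cat.png gets tags {"photos", "2009"}.
        tags = frozenset() if rel == os.curdir else frozenset(rel.split(os.sep))
        for name in files:
            yield name, tags
```

A wave server built on this view would answer tag queries instead of path lookups, and retagging a wave would move the underlying file -- the filesystem itself becomes the wave database.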

Exciting times! Wave promises to bring OLPC/Sugar's vision of ubiquitous collaboration to life at long last. Google has the funding and development resources to tackle what is in effect a bottom-up reorganization of the software stack. OLPC/Sugar Labs' role is peripheral but vital: strongly lead the development of offline or "flakey connectivity" aspects of this technology so that it can be used everywhere from the solar-powered jungle to the densest urban centers, and to provide the educational software on top of the platform so that kids can *learn* as they collaborate and create.

Google Waves of Sugar

Google announced Google Wave today, an "attempt to imagine what email would be like if it were invented today". It has a robust collaboration infrastructure strongly grounded in distributed systems theory. If I were (re)starting the OLPC project today, this is certainly what I'd base Sugar on — with as much corporate support from Google as possible, since the project is strongest when it has strong allies.

I believe you could make each school server into a Wave Provider, using the Network Principles I drafted while employed at OLPC to ensure appropriate DNS-style naming. Any document format based on XML can be made collaborative using the Operational Transform mechanism. The journal's large scale history/versioning mechanism could be based on the same principles, as So6/LibreSource demonstrated. And the UI demonstrated in the keynote (which I haven't even fully watched yet) would be an exciting way to implement the neglected Bulletin Board feature.
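To make the Operational Transform point concrete, here's a toy sketch in Python of the convergence property for concurrent plain-text inserts. (Wave's real OT also handles deletes, annotations, and XML trees, plus a site-ID tiebreak for inserts at the same position; this is only the core idea.)

```python
def apply_insert(doc, pos, text):
    return doc[:pos] + text + doc[pos:]

def transform_pos(pos, other_pos, other_len):
    """Shift an insert position past a concurrent insert applied first."""
    return pos + other_len if pos >= other_pos else pos

doc = "hello world"
op_a = (5, ",")       # site A inserts "," after "hello"
op_b = (11, "!")      # site B appends "!"

# Site A applies its own op, then B's op transformed against A's:
at_a = apply_insert(apply_insert(doc, *op_a),
                    transform_pos(op_b[0], op_a[0], len(op_a[1])), op_b[1])
# Site B applies its own op, then A's op transformed against B's:
at_b = apply_insert(apply_insert(doc, *op_b),
                    transform_pos(op_a[0], op_b[0], len(op_b[1])), op_a[1])
assert at_a == at_b == "hello, world!"
```

Both sites end up with the same document without ever locking it -- which is exactly the property you want for collaboration over flakey school-server links.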

It would be an exciting start to the Sugar project! But given the existing code base, is there now too much bathwater around the baby to consider swapping the child out?


I liked the new Star Trek movie, but I wish they'd paid some attention to physics. When the main technical substance is named "red matter," you know the science consultant isn't on call. C'mon, name it Rubidium Dilanthumide or something -- technobabble's not hard if you're trying at all.

But it was the orbital mechanics that really annoyed me. You can't, y'know, just drop a spiky anchor straight down to earth from orbit. Nor can you "fall" out of the belly of a space plane: you're already falling. That's what being in orbit is. And there's this thing called an atmosphere? You ever heard about it? Air resistance? Friction? It makes things hot. And winds! C'mon, at least give your unanchored space tether thingy some sort of guidance rockets along it's length to keep it going "straight down". It would make it cooler. Your heroes rocketing down, enveloped in huge plasma fireballs, dodging the giant blasts from the cable's guidance jets... It would be science-tastic.
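For what it's worth, the back-of-the-envelope numbers, assuming a 300 km orbit (a Python sketch with textbook constants):

```python
import math

# Why you can't just "drop" something from orbit: at low-Earth-orbit
# altitude you're moving sideways at roughly 7.8 km/s, and every kilogram
# carries ~30 MJ of kinetic energy that the atmosphere has to soak up.
MU_EARTH = 3.986e14   # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6     # mean radius, m

r = R_EARTH + 300e3                # a typical low orbit
v = math.sqrt(MU_EARTH / r)        # circular orbital speed, m/s
mj_per_kg = 0.5 * v * v / 1e6      # kinetic energy to dissipate, MJ/kg
print(round(v), "m/s;", round(mj_per_kg), "MJ/kg")
```

That energy has to go somewhere, and "somewhere" is mostly a plasma sheath around whatever is falling.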

Also, Starfleet: the bottom of a gravity well is not a great place to build an Enterprise. How exactly did you get that thing up into orbit? Without setting the corn fields on fire, I mean. Maybe another long spiky anchor chain lowered from space? And some hamsters in a wheel to crank it up?

And while I'm ranting about atmospheric physics: although I liked Spock's ship's dramatic swoop down into the atmosphere as a popcorn-munching crowd pleaser, from an orbital mechanics standpoint? Not so much. There's all this atmosphere in the way, and that's a space ship. And you thrust backwards to go "down" from orbit. And the scale's all wrong w.r.t. the length of the "drill cable" and the distance to orbit and the color of the sky and amount of atmosphere... but I can probably stop now.

Dear JJ Abrams: please hire someone who knows something about space for your sequel. I can deal with conventions like "explosions in space still make sounds" because it's more fun that way and "artificial gravity onboard all ships" because it makes the filming affordable -- but I expect at least a token attempt to make orbital space something other than a really dark room high up.


Somerville Open Studios!

I'm one of the painters participating in Somerville Open Studios this weekend: Sat/Sun noon-6pm, 40 Quincy St. Come on by and see some of my paintings! Mars, dogs, square dancers, swamps, and golden spirals will be ably represented. I regret that I wasn't able to complete my ironic juxtaposition painting pitting Fermat's spiral against Fibonacci's sequence; you'll have to wait until next year for that one.

SDR 0.4; JDoctest 1.4
I released SDR 0.4 last night (while watching a fabulous Woz on Dancing With the Stars), and JDoctest 1.4. SDR's release mostly signifies that I'm finally moving on from the breathing code; JDoctest has a number of minor improvements.

Found a litl work

I started work at litl yesterday, did all the necessary administrivia and got a build environment set up, and am typing this today using their software stack. There are a lot of similarities between my old and new jobs: both target poorly served "nonexpert" audiences — in litl's case, families at home — and are taking advantage of the opportunity to build a clean-slate integrated hardware/software solution that's new and different and not another rehash of the Common Desktop Environment of 1993. Both litl and OLPC write highly design-driven software and employ some of the same design firms, so there are more subtle aesthetic similarities as well.

The most sobering difference (for me) is the effective organization at litl. After spending years fighting uphill battles at OLPC, it's refreshing to find what seems a model development environment: clear and explicit specifications for new features, release notes that incorporate the specs (with green annotations indicating future features or deviations), and test cases. Lots of test cases — both in the code for machine use, and written in English for people to use.

This isn't rocket science. But writing specifications for a complete software/hardware system is not a little work, and as you become more ambitious the difficulty increases. Both OLPC's Sugar and litl reinvent the standard desktop paradigm, so you can't take even "the program launches when you click on it" for granted: you have to define every such interaction (what does "the program" mean?), how it should work, and test that it does so. Software at OLPC never quite made the transition from self-directed research project to production code. It saddens me to think what might have been if OLPC had been able to either properly develop Sugar, or successfully contract out the work — if only it were *Sugar* with the careful 200-page spec, exact lists of features implemented/in-process, heaps of test cases, and proper staging of features...

As part of a much larger company, litl also avoids the self-hosting trap. Email, HR, site hosting, databases, bug trackers, and collaboration support for the devel team are all outsourced to companies who do it well, so precious development time isn't wasted by having your rock star developers installing Ubuntu for the 13th time on some new server, upgrading, doing backups, moving machines from one network to the next, tuning trac, etc. Some of this is inevitable, and some is actually necessary -- sometimes there *is* a vital business need for some tool which it's easier to have in-house and patch — but OLPC was suckered into bad decisions by the lure of fast free bandwidth from MIT. At OLPC we didn't have to pay for hosting or bandwidth as long as we kept our servers under our desks — so we ended up paying far more than we saved in lost developer hours as we tinkered.

Anyway, I'm looking forward to the change of pace and to actually delivering the features in the spec! At least one of my OLPC friends will be joining me, and litl is hiring (although they self-describe as "a small team of rockstars"); I can point you in the right direction if you want to learn more about working here.


Migrated repositories to git
I've been slowly migrating my CVS repositories to git. The FLEX compiler repository (my decade of thesis work at MIT) and JUtil are now newly git-ified -- and I've released version 1.4 of JUtil in the process, for good measure. I've also added FLEX and JUtil to ohloh. Enjoy!

SDR can partner trade
I've released Square Dance Revolution 0.3. It still doesn't do anything terribly useful yet, but its square dance engine can do a partner trade now (barely). Contrary to the advice I always give others, I haven't developed SDR in the "crappy first draft, then refine" style -- since it's primarily for my own enjoyment, I've spent lovely leisurely hours playing with grammars, grammar generators, doctesting, drafting 3d models of Baypath Barn, and other such fun stuff. It's classic cathedral development: it may take years of off-hours, but boy will it be lovely when it's done!


The Python doctest module is really great: it makes it easy to simultaneously write test suites and demonstrate the usage for your modules. Python's interactive interpreter is key to its coolness: it's really easy to load the code you're working on, type some examples at the prompt, and turn the session into documentation and a test case.
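If you haven't seen the trick, a whole doctested module fits on an index card (my own toy example, nothing canonical about it):

```python
def fib(n):
    """Return the n'th Fibonacci number.

    The examples below are a pasted interpreter session; the doctest
    module replays them and complains if the output ever changes.

    >>> [fib(i) for i in range(7)]
    [0, 1, 1, 2, 3, 5, 8]
    >>> fib(30)
    832040
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # silent when every example passes
```

The docstring is simultaneously the documentation, the usage example, and the test suite -- that's the coolness I wanted for Java.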

I've been dusting off my Square Dance Revolution project, written in Java, and I thought: gee, it would be nice to use doctests here. A bit of inspiration from doctestj and Rhino, and a bit of elbow grease and: voila! JDoctest is born!

JDoctest is a Javadoc plugin which implements doctests by calling out to the Rhino javascript interpreter. Rhino's interactive javascript session makes Java as fun to program in / debug / test as Python is. (Rhino makes it easy to call between Javascript and Java.) Copy and paste those examples into javadoc comments, add a @doc.test tag, and you've got a test / use case example. I've added hooks to google-code-prettify to make the output beautiful, too.

Here's a simple example using JDoctest, and the SDR sources are now filled with more complex examples (for example). (New SDR release soon, I promise.) Enjoy!

Making shut-the-box fair
The standard game of "shut the box", analyzed in previous posts, has a fundamental flaw: it's not fair! On average in a two-player game, the first player will lose 2.5% of their stake in each game. Can we make a more fair two-player game?

The simplest solution is to change how ties are treated: with optimal strategies, 1% of the games end in ties. We can resolve all ties in favor of the first player, but that's not good enough. We actually need to treat ties like box-shutting, and award double payoffs. And it's still not enough, especially since the 2nd player can alter their strategy to avoid ties if they become too painful.

So let's leave ties alone, and alter the box-shutting payoff. Although the 2nd player has an advantage in the points race, because they know what 1st player score they have to beat, the 1st player has an advantage in shutting the box: if they are able to do so, the game ends immediately and the 2nd player doesn't have a turn. The 2nd player doesn't have the opportunity to, for example, achieve a shut box themselves, tying the game.

In the standard game, box-shutting pays out double: if everyone had to put up a $1 stake to play, then they have to pay another $1 to the player who is able to shut the box. We'll call this a "2x" payout. Even odds work out to a 3.8x payout (accounting for the fact that optimal strategies change as the payout rises). It would be a little awkward to ante up $2.80 a person in a $1 ante game, but it turns out that a 4x payout ($3 more from each person on a shut box) is probably okay: this only gives the 1st player a 0.1% advantage. That might be enough for a casino to live on, but it's probably acceptable among friends.
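To make that arithmetic concrete, here's the break-even computation as a Python sketch. The outcome probabilities below are illustrative placeholders, not the computed optima -- and as noted, the honest answer requires iterating, because the optimal strategies themselves shift as the payout rises.

```python
def first_player_ev(p_shut1, p_shut2, p_pts1, p_pts2, payout):
    """Player 1's expected net gain, in stakes, for a two-player game.

    A win on points nets +1 stake; shutting the box nets +payout stakes
    (the standard game's "2x" payout nets +2); ties net 0.
    """
    return payout * (p_shut1 - p_shut2) + (p_pts1 - p_pts2)

# Placeholder outcome probabilities (hypothetical, for illustration):
s1, s2 = 0.095, 0.082    # P(1st / 2nd player shuts the box)
w1, w2 = 0.373, 0.438    # P(1st / 2nd player wins on points)

# Setting the EV to zero and solving for the payout:
breakeven = (w2 - w1) / (s1 - s2)
assert abs(first_player_ev(s1, s2, w1, w2, breakeven)) < 1e-9
print(round(breakeven, 2))
```

Feeding in the true optimal-play distributions, and iterating as the strategies adjust to each candidate payout, is the sort of computation that produces the 3.8x figure.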

Another way to even the odds is to force the players to alternate turns. This is hard in the physical game, where there's only one physical box with tiles, but it is natural in a computer version of shut-the-box — and it probably improves gameplay by eliminating the long waits between a player's turns. The first player still has a slight advantage, but computation indicates that this advantage is limited to 0.4% of the stake ("acceptable among friends"). It's hard to compute optimal strategies in a 3 player game, due to combinatorial explosion, but it appears the first player's advantage grows as the number of players does.

Even more shutting of boxes.
In previous posts I've talked about mathematics and strategies for the "shut the box" game. I've been describing a two-player game, but shut the box can be played with any number of players. How do strategies change?

It's easy to think about the limit case and work backwards. If there are an infinite number of players, eventually someone will "shut the box" — it happens in about 1 in 10 games, after all. Since the round will eventually end this way, then your final score doesn't matter: all that's important is whether you manage to shut the box. Every player's strategy is identical. (With an infinite number of players, there is no "last" player — in fact, even if there were one, there would be an infinitesimal chance of their touching the dice: the game would almost certainly be ended by a previous player's shutting the box before the "last turn" occurred.)

A strategy which maximizes your chances of shutting the box, ignoring final score, is easy to implement and evaluate, and differences from the "largest result" strategy described previously can again be tabulated on a single side of a sheet of paper. This "most zeros" strategy manages to shut the box in 9.8% of games played, but still loses in a two player game — even though the optimal 1st player strategy only manages to shut the box in 9.5% of its games. As first player against an optimal 2nd player, "most zeros" loses 3.3% of the stake in each game, compared to only 2.5% lost by an optimal 1st player (game results: 9.5% 1st player shut box, 8.2% 2nd player shut box, 37.3% 1st player by points, 43.8% 2nd player by points, 0.9% ties). As second player against an optimal 1st player, "most zeros" loses 2.8% of the stake; an optimal second player would win 2.5% of the stake (game results: 9.5% 1st player shut box, 8.8% 2nd player shut box, 41.1% 1st player by points, 39.7% 2nd player by points, 0.9% ties).
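For anyone who wants to reproduce this kind of number, here's a sketch of computing the best achievable shut-the-box probability by dynamic programming over tile subsets. The dice rule assumed below (a single die permitted once the open tiles sum to 6 or less) is one common variant and may not match the rules behind the figures above, so expect the printed value to differ:

```python
from functools import lru_cache
from itertools import combinations

def roll_dist(one_die):
    """Probability distribution of the dice total."""
    if one_die:
        return {r: 1 / 6 for r in range(1, 7)}
    dist = {}
    for a in range(1, 7):
        for b in range(1, 7):
            dist[a + b] = dist.get(a + b, 0.0) + 1 / 36
    return dist

@lru_cache(maxsize=None)
def p_shut(tiles):
    """Best possible probability of shutting the box from frozenset `tiles`."""
    if not tiles:
        return 1.0
    best = 0.0
    # Choose one die (when the variant allows it) or two, whichever helps.
    for one_die in ([True, False] if sum(tiles) <= 6 else [False]):
        total = 0.0
        for roll, prob in roll_dist(one_die).items():
            # Flip whichever subset of tiles summing to the roll helps most.
            sub_best = 0.0
            for k in range(1, len(tiles) + 1):
                for combo in combinations(sorted(tiles), k):
                    if sum(combo) == roll:
                        sub_best = max(sub_best, p_shut(tiles - frozenset(combo)))
            total += prob * sub_best
        best = max(best, total)
    return best

print(round(p_shut(frozenset(range(1, 10))), 3))
```

With only 2^9 tile subsets to memoize, this runs instantly; the combinatorial explosion only bites once you parameterize by scores-to-beat for multi-player play.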

Since the box is shut with a fairly high probability, we converge to the "most zeros" strategy fairly quickly as the number of players increases. Intermediate strategies are parameterized by the "score to beat" (the minimum score achieved by previous players) and the probability distribution of the minimum score to be achieved by future players. The complexity of computing exact solutions increases quickly.

In my next post on the topic, I'll discuss an "alternating turns" variant of shut-the-box which might be a more enjoyable real-time-collaborative computer game.

Just on the off chance that people read my blog but not Chris Ball's, I'll point out his recent Multipointer Remote Desktop demo. This is a key technology for improving collaboration, and I think it's relevant both to OLPC and to other little companies which might be thinking about collaboration features. Multi-pointer stuff is just a much different experience than "watching" someone else drive the display, as numerous studies (such as this one) have substantiated.

Pinot/Journal news
The magic of RSS just informed me that Pinot has had its 0.90 release, fixing some of the problems with i18n and OpenSearch that I'd found in my Journal2 work, and actually implementing my tagged cd idea! Coolness. I had been thinking about Recoll recently, simply because I like its simple plug-in interface for indexing new formats, but it looks like Pinot has pulled ahead again.

Fabrice, how about importing the HTML-indexing code from Omega like Recoll does, and using the same plugin interface and filters? You'd immediately greatly expand the types of documents you can index, and make it easy for third parties to create new ones. A plugin basically just takes the file-to-be-indexed and spits out an HTML document with the indexable text. Additional xapian fields can be specified by including appropriate <meta> tags in the output HTML. Full details are in the Recoll manual; it's a pretty straightforward scheme.
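The contract is simple enough to sketch; here's a hypothetical filter in Python (the field name below is invented -- the Recoll manual has the real conventions):

```python
import html
import sys

def to_indexable_html(path):
    """Turn one input file into the HTML a Recoll-style indexer consumes:
    the indexable text in the body, extra fields as <meta> tags."""
    with open(path, encoding="utf-8", errors="replace") as f:
        text = f.read()
    title = text.splitlines()[0] if text.strip() else path
    return ("<html><head><title>%s</title>\n"
            '<meta name="description" content="%s">\n'  # made-up extra field
            "</head><body>\n%s\n</body></html>"
            % (html.escape(title), html.escape(title), html.escape(text)))

if __name__ == "__main__":
    sys.stdout.write(to_indexable_html(sys.argv[1]))
```

A real filter would dispatch on MIME type and pull genuine metadata out of the document, but the shape is the same: file in, HTML with meta tags out.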