
Congratulations, OLPC!

Today OLPC announced a partnership with Marvell to develop the XO-3. This is great news — according to my little birdies this will put development efforts at OLPC on substantially more solid financial footing. And software development for new tablet computers is non-trivial! Here's hoping that OLPC, which led the netbook movement, can similarly spur the nascent tablet computer market. So far tablets are seen as content consumers, with all "real" content creation (i.e. not just jotting notes or making minor edits) being done on separate "real" computers. OLPC's vision should insist on fully productive machines which allow kids to create and contribute, not just passively consume. (In particular, the killer app for kids on XO laptops to date is making movies and telling stories.)

In addition to funding further software development, the Marvell partnership ought to give OLPC the muscle to continue pushing forward the hardware state of the art. A lot of the reality of modern electronics manufacturing depends on guaranteeing enough production volume to sustain the interest of suppliers and their engineers. For low volumes you instead get "least effort" solutions from your partners, which result in higher costs and poorer results. So I'm cautiously optimistic that the "capture" of OLPC resources for Marvell's high volume "first world" tablet efforts will in the end be a means of accomplishing the XO-3 goals for "the rest". But care and caution are warranted. OLPC is not large enough to do multiple things at once; its resources and attention are dissipated easily.


One More (Square Dance) Concept

In a previous post I introduced the "Baker's" square dance concept. By analogy to "Baker's dozen", it was proposed that "Baker's" meant to do 13/12 of the call it modified.

But let's reconsider: a "baker's half dozen" isn't 6 1/2 eggs, it's 7. Let's redefine the "Baker's" concept to mean "one more part" of the call. (Hence the title of this post: "one more square dance concept".)

So a "Baker's Square Thru" is a square thru 5. A "Baker's Eight Chain 3" is an eight chain 4. A "Baker's Right and Left Grand" goes 5 hands around. Note that this is different from "do the last part twice"; we continue the alternation of parts in the base call.
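Since "one more part" continues the alternation, it's natural to model a call as a function from its number to a list of parts. A minimal sketch — the names squareThruParts and bakers are my own illustration, not from any caller's software (and turning in between pull-bys is omitted for brevity):

```javascript
// Sketch: model a call as a function from N to its list of parts;
// "Baker's" just asks for one more part, continuing the alternation.
function squareThruParts(n) {
  const parts = [];
  for (let i = 0; i < n; i++)
    parts.push(i % 2 === 0 ? "right pull by" : "left pull by");
  return parts;
}
const bakers = (partsOf) => (n) => partsOf(n + 1);

squareThruParts(4);          // four parts, ending with a left pull by
bakers(squareThruParts)(4);  // five parts; the extra fifth is by the right
```

Note that the fifth part comes out by the right hand, continuing the alternation rather than repeating the last part.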

Let's push this a little further to include calls with arithmetic sequences. "Baker's Remake the Wave" would end with 4 quarters by the left. "Baker's Quarter Thru" would be a remake the wave. "Baker's Three Quarter Thru" is a reverse order remake.

Slightly more controversial: "Baker's Swing the Fractions" would end with zero quarters by the left (would that still set a roll direction?). "Baker's Touch By 1/4 By 1/2" from a single file column of 6 would have the very outsides touch 3/4. Sue Curtis suggests that "Baker's Reverse Echo As Couples Trade" (from a tidal one-faced line) would be "trade, as couples trade, as couples as couples trade", or equivalently "trade, couples trade, couples of 4 trade".

This definition for "Baker's" seems a lot more fun. Do any other square dance callers have concept-worthy last names?

SDR 0.6

Another holiday, another SDR release. Version 0.6 (now on github!) comes with definitions for almost all of the Mainstream square dance program, excepting thars, stars, and that pesky allemande left. The basic abstract syntax tree representation of calls has been refactored, building on an implicitly-typed expression representation — clearly we're approaching fulfillment of Greenspun's tenth rule. On the REPL front of this quest, SDR now includes PMSD, a text UI incorporating an interactive javascript context. The test case transcripts probably provide the best overview of its use.

The SDR web frontend now supports bigon, hexagon, and octagon dancing. There's also a square resolver based on Dave Wilson's ocean wave method. Since the previous release, I've also reimplemented square breathing using the Cassowary constraint solver. Expanding the square to make space for inserted tandems, couples, etc. is formulated as a linear programming problem, optimized so that the resulting formation is compact. Resolving overlaps is formulated as a mixed integer program, which allows for either-or constraints: if dancers overlap on both X and Y axes, either the X overlap must be resolved, or the Y overlap must be resolved, or the dancers have to form a star.
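For the curious, either-or constraints have a textbook mixed-integer encoding: introduce a binary variable z and a large constant M, so that exactly one of the two separation constraints is forced to hold. A generic sketch of the idea (not SDR's actual formulation), for two dancers i, j that must end up at least d apart on some axis:

```latex
% z = 0 forces separation on the X axis; z = 1 forces it on Y.
% M is chosen large enough to make the relaxed constraint vacuous.
\begin{align*}
  x_j - x_i &\ge d - M z \\
  y_j - y_i &\ge d - M (1 - z) \\
  z &\in \{0, 1\}
\end{align*}
```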

Although there's plenty of core engine work left to do, I've started to shift back to working on the game UI. Hooking the newly improved dance engine up with the Sphinx voice recognition engine, tuning the recognizer, and generally putting all the bits together is expected to be the focus of the next (0.7) release.

Chrome OS, litl, and OLPC
Well, Google just announced Chrome OS, to be available next Christmas. Some parts of it are very similar to what you can already put under the tree this Christmas from litl (actually, you can buy one right now) — and other parts are familiar from my time at OLPC. For example, the legacy-free BIOS and signed bootloader are what OLPC shipped two years ago on the XO-1. The XO-1 was not an "always connected" device, though — in fact the opposite: it assumed connectivity was infrequent and low-bandwidth. At OLPC, the signed code scheme was part of a theft-deterrence system, crucial for OLPC's target third-world deployments. It wasn't very popular among our first-world users, and I'm not actually sure why Google implemented it for Chrome OS. Phone company lock-in (due to largely misplaced concerns about maintaining the integrity of their networks) is why phone apps are locked down with signature schemes; this isn't necessary (and verges on Evil) for what's intended as a general-purpose computing device.

The somewhat oddly-technical (and often slightly-wrong) middle section of the Chrome OS presentation veered off at one point about filesystem partitions (!) and how having a read-only partition is a novel feature of their OS. Read-only partitions are one of the oldest security mechanisms — my boot floppy had the write-protect notch taped over back when I was booting DOS — and a near-universal feature of thin-client deployments (which Chrome OS certainly is). OLPC maintained a mostly-read-only root, but primarily to extend the lifetime of the flash disk (flash lifetime was not touched on in the Chrome OS presentation). Litl mounts a read-only root with a writable unionfs on top, which actually works much better in practice: improved maintainability because all the various Linux system daemons can still "do their thing" as they expect, but you can wipe the top partition whenever you like to return to a clean state (at litl we do this on every update). (If you haven't hacked the lower levels of a Linux distribution, you'd probably be surprised at how many system daemons assume they can write various places in the root partition, and you really don't want to maintain hacked versions of all of them.) Since ChromeOS gave an Ubuntu shout-out at one point, I strongly suspect the unionfs scheme is actually what they are doing as well — and, for that matter, what all the other Ubuntu Mobile downstreams are doing. Not new.

The emphasis on a read-only root partition is rather misleading from a security standpoint (as much of the middle portion of the presentation was). If you're not storing your files locally, it doesn't mean that you suddenly have no security concerns. It just means you have different security concerns. Cross-site scripting attacks give a malicious website access to your Google account through your web browser: these sorts of things are the malware for a WebOS. You have a different attack surface, but a vulnerability in your browser or flash plugin still gives access to private data. Mentioning that they encrypt the data on disk seems to be pure snake oil: your browser has access to the unencrypted data, and that's your most vulnerable surface anyway.

Overall, Chrome OS is a nice validation of some of the cloud-computing ideas we've been working on at litl, and it's always nice to see more legacy-free hardware (like the OLPC XO-1 and the litl webbook), but the presentation was oddly underwhelming. They're not really "reimagining the PC" — they're just removing all the applications on the PC except for Chrome. You still interact with "the PC" the same way you currently interact with Chrome. For reimagination, watch the videos at litl.

litl's technical secrets revealed!

Update: Lucas has written up some additional technical details — and he mentions our update system, which was one of the first bits I worked on when I arrived. We heavily dog-food the updater, using buildbot to push out the latest bits to developers every night.

Update 2: Welcome, slashdot! The videos at http://litl.com/support/ give a good idea of what the UI is like.

Update 3: Non-technical press coverage: Wired, WSJ, Xconomy, CrunchGear

Update 4: More words: dignacio on litl's server side, J5, twitter

litl launches today, so I can finally talk a bit about the cool technologies behind our software stack.

On the server side, litl is a cloud computer, built on Google App Engine, Amazon S3, and Django — all of which are fantastic technologies. All machine data is stored in the cloud, so you can have a gorilla stomp on your litl, pick up another one, log on and instantly recreate your environment. (Since we developers are always abusing our prototype hardware, we've tested this a lot!)

On the client side, the litl software (naturally, code-named "big") is built on gjs, which is the smallest possible wrapper around JavaScript necessary to make it a large-scale programming language. I've really enjoyed programming in JavaScript, which might seem odd to people who (a) have had frustrating experiences with global variables and crazy incompatibilities trying to make JavaScript work on the web, and (b) know that I'm a static types and compiler-bondage sort of guy. So I'll spend a little time here talking about gjs.

From a language standpoint gjs adds just one feature: a minimal module/namespacing mechanism built on a single top-level definition: the name "imports". Modules are imported using (typically constant) definitions, such as:

const StringUtil = imports.stringUtil;

s = StringUtil.sprintf("%d", 3);

The dynamic stringUtil property of the imports object is an object whose properties are the top-level definitions in the file stringUtil.js, found on the import path. Subdirectories are additional dot-separated components, as in Java package names; imports.util is a dynamic object representing modules found in a util directory on the path. You may need to compare this to the namespacing mechanism in the abandoned ES4 proposal to appreciate how small and elegant this is.

Further, this module system integrates with the GObject system via GObject introspection annotations for library code. This allows easy integration with libraries written in C or any other introspectable language. For example:

const Gtk = imports.gi.Gtk;
Gtk.init(0, null);

let w = new Gtk.Window({ type: Gtk.WindowType.TOPLEVEL });
w.connect('destroy', Gtk.main_quit);

let button = new Gtk.Button({ label: "Hello, world!" });
button.connect('clicked', function() { w.destroy(); });
w.add(button);
w.show_all();

Gtk.main();


The gjs system is built on the SpiderMonkey JavaScript engine, the one used in Firefox, so JavaScript execution benefits from all the JIT and performance work done upstream. Further, it means that we can code in JavaScript 1.8, Mozilla's dialect of JavaScript with lots of bells and whistles (mostly borrowed from Python):

gjs> function range(n) { for (let i=0; i<n; i++) yield i; }
gjs> [i*2 for (i in range(5))]

(In a later post perhaps I'll describe how you can use the yield expression to build a continuation-based system for writing asynchronous callbacks for UI code in a natural manner.)
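To give a flavor of that idea: a generator can yield functions that take a callback, and a small driver resumes the generator when the callback fires, so asynchronous code reads top-to-bottom. This is a toy sketch in modern JavaScript syntax; the names run and fetchNumber are my own illustration, not gjs or litl API.

```javascript
// Toy driver: runs a generator whose yielded values are functions
// taking a callback; the generator resumes when the callback fires.
function run(genFn, onDone) {
  const gen = genFn();
  function step(value) {
    const r = gen.next(value);
    if (r.done) { if (onDone) onDone(r.value); return; }
    r.value(step); // hand control to the async primitive
  }
  step(undefined);
}

// An "async" primitive: here it completes immediately, but it could
// just as well wrap setTimeout or any callback-taking I/O call.
function fetchNumber(n) {
  return (resume) => resume(n * 2);
}

const results = [];
run(function* () {
  const a = yield fetchNumber(1);     // reads like blocking code
  const b = yield fetchNumber(a + 1); // but each step is a callback
  results.push(a, b);
  return a + b;
}, (total) => results.push(total));
// results is now [2, 6, 8]
```

Substitute a genuinely asynchronous primitive into fetchNumber and the same driver handles real event-loop work unchanged.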

Overall, JavaScript as a system programming language feels a lot like Lisp must have for the programming generation before mine: minimal syntax, very powerful and orthogonal core abstractions, and (dare I say it) not much type-checking or busy-work to get in your way. (gjs is not a homage to Sussman, but it should be!) JavaScript is a programming language for those who know what they're doing and aren't afraid of a little functional abstraction. (Now if only there was a way to disable semicolon insertion!)

OK, enough about gjs. The litl software stack is based on Canonical's distribution of Ubuntu for mobile/embedded devices, and the Canonical folks have been great partners. It's been a joy to get changes integrated upstream, and Canonical has done a lot of excellent work adapting their distribution to litl's unique needs. On the UI side, we use X11 and some GTK widgets, but implement our own (very simple) window manager. Most of the actual look and feel is done with Clutter (hardware-accelerated 3d animated effects, whee!), and we have a Flash-based API for external developers. We also have hardware-accelerated H.264 video.

Regardless of the technical fun, though, I have to say that the best thing about working at litl is its management: developing with all the other rockstars here is made all the more fun by the knowledge that management will help ensure that our goals are realistic and that we'll be able to hit our targets, with time left over to polish before release. It's just much more fun to code when you know you'll be proud of the end result.

Echo^n Square Thru A(5)

[Square Dance primer for computer scientists: square dance calls are (recursive) sequences of basic actions: walk forward, turn around, do a square dance call. "U-turn back" is a square dance call, which means simply "turn around". Some calls take numeric arguments: "square thru N" for N=1 means "pull by"; "pull by" is another square dance call which is roughly "step past the person you're facing". For N>1, "square thru N" means "pull by, turn in, mirror square thru (N-1)". The word "mirror" in that definition is a square dance concept. It is a function which takes a square dance call and transforms it, yielding square dance calls or actions. "Mirror" means to simply do the call as if you were looking in a mirror, exchanging right and left. (If you've ever tried to comb your hair looking into the display from a webcam, which is not mirrored the way you expect it to, you know how hard it can be to do well-practiced actions mirror-wise. "Mirror" is a simple square dance concept.) So the definition of square thru N I gave above is recursive, defined in terms of a transformation of the call itself. The concept "twice" takes a call and merely means to repeat the call twice.

Concepts can take multiple arguments, including arguments which are themselves concepts, not just calls. (Some would call these supercalls or meta-concepts, but we'll gloss over this for now.) Which brings us to the subject of today's post: the ECHO concept.]

"ECHO <concept> <call>" means do <concept> <call> then do <call> — like you were hearing the last part of the phrase echoed back from a distance. So, "ECHO mirror square thru 4" means do a "mirror square thru 4" and then a "square thru 4". "Echo twice u-turn back" means u-turn back three times.
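In code, a concept is just a function from calls to calls, and ECHO is a function from a concept to a concept. A sketch with made-up names (this is not the SDR representation):

```javascript
// A "call" is a list of primitive actions; a concept maps calls to calls.
const uTurnBack = ["u-turn back"];
const twice = (call) => [...call, ...call];
// ECHO <concept> <call>: do <concept> <call>, then <call>.
const echo = (concept) => (call) => [...concept(call), ...call];

echo(twice)(uTurnBack); // three u-turn backs, as described above
```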

Most dancers have a bit of a formal grammar in their heads, something like:

call: <simple_call> | <concept> <call> | <metaconcept> <concept> <call> ;
simple_call: square thru <number> | u-turn back | pull by | ... ;
concept: mirror | twice | ... ;
metaconcept: echo | ... ;

Examples of valid utterances:
"mirror square thru 4", "echo mirror square thru 4", "echo twice u-turn back".
Some invalid utterances:
"square thru 4 mirror", "mirror square", "echo mirror".
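That mental grammar can be sketched as a recursive-descent recognizer. Multi-word calls are pre-joined into single tokens for simplicity, and the vocabulary is just the examples above (my own toy, not SDR's parser):

```javascript
// Vocabulary from the grammar above, pre-tokenized.
const SIMPLE = new Set(["square thru 4", "u-turn back", "pull by"]);
const CONCEPTS = new Set(["mirror", "twice"]);
const META = new Set(["echo"]);

// call: simple_call | concept call | metaconcept concept call
function parseCall(tokens) {
  if (tokens.length === 0) return false;
  const [head, ...rest] = tokens;
  if (SIMPLE.has(head)) return rest.length === 0;
  if (CONCEPTS.has(head)) return parseCall(rest);
  if (META.has(head))
    return rest.length > 0 && CONCEPTS.has(rest[0]) && parseCall(rest.slice(1));
  return false;
}

parseCall(["echo", "mirror", "square thru 4"]); // true
parseCall(["echo", "echo", "twice", "u-turn back"]); // false under this grammar
```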

Now here comes the fun part: how do you parse the expression "echo echo twice u-turn back"?

It doesn't parse according to the grammar above: you have to mentally curry an argument to make "echo twice" into a concept (since when provided with a concept the remaining part is a 'function taking a call', aka a concept). You end up with something like "echo (echo twice) u-turn back", which would mean, "echo twice u-turn back, u-turn back". Thus, "echo twice u-turn back" means do the u-turn back 3 times; "echo (echo twice) u-turn back" thus means to do the call 4 times, and "echo (echo (echo twice)) u-turn back" is 5 times. Adding another echo adds another u-turn back. Bo-ring.

Because we're adventurous types, let's throw the grammar out the window and treat echo as a simple macro-expander, which grabs the shortest possible phrase following it as its argument. Now we'll parse "echo echo twice u-turn back" as "echo (echo) twice u-turn back", that is, "echo twice u-turn back, twice u-turn back". That's 5 u-turn backs. The next one, "echo echo echo twice u-turn back" becomes "echo (echo) (echo (echo) twice u-turn back)", which is 8. Continuing, "echo (echo) (echo (echo) (echo twice u-turn back))" is 13. Adding more echoes yields subsequent terms in the Fibonacci sequence. Now we're talking!
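The count of u-turn backs under the macro-expander reading satisfies the Fibonacci recurrence directly: each added echo dances the previous phrase and then the phrase with one fewer echo. A quick sketch to check the arithmetic (my own, just for verification):

```javascript
// u-turn backs danced for "echo^n twice u-turn back", macro-expander reading.
// Base cases: the plain call (1) and "twice u-turn back" (2); each added
// echo sums the two previous counts.
function uTurnBacks(n) {
  let [a, b] = [1, 2]; // "u-turn back", "twice u-turn back"
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return b;
}
// uTurnBacks(1) === 3, uTurnBacks(2) === 5,
// uTurnBacks(3) === 8, uTurnBacks(4) === 13
```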

The sequence constructed above grows in duration as an exponential function of the length of the utterance, but we already know of another such construction: the humble "square thru N" also grows exponentially with its input, since the value of its numeric argument "999...9" is also exponential with the number of digits uttered. Let's say that the goal is to allow the square dance caller time to enjoy a nice beverage and sporting event on TV while the dancers work out the implications of a short call. Exponential is okay, but we could certainly do better.

Some readers might be aware of the "baker's function", which naturally involves multiplying everything by 13/12. One might imagine "Baker's Square Thru 6" called using this function as a concept; this is equivalent to "square thru 6 1/2 times". (Computer scientists: define "square thru 1/2" as "do the first half of a pull by" then you can continue using the recursive definition above.)

But, for packing the most dancing into the fewest words — super-exponentially! — I submit "Ackermann's Square Thru 5". (Use f(n)=A(n,n) as computer scientists do.) Guy Steele has suggested that the caller might use "Ackermann's Square Thru Your Couple Number", finish the tip calling for two couples, and then use three-couple figures for the rest of the night. Perfect for the lazy caller who wants to do 25% less.
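For readers who want to check just how bad (good?) this is, the two-argument Ackermann function is short to write down, if not to dance:

```javascript
// Classic two-argument Ackermann function; the concept uses f(n) = A(n, n).
function A(m, n) {
  if (m === 0) return n + 1;
  if (n === 0) return A(m - 1, 1);
  return A(m - 1, A(m, n - 1));
}
// f(1) = A(1,1) = 3; f(2) = A(2,2) = 7; f(3) = A(3,3) = 61.
// Don't evaluate f(4) -- couple number 4 should plan on a very long tip.
```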

(Credits: Justin Legakis discovered the Fibonacci connection. Guy Steele had most of the good ideas here. Bill Ackerman is the recipient of the eponymous super-exponential concept. I just wrote it all up.)

Flash Filesystems

Dave Woodhouse in a recent article calls shenanigans on hardware translation for flash devices. I agree: flash memory is a Horse Of A Different Color, and trying to gussy it up to look like a rotating disk of magnetized rust is using a bad and leaky abstraction that will only end in tears. But I don't think the engineers' better judgment will prevail: the use of hardware translation is driven by Windows/DOS compatibility concerns, since Microsoft (to my knowledge) has shown no interest in writing a new filesystem for flash drives. OLPC used a raw flash device in the XO-1, but in their follow-on had to switch to a hardware-translation device because market/scale economics were making those devices cheaper and cheaper while the original raw flash device was (a) not increasing in volume (aka, getting relatively more expensive), (b) not increasing in size (no one wanted to make new ones), and (c) getting discontinued (aka, impossible to buy). The best one can hope for is that a raw interface be offered in addition to the standard "Windows-compatible" one, for specialized embedded or high-performance applications — but the chicken and egg problem applies: until there are compelling gains, these interfaces won't be purchased in sufficient volumes to yield reasonable prices, and no one is writing the optimized filesystems because you can't find reasonably-priced flash devices to run them on. The end result is likely to be that Worse Is Better, and we'll be left with another set of legacy chains. Given enough time and transistors, the hardware may eventually grow Heroic Measures to work around the bad abstraction (see: the x86 instruction set).

If your filesystem is large enough and the amount of data being rewritten small enough, the flash problems "probably" won't bite you until after the obsolescence of the device — flash storage doesn't have to be good, it just has to be "not too bad" until, say, 3 years after purchase. Like non-removable rechargeable batteries that slowly degrade over time, you'll find your filesystem slowly degrading — one more reason to eventually buy a new one, and I've never known manufacturers to be especially sorry about that. Heroic Measures may never be needed/taken.

Leaving amateur market economics (and despair), let's revisit a cryptic and probably overlooked paragraph in my olpcfs proposal:

Implementations tuned for flash memory can use Adaptive Packed Memory Arrays for efficient CAS. This is an open research topic, but there is some evidence that a high-performance scalable flash-based filesystem can more profitably be implemented using cache-oblivious B-Trees and APMAs, rather than more "traditional" filesystem structures.

Here I'm trying to build a little bridge between filesystem designers and functional data structure researchers, two groups who probably rarely sit down together for a beer. I think Packed Memory Arrays are the "B-trees of flash filesystems": a better way to build an on-disk index given the peculiar characteristics of flash memory. Just as BeOS demonstrated that your filesystem could be "B-trees all the way down", I believe you could build a compelling filesystem using PMAs as the primary data structure. Ultimately, I suspect that the development strategy Dave Woodhouse describes — small incremental changes to existing filesystems whose philosophies are "close enough" — will probably prevail over a ground-up PMA rewrite. Incremental improvements and shared codebases are the Right Strategy for successful open source projects: you get more developers and testers for those parts which aren't flash-specific, and you've already gotten yourself out of the Cathedral with some working code to kick things off.
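To make the bridge concrete, here's a toy packed-memory array: a sorted array kept deliberately sparse, so an insert usually shifts only a few cells into a nearby gap, and a rewrite touches only a small contiguous region — the property that maps well onto flash, where rewrites are the expensive operation. This is my own drastically simplified sketch (one density threshold, doubling growth, no cache-oblivious layout), just to show the flavor of the data structure:

```javascript
// Toy packed-memory array: sorted elements with gaps (nulls) between them.
class ToyPMA {
  constructor() {
    this.slots = new Array(8).fill(null);
    this.count = 0;
  }
  insert(x) {
    // keep density under 75% so a gap is always nearby
    if (this.count + 1 > this.slots.length * 0.75)
      this.rebalance(this.slots.length * 2);
    const n = this.slots.length;
    // pos: index of the first element >= x (gaps skipped); n if x is largest
    let pos = n;
    for (let i = 0; i < n; i++)
      if (this.slots[i] !== null && this.slots[i] >= x) { pos = i; break; }
    // shift into the nearest gap at/after pos, else into a gap before pos
    let gap = pos;
    while (gap < n && this.slots[gap] !== null) gap++;
    if (gap < n) {
      for (; gap > pos; gap--) this.slots[gap] = this.slots[gap - 1];
      this.slots[pos] = x;
    } else {
      gap = pos - 1;
      while (this.slots[gap] !== null) gap--;
      for (; gap < pos - 1; gap++) this.slots[gap] = this.slots[gap + 1];
      this.slots[pos - 1] = x;
    }
    this.count++;
  }
  rebalance(newSize) {
    // respread the elements evenly, restoring gaps everywhere
    const vals = this.slots.filter((v) => v !== null);
    this.slots = new Array(newSize).fill(null);
    const stride = newSize / (vals.length + 1);
    vals.forEach((v, k) => {
      this.slots[Math.floor((k + 1) * stride) - 1] = v;
    });
  }
  toArray() {
    return this.slots.filter((v) => v !== null);
  }
}
```

A real flash filesystem would align rebalance windows with erase blocks and use the adaptive density thresholds from the APMA literature; the point here is only that inserts mostly touch a small, contiguous run of cells.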

But if anyone's interested in attempting a clean slate design, PMAs are my advice. Maybe you'll come up with something so wonderful it will make a compelling book (and inspire folks like me), even if ultimately you don't win in the marketplace.

(But maybe you'll win! Flash storage and your optimized filesystem will prevail, and one day we'll think of rotating media in the same way we now think of core memory, floppy disks, tape drives, and the x86 instruction set... er, hmm.)


JDoctest 1.5 released
I've released JDoctest 1.5, with support for integrating Javascript doctests of your Java code into the JUnit test framework. I use this to implement continuous testing.

Waiting for Godot/the 87 bus

A: What time?
B: Should be here at 11:30
A: 2, 3 minutes. (spits)
A: Look. Here.
B: Davis at 11:22. But that's down there.
A: 2, 3 minutes.
B: Yeah. 11:30.
A: Maybe look, you see yet?
B: No, no, I can't see past the construction.
A: 2, 3 minutes. No more.
A: It's hot. Inside my house, 90% humidity. Outside: 100.
A: How old are you? 26? 28?
B: No, 32. (looks down street)
A: You see?
B: No, not yet.
A: 2, 3 minutes. (spits)
A: Maybe 10.
A: You married?
B: No.
A: Maybe is better.
A: You're 26?
B: 32.
A: I was 28.
A: You see yet?
B: No. I can't see. (shrugs)
A: 2, 3 minutes. (spits)

Posted via LiveJournal.app.

Sugar waves: time to get started!
While I was abroad, it seems that Google released their wave server/federation code. So you can now get started tackling some of the projects I outlined in my previous post on Waves and Sugar: getting a basic federation server running on the school server and/or writing a server running locally on the XO which exports the content of the Journal/filesystem for collaboration. I'm hoping someone other than me thinks this is an exciting idea, and runs with it!

OLPC kerfuffle

It seems I missed a nice little Slashdot/Negroponte/Krstić OLPC/Sugar row while I was away in Europe savoring the efficient intercity rail. While Tomeu and OLPCnews chose to blame particular words or phrases ("Sugar", "$100 laptop"), both J5 and gregdek seem to think the Real Problem was the fact that OLPC wouldn't upstream its patches.

I beg to disagree. As I wordily tried to explain in a comment to gregdek's post, I think Ivan is mostly right here: OLPC tried to do "seven new things" (as Mary Lou explained to me when I was hired) -- and "new things" end up costing a lot of debugging and development time, in one of the Iron Laws Of Writing New Code And Making New Hardware. But another problem was just Picking The Wrong Partners. With the exception of the display (one of the few unalloyed successes of the XO hardware), most of our hardware and software partners were working at cross purposes. Red Hat didn't really want to build an embedded OS product, "mesh networking" to Marvell meant household networks between your TV and your stereo with maybe 10 participants, the Geode was an orphaned offering from AMD, the display controller and flash NAND controller were unloved one-offs, etc. Success is found by aligning your partners' interests with your own.

At litl our OS partner has many other embedded-systems clients, and has developed the toolsets to handle forks and customization without all the angst I'd grown used to at OLPC. We just say, "we need feature X turned on/off" or "set to Y" or just "somehow Z needs to happen" and it's done. We're not fighting, we're not destabilizing their core OS, we don't waste time with elaborate justifications why This Is The Right Thing To Do. If the change is appropriate to upstream, they upstream it, and maybe the work benefits the other embedded-systems clients. There's no drama, because We All Want The Same Thing.

I think "upstreaming" is entirely the wrong hero for OLPC here. If anything, I think OLPC's best hope is to continue to aggressively innovate -- no one buys a computer because its code has been upstreamed. Unfortunately, I don't think OLPC has enough current funding to do anything but follow the upstream, so maybe this current round of praise for Fedora-ization is just the old cliché: making a virtue of necessity.

SDR 0.5

Over the 4th of July weekend, I released SDR 0.5, now with a GWT/GAE frontend you can check out at square-dance.appspot.com. It's still incomplete -- breathing and collision resolution need to be hooked up, it doesn't do "centers X sides Y" yet, and it's missing lots of call definitions -- but at least there's something tangible to play with that doesn't require mucking around in a javascript shell. Progress!

(Oh, and if you click '+' and then 'Add to Home Screen' when looking at square-dance.appspot.com in Safari on your iPhone, you get a cute "there's an app for that" mobile version.)

Google Wave and Sugar: what's next?

So, yesterday I posted an entry discussing how Google Wave actually implements the collaborative vision Sugar (and OLPC) were working towards. (It's a shame we didn't have better contacts with Google while I was at OLPC; Google was actually on OLPC's board, and beta-ing Waves would have been a very fruitful partnership.)

If you agree with the premise: what should the next steps in a Waves-ification of Sugar be? Eventually Google promises to release "the lion's share" of its source code, for both server and client, so getting the Google server installed on the school server is one task — but not one which can be done immediately. Implementing Network Principles is another necessary precondition, in order to use Wave's DNS-based participant naming system ("SuzyQStudent@lima.peru.schoolserver.laptop.org" or whatever). That's something which can be done now. What else?

Eventually, when the source code drops, making the waves client work offline would be important, since Waves (and embedded gadgets) basically replace Write and Sugar's bulletin board. Waves edit XML trees, so porting existing activities to use XML-based file formats will go far in integrating them into a new Wave World Order. I haven't seen any demo of a waves-based drawing activity/gadget (tuxpaint is a favorite of most kids), so Waves Paint would be a fun project if you want to start playing with the Waves extension APIs.

More controversially, work on Waves-enabling a next-gen Journal could be interesting. As predicted by proponents of the Journal for some time, the "wave of the future" (so to speak) is filesystem-independent storage. Waves in Google's demo are titled and searched like email messages, not as "documents" in a filesystem hierarchy. However, we had repeated requests to unify Journal storage and traditional filesystems, for (a) better interoperability with existing systems, and (b) sneakernet collaboration. In my mind, this would mean exporting a waves-like view of an existing filesystem, as I proposed for the Journal, where directories are interpreted as slightly-special tag markers. One could imagine implementing a "Wave Server" based around this idea, in effect using the filesystem as the wave database. With the magic of Wave Federation, these "filesystem" waves can interoperate with other wave clients and servers. This standalone file-based server might also serve as the basis for "one child under a tree" wave editing. (For that matter, a robust sneakernet implementation of the Wave Federation Protocol would also be very useful!)

Exciting times! Wave promises to bring OLPC/Sugar's vision of ubiquitous collaboration to life at long last. Google has the funding and development resources to tackle what is in effect a bottom-up reorganization of the software stack. OLPC/SugarLab's role is peripheral but vital: strongly lead the development of offline or "flakey connectivity" aspects of this technology so that it can be used everywhere from the solar-powered jungle to the dense urban cities, and to provide the educational software on top of the platform so that kids can *learn* as they collaborate and create.

Google Waves of Sugar

Google announced Google Wave today, an "attempt to imagine what email would be like if it were invented today". It has a robust collaboration infrastructure strongly grounded in distributed systems theory. If I were (re)starting the OLPC project today, this is certainly what I'd base Sugar on — with as much corporate support from Google as possible, since the project is strongest when it has strong allies.

I believe you could make each school server into a Wave Provider, using the Network Principles I drafted while employed at OLPC to ensure appropriate DNS-style naming. Any document format based on XML can be made collaborative using the Operational Transform mechanism. The journal's large scale history/versioning mechanism could be based on the same principles, as So6/LibreSource demonstrated. And the UI demonstrated in the keynote (which I haven't even fully watched yet) would be an exciting way to implement the neglected Bulletin Board feature.

It would be an exciting start to the Sugar project! But given the existing code base, is there now too much bathwater around the baby to consider swapping the child out?


I liked the new Star Trek movie, but I wish they'd paid some attention to physics. When the main technical substance is named "red matter," you know the science consultant isn't on call. C'mon, name it Rubidium Dilanthumide or something -- technobabble's not hard if you're trying at all.

But it was the orbital mechanics that really annoyed me. You can't, y'know, just drop a spiky anchor straight down to earth from orbit. Nor can you "fall" out of the belly of a space plane: you're already falling. That's what being in orbit is. And there's this thing called an atmosphere? You ever heard about it? Air resistance? Friction? It makes things hot. And winds! C'mon, at least give your unanchored space tether thingy some sort of guidance rockets along its length to keep it going "straight down". It would make it cooler. Your heroes rocketing down, enveloped in huge plasma fireballs, dodging the giant blasts from the cable's guidance jets... It would be science-tastic.

Also, Starfleet: the bottom of a gravity well is not a great place to build an Enterprise. How exactly did you get that thing up into orbit? Without setting the corn fields on fire, I mean. Maybe another long spiky anchor chain lowered from space? And some hamsters in a wheel to crank it up?

And while I'm ranting about atmospheric physics: although I liked Spock's ship's dramatic swoop down into the atmosphere as a popcorn-munching crowd pleaser, from an orbital mechanics standpoint? Not so much. There's all this atmosphere in the way, and that's a space ship. And you thrust backwards to go "down" from orbit. And the scale's all wrong w.r.t. the length of the "drill cable" and the distance to orbit and the color of the sky and amount of atmosphere... but I can probably stop now.

Dear JJ Abrams: please hire someone who knows something about space for your sequel. I can deal with conventions like "explosions in space still make sounds" because it's more fun that way and "artificial gravity onboard all ships" because it makes the filming affordable -- but I expect at least a token attempt to make orbital space something other than a really dark room high up.

Posted via LiveJournal.app.

Somerville Open Studios!

I'm one of the painters participating in Somerville Open Studios this weekend: Sat/Sun noon-6pm, 40 Quincy St. Come on by and see some of my paintings! Mars, dogs, square dancers, swamps, and golden spirals will be ably represented. I regret that I wasn't able to complete my ironic juxtaposition painting pitting Fermat's spiral against Fibonacci's sequence; you'll have to wait until next year for that one.

SDR 0.4; JDoctest 1.4
I released SDR 0.4 last night (while watching a fabulous Woz on Dancing With the Stars), and JDoctest 1.4. SDR's release mostly signifies that I'm finally moving on from the breathing code; JDoctest has a number of minor improvements.

Found a litl work

I started work at litl yesterday, did all the necessary administrivia and got a build environment set up, and am typing this today using their software stack. There are a lot of similarities between my old and new jobs: both target poorly served "nonexpert" audiences — in litl's case, families at home — and are taking advantage of the opportunity to build a clean-slate integrated hardware/software solution that's new and different and not another rehash of the Common Desktop Environment of 1993. Both litl and OLPC write highly design-driven software and employ some of the same design firms, so there are more subtle aesthetic similarities as well.

The most sobering difference (for me) is the effective organization at litl. After spending years fighting uphill battles at OLPC, it's refreshing to find what seems a model development environment: clear and explicit specifications for new features, release notes that incorporate the specs (with green annotations indicating future features or deviations), and test cases. Lots of test cases — both in the code for machine use, and written in English for people to use.

This isn't rocket science. But writing specifications for a complete software/hardware system is not a little work, and as you become more ambitious the difficulty increases. Both OLPC's Sugar and litl reinvent the standard desktop paradigm, so you can't take even "the program launches when you click on it" for granted: you have to define every such interaction (what does "the program" mean?), how it should work, and test that it does so. Software at OLPC never quite made the transition from self-directed research project to production code. It saddens me to think what might have been if OLPC had been able to either properly develop Sugar, or successfully contract out the work — if only it was *Sugar* with the careful 200-page spec, exact lists of features implemented/in-process, heaps of test cases, and proper staging of features...

As part of a much larger company, litl also avoids the self-hosting trap. Email, HR, site hosting, databases, bug trackers, and collaboration support for the devel team are all outsourced to companies who do it well, so precious development time isn't wasted by having your rock star developers installing Ubuntu for the 13th time on some new server, upgrading, doing backups, moving machines from one network to the next, tuning trac, etc. Some of this is inevitable, and some is actually necessary -- sometimes there *is* a vital business need for some tool which it's easier to have in-house and patch — but OLPC was suckered into bad decisions by the lure of fast free bandwidth from MIT. At OLPC we didn't have to pay for hosting or bandwidth as long as we kept our servers under our desks — so we ended up paying far more in lost developer hours, as we tinkered, than we ever saved.

Anyway, I'm looking forward to the change of pace and to actually delivering the features in the spec! At least one of my OLPC friends will be joining me, and litl is hiring (although they self-describe as "a small team of rockstars"); I can point you in the right direction if you want to learn more about working here.


Migrated repositories to git
I've been slowly migrating my CVS repositories to git. The FLEX compiler repository (my decade of thesis work at MIT) and JUtil are now newly git-ified -- and I've released version 1.4 of JUtil in the process. I've also added FLEX and JUtil to ohloh, for good measure. Enjoy!

SDR can partner trade
I've released Square Dance Revolution 0.3. It still doesn't do anything terribly useful yet, but its square dance engine can do a partner trade now (barely). Contrary to the advice I always give others, I haven't developed SDR in the "crappy first draft, then refine" style -- since it's primarily for my own enjoyment, I've spent lovely leisurely hours playing with grammars, grammar generators, doctesting, drafting 3d models of Baypath Barn, and other such fun stuff. It's classic cathedral development: it may take years of off-hours, but boy will it be lovely when it's done!


The Python doctest module is really great: it makes it easy to simultaneously write test suites and demonstrate the usage for your modules. Python's interactive interpreter is key to its coolness: it's really easy to load the code you're working on, type some examples at the prompt, and turn the session into documentation and a test case.
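To make that round trip concrete, here's a toy example (the function and its values are invented for illustration): type a few calls at the interpreter, paste the session into the docstring, and doctest replays it for you.

```python
def mean(xs):
    """Return the arithmetic mean of a non-empty sequence.

    >>> mean([1, 2, 3])
    2.0
    >>> mean([5])
    5.0
    """
    return sum(xs) / float(len(xs))

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs the sessions above, reporting any mismatch
```

Run the module directly and doctest silently passes (or loudly complains); the examples double as documentation.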

I've been dusting off my Square Dance Revolution project, written in Java, and I thought: gee, it would be nice to use doctests here. A bit of inspiration from doctestj and Rhino, and a bit of elbow grease and: voila! JDoctest is born!

JDoctest is a Javadoc plugin which implements doctests by calling out to the Rhino javascript interpreter. Rhino's interactive javascript session makes Java as fun to program in / debug / test as Python is. (Rhino makes it easy to call between Javascript and Java.) Copy and paste those examples into javadoc comments, add a @doc.test tag, and you've got a test / use case example. I've added hooks to google-code-prettify to make the output beautiful, too.

Here's a simple example using JDoctest, and the SDR sources are now filled with more complex examples (for example). (New SDR release soon, I promise.) Enjoy!

Making shut-the-box fair
The standard game of "shut the box", analyzed in previous posts, has a fundamental flaw: it's not fair! On average in a two-player game, the first player will lose 2.5% of their stake in each game. Can we make a more fair two-player game?

The simplest solution is to change how ties are treated: with optimal strategies, 1% of the games end in ties. We can resolve all ties in favor of the first player, but that's not good enough. We actually need to treat ties like box-shutting, and award double payoffs. And it's still not enough, especially since the 2nd player can alter their strategy to avoid ties if they become too painful.

So let's leave ties alone, and alter the box-shutting payoff. Although the 2nd player has an advantage in the points race, because they know what 1st player score they have to beat, the 1st player has an advantage in shutting the box: if they are able to do so, the game ends immediately and the 2nd player doesn't have a turn. The 2nd player doesn't have the opportunity to, for example, achieve a shut box themselves, tying the game.

In the standard game, box-shutting pays out double: if everyone had to put up a $1 stake to play, then they have to pay another $1 to the player who is able to shut the box. We'll call this a "2x" payout. Even odds work out to a 3.8x payout (accounting for the fact that optimal strategies change as the payout rises). It would be a little awkward to ante up $2.80 a person in a $1 ante game, but it turns out that a 4x payout ($3 more from each person on a shut box) is probably okay: this only gives the 1st player a 0.1% advantage. That might be enough for a casino to live on, but it's probably acceptable among friends.

Another way to even the odds is to force the players to alternate turns. This is hard in the physical game, where there's only one physical box with tiles, but it is natural in a computer version of shut-the-box — and it probably improves gameplay by eliminating the long waits between a player's turns. The first player still has a slight advantage, but computation indicates that this advantage is limited to 0.4% of the stake ("acceptable among friends"). It's hard to compute optimal strategies in a 3 player game, due to combinatorial explosion, but it appears the first player's advantage grows as the number of players does.

Even more shutting of boxes.
In previous posts I've talked about mathematics and strategies for the "shut the box" game. I've been describing a two-player game, but shut the box can be played with any number of players. How do strategies change?

It's easy to think about the limit case and work backwards. If there are an infinite number of players, eventually someone will "shut the box" — it happens in about 1 in 10 games, after all. Since the round will eventually end this way, your final score doesn't matter: all that's important is whether you manage to shut the box. Every player's strategy is identical. (With an infinite number of players, there is no "last" player — in fact, even if there were one, there would be an infinitesimal chance of their touching the dice: the game would almost certainly be ended by a previous player's shutting the box before the "last turn" occurred.)

A strategy which maximizes your chances of shutting the box, ignoring final score, is easy to implement and evaluate, and differences from the "largest result" strategy described previously can again be tabulated on a single side of a sheet of paper. This "most zeros" strategy manages to shut the box in 9.8% of games played, but still loses in a two player game — even though the optimal 1st player strategy only manages to shut the box in 9.5% of its games. As first player against an optimal 2nd player, "most zeros" loses 3.3% of the stake in each game, compared to only 2.5% lost by an optimal 1st player (game results: 9.5% 1st player shut box, 8.2% 2nd player shut box, 37.3% 1st player by points, 43.8% 2nd player by points, 0.9% ties). As second player against an optimal 1st player, "most zeros" loses 2.8% of the stake; an optimal second player would win 2.5% of the stake (game results: 9.5% 1st player shut box, 8.8% 2nd player shut box, 41.1% 1st player by points, 39.7% 2nd player by points, 0.9% ties).
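For the curious, here's a sketch of how the "most zeros" computation can be set up: a memoized recursion over the 512 tile subsets. Caveat: the one-die rule assumed here (a single die may be rolled once 7, 8, and 9 are shut) is just one common rule variant, and the helper names are my own; the exact percentages depend on which variant you play.

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations

# Distributions of the totals of two dice and of one die.
TWO_DICE = {}
for a in range(1, 7):
    for b in range(1, 7):
        TWO_DICE[a + b] = TWO_DICE.get(a + b, Fraction(0)) + Fraction(1, 36)
ONE_DIE = {s: Fraction(1, 6) for s in range(1, 7)}

def moves(tiles, total):
    """All subsets of the open tiles summing to the rolled total."""
    for r in range(1, len(tiles) + 1):
        for combo in combinations(tiles, r):
            if sum(combo) == total:
                yield combo

@lru_cache(maxsize=None)
def p_shut(tiles):
    """Chance of shutting the box from `tiles` (a sorted tuple of open
    tiles), when every choice maximizes that chance ("most zeros")."""
    if not tiles:
        return Fraction(1)
    dists = [TWO_DICE]
    if all(t <= 6 for t in tiles):   # assumed rule variant: one die may be
        dists.append(ONE_DIE)        # used once tiles 7, 8 and 9 are shut
    best = Fraction(0)
    for dist in dists:
        ev = Fraction(0)
        for roll, p in dist.items():
            options = [p_shut(tuple(t for t in tiles if t not in m))
                       for m in moves(tiles, roll)]
            ev += p * max(options, default=Fraction(0))
        best = max(best, ev)
    return best

print(float(p_shut(tuple(range(1, 10)))))  # chance of shutting a fresh box
```

The same skeleton, with payoffs in place of the shut-the-box indicator, underlies the multi-player score computations discussed above.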

Since the box is shut with a fairly high probability, we converge to the "most zeros" strategy fairly quickly as the number of players increases. Intermediate strategies are parameterized by the "score to beat" (the minimum score achieved by previous players) and the probability distribution of the minimum score to be achieved by future players. The complexity of computing exact solutions increases quickly.

In my next post on the topic, I'll discuss an "alternating turns" variant of shut-the-box which might be a more enjoyable real-time-collaborative computer game.

Just on the off chance that people read my blog but not Chris Ball's, I'll point out his recent Multipointer Remote Desktop demo. This is a key technology for improving collaboration, and I think it's relevant both to OLPC and to other little companies which might be thinking about collaboration features. Multi-pointer stuff is just a much different experience than "watching" someone else drive the display, as numerous studies (such as this one) have substantiated.

Pinot/Journal news
The magic of RSS just informed me that Pinot has had its 0.90 release, fixing some of the problems with i18n and OpenSearch that I'd found in my Journal2 work, and actually implementing my tagged cd idea! Coolness. I had been thinking about Recoll recently, simply because I like its simple plug-in interface for indexing new formats, but it looks like Pinot has pulled ahead again.

Fabrice, how about importing the HTML-indexing code from Omega like Recoll does, and using the same plugin interface and filters? You'd immediately and greatly expand the types of documents you can index, and make it easy for third parties to create new ones. A plugin basically just takes the file-to-be-indexed and spits out an HTML document with the indexable text. Additional xapian fields can be specified by including appropriate <meta> tags in the output HTML. Full details are in the Recoll manual; it's a pretty straightforward scheme.
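For concreteness, here's a hypothetical sketch of such a filter in Python — the function name and the "author" field are invented for illustration; the Recoll manual has the real field-name conventions:

```python
import html
import sys

def to_indexable_html(text, author=None):
    """Wrap raw extracted text as the HTML document a Recoll-style filter
    emits.  Extra index fields (here an illustrative 'author') become
    <meta> tags in the head, per the scheme described above."""
    metas = ['<meta http-equiv="Content-Type" '
             'content="text/html; charset=UTF-8">']
    if author:
        metas.append('<meta name="author" content="%s">'
                     % html.escape(author, quote=True))
    return ("<html><head>%s</head><body>%s</body></html>"
            % ("".join(metas), html.escape(text)))

if __name__ == "__main__" and len(sys.argv) > 1:
    # a filter is invoked with the file to index as its argument
    with open(sys.argv[1], errors="replace") as f:
        print(to_indexable_html(f.read()))
```

The indexer never needs to know about the source format; it just parses the HTML the filter hands back.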

Mystery Hunt lessons
It's almost February, but I still haven't posted my wrapup of the 2009 MIT Mystery Hunt. Before I get to the puzzles, let me just say that the mystery hunt has made job interviews very awkward for me! Every time I've been on the job market, I find that I want to cite "lessons learned" from mystery hunts past -- which then forces me to explain what the mystery hunt is ("well, imagine the world's hardest scavenger hunt, with puzzles"), and I end up sounding like a huge geek. Which may not be a bad thing, I guess...

Why does the Mystery Hunt keep cropping up? Well, in my recent interview it started with a discussion of the difference between "enforced" and "permissive" management styles. There is a great and good tendency in any large endeavor, corporate or puzzle-hunt, to draw up "best practices" to codify lessons learned about the most effective ways to do things. But what happens next? There is a crossroads between two fundamental strategies. The "enforcement" strategy tries to figure out how we can forcibly prevent people from doing things other than the "best thing". Bureaucracy and long lists of rules follow, along with authority: someone who will not let X do Y unless they have been convinced that Z has happened.

A better strategy is "permissive": assuming good-faith on the part of all involved, just passively monitor the situation, so that "bad things" can be remedied without intruding on all the people trying to do good things. This is a much more efficient strategy, especially if your resources are constrained — it's less costly to monitor than to prevent — but it involves a lot of trust. Some cultures can sustain the necessary trust, and some cannot.

Mystery hunt example: how do we prevent people from calling in too many wrong guesses at an answer? Team ACME solved the problem by just putting the communal answer phone at the front of the room, so that everyone would notice if someone got crazy with the guessing. Team Codex uses an enforcer: all answers must be called in by the designated operator-on-duty. Lest you get the idea that permissive==good and enforcement==bad, let me state clearly that Codex has solid reasons for its choice: it has so many remote team members (not physically present in team headquarters) that monitoring is difficult -- the simple "phone in the front of the room" doesn't work when your rooms are scattered across Seattle, Zurich, and countless other places.

At OLPC we faced the same issue with system administration: how do we keep our network and computers secure? OLPC came from the permissive MIT/Media Lab heritage: since it is a research and learning organization, MIT uses the permissive strategy, letting users do what they please on the academic network and focusing effort on detecting "bad behavior" (worms, cracked machines, attacks, misconfigurations) and efficiently and effectively black-holing small pieces of the network to contain these problems. If a naive user doesn't properly patch their personal computer and it gets hacked, that machine is noticed almost immediately and dropped off the network. "Traditional" system administration focuses on enforcement: by forcing all users to use only patched versions of approved software, we can proactively prevent problems -- but at the cost of also preventing Real Work and Unexpected Solutions.

Back to the mystery hunt: since the hunt is a competitive event, it's easy for people to get carried away with secrecy to protect the team's competitive advantages. But the mystery hunt starts at noon on a Friday, with people arriving that morning from around the country, and it is crucial to get all these new people integrated into the team as quickly as possible. Every minute that some team member can't print/can't access the network/can't access the wiki/can't access the chat rooms is a minute that could be spent advancing the goal: solving puzzles. Permissive strategies are a big win here, especially because the hunt code of conduct, observed by all seriously competitive teams, ensures the necessary trust. Other teams aren't actively seeking to hack into our systems; preventing Google from sucking everything into its archives and index is usually enough.

Another OLPC example: build systems. During the run-up to a stable build, it's very important to keep build changes under control. How to prevent developers from making uncontrolled changes to the stable build? The enforcement strategy appoints a build master Who Alone Is Powerful and forces all changes to be made via the build master. The permissive strategy says that developers are generally trustworthy (and are accountable to management if not!) and retains the ability of anyone to affect the build at any time. Instead of making it arbitrarily harder to fix things as the build approaches stability, we (a) trust the developers to commit only appropriate fixes to stable, (b) watch all the changes (which has the benefit of catching even unintentional changes) and (c) back out any well-intentioned fixes that might adversely affect stability. The buck still stops with the build master to determine what's "safe" for the stable build, but their direct intervention is limited to backing out the infrequent "bad thing" instead of being interposed in every frequent good thing.

This post is very long, and I've only discussed one of the "mystery hunt lessons". Others include effective ways to collaborate with remote resources/solvers, and the importance of making "the right thing easy". The best way to get people to do "the right thing" is to make it the easiest thing to do.

For example: a perennial problem is keeping remote hunters "in the loop" about what puzzles have been solved and what new puzzles have been made available. Further, there's a lot of internal bookkeeping that needs to be done on these events. My best solution to the problem was a chatbot who lived in the chatroom ACME used to keep in touch with remote solvers. When a solution was phoned in, we'd type "bot, the solution to FROBNOTIC UPGRADES is BAZNIC". When a new puzzle was made available, "bot, new puzzle: GROOVY FROOD". This was the (only) means to update our collaboration software: the puzzles/answers would be added to the wiki's automatically generated "status blackboard", and new puzzle pages would be created in the wiki to track work done. But, by performing the operation in the shared chat room, it notified the remotes at the same time. Suddenly no one had to be reminded to update the remote hunters! (Codex has a similar system: all answers are phoned in by a dedicated operator, and you are encouraged to communicate with the operator via the shared ringhunters chat room. Codex doesn't have as effective a "new puzzle notification" system, although you could monitor the changelog of the wiki to see new pages being created.)
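A bot like that needs surprisingly little parsing. Here's a hypothetical sketch of the two commands quoted above — the function and pattern names are mine, not the actual ACME bot's:

```python
import re

# The two chatroom commands described above, as regular expressions.
SOLUTION_RE = re.compile(r"bot, the solution to (?P<puzzle>.+) is (?P<answer>.+)", re.I)
NEW_PUZZLE_RE = re.compile(r"bot, new puzzle: (?P<puzzle>.+)", re.I)

def parse_command(line):
    """Return ("solved", puzzle, answer), ("new", puzzle, None), or None."""
    m = SOLUTION_RE.match(line)
    if m:
        return ("solved", m.group("puzzle"), m.group("answer"))
    m = NEW_PUZZLE_RE.match(line)
    if m:
        return ("new", m.group("puzzle"), None)
    return None  # ordinary chat; the bot stays quiet
```

Everything else the bot does — updating the wiki blackboard, creating puzzle pages — hangs off the tuple this returns, while the chat room itself provides the notification for free.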

In our latest hunt, we used Google Spreadsheets extensively to collaboratively solve puzzles. Team members would typically create new sheets in the spreadsheet to test different theories for how the puzzle might work. This had the unexpected side effect of eliminating almost all updates to the puzzle's wiki pages -- it was easy to work in the spreadsheet, and hard to keep switching back to the wiki. The problem was that it is also "hard" to write long free-text blocks inside a spreadsheet, and so most sheets in the spreadsheet were "undocumented", and it was hard reconstructing what had been done on a puzzle when a newcomer picked it up. The discussion on our mailing list started along familiar paths: can we somehow force people to update the wiki? The "make the right thing easy" lesson suggests it would be more profitable to make it "easy" to add free-text blocks to sheets of the spreadsheet -- then the "right" thing will just naturally happen.


OLPC laid off most of their software staff yesterday -- 50% of the staff overall, but the cuts were deepest in software. That means, as of Friday, I am available to work -- for you!

I am hearing rumors that there might be a source of seed funding for continued development of Sugar, the educational software stack shipping on all OLPC XOs. If so, I'd love to be able to continue the work that's consumed me for the past 18 months. Fingers crossed.


xkcd and shut-the-box

An xkcd comic from a while ago mentioned a "knapsack problem," and one reader (dhogarty) wrote some set enumeration code to tackle it.

Before I go further, I should mention that Knuth is the definitive reference here (as in most things), in particular Volume 4 which is now available only in fascicle form. If you haven't been reading the pre-fascicles just as soon as they're posted on the TAOCP web page, you're a sad sorry person who doesn't deserve to be called a computer scientist. (If you're not a computer scientist, read at least fascicle 4's chapter on the history of combinatorics; you can buy a hardcopy from amazon.)

The xkcd problem is called a "knapsack problem", but it's not, really; see Knuth's commentary around Algorithm F of (vol 4 fascicle 3a). We're generating multicombinations of n items taken from the m appetizers (repetition permitted). It's always worthwhile generating your options in strict lexicographic order to get a handle on duplicates; consult the history of combinatorics cited above for cautionary tales.

A compact way to generate multicombinations without duplicates is simply to require the output to be in lexicographic (sorted) order. For example, tweaking dhogarty's code only slightly:

def all_seqs(size, alphabet):
    # multicombinations of `size` items from alphabet, in lexicographic order
    for n, c in enumerate(alphabet):
        for s in all_seqs(size - 1, alphabet[n:]) if size > 1 else [[]]:
            yield [c] + s

You can wrap this in an iteration of sizes from 0 to infinity to generate multicombinations of all possible cardinalities.
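As it happens, the standard library's itertools.combinations_with_replacement produces the same lexicographic order as all_seqs above, so that size-iteration wrapper can be sketched as:

```python
from itertools import combinations_with_replacement, count

def all_multicombinations(alphabet):
    """Lazily yield multicombinations of every size: first [], then all
    of size 1, then size 2, and so on, each size in lexicographic order."""
    for size in count(0):
        for combo in combinations_with_replacement(alphabet, size):
            yield list(combo)
```

The generator is infinite, so consumers should break out once combinations exceed whatever bound (e.g. the target sum) makes further sizes impossible.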

A more elegant solution only generates multicombinations which sum properly (this is similar to Knuth's Algorithm F):

def ways_to_sum(total, menu):
    # combinations of menu items (repetition allowed) costing exactly `total`
    if total < 0:
        return
    if total == 0:
        yield []
        return
    for n, (item, price) in enumerate(menu):
        for tail in ways_to_sum(total - price, menu[n:]):
            yield [item] + tail

# prices in cents, from the comic
menu = [('Fruit', 215), ('Fries', 275),
        ('Salad', 335), ('Wings', 355),
        ('Sticks', 420), ('Sampler', 580)]

for solution in ways_to_sum(1505, menu):
    print(solution)

It's regrettable that the solution isn't unique. Bad Randall.

I've found lots of opportunities to generate "all possible solutions" in various contexts, and Python's generators have generally been an elegant means to express solutions. For example, I've been discussing "Shut the Box"; here are some code fragments to generate all possible plays for some board state:

def ways_to_sum(total, min_digit=1, max_digit=9):
    """Generate all the ways to sum to the given total using the digits
    `min_digit` through `max_digit` without repetitions.
    By default these are 1 and 9, respectively.

    >>> list(ways_to_sum(1))
    [[1]]
    >>> list(ways_to_sum(2))
    [[2]]
    >>> list(ways_to_sum(3))
    [[1, 2], [3]]
    >>> list(ways_to_sum(12))
    [[1, 2, 3, 6], [1, 2, 4, 5], [1, 2, 9], [1, 3, 8], [1, 4, 7], [1, 5, 6], [2, 3, 7], [2, 4, 6], [3, 4, 5], [3, 9], [4, 8], [5, 7]]
    """
    for first in range(min_digit, min(max_digit + 1, total)):
        for tail in ways_to_sum(total - first, min_digit=first + 1, max_digit=max_digit):
            yield [first] + tail
    if min_digit <= total <= max_digit:
        yield [total]

def all_states(min_digit=1):
    """Generate all possible shut-the-box states."""
    if min_digit > 9:
        yield []
        return
    for x in all_states(min_digit=min_digit + 1):
        yield x
        yield [min_digit] + x
The all_states() function generates the states in lexicographic order based on the set's binary representation (a9 a8 a7 ... a2 a1, to use Knuth's notation).

[This was originally going to be a comment on dhogarty's post, but after writing it I discovered that dhogarty had broken his wordpress installation sometime after 12 Sep 2008. I reverse-engineered the "Captcha-free" spam protection and fixed the Ajax URL in his page, but no dice. Querying the "Captcha-free" implementation with a python one-liner and performing a manual submission still loses. Maybe trackback will work.]

Pippy. And Winnie-the-Pooh.

In October, SJ and I travelled to Peru for OLPC. (Ob. Plug: buy a G1G1 machine this holiday season! It helps get laptops to needy kids and helps fund us to develop more and better software for those kids.)

Anyway, my favorite story from our time in Peru was when SJ stood up in front of a classroom of university kids and announced, in Spanish, "Now we are going to make drawings with pee-pee." He actually was talking about Pippy, the python-programming-for-kids activity written by Chris Ball (with patches from yours truly and others) -- but there's a reason why Pippy is translated "Peppy" in Spanish =).

But I've found another contender for that crown: a cookbook. Best book title ever. (David Friedman not only found that book, he made it an "employee pick" when working at a "large chain bookstore".)

So. Pippy and Winnie-the-Pooh. Closing a scatological circle.

(While I'm in the neighborhood, I have to plug David Friedman's Murder in the Hundred Acre Wood as well. And 10 Lessons from the Movies. Ok, go resume your productive lives now.)

Sugarcamp Friday

Last bit: the video from Friday's Sugarcamp sessions (2008-11-21) is now up. Ogg-format video in both full and small sizes is available at download.laptop.org. There are "all of Sugarcamp", Wednesday, Thursday, and (new) Friday playlists on YouTube.

Friday's video is inside the cut.

Friday has talks on Sugar UI and Collaboration. (These links are to the "small" Ogg-format files, reasonable for downloading.) I came in late to the Sugar UI brainstorm session, so the video is missing the first part. Sorry about that =(.

That's all the sugarcamp video! Hope you've enjoyed watching along!

