Pango/Android -vs- Pango/NaCl
cananian

At the end of my Sugar/Android week, I had a simple Pango-on-Cairo demo running. This was built on a stack of ported libraries, including gettext, pixman, freetype, libxml2, fontconfig, and glib, as well as cairo and pango. You can run the demo yourself by sideloading pango-demo.apk onto your Android device (tested on a Motorola Xoom), and you can browse the source code to see what it entailed (here's the scariest part). (I was inspired by Akita Noek's android-cairo project, but I ended up reworking the build scheme and redoing most of the ports.)

Screenshot of Pango demo on Android

It made sense to start my Sugar/NaCl investigation by porting the same demo application to Native Client. The same stack of ported libraries was involved, although it was easy to build more functionality into the Native Client ports, including threading and PNG/PS/PDF support in cairo. The source code is a fork from the upstream naclports project, and the process was generally much cleaner. (But see my previous post for some caveats regarding naclports.) If you're using Chrome 10 or 11, you can run the demo in your browser (follow the instructions on that page). The Wesnoth team has a parallel project that ported some of these libraries as well, but not in an upstreamable manner.

Screenshot of Pango demo on Native Client

The demo app uses cairo to draw the background, an animated X, and some basic text in the center; it uses Pango's advanced international text support to draw properly shaped Persian text in a circle around it. The center text is the "proper" bilingual Greek/Japanese written form of "pango"; the text around the edge is the Persian word "harfbuzz", the name of Pango's complex-text shaping engine. Note that the Persian text runs right-to-left, and that I didn't include a full CJK font in the NaCl app, so the Japanese "go" character is missing. The Android port rebuilds the font cache at every startup, so it starts rather slowly; the NaCl port ships a prebuilt font cache, so it starts more quickly.
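For the curious, the circular text boils down to the stock pangocairo dance: let Pango shape the text into a layout, then rotate the cairo context and paint the layout at each position around the ring. Here's a minimal C sketch of that technique; it is illustrative only, not the demo's actual source, and draw_text_ring and the "Sans 18" font choice are made up for the example:

/* Minimal sketch: Pango handles bidi/shaping, cairo handles rotation
 * and rendering. Not the demo's real code. */
#include <pango/pangocairo.h>

static void
draw_text_ring (cairo_t *cr, const char *utf8,
                double cx, double cy, double radius, int copies)
{
    PangoLayout *layout = pango_cairo_create_layout (cr);
    PangoFontDescription *desc = pango_font_description_from_string ("Sans 18");
    int i;

    pango_layout_set_font_description (layout, desc);
    pango_font_description_free (desc);
    pango_layout_set_text (layout, utf8, -1);   /* Pango shapes the RTL text */

    for (i = 0; i < copies; i++) {
        int width, height;

        cairo_save (cr);
        cairo_translate (cr, cx, cy);
        cairo_rotate (cr, 2 * G_PI * i / copies);
        cairo_translate (cr, 0, -radius);

        pango_layout_get_pixel_size (layout, &width, &height);
        cairo_move_to (cr, -width / 2.0, 0);
        pango_cairo_show_layout (cr, layout);   /* draws at the current point */

        cairo_restore (cr);
    }
    g_object_unref (layout);
}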

Both ports took about two weeks. I blew my original schedule, partly due to the Patriots' Day holiday, and partly because I'd given Android about a week's head start by tinkering with it before my original schedule post. The frame rate of the demo is much better on NaCl (so fast that the edges of the animated X look choppy in the screenshot), but the hardware isn't easily comparable, so that comparison doesn't really tell us much. The porting effort was certainly more pleasant on NaCl, since newlib is a much more complete libc than Android's "Bionic", but having gdb available made debugging easier on Android. (There is an unmerged NaCl branch that integrates gdb into the browser, though!)

Much of the GNOME/POSIX library stack assumes access to a filesystem tree and does file-based configuration. In our demo application, fontconfig was the most culpable party: it wanted to load a configuration file describing font locations and naming, then to load the fonts themselves from the file system, and finally to write a cache file describing what it found back to the file system. Most ported software is going to want similar access—even if you store the user's own documents in a Journal, software still expects to find configuration, caches, and other data in a filesystem.
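To make that concrete, even the simplest fontconfig client touches the filesystem several times. A sketch of a typical lookup follows; the exact paths depend on your fonts.conf and cache settings, and error handling is omitted:

/* Sketch of a typical fontconfig lookup, annotated with where it
 * hits the file system; error handling omitted. */
#include <fontconfig/fontconfig.h>

int
main (void)
{
    FcInit ();                           /* reads fonts.conf, scans font
                                            directories, may write a cache */

    FcPattern *pat = FcNameParse ((const FcChar8 *) "Sans");
    FcConfigSubstitute (NULL, pat, FcMatchPattern);
    FcDefaultSubstitute (pat);

    FcResult result;
    FcPattern *match = FcFontMatch (NULL, pat, &result);

    FcChar8 *file = NULL;
    FcPatternGetString (match, FC_FILE, 0, &file);
    /* 'file' is a path on disk: the font itself is then read from
       the file system (by freetype, in our stack). */

    FcPatternDestroy (match);
    FcPatternDestroy (pat);
    return 0;
}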

Android provides the POSIX filesystem APIs, but the filesystem an app can touch is segmented and sandboxed. As discussed previously, Android's Opaque Binary Blob feature may allow you to create an app-specific filesystem, but this doesn't let you share (for example) fonts and font configuration between activities. NaCl might eventually provide a similar unshared mechanism based on the HTML5 AppCache.

The preferred solution is more limited, but more flexible: no built-in filesystem APIs are used (or in NaCl's case, provided!) at all. Instead, you provide your own implementation of the POSIX file APIs (either via the --wrap linker indirection or through an appropriate backend to newlib/glibc/glib). For the NaCl demo app, I wrote a rather-elaborate in-memory filesystem --- only to find that an even-more-elaborate one already existed in naclports. But the longer-term solution uses message-passing (SRPC in NaCl, intents in Android) to implement these POSIX APIs. In Native Client, the implementation would be in browser-side JavaScript, which would then allow you to share parts of the filesystem tree between activities and/or map it into (cached) web-addressed resources. In either case, your application still sees the bog-standard POSIX API it expects.
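The --wrap trick mentioned above is just linker-level indirection: building with -Wl,--wrap=open (and friends) turns every call to open() into a call to __wrap_open(), which you supply yourself. A hedged sketch, assuming a hypothetical memfs_* in-memory filesystem (those names are illustrative, not the actual naclports API):

/* Sketch of POSIX file-API wrapping via the linker's --wrap option.
 * Link with -Wl,--wrap=open -Wl,--wrap=read -Wl,--wrap=close; the
 * memfs_* functions stand in for whatever in-memory (or
 * message-passing) backend you actually use. */
#include <sys/types.h>
#include <fcntl.h>
#include <stdarg.h>

extern int     memfs_open  (const char *path, int flags, int mode);
extern ssize_t memfs_read  (int fd, void *buf, size_t count);
extern int     memfs_close (int fd);

int
__wrap_open (const char *path, int flags, ...)
{
    int mode = 0;

    if (flags & O_CREAT) {              /* mode only present with O_CREAT */
        va_list ap;
        va_start (ap, flags);
        mode = va_arg (ap, int);
        va_end (ap);
    }
    /* A fancier wrapper could fall back to __real_open() where a real
       filesystem exists; here everything goes to the in-memory tree. */
    return memfs_open (path, flags, mode);
}

ssize_t
__wrap_read (int fd, void *buf, size_t count)
{
    return memfs_read (fd, buf, count);
}

int
__wrap_close (int fd)
{
    return memfs_close (fd);
}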

More problematic are the networking APIs. Here Android provides a pretty standard socket library, while Native Client provides nothing at all. Using a browser-based implementation, as for the file APIs, will work fine for HTTP, WebSockets and even P2P via the HTML5 P2P APIs. But it's not clear that (for example) glib's elaborate asynchronous DNS name resolver implementation can (or should!) be implemented in a NaCl port.

In the end, the porting effort and abstraction shifts needed for Native Client and Android are roughly comparable. I expect Native Client will hold a strong edge in allowing close integration with web standards and web technologies. Android will probably continue to hold an edge in third-party application support and platform maturity.


litl's technical secrets revealed!
cananian

Update: Lucas has written up some additional technical details — and he mentions our update system, which was one of the first bits I worked on when I arrived. We heavily dog-food the updater, using buildbot to push out the latest bits to developers every night.

Update 2: Welcome, slashdot! The videos at http://litl.com/support/ give a good idea of what the UI is like.

Update 3: Non-technical press coverage: Wired, WSJ, Xconomy, CrunchGear

Update 4: More words: dignacio on litl's server side, J5, twitter

litl launches today, so I can finally talk a bit about the cool technologies behind our software stack.

On the server side, litl is a cloud computer, built on Google App Engine, Amazon S3, and Django — all of which are fantastic technologies. All machine data is stored in the cloud, so you can have a gorilla stomp on your litl, pick up another one, log on and instantly recreate your environment. (Since we developers are always abusing our prototype hardware, we've tested this a lot!)

On the client side, the litl software (naturally, code-named "big") is built on gjs, which is the smallest possible wrapper around JavaScript necessary to make it a large-scale programming language. I've really enjoyed programming in JavaScript, which might seem odd to people who (a) have had frustrating experiences with global variables and crazy incompatibilities trying to make JavaScript work on the web, and (b) know that I'm a static-types and compiler-bondage sort of guy. So I'll spend a little time here talking about gjs.

From a language standpoint gjs adds just one feature: a minimal module/namespacing mechanism built on a single top-level definition: the name "imports". Modules are imported using (typically constant) definitions, such as:

const StringUtil = imports.stringUtil;

let s = StringUtil.sprintf("%d", 3);

The dynamic stringUtil property of the imports object is an object whose properties are the top-level definitions in the file stringUtil.js, found on the import path. Subdirectories are additional dot-separated components, as in Java package names; imports.util is a dynamic object representing modules found in a util directory on the path. You may need to compare this to the namespacing mechanism in the abandoned ES4 proposal to appreciate how small and elegant this is.

Further, this module system integrates with the GObject system via GObject introspection annotations for library code. This allows easy integration with libraries written in C or any other introspectable language. For example:

const Gtk = imports.gi.Gtk;
Gtk.init(0, null);

let w = new Gtk.Window({ type: Gtk.WindowType.TOPLEVEL });
w.connect('destroy', Gtk.main_quit );

let button = new Gtk.Button({ label: "Hello, world!" });
button.connect('clicked', function() { w.destroy(); } );
w.add(button);
w.show_all();

Gtk.main();

The gjs system is built on the SpiderMonkey JavaScript engine, the one used in Firefox, so JavaScript execution benefits from all the JIT and performance work done upstream. Further, it means that we can code in JavaScript 1.8, Mozilla's dialect of JavaScript with lots of bells and whistles (mostly borrowed from Python):

gjs> function range(n) { for (let i=0; i<n; i++) yield i; }
gjs> [i*2 for (i in range(5))]
0,2,4,6,8

(In a later post perhaps I'll describe how you can use the yield expression to build a continuation-based system for writing asynchronous callbacks for UI code in a natural manner.)

Overall, JavaScript as a system programming language feels a lot like Lisp must have for the programming generation before mine: minimal syntax, very powerful and orthogonal core abstractions, and (dare I say it) not much type-checking or busy-work to get in your way. (gjs is not a homage to Sussman, but it should be!) JavaScript is a programming language for those who know what they're doing and aren't afraid of a little functional abstraction. (Now if only there were a way to disable semicolon insertion!)

OK, enough about gjs. The litl software stack is based on Canonical's distribution of Ubuntu for mobile/embedded devices, and the Canonical folks have been great partners. It's been a joy to get changes integrated upstream, and Canonical has done a lot of excellent work adapting their distribution to litl's unique needs. On the UI side, we use X11 and some GTK widgets, but implement our own (very simple) window manager. Most of the actual look and feel is done with Clutter (hardware-accelerated 3D animated effects, whee!), and we have a Flash-based API for external developers. We also have hardware-accelerated H.264 video.

Regardless of the technical fun, though, I have to say that the best thing about working at litl is its management: developing with all the other rockstars here is made all the more fun by the knowledge that management will help ensure that our goals are realistic and that we'll be able to hit our targets, with time left over to polish before release. It's just much more fun to code when you know you'll be proud of the end result.