First off, it was clearly advantageous to have the project run on System 6. Not only did this have a coolness factor that System 7 lacked, but it meant we could take over the entire machine (assuming MultiFinder was off), install runtime patches to implement our own paging system, deal with the smaller screen resolution without interfering with other apps, and greatly expand the range of low-end machines that could run it.
My first thought was to get it running on a Mac Plus, but even the maximum of 4MB of RAM available to 68000 Macs rapidly proved inadequate in some back-of-the-cocktail-napkin calculations, despite using the top eight bits of each pointer for doubled-up tag storage. So we went with a full 8MB allocation, meaning a 68020 is required. Sorry. But this means everything from the Mac II on up, so I think it's still a pretty good selection of compatible hardware (and accelerator cards will be supported for 68000 machines).
The second problem with approaching the 68K port was how to actually compile it. Our choice of System 6 meant MPW and CodeWarrior were both out of the question, and THINK C doesn't have C++ support, so I settled on Symantec C++ 7.0 since it still works on System 6. However, owing to its age, Symantec C++ only supports a (by modern standards) drastically reduced set of the language and none of the gcc extensions that we depend on for the ordinary TenFourFox, and System 6 doesn't have dynamic linking. Also, there is no xpidl compiler system on the classic Mac OS for anything other than CodeWarrior (Classilla uses it to build its XPTs and headers), so it can't build the interface files. And then there was the issue of the compilation and linking completing sometime in this century.
So I attacked the problem in reverse. The G5 already builds XPTs and header files as part of the compile, so I pulled them from the 20 test build. Then I wrote up a little preprocessor (sort of like ansi2knr, for those of you who remember what that was for) to deal with Mozilla's liberal use of SFINAE, substitute equivalent code for gcc extensions, and comment out a few things that were inessential and could not be built (for example, no plugins). GNU Makefiles were simply turned into C++ projects. Finally, to deal with the shared libraries, I wrote a dummy main() and built an "executable" with debugging symbols on so that symbols could be "exported" (more about that in a second) from libxul, since Mozilla doesn't support non-libxul builds and I couldn't think of a better way to do it. I dragged the spare quad G5 out of the closet and set up Basilisk II on it at full speed with the lowest refresh rate and as much memory as it allowed, installed an OS and Symantec C++, copied the files over, then built each project step by step as I worked on other things. What with manually punching OK and Cancel, occasional compile errors and the emulator overhead, it took about two weeks all told to finish the compilation (I really need to get build automation working, but this was a demo project, after all), but it did finish. The browser didn't really stand up at this point, but the debugger proved it was at least generating proper code.
The next two things to write were the dynamic linker and the JIT. Yes, there's a 68K JIT too. In fact, it was somewhat easier than the PowerPC JIT since the 68K has a frame pointer and it's a little less finicky about the stack (but since the 68K version is patterned after our PowerPC version, there's probably a fair bit more optimization to be done). Testing it was a little tricky in the debug JS shell, but that's included too for you to try. The dynamic linker scans the libxul "executable" at the beginning and generates either direct addresses to call or a stub routine for a user-level page fault for those functions that were being exported. This is obviously a little slow, but given that it wouldn't fit in memory (at about 65MB), this was the only way it would work. When run, the stub would try to execute the method out of paging space. If a given function or symbol had not been paged in, the linker would use an LRU ring to evict an eligible C++ method in memory and load the new one from the debug symbol table we compiled in, cache it, and run that.
Last was widget and font code. Harfbuzz, surprisingly, compiled without incident and generated most of the chrome and screen. I devised a custom MDEF for the menus to space them closer together so that we make the most of our limited screen real estate, as well as icons for the bookmarks menu. Windows were just regular windows, and for this I imported much of Classilla's widget code, which, aside from stripping out the Unicode stuff, worked surprisingly well in System 6. (If you have QuickDraw acceleration, it's a lot faster, and it can use the RAM on cards like the 8•24 GC for GWorlds.) As far as the actual XUL browser goes, I just wrote a very minimal tab implementation for right now. The tabs are merely bitmap PICTs in the skin folder, as are the back button and the basic layout of the bar.
Well, that's enough about implementation; let's show you how it works. TenFourFox right now comes on a 128MB HFS disk image. I tested it originally with Mini vMac, which needed a little bit of hacking to enable MacTCP and an emulated SONIC Ethernet card (Paul, I will send you these sources). Here we are at the desktop. All of these screenshots are at 512x342 so that we can test it even on a compact Mac.
We start the build and the loader immediately switches to the dynamic linker, which enumerates libxul. For ease of use, all of the other libraries were rolled into XUL as well, unlike in TenFourFox where some are still in external dylibs (this meant only one big library had to be scanned and managed). In a future version it will cache runtime information and offsets, but this is still undergoing a lot of debugging, so caching this information right now is not too useful.
The first time I got it to run (which was a delight), I expected that it would be hard to get everything to fit on the screen, but this was ridiculous:
There are three things wrong here (but one turned out serendipitously to be right). The first is that the buttons and text fields somehow picked up outlines. However, this works really well on a 1-bit screen, so I kept the side effect (drawing over native controls which actually contain the hit areas). The second is that the text on the shaded buttons is tough to read. But the worst is that the entire screen -- because remember it came from TenFourFox -- is rendering at 96dpi on a 72dpi display, so everything comes out about 33% too large (96/72 ≈ 1.33).
So I hacked layout to assume that the screen DPI was 72. This was a little harder than I thought it would be and had some glitches (especially because image resizing was sometimes necessary), but I was able to get it to basically work. I also changed the font renderer to prefer either white or black depending on the background it was against, and then knock out pixels of the shading around it so that it remained legible. This was the result (against yesterday's Cesar Chavez Google doodle):
Image rendering is done using Floyd-Steinberg dithering to give the nicest fidelity, but solid shades drawn by the browser use a pattern diffusion since text rendering and knockout are more readable that way. The browser also computes a colour cube for background shades to further assist legibility. If there is no white background anywhere on the page, then the browser draws the lightest shade as white, and adjusts the others accordingly: after all, since there's no way the browser can faithfully render colours, why not have it just make good solid legible colour choices? You can see a nice example of this on Low End Mac below. The logo is using error diffusion, but the option bar is using pattern diffusion, and the font on top was rendered in white with knockouts to make it readable. (By the way, WOFF does work. It's downloaded to disk and cached.)
This adaptive layout strategy does not come cheap. On Mini vMac on the quad G5, with no speed control (balls to the wall), it took about 90 seconds to bring Google up, and about a minute for LEM. On my test SE/30, it took about three minutes for Google. I need to figure out some way of improving that within the unavoidable overhead of the linker system.
As far as the JIT goes, I am proud to say that on my test SE/30, SunSpider not only ran for probably the first time ever on a 68K Mac, but also completed in under three hours! Excellent! However, SunSpider doesn't seem to be able to handle such long timings all that well, as you can see:
And then there's Facebook:
Well, there's still some work to be done, obviously. I think Zuckerberg should work harder on making Facebook accessible to all users, including 68K Macs.
I'll hopefully have test builds for 68K TenFourFox available shortly, along with a source dump and full build instructions. It's a great day for browsing on your vintage Mac! And as far as 20 on the PowerPC, look for it later this week.