Friday, September 30, 2016

gdb7 patchlevel 4 available

First, in the "this makes me happy" dept.: a Commodore 64 in a Gdansk, Poland auto repair shop is still punching a clock after a quarter-century. Take that, Phil Schiller, you elitist pig.

Second, as promised, patchlevel 4 of the TenFourFox debugger (our hacked version of gdb) is available from SourceForge. This is a minor bugfix update that wallpapers a crash when doing certain backtraces or other operations requiring complex symbol resolution. However, the minimum patchlevel to debug TenFourFox is still 2, so this upgrade is merely recommended, not required.

Wednesday, September 28, 2016

Firefox OS goes Tier WONTFIX

I suppose it shouldn't be totally unexpected given the end of FirefoxOS phone development a few months ago, but a platform going from supported to take-it-completely-out-of-mozilla-central in less than a year is rather startling: not only has commercial development on FirefoxOS completely ceased (at version 2.6), but the plan is to remove all B2G code from the source tree entirely. It's not an absolutely clean comparison to us because some APIs are still relevant to current versions of OS X/macOS, but even support for our now ancient cats was only gradually removed in stages from the codebase, and some portions of pre-10.4 code persisted until relatively recently. The new state of FirefoxOS, where platform support is actually unwelcome in the repository, is beyond the lowest state of Tier-3, where even our own antiquated port lives. Unofficially this would be something like the "tier WONTFIX" BDS referenced in mozilla.dev.planning a number of years ago.

There may be a plan to fork the repository, but they'd need someone crazy dedicated to keep chugging out builds. We're not anything on that level of nuts around here. Noooo.

Tuesday, September 27, 2016

And now for something completely different: Rehabilitating the Performa 6200?

The Power Macintosh 6200, in its many Performa variants, has one of the worst reputations of any Mac, and the pitifully small L1 caches of its 603 CPU add insult to injury (the 603's poor 68K emulation performance was part of the reason Apple held up the PowerBook's migration to PowerPC until the 603e, and then screwed it up with the PowerBook 5300, a unit that is IMHO judged overly harshly by history, but not without justification). LowEndMac has a long list of its perceived faults.

But every unloved machine has its defenders, and I noticed that the Wikipedia entry on the 6200 series radically changed recently. The "Dtaylor372" listed in the edit log appears to be this guy, one "Daniel L. Taylor". If it is, here's his reasoning why the seething hate for the 6200 series should be revisited.

Daniel does make some cogent points, cites references, and even tries to back some of them up with benchmarks (heh). He helpfully includes a local copy of Apple's tech notes on the series, though let's be fair here -- Apple is not likely to say anything unbecoming in that document. That said, the effort is commendable even if I don't agree with everything he's written. I'll just cite some of what I took as highlights and you can read the rest.

  • The Apple tech note says, "The Power Macintosh 5200 and 6200 computers are electrically similar to the Macintosh Quadra 630 and LC 630." It might be most accurate to say that these computers are Q630 systems with an on-board PowerPC upgrade. It's an understatement to observe that's not the most favourable environment for these chips, but it would have required much less development investment, to be sure.

  • He's right that the L2 cache, which is on a 64-bit bus and clocked at the actual CPU speed, certainly does mitigate some of the problems with the Q630's 32-bit interface to memory, and 256K L2 in 1995 would have been a perfectly reasonable amount of cache. (See page 29 for the block diagram.) A 20-25% speed penalty (his numbers), however, is not trivial and I think he underestimates how this would have made the machines feel comparatively in practice even on native code.

  • His article claims that both the SCSI bus and the serial ports have DMA, but I don't see this anywhere in the developer notes (and at least one source contradicts him). While the NCR controller that the F108 ASIC incorporates does support it, I don't see where this is hooked up. More to the point, the F108's embedded IDE controller -- because the 6200 actually uses an IDE hard drive -- doesn't have DMA set up either: if the Q630 is any indication, the 6200 is also limited to PIO Mode 3. While this was no great sin when the Q630 was in production, it was verging on unacceptable even for a low-to-midrange system by the time the 6200 hit the market. More on that in the next point.

    Do note that the Q630 design does support bus mastering, but not from the F108. The only entities which can be bus master are the CPU and either the PDS expansion card or the communications card, via the PrimeTime II IC "southbridge."

  • Daniel makes a very well-reasoned assertion that the computer's major problems were due to software instead of hardware design, which is at least partially true, but I think his objections are oversimplified. Certainly the Mac OS (that's with a capital M) was not well-suited for handling the real-time demands of hardware: ADB, for example, requires quite a bit of polling, and the OS could not service the bus sufficiently often to make it effective for large-volume data transfer (condemning it to a largely HID-only capacity, though that's all it was really designed for). Even interrupt-driven device drivers could be problematic; a large number of interrupts pending simultaneously could get dropped (the limit on outstanding secondary interrupt requests prior to MacOS 9.1 was 40, see Apple TN2010) and a badly-coded driver that did not shunt work off to a deferred task could prevent other drivers from servicing their devices, because those other interrupts were disabled while the bad driver tied up the machine (a sketch of the deferred-task pattern follows this list).

    That said, however, these were hardly unknown problems at the time, and the design's lack of DMA where it counts forces an abnormal reliance on software to move data -- something the MacOS, for those and other reasons, was definitely not up to, and the speed hit didn't help. Compare this design with the 9500's full PCI bus, 64-bit interface and hardware assist: even though the 9500 was positioned at a very different market segment, and the weak 603 implementation is no comparison to the 604, the 9500 ran the same operating system with considerably fewer problems, which doesn't absolve the 6200 of its other deficiencies (he does concede that his assertions to the contrary do "not mean that [issues with redraw, typing and audio on the 6200s] never occurred for anyone," though his explanation of why is of course different). Although Daniel states that relaying traffic for an Ethernet card "would not have impacted Internet handling" based on his estimates of actual bandwidth, the real rate-limiting step here is how quickly the CPU, and by extension the OS, can service the controller. While the comm slot at least could bus master, that only helps when it's actually serviced to initiate a transfer. My personal suspicion is that the changes in OpenTransport 1.3 reduced a lot of the latency issues in earlier versions of OT, which is why MacOS 8.1 was widely noted to smooth out a lot of the 6200's alleged network problems. But even at the time of these systems' design, Copland (the planned successor to System 7) was already showing glimmers of trouble, and no one seriously expected the MacOS to explosively improve during the machines' likely sales life. Against that historical backdrop the 6200 series could have been much better from the beginning if the component machines had been engineered to deal with what the OS couldn't in the first place.
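
For those of us who never wrote classic drivers, the deferred-task discipline mentioned above looks something like the following minimal sketch against the Deferred Task Manager (the names and the driver's actual work are hypothetical, and initialization error handling is elided):

    #include <OSUtils.h>

    static DeferredTask    gDeferredTask;     /* must persist until it runs  */
    static DeferredTaskUPP gDeferredWorkUPP;  /* created once at driver init */

    /* Runs later at deferred-task time with interrupts re-enabled, so the
       slow work no longer holds off every other device in the machine. */
    static pascal void MyDeferredWork(long dtParam)
    {
        /* drain the device's buffer, complete I/O requests, etc. */
    }

    /* Called once at normal execution level. */
    static void MyDriverInit(void)
    {
        gDeferredWorkUPP = NewDeferredTaskUPP(MyDeferredWork);
    }

    /* The interrupt handler itself does the bare minimum. */
    static void MyInterruptHandler(void)
    {
        /* acknowledge the hardware quickly, then queue the heavy
           lifting instead of doing it with other interrupts held off */
        gDeferredTask.qType   = dtQType;
        gDeferredTask.dtFlags = 0;
        gDeferredTask.dtAddr  = gDeferredWorkUPP;
        gDeferredTask.dtParam = 0;
        DTInstall(&gDeferredTask);
    }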

In the United States, at least, the Power Macintosh 6200 family was only ever sold under the budget "Performa" line, and you should read that as Michael Spindler being Spindler, i.e., cheap. In that sense putting as little extra design money into it as possible wasn't ill-conceived, even if it was crummy, and I will freely admit my own personal bias in that I've never much cared for the Quadra 630 or its derivatives because there were better choices then and later. I do have to take my hat off to Daniel for trying to salvage the machine's bad image, and he goes a long way towards dispelling some of the more egregious misconceptions, but crummy's still as crummy does. I think the best that can be said here is that while the 6200 family is likely better than its reputation, even with careful reconsideration of its alleged flaws it is still notably worse than its peers.

Friday, September 23, 2016

TenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0, I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes" since we have until November 8 for the next release and I haven't decided everything I'll put in it, so while I continue to do more work I figured I'd give you something to play with. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. Simply doing that fixed a number of other bugs which were probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead, and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable), the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five or six second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than in 38 due to a different browser feature profile, and if the track runs out, somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode, and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine and one of the inverse Hadamard transforms vectorized; I also wrote vector code for two of the convolutions, but they malfunction on the iMac G4 and it seems faster without them because a lot of these routines work on unaligned data. Overall, our code really outshines the SSE2 versions I based it on, if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this pattern occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.
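
As a small taste of why the AltiVec versions come out tighter (a minimal sketch with hypothetical names, not the actual TenFourFox routines): where SSE2 needs _mm_madd_epi16() followed by _mm_add_epi32() to fold in the rounding bias, vec_msum() takes the bias as its third operand and does the whole thing in one instruction.

    #include <altivec.h>

    /* Multiply-sum eight pairs of 16-bit coefficients and pixels and
       accumulate the rounding bias in the same instruction; the SSE2
       version needs a separate add for the bias after its multiply-sum. */
    static inline vector signed int
    msum_with_bias(vector signed short coeffs,
                   vector signed short pixels,
                   vector signed int bias)
    {
        return vec_msum(coeffs, pixels, bias);
    }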

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now, and I recommend turning annotations off as well. Pausing the video while the rest of the page loads, and before changing your quality setting, is always a good idea; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, because Google will "auto-"control the stream bitrate without further configuration, and makes that decision based on network speed rather than dropped frames, I'm leaving the "slower" appellation, because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary; the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, there will be more and more code our big-endian machines can't run properly being casually imported into sites. We saw this with images on Facebook, and later with WhatsApp Web, and also with MEGA.nz, and others, and so on, and so forth. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals and doubles are still tag followed by payload, and integers and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer as long as those accesses are integer.
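
In other words (a minimal C sketch of the concept with hypothetical helper names, not the actual TenFourFox patch):

    #include <stdint.h>
    #include <string.h>

    /* Integer typed-array elements are kept little-endian in the backing
       store even though the engine itself stays big-endian, so every
       access round-trips through a swap. */
    static void typedarray_store32(uint8_t *heap, size_t offset, uint32_t v)
    {
        uint32_t le = __builtin_bswap32(v);   /* BE native -> LE storage */
        memcpy(heap + offset, &le, sizeof(le));
    }

    static uint32_t typedarray_load32(const uint8_t *heap, size_t offset)
    {
        uint32_t le;
        memcpy(&le, heap + offset, sizeof(le));
        return __builtin_bswap32(le);         /* LE storage -> BE native */
    }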

I mentioned that floats (neither single-precision values nor doubles) are not byteswapped, and there's an important reason for that. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byteswapped) go through the same code path. After I spent a couple of days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark whether the operation actually accesses a typed array. I added that to the IonBuilder and now Ion-compiled code works too.

All of these integer accesses come at almost no penalty: there's a little bit of additional overhead in the interpreter, but Baseline and Ion simply substitute the PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) we already employ for irregexp, so JITted code incurs virtually no extra runtime overhead at all. Although the PowerPC specification warns that byteswapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls into that category, and they aren't "cracked" on G5 either. The pseudo-little-endian mode that exists on G3/G4 systems but not on the G5 is separate from these assembly language instructions, which work on all PowerPCs going all the way back to the original 601.
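
For illustration, here's what the swapped load looks like outside the JIT (a hypothetical standalone helper; Baseline and Ion emit the instruction directly):

    #include <stdint.h>

    /* lwbrx loads a 32-bit word byte-reversed in a single instruction,
       so the "little-endian" load costs no more than a plain lwz. */
    static inline uint32_t load32_le(const uint32_t *p)
    {
        uint32_t v;
        __asm__("lwbrx %0,0,%1" : "=r"(v) : "r"(p), "m"(*p));
        return v;
    }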

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single or double precision value in a byteswapped fashion, and since there are also no direct general purpose register-floating point register moves, the float has to be spilled to memory and picked up by a GPR (or two, if it's a double) and then swapped at that point to complete the operation. To get it back requires reversing the process, along with the GPR (or two) getting spilled this time to repopulate the double or float after the swap is done. All that would have significantly penalized float arrays and we have enough performance problems without that, so single and double precision floating point values remain big-endian.
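
To see why, here's roughly what a byteswapped double store would have to do (a sketch with a hypothetical helper name, not the actual JIT sequence):

    #include <stdint.h>
    #include <string.h>

    /* There is no byte-reversed floating point store, and 32-bit PowerPC
       has no direct FPR-to-GPR move, so the double must spill through
       memory, get picked up as two GPRs and be swapped -- and the two
       words trade places as well. */
    static void store_double_le(uint8_t *dst, double d)
    {
        uint32_t w[2];
        memcpy(w, &d, sizeof(d));            /* spill the FPR to memory  */
        uint32_t lo = __builtin_bswap32(w[1]);
        uint32_t hi = __builtin_bswap32(w[0]);
        memcpy(dst, &lo, 4);                 /* LE double: low word first */
        memcpy(dst + 4, &hi, 4);
    }

Loading goes the other way: two integer loads, two swaps, a spill and a floating point load to repopulate the FPR.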

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds of elapsed time to get to a root prompt. But it really works and it's usable.

Getting into ridiculous territory was running Linux on OpenRISC:

This is the jor1k emulator and it's only for the highest-end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- it's not that the dog walked well; it's that it walked at all.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots):

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenoberated DOSBOX, notice that I only said some games run in TenFourFox, not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got part way into the boot sequence and then threw a JavaScript exception "SimulateInfiniteLoop"; the Internet Archive's arcade games under MAME which starts up and then exhausts recursion and aborts (this seems like it should be fixable or tunable, but I haven't explored this further so far); and of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten's goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. However, that's not ever going to change, because it would require making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else -- say, something inadvertently written in such a way that the resulting data gets byteswapped an asymmetric number of times, or some other such violation of assumptions. Another is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid-endian engine like this one, and thus I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for web applications we can run like an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough at least for most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40 difference), the compilation step might bring lesser systems to their knees, and it would require some significant additional engineering to get it off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big massive applications well even with execution time cut in half or even by two-thirds (and some of them don't work correctly as it is), it might seem a real case of diminishing returns to make that investment of effort. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them, and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of macOS, where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late generation G4 systems are true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late laptop G4 systems, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple said six for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance/watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself, April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e had over twice the integer performance of a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors will admit it was at least competitive. That time clearly wasn't when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really, since he was addressing ARM more than PowerPC, but it sort of is. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherrypick benchmarks to favour what suits them, which is absolutely true and I just did it myself, but Apple isn't any different than anyone else in that regard (put away the "tu quoque" please), and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record, I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with the period when the PowerPC roadmap was running out of gas in the 2004-5 timeframe, by which time even boosters like yours truly would have conceded the gap was widening but Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned and in the meantime try it and see if you like it. Post your comments, and, once you've played a few videos or six, what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Sunday, September 11, 2016

Ars Technica's notes from the OS 9 underground

Richard Moss has published his excellent and very comprehensive survey of the state of the Mac OS 9 userbase in Ars Technica. I think it's a detailed and very evenhanded summary of why there are more people than you'd think using a (now long obsolete) operating system that still maintains more utility than you'd believe.

Naturally much of my E-mail interview with him could not be used in the article (I expected that) and I think he's done a fabulous job balancing those various parts of the OS 9 retrocomputing community. Still, there are some things I'd like to see entered into posterity publicly from that interview and with his permission I'm posting that exchange here.

Before doing so, though, just a note to Classilla users. I do have some work done on a 9.3.4 which fixes some JavaScript bugs, has additional stelae for some other site-specific workarounds and (controversially, because this will impact performance) automatically fixes screen update problems with many sites using CSS overflow. (They still don't lay out properly, but they will at least scroll mostly correctly.) I will try to push that out as a means of keeping the fossil fed. TenFourFox remains my highest priority because it's the browser I personally dogfood 90% of the time, but I haven't forgotten my roots.

The interview follows. Please pardon the hand-conversion to HTML; I wrote this in plain text originally, as you would expect no less from me. This was originally written in January 2016, and, for the record, on a 1GHz iMac G4.

***

Q. What's your motivation for working on Classilla (and TenFourFox, but I'm mostly interested in Classilla for this piece)?

A. One of the worst things that dooms otherwise functional hardware to apparent obsolescence is when "they can't get on the Internet." That's complete baloney, of course, since they're just as capable of opening a TCP socket and sending and receiving data now as they were then (things like IPv6 on OS 9 notwithstanding, of course). Resources like Gopherspace (a topic for another day) and older websites still work fine on old Macs, even ones based on the Motorola 680x0 series.

So, realistically, the problem isn't "the Internet" per se; some people just want to use modern websites on old hardware. I really intensely dislike the idea that the ability to run Facebook is the sole determining factor of whether a computer is obsolete to some people, but that's the world we live in now. That said, it's almost uniformly a software issue. I don't see there being any real issues as far as hardware capability, because lots of people dig out their old P3 and P4 systems and run browsers on them for light tasks, and older G4 and G3 systems (and even arguably some 603 and 604s) are more than comparable.

Since there are lots of x86 systems, there are lots of people who want to do this, and some clueful people who can still get it to work (especially since operating system and toolchain support is still easy to come by). This doesn't really exist for architectures out of the mainstream like PowerPC, let alone for a now almost byzantine operating system like Mac OS 9, but I have enough technical knowledge and I'm certifiably insane and by dumb luck I got it to work. I like these computers and I like the classic Mac OS, and I want them to keep working and be "useful." Ergo, Classilla.

TenFourFox is a little more complicated, but the same reason generally applies. It's a bit more pointed there because my Quad G5 really is still my daily driver, so I have a lot invested in keeping it functional. I'll discuss this more in detail at the end.

Q. How many people use Classilla?

A. Hard to say exactly since unlike TenFourFox there is no automatic checkin mechanism. Going from manual checkins and a couple back-of-the-napkin computations from download counts, I estimate probably a few thousand. There's no way to know how many of those people use it exclusively, though, which I suspect is a much smaller minority.

Compare this with TenFourFox, which has much more reliable numbers; the figure, which has actually been slowly growing since there are no other good choices for 10.4 and fewer and fewer for 10.5, has been a steady 25,000+ users, with about 8,000 checkins occurring on a daily basis. Those 30% or so are almost certainly daily drivers.

Q. Has it been much of a challenge to build a modern web browser for OS 9? The problems stem from more than just a lack of memory and processing speed, right? What deeper issues have you had to contend with?

A. Classilla hails as a direct descendant of the old Mozilla Suite (along with SeaMonkey/Iceweasel, it's the only direct descendant still updated in any meaningful sense), so the limitations mostly come from its provenance. I don't think anyone who worked on the Mac OS 9 compatible Mozilla will dispute that the build system is an impressive example of barely controlled disaster. It's actually a MacPerl script that sends AppleEvents to CodeWarrior and MPW Toolserver to get things done (primarily the former, but one particularly problematic section of the build requires the latter), and as an example of its fragility, if I switch the KVM while it's trying to build stubs, it hangs up and I usually have to restart the build. There's a lot of hacking to make it behave and I rarely run the builder completely from the beginning unless I'm testing something. The build system is so intimidating few people have been able to replicate it on their computers, which has deterred all but the most motivated (or masochistic) contributors. That was a problem for Mozilla too back in the day, I might add, and they were only too glad to dump OS 9 and move to OS X with Mozilla 1.3.

Second, there's no Makefiles, only CodeWarrior project files (previously it actually generated them on the fly from XML templates, but I put a stop to that since it was just as iffy and no longer necessary). Porting code en masse usually requires much more manual work for that reason, like adding new files to targets by hand and so on, such as when I try to import newer versions of libpng or pieces of code from NSS. This is a big reason why I've never even tried to take entire chunks of code like layout/ and content/ even from later versions of the Suite; trying to get all the source files set up for compilation in CodeWarrior would be a huge mess, and wouldn't buy me much more than what it supports now. With the piecemeal hacks in core, it's already nearly to Mozilla 1.7-level as it is (Suite ended with 1.7.13).

Third is operating system support. Mozilla helpfully swallows up much of the ugly guts in the Netscape Portable Runtime, and NSPR is extremely portable, a testament to its good design. But that doesn't mean there weren't bugs and Mac OS 9 is really bad at anything that requires multithreading or multiprocessing, so some of these bugs (like a notorious race condition in socket handling where the socket state can change while the CPU is busy with something else and miss it) are really hard to fix properly. Especially for Open Transport networking, where complex things are sometimes easy but simple things are always hard, some folks (including Mozilla) adopted abstraction layers like GUSI and then put NSPR on top of the abstraction layer, meaning bugs could be at any level or even through subtleties of their interplay.

Last of all is the toolchain. CodeWarrior is pretty damn good as a C++ compiler and my hat is off to Metrowerks for the job they did. It had a very forward-thinking feature set for the time, including just about all of C++03 and even some things that later made it into C++11. It's definitely better than MPW C++ was and light-years ahead of the execrable classic Mac gcc port, although admittedly rms' heart was never in it. Plus, it has an outstanding IDE even by modern standards and the exceptional integrated debugger has saved my pasty white butt more times than I care to admit. (Debugging in Macsbug is much like walking in a minefield on a foggy morning with bare feet: you can't see much, it's too easy to lose your footing and you'll blow up everything to smithereens if you do.) So that's all good news and has made a lot of code much more forward-portable than I could ever have hoped for, but nothing's ever going to be upgraded and no bugs will ever be fixed. We can't even fix them ourselves, since it's closed source. And because it isn't C++11 compliant, we can forget about pulling in substantially more recent versions of the JavaScript interpreter or realistically anything else much past Gecko 2.

Some of the efficiencies possible with later code aren't needed by Classilla to render the page, but going without them certainly does make it slower. OS 9 is very quick on later hardware and I do my development work on a Power Mac G4 MDD with a Sonnet dual 1.8GHz 7447A upgrade card, so it screams. But that's still not enough to get layout to complete on some sites in a timely fashion even if Classilla eventually is able to do it, and we've got no JIT at all in Classilla.

Mind you, I find these challenges stimulating. I like the feeling of getting something to do tasks it wasn't originally designed to do, sort of like a utilitarian form of the demoscene. Constraints like these require a lot of work and may make certain things impossible, so it requires a certain amount of willingness to be innovative and sometimes do things that might be otherwise unacceptable in the name of keeping the port alive. Making the browser into a series of patches upon patches is surely asking for trouble, but there's something liberating about that level of desperation, anything from amazingly bad hacks to simply accepting a certain level of wrong behaviour in one module because it fixes something else in another to ruthlessly saying some things just won't be supported, so there.

Q. Do you get much feedback from people about the browser? What sorts of things do they say? Do you hear from the hold-outs who try to do all of their computing on OS 9 machines?

A. I do get some. Forcing Classilla to prefer mobile sites actually dramatically improved its functionality, at least for some period of time, until sites started assuming everyone was on some sufficiently recent version of iOS or Android. That wasn't a unanimously popular decision, but it worked pretty well, at least for the time. I even ate my own dogfood and took nothing but an OS 9 laptop with me on the road periodically (one time I took it to Leo Laporte's show back in the old studio, much to his amazement). It was enough for E-mail, some basic Google Maps and a bit of social media.

Nowadays I think people are reasonable about their expectations. The site doesn't have to look right or support more than basic functionality, but they'd like it to do at least that. I get occasional reports from one user who for reasons of his particular disability cannot use OS X, and so Classilla is pretty much his only means of accessing the Web. Other people don't use it habitually, but have some Mac somewhere that does some specific purpose that only works in OS 9, and they'd like a browser there for accessing files or internal sites while they work. Overall, I'd say the response is generally positive that there's something that gives them some improvement, and that's heartening. Open source ingrates are the worst.

The chief problem is that there's only one of me, and I'm scrambling to get TenFourFox 45 working thanks to the never-ending Mozilla rapid release treadmill, so Classilla only gets bits and pieces of my time these days. That depresses me, since I enjoy the challenge of working on it.

Q. What's your personal take on the OS 9 web browsing experience?

A. It's ... doable, if you're willing to be tolerant of the appearance of pages and use a combination of solutions. There are some other browsers that can service this purpose in a limited sense. For example, the previous version of iCab on classic Mac is Acid2 compliant, so a lot of sites look better, but its InScript JavaScript interpreter is glacial and its DOM support is overall worse than Classilla's. Internet Explorer 5.1 (and the 5.5 beta, if you can find it) is very fast on those sites it works on, assuming you can find any. At least when it crashes, it does that fast too! Sometimes you can even get Netscape 4.8 to be happy with them or at least the visual issues look better when you don't even try to render CSS. Most times they won't look right, but you can see what's going on, like using Lynx.

Unfortunately, none of those browsers have up-to-date certificate stores or ciphers and some sites can only be accessed in Classilla for that reason, so any layout or speed advantages they have are negated. Classilla has some other tricks to help on those sites it cannot natively render well itself. You can try turning off the CSS entirely; you could try juggling the user agent. If you have some knowledge of JavaScript, you can tell Classilla's Byblos rewrite module to drop or rewrite problematic portions of the page with little snippets called stelae, basically a super-low-level Greasemonkey that works at the HTML and CSS level (a number of default ones are now shipped as standard portions of the browser).

Things that don't work at all generally require DOM features Classilla does not (yet) expose, or aspects of JavaScript it doesn't understand (I backported Firefox 3's JavaScript to it, but that just gives you the syntax, not necessarily everything else). This aspect is much harder to deal with, though some inventive users have done it with limited success on certain sites.

You can cheat, of course. I have Virtual PC 6 on my OS 9 systems, and it is possible (with some fiddling in lilo) to get it to boot some LiveCDs successfully -- older versions of Knoppix, for example, can usually be coaxed to start up and run Firefox and that actually works. Windows XP, for what that's worth, works fine too (I would be surprised if Vista or anything subsequent does, however). The downside to this is the overhead is a killer on laptops and consumes lots of CPU time, and Linux has zero host integration, but if you're able to stand it, you can get away with it. I reserved this for only problematic sites that I had to access, however, because it would bring my 867MHz TiBook to its knees. The MDD puts up with this a lot better but it's still not snappy.

If all this sounds like a lot of work, it is. But at least that makes it possible to get the majority of Web sites functional to some level in OS 9 (and in Classilla), at least one way or another, depending on how you define "functional." To that end I've started focusing now on getting specific sites to work to some personally acceptable level rather than abstract wide-ranging changes in the codebase. If I can't make the core render it correctly, I'll make some exceptions for it with a stele and ship that with the browser. And this helps, but it's necessarily centric to what I myself do with my Mac OS 9 machines, so it might not help you.

Overall, you should expect to do a lot of work yourself to make OS 9 acceptable with the modern Web and you should accept the results are at best imperfect. I think that's what ultimately drove Andrew Cunningham up the wall.

I'm going to answer these questions together:

Q1. How viable do you think OS 9 is as a primary operating system for someone today? How viable is it for you?
[...]
Q2. What do you like about using older versions of Mac OS (in this case, I'm talking in broad terms - so feel free to discuss OS X Tiger and PPC hardware as well)? Why not just follow the relentless march of technology? (It's worth mentioning here that I actually much prefer the look and feel of classic MacOS and pre-10.6 OS X, but for a lot of my own everyday computing I need to use newer, faster machines and operating systems.)

A. I'm used to a command line and doing everything in text. My Mac OS 9 laptop has Classilla and MacSSH on it. I connect to my home server for E-mail and most other tasks like Texapp for command-line posting to App.net, and if I need a graphical browser, I've got one. That covers about 2/3rds of my typical use case for a computer. In that sense, Mac OS 9 is, at least, no worse than anything else for me. I could use some sort of Linux, but then I wouldn't be able to easily run much of my old software (see below). If I had to spend my time in OS 9 even now, with a copy of Word and Photoshop and a few other things, I think I could get nearly all of my work done, personally. There is emulation for the rest. :)

I will say I think OS 9 is a pleasure to use relative to OS X. Part of this is its rather appalling internals, which in this case is necessity made virtue; I've heard it said OS 9 is just a loose jumble of libraries stacked under a file browser and that's really not too far off. The kernel, if you can call it that, is nearly non-existent -- there's a nanokernel, but it's better considered as a primitive hypervisor. There is at best token support for memory protection and some multiprocessing, but none of it is easy and most of it comes with severe compromises. But because there isn't much to its guts, there's very little between you and the hardware. I admit to having an i7 MBA running El Crapitan, and OS 9 still feels faster. Things happen nearly instantaneously, something I've never said about any version of OS X, and certain classes of users still swear by its exceptionally low latency for applications such as audio production. Furthermore, while a few annoyances of the OS X Finder have gradually improved, it's still not a patch on the spatial nature of the original one, and I actually do like Platinum (de gustibus non disputandum, of course). The whole user experience feels cleaner to me even if the guts are a dog's breakfast.

It's for that reason that, at least on my Power Macs, I've said Tiger forever. Classic is the best reason to own a Power Mac. It's very compatible and highly integrated, to the point where I can have Framemaker open and TenFourFox open and cut and paste between them. There's no Rhapsody full-screen blue box or SheepShaver window that separates me from making Classic apps first-class citizens, and I've never had a Classic app take down Tiger. Games don't run so well, but that's another reason to keep the MDD around, though I play most of my OS 9 games on a Power Mac 7300 on the other desk. I've used Macs partially since the late 1980s and exclusively since the mid-late 1990s (the first I owned personally was a used IIsi), and I have a substantial investment in classic Mac software, so I want to be able to have my cake and eat it too. Some of my preference for 10.4 is also aesthetic: Tiger still has the older Mac gamma, which my eyes are accustomed to after years of Mac usage, and it isn't the dreary matte grey that 10.5 and up became infested with. These and other reasons are why I've never even considered running something like Linux/PPC on my old Macs.

Eventually it's going to be the architecture that dooms this G5. This Quad is still sufficient for most tasks, but the design is over ten years old, and it shows. Argue the PowerPC vs x86 argument all you like, but even us PPC holdouts will concede the desktop battle was lost years ago. We've still got a working gcc and we've done lots of hacking on the linker, but Mozilla now wants to start building Rust into Gecko (and Servo is, of course, all Rust), and there's no toolchain for that on OS X/ppc, so TenFourFox's life is limited. For that matter, so is general Power Mac software development: other than freaks like me who still put -arch ppc -arch i386 in our Makefiles, Universal now means i386/x86_64, and that's not going to last much longer either. The little-endian web (thanks to asm.js) even threatens that last bastion of platform agnosticism. These days the Power Mac community lives on Pluto, looking at a very tiny dot of light millions of miles away where the rest of the planets are.

So, after TenFourFox 45, TenFourFox will become another Classilla: a fork off Gecko for a long-abandoned platform, with later things backported to it to improve its functionality. Unlike Classilla it won't have the burden of six years of being unmaintained and the toolchain and build system will be in substantially better shape, but I'll still be able to take the lessons I've learned maintaining Classilla and apply it to TenFourFox, and that means Classilla will still live on in spirit even when we get to that day when the modern web bypasses it completely.

I miss the heterogeneity of computing when there were lots of CPUs and lots of architectures and ultimately lots of choices. I think that was a real source of innovation back then and much of what we take for granted in modern systems would not have been possible were it not for that competition. Working in OS 9 reminds me that we'll never get that diversity back, and I think that's to our detriment, but as long as I can keep that light on it'll never be completely obsolete.

***

Saturday, September 10, 2016

TenFourFox 45.4.0 available (plus: priorities for feature parity and down with Dropbox)

With the peerless level of self-proclaimed courageousness necessary to remove a perfectly good headphone jack for a perfectly stupid reason (because apparently no one at Apple charges their phone and listens to music at the same time), TenFourFox 45.4.0 is released (downloads, hashes, release notes). This will be the final release in the beta cycle and assuming no critical problems will become the public official release sometime Monday Pacific time. Localizations will be frozen today and uploaded to SourceForge sometime this evening or tomorrow, so get them in ASAP.

The major change in this release is additional tweaking to the MediaSource implementation; I'm now more comfortable with how it functions on G4 systems thanks to a combination of some additional later patches I backported and adjustments to our own hacks, which not only aggressively report dropped frames but also force rebuffering if needed. The G4 systems now no longer seize and freeze (and, occasionally, fatally assert) on some streams, and the audio never becomes unsynchronized, though there is some stuttering if the system is too overworked trying to keep the video and audio together. That said, I'm going to keep MediaSource off for 45.4 so that there will be as little unnecessary support churn as possible while you test it (if you haven't already done so, turn media.mediasource.enabled to true in about:config; do not touch the other options). In 45.5, assuming there are no fatal problems with it (I don't consider performance a fatal flaw, just an important one), it will be the default, and it will be surfaced as an option in the upcoming TenFourFox-specific preference pane.

However, to make the most of MediaSource we're going to need AltiVec support for VP9 (we only have it for VP3 and VP8). While upper-spec G5 systems can just power through the decoding process (though this might make hi-def video finally reasonable on the last generation machines), even high-spec G4 systems have impaired decoding due to CPU and bus bandwidth limitations and the low-end G4 systems are nearly hopeless at all but the very lowest bitrates. Officially I still have a minimum 1.25GHz recommendation but I'm painfully aware that even those G4s barely squeak by. We're the only ones who ever used libvpx's moldy VMX code for VP8 and kept it alive, and they don't have anything at all for VP9 (just x86 and ARM, though interestingly it looks like MIPS support is in progress). Fortunately, the code was clearly written to make it easier for porters to hand-vectorize and I can do this work incrementally instead of having to convert all the codec pieces simultaneously.

Interestingly, even though our code now faithfully and fully reports every single dropped frame, YouTube doesn't seem to do anything with this information right now (if you right-click and select "Stats for nerds" you'll see our count dutifully increase as frames are skipped). It does downshift for network congestion, so I'm trying to think of a way to fool it and make dropped frames look like a network throughput problem instead. Doing so would technically be a violation of the spec but you can't shame that which has no shame and I have no shame. Our machines get no love from Google anyway so I'm perfectly okay with abusing their software.

I have a first draft written of the conversion of our minimp3 decoder to a platform decoder, but I haven't yet set it up or tested it, so this version still uses the old codec wrapper and still has the track-shifting problem with Amazon Music. That is probably the highest priority for 45.5 since it is an obvious regression from 38. On the security side, this release also disables RTCPeerConnection to eliminate the WebRTC IP "leak" (since I've basically given up on WebRTC for Power Macs). You can still reenable it from about:config as usual.

The top three priorities for the next couple of versions (with links to the open Github issues) are, highest first, fixing Amazon Music, AltiVec VP9 codepaths and the "little endian typed array" portion of IonPower-NVLE to fix site compatibility with little-endian generated asm.js. Much of this work will proceed in parallel and the idea is to have a beta 45.5 for you to test in a couple weeks. Other high-priority items on my list to backport include allowing WebKit/Blink to eat the web (read: supporting certain WebKit-prefixed properties to make us act the same as regular Firefox), support for ChaCha20+Poly1305, WebP images, expanded WebCrypto support, the "NV" portion of IonPower-NVLE and certain other smaller-scope HTML/CSS features. I'll be opening tracking issues for these as they enter my worklist, but I have not yet determined how I will version the browser to reflect these backported new features. For now we'll continue with 45.x.y while we remain on 45ESR and see where we end up.

As we look into the future, though, it's always instructive to compare it with the past. With the anticipation that even Google Code's Archive will be flushed down the Mountain View memory hole (the wiki looks like it's already gone, but you can get most of our old wikidocs from Github), I've archived 4.0b7, 4.0.3, 8.0, 10.0.11, 17.0.11 and Intel 17.0.2 on SourceForge along with their corresponding source changesets. These Google Code-only versions were selected as they were either terminal (quasi-)ESR releases or have historical or technical relevance (4.0b7 was our first beta release of TenFourFox ever "way back" in 2010, 8.0 was the last release that was pure tracejit which some people prefer, and of course Intel 17.0.2 was our one and so far only release on Intel Macs). There is no documentation or release notes; they're just there for your archival entertainment and foolhardiness. Remember that old versions run an excellent chance of corrupting your profile, so start them up with one you can throw away.

Finally, a good reason to dump Dropbox (besides the jerking around they give those of you trying to keep the PowerPC client working) is their rather shameful secret unauthorized abuse of your Mac's accessibility framework by forging an entry in the privacy database. (Such permissions allow it to control other applications on your Mac as if it were you at the user interface. The security implications of that should be immediately obvious, but if they're not, see this discussion.) The fact this is possible at all is a bug Apple absolutely must fix and apparently has in macOS Sierra, but exploiting it in this fashion is absolutely appalling behaviour on Dropbox's part because it won't even let you turn it off. To their credit they're taking their lumps on Hacker News and TechCrunch, but accepting their explanation of "only asking for privileges we use" requires a level of trust that frankly they don't seem worthy of and saying they never store your administrator password is a bit disingenuous when they use administrative access to create a whole folder of setuid binaries -- they don't need your password at that point to control the system. Moreover, what if there were an exploitable security bug in their software?

Mind you, I don't have a problem with apps requesting that access if I understand why and the request isn't obfuscated. As a comparison, GOG.com has a number of classic DOS games I love that were ported for DOSBox and work well on my MacBook Air. These require that same accessibility access for proper control of input methods. Though I hope they come up with a different workaround eventually, the GOG installer does explain why and does use the proper APIs for requesting that privilege, and you can either refuse on the spot or disable it later if you decide you're not comfortable with it. That's how it's supposed to work, but that's not what Dropbox did, and they intentionally hid it and the other inappropriate system-level things they were sneaking through. Whether out of a genuine concern for user experience or just trying to get around what they felt was an unnecessary security precaution, it's not acceptable and it's potentially exploitable, and they need to answer for that.

Watch for 45.4 going final in a couple days, and hopefully a 45.5 beta in a couple weeks.

Tuesday, September 6, 2016

System7SoonToBeYesterday

One of my periodic "drop by now and then" sites is System7Today, extolling the virtues of System 7 to those people still rocking 60x-series Power Macs (that's Mac OS 7.0 through 7.6.1 for those of you who only know Mac OS as starting with a lower-case m, and also, get off my lawn), and I was disheartened to see that the System7Today forums are going read-only. I regretfully understand his reasoning though one wonders what will happen to the rest of the site. It hasn't changed in years, but it hasn't had to, and I love the frozen-in-time mockery of the Apple front page from at least a decade or so prior.

The only Macs I have still running System 7 are all running 7.1 (not counting the NetBSD 68K systems, which just use System 7 as a bootloader): one is my IIci with a 50MHz DayStar '030 and a MacIvory Lisp card, another is my recapped SE/30 looking for a job to do, and the last is my super-cute Mystic Color Classic. I like 7.1 a lot more than 7.5 or 7.6, and you can transplant lots of the 7.5 CDEVs and such back to 7.1 for a slimmer but still feature-filled experience. That said, I have to confess that I jumped to OS 8, and then OS 9, whenever I got the chance on my Power Macs. Part of this was that I upgraded those systems aggressively -- all of my Old World Power Macs in regular use have G3 or G4 upgrade cards, including my beloved PowerBook 1400, so they all run 9.1 or 9.2.2 -- and part of it is to run Classilla, but the biggest reason was just that OS 8 looked nicer and felt better, and 8.1 and 8.6 were still pretty speedy. I still run OS 8 on my PowerBook 2300c and Quadra 800.

But still, nostalgia dies hard. (No doubt being a TenFourFox user, you'd empathize.) While I've got lots of classic System 7-era software backed up for posterity on the Floodgap gopher server, it just doesn't have System7Today's throwback vibe and playful attitude. I'll miss it dearly pending its inevitable slow-motion decommissioning, most of all because it remains a great example of doing the most you can with the little you've got. Typing this blog post on a 2002-vintage 1GHz iMac G4, us PowerPC holdouts are still living in that spirit today.

Watch for 45.4 later this week(end).

Thursday, September 1, 2016

Talos moves to CrowdSupply

It looks like we've cleared the hump: Raptor Engineering tells me that Talos, the upcoming POWER8 workstation and probably the most powerful non-x86 workstation-class system you can buy, has enough interest to move to a formal CrowdSupply launch. Sign up and stay tuned. I've still been saving my pennies and I'm definitely buying.