Tuesday, January 27, 2015

And now for something completely different: the Pono Player review and Power Macs (plus: who's really to blame for Dropbox?)

Regular business first: this is now a syndicated blog on Planet Mozilla. I consider this an honour that should also go a long way toward reminding folks that not only are there well-supported community tier-3 ports, but lots of people still use them. In return I promise not to bore the punters too much with vintage technology.

IonPower crossed phase 2 (compilation) yesterday -- it builds and links, and nearly immediately asserts after some brief codegen, but at this phase that's entirely expected. Next, phase 3 is to get it to build a trivial script in Baseline mode ("var i=0") and run to completion without crashing or assertions, and phase 4 is to get it to pass the test suite in Baseline-only mode, which will make it as functional as PPCBC. Phases 5 and 6 are the same, but this time for Ion. IonPower really repays most of our technical debt -- no more fragile glue code trying to keep the JaegerMonkey code generator working, substantially fewer compiler warnings, and far fewer hacks to the JIT to work around oddities of branching and branch optimization. Plus, many of the optimizations I wrote for PPCBC will transfer to IonPower, so it should still be nearly as fast in Baseline-only mode. We'll talk more about the changes required in a future blog post.

Now to the Power Mac scene. I haven't commented on Dropbox dropping PowerPC support (and 10.4/10.5) because that's been repeatedly reported by others in the blogscene and personally I rarely use Dropbox at all, having my own server infrastructure for file exchange. That said, there are many people who rely on it heavily; there's even a petition (which you can sign) to bring support back. But let's be clear here: do you really want to blame someone? Do you really want to blame the right someone? Then blame Apple. Apple dropped PowerPC compilation from Xcode 4; Apple dropped Rosetta. Unless you keep a 10.6 machine around running Xcode 3, you can't build (true) Universal binaries anymore -- let alone one that compiles against the 10.4 SDK -- and it's doubtful Apple would let such an app (even if you did build it) into the App Store because it's predicated on deprecated technology. Except for wackos like me who spend time building PowerPC-specific applications and/or don't give a flying cancerous pancreas whether Apple finds such work acceptable, this approach already isn't viable for a commercial business and it's becoming even less viable as Apple actively retires 10.6-capable models. So, sure, make your voices heard. But don't forget who screwed us first, and keep your vintage hardware running.

That said, I am personally aware of someone™ who is working on getting the supported Python interconnect running on OS X Power Macs, and it might be possible to rebuild Finder integration on top of that. (It's not me. Don't ask.) I'll let this individual comment if he or she wants to.

On to the main article. As many of you may or may not know, my undergraduate degree was actually in general linguistics, and all linguists must (obviously) have some working knowledge of acoustics. I've also been a bit of a poseur audiophile, and while I enjoy good music I especially enjoy good music that's well engineered (Alan Parsons is a demi-god).

The Pono Player, thus, gives me pause. In acoustics I lived and died by the Nyquist-Shannon sampling theorem, and my day job today is so heavily science- and research-oriented that I really need to deal with claims in a scientific, reproducible manner. That doesn't mean I don't have an open mind or won't make unusual decisions on a music format for non-auditory reasons. For example, I prefer to keep my tracks uncompressed, even though I freely admit that I'm hard pressed to find any difference in a 256kbit/s MP3 (let alone 320), because I'd like to keep a bitwise exact copy for archival purposes and playback; in fact, I use AIFF as my preferred format simply because OS X rips directly to it, everything plays it, and everything plays it with minimum CPU overhead, despite FLAC being lossless and smaller. And hard disks are cheap, and I can convert to FLAC for my Sansa Fuze if I need to.
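(Keeping bitwise-exact archival copies is also trivially verifiable by machine, which is half the point of the exercise; a minimal Python sketch -- filenames are hypothetical -- just compares checksums of the rips:)

```python
import hashlib

def audio_fingerprint(path, chunk_size=65536):
    """Return a SHA-256 digest of a file's bytes, for archival verification."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Two rips (or a rip and its restored backup) match iff their digests match:
# audio_fingerprint("rip.aiff") == audio_fingerprint("backup.aiff")
```

Any mismatch means the copy isn't the same bits anymore, regardless of how it sounds.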

So it is with the Pono Player. For $400, you can get a player that directly pumps uncompressed, high-quality remastered 24-bit audio at up to 192kHz into your ears with no downsampling and allegedly no funny business. Immediately my acoustics professor cries foul. "Cameron," she says as she writes a big fat F on this blog post, "you know perfectly well that a CD using 44.1kHz as its sampling rate will accurately reproduce sounds up to 22.05kHz without aliasing, and 16-bit audio has indistinguishable quantization error in multiple blinded studies." Yes, I know, I say sheepishly, having tried to create high bit-rate digital playback algorithms on the Commodore 64 and failed because the 6510's clock speed isn't fast enough to pump samples through the SID chip at anything much above telephone call frequencies. But I figured that if there was a chance, if there was anything that could demonstrate a difference in audio quality, I could uncover it with a Pono Player and a set of good headphones (I own a set of Grado SR125e cans, which are outstanding for the price). So I preordered one and yesterday it arrived, in a fun wooden box:

It includes a MicroUSB charger (and cable), an SDXC MicroSD card (64GB, plus the 64GB internal storage), a fawning missive from Neil Young, the instigator of the original Kickstarter, the yellow triangular unit itself (available now in other colours), and no headphones (it's BYO headset):

My original plan was to do an A-B comparison with Pink Floyd's Dark Side of the Moon because it was originally mastered by the godlike Alan Parsons, I have the SACD 30th Anniversary master, and the album is generally considered high quality in all its forms. When I tried to do that, though, several problems rapidly became apparent:

First, the included card is SDXC, and SDXC support (and exFAT) wasn't added to OS X until 10.6.4. Although you can get exFAT support on 10.5 with OSXFUSE, I don't know how good their support is on PowerPC and it definitely doesn't work on Tiger (and I'm not aware of a module for the older MacFUSE that does run on Tiger). That limits you to SDHC cards up to 32GB at least on 10.4, which really hurts on FLAC or ALAC and especially on AIFF.

Second, the internal storage is not accessible directly to the OS. I plugged in the Pono Player to my iMac G4 and it showed up in System Profiler, but I couldn't do anything with it. The 64GB of internal storage is only accessible to the music store app, which brings us to the third problem:

Third, the Pono Music World app (a skinned version of JRiver Media Center) is Intel-only, 10.6+. You can't download tracks any other way right now, which also means you're currently screwed if you use Linux, even on an Intel Mac. And all they had was Dark Side in 44.1kHz/16 bit ... exactly the same as CD!

So I looked around for other options. HDTracks didn't have Dark Side, though they did have The (weaksauce) Endless River and The Division Bell in 96kHz/24 bit. I own both of these, but 96kHz wasn't really what I had in mind, and when I signed up to try a track it turned out they require a downloader too -- which is also a reskinned JRiver! And their reasoning for this in the FAQ is total crap.

Eventually I was able to find two sites that offer sample tracks I could download in TenFourFox (I had to downsample one for comparison). The first offers multiple formats in WAV, which your Power Mac actually can play, even in 24-bit (but it may be downsampled for your audio chip; if you go to /Applications/Utilities/Audio MIDI Setup.app you can see the sample rate and quantization for your audio output -- my quad G5 offers up to 24/96kHz but my iMac only has 16/44.1). The second was in FLAC, which Audacity crashed trying to convert, MacAmp Lite X wouldn't even recognize, and XiphQT (via QuickTime) played like it was being held underwater by a chainsaw (sample size mismatch, no doubt); I had to convert this by hand. I then put them onto a SDHC card and installed it in the Pono.
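(Incidentally, you don't have to trust a store's labelling to know what a WAV download actually carries; the header tells you. A quick Python sketch with the standard wave module -- the path is hypothetical:)

```python
import wave

def wav_info(path):
    """Report the sample rate, bit depth and channel count of a WAV file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()      # e.g. 44100, 96000, 192000
        bits = w.getsampwidth() * 8  # e.g. 16 or 24
        channels = w.getnchannels()
    return rate, bits, channels

# A 24/96 stereo download would report (96000, 24, 2);
# a CD-quality rip would report (44100, 16, 2).
```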

Yuck. I was very disappointed in the interface and LCD. I know that display quality wasn't a major concern, but it looks clunky and ugly and has terrible angles (see for yourself!) and on a $400 device that's not acceptable. The UI is very slow sometimes, even with the hardware buttons (just volume and power, no track controls), and the touch screen is very low quality. But I duly tried the built-in Neil Young track, which being an official Pono track turns on a special blue light to tell you it's special, and on my Grados it sounded pretty good, actually. That was encouraging. So I turned off the display and went through a few cycles of A-B testing with a random playlist between the two sets of tracks.

And ... well ... my identification abilities were almost completely statistical chance. In fact, I was slightly worse than chance would predict on the second set of tracks. I can only conclude that Harry Nyquist triumphs. With high quality headphones, presumably high quality DSPs and presumably high quality recordings, there's absolutely bupkis difference for me between CD-quality and Pono-quality.
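(For the statistically minded, "chance" here is just the binomial distribution; a quick Python sketch -- the trial counts are illustrative, not my exact protocol -- shows how many correct calls you'd need before guessing becomes implausible:)

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct out of n A-B trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With, say, 20 trials, even 14 correct is only weak evidence:
# p_at_least(14, 20) is about 0.058 -- not enough to reject guessing at p < 0.05.
```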

Don't get me wrong: I am happy to hear that other people are concerned about the deficiencies in modern audio engineering -- and making it a marketable feature. We've all heard of the "loudness war," for example, which dramatically compresses the dynamic range of previously luxurious tracks into a bafflingly small amplitude range which the uncultured ear, used only to quantity over quality, apparently prefers. Furthermore, early CD masters used RIAA equalization, which overdrove the treble and was completely unnecessary with digital audio, though that grave error hasn't been repeated since at least 1990 or earlier. Fortunately, assuming you get audio engineers who know what they're doing, a modern CD is every bit as good to the human ear as a DVD-Audio disc or an SACD. And if modern music makes a return to quality engineering with high quality intermediates (where 24-bit really does make a difference) and appropriate dynamic range, we'll all be better off.
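(The Nyquist claim my professor was beating me over the head with is easy to demonstrate numerically, too. A small numpy sketch -- frequencies chosen purely for illustration -- shows what happens to a tone above 22.05kHz sampled at CD rate: it doesn't disappear, it folds back down into the audible band as an alias:)

```python
import numpy as np

fs = 44100                 # CD sampling rate
t = np.arange(fs) / fs     # one second of sample instants
tone = np.sin(2 * np.pi * 30000 * t)  # 30 kHz -- above Nyquist (22.05 kHz)

# With a 1-second window, rfft bins are 1 Hz apart, so the peak bin number
# is the apparent frequency. The 30 kHz tone aliases to fs - 30000 = 14100 Hz:
spectrum = np.abs(np.fft.rfft(tone))
peak_hz = int(np.argmax(spectrum))    # 14100
```

This is exactly why content above 22.05kHz must be filtered out before sampling at 44.1kHz, and why a properly mastered CD loses nothing audible.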

But the Pono Player doesn't live up to the hype in pretty much any respect. It has line out (which does double as a headphone port to share) and it's high quality for what it does play, so it'll be nice for my hi-fi system if I can get anything on it, but the Sansa Fuze is smaller and more convenient as a portable player and the Pono's going back in the wooden box. Frankly, it feels like it was pushed out half-baked, it's problematic if you don't own a modern Mac, and the imperceptible improvements in audio mean it's definitely not worth the money over what you already own. But that's why you read this blog: I just spent $400 so you don't have to.

Monday, January 19, 2015

Upgrading the unupgradeable: video card options for the Quad G5

Now that the 2015 honeymoon and hangovers are over, it's back to business, including the annual retro-room photo spread (check out the new pictures of the iMac G3, the TAM and the PDP-11/44). And, as previously mentioned in my ripping yarn about long-life computing -- by the way, this winter the Quad G5's cores got all the way down to 30 C on the new CPU assembly, which is positively arctic -- 2015 is my year for a hard disk swap. I was toying with getting an apparently Power Mac compatible Seagate hybrid SSHD that Martin Kukač was purchasing (perhaps he'll give his capsule review in the comments or on his blog?), but I couldn't find out if it failed gracefully to the HD when the flash eventually dies, and since I do large amounts of disk writes for video and development I decided to stick with a spinning disk. The Quad now has two 64MB-buffer 7200rpm SATA II Western Digital drives and the old ones went into storage as desperation backups; while 10K or 15Krpm was a brief consideration, their additional heat may be problematic for the Quad (especially with summers around here) and I think I'll go with what I know works. Since I'm down to only one swap left I think I might stretch the swap interval out to six years, and that will get me through 2027.

At the same time I was thinking of what more I could do to pump the Quad up. Obviously the CPU is a dead-end, and I already have 8GB of RAM in it, which Tiger right now indicates I am only using 1.5GB of (with TenFourFox, Photoshop, Terminal, Texapp, BBEdit and a music player open) -- I'd have to replace all the 1GB sticks with 2GB sticks to max it out, and I'd probably see little if any benefit except maybe as file cache. So I left the memory alone; maybe I'll do it for giggles if G5 RAM gets really cheap.

However, I'd consolidated the USB and FireWire PCIe cards into a Sonnet combo card, so that freed up a slot and meant I could think about the video card. When I bought my Quad G5 new I dithered over the options: the 6600LE, 7800GT and 2-slot Quadro FX 4500, all NVIDIA. I prefer(red) ATI/AMD in general because of their long previous solid support for the classic Mac OS, but Apple only offered NVIDIA cards as BTO options at the time. The 6600LE's relatively anaemic throughput wasn't ever in the running, and the Quadro was incredibly expensive (like, 4x the cost!) for a marginal increase in performance in typical workloads, so I bought the 7800GT. Overall, it's been a good card; other than the fan failing on me once, it's been solid, and prices on G5-compatible 7800GTs are now dropping through the floor, making it a reasonably inexpensive upgrade for people still stuck on a 6600. (Another consideration is the aftermarket ATI X1900 GT, which is nearly as fast as the 7800GT.)

However, that also means that prices on other G5-compatible video cards are also dropping through the floor. Above the 7800GT are two options: the Quadro FX 4500, and various third-party hacked video cards, most notably the 2-slot 7800GTX. The GTX is flashed with a hacked Mac 7800GT ROM but keeps the core and memory clocks at the same high speed, yielding a chimera card that's anywhere between 15-30% faster than the Quadro. I bought one of these about a year and a half ago as a test, and while it was noticeably faster in certain tasks and mostly compatible, it had some severe glitchiness with older games and that was unacceptable to me (for example, No One Lives Forever had lots of flashing polygons and bad distortion). I also didn't like that it didn't come with a support extension to safely anchor it in the G5's card guide, leaving it to dangerously flex out of the card slot, so I pulled it and it's sitting in my junk box while I figure out what to do with it. Note that it uses a different power adapter cable than the 7800 or Quadro, so you'll need to make sure it's included if you want to try this card out, and if you dislike the lack of a card guide extension as much as I do you'll need a sacrificial card to steal one from.

Since then Quadro prices plummeted as well, so I picked up a working-pull used Apple OEM FX 4500 on eBay for about $130. The Quadro has 512MB of GDDR3 VRAM (same as the 7800GTX and double the 7800GT), two dual-link DVI ports and a faster core clock; although it also supports 3D glasses, something I found fascinating, it doesn't seem to work with LCD panels, so I can't evaluate that. Many things are not faster, but some things are: 1080p video playback is now much smoother because the Quadro can push more pixels, and high end games now run more reliably at higher resolutions as you would expect, without the glitchiness I got in older titles with the 7800GTX. Indeed, returning to the Bare Feats graph, the marginal performance improvement and the additional hardware rendering support is now at least for me worth $130 (I just picked up a spare for $80), it's a fully kitted and certified OEM card (no hacks!), and it uses the same power adapter cable as the 7800GT. One other side benefit is that, counterintuitively, the GPU is several degrees cooler (despite being bigger and beefier) and the fan is nearly inaudible, no doubt due to that huge honking heatsink.

It's not a big bump, but it's a step up, and I'm happy. I guess all that leaves is the RAM ...

In TenFourFox news, I'm done writing IonPower (phase 1). Phase 2 is compilation. That'll be some drudgery, but I think we're on target for release with 38ESR.

Saturday, January 10, 2015

31.4.0 available

31.4.0 is available, with many fixes and the speculative changes from 31.3.1pre. Before any of you wags point out that the copyright date is wrong, I didn't notice this until the middle of the G3 build and since I'd already wasted a day because I missed a patch (build day for TenFourFox is about eight hours, even with the G5 quad running full blast on the SSD I use for build acceleration, so scotching a build really burns up a lot of time), I really don't care enough. :P It's in the changesets for next time.

For 31.5 I'm looking at tweaking the WebM AltiVec code some more, possibly introducing some early speculative fetching. But in the meantime, it's IonPower for the rest of the weekend until my Master's classes start again on Monday (when this build will become final).

Downloads from the usual place.

Friday, January 2, 2015

36: ten times better than Firefox 3.6

"But ours goes to eleven."

This post is being typed in Firefox 36, which is, of course, ten times better than Firefox 3.6. Jokes aside, there are some noticeable improvements, particularly in the build process; the new unified build (which glues certain blocks of code together into single megafiles) compiles nearly 20% faster on this G5. Linking the big XUL superlibrary, instead of over 15 minutes of drumming my fingers on the desk while the CPU gets pegged, is now just four or five minutes due to the reduced number of files, and probably even less on an opt build because it has even less code to go through. Unified builds also enable the compiler to do better optimization between files, so even this debugging build of Firefox 36 aurora feels pretty tight. Cumulatively, these improvements make me very optimistic for 38ESR, our next (last?) major version, and make maintenance much less onerous. The recently added in-content preferences, while another carbon copy of Chrome chrome, do work pretty well and integrate nicely into the browser. Helpfully, Aurora now uses a totally separate profile from the production Firefox/TenFourFox by default; previously I had to run tandem profiles by hand.
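(For the curious, the unified-build trick is conceptually tiny. A hypothetical Python sketch of the idea -- emphatically not Mozilla's actual build machinery -- just batches source files into #include megafiles so the compiler is invoked far fewer times and can optimize across file boundaries:)

```python
def unify(sources, group_size=16):
    """Glue source files into 'unified' megafiles of #include lines,
    the way a unified build reduces the number of compiler invocations."""
    batches = [sources[i:i + group_size]
               for i in range(0, len(sources), group_size)]
    return ["\n".join('#include "%s"' % src for src in batch)
            for batch in batches]

# 100 .cpp files at group_size=16 become ceil(100/16) = 7 translation units.
```

The trade-off is that a change to any one file recompiles its whole batch, which matters less on a machine where process startup and header parsing dominate anyway.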

36 does have its problems. This is a "Developer Edition" build; Mozilla is trying to get developers to move into earlier testing tiers, so they've hung new black duds on the browser chrome which look hideous on Tiger. Since we don't release Aurora builds routinely, I'm just not going to support this livery; I have enough problems with the default theme without that. New tabs kept crashing until I figured out that Electrolysis was getting activated by the page snapshot feature (we won't support this either; our tabs will default to blank in 38). Also, JavaScript performance seems to have regressed some as Mozilla makes BaselineCompiler more of a gatherer for better IonMonkey optimization than a decent compiler in its own right, but my aim is to have Ion ready for 38, so there.

Big-endian builds got hit by bug 1105087 which was fallout from inadvertently requiring Skia for graphics drawing on our platforms which don't yet support it (software drawing is sort of a mess in Firefox these days). Skia allegedly has big-endian support now, but it doesn't have a big-endian pixel order in common with Cairo which usually does the software work, so even just turning it on doesn't fix the problem. The proposal Mozilla made was to write some more code in there to do byteswapping when graphics cross between drawing modules, but that means four byteswaps for this sort of operation, and that totally sucks. 36 just has bug 1097776 backed out because Skia still has issues on 10.4 and we don't really need it (I prefer quicker if less clean output anyway), so we'll see if Mozilla will accept my counterproposal of two separate codepaths. If they don't, I don't know what the plan will be for Linux and *BSD.
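(To illustrate what that per-crossing byteswap amounts to -- a hypothetical Python sketch, not the proposed Gecko code -- every handoff between modules with mismatched pixel order means touching every pixel in the surface again:)

```python
from array import array

def swap_pixel_order(pixels):
    """Byteswap each 32-bit pixel -- e.g. flip between RGBA and ABGR byte
    order. Doing this at every module boundary reprocesses the whole
    surface, which is the cost objected to above."""
    buf = array("I", pixels)  # 'I' is a 32-bit unsigned int on CPython
    buf.byteswap()
    return list(buf)

# 0xAABBCCDD byteswapped is 0xDDCCBBAA.
```

Four of these per operation, on machines this old, is real time; two codepaths that each keep a native pixel order avoid the work entirely.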

Once I have 36 to the point where I'm basically happy, I will release changesets and continue with the Ion work; I am not planning an unstable build until 38 beta and it is likely that 31 will not be discontinued until 38.0.1 or 38.0.2 to allow enough testing cycles on IonPower. IonPower seems like a cool name for our new backend. I just came up with it. Like it? Yeah, it's awesome.

The general consensus on the speculative changes in 31.3.1pre is that JavaScript without background finalization is better and movie performance with more aggressive buffering is a wash. However, all of my test systems here play WebM somewhat to substantially better with the video buffering change, so I'm going to keep it and see what happens with wider release. Besides those changes, reductions in tab/window undo and the hyphenation bug fix, I've also been discovering other potential glitches in PPCBC during the IonPower rewrite and these will also be fixed in the next version. 31.4 is scheduled for release on 13 January.

Happy New Year, everybody.

Sunday, December 21, 2014

31.3.1pre test build available

Everyone needs a break, so I took a break from my Master's project to experiment with a few ideas I've been ruminating over (it's also a convenient excuse to goof off). We're always looking for more performance from our older machines and sometimes this requires rewriting or changing assumptions of the code where this can be done without a lot of work.

Multiprocessing/multithreading has been one of those areas. Since around Fx22 Mozilla has been using background finalization for JavaScript (so background finalization was in 24 and 31 but not in 17) which is to say that the procedure for deallocation of objects is not done on the main thread; finalization is a separate, though related, process to garbage collection where finalized objects with no remaining references are reclaimed. This, at least on first blush, would seem like an unmitigated win, especially on machines with multiple CPUs -- you just do the work on another core while you're working on something else. But interestingly it has some very odd interactions on my test systems after long uptimes. Some profiling I did when TenFourFox just seems to be sitting there waiting showed that it wasn't TenFourFox (per se) that was twiddling its thumbs; the OS X kernel was stuck in a wait state as well and it seemed to have been kicked off by ... background finalization, waiting on threading management in the kernel, which had temporarily deadlocked. This may be the source of some unexplained seizing up that a few people have complained about and we had no obvious way to reproduce.

We can't do anything about the kernel (this is another reason why I'm very concerned about how Electrolysis will perform on 10.4/10.5), but background finalization can be defeated with a few lines of easily ported code. This doesn't make the browser faster objectively, mind you; it just "spreads the badness around" so that finalization occurs predictably, and in a manner that makes it more likely to complete, which in turn makes garbage collection more likely to complete (which since the fix in 31.3 now can run on a different core and doesn't appear to trigger this specific issue), which in turn reduces the drag on performance. In effect, this rolls this aspect of JavaScript back to Fx17.

The result is more consistent and smoother, even if it's not truly faster -- especially on the G5 where bouncing around in code generally hobbles performance -- and does not seem to affect uniprocessor systems adversely, but I'd like to get a test out where you can play with it and see. You should not expect much difference immediately when you start; in fact, startup time might even be slower, and memory usage will show little if any improvement. This is just to improve the browser's responsiveness in terms of staying reasonably quick after multiple compartments have been allocated and need to be scanned.

Speaking of, I also implemented the reduction on tab undoes (to 4) and window undoes (to 2), which also reduces the number of compartments that must be scanned (you can override this from about:config, but be aware that every tab and tab-undo-state you keep in memory remains active and so the garbage collector must evaluate it), and I threw in one other change that forces substantial additional buffering of video playback just to see how this works out for you lot. Like the change in finalization, this "spreads the badness" by forcing the browser to build up a large backing store of fully decoded video frames before playback (up to 10 seconds' worth depending on available memory). Videos may appear to stall initially, but then can play more smoothly because decoding is now more aggressively buffered as well, not just downloaded video data. This quad shows improvement only in that playback is more consistent, but on my 1GHz iMac G4 many standard-definition videos on YouTube are now noticeably less like a slideshow with audio. It's possible to overdo this setting, so I've settled on a conservative number that seems to work decently for the test machines here (it's baked into the C++ code, sorry -- you can't twiddle this from about:config). The minimum recommendation for video is still a 1.25GHz G4.
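(Conceptually the buffering change is just a bounded producer/consumer queue between the decoder and the renderer. A hypothetical Python sketch -- names and numbers are illustrative; the real code is C++ inside Gecko:)

```python
from queue import Queue
from threading import Thread

def play_with_buffering(decode_frame, render_frame, n_frames, depth=300):
    """Decode ahead into a bounded queue (about 10 seconds at 30fps when
    depth=300) so brief decoding stalls don't starve playback."""
    buf = Queue(maxsize=depth)

    def decoder():
        for i in range(n_frames):
            buf.put(decode_frame(i))  # blocks once the buffer is full
        buf.put(None)                 # end-of-stream marker

    Thread(target=decoder, daemon=True).start()
    while (frame := buf.get()) is not None:
        render_frame(frame)
```

The depth is the tunable: too small and slow machines still stutter, too large and startup stalls while memory fills, which is why a conservative baked-in value was the compromise.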

The tab undo and buffering changes will be carried into 38, but 38 introduces generational garbage collection (to us) and I have to do some testing to determine if it will react adversely with GGC's system assumptions. Please note that I only built this for G5 and 7450 mostly because I want testing on a good mix of single and multi-CPU systems (7400 users can use the 7450 build if you really want to, but sorry, G3 folks, you'll have to wait for the next scheduled release), but I did the building on my new external solid state drive which reduces build overhead by as much as 30%. Not bad! I'll post some stuff about the RAM disk and SSD build testing I've been playing with in a future entry.

Downloads available from the usual place.

Saturday, December 20, 2014

Time, time, time, see what's become of ntpd

UPDATE: After digging through codepaths in ntpd, I'm concluding that an outbound connection -- i.e., you connecting to an external time source -- is probably not vulnerable, at least not to the most serious of the flaws below (ctl_putdata() and configure()). By default OS X does not allow inbound connections, which definitely are vulnerable. If you have changed this configuration, however, you should still update, and the sixth flaw in receive() may still apply, although the exploitability of this particular flaw is believed to be highly unlikely. The tools are still available if you want them.

The sec-buzz this time is a package of six, count 'em, six vulnerabilities in OS X's Network Time Protocol daemon, or ntpd (and possibly its associated tools, see below). NTP is one of the oldest Internet protocols still in use, starting with NTPv1 RFCed all the way back in 1985 -- in fact, the very same David L. Mills who wrote the original NTP RFC 958 is still the one maintaining it today, almost thirty years later. NTP is an extremely accurate and somewhat complex means of time synchronization that can keep Internet-connected clock devices and computers tightly coordinated with highly accurate timesources. In theory, NTP can capture time differences as small as 233 picoseconds, or 2^-32 seconds; in typical network environments, it can keep accurate time within a few tens of milliseconds (currently the internal NTP server at Floodgap has an average dispersion of about 15ms), and as low as a single millisecond or less on a local network (my NetBSD/cobalt server is showing 0.2ms).
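(That 233-picosecond figure falls straight out of NTP's 64-bit timestamp format, which pairs 32 bits of whole seconds with 32 bits of fractional seconds. Quick arithmetic:)

```python
# NTP timestamps carry a 32-bit fractional-seconds field, so the smallest
# representable tick is 2^-32 of a second:
tick = 2.0 ** -32
picoseconds = tick * 1e12  # about 232.8 ps, the "233 picoseconds" quoted above
```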

Accurate time is very important, especially for security, and virtually every modern platform offers some manner of time synchronization; almost every Un*x or Un*x-like operating system uses ntpd, the reference implementation (don't start that OpenNTPD crap here, please), including Mac OS X. And no surprises, the version included on every release of OS X is vulnerable, up to and including Yosemite, which only ships ntp 4.2.6.

The immediate response is, "but wait! wait! I don't run a time server!" No, you don't (probably), but if you have "Set date & time automatically" checked in Date & Time in System Preferences, you're running ntpd. (If you did the change I recommended in an earlier post to switch to intermittent ntpdate in cron, you're not running ntpd but you are still vulnerable to one of the flaws; read on.) If the remote server sends you a specifically crafted malicious packet, your computer could be commandeered to run a program under the control of the attacking server.

In practice, this attack is going to be limited for most of us for several reasons:

  • By default the Apple implementation doesn't use cryptographically authenticated time packets (flaws one, two and three).

  • If you are connecting to a well-maintained, trusted time server (time.apple.com would be an obvious one), the chances of it going bad or packets from it being intercepted and subverted are low to very low in most circumstances. (The risk of interception and subversion is higher if you don't trust the nameserver or the network, such as unencrypted Wi-Fi in a hostile environment, where you could be directed to a malicious server or someone could try to inject packets as fast as possible to arrive before an authoritative response.)

  • If you're on a Power Mac, unless the attacker knows this and sends a PowerPC-specific sequence, the circulating attacks (to flaws four, five and six) will very probably be designed for x86 and this lowers your effective risk to near zero. At worst ntpd will just crash. (If you're on an Intel Mac, though, this doesn't help you; you're the type of computer an automated scanner would target.)

  • If you don't even try to synchronize your clock, which is apparently the case for more systems than I would have thought, you aren't vulnerable at all. But this isn't a good idea.

If you are running ntpdate from cron only, and not ntpd, ntpdate is potentially vulnerable to flaw six but none of the others (again, assuming authentication is not enabled). No one is even sure if it's exploitable in its current form, and all of the mitigations above (except the last) still apply, so your risk is even lower in that case. UPDATE: I am in error; ntpdate is not vulnerable to this flaw.

In my situation, I have an internal NTP server which talks to a selection of publicly available stratum-1 and stratum-2 timesources. Since it's facing out on the wild wild Internet, I went ahead and upgraded that to ntp 4.2.8, which has these flaws repaired. All of my internal systems only talk to this internal NTP server which is now protected from the problem, and the only way someone's going to subvert that is by either splicing the wires or installing malicious software on the inside -- and if that happens I've got bigger problems than bad clocks. Just in case, however, I went ahead and built and updated ntpdate on my 10.4 systems so that there's no chance.

After careful consideration, I am hesitant to indiscriminately advise everyone to update ntp on your system -- I've learned from offering a rebuild of bash just how many of you can wreck your computers from Terminal :P -- unless you must connect to a risky network from time to time. Even in that case, however, it might be better just to disable "Set date & time automatically" when you're on those types of connections instead and resynch your clock when you're back on something reasonably trustworthy.

If you really can't avoid that, or you're incredibly paranoid, then here is a build of the relevant utilities for 10.4 PowerPC (it will work on 10.5 also). I had to make a change to ntp_io.c to get it to build with Xcode 2.5, which I included if you want to roll your own. It only includes the core ntp tools that have relevant vulnerabilities or dependencies (ntpd ntpdate ntpdc ntpq) and none of the SNTP stuff, which is a pretty sucky way to maintain time anyhow. You will notice there is no x86 or universal build (the current version of the source code doesn't make it easy to build fat binaries), nor instructions, because I want you to think very carefully about why you're installing this. If you're on an x86 system with a vulnerable timekeeper and no available update for this issue (10.6, say), and you're recurrently or even constantly on a hostile network, then you've got bigger problems than this one -- this is a very small band-aid on a potentially deep wound. For that matter, even if you're on a Power Mac, please consider what you're trying to accomplish before you install these replacements. They can replace the current ones on the system directly; or, if you're running ntpdate from cron and don't have ntpd running, just put the new ntpdate somewhere convenient like /usr/local/bin and change your crontab to run it from there.
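For the cron approach, a crontab entry along these lines does the job (the minute, path and server name are examples -- point it at your own preferred timeserver):

```
# Resync hourly with the relocated ntpdate; -u avoids needing a privileged port
17 * * * * /usr/local/bin/ntpdate -u ntp-internal >/dev/null 2>&1
```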

Stay careful.

Wednesday, December 10, 2014

And now for something completely different: My lamps in the Palm of Mom's hand

I am a huge fan of the classic Palm OS. Yes, webOS was pretty cool, while it lasted (there's a Pre 2 on my shelf courtesy of the funkatronic Ed Finkler that I barely got to explore before HP killed the Palm line), but the original Palm OS was not unlike what you'd get if you miniaturized the classic Mac OS into a handheld form factor. Under the hood, you had very similar system designs and implementations, and Macs were well supported as development tools. Heck, they were even 68K systems, like the original Macs, and the switch to the ARM-based Palm devices was eerily similar to the Power Macintosh transition. The ARM Palm OS even had its own 68K emulator -- in fact, the normal state of the system was to be running 68K code!

I myself started with a Palm m505, which I bought in medical school (new, shortly after its introduction -- a substantial expense for a starving student) for calculations and drug databases, at which it excelled. I also rapidly learned how to program it, using a remarkable port of the Lua programming language called Plua, and when I upgraded to an ARM-based Zire 72 everything transferred over easily to the new unit, which is a testament to how well the sync process and the OS were designed. Both of these units still work, by the way, even though the m505's battery is shot and won't hold a charge anymore. Even after I got an original iPhone in 2007 I still used the Zire 72 for a number of years because it had better pharmacy apps and it could record video (the iPhone couldn't do that until the 3GS).

Parenthetically, however, if I were forced to pick the best Palm OS device ever made I would actually make a non-Palm selection: the AlphaSmart Dana wireless. If you'd ever wondered what a PalmOS laptop would wind up like, that's pretty much what the Dana is, with a wide 560x160 backlit greyscale LCD, a full and luxurious keyboard, two, count 'em, two SD/SDIO slots, up to 16MB of memory, built-in 802.11b (with WEP), an integrated USB cradle and even a USB port for a printer. It runs on rechargeable batteries (easily replaceable) or even AAs; mine has a nearly new rechargeable battery pack, a 1GB SD card (the maximum supported) in one slot and a Palm Bluetooth SDIO card in the other. The keyboard is incredibly good. There are writers who cling to these things even now because they run for ages on a single charge, the built-in editor is quick and distraction-free, and when you've got your stuff typed, you simply plug it into your Mac or PC and "squirt" it into your word processor of choice: it emulates a USB keyboard! To this day my mother uses one for her notes at church because it runs longer than many laptops, it's light and durable, and when she comes home she can merely plug it into her Mac mini and dump everything into Microsoft Word. Its chief flaw -- and this is a big one -- is that the 33MHz Dragonball VZ CPU is an absolute slug. While it's more than adequate for its avowed purpose and very thrifty with power, heavy-duty apps just drag on it even if you install an overclocker.

The biggest, baddest Palm of them all is of course the T|X ("TX"), with a 312MHz Intel PXA270 ARM CPU, 32MB of RAM, 128MB of flash memory (remember that earlier Palms kept everything in RAM, so battery failure was catastrophic), built-in Bluetooth and WiFi and a beautiful 320x480 (320x448 effective resolution) colour LCD screen. Although the earlier Tungsten T5 had 256MB of Flash and a 416MHz CPU, it was cursed by God with system bugs that updates did not completely fix, bad memory management, sluggish Bluetooth and no Wi-Fi. The TX can easily be overclocked to 520MHz (don't exceed this, though, or you'll have a nice doorstop for a day or two while the battery discharges!), it can take additional SD card space (there's even a third-party SDHC driver), and it has more advanced networking, an updated OS and an updated version of the Blazer web browser. By the way, hats off to Dmitry Grinberg for making both of those tools (the overclocker and the SDHC driver) free. That's classy.

But I still like my Zire 72 (in the 72s form factor with the more durable silver paint job) better than my T|X for four reasons: first, the camera, which is pretty dire by today's standards but is still very handy; second, the extra side button (more on that in a moment); third, mini-USB is much easier to work with than that stupid Athena connector on the T5 and TX (though you can charge over USB with the Athena); and last but not least, the replaceable battery. The T|X battery is soldered in -- if you need to hard-power-cycle it, you'll either have to wait it out or cut the leads, and you'll need to get out your soldering iron to put in a new one. The Zire's battery can be (carefully) removed and replaced, however, greatly expanding its longevity. While it won't surf the web anywhere near as well as the T|X (no one's realistically using their Palm for that anymore), you can put nice big SD cards in it as well -- and overclock it too, because the Zire 72 uses exactly the same CPU as the TX and the same 32MB of RAM, though no flash memory -- and it has built-in Bluetooth also. If you really need WiFi, the palmOne 802.11b SDIO card will work, and the 320x320 screen is pretty decent. Right now, my Zire 72, Dana wireless and TX are all out in the commons area with their own charging stations, and they each have their own little tasks they specialize in.

Which brings me to the point of this blog post. I am also a big fan of the Philips hue light system, which uses controllable colour LED bulbs for automated illumination. You can run it from a smartphone, but I centrally control mine on the secured internal network using huepl, a command-line tool I wrote for this purpose. (It works on 10.4, of course.) I have two hue base stations set up, one in my office, and one shown here out in the commons.
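huepl just drives the base station's REST interface, which is delightfully simple: you PUT a small JSON body to /api/<key>/lights/<id>/state on the bridge. Here's a minimal Python sketch of building such a call -- the bridge address and API key are placeholders, and this isn't huepl's actual code:

```python
import json

def hue_command(bridge, key, light, on=True, bri=254):
    """Build the REST call to set a hue light's state.
    bridge and key are placeholders -- use your base station's IP
    and a registered API username."""
    url = "http://%s/api/%s/lights/%d/state" % (bridge, key, light)
    body = json.dumps({"on": on, "bri": bri})
    return url, body  # send these with an HTTP PUT

url, body = hue_command("192.168.1.10", "myapikey", 1, on=True, bri=128)
print(url)
print(body)
```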

Last time Mom was over, Dad and I were working outside early while she slept in my guest room. She got up and found she couldn't turn the lights on, because she didn't have a device that could talk to the central server. Never leave your mother in the dark. Ever. So for their next visit I decided I needed to make a guest light controller.

While Philips makes a wall switch (interestingly, it uses the kinetic energy of pressing its buttons to send a signal to the hue base station), a custom wireless solution would be more flexible. However, I don't allow Wi-Fi on my secured internal network because I don't want someone sneaking into it one day -- it's a secured network for a reason: to protect all these lovely old machines, not to mention my financial records. There's a shorter-range way of getting into a network, though:

Yep, that's a Bluetooth access point. Although theoretically another Class 1 device could try to talk to it from outside the house (if they figured out the pairing code), in practice it's going to be limited by most transceivers being substantially lower power. But more importantly, this specific access point, the Belkin F8T030, is particularly useful in this situation because with its default firmware it only offers the older Bluetooth LAN Access Profile, not the Personal Area Network (Bluetooth PAN) profile currently supported by 10.4+ and most current mobile devices. My Android phones and Macintoshes can only see the Belkin if they're right on top of it, and even then, even with the correct pairing code, they can't do anything with it or connect to the internal network through it. Only the Palm OS devices can -- because they speak Bluetooth LAP, not PAN.

You can flash the F8T030 to speak PAN as well, but I actually don't want that in this application. By the way, the F8T030 runs uCLinux -- you can even telnet into it and get a shell prompt!

So that solves the connectivity end; now I have to get the Palm to talk to the lights. In this case I selected the Zire 72 as the light controller because I have a couple of them as spares lying around and, as mentioned, I can repair them to some extent. The first thing was to configure it and pair it with the Bluetooth access point, which was pretty simple in PalmOS 5. However, rather than teaching the Palm how to speak the hue base station's REST API (because I'd have to give it keys and keep those current), I decided to create a couple of scripts on the internal interface of the web server and use EudoraWeb to control the lights through those scripts, like so:

This worked, but you can rapidly see some disadvantages. First, there's not a lot of room left to add more options without getting cluttered (admittedly my choice of the Zire 72 made this problem more acute). Second, running it through EudoraWeb means I don't have any real control over the interface. If the Bluetooth connection got wacky (say Mom didn't have the access point in a direct line of sight, or she pointed the Zire 72 in another direction), EudoraWeb would start throwing generic error messages or offering to switch to offline mode, which would likely drive her crazy and leave the Palm in an "indeterminate" state. And she'd still be in the dark.

So that brings us back to Plua, because I could at least put better error handling in. Plua has very simple built-in primitives for things like streams and TCP/IP sockets, so I moved the scripts to the internal gopher server to make the protocol simple and threw together a proof of concept. This first draft had some big buttons you could push like a remote. This seemed like a good idea and worked well ... except that if the Bluetooth connection failed or you turned the Palm off while the light app was running, Plua went haywire and didn't reconnect. (Obviously a bug, but I don't have any way of fixing that in the Plua runtime.) If you restarted the app, it worked and reestablished the Bluetooth link, but that was inconvenient.
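Gopher really is that simple a protocol: open the TCP connection, send the selector followed by CRLF, and read until the server closes the connection. Here's a self-contained Python sketch of the exchange, with a stand-in server in place of my real internal gopher server:

```python
import socket
import threading

def fake_gopher_server(sock):
    """Stand-in for the internal gopher server: read a selector,
    pretend to run the matching light script, reply, disconnect."""
    conn, _ = sock.accept()
    selector = b""
    while not selector.endswith(b"\r\n"):
        selector += conn.recv(1)
    conn.sendall(b"OK " + selector.strip() + b"\r\n")
    conn.close()

def gopher_fetch(host, port, selector):
    """What the Palm does: connect, send selector + CRLF, read the reply."""
    s = socket.create_connection((host, port))
    s.sendall(selector.encode() + b"\r\n")
    reply = b""
    while True:
        chunk = s.recv(8192)
        if not chunk:
            break
        reply += chunk
    s.close()
    return reply.decode()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # ephemeral port for the demo
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=fake_gopher_server, args=(server,)).start()

reply = gopher_fetch("127.0.0.1", port, "/auto/lights/on")
print(reply)
```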

Then it dawned on me: since the connection is reestablished when the app restarts, I could move the selection interface into the Palm launcher, with multiple separate apps for each light profile that just call the appropriate script on the internal gopher server and quit. Plus, I could have as many profiles as fit on the screen and generate them off a template. On the iMac G4 I wrote a script generator and cross-compiled four basic profiles (off, all on, all dim, corner only) in MacPlua and pushed them over to the Zire 72.
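The generator isn't much more than template substitution. A Python sketch of the idea -- the profile names and selectors here are illustrative, and the real thing naturally emits the full Plua source (like the template at the end of this post) for MacPlua to compile:

```python
# Sketch of the profile generator: stamp out one Plua source file per profile.
# Profile names and selectors are illustrative, not my actual configuration.
TEMPLATE = '''host = "gopher-internal"
port = 70
sel1 = "%(selector)s"

gui.title("%(title)s")
'''

PROFILES = {
    "LightsOn":   "/auto/lights/on",
    "LightsOff":  "/auto/lights/off",
    "LightsDim":  "/auto/lights/dim",
    "CornerOnly": "/auto/lights/corner",
}

def generate(name):
    """Return the (partial) Plua source for one launcher profile app."""
    return TEMPLATE % {"selector": PROFILES[name],
                       "title": "Sending command to lights..."}

for name in PROFILES:
    print("=== %s ===" % name)
    print(generate(name))
```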

Now, whenever you start any of them (their icons are large enough to press with a finger), the Bluetooth link is reestablished if it lapsed, and the correct script on the gopher server is invoked:

But wait: we can make it even easier for Mom. Remember that single side button for voice memos that the Zire 72 has? You can change it to anything, so I changed it to ... turn the lights on by calling that light profile app. Mom picks up the Palm, sees the big label on it saying "to turn lights on, press side button," sees the only side button the device has, presses it, the Zire turns on, the app starts, connects, and the lights come on. You can even find it in a dark room because of the charging light on the end table. See it in the photograph above?

It's like the best Mother's Day present I've ever gotten her. Totally. No doubt.

Here's the source code for the template so you can see how cool and easy Plua is. PalmOS in the house yo.

host = "gopher-internal"
port = 70
sel1 = "/auto/lights/on"

gui.title("Sending command to lights...")

eh, et, es = io.open("tcp:/"..host..":"..port, "rw")
if (eh ~= nil) then
    eh:write(sel1.."\r\n")
    -- Wait for data, then read the (short) response
    while true do
        ev = gui.event()
        if ev == ioPending then
            buf = eh:read(8192)
            break
        end
        if ev == appStop then
            break
        end
    end
    eh:close()
    -- Fall through to launcher
else
    gui.alert("Can't access network: point Palm at desk")
    -- Fall through to launcher
end

Speaking of the classic Mac OS, I discovered this surprising project last week. I'll just leave it here for your interest.