Comments on TenFourFox Development: 38.7.0 available (plus: thanks, Mozilla, for making the web little-endian)

chmeee (2016-03-29):
VirtualPC did this on the G3/G4. This caused a lengthy delay when porting it to the G5, because the G5, being derived from the POWER4, does not have a little-endian mode.

Anonymous (2016-03-20):
Getting an SSD for your PPC? Here is some important info I found out that might help.

As stated above, I just got an OWC 'TRIM-free' SSD for my MDD DP 1.42. At first this was easily the best performance I've EVER had on this machine - OS 9-level peppiness, even running Tiger with heavy apps like TenFourFox.

BUT it all came crashing down in less than a day.

The drive became completely unresponsive and I could not get any further than boot. Thought: "Knew this was too good to be true!" Well, OWC tested the drive and reported back that it was fine!?!? To be on the safe side they sent another new one and advised me to attach it to a newer machine (Linux in my case) and verify it had the newest firmware before using it. This meant partitioning it as MBR so Linux could see it. Once it was verified, I repartitioned back to APM (Apple Partition Map) to put it in the Mac, but suddenly it was unresponsive again, just like the first one! To use the parlance of our time: WTF?

So it was time to get me some edumacation on this stuff.
The reason Sandforce controllers don't need TRIM is that they do it themselves when the drive is idle. On a modern system with copious amounts of RAM, the only time the 'garbage collection' function is noticeable is when large numbers of blocks are being reclaimed. On older OS X systems with 2GB RAM limits, this becomes much more likely than on newer systems.

So what to do? In this case the drive was 120GB (for $64 = good deal), and I had initially partitioned it into two sections (80GB for OS X and 40GB for OS 9). On a light day maybe this would be fine, but on those 'heavy-flow days' I can easily push 20GB or so onto VM, so I could either plan on periodic down-time or give the drive all the room it can support for maximum paging flexibility. The second idea has been great, and no more problems. Also, some have said that with Sandforce drives this also makes sense for wear-leveling: the more of the drive that is available, the more it can spread the data around, and the drives also re-copy data periodically to make sure it stays fresh.

Boot times and program load times aside, one of these SSDs is the best investment you can put into your classic Power Mac. It's like having virtually limitless RAM. But you need to allow it open space to auto-maintain (on G4/32-bit systems, at least 40GB).

Also, while they do still offer the 'legacy' IDE/ATA versions, there is no reason to pay the extra $40 when an IDE/SATA adapter (at least if you are on a desktop with room inside) like this one http://www.ebay.com/itm/Pata-IDE-To-Sata-Hard-Drive-Adapter-Converter-3-5-HDD-DVD-Parallel-to-Serial-ATA-/171424564491 is available for about $6 and works like a charm.

Happy PPC computing, folks!

•• Note on IDE/SATA adapters: the smaller inline ones, like the one mentioned above, generally have a 2TB limit; larger drives often require a PCI card. They also sometimes add an additional one-second delay to access/spin-up times.
Once data starts moving there is no delay, but if your only drive is an SSD, it might make sense to experiment with disabling 'disksleep' via pmset in Terminal. Even then, there will occasionally be a momentary search for the system folder during bootup as the card comes to life. This is normal.

Anonymous (2016-03-10):
Getting married - nice! I wish you a happy marriage and a strong relationship with your wife!

Anonymous (2016-03-06):
Congratulations :-)

Anonymous (2016-03-05):
You are right, it was all PC ports (but they were GREAT ports). Had the mouse and keyboard that came with Quake III, and the web on it was pretty nice. Actually heard my first-ever MP3 through that browser - it played it right in the window!

Just a super-quick question, if you have a moment to tell me what you think. I am researching putting in a Sandforce SSD as a dedicated virtual-memory (swap) disk to mitigate my 32-bit/2GB limitation in Tiger. I have worked out most of the issues with getting it to mount on startup (thanks to working mostly in Linux these days), but wanted to ask if you can see any problems with this?

Since these don't need TRIM, I had hoped to simply use it as a system disk and do routine backups, but I ran into a catastrophic disk failure on the second day (after I had to do a forced restart).
OWC says this is really rare and they are happy to exchange it, but it got me thinking that maybe I could take the stress off it, and still have most of the benefits, if I set it up as only a VAR (swap) disk.

BTW, while it was working, it was the BEST PERFORMANCE I've EVER HAD ON ANY COMPUTER! TenFourFox open with 54 tabs, two 6000x9000-pixel images in Photoshop doubled in size by 10% increments (a total of 11GB of virtual memory in use), and at worst a 1/10th-second pause when moving between apps or Photoshop save states (right up until the moment it died). OWC says these should be as tough as regular HDs. Did I get a dud? Or should I try the above approach?

ANY thoughts on this greatly appreciated.

ClassicHasClass (2016-03-05):
Thank you, sir! Might take you up on it in the near future. :)

Robert (2016-03-05):
"breaking the promise of Web interoperability and platform independence"
... well, I wouldn't say this is that.

Thanks for the best wishes. If you ever stop in Auckland I'd be glad to meet up with you. And congratulations to you!

ClassicHasClass (2016-03-05):
Yes, the DC is pretty awesome. I'm impressed how much wear you got out of PlanetWeb, and it still plays really great games.
But it's unfortunately the best example of how SuperH's bi-endian abilities are ignored: *everything* runs little-endian on it, from the native shell to Windows CE apps to Linux to NetBSD. In fact, you probably couldn't use the onboard hardware if you tried to run it big-endian.

Anonymous (2016-03-05):
I knew that ARM was nearly as old as x86, but didn't realize Apple had a hand in it that early on. I, like a few others, assumed they were grabbing it up to hedge the mobile market, and possibly take it to the desktop if they ever grow weary of Intel.

SuperH is a pretty unique case. It was fully bi-endian in the mid-90s and was amazing as a game-console platform. I used my Saturn and Dreamcast a lot back then and, in the case of the latter, actually used it to register for classes in college through the old PlanetWeb browser they made. It was as fast as any of the (beige) PowerPC Macs we had at the college, and even had Flash 4.0 and early JavaScript performance that was really not bad (considering the Dreamcast, in particular, only had 16MB of RAM). I often wish it wasn't relegated to automotive computing today.

ClassicHasClass (2016-03-05):
Well, that's hindsight speaking. While we classically associate big-endian with big iron (like SPARC, PA-RISC, etc.), remember that the 68K is big-endian and appeared in lots of applications, and no one cared about that. In addition, keeping the same endianness probably made the 68K-PPC transition somewhat less complicated.
The TMS 9900 series was also a consumer-oriented big-endian design, obviously much less successful, but still in the same market.

SuperH is really in the same situation as ARM: it can run bi-endian, but virtually everything runs it little. ARM had a few big-endian applications early on, but now it's exclusively little too, much as MIPS has become.

That said, the ARM7 was the contemporary of the 601, and the 601 was beefier, faster and had IBM backing it. I think choosing ARM before portability and performance-per-watt became market factors would have been premature at that time. Plus, don't forget that while Apple wasn't there for the original ARM chips, along with Acorn and VLSI they were the original founders of Advanced RISC Machines, and they still retain almost 15% of the company.

Anonymous (2016-03-05):
Mega-congrats, CK, on both cool 2016 events (or 3 if we count the POWER8 ;0)

Honestly, the endian issue is one of those things that Apple should have thought about when taking the Power architecture to the mainstream. On a dedicated server, who cares, as long as it does what it needs to do; but cross-platform demands a high level of compatibility.

At the very least, they should have instituted hardware endian correction (like Hitachi's SuperH), and on the other end, they might well have gotten in on ARM much earlier. It had little-endian support and was pretty well-seasoned even by the time they switched away from 68K. The notion that RISC goes hand-in-hand with big-endian has really held them (and SPARC also) back. You are truly taking on the world with this last bit...swap!
;0)

ClassicHasClass (2016-03-05):
Thanks! :)

ClassicHasClass (2016-03-05):
In a general sense, you could argue that 10.4-10.6 with Rosetta do just that already through emulation: they compile big-endian PowerPC code to little-endian x86 code and run that, doing the byte conversion on the fly.

But if you consider emulation and dynamic recompilation cheating and you want to do this on the metal, you'll need (at minimum) a CPU that can mark tracts of memory with which endianness is in use, and an OS that understands that setting and how to manipulate it. To make it not suck, the CPU should also be able to handle multiple execution streams of differing endianness, and the OS should know how to translate calls between processes that differ in endianness. Some CPUs, including some PowerPC ones, do support per-page endianness, but they need OS support to be useful, and this is much more difficult to work with. Nowadays emulation is so much more convenient that people just throw CPU at the problem if they really have to deal with this situation.

ClassicHasClass (2016-03-05):
But array buffers were already system-endianness (for WebGL), so that wasn't really the problem.
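[The "system-endianness" behaviour of array buffers described in this comment can be observed directly from script. A minimal sketch; the helper name is mine, not something from the thread:]

```javascript
// Typed-array views read an ArrayBuffer in the host's native byte order,
// which is exactly why array buffers expose the platform's endianness.
function hostIsLittleEndian() {
  const buf = new ArrayBuffer(4);
  new Uint32Array(buf)[0] = 0x01020304;
  // On a little-endian host the least significant byte (0x04) is stored first.
  return new Uint8Array(buf)[0] === 0x04;
}
```

[On virtually all consumer hardware today this returns true, which is the de facto assumption asm.js-style code bakes in.]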
The problem was that asm.js suddenly made JavaScript care about the underlying layout of memory by imposing byte-level load-store semantics, and tools like Emscripten made it worse by not even building on big-endian platforms at all.

Sure, we can adapt, but the real victims are the big-endian platforms caught in the middle - like, say, the Xbox 360 (see dherman's analysis, which was prescient on the problem almost four years ago), which is also WebGL-capable. Since Apple is so lazy about OpenGL, and only a minority of Power Macs under 10.5 are capable of 2.0, we (TenFourFox) can say we don't support it and lose pretty much nothing, because it wouldn't have worked in most cases anyway; so we just unilaterally change JS, since only JS cares. Dave concluded the same. But the 360 has to give up one or the other: it can either do WebGL but not run little-endian asm.js-based code, or run LE asm.js-based code but break (or disable) WebGL. And you have to make that choice even if you don't have an OdinMonkey backend, because asm.js code will happily run in the interpreter too. So it really is asm.js that caused this situation; array buffers are just the mechanism.

Please note I'm not saying, as a practical matter, that it's not a good thing the issue is forced. (I'm not saying it's a good thing either, but I can see why it's not a bad thing.) But even if breaking the promise of Web interoperability and platform independence turned out to be a net win for current developers and most present-day users, it's still a broken promise.

On a personal note, as a fellow brother in Christ, and having seen your name in Mozilla stuff for as long as I've kicked around the community (getting on a decade-ish), I wish you well in your next endeavour.
I'll be flying through Auckland to visit the fiancée very soon (Air NZ has nicer seats than Qantas).

Anonymous (2016-03-05):
Is it possible to run both big-endian and little-endian code at the same time within a single operating system on one computer?

Robert (2016-03-05):
Making the Web little-endian was totally the right decision. asm.js developers don't have to worry about endianness issues; instead, a few compiler developers (including you) deal with this once and for all.

The only alternative would have been to not specify endianness for array buffers, which would have put the burden back on Web developers, who would promptly assume little-endian anyway, and you'd be as badly off or worse.
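[As an aside illustrating the trade-off the last two comments describe: JavaScript's plain typed-array views always use the host's byte order, but DataView takes an explicit endianness flag on every access, so code that must be byte-order-portable can use it regardless of platform. A minimal sketch:]

```javascript
// DataView accessors take an explicit little-endian flag, unlike
// typed-array views, which always use the host's native byte order.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

view.setUint32(0, 0xCAFEBABE, true);   // write the value little-endian
const le = view.getUint32(0, true);    // read it back little-endian
const be = view.getUint32(0, false);   // same 4 bytes, interpreted big-endian
// le is 0xCAFEBABE; be is the byte-swapped value 0xBEBAFECA.
```

[This explicit-endianness path is what the comments mean by putting the burden on Web developers: it works everywhere, but only if authors actually use it instead of assuming little-endian views.]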