Very related but self promotional—I have a hobby business selling restored Mac mini G4s. I clean all of them internally, upgrade them with 128 GB SSDs, max them out at 1 GB of RAM, put a new clock battery in, and pre-install the Mac OS 9 Lives hacked version of Mac OS 9 that runs on them. You can buy one from me here:
https://os9.shop
I don't think I'll start pre-installing System 7 since most of my customers are using Mac OS 9 (and the domain is os9.shop!), but you could certainly get a machine from me with Mac OS 9 and install System 7 yourself if you so desire.
My customers have included a lot of real businesses running legacy software who want the fastest, least intrusive, and least energy intensive Mac OS 9 desktop machine they can buy. I've sold to dentists, veterinarians, museums, and auto repair stores. You'd be amazed how many people are running Classic Mac software in 2025.
Did you have to do anything special to get the SSD to play nice with OS9? I tried adding one to a 300MHz G3 iMac and it took forever to initialize on boot and would randomly stall a lot.
I use an mSATA to IDE adapter that I buy in bulk. This is the Amazon-available equivalent of it: https://amzn.to/48qEaOm
I use only 128 GB mSATA cards from reputable brands.
I always do the following:
- Boot from the Mac OS 9 Lives 9.2.2 image (v9 of the image) by CD
- Wipe the SSD using Disk Utilities 2.1
- Restore from the CD
I will say this fails perhaps 1 out of 20 times. Hard to say how often this is an actual hardware failure versus some kind of incompatibility with the mSATA SSD since I do use a range of brands. I am always using the same adapters.
Seems more of a curiosity than something practical - in particular, System 7 running "natively" on the Mac mini G4 is missing a lot of drivers. There aren't that many situations where software runs well on System 7 that doesn't on Mac OS 9.2.2, and for the rare case where it does, emulation in something like vMac is sufficient.
> It is also my opinion Mac OS 9.2.2 is the greatest OS, and Mac OS, ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2.
I had fun with hypercard on MacOS 9. At work, even. The boss was into rapid prototyping, and I cooked up some damn productive stacks in a hurry.
It runs on the Cube and under OS 9 emulation on the new stuff.
Hypercard scripters did cool things that most users don't do today. And without those monster data centers.
Back when Java was the NextBigLanguage, we built Java development tools at KL Group/Sitraka (now a part of Quest). For version 2 of the suite of tools, we were getting rid of the nerdy configuration text file and planned on shipping a configuration wizard (yes, we called them wizards while fondling the onions we tied to our belts).
I was the Program Manager, and as usual we were very tightly constrained for time, and in the era of golden master DVDs that had to be ready to distribute at JavaOne in the Moscone Centre... Hard decisions had to be made. The team decided to work on more important features, and drop the configuration wizard from 2.0. Then I did what everyone knows is a no good, very bad, terrible thing. And although I got away with it that time, it's still a no good very bad, terrible thing:
I took my work computer home for the weekend and fired up a HyperCard "compiler" called Runtime Revolution that could make executables for Windows and Unix. Come Monday morning, we had a shippable configuration wizard. Leadership blew its top, because one of their values was, "We're a Java shop, which means we use Java to write Java tools." And after I left the company, they rewrote the configuration wizard in Java Swing.
To this day I consider firing up Electron and a complete React framework for simple tools to be a "Turing Tarpit," a place where absolutely anything you imagine is possible, but nothing of interest (in the domain of simple tools) is easy.
Thank you for writing down this memory. It would fit perfectly on https://folklore.org but unfortunately it seems that the site is no longer accepting new memories.
Yeah, when a coworker and I showed my wife the first OS X preview, she was alarmed at how long it took to shut down (I mean System 7 shut down like you just kicked the cord out). "You'll have to find something else to like about it," was my coworker's response.
And to be sure, there was/is a lot to like about OS X.
But, probably because of the lack of a kernel, etc., System 7 sits somewhere in that nether/middle region on our personal computer journey. Its rich library of functions (the Toolbox) set it apart from machines before it that might have instead had a handful of assembly routines you could "CALL" in BASIC to switch display modes, clear the screen, etc. But, as Amiga owners often reminded the Mac community in the day, no "true" preemptive multitasking…
I should say too, regarding programming, that these days your ability to write safe, threaded code is probably the highest virtue to strive for, and the hardest to perfect — at least for me (so hard to wrap my head around). It seems to separate the hacks (in the negative sense) from the programming gods. I think wistfully of those simpler times, when managing memory well and handling errors returned from the system API gracefully were the only hurdles.
"You can’t simply add a lock here, because this function can be called while the lock is already held. Taking the same lock again would cause a deadlock…"
"The way you've implemented semaphores can still allow a potential race condition. No, I have no idea how we can test that scenario, with the unit tests it may still only happen once in a million runs—or only on certain hardware…"
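The reentrancy trap in the first quote is easy to demonstrate. Here is a minimal Python sketch (my illustration, nothing to do with the codebase being quoted): a plain `Lock` cannot be taken a second time by the thread that already holds it, while an `RLock` can. Probing with `blocking=False` shows the would-be deadlock without actually hanging.

```python
import threading

def try_nested_acquire(lock) -> bool:
    """Acquire `lock`, then probe whether the SAME thread can take it again.

    With a plain Lock the nested acquire would block forever (deadlock),
    so we probe with blocking=False to demonstrate it without hanging.
    """
    with lock:
        got_it = lock.acquire(blocking=False)
        if got_it:
            lock.release()
        return bool(got_it)

print(try_nested_acquire(threading.Lock()))   # False: plain locks are not reentrant
print(try_nested_acquire(threading.RLock()))  # True: RLock counts recursive holds
```

This is exactly why "just add a lock here" fails when the function can be reached with the lock already held: the plain-lock case blocks forever instead of returning False.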
(Since I have retired I confess my memory of those hair-pulling days is getting fuzzier—thankfully.)
Threads and locks are fundamentally the wrong abstraction for most scenarios. This is explained in complementary ways in two of the finest technical books ever written, Joe Armstrong's "Programming Erlang" and Simon Marlow's "Parallel and Concurrent Programming in Haskell". I highly recommend both.
Thank you for many fond memories of playing Glider and Pararena.
There are plenty of ways to write multithreaded code these days, from actors to coroutines at the programming-interface level to using green threads directly in Go or Java. There is very little reason to resort to locks, mutexes, or semaphores outside of frameworks designed to make multithreading easier, or very specific high-performance code. (Where in the latter case it could be argued that multithreading probably adds unreasonable latency and context switching.)
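For the curious, the actor/queue style described above can be sketched in a few lines of Python (a toy illustration, not any particular framework): workers own no shared state and communicate only through queues, so no explicit locks appear in user code.

```python
import queue
import threading

def worker(tasks: "queue.Queue", results: "queue.Queue") -> None:
    """Actor-style worker: owns no shared state, only reads/writes queues."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut down
            break
        results.put(item * item)  # "process" the message

tasks: "queue.Queue" = queue.Queue()
results: "queue.Queue" = queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)               # one sentinel per worker
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The synchronization still exists, of course, but it lives inside `queue.Queue`, written once and tested by someone else, rather than being re-derived in every function that touches shared data.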
Whoever is downvoting you for speaking the truth should go stand in a corner. Or try maining BeOS for a while, to experience first-hand what happens when application programmers are forced to use threads and locks.
It’s not everything, it’s just Chrome. Chrome is 1.6GB including all its dependencies. It’s going to be slow to start on any system if those dependencies aren’t preloaded.
Most Mac software I use (I don’t use Chrome) starts quickly because the dependencies (shared libraries) are already loaded. Chrome seems to have its own little universe of dependencies which aren’t shared and so have to be loaded on startup. This is the same reason Office 365 apps are so slow.
It's not just Chrome, it's everything, though apps that have a large number of dependencies (including Chrome and the myriad Electron apps most of us use these days) are for sure more noticeable.
My M4 MacBook Pro loads a wide range of apps - including many that have no Chromium code at all in them - noticeably slower than exactly the same app on a 4 year old Ryzen laptop running Linux, despite being approximately twice as fast at running single-threaded code, having a faster SSD, and maybe 5x the memory bandwidth.
Once they're loaded they're fine, so it's not a big deal for the day to day, but if you swap between systems regularly it does give macOS the impression of being slow and lumbering.
Disabling Gatekeeper helps but even then it's still slower. Is it APFS, the macOS I/O system, the dynamic linker, the virtual memory system, or something else? I dunno. One of these days it'll bother me enough to run some tests.
Somewhere around 2011 when I switched my MBP to an SSD (back when you could upgrade the drives, and memory, yourself), Chrome opened in 1-2 bounces of the dock icon instead of 12-14 seconds.
People used to make YouTube videos of their Mac opening 15 different programs in 4-5 seconds.
Now, my Apple Silicon MacBook Air is very, very fast but at times it takes like 8-9 seconds to open a browser again.
I loved the MBP’s from that era. That was my first (easy) upgrade as well in addition to more memory. Those 5400 RPM hard drives were horrible. Also another slick upgrade you could do back then is to swap out the super drive with a caddy to have a second SSD/HDD.
It still works fine today, though I had to install Linux on it to keep it up to date.
I'm running the latest MacOS right now on a modest m4 Mini and it doesn't seem slow to me at all. I use Windows for gaming and Linux for several of my machines as well and I don't "feel" like MacOS is slow.
In any case, Chrome opens quickly on my Mac Mini, under a second when I launch it from clicking its icon in my task bar or from spotlight (which is my normal way of starting apps). When Chrome is idle with no windows, opening chrome seems even faster, almost instant.
This made me curious so I tried opening some Apple apps, and they appear to open about the same speed as Chrome.
Gui applications like Chrome or Keynote can be opened from a terminal command line using the open command so I tried timing this:
$ time open /Applications/Google\ Chrome.app
which indicated that open finished in under 0.05 seconds total. So this wasn't useful, because it appears to time only the open command itself, not the full time involved in getting the first window up.
It's always been that way. Even when I had a maxed out current-gen Mac Pro in 2008, it still launched and ran faster in Windows than MacOS.
I have seen people suggesting that it's because of app signature checks choking on Internet slowness, but 1. those are cached, so the second run should be faster, and in non-networked instances the speed is unchanged, and 2. I don't believe those were even implemented back in 2002 when I got my iMac G4, and it was likewise far quicker in Linux than in OS X.
At the time (2002), I joked that it was because the computer was running two operating systems at once: NeXTSTEP and FreeBSD.
MacOS 9 was awful, a product of a rather unpleasant era for Apple really. I wanna say through 9.2.1 maybe even through to 9.2.2 the OS had a nasty habit of corrupting your disk. Hardware-wise Apple used CMD64x based IDE controllers so when OS9 wasn't screwing with your data the hardware itself would.
There absolutely were animations e.g. when closing a Finder window, but they were much lighter weight. As far as I'm concerned System 7 was probably the zenith.
I'd rather say the zenith was 8.1 which was not very widely used. 8.5 did add some nice gimmicks like the app switcher palette but for some reason it felt way slower than 8.1.
To me it’s the opposite, System 7 crashed all the time and MacOS 9 was rock solid. System 7 was a mess until 7.6, at which point it was basically MacOS 8. And the UI was way more pleasing, the system 7 one had a 80s vibe to me.
Mac OS 9 was Apple Windows ME; too many side ports of new features into the rickety legacy core OS (Win32 / Toolbox Mac OS) and not enough attention paid to detail since the Next Big Thing was already cooking (XP / OS X).
Mac OS 9 was certainly not rock solid as far as crashes were concerned, but very much better than System 7, that was clear to me. Maybe it is my rose-tinted glasses colouring my memory, but I also remember there being far fewer small bugs (you know, the merely annoying kind) than I have today with macOS 15. There may be fewer hard crashes now, but the number of paper cuts has increased by many orders of magnitude.
I remember it crashing a lot but maybe that's because I came of age around the OS 8/9 era. IIUC OS 9 had no memory protection so it's not exactly a surprise it was fragile.
Yup. It feels like I have traded, on an average week, three hard crashes (enough to need a reboot) and five small bugs back then, with zero hard crashes and ninety minor bugs (some requiring restarting the app) today. Sometimes I feel like I would like to go back because many of the smaller bugs drive me mad in a way that never happened back then.
Well, I got my B&W G3 because MacOS 9 lunched the filesystem as it was prone to doing. SCSI drive so it wasn't that other disk corruption fun (which I went through in PC land). As far as I'm concerned MacOS 9 was mostly a bunch of paper cuts glued together. Lots of stuff that would've demoed in OSX if Apple had the time and patience.
So yeah Apple had tacked on vestigial multi-user support, an automatic system update mechanism, USB support, etc., etc. but underneath it was still the same old single user, cooperative multitasked, no memory protection OS as its predecessors. Unlike OSX, MacOS 9 (like 7 and 8 before it) still relied on the Toolbox which was a mishmash of m68k and ppc code.
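For readers who never used it, cooperative multitasking can be sketched as a toy round-robin scheduler in Python (purely illustrative; the real Toolbox worked nothing like this): each "app" runs only until it voluntarily yields, which is why one misbehaving program could freeze the whole classic Mac OS.

```python
from collections import deque

def scheduler(tasks):
    """Toy cooperative round-robin: each task is a generator that yields
    control voluntarily, like apps calling WaitNextEvent on classic Mac OS.
    If a task never yields, every other task starves; there is no
    preemption to take the CPU back."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run until the task yields
            ready.append(task)         # yielded politely: requeue it
        except StopIteration:
            pass                       # task finished
    return trace

def app(name, steps):
    """A hypothetical well-behaved app that yields after each unit of work."""
    for i in range(steps):
        yield f"{name}:{i}"            # yielding = cooperating

print(scheduler([app("Finder", 2), app("Word", 3)]))
# ['Finder:0', 'Word:0', 'Finder:1', 'Word:1', 'Word:2']
```

Replace one generator with an infinite loop that never yields and the `while` loop never reaches the other tasks again — the classic Mac "one app hangs, everything hangs" failure mode in miniature.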
Win98 was head and shoulders above System 9, from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc, etc.
To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8-9 were so bad.
98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine for the era, 2k and XP were miles ahead of W98 under mid-to-high load.
Maybe not on a Pentium, but once you hit 192MB of RAM and some 500 MHz P3/AMD k7, NT based OSes were tons better.
You only realized that upon opening a dozen IE windows. W98, even SE, would sweat. 2k would fly.
On single tasks, such as near-realtime multimedia ones (emulators/games, or video players with single-threaded decoders), W98 could be better. On multiprocessing/threading, W98 crawled against W2K, even on P4s with 256MB of RAM.
Well, Win NT is an actual operating system, and Win 98 and Classic macOS are just horribly overgrown home computer shells in an environment they should never have been exposed to.
Ahem, w98 BSOD if you sneezed hard near it. Installing a driver? BSOD. IE page fault? BSOD. 128k stack limit reached? either grind to a halt or a BSOD. And so on...
I worked at a company that was delivering a client-side app in Java launched from IE. I think we had an ActiveX plugin as the "launcher." This predated "Java Web Start." It was hysterically bad. We were targeting 32 meg Win 98 systems and they were comically slow and unstable. Most of our developers had 64 and 128 meg boxes with NT 4.0. I mostly worked on the server side stuff, and used them as a terminal into the Solaris and Linux systems.
System 6 had menu blinks, zoom animations (with rect XORs no less), and button blinks when you used keyboard completion. Mac was the original "wasteful animation" OS.
This is feedback. You press a shortcut; how do you know it worked or not? You do because the corresponding menu rapidly blinked. Or you double click an icon and suddenly a rectangle appears in another part of the screen. Is this related? Here the animation shows that yes, the icon transformed into a window.
On the other hand on my mobile Firefox I wait seemingly a half second each time I long press a link, because there is an animation that zooms a context menu. It does not zoom from the link, which could be justified maybe, but always in the same place in the center of the screen. This animation is meaningless and thus wasteful.
That xor effect was under FVWM too for moving and resizing windows and doing an xor wireframe was MUCH faster than a full repaint.
If you had no X11 acceleration (xvesa for instance), that mode was magnitudes faster than watching your whole browser window repaint on a resize lasting more than 3 seconds on a Pentium.
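The reason the XOR trick is so cheap: XOR with the same mask is its own inverse, so drawing the wireframe a second time erases it and restores the pixels underneath, with no backing store at all. A toy Python sketch (a hypothetical framebuffer as a list of rows, just to show the invariant):

```python
def xor_rect(fb, x, y, w, h, mask=0xFF):
    """XOR a rectangle outline into framebuffer `fb` (a list of rows).
    Because a ^ mask ^ mask == a, calling this twice with the same
    arguments erases the outline and restores whatever was underneath,
    which is the trick behind cheap drag/resize wireframes."""
    for cx in range(x, x + w):
        fb[y][cx] ^= mask            # top edge
        fb[y + h - 1][cx] ^= mask    # bottom edge
    for cy in range(y + 1, y + h - 1):
        fb[cy][x] ^= mask            # left edge
        fb[cy][x + w - 1] ^= mask    # right edge

fb = [[row * 16 + col for col in range(8)] for row in range(8)]
before = [row[:] for row in fb]
xor_rect(fb, 1, 1, 5, 4)   # draw the wireframe
xor_rect(fb, 1, 1, 5, 4)   # draw it again: screen is restored
print(fb == before)        # True
```

Moving the wireframe one pixel therefore costs two thin outlines' worth of writes, versus repainting (and recompositing) the entire window contents on every mouse movement.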
I like System 6: the most complete version of the “real” classic Mac OS before System 7 started to be more “modern.” Dead simple, not a lot of new abstractions and metaphors layered on.
It still irritates me that command + N does a new window, not a new folder. I wouldn’t even have used that shortcut much, as I was a kid and it’s been 25 years.
I enjoyed how quick it was on my G4 iMac (Mac OS X 10.1/10.2 was a total dog) but it was never stable enough for my liking. Forced to choose between fast and unstable (OS9) or slow and steady (OS X), I chose to install Yellow Dog Linux instead (reject the premise).
> In my case, first I tried using the latest Python 3.13.9 both from Windows 7 (bad idea due to resource fork loss) and macOS 10.14.6 Mojave, but neither worked: it seems like that version of Python was just too new. I then retried with Python 3.8.10 instead (which I chose thinking it might be more period-appropriate for the script's age) on Mojave, which worked flawlessly.
Ah, classic Python. Removing features [0] and breaking perfectly working software just because the feature is old, ugly, and not widely used.
Why not C89? Try to make it as portable as possible. The software is intended for preservation of old computers and their software. Would make sense for the software to be as portable as possible.
Who knows, maybe someone would want to run it on vintage Mac hardware?
This is a good point. For a while, I was writing vintage Mac hacking tools in ANSI C [1].
The answer is that it's too hard. I want to see the Happy Mac on as much exotic hardware as possible, but my day job keeps me very busy, and I want to enjoy my hobby without tedium and toil. I prefer to use a language that is "easier to write, easier to read, easier to understand, easier to maintain" [2].
It already exists. But if anything, Free Pascal with Lazarus for classic Mac ppc would be ideal.
Make that MacSSL port available under FPC and you could compete with the rest.
That's a bit reductive. We're talking about something only relevant to retrocomputing (MacOS 9 is unsupported as of early 2002 per Wikipedia) where they could scarcely find evidence of anyone using it at all. And taking these things out of the standard library means that core devs can wash their hands of it. (It also means that a large majority of users can avoid downloading and storing things that will always be useless to them; but nobody seems to care about that sort of thing any more, much to my chagrin.)
StarMax series (and the 4400) seemed to be about as close to CHRP as we got. My off-brand StarMax clone (PowerCity) had a PS/2 and an ISA port. Ran BeOS well, and had a quirk that I could hear a tight loop on the speaker.
AFAIK most StarMax systems that were released (a prototype exists of a CHRP StarMax model) are based on the Tanzania / LPX-40 design, which is mostly a traditional PCI PowerMac[1], albeit with oddities like support for PC style floppy drives. PS/2 is handled by the CudaLite microcontroller which presents it to the OS as ADB devices for example. I've not heard of a version with ISA slots, although I assume you could just have a PCI to ISA bridge chip, even if MacOS presumably wouldn't do anything with it.
Right, I think those were the closest we got to the CHRP standard, as they moved the platform toward PC-style floppies, PS/2, ATX PSU and even more generic "platform" stuff than most clones. I'm fairly sure I had an ISA slot, I do remember trying to get a bargain bin NE2K card working in mine under linux (it didn't work). Definitely did nothing under OS 8/9.
The powercity models were interesting, because they came out after Apple revoked Motorola's clone license. A German company, ComJet, bought up the boards and sold unlicensed clones cheap. Case was slightly different, but otherwise they corresponded to StarMax models (fairly certain they were identical but may have been last revision boards).
Kinda sorta. The systems that the "MacOS on CHRP" thing ran on had a very strange looking device tree, with some bizarre combination of PC and Mac peripherals.
Refer to the "Macintosh Technology in the Common Hardware Reference Platform" book for more information, if you're curious about the Mac IO pieces.
The Motorola Yellowknife board seems remarkably similar to this system, as well as the IBM Long Trail system (albeit with Long Trail using a VLSI Golden Gate versus a MPC106 memory controller). Both of them use W83C553 southbridges and PC87307 Super I/O controllers.
The architecture is kind of weird, but the schematics on NXP's website can probably elucidate a bit more on the system's design.
A fun "do-it-yourself" question for people who've always wanted to learn about the baroque architecture of the PowerPC Mac and the classic Mac OS: where is hardware support for specific models implemented?
I have an iMac G4 1.25 GHz. Originally, it was a 1GHz, but I swapped out the motherboard for a later model. For a while I've been wondering if I would have been better off with an earlier motherboard capable of booting OS 9 natively. Compared with using OS X's classic mode, this would omit the overhead of running a whole other OS and leave me with more resources to run OS 9 apps and games. I don't get a whole lot of use out of the earlier OS X software that I have on there...
Maybe in the future I won't have to make that choice! I'd much rather dual boot OS 9 off a different partition, but that hasn't been supported on the 1-1.25GHz models (Thanks Steve...) and no one has gotten it working properly. Maybe now it will be possible! A man can dream...
Slight nitpick: this isn't "natively booting" System 7, nor is any other PowerPC Mac. This is simply an emulator, no different than using vMac, qemu, etc. - it just happens to be an emulator that Apple has been shipping with PowerPC Macs since it introduced the Power Mac 6100, which "natively" booted System 7.1.2.
Nonetheless, this is an impressive accomplishment.
I remember that yes, expensive operations could take a while, but the interface was much faster than my M1 Max Studio, for the sole reason that you do not have to wait for animations.
And not just because animations were sparse: they also never blocked input. So, for example, if you could see where a new element would appear, you could click there DURING the animation and start typing, and no input would be lost. That meant apps you used every day and became accustomed to would just zip past at light speed, because there was no do-wait do-wait pipeline.
The animations were there, but they were frame-based, with the number of frames carefully calculated to show UI state changes that were relevant. For example, when you would open a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here" but not dizzyingly egregious the way it became in Aqua.
Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that, with dynamic loading, hard drive speeds, and ObjC's ubiquitous dynamic dispatch, OS X would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
There were also quite a few tricks used all the way from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged instead of buffering the entire window, everything underneath, the shadow mask - and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.
You could very well have do-wait-do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but it was the exception rather than the rule.
> Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that, with dynamic loading, hard drive speeds, and ObjC's ubiquitous dynamic dispatch, OS X would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
Done exactly this myself to conceal ugly inconsistent lags - I don’t think it is that uncommon an idea.
I think that ObjC's dynamic dispatch is reasonably fast. I remember reading something about being able to do millions of dynamic dispatch calls per second (so less than 1 µs per call) a long time ago (2018-ish?), but I can't think how to find it. The best I could come up with is [1], which benchmarks it as 2.8 times faster than a Python call, and something like 20% slower than Swift's static calling. In the Aqua time frame I think it would not have been slow enough to need animations to cover for it.
My most durable memory is all the reboots due to programs crashing. Didn't help that a null pointer deref required a system reboot - or that teenage me was behind the keyboard on that front.
Preemption is a very nice OS feature it turns out (particularly once multi-core rolled around). Still, I recall os 8 and 9 being generally snappier than windows 98 (and a lot snappier than early builds of OSX)
How does preemption work on a processor that barely has interrupts and has no way to recover state after a page fault, in an OS that has to fit into a couple dozen kilobytes of ROM?
Later Mac ROMs were 512KB, same with the later Amiga Kickstarts (3.x) That was a lot of space for the late 80's and early 90's. Interrupts were supported (8, if I recall.) And 68000 machines didn't support virtual memory until the 68010 and later, so no issues with page faults.
I still remember the day teenage me got an Amiga 500 with a whopping 512K of RAM, and witnessed the power of multitasking, way back in 1988.
The Amiga had preemptive multithreading with multiple task priorities on the original MC68000. Preemption is distinct from memory protection or paging.
There were plenty of preemptive multitasking systems for the original 68000, and regardless page fault recovery was fixed from the 010 onwards.
And certainly was very not a problem on PowerPC which TFA is about.
Also, I'm not sure how you can say the 68000 "barely has interrupts"; I don't even know what you're on about.
MacOS was broken because Jobs forced it to be that way after he was kicked off the Lisa team. Which had a preemptive multitasking operating system on the 68000.
Preemptive multitasking is unrelated to page faults. And the 68k handled page faults just fine starting from the 68010.
Space constraints were certainly limiting on the earlier models, but later ones were plenty capable. Apple itself shipped a fully multitasking, memory protected OS for various 68k Mac models.
By the late 80s, the only reason the Macintosh system was still single-tasking with no protected memory was compatibility with existing apps, and massive technical debt.
One of my early Macs was a Performa 638CD with no dedicated FPU. I had upgraded to a Performa 6400 (which felt like an absolute dog despite its size) but finally had an opportunity to move to the PowerComputing PowerTower Pro 225. What a beast! I hate to say it, but it was probably my favorite Mac I'd ever owned before the first iMac.
The Megahertz wars in the 1990s made it really difficult to understand relative performance across even the same ISA like this, and I think computers with the 603 CPU were a bit of a wrench in people's perception of the Mac.
The 180 or 200MHz 603e with 16k L1 cache in that Performa 6400 wasn't slow by any stretch, but it probably didn't have L2 cache. Coupled with the gradual transition to PPC native code of the OS and apps, these machines were often a little mismatched to expectations and realities of the code.
Meanwhile that PowerTower had a 604e with 32/32k L1 and 1MB L2 cache. That was a fast flier with a superscalar and out of order pipeline more comparable to the Pentium Pro and PII.
I have a PowerCenter Pro 210 in my basement right now! It's not quite as nice as the newer architecture in the PowerTower Pro machines, but it runs MacOS 7.6.1 wonderfully. It is more than enough for classic Mac games of that era - and a joy to use.
The later PowerCenter Pro’s could run with a 60 MHz FSB whereas the PowerTower Pro’s were usually 45-50 MHz FSB. There are a variety of tasks where my PowerCenter Pro 240 outruns my PowerTower Pro 250 for precisely that reason.
As a European, Classic Macs (and current ones) were just for arts/writing people. If you knew what CMYK was in order to print a newspaper, you were a Mac user.
I emulated Mac OS 7 back in the XP days, and I was impressed that you could get far faster speeds emulating the m68k (and partially the PPC) than emulating Intel x86, without any hardware acceleration (Intel VT) or kernel modules trapping x86 instructions to run them at native speed.
I mean, PPC and m68k chips were much easier to emulate than x86 itself.
On software, Classic Mac users can resort to IRC and Gopher clients: visit the public https://bitlbee.org IRC servers to connect 'modern' accounts and be proxied to a Mac IRC client. And for Gopher, you have gopher://hngopher.com, gopher://magical.fish and the like.
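Gopher remains viable on such old machines partly because the protocol is almost nothing: open TCP port 70, send the selector plus CRLF, read until the server closes. A minimal Python sketch of a fetcher (my illustration; not tested against the servers named above):

```python
import socket

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    """Fetch one Gopher selector. The whole protocol is:
    send selector + CRLF, then read until the server closes."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("latin-1") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. gopher_fetch("hngopher.com") should return the server's root menu,
# one tab-separated item per line, terminated by a lone "." line.
```

No TLS, no headers, no content negotiation, which is exactly why a classic Mac with a bare TCP stack can speak it.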
Sadly you don't have an easy TLS library like Amiga users have (AmiSSL), where even the modern web can work (and IRC over TLS, Gemini...).
Although... if Amiga m68k emulators run fast with the Rosetta-like tech for PPC... you could just fire up Workbench and then AmiSSL. Crude, but it would work. If not, here in the Apple subdir you can get, maybe, some TLS-enabled browsers:
Ardi Executor. There's a recent fork on GitHub. You can run m68k binaries seamlessly. You don't need proprietary Mac OS parts, just the software.
But if you are a software preserver, having a libre option to run legacy media is always good for historical reasons. I am a daily libre software user, but I emulate ancient machines with proprietary stuff just out of curiosity. As it's not my personal computing device, I find that fine; it's just a historical toy. And, well, if you want to create libre engines for old Mac games (ScummVM, SDL ports...), you certainly need to at least emulate the old OSes and run the proprietary game in order to compare the output for correctness.
Also, a "Mac" for x86 already exists: Rhapsody DR2, which could run Classic Mac software and NeXT software too. It was like a blend of the two. OS X is like the NeXTSTEP concept 2.0, with few traces of Classic Mac.
Qemu will run it fine.
Rhapsody DR2 is not a solution for classic Mac OS on x86. Lunduke writes:
"Unfortunately [the Blue Box] was only available on PowerPC versions of Rhapsody"
Another option is Advanced Mac Substitute. It doesn't run everything, but what it does run it runs really well. One of my goals is that you can use a 68K Mac application (e.g. MacPaint) as part of your personal computing workflow, if you wish.
Very related but self promotional—I have a hobby business selling restored Mac mini G4s. I clean all of them internally, upgrade them with 128 GB SSDs, max them out at 1 GB of RAM, put a new clock battery in, and pre-install the Mac OS 9 Lives hacked version of Mac OS 9 that runs on them. You can buy one from me here:
https://os9.shop
I don't think I'll start pre-installing System 7 since most of my customers are using Mac OS 9 (and the domain is os9.shop!), but you could certainly get a machine from me with Mac OS 9 and install System 7 yourself if you so desire.
My customers have included a lot of real businesses running legacy software who want the fastest, least intrusive, and least energy intensive Mac OS 9 desktop machine they can buy. I've sold to dentists, veterinarians, museums, and auto repair stores. You'd be amazed how many people are running Classic Mac software in 2025.
Did you have to do anything special to get the SSD to play nice with OS9? I tried adding one to a 300MHz G3 iMac and it took forever to initialize on boot and would randomly stall a lot.
I use an mSATA-to-IDE adapter that I buy in bulk. This is the Amazon-available equivalent of it: https://amzn.to/48qEaOm
I use only 128 GB mSATA cards from reputable brands.
I always do the following:
- Boot from the Mac OS 9 Lives 9.2.2 image (v9 of the image) by CD
- Wipe the SSD using Disk Utilities 2.1
- Restore from the CD
I will say this fails perhaps 1 out of 20 times. Hard to say how often this is an actual hardware failure versus some kind of incompatibility with the mSATA SSD since I do use a range of brands. I am always using the same adapters.
Seems more of a curiosity than something practical - in particular, System 7 "native" on the Mac mini G4 is missing a lot of drivers. There aren't that many situations where software runs well on System 7 that doesn't on Mac OS 9.2.2, and for the rare case that it does, emulation in something like vMac is sufficient.
Wow, this is neat!!! Putting it on my list to order sometime soon!
> It is also my opinion Mac OS 9.2.2 is the greatest OS, and Mac OS, ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2.
I had fun with HyperCard on Mac OS 9. At work, even. The boss was into rapid prototyping, and I cooked up some damn productive stacks in a hurry.
It runs on the Cube and under OS 9 emulation on the new stuff.
HyperCard scripters did cool things that most users don't do today. And without those monster data centers.
Back when Java was the NextBigLanguage, we built Java development tools at KL Group/Sitraka (now a part of Quest). For version 2 of the suite of tools, we were getting rid of the nerdy configuration text file and planned on shipping a configuration wizard (yes, we called them wizards while fondling the onions we tied to our belts).
I was the Program Manager, and as usual we were very tightly constrained for time, and in the era of golden master DVDs that had to be ready to distribute at JavaOne in the Moscone Centre... Hard decisions had to be made. The team decided to work on more important features, and drop the configuration wizard from 2.0. Then I did what everyone knows is a no good, very bad, terrible thing. And although I got away with it that time, it's still a no good very bad, terrible thing:
I took my work computer home for the weekend and fired up a HyperCard "compiler" called Runtime Revolution that could make executables for Windows and Unix. Come Monday morning, we had a shippable configuration wizard. Leadership blew its top, because one of their values was, "We're a Java shop, which means we use Java to write Java tools." And after I left the company, they rewrote the configuration wizard in Java Swing.
https://en.wikipedia.org/wiki/LiveCode_(company)
To this day I consider firing up Electron and a complete React framework for simple tools to be a "Turing Tarpit," a place where absolutely anything you imagine is possible, but nothing of interest (in the domain of simple tools) is easy.
Thank you for writing down this memory. It would fit perfectly on https://folklore.org but unfortunately it seems that the site is no longer accepting new memories.
Not only that, everything felt _snappy_. No wasteful animations to add 0.28 ms to every interaction.
Oh, gotta be super snappy on a Mac mini G4!
Yeah, when a coworker and I showed my wife the first OS X preview, she was alarmed at how long it took to shut down (I mean System 7 shut down like you just kicked the cord out). "You'll have to find something else to like about it," was my coworker's response.
And to be sure, there was/is a lot to like about OS X.
But, probably because of the lack of a kernel, etc., System 7 sits somewhere in that nether/middle region of our personal computer journey. Its rich library of functions (the Toolbox) set it apart from machines before it, which might instead have had a handful of assembly routines you could "CALL" from BASIC to switch display modes, clear the screen, etc. But, as Amiga owners often reminded the Mac community in the day, no "true" preemptive multitasking…
I should say too, regarding programming: these days your ability to write safe, threaded code is probably the highest virtue to strive for, and the hardest to perfect, at least for me (so hard to wrap my head around). It seems to separate the hacks (in the negative sense) from the programming gods. I think wistfully of those simpler times when managing memory well and gracefully handling errors returned from the system API were the only hurdles.
"You can’t simply add a lock here, because this function can be called while the lock is already held. Taking the same lock again would cause a deadlock…"
"The way you've implemented semaphores can still allow a potential race condition. No, I have no idea how we can test that scenario, with the unit tests it may still only happen once in a million runs—or only on certain hardware…"
(Since I have retired, I confess my memory of those hair-pulling days is getting fuzzier—thankfully.)
Threads and locks are fundamentally the wrong abstraction for most scenarios. This is explained in complementary ways in two of the finest technical books ever written, Joe Armstrong's "Programming Erlang" and Simon Marlow's "Parallel and Concurrent Programming in Haskell". I highly recommend both.
Thank you for many fond memories of playing Glider and Pararena.
There are plenty of ways to write multithreaded code these days, from actors to coroutines at the programming-interface level, to using green threads directly in Go or Java. There is very little reason to resort to using locks, mutexes, or semaphores outside of frameworks designed to make multithreading easier, or very specific high-performance code. (Where in the latter case it could be argued that multithreading probably adds unreasonable latency and context switching.)
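To make that concrete, here's a minimal Python sketch (illustrative only, not tied to any framework named above) of the message-passing style: workers coordinate over a thread-safe queue, and application code never takes a lock explicitly.

```python
# Message passing via a thread-safe queue instead of explicit locks.
# Each worker pulls jobs from the queue; Queue handles all the
# synchronization internally, so user code takes no locks at all.
import threading
import queue

jobs: "queue.Queue" = queue.Queue()
results: "queue.Queue" = queue.Queue()

def worker() -> None:
    while True:
        item = jobs.get()
        if item is None:          # sentinel: shut down this worker
            break
        results.put(item * item)  # "process" the job

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):
    jobs.put(n)
for _ in threads:
    jobs.put(None)                # one shutdown sentinel per worker
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The race-prone shared state lives entirely inside the queue, which is the same idea Erlang mailboxes and Go channels push to its logical conclusion.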
Whoever is downvoting you for speaking the truth should go stand in a corner. Or try maining BeOS for a while, to experience first-hand what happens when application programmers are forced to use threads and locks.
I don't understand why it takes 5 seconds for Chrome to open on my MBP while it's near instant on my Linux and Windows PC.
Why is everything so slow on new macOS?
It’s not everything, it’s just Chrome. Chrome is 1.6GB including all its dependencies. It’s going to be slow to start on any system if those dependencies aren’t preloaded.
Most Mac software I use (I don’t use Chrome) starts quickly because the dependencies (shared libraries) are already loaded. Chrome seems to have its own little universe of dependencies which aren’t shared and so have to be loaded on startup. This is the same reason Office 365 apps are so slow.
It's not just Chrome, it's everything, though apps that have a large number of dependencies (including Chrome and the myriad Electron apps most of us use these days) are for sure more noticeable.
My M4 MacBook Pro loads a wide range of apps - including many that have no Chromium code at all in them - noticeably slower than exactly the same app on a 4 year old Ryzen laptop running Linux, despite being approximately twice as fast at running single-threaded code, having a faster SSD, and maybe 5x the memory bandwidth.
Once they're loaded they're fine, so it's not a big deal for the day to day, but if you swap between systems regularly it does give macOS the impression of being slow and lumbering.
Disabling Gatekeeper helps but even then it's still slower. Is it APFS, the macOS I/O system, the dynamic linker, the virtual memory system, or something else? I dunno. One of these days it'll bother me enough to run some tests.
Somewhere around 2011, when I switched my MBP to an SSD (back when you could upgrade the drives, and memory, yourself), Chrome opened in 1-2 bounces of the dock icon instead of 12-14 seconds.
People used to make YouTube videos of their Mac opening 15 different programs in 4/5 seconds
Now, my Apple Silicon MacBook Air is very, very fast but at times it takes like 8-9 seconds to open a browser again.
I loved the MBP’s from that era. That was my first (easy) upgrade as well in addition to more memory. Those 5400 RPM hard drives were horrible. Also another slick upgrade you could do back then is to swap out the super drive with a caddy to have a second SSD/HDD.
It still works fine today, though I had to install Linux on it to keep it up to date.
I'm running the latest MacOS right now on a modest m4 Mini and it doesn't seem slow to me at all. I use Windows for gaming and Linux for several of my machines as well and I don't "feel" like MacOS is slow.
In any case, Chrome opens quickly on my Mac Mini, under a second when I launch it from clicking its icon in my task bar or from spotlight (which is my normal way of starting apps). When Chrome is idle with no windows, opening chrome seems even faster, almost instant.
This made me curious so I tried opening some Apple apps, and they appear to open about the same speed as Chrome.
GUI applications like Chrome or Keynote can be opened from a terminal using the open command, so I tried timing that. It indicated that open finished in under 0.05 seconds total, so this wasn't useful: it appears to time only part of the work involved in getting the first window up.
Do you by chance still run an Intel version of Chrome on an Apple Silicon device?
Our work laptops have antivirus and other verification turned on which impose a 4-16x penalty on IO.
The cpu, memory, and ssd are blazing fast. Unfortunately they are hamstrung by bad software configuration.
The better question is why Chrome is so much slower and more of a battery drainer than Safari on a Mac.
It's always been that way. Even when I had a maxed out current-gen Mac Pro in 2008, it still launched and ran faster in Windows than MacOS.
I have seen people suggesting that it's because of app signature checks choking on Internet slowness, but 1. those are cached, so the second run should be faster, and in non-networked instances the speed is unchanged, and 2. I don't believe those were even implemented back in 2002 when I got my iMac G4, and it was likewise far quicker in Linux than in OS X.
At the time (2002), I joked that it was because the computer was running two operating systems at once: NeXTSTEP and FreeBSD.
MacOS 9 was awful, a product of a rather unpleasant era for Apple, really. I wanna say through 9.2.1, maybe even through 9.2.2, the OS had a nasty habit of corrupting your disk. Hardware-wise, Apple used CMD64x-based IDE controllers, so when OS 9 wasn't screwing with your data, the hardware itself would.
There absolutely were animations e.g. when closing a Finder window, but they were much lighter weight. As far as I'm concerned System 7 was probably the zenith.
I'd rather say the zenith was 8.1 which was not very widely used. 8.5 did add some nice gimmicks like the app switcher palette but for some reason it felt way slower than 8.1.
8.1 was peak MacOS Classic for me as well. 8.5 was like Windows 98. Just added stuff that made it slower.
To me it’s the opposite: System 7 crashed all the time and MacOS 9 was rock solid. System 7 was a mess until 7.6, at which point it was basically MacOS 8. And the UI was way more pleasing; the System 7 one had an '80s vibe to me.
Mac OS 9 was Apple's Windows ME: too many side-ports of new features into the rickety legacy core OS (Win32 / Toolbox Mac OS), and not enough attention paid to detail, since the Next Big Thing was already cooking (XP / OS X).
Mac OS 9 was certainly not rock solid as far as crashes were concerned, but it was very much better than System 7, that was clear to me. Maybe it is my rose-tinted glasses colouring my memory, but I also remember that there were far fewer small bugs, you know, the just-annoying kind, than I have today with macOS 15. There may be fewer hard crashes now, but the number of paper cuts has increased by many orders of magnitude.
I remember it crashing a lot but maybe that's because I came of age around the OS 8/9 era. IIUC OS 9 had no memory protection so it's not exactly a surprise it was fragile.
Yup. It feels like I have traded, on an average week, three hard crashes (enough to need a reboot) and five small bugs back then, with zero hard crashes and ninety minor bugs (some requiring restarting the app) today. Sometimes I feel like I would like to go back because many of the smaller bugs drive me mad in a way that never happened back then.
Well, I got my B&W G3 because MacOS 9 lunched the filesystem as it was prone to doing. SCSI drive so it wasn't that other disk corruption fun (which I went through in PC land). As far as I'm concerned MacOS 9 was mostly a bunch of paper cuts glued together. Lots of stuff that would've demoed in OSX if Apple had the time and patience.
So yeah, Apple had tacked on vestigial multi-user support, an automatic system update mechanism, USB support, etc., etc., but underneath it was still the same old single-user, cooperatively multitasked, no-memory-protection OS as its predecessors. Unlike OS X, MacOS 9 (like 7 and 8 before it) still relied on the Toolbox, which was a mishmash of m68k and PPC code.
7.6.x was pretty cool
W95 and W98 weren't much better until W98SE. Linux distros were rough but mega-stable.
Win98 was head and shoulders above System 9, from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc, etc.
To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8-9 were so bad.
98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine of the era, 2k and XP were miles ahead of W98 under mid-to-high load.
Maybe not on a Pentium, but once you hit 192MB of RAM and some 500 MHz P3/AMD k7, NT based OSes were tons better.
You only realized that upon opening a dozen IE windows. W98, even SE, would sweat. 2k would fly.
On single tasks, such as near-realtime multimedia ones like emulators/games or video players with single-threaded decoders, W98 could be better. On multiprocessing/multithreading, W98 crawled against W2K, even on P4s with 256MB of RAM.
Well, Win NT is an actual operating system, and Win 98 and Classic macOS are just horribly overgrown home computer shells in an environment they should never have been exposed to.
And yet, OS 8 and OS 9 couldn't even match that joke.
Ahem, W98 BSOD'd if you sneezed hard near it. Installing a driver? BSOD. IE page fault? BSOD. 128k stack limit reached? Either grind to a halt or a BSOD. And so on...
I worked at a company that was delivering a client-side app in Java launched from IE. I think we had an ActiveX plugin as the "launcher." This predated "Java Web Start." It was hysterically bad. We were targeting 32 meg Win 98 systems and they were comically slow and unstable. Most of our developers had 64 and 128 meg boxes with NT 4.0. I mostly worked on the server side stuff, and used them as a terminal into the Solaris and Linux systems.
> To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8 and 9 were so bad.
Win 98SE and Mac OS 9 were on par. Ditto System 7.5.3 and Windows 95 OSR2.
I disagree, and gave the technical reasons. So now we're just going into opinion, which I'm not interested in.
Either way, you're welcome to believe what you like.
System 6 had menu blinks, zoom animations (with rect XORs no less), and button blinks when you used keyboard completion. Mac was the original "wasteful animation" OS.
This is feedback. You press a shortcut; how do you know it worked or not? You do because the corresponding menu rapidly blinked. Or you double click an icon and suddenly a rectangle appears in another part of the screen. Is this related? Here the animation shows that yes, the icon transformed into a window.
On the other hand on my mobile Firefox I wait seemingly a half second each time I long press a link, because there is an animation that zooms a context menu. It does not zoom from the link, which could be justified maybe, but always in the same place in the center of the screen. This animation is meaningless and thus wasteful.
That XOR effect was in FVWM too, for moving and resizing windows, and drawing an XOR wireframe was MUCH faster than a full repaint.
If you had no X11 acceleration (xvesa, for instance), that mode was orders of magnitude faster than watching your whole browser window repaint on a resize, which could last more than 3 seconds on a Pentium.
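A tiny Python sketch (an illustrative toy framebuffer, not actual X11 or QuickDraw code) shows why the XOR trick is so cheap: drawing the outline a second time restores the original pixels, so erasing a drag rectangle needs no saved background buffer at all.

```python
# Why XOR outlines are cheap: XORing the same rectangle border twice
# restores the original pixels, so no background needs to be buffered.
W, H = 8, 6
framebuffer = [[(x * 7 + y * 13) % 256 for x in range(W)] for y in range(H)]
original = [row[:] for row in framebuffer]

def xor_rect(fb, x0, y0, x1, y1, mask=0xFF):
    # XOR only the 1-pixel border of the rectangle, not its interior.
    for x in range(x0, x1 + 1):
        fb[y0][x] ^= mask
        fb[y1][x] ^= mask
    for y in range(y0 + 1, y1):
        fb[y][x0] ^= mask
        fb[y][x1] ^= mask

xor_rect(framebuffer, 1, 1, 6, 4)   # draw the drag outline
assert framebuffer != original      # outline is now visible
xor_rect(framebuffer, 1, 1, 6, 4)   # draw again = erase
assert framebuffer == original      # background restored, nothing buffered
print("XOR draw/erase restored the framebuffer")
```

Only the border pixels are touched, twice, versus compositing and repainting the whole window contents on every mouse movement.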
HyperCard is one of my all time favourite memories of Mac OS.
I like System 6: the most complete version of the “real” classic Mac OS before System 7 started to be more “modern.” Dead simple, not a lot of new abstractions and metaphors layered on.
I kind of wish there was a version of System 6 without MultiFinder. Classic Mac OS clearly wasn’t built with multi-tasking in mind.
You could turn off MultiFinder in System 6, no problem. It wasn't until System 7 that it was fully baked in.
You might enjoy decker:
https://internet-janitor.itch.io/decker
> Mac OS 9.2.2 is the greatest OS
It still irritates me that command + N does a new window, not a new folder. I wouldn’t even have used that shortcut much, as I was a kid and it’s been 25 years.
1. Go to the App Shortcuts subsection of the Shortcuts section of the Keyboard preferences panel in System Preferences.
2. Add an arbitrary keyboard shortcut (perhaps ⌥⌘N) for the "New Finder Window" Finder menu command.
3. Add your desired keyboard shortcut (⌘N) for the "New Folder" Finder menu command.
4. Restart Finder.
Thank you!
FYI, Trello (or one of the many clones of it) can be used for similar purposes.
I enjoyed how quick it was on my G4 iMac (Mac OS X 10.1/10.2 was a total dog) but it was never stable enough for my liking. Forced to choose between fast and unstable (OS9) or slow and steady (OS X), I chose to install Yellow Dog Linux instead (reject the premise).
> In my case, first I tried using the latest Python 3.13.9 both from Windows 7 (bad idea due to resource fork loss) and macOS 10.14.6 Mojave, but neither worked: it seems like that version of Python was just too new. I then retried with Python 3.8.10 instead (which I chose thinking it might be more period-appropriate for the script's age) on Mojave, which worked flawlessly.
Ah, classic Python. Removing features [0] and breaking perfectly working software just because the feature is old, ugly, and not widely used.
[0] https://github.com/elliotnunn/tbxi/issues/1
Max frustrating. If I were writing tbxi again it would be in Go.
Why not C89? Try to make it as portable as possible. The software is intended for preservation of old computers and their software. Would make sense for the software to be as portable as possible.
Who knows, maybe someone would want to run it on vintage Mac hardware?
This is a good point. For a while, I was writing vintage Mac hacking tools in ANSI C [1].
The answer is that it's too hard. I want to see the Happy Mac on as much exotic hardware as possible, but my day job keeps me very busy, and I want to enjoy my hobby without tedium and toil. I prefer to use a language that is "easier to write, easier to read, easier to understand, easier to maintain" [2].
[1] https://github.com/elliotnunn/mac-rom/blob/master/Tools/Tool... [2] https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
It already exists. But if anything, Free Pascal with Lazarus for classic Mac PPC would be ideal. Make that MacSSL port available under FPC and then you can compete with the rest.
I commented on the GitHub issue rather than here, if you're interested. I'd like to help make this work.
That's a bit reductive. We're talking about something only relevant to retrocomputing (MacOS 9 is unsupported as of early 2002 per Wikipedia) where they could scarcely find evidence of anyone using it at all. And taking these things out of the standard library means that core devs can wash their hands of it. (It also means that a large majority of users can avoid downloading and storing things that will always be useless to them; but nobody seems to care about that sort of thing any more, much to my chagrin.)
https://github.com/python/cpython/issues/83534
Misread as “Mac mini M4” and was going to be _very_ impressed.
Honestly this is still pretty insane.
StarMax series (and the 4400) seemed to be about as close to CHRP as we got. My off-brand StarMax clone (PowerCity) had a PS/2 and an ISA port. Ran BeOS well, and had a quirk that I could hear a tight loop on the speaker.
AFAIK most StarMax systems that were released (a prototype exists of a CHRP StarMax model) are based on the Tanzania / LPX-40 design, which is mostly a traditional PCI PowerMac[1], albeit with oddities like support for PC style floppy drives. PS/2 is handled by the CudaLite microcontroller which presents it to the OS as ADB devices for example. I've not heard of a version with ISA slots, although I assume you could just have a PCI to ISA bridge chip, even if MacOS presumably wouldn't do anything with it.
[1] https://cdn.preterhuman.net/texts/computing/apple_hardware_d...
Right, I think those were the closest we got to the CHRP standard, as they moved the platform toward PC-style floppies, PS/2, ATX PSUs, and even more generic "platform" stuff than most clones. I'm fairly sure I had an ISA slot; I do remember trying to get a bargain-bin NE2K card working in mine under Linux (it didn't work). Definitely did nothing under OS 8/9.
The powercity models were interesting, because they came out after Apple revoked Motorola's clone license. A German company, ComJet, bought up the boards and sold unlicensed clones cheap. Case was slightly different, but otherwise they corresponded to StarMax models (fairly certain they were identical but may have been last revision boards).
Kinda sorta. The systems that the "MacOS on CHRP" thing ran on had a very strange-looking device tree, with some bizarre combination of PC and Mac peripherals.
Refer to the "Macintosh Technology in the Common Hardware Reference Platform" book for more information, if you're curious about the Mac IO pieces.
The Motorola Yellowknife board seems remarkably similar to this system, as does the IBM Long Trail system (albeit with Long Trail using a VLSI Golden Gate versus an MPC106 memory controller). Both of them use W83C553 southbridges and PC87307 Super I/O controllers.
The architecture is kind of weird, but the schematics on NXP's website can probably elucidate a bit more on the system's design.
A fun "do-it-yourself" question for people who've always wanted to learn about the baroque architecture of the PowerPC Mac and the classic Mac OS: where is hardware support for specific models implemented?
In concentrically encrusted layers
I have an iMac G4 1.25 GHz. Originally it was a 1 GHz, but I swapped out the motherboard for a later model. For a while I've been wondering if I would have been better off with an earlier motherboard capable of booting OS 9 natively. Compared with using OS X's Classic mode, that would avoid the overhead of running a whole other OS and leave me with more resources to run OS 9 apps and games. I don't get a whole lot of use out of the earlier OS X software that I have on there...
Maybe in the future I won't have to make that choice! I'd much rather dual boot OS 9 off a different partition, but that hasn't been supported on the 1-1.25GHz models (Thanks Steve...) and no one has gotten it working properly. Maybe now it will be possible! A man can dream...
9 has been possible on that board for years now. No internal speaker but the headphone jack works.
When did that happen? Do you have a link to the exact CD image you used?
You can find a 9 image for it at macos9lives.com
Slight nitpick: this isn't "natively booting" System 7, nor is any other PowerPC Mac. This is simply an emulator, no different than using vMac, qemu, etc. - it just happens to be an emulator that Apple has been shipping with PowerPC Macs since it introduced the Power Mac 6100, which "natively" booted System 7.1.2.
Nonetheless, this is an impressive accomplishment.
This is really cool, the kind of content that's great to see here.
That's impressive, but early Mac OS was pretty awful UX-wise; I think everything ran on the UI thread.
I remember clicking and waiting.
I remember that, yes, expensive operations could take a while, but the interface was much faster than my M1 Max Studio for the sole reason that you don't have to wait for animations.
And not just because animations were sparse: they also never blocked input. If you could see where a new element would appear, you could click there DURING the animation and start typing, and no input would be lost. Apps you used every day and became accustomed to would just zip past at light speed, because there was no do-wait, do-wait pipeline.
The animations were there, but they were frame-based, with the number of frames carefully calculated to show the UI state changes that were relevant. For example, when you would open a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here," but not dizzyingly egregious the way it became in Aqua.
Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speeds, and ObjC's ubiquitous dynamic dispatch, OS X would just be extremely, extremely slow. So they probably conjured a scheme to show people fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
There were also quite a few tricks used all the way from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged instead of buffering the entire window, everything underneath, the shadow mask - and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.
You could very well have do-wait-do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but it was the exception rather than the rule.
> Truth be told, I do have a suspicion that some folks (possibly - some folks close to Avie or other former NeXT seniors post-acquisition) have noticed that with dynamic loading, hard drive speed, and ubiquitous dynamic dispatch of ObjC OSX would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations to people and wooing everyone with visual effects to conceal that a bit. Looney town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not for animations.
Done exactly this myself to conceal ugly inconsistent lags - I don’t think it is that uncommon an idea.
I think that ObjC's dynamic dispatch is reasonably fast. I remember reading something about being able to do millions of dynamic dispatch calls per second (so less than 1 µs per call) a long time ago (2018-ish?), but I can't think how to find it. The best I could come up with is [1], which benchmarks it as 2.8 times faster than a Python call, and something like 20% slower than Swift's static calling. In the Aqua time frame, I think it would not have been slow enough to need animations to cover for it.
[1] https://forums.swift.org/t/performance-of-swift/26911
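For a rough feel of the orders of magnitude involved, here's a quick Python micro-benchmark of dynamic method dispatch. It's an analogy only (Python's attribute lookup, not objc_msgSend), but it shows that even a fully dynamic dispatch costs well under a microsecond per call.

```python
# Rough analogy only: timing Python's dynamic method dispatch, not
# ObjC's objc_msgSend. The point is just that "dynamic" dispatch is
# still on the order of nanoseconds, not milliseconds, per call.
import timeit

class Widget:
    def poke(self) -> int:
        return 1

w = Widget()
n = 1_000_000
seconds = timeit.timeit(w.poke, number=n)  # one million dispatched calls
per_call_ns = seconds / n * 1e9
print(f"~{per_call_ns:.0f} ns per dynamic method call")
```

On any remotely modern machine this lands in the tens-to-hundreds of nanoseconds range, i.e. millions of calls per second, which is consistent with the benchmark numbers quoted above.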
My most durable memory is all the reboots due to programs crashing. Didn't help that a null pointer deref required a system reboot - or that teenage me was behind the keyboard on that front.
More the fault of megabytes of RAM and HDDs being quite slow, to be honest.
> I think the UI thread was everything.
How would you have done it?
Preemption is a very nice OS feature, it turns out (particularly once multi-core rolled around). Still, I recall OS 8 and 9 being generally snappier than Windows 98 (and a lot snappier than early builds of OS X).
How does preemption work on a processor that barely has interrupts and has no way to recover state after a page fault, in an OS that has to fit into a couple dozen kilobytes of ROM?
Later Mac ROMs were 512KB, same with the later Amiga Kickstarts (3.x) That was a lot of space for the late 80's and early 90's. Interrupts were supported (8, if I recall.) And 68000 machines didn't support virtual memory until the 68010 and later, so no issues with page faults.
I still remember the day teenage me got an Amiga 500 with a whopping 512K of RAM, and witnessed the power of multitasking, way back in 1988.
The Amiga had preemptive multithreading with multiple task priorities on the original MC68000. Preemption is distinct from memory protection or paging.
There were plenty of preemptive multitasking systems for the original 68000, and regardless page fault recovery was fixed from the 010 onwards.
And certainly was very not a problem on PowerPC which TFA is about.
Also, not sure how you can say the 68000 "barely has interrupts"; I don't even know what you're on about.
MacOS was broken because Jobs forced it to be that way after he was kicked off the Lisa team. Which had a preemptive multitasking operating system on the 68000.
Preemptive multitasking is unrelated to page faults. And the 68k handled page faults just fine starting from the 68010.
Space constraints were certainly limiting on the earlier models, but later ones were plenty capable. Apple itself shipped a fully multitasking, memory protected OS for various 68k Mac models.
By the late 80s, the only reason the Macintosh system was still single-tasking with no protected memory was compatibility with existing apps, and massive technical debt.
I’ve been waiting for this post.
I run OS 9 on my lamp iMac G4 but now I want to try 7.6.1!
yes, multiple Macs within arm's reach right now!
++ BBEdit
One of my early Macs was a Performa 638CD with no dedicated FPU. I had upgraded to a Performa 6400 (which felt like an absolute dog despite its size) but finally had an opportunity to move to the PowerComputing PowerTower Pro 225. What a beast! I hate to say it, but it was probably my favorite Mac I'd ever owned before the first iMac.
The Megahertz wars in the 1990s made it really difficult to understand relative performance across even the same ISA like this, and I think computers with the 603 CPU were a bit of a wrench in people's perception of the Mac.
The 180 or 200MHz 603e with 16k L1 cache in that Performa 6400 wasn't slow by any stretch, but it probably didn't have L2 cache. Coupled with the gradual transition to PPC native code of the OS and apps, these machines were often a little mismatched to expectations and realities of the code.
Meanwhile that PowerTower had a 604e with 32/32k L1 and 1MB L2 cache. That was a fast flier with a superscalar and out of order pipeline more comparable to the Pentium Pro and PII.
Oh believe me. I owned it. It felt slow even at the time.
Yup. Recall the far better cycle efficiency of the 100 MHz hyperSPARC.
Consumers didn't grok cycle efficiency, pipeline depth, or branch prediction miss pipeline stall latency.
I have a PowerCenter Pro 210 in my basement right now! It's not quite as nice as the newer architecture in the PowerTower Pro machines, but it runs MacOS 7.6.1 wonderfully. It is more than enough for classic Mac games of that era - and a joy to use.
The later PowerCenter Pros could run with a 60 MHz FSB, whereas the PowerTower Pros were usually at a 45-50 MHz FSB. There are a variety of tasks where my PowerCenter Pro 240 outruns my PowerTower Pro 250 for precisely that reason.
As a European, I saw Classic Macs (and current ones) as just for arts/writing people. If you knew what CMYK was in order to print a newspaper, you were a Mac user.
I emulated Mac OS 7 back in the Windows XP days, and I was impressed that you could get far faster speeds emulating the m68k (and partially the PPC) than emulating x86 itself, without any hardware virtualization (Intel VT) or kernel modules trapping x86 instructions to run them at native speed. The m68k and PPC were simply much easier chips to emulate than x86.
On software, Classic Mac users can just resort to IRC and Gopher clients: connect to the public https://bitlbee.org IRC servers to link 'modern' accounts and be proxied to a Mac IRC client. And for Gopher, you have gopher://hngopher.com, gopher://magical.fish, and the like. Sadly you don't have an easy TLS library as Amiga users have (AmiSSL), with which even the modern web can work (and IRC over TLS, Gemini...).
Although... if Amiga m68k emulators run fast with the Rosetta-like tech for PPC... you could just fire up Workbench and then AmiSSL. Crude, but it would work. If not, here in the Apple subdir you can get, maybe, some TLS-enabled browsers:
gopher://bitreich.org/1/lawn
and
gopher://happymacs.ddns.net/1Vintage-Mac-Software-Archive
MacSSL:
https://github.com/demoniccode12/MacSSL
Usenet will work fine without any TLS, and there's tons of content out there.
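The Gopher servers mentioned above work because the protocol (RFC 1436) is trivial: open TCP port 70, send a selector plus CRLF, and read until the connection closes; menus come back as tab-separated lines. A minimal sketch in Python (the hostnames are the ones cited above; any RFC 1436 server should behave the same):

```python
import socket

def gopher_fetch(host, selector="", port=70, timeout=10):
    """Send a selector to a Gopher server and return the raw response bytes."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):  # server closes the socket when done
            chunks.append(chunk)
    return b"".join(chunks)

def parse_menu_line(line):
    """Parse one menu line: 1-char item type, then display/selector/host/port
    separated by tabs."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}

# e.g. gopher_fetch("hngopher.com") returns the front-page menu,
# whose lines parse_menu_line() can split into navigable items.
```

Item type "1" is a submenu and "0" is a text file, which is why the links above are written as `gopher://host/1...` paths.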
It was because of QuarkXPress and Photoshop. In the same way WordPerfect and Lotus 1-2-3 were dominant for business computers.
Wish someone would try to create a native classic Mac OS on x86 hardware.
There are so many Unix- or Linux-ABI-compatible kernels, like the recent Moss written in Rust.
> Wish someone would try to create native MacOS classic on x86 hardware.
Apple worked on this themselves - and then they canned it.
https://lowendmac.com/2014/star-trek-apples-first-mac-os-on-...
Ardi Executor. There's a recent fork on GitHub. You can run m68k binaries seamlessly. You don't need proprietary Mac OS parts, just the software.
But if you are a software preservationist, having a libre option to run legacy media is always good for historical reasons. I am a daily libre software user, but I emulate ancient machines with proprietary stuff just out of curiosity. Since it's not my personal computing device, I find that fine; it's a historical toy. And if you want to create libre engines for old Mac games (ScummVM, SDL ports...), you certainly need to at least emulate the old OSes and run the proprietary game in order to compare output and correctness.
Also, a "Mac" for x86 already exists: Rhapsody DR2, which could run Classic Mac software and NeXT software too. It was a blend of the two. OS X is like NeXTSTEP concept 2.0, with few traces of Classic Mac OS. QEMU will run it fine.
https://lunduke.substack.com/p/hands-on-with-1998s-rhapsody-...
Rhapsody DR2 is not a solution for classic Mac OS on x86. Lunduke writes:
"Unfortunately [the Blue Box] was only available on PowerPC versions of Rhapsody"
Another option is Advanced Mac Substitute. It doesn't run everything, but what it does run it runs really well. One of my goals is that you can use a 68K Mac application (e.g. MacPaint) as part of your personal computing workflow, if you wish.
https://www.v68k.org/ams/
Adding Executor does that for free, as in freedom.
Edit: ah, both are similar.
It would be great if somebody tried to create an open-source version of Rhapsody DR2 that ran on x86 bare metal.
Would not even need to be binary compatible. Source compatible API would be enough.
Rhapsody DR2 is more like Classic Mac OS than any current macOS.
A source-compatible API has existed since the '90s: GNUstep.
At least for the NeXTSTEP part; not the Mac GUI (Carbon?) one.
I will have to see if this is yet able to run Macromedia Freehand/MX --- if it is, I no longer need to have a Windows machine for that....
Now, if I can just get a nice portable with:
- largish OLED
- current gen Wacom EMR digitizer support
- decent battery life
running Linux, I can get off the Windows update treadmill....
And scientists.
For some reason, European science was full of old-school Mac users.
Much of the early part of the Human Genome Project was done using gel based DNA sequencing machines that were controlled by Classic Macs.
The rest of our shop was Solaris on SPARC/x86 and we had our own custom tool chain that crunched the data, but the sequencer itself was run by a Mac.
From 1999 or so forward the next generation of machines were Windows.