Over Christmas I tried to actually build a usable computer from the 32-bit era. Eventually I discovered that the problem isn't really the power of the computer: computers have been powerful enough for productivity tasks for 20 years, with the exception of browser-based software.
The two main problems I ran into were 1) software support at the application layer, and 2) video driver support. There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32-bit versions of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software, because so many things are built with 64-bit dependencies. Secondly, old video card drivers are being dropped from the kernel. This means all you have is basic VGA "safe-mode" level support, which isn't even fast enough to play an MPEG-2. My final try was to install Debian 5, which was period-correct and had support for my hardware, but the live CDs of the time were not hybrid, so the ISO could not boot from USB. I didn't have a CD burner, so I finally gave up.
So I think these types of projects are fun as a proof of concept, but unfortunately they are never going to give new life to old computers.
> Computers have been powerful enough for productivity tasks for 20 years
It baffles me how usable Office 97 still is. I was playing with it recently in a VM to see if it worked as well as I remembered, and it was amazing how packed with features it is considering it's nearing thirty. There's no accounting for taste, but I prefer the old Office UI to the ribbon, there's a boatload of formatting options for Word, there's 3D WordArt that hits me right in the nostalgia, and Excel 97 is still very powerful and supports pretty much every feature I use regularly. It's obviously snappy on modern hardware, but I think it was snappy even in 1998.
I'm sure people here can enumerate the newer features that have come in later editions, and I certainly do not want to diminish your experience if you find all the new stuff useful, but I was just remarkably impressed by how much cool stuff was packed into the software. (Edit: I'm obviously ignoring i18n etc.)
I think MS Word was basically feature-complete with v4.0 which ran on a 1MB 68000 Macintosh. Obviously they have added lots of UI and geegaws, but the core word processing functionality hasn't really changed at all.
Small, medium and large colleges in the UK ran on Novell servers and 386 client machines with Windows for Workgroups and whatever Office they came with. I think the universities were using unixy minicomputers then, though. Late 80s, early 90s. Those 386 machines were built like tanks and survived the tender ministrations of hundreds of students (not to mention some of the staff).
I still use Office 2010 to this day and feel like absolutely nothing I truly need is missing. The only issues are bugs with Alt-Tab and multiple monitors. But functionality? 100%.
It's wild to remember that I basically grew up with this type of software. I was there when the MDI/SDI (Multi-Document Interface / Single-Document Interface) discussion was ongoing, and when the "Ribbon" interface received so much backlash. It also shows that writing documents hasn't really changed in the past 30 years. I wonder if that's a good or bad development.
With memory prices skyrocketing, I wonder if we will see a freeze in computer hardware requirements for software. Maybe it's time to optimize again.
I have MS Office 4.0 installed on my 386DX-40 with 4 MB of RAM and 210 MB HDD, running Windows 3.1, and it is good. Most of the common features are there, it's a perfectly working office setup. The major thing missing is font anti-aliasing. Office 95 and 97 are absolutely awesome.
I do remember running Word on an Am386DX-40 and later an i486DX2-66 and there was an issue that wouldn't be a problem with faster hardware; the widow/orphan control happened live so if you made an edit, then hit print, there was a race condition where you could end up with a duplicated line or missing line across page boundaries. Since later drafts tended to have fewer edits, I once turned in a final draft of a school paper with such an error.
Totally agree!
I'd definitely pay $300 (lifetime license) for a productivity suite with the Windows 95 design and Office 95 features, with no bloatware and no ads. Just pure speed and productivity.
I'd add multicore processors as well, which make multiprocess computing viable. And as a major improvement, Apple's desktop CPUs, which are fast, energy-efficient and cool - my laptop fan never turns on. At one point I was like "do they even work?", so I ran a website that uses CPU and GPU to the max, and... still nothing, stuff went up to 90 degrees but no fan action yet. I installed a fan control app to demonstrate that my system does in fact have fans.
Meanwhile my home PC starts blowing whenever I fire up a video game.
It's also crazy to realise how much of the multi-application interop vision was realized in Office 97. Visual Basic for Applications had rich hooks into all the apps, you could make macros and scripts and embed them into documents, and you could embed documents into each other.
It's really astonishing how full-featured it all was, and it was running on those Pentium machines that had a "turbo" button to switch between 33 and 66 MHz and just a few MBs of RAM.
With the small caveat that I only use Word, it runs perfectly in WINE and has done for over a decade. I use it on 64-bit Ubuntu, and it runs very well: it's also possible to install the 3 service releases that MS put out, and the app runs very quickly even on hardware that is 15+ years old.
The service packs are a good idea. They improve stability, and make export to legacy formats work.
WINE works better than a VM: it takes less memory, there's no VM startup/shutdown time, and host integration is better: e.g. host filesystem access and bidirectional cut and paste.
I have used this on half a dozen machines with precisely zero special config.
Step by step:
1. Install WINE, all defaults from OS package manager.
2. Open terminal. Change to directory with Office 97 install files.
3. Run `wine setup`
4. For me: turn off everything except the essential bits of Word. Do not install OS extensions, as they won't work. No bits that plug into other apps. No WordMail, no FastFind, no Quicklaunch toolbar, no Office Assistant.
5. Enter product key: 11111-1111111
6. Allow to complete.
7. Install SRs.
8. Run and use the app. (A consolidated shell sketch follows below.)
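For anyone who wants that as one copy-pasteable sequence, here's a minimal sketch of the steps above, assuming a Debian-style host, install media copied to ~/office97, and a setup binary named setup.exe (all of those names are placeholders):

```sh
# 1. Install WINE, all defaults from the OS package manager
sudo apt install wine

# 2-3. Change to the directory with the Office 97 install files and run setup
cd ~/office97
wine setup.exe

# 4-6. In the installer GUI: Custom install, keep only the core Word bits,
#      skip OS extensions / WordMail / FastFind / Office Assistant,
#      enter the product key, and let it complete.

# 7. Install the service releases the same way (filenames vary by download)
wine sr1off97.exe
```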
It definitely was snappy. I used it on school computers that were Pentium (1?) with about as much RAM as my current L2 cache (16MB). Dirty rectangles and win32 primitives. Very responsive. It also came with VB6 where you could write your own interpreted code very easily to do all kinds of stuff.
The cursed ribbon was a huge productivity regression. I still use very old versions of Word and Excel (the latter at least until the odd spreadsheet exceeds size limits) because they're simply better than the newer drivel. Efficient UI, proper keyboard shortcuts with unintrusive habit-reinforcing hints, better performance, no trying to siphon all my files up to their cloud. There is almost nothing I miss in terms of newer features from later versions.
The ribbon thing was a taste of things to come in the degradation of UI standards. Take something that works great and looks ok, replace it with something flashy that gives marketing people something to say. Break the workflow of existing users. Repeat every 10 years.
IIRC the Ribbon had real UX testing behind it. All the most common features were truly easier to access, but it was harder to find a certain feature when you needed it. In other words they optimized for the wrong thing.
My favorite was that Paste was a giant button while Cut and Copy were small, because the UX research found that people paste more than they cut or copy...
Truly, I do not miss the swamp of toolbar icons without any labels. I don't weep for the old interface.
This! I have the 14-core M4 Macbook Pro with 48GB of RAM, and Word for Mac (Version 16 at this time) runs like absolute molasses on large documents, and pegs a single core between 70 and 90% for most of the time, even when I'm not typing.
I am now starting to wonder how much of it has to do with network access to Sharepoint and telemetry data that most likely didn't exist in the Office 97 dial-up era.
Feature-wise, I doubt there is a single feature I use (deliberately) today in Excel or Word that wasn't available in Office 97.
I'd happily suffer Clippy over Co-Pilot.
It's an optional install. You can just click Custom, untick "Office Assistant" and other horrid bits of bloat like "Find Fast" and "Word Mail in Outlook" and get rid of that stuff.
My crappy old 2018 Chromebook is still just about usable with 2GB, but it has gone from a snappy system to a lethargic snail... and it's getting slower with every update. Yay for progress!
eMMC Chromebooks are notorious for storage-related slowdowns. If it's an option, booting a ChromeOS variant or similar distro off a high-speed microSD, over USB, or (least likely with a Chromebook) via PXE might confirm it.
“Powerful enough for productivity tasks” is very variable depending on what you need to be productive in. Office sure. 3D modelling? CAD? Video editing? Ehhhhh not so sure.
I don’t know enough about CAD to comment but video editing is considerably more expensive now for a bunch of reasons and I don’t think an Amiga could handle it now.
And they did video editing on Amigas with an add-on peripheral called a Video Toaster.
Video compression is a lot more computationally complex now than it was in the 90s, and it is unlikely that an Amiga with a 68k or old PowerPC would be able to handle 4K video with H.265 or ProRes. Even if you had specialized hardware to decode it, I'm not 100% sure that an Amiga has enough memory to hold a single decompressed frame to edit against.
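To put a rough number on the memory point (my back-of-the-envelope arithmetic, not from the thread): a single uncompressed 4K frame at 8-bit RGBA is

    3840 × 2160 pixels × 4 bytes ≈ 33 MB

which is more RAM than most classic Amigas ever shipped with, before you even hold a second frame to compare against.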
Don’t get me wrong, Video Toaster is super awesome, but I don’t think it’s up to modern tasks.
Except for Internet surfing, a plain Amiga 500 would be good enough for what many folks do at home, between gaming, writing letters, basic accounting and the occasional flyers for party invitations.
Total nostalgia talk. Those machines were just glacially slow at launching apps and really everything - spell check? Go get a coffee. I could immediately tell the difference between a 25 MHz Mac IIci and a 25 MHz Mac IIci with a 32KB cache card. That's how slow they were.
Some of us do actually use such machines every now and then.
The point being made was that for many people whose lives don't revolve around computers, their computing needs have not changed since the early 1990s, other than doing stuff on the Internet nowadays.
For those people, a digital typewriter hardly requires more features than Final Writer, and for what they do with numbers in tables and a couple of automatically updated cells, something like Superplan would also be enough.
> their computing needs have not changed since the early 1990s, other than doing stuff on the Internet nowadays.
So in other words, their computer needs have changed significantly.
You can't do most modern web-related stuff on a machine from the 90s. Assuming you could get a modern browser (with a modern TLS stack, which is mandatory today) compiled on a machine from the 90s, it would be unusably slow.
Not everyone is on the Internet all the time; for some folks, their computing needs have stayed pretty much the same.
If they want to travel they go to an agency, they still go to the local bank branch to do their stuff, news is only what comes up on radio and TV, music is what is on the radio, CDs and vinyl, and yet they manage to have a good life.
Amigans are already using AmiSSL and AmiGemini (and some web browsers) perfectly fine on m68k CPUs recreated with FPGAs.
You can do modern TLS stuff with a machine from the 90's if you cut down the damn JavaScript and run services from https://farside.link or gemini://gemi.dev proxying the web to Gemini.
Yeah, I just posted that a lot of that software was amazing and pretty 'feature-complete', all while running on very limited old personal computers.
Just please don't gaslight us with some alternate Amiga bullshit history. All that shit was super slow; you were begging for +5 MHz or +25KB of cache. If the Amiga had any success outside of teenage gamers, that stuff would all have been history, just like it was on the Mac.
The Amiga had huge success outside of "teenage gamers", even if in niche markets. Amigas were extremely important in TV and video production throughout the 1990s. I remember a local Amiga repair shop in South Florida that stayed in business until about 2007, mainly by servicing Amigas still in service in the local broadcast industry -- all of the local cable providers in particular had loads of them, since they were used for the old Prevue Guide listings, along with lots of other stuff.
Goes both ways: the Mac was hardly something to write home about outside the US, and they did not follow in Commodore's footsteps into bankruptcy out of sheer luck.
The Mac didn't exist in Europe except for expensive A/V production machines and the printing world (books, artists, movie posters, covers and the like).
If you were from the humanities and did design layout for a newspaper, you would use a Mac at work. That's it.
That is absolutely not a valid generalisation.
I worked on Macs from the start of my career in 1988. They were the standard computer for state schools in education here in the Isle of Man in the late 1980s and early 1990s.
The Isle of Man's national travel company ran on a Mac database, Omnis, and later moved to Windows to keep using Omnis. It's still around: https://www.omnis.net/
I supported dozens of Mac-using clients in London through the 1990s and they were the standard platform in some businesses. Windows NT Server had good MacOS support from the very first version, 3.1, and Macs could access Windows NT Server shares over the built-in Appleshare client, and store Mac files complete with their Resource Forks on NTFS volumes. From 1993 onwards this made mixed Mac/PC networks much easier.
I did subcontracted Mac support for a couple of friends' consultancy businesses because they were Windows guys and didn't "speak Mac".
Yes, they were very strong in print, graphics, design, photography, etc. but not only in those markets. Richer types used them as home computers. I also worked on Macs in the music and dance businesses and other places.
Macs were always there.
Maybe you didn't notice but they always were. Knowing PC/Mac integration was a key career skill for me, and the rise of OS X made the classic MacOS knowledge segue into more general Unix/Windows integration work.
Some power users defected to Windows NT between 1993 and 2001 but then it reversed and grew much faster: from around 2001, PowerMacs started to become a credible desktop workstation for power users because of OS X. From 2006, Macintel boxes became more viable in general business use because the Intel chips meant you could run Windows in a VM at full speed for one or two essential Windows apps. They ran IE natively and WINE started to make OS X feasible for some apps with no need for a Windows licence.
In other words, the rise of OS X coincided with the rise of Linux as a viable server and GUI workstation.
In Portugal there was only a single shop for the whole country, Interlog, located in Lisbon.
Wanted to get a Mac, needed to travel there, or order by catalogue, from magazine ads.
At my university there were about 5 LCs in a single room for student use, while the whole campus was full of PCs, and of UNIX green/amber phosphor terminals to DG/UX rooms in all major buildings.
Besides that single room, there were two more in the IT department, and that was about it.
When Apple was going down, deciding between buying Be or NeXT as a last survival move, the fate of the university keeping those Macs around was being discussed.
> Yes, they were very strong in print, graphics, design, photography, etc. but not only in those markets. Richer types used them as home computers. I also worked on Macs in the music and dance businesses and other places.
So, A/V production, something I said too. My point still stands. Macs in Europe were seen as something fancy for media production people and that's it. Something niche for the arts/press/TV/cinema world.
Nope. Wrong. My own extensive personal experience, travelling and working in multiple countries. Not true, never was.
Like I said, and you missed: but not only there.
People often mistake "Product A dominates in market B" -- meaning A outsells all others in B -- for "A only sells in market B."
Macs were expensive. Clone PCs were cheap. Yeah, cheap products outsell expensive ones. Doesn't mean that the expensive ones are some kind of fancy designer brand only used by the idle rich.
Yes, it was. I'm from Spain. The Macs were for media people, not for the common worker in a boring office, where MS dominated. At home, Macs were a thing for maybe some rounding-error percentage of kids living in a loaded neighbourhood.
No one got Macs at school either. First DOS, then Windows 95/98. Maybe some universities used MacBooks well into the OS X era, as a reliable Unix machine to compile legacy scientific stuff; and even in those environments GNU/Linux began to work perfectly well, recompiling everything from Sparcs and the like at a much cheaper price.
Forget about pre-OS X machines in Spain outside of a newspaper/publishing/AV-producing office. Also, by the time 2000 and XP were reliable enough compared to OS X (w9x was hell), that OS was being replaced with much cheaper PC alternatives.
I mean, if w2k/wxp could handle big loads without BSODing every few hours, that was a success. And as the Pentium 4s with SSE2 and Core Duos happened, suddenly G4s and G5s weren't that powerful any more.
Those machines could be pretty darn fast - if you got one and ran the earliest software that still worked on it. DOS-based apps would fly on a 486, even as Windows 95 would be barely usable.
Or controlling the heating and AC systems at 19 schools under its jurisdiction using a system that sends out commands over short-wave radio frequencies: https://www.popularmechanics.com/technology/infrastructure/a...
> Eventually I discovered that the problem isn't really the power of the computer.
Nope, that’s a modern problem. That’s what happens when the js-inmates run the asylum. We get shitty bloated software and 8300 copies of a browser running garage applications written by garbage developers.
I can’t wait to see what LLMs do with that being the bulk of their training.
Not gonna disagree with you, but, as a solo developer who needs to reach audiences of all sorts, from mobile to powerful servers, the most reasonable choice today is JavaScript. JS, with its "running environments" (Chrome, Node, etc.), has done what Java was supposed to do in the 90s. It's a pity that Java didn't keep its promises, but the blame lies entirely with the companies that ran the show back then (and are running the show now).
Rookie developers who use hundreds of node modules or huge CSS frameworks are ruining performance and hurting the environment with bloated software that consumes energy and lifetime.
> There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32 bit version of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software because so many things are built with 64 bit dependencies
That seems odd? Debian 12 Bookworm (oldstable) has a fully supported i386 port. I would expect it to run reasonably well on late 32-bit era systems (Pentium 4/Athlon XP).
AFAIU the Debian i386 port has effectively required i686-level CPUs for quite a long time (CMOV etc.)? So if he has an older CPU like the Pentium, it might not work?
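If you want to check whether a particular box clears that bar before installing, the CMOV flag is the usual tell (a quick sketch; it works on any Linux that already runs on the machine):

```sh
# CMOV arrived with the Pentium Pro; it's the usual marker of an i686-class CPU
grep -qw cmov /proc/cpuinfo && echo "i686-class CPU" || echo "pre-i686 CPU"
```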
But otherwise, yes, Debian 12 should work fine as you say. Not so long ago I installed it on an old Pentium M laptop I had lying around. Did take some tweaking, turned out that the wifi card didn't support WPA2/3 mixed mode which I had configured on my AP, so I had to downgrade security for the experiment. But video was hopeless, it couldn't even play 144p videos on youtube without stuttering. Maybe the video card (some Intel thing, used the i915 driver) didn't have HW decoding for whatever video encoder youtube uses nowadays (AV1?), or whatever.
We were decoding 480x320 MP4 on PalmOS 5 devices in the early 2000s. Those were single-core, in-order, 200 MHz ARM devices with no accelerators at all. A Pentium M outperforms those easily and thus can do it too.
The CPU will be struggling with most modern video formats, including h.264.
Nowadays on an n270 CPU based netbook I use mpv and yt-dlp capped to 420p, even if I can play 720p@30FPS.
I used to run a cs1.6 server on an 800 MHz AMD with 256 MB of RAM in the 2000s. These days I'm looking to get a Mac mini, and while thinking that 16 GB will not be enough I remembered that server. It was a NAT gateway too, and had a webserver as well, with hit stats for the cs server. And it was a popular 16v16 type of server too. What happened? How did we get to 16 GB minimum, and 32 GB will make you not sad?
NetBSD is probably what would make most sense to run on that old hardware.
Alternatively you may have accidentally built a great machine for installing FreeDOS to run old DOS games/applications. It does install from USB, but it needs a BIOS, so you can't run it on modern UEFI-only PC hardware.
I was on Linux as my main driver in the early 2000s and we did watch movies back then, even DVDs. Of course, the formats were not HD; it was DivX or DVD ISOs.
I remember running Gentoo and optimizing build flags for mplayer to get it working, at a time when I had a 500 MHz Pentium III, later 850 MHz. And I also remember having to tweak the mplayer output driver params to get good and smooth playback, but it was possible (`mplayer -vo xv` for XVideo support). IIRC I got DVD .iso playback to run even on the framebuffer without X running at all (`mplayer -vo fb`). Also the `-framedrop` flag came in handy (you can do with a bit less than 25fps when under load). Also, you would definitely need compile-time support for SSE/SSE2 in the CPU. I am not even sure I ever had a GPU that had video decoding support.
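For reference, the invocations being described look roughly like this (flag spellings as in old MPlayer builds; file names are placeholders):

```sh
# XVideo output: the card handles scaling and colourspace conversion
mplayer -vo xv movie.avi

# Framebuffer output with no X running, dropping frames when the CPU falls behind
mplayer -vo fb -framedrop movie.avi

# DVD ISO playback without physical media
mplayer dvd:// -dvd-device movie.iso
```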
My 32 bit laptop is a Thinkpad T42 from 2005 which has a functioning CDROM, and which can run Slackware15 stable 32bit install OKish, so I haven't tried any of this but:
My first thought: how about using a current computer to run qemu, mounting the Lenny ISO as an image, and installing to a qemu hard drive? Then dd the hard drive image to your 32-bit target. (That might need access to a hard drive caddy depending on how you can boot the 32-bit target machine, so a 'hardware regress' I suppose.)
My second thought: if the target machine is bootable from a more recent live Linux, try a debootstrap install of a minimal Lenny with networking (assuming you can connect the target machine to a network, I'm guessing with a cable rather than wifi). Reboot and install more software as required. (Both thoughts are sketched below.)
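Both thoughts sketched out, with the target device, image size, and ISO name as placeholders (note that a current debootstrap may or may not still ship a lenny script; archive.debian.org is where Lenny's packages live now):

```sh
# Thought 1: install Lenny inside qemu, then clone the raw image to the real disk
qemu-img create -f raw lenny.img 8G
qemu-system-i386 -m 256 -hda lenny.img -cdrom debian-5-i386-CD-1.iso -boot d
sudo dd if=lenny.img of=/dev/sdX bs=4M status=progress

# Thought 2: from a recent live system booted on the target, bootstrap a minimal Lenny
sudo debootstrap --arch=i386 lenny /mnt http://archive.debian.org/debian
```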
I have OpenBSD running on my old 2004 Centrino notebook (I might be lagging 2-3 versions behind, I don't really use it, just play around with it) and it's fine until you start playing YouTube videos, that is kinda hard on the CPU.
Yes, NetBSD and OpenBSD work fine on the 2005 T42 but as you say video performance is low. Recent OpenBSD versions have had to reduce the range of binary packages (i.e. outside of the base and installed with pkg_add) on i386 because of the difficulty of compiling them (e.g. Firefox, Seamonkey needing dependencies that are hard to compile on i386, a point the poster up thread made).
"There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32 bit version of software for years, even if it is possible to build from source."
This statement must be Linux-only
Pre-compiled packages for i386 are still available for all versions of NetBSD, including the current one. I still compile software for i386 from pkgsrc: https://ftp.netbsd.org/pub/pkgsrc/current/ (NB: I'm not interested in graphical software; I prefer VGA text mode.)
I have a P166 under my desk and once in a blue moon I try to run something on it.
My biggest obstacles are that it doesn't have an ethernet port and that it doesn't have BIOS USB support (although it does have a card with two USB ports).
I've managed to run some small Linux distros on it (I'll definitely try this one), but, you're right, I haven't really found anything useful to run on it.
Don't lose hope. You can boot it one way or another :) https://youpibouh.thefreecat.org/loadlin/
It seems that both OpenBSD [1] and NetBSD [2] still support i386, for example here [3] you can find the image for a USB stick.
I expect at least the base system (including X) to work without big issues (if your hardware is supported); for extra packages you may need a bit of luck.
[1] https://www.openbsd.org/plat.html
[2] https://wiki.netbsd.org/ports/
[3] https://wiki.netbsd.org/ports/i386/
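If it helps, writing the USB image from [3] is just a raw copy (the version number and target device here are placeholders; double-check the device before running dd):

```sh
gunzip -c NetBSD-10.1-i386-install.img.gz | sudo dd of=/dev/sdX bs=1M conv=fsync
```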
I had an original 7" EeePC from 2007, running archlinux-32 from ~2017, with Xfce and all that, and a few months ago I updated it. It took me almost a day, going through various rabbit holes, like one or two static-built pacmans and Python, and manually picking and combining various versions. The result was okay, but somehow it took more space than before (it has a 4 GB SSD, of which I used to have 2 GB free, now only 1.5). But maybe that machine is not old enough.
Reminds me of my first Linux distro, called Damn Small Linux. I think this was used as a first attempt to port Linux to the GameCube, but the main team driving the effort ended up going with Gentoo instead.
From the main page (https://www.damnsmalllinux.org/):
As with most things in the GNU/Linux community, this project continues to stand on the shoulders of giants. I am just one guy without a CS degree, so for now, this project is based on antiX 23 i386. AntiX is a fantastic distribution that I think shares much of the same spirit as the original DSL project. AntiX shares pedigree with MEPIS and also leans heavily on the geniuses at Debian. So, this project stands on the shoulders of giants. In other words, DSL 2024 is a humble little project!
Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002, I’ve done a lot of hunting to find small footprint applications, and I had to do some tricks to get a workable desktop into the 700MB limit. To get the size down the ISO currently reduced full language support for German, English, French, Spanish, Portuguese and Brazilian Portuguese (de_DE, en_AU, en_GB, en_US, es_ES, fr_FR, es_ES, pt_PT, & pt_BR ). I had to strip the source codes, many man pages, and documentation out. I do provide a download script that will restore all the missing files, and so far, it seems to be working well.
For those who are curious, Alpine was the recommended distro as I went through various reviews. I don't know how reliable that advice is.
Alpine is great, especially for anything single-purposed and headless (be it physical, VM, or container), so long as that thing isn't too tied to glibc. It's been around a long time with a stable community (who are mostly using it for containers). It also defaults to a typical versioned release scheme but can switch to rolling just by changing the repo, if you know you need the latest versions.
I once tried to use it as a GUI daily driver on my work laptop (since I was already using it for containers and VMs at work) and found that stretched it a bit too far out of its speciality. It definitely had the necessary packages, just with a lot of rough edges and an increased rate of problems (separate from glibc, systemd, or other expected compatibility angles). Plus, the focus on having things statically linked means really wide (lots of packages) installs negate any space-efficiency gains it had.
The persistence strategy described here (`mount -t msdos -o rw /dev/fd0 /mnt`) combined with a bind mount to home is a nice clever touch for saving space.
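Spelled out, the trick amounts to something like this (mount points assumed from the description above):

```sh
# Mount the floppy's FAT12 filesystem read-write...
mount -t msdos -o rw /dev/fd0 /mnt
# ...then bind-mount a directory on it over home, so writes land on the floppy
mount --bind /mnt/home /home
```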
I don't know if that's also true for data integrity on physical magnetic media. FAT12 is not a journaling filesystem. On a modern drive, a crash during a write is at best annoying, while on a 3.5" floppy with a 33 MHz CPU, a write operation blocks for a perceptible amount of time. If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone. The article mentions sync, but sync on a floppy drive is an agonizingly slow operation that users might interrupt.
Given the 253KiB free space constraint, I wonder if a better approach would be treating the free space as a raw block device or a tiny appended partition using a log-structured filesystem designed for slow media (like a stripped down JFFS2 or something), though that might require too many kernel modules.
Has anyone out there experimented with appending a tar archive to the end of the initramfs image in place for persistence, rather than mounting the raw FAT filesystem? It might be safer to serialize writes only on shutdown; I would love more thoughts on this.
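Not tar exactly, but close: the kernel unpacks concatenated cpio archives in order, so one way to experiment with write-on-shutdown persistence is to regenerate an overlay archive and append it to a pristine base image. A sketch, with all file names made up:

```sh
# Pack the files to persist into a second cpio archive...
( cd /persist && find . -print0 | cpio --null -o -H newc | gzip ) > overlay.cpio.gz
# ...and append it to the base initramfs; later archives overwrite earlier files
cat base-initrd.img overlay.cpio.gz > /mnt/initrd.img
```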
Controversial position: journaling is not as beneficial as commonly believed. I have been using FAT for decades and never encountered much in the way of data corruption. It's probably found in far more embedded devices than PCs these days.
If you make structural changes to your filesystem without a journal, and you fail midway, there is a 100% chance your filesystem is not in a known state, and a very good chance it is in a non-self-consistent state that will lead to some interesting surprises down the line.
No, it is very well known what will happen: you can get lost cluster chains, which are easily cleaned up. As long as the order of writes is known, there is no problem.
Better hope you didn't have a rename in progress with the old name removed without the new name in place. Or a directory entry written pointing to a FAT chain not yet committed to the FAT.
Yes, soft-updates-style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.
I don't know how the Linux driver updates FAT, but if it doesn't do it the way DOS did, then it's a bug that puts data at risk.
1) Allocate space in FAT#2, 2) Write data in file, 3) Allocate space in FAT#1, 4) Update directory entry (file size), 5) Update free space count.
Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).
No, the VFAT driver doesn't do anything even slightly resembling that.
In general "what DOS did" doesn't cut for a modern system with page and dentry caches and multiple tasks accessing the filesystem without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.
While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.
FAT has two allocation tables, the main one and a backup. So if you shut it off while manipulating the first one you have the backup. You are expected to run a filesystem check after a power failure.
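On Linux that check is dosfstools' fsck.fat; when the two FAT copies differ, interactive mode asks which one to trust (device node assumed):

```sh
fsck.fat -r /dev/fd0   # interactive repair: prompts when FAT #1 and FAT #2 disagree
fsck.fat -a /dev/fd0   # or non-interactive: picks the least destructive fix itself
```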
FAT can be made tolerant from the driver side, just like a journaled FS:
1) mark blocks allocated in first FAT
If a crash occurs here, then data written is incomplete, so write FAT1 with data from FAT2 discarding all changes.
2) write data in sectors
If a crash occurs here, same as before, keep old file size.
3) update file size in the directory
This step is atomic - it's just one sector to update. If a crash occurs here (file size matches FAT1), copy FAT1 to FAT2 and keep the new file size.
4) mark blocks allocated in the second FAT
If a crash occurs here, write is complete, just calculate and update free space.
5) update free space
PS: In the good old days there was no initrd and other RAM disk stuff - you read the entire system straight from the disk. Slackware 8 was like that for sure, and NetBSD (even the newest one) still does it by default.
OpenWrt on some devices such as Turris Omnia writes the squashfs (mounted as RO root fs) in the "root" partition and then, immediately after, in the same partition, it writes a jffs2 (mounted as RW overlayfs). So it can be done.
> If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.
Makes sense, great point. I would rather use a second drive for the writable disk space, if possible (I know how rare it is now to have two floppy drives, but still).
This isn't true: FAT keeps a backup table, and you can use that to restore the disk.
Sadly, it does not seem to boot on my 486 DX2, I even stuffed 32M of RAM into the machine (8*4M, maximum the mainboard supports), more than the recommended 20M.
I have copied the floppy image from the site. It churns for about a minute and a half, loading kernel and initrd, then says "Booting kernel failed: Invalid Argument" and drops into SYSLINUX prompt.
EDIT: I tried a few more floppies to rule that out as the cause of the problem. Here are some screenshots: https://imgur.com/a/floppinux-0-3-1-Mdh1c0w
EDIT 2: I cloned SYSLINUX, checked out the specific commit and did some prodding around.
The function `bios_boot_linux` in `com32/lib/syslinux/load_linux.c` initializes errno to EINVAL. Besides sanity checking the header of the kernel image, there are a few other error paths that also `goto bail;` without changing errno.
Those other error paths all seem to be related to handling the memory map. I know that the BIOS in my machine does not support the E820h routine. I have a hunch that this might be the reason why it fails.
The website has an image gallery where people ran it on actual hardware: https://krzysztofjankowski.com/floppinux/floppinux-in-the-wi...
Most of those machines seem to be newer systems which probably support E820h, except for another 486 DX2 of a similar vintage to mine, which also failed to boot.
What's your board and BIOS? The Syslinux 6.x COM32 Linux loader goes through the memmap layer (syslinux_memmap_find()) to place the kernel/initrd. If INT 15h E820 is missing and/or buggy on a 486 BIOS, it can surface as "Invalid argument".
For my 486 distro [see snacklinux.org], I use syslinux 4.07 due to similar issues. I never had any luck with syslinux 6.x, so I'd recommend a similar path. It always seems funny to me when I see similar projects claiming they run on 486 hardware, but rarely do I see people actually doing that; they just fire up qemu instead. Running Linux in a vacuum isn't realistic, especially when we're talking old hardware and configuring IRQs manually.
It is running some AMI BIOS variant with a copyright date of 1992, I currently don't have the exact version string around to compare with the ROM dumps on retroweb. vbindiff says the "F" and "M" images are identical and the "H" only has a few 1-byte differences, mostly typos in ASCII strings.
I've written a small boot sector program once that tries out memory and CPU information gathering techniques, so I know that INT 15h E820h and E801h are not implemented, but INT 12h and INT 15h AH=88h return something sane. When I have more than 16M installed, the latter reports the full 31M of HIMEM, but I'm not sure how the ISA memory hole factors into this.
From what I saw glancing at the scanning code yesterday, syslinux 6.x should fall back onto AH=88h if E820/E801 doesn't work. It's interesting to know that this worked in older SYSLINUX; I'm curious to check out what changed.
I remember the QNX Demo on a 1.44 MB floppy disk. It booted straight into a full-blown window manager and had a basic web browser. That was 1999 and I never saw anything like that afterwards.
https://news.ycombinator.com/item?id=38059961
https://news.ycombinator.com/item?id=27249075
> That was 1999 and I never saw anything like that afterwards.
Now you have ;-)
https://web.archive.org/web/20240901115514/https://pupngo.dk...
The first time I booted MenuetOS (2005? high school?) I was absolutely floored at how capable (and decent looking) an OS that lives entirely on a 1.44 MB floppy could be.
At a very low level, it is. I know the individual who made a "diagnostic" for the floppy drive while working as a tech on the Apple I and Apple II designs, which caused the drive to whine in patterns that were distinctly... orgasmic.
Wish coil whine was configurable :)
https://silent.org.pl/home/2022/06/13/the-floppotron-3-0/
> There is 264KB of space left for your newly created files.
This could be increased noticeably by using one of the common extended floppy formats. The 21-sectors-per-track format used by MS¹ for Windows 95's floppy distribution was widely supported enough by drives (and found to be reliable enough on standard disks) that they considered it safe for mass use, and gave 1680KB instead of the 1440KB offered by the standard 18-sector layout. The standard floppy formatting tools for Linux support creating such layouts (sketched after the footnotes).
--------
[1] There was some suggestion² that MS invented the extended floppy format, they were sometimes called “windows format”, but it³ had been used elsewhere for some time before MS used them for Windows and Office.
[2] I'm not sure if this came from MS themselves, or was invented by the tech press.
[3] and even further extended formats, including 1720KByte by squeezing in two extra tracks as well as more data per track which IIRC was used for OS/2 install floppies.
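For the curious, the fdutils package can lay down such a format; a sketch, assuming an fdutils install and a drive/controller that cooperate:

```sh
# 21 sectors/track x 80 tracks x 2 sides x 512 bytes = 1680 KB
superformat /dev/fd0 hd sect=21
# then create the filesystem on the freshly formatted disk
mkfs.fat -F 12 /dev/fd0
```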
IIRC (it's been a while), an Interactive Unix (full?) install required some 40 (forty!) 5 1/4" floppies (I believe 1.2 MiB each), anno 1992 or so. A Linux (SLS) install was (a little later) so much smaller, even with X11 and TeX, as it had shared libraries (somewhat new in the *nix world then).
Before then, a local clone store had an 'insane deal' on floppy disks, and they came with Slackware. I had a Mac, and the floppies weren't very good so.
12‽ I'd swear the Slackware I downloaded was closer to 30+. On dialup. Via a VAX. Using FTP to go from the internet to the VAX box, then Kermit from the VAX to the DOS PC using Procomm Plus. Write it all out, start the install sequence, find out that the 18th disk was bad. Reboot. Rinse. Repeat.
And they used to fail all the time, especially when you had something that spanned more than a single disk.
X disks were X11. There were also the A,B, C etc disks.
Then there was the Coherent install, with massive manual on ultra thin paper with the shell on the front.
Probably not. Pretty sure it was Puppy Linux (among I'm sure others) that could be run on just two floppies. I used to have this old 933MHz Coppermine system that I took when a medical office was going to throw it out, some time in the early 00s.
The HDD was borked but it had a 3.5" bay that worked, so I got a floppy-based distro running on it. I later replaced the drive and then made the mistake of attempting to compile X11 on it. Results were... mixed.
I wonder if formatting the floppy is necessary. Could syslinux or maybe lilo load the kernel directly from raw floppy sectors, with the initrd appended to it and the command line baked directly into the kernel via CONFIG_CMDLINE? I know u-boot can do it, but that's 8+ MB.
As an alternative, isn't ext2 smaller by having no FAT tables?
There's something really lovely about this project - especially as they're using the last kernel before 486 support was removed in May 2025. It feels like somebody lovingly mending their car for one last time or something similar. (I'm tired but you can probably find a cuter metaphor.)
It's amazing to me that the floppy is still a relevant target unit. Just large enough to be useful, small enough to be a real challenge to use well. I don't see the same passion for 700MB CD-ROM distributions, probably because the challenge just isn't there.
The Linux kernel drops i486 support in 6.15 (released May 2025), so 6.14 (released March 2025) is the latest version with full compatibility.
I should've been more clear. Sure, I started my Linux days on 2.0.36, which booted by floppy, on a Pentium 2. But what I want is some semblance of a distro, with tools and a way to do things, not just rolling my own technically-bootable kernel.
Since it's a 1.44M image I assume they use 3.5" diskettes. The terms floppy and diskette are used as synonyms today, but the different names make sense, since floppies are flexible and actually "floppy". Diskettinux?
I was making routers out of old PCs (486s or early Pentiums) with 2 network cards (3Com or NE2000) back in 2000, with floppies and CoyoteLinux. I installed tens of them in students' houses.
I was hoping someone would mention CoyoteLinux. It was my residential router for several years in the early 2000s. My 'disaster recovery plan' consisted of a second floppy disk (which fortunately I never had to use).
I used https://www.zelow.no/floppyfw/ to set up a small router on a 486 with 12 MB of RAM and it ran flawlessly. Later I got a Linksys WRT54GL and decommissioned that machine.
(That mail also mentions the floppy driver is "basically orphaned" though. But evidently it's still there and builds.)
Maybe you're thinking of the floppy tape (ftape) driver, which was removed back in the 2.6.20 kernel. Though there's a project keeping an out-of-tree version of it working with recent kernels at https://github.com/dbrant/ftape
Don't think so? Linux should still support almost all builtin motherboard floppy controllers, for the platforms it still runs on. ISA floppy controller support is probably not as comprehensive, but not because anything has been dropped.
Floppy is a race of robotic jackalopes, known for their floppy ears. A "Single Floppy" is a rare subset of that species where only one ear flops down due to a random mutation of their hardware.
Ok, impressive, but - why?
No current computer has a floppy disk drive anymore.
The web page claims building such a disk is a learning exercise, but the knowledge offered is pretty arcane, even for regular Linux users.
Is this pure nostalgia?
It's basically what people used before USB sticks. But it was also the storage medium that software was sold on, before CD-ROMs became widespread.