Although the GNU operating system was first conceived in 1983, and the Free Software Foundation (FSF) had declared an interest in using the Mach microkernel as the core of the GNU kernel as far back as 1987, the sources of Mach – developed at Carnegie Mellon University (CMU) – weren’t released under a suitable licence until 1991, by which time Linus Torvalds had begun his project to write a UNIX-like kernel for the Intel 386.
If the Linux kernel hadn’t been written when it was, licensed under the GPLv2 and surrounded by components of the GNU operating system, or if Linux hadn’t captured the moment and the imagination of developers, the energy that gathered around Linux might have gone to the Hurd, and the world might have been a different place. But it wasn’t just the rise of Linux, or the choice of the Mach microkernel, that slowed the progress of the Hurd.
The design of the Hurd was an attempt to embody the spirit and promise of the free software movement in code. As one hacker, an anonymous FSF employee from the early days of the Hurd project, put it: “The sentiment around the design was, I think it fair to say, somewhat giddy. The free software movement was (and is) all about freeing users from subjugation to those who provide software. The Hurd’s microkernel architecture and the structure of the daemons would securely free users from subjugation to system administrators – each user could securely invoke a set of daemons to create the operating environment he or she wished, no special permissions required.”
A slightly large closet
Richard Stallman had announced his intention to write a complete UNIX-like operating system, to be known as GNU (‘GNU’s Not Unix!’), in September 1983. The years between 1983 and the inception of the Hurd were spent writing the operating system components and tools that made the development of a kernel possible: the editors and compilers – Bash, Make, Autoconf, Emacs, GCC, sed, gawk – and the command-line tools.
GNU paid for itself through the sale of the software. In the early days of the Hurd the FSF employed developers, before “any kind of data over voice or particularly high bandwidth connection was commonplace – so that hacking was over modem connected to text terminal. Mostly we hacked in a shared office which, if you saw it, you’d think ‘Wow, that’s a slightly large closet.’ We were, at that time, guests of MIT.”
Linus Torvalds had announced the arrival of “a (free) operating system (just a hobby, won’t be big and professional like GNU) for 386(486) AT clones” on comp.os.minix just a few short months after work began on the Hurd. Torvalds’ choice of a monolithic kernel was not the choice of the purists, but provided the quickest route to a working kernel.
The appeal of the Linux kernel to the hackers, hobbyists and academics who swarmed to help in its development was that it was free software, available under GPLv2, and ran on the kind of hardware they had at home. The momentum was with Linux and the community grew surprisingly fast. The community made GNU/Linux what it was, and while work continued on GNU Hurd, it was at a slower pace. The community wasn’t coming the Hurd’s way.
An idealistic philosophy
From a user perspective, the Hurd was going to be a long time coming, and the Linux developers had slotted Linux into the space that was meant to be occupied by the Hurd at the heart of the GNU operating system. Stallman was initially sceptical. Early versions of Linux were exclusive to the Intel 386, and according to Stallman: “We heard that Linux was not at all portable (this may not be true today, but that’s what we heard then). And we heard that Linux was architecturally on a par with the UNIX kernel; our work was leading to something much more powerful.”
Linux was dependent on GCC and the GNU tools, and its profile began to grow as distributions emerged. The FSF came to see Linux as an acceptable, if sub-optimal and temporary, substitute for the kernel at the heart of the GNU operating system. As Stallman was quick to point out: “There is no operating system called Linux. The OS called Linux is GNU. Linux is a program – a kernel. A kernel is one part of an OS, the lowest-level program in the OS that keeps track of other programs running, and apportions memory and processor time among them.”
He insisted that the GNU operating system with Linux at its heart should be known as GNU/Linux so that “people understand that the system exists because of an idealistic philosophy. Call it Linux and it defeats the philosophy. It’s a very serious problem. Linux is not the system. Linux is one piece of it… The idealistic vision of the GNU project is the reason we have this system.”
Work continued on the Hurd but it became obvious that the FSF had chosen a difficult route in its search for perfection. The microkernel presented a series of problems to overcome, and people who might have participated had been diverted to work on Linux, which was usable and bearing fruit. Despite the criticisms of the likes of Andy Tanenbaum at the outset of the Linux project, Torvalds’ choice of a monolithic kernel for Linux made it easier to arrive at a working free operating system.
Stallman later admitted, “I take full responsibility for the technical decision to develop the GNU kernel based on Mach, a decision which seems to have been responsible for the slowness of the development. I thought using Mach would speed the work by saving us a large part of the job, but I was wrong.” In later years the Hurd has been ported to a variety of microkernels, from L4 to Coyotos and Viengoos, but it has never had the community and resources that went the way of Linux.
The principle and the promise
In the late Nineties there was a schism in the community, symbolised by the EGCS (pronounced ‘eggs’) fork of GCC – an attempt to move GCC development away from the FSF – and the founding of the Open Source Initiative (OSI) to promote a less stringent, or watered down, view of the possibilities of free software.
“The main differentiation [the OSI] sought from the FSF is that they would not condemn proprietary software or describe themselves as a freedom movement – they sought to emphasise the economic advantages of having volunteers do work for no pay.” But in the view of some “their main purpose upon founding was to attempt to politically marginalise RMS (a project in which they’ve had some success).”
An appearance of the Hurd was first promised in 1994, when Emacs was said to be up and running, and a release was promised in 2001, but never materialised. After the port to the L4 microkernel in 2005, Marcus Brinkmann was promising “we can now easily explore and develop the system in any way we want,” but was forced to admit that “with my glibc port, I can already build simple applications, but most won’t run because they need a file system or other gimmicks (like, uhm, fork and exec), and I only have stubs (dummy functions which always return an error) for that now.”
In the mid-Nineties Debian arrived on the scene and, through the ‘Debian Free Software Guidelines’ written by Bruce Perens, became the practical expression and conscience of the free software movement, while the FSF divested itself of much of its role in defining and developing the GNU operating system and put its efforts into the politics of free software.
Since 1998, Debian GNU/Hurd has been an active project of the Debian community, which offers an installation CD and live CD that can be seen as the sanctioned snapshot of current Hurd development, though it is still not considered an ‘official’ Debian release. The Hurd is not yet of production quality and has some limitations on hardware support, but it can usefully be run in a virtual machine and is worth a try.
Where once the FSF paid developers to work on GNU projects, most are now volunteers or employees of companies paid to work on projects like GCC. Much of the focus went out of the Hurd because Linux does the job, and there was no burning need for another kernel, but the principle and the promise have lingered on, and there may yet be scope for a return to the original vision of the GNU Hurd.
Alix – The True GNU Kernel
Richard Stallman tells the story that the GNU kernel was not originally supposed to be called the Hurd.
“Its original name was Alix – named after the woman who was my sweetheart at the time. She, a UNIX system administrator, had pointed out how her name would fit a common naming pattern for UNIX system versions; as a joke, she told her friends, ‘Someone should name a kernel after me.’ I said nothing, but decided to surprise her with a kernel named Alix.”
“It did not stay that way. Michael (now Thomas) Bushnell, the main developer of the kernel, preferred the name Hurd, and redefined Alix to refer to a certain part of the kernel – the part that would trap system calls and handle them by sending messages to Hurd servers.”
“Later, Alix and I broke up, and she changed her name; independently, the Hurd design was changed so that the C library would send messages directly to servers, and this made the Alix component disappear from the design.”
“But before these things happened, a friend of hers came across the name Alix in the Hurd source code and mentioned it to her. So she did have the chance to find a kernel named after her.”
Bushnell chose the name Hurd partly because it suggested a herd of gnus, and partly because ‘Hurd’ is a recursive acronym for ‘Hird of Unix-Replacing Daemons’, while ‘Hird’ stands for ‘Hurd of Interfaces Representing Depth’. As Bushnell put it: “We have here, to my knowledge, the first software to be named by a pair of mutually recursive acronyms.”
Thomas Bushnell is still a Debian developer and a Gregorian friar.
At the bleeding edge
Unlike the Linux kernel, which is monolithic, the Hurd uses a microkernel, and functionality is moved out of kernel space and into userland. The microkernel sits between the hardware and most of the activities that are normally assumed by a monolithic kernel.
Thomas Bushnell, one of the primary architects of the Hurd in its earlier days, summarised the theory in his paper ‘Towards a New Strategy of OS Design’, written in 1996. “The GNU Hurd,” he wrote, “is designed to make the area of system code as limited as possible. Programs are required to communicate only with a few essential parts of the kernel; the rest of the system is replaceable dynamically. Users can use whatever parts of the remainder of the system they want, and can easily add components themselves for other users to take advantage of. No mutual trust need exist in advance for users to use each other’s services, nor does the system become vulnerable by trusting the services of arbitrary users.” In practice, this means that users do not have to defer to the superuser for activities like mounting a file system or loading a device driver – long the case on Linux, though Linux has since begun to accumulate microkernel-like features of its own.
“It was well understood back then,” an anonymous GNU employee remembered, “and even a point of discussion in academia, that a microkernel architecture posed some difficult problems for performance (related mostly to a greater number of context switches as messages pass between daemons rather than syscalls being handled by a monolithic kernel). Rashid’s work [at Carnegie Mellon] had suggested that this problem was not so terribly significant after all. And so, at least to me, it felt like the GNU project was not only doing this shoestring-budget freedom-fighting hacking, but also leading near the bleeding edge of CS research made practical. Well, that was the theory, anyway, and we were mighty proud of ourselves and generally excited to be there.”
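The performance concern described above – extra hand-offs as a request travels to a user-space daemon and back, rather than being handled in one place – can be illustrated with a toy model. This is a sketch in plain Python threads and queues, nothing Hurd-specific, and all the names in it are invented for illustration:

```python
import queue
import threading

def monolithic_read(data, offset):
    # In the monolithic model, the "syscall" is handled directly:
    # one call, no hand-offs to other components.
    return data[offset]

class FileServer(threading.Thread):
    """Toy stand-in for a user-space daemon, such as a filesystem server."""
    def __init__(self, data):
        super().__init__(daemon=True)
        self.data = data
        self.requests = queue.Queue()

    def run(self):
        while True:
            offset, reply_q = self.requests.get()
            if offset is None:          # shutdown message
                break
            reply_q.put(self.data[offset])  # second hand-off: reply to client

def microkernel_read(server, offset):
    # In the microkernel model, the same request becomes a message to a
    # daemon plus a wait for the reply -- two context switches instead of none.
    reply_q = queue.Queue()
    server.requests.put((offset, reply_q))  # first hand-off: message to daemon
    return reply_q.get()

data = b"hello"
server = FileServer(data)
server.start()

# Both paths return the same result; the microkernel path just pays
# the cost of the round trip through the server thread.
assert monolithic_read(data, 1) == microkernel_read(server, 1)
```

The point of the sketch is only the shape of the trade-off: the result is identical, but each microkernel-style request crosses two queue boundaries, which is the overhead the Mach research cited in the quote argued was tolerable in practice.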
The Hurd was a remarkable adventure into the state of the art of operating system theory as it existed at that time. The objective of GNU was to achieve something both UNIX-like and akin to the operating system of a Lisp machine, the original single-user workstation that had grown out of the hacker culture of the AI Lab at MIT, where Stallman had learnt his craft. “Emacs (with its Lisp extensibility) was taken to be a paradigm for how interactive programs might work. Originally, it was even envisioned that the window system would be Lisp based.
“One early change to the original GNU vision occurred when it became clear that X11 worked pretty well and was here to stay and would be free software. As a practical matter: just use that.”
The might have beens
When GNU was first conceived, the obvious solution was to find a ready-made kernel that was already freely available.
Stallman’s first choice was TRIX, which had been developed on his home ground at MIT, and is mentioned in the GNU Manifesto. “An initial kernel exists but many more features are needed to emulate UNIX,” he wrote in 1984. “When the kernel and compiler are finished, it will be possible to distribute a GNU system suitable for program development.” As late as December 1986, the GNU developers were “working on the changes needed to TRIX”, and it wasn’t until the following year that Stallman began to take an interest in Mach.
Other ideas were mooted, including the use of Berkeley’s Sprite operating system and the BSD kernel. “RMS was a very strong believer, wrongly, I think, in a very greedy algorithm approach to code reuse issues,” Thomas Bushnell later remembered.
“My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today. RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.”
As Bushnell describes it, Stallman came to the conclusion that “Mach is a working kernel. 4.4-Lite is only partial. We will go with Mach.”