devfs for Management and Administration
Traditionally in a UNIX-style operating system, processes access devices via the /dev directory. Within this directory are hundreds of device nodes allocated as either block or character devices, each with a major and minor number that corresponds to its device driver and its device instance. Therefore, whenever the kernel gains support for a new device, you have to create a corresponding node in /dev so that processes are able to access it. This chore can become tedious and makes life as a system administrator a little more complex.
A lot of other problems come with having /dev on disk. For one, managing permissions and unused device nodes can be time-consuming and overly complex. Another major problem is that you cannot mount a read-only root filesystem that is practical for everyday use. There are also the issues of /dev growth and of hosting /dev on a non-UNIX root filesystem.
devfs provides a solution to these problems and also gives system administrators a new tool for checking which devices are available. devfs is written as a virtual filesystem driver; it keeps track of the device drivers currently registered while automatically creating and removing the corresponding device nodes in /dev. devfs consists of three different parts, but as a system administrator you will interact with only two of them. The first part is the kernel VFS module. The part of devfs that you will not deal with as a system administrator is the kernel API that devfs provides to drivers.
Each driver must call devfs_register() and devfs_unregister() to work with devfs. If you would like more details on how to write drivers that function with devfs, check out Richard Gooch's web site in the Resources section. The final piece of the devfs puzzle is called devfsd. devfsd is a system dæmon that does all the ugly tasks, including managing permissions, maintaining symbolic links within /dev and a host of other things that go beyond the scope of this article.
Managing /dev can be a big pain in the rump. For starters, on a typical system there are over 1,200 device nodes. And out of those, only a couple hundred are ever used. This results in an extremely messy /dev directory. How many of you out there actually go through and clean up all the entries in /dev that correspond to hardware you don't have and probably never will have? Not many, I bet. Skipping the cleanup does not seem to be too big of a deal--device nodes do not take up a lot of space, and we all have multigigabyte hard drives. But it is still somewhat problematic, because as /dev grows, device lookup time increases.
With devfs in place, you now have an intelligent device management scheme that creates and removes nodes in /dev when you load and unload the kernel device driver modules. This is taken care of at the kernel level, so as a system administrator you do not have to worry about a thing. Having dynamic device node creation also allows you to use /dev as an administration utility to see if your hardware is installed properly.
Yet another problem with having /dev on a disk is that you cannot mount a practical read-only root filesystem. When you are working with embedded systems, this factor can be crucial. With /dev on a disk, if you were to mount the root filesystem as read-only, you would not be able to change tty ownerships. This results in a slew of problems and security issues. The other problem relating to this is having a non-UNIX root filesystem, because the majority of non-UNIX filesystems do not support character and block special files or symbolic links. devfs fixes both of these problems because /dev is now mounted as a virtual filesystem in read-write mode and is not dependent on the state of the root filesystem.
Getting devfs up and running on your system is a fairly easy task and can be completed on a Saturday afternoon. The steps involved are rebuilding the kernel, installing the new kernel, building devfsd, installing devfsd, configuring devfsd and rebooting. If you are unfamiliar with rebuilding your kernel, you should either wait until your distribution is shipping kernel packages with devfs support or check out the Linux Kernel HOWTO (see Resources).
The first step in installing devfs is ensuring that your kernel has devfs support built-in. You can do a quick check to see if your currently running kernel has devfs support by executing:
grep devfs /proc/filesystems
If your kernel has devfs you should see:
nodev devfs
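In the same spirit, you can also check whether /dev is actually being served by devfs, not just whether the kernel supports it. The following is a small sketch that reports both (it reads /proc/filesystems and /proc/mounts, so it is safe to run on any Linux system, devfs or not):

```shell
#!/bin/sh
# Report devfs status: kernel support (from /proc/filesystems) and an
# active mount (from /proc/mounts). Harmless on systems without devfs.
if grep -q devfs /proc/filesystems 2>/dev/null; then
    support="kernel supports devfs"
else
    support="kernel lacks devfs support"
fi
if grep -q ' devfs ' /proc/mounts 2>/dev/null; then
    mounted="/dev is mounted as devfs"
else
    mounted="devfs is not mounted"
fi
echo "$support"
echo "$mounted"
```

On a kernel built with CONFIG_DEVFS_MOUNT, both lines should come back positive.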
If you do not have devfs support in your kernel, you are going to need to build a new kernel, specifically kernel 2.4.10 or greater. I would recommend getting the latest kernel source from www.kernel.org; for this article I was using 2.4.18. Configure the kernel to your liking with your favorite configuration method and add the following options:
CONFIG_EXPERIMENTAL
CONFIG_DEVFS_FS
CONFIG_DEVFS_MOUNT
You also should disable devpts, since devfs now takes care of this process. (Various users have reported that leaving devpts enabled creates serious operational problems with devfs.) Install your spiffy new kernel, and do not forget to make a backup of your old one in case something goes awry.
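For reference, the rebuild can be outlined as a dry-run script. The paths (/usr/src/linux, /boot, arch/i386) and the lilo step are common 2.4-era defaults, not requirements of devfs itself; set DRYRUN=0 on a real build machine to actually execute the commands:

```shell
#!/bin/sh
# Dry-run outline of a 2.4-era kernel rebuild with devfs enabled.
# Defaults only preview the commands; set DRYRUN=0 to run them for real.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run cd /usr/src/linux
run make menuconfig   # enable CONFIG_EXPERIMENTAL, CONFIG_DEVFS_FS,
                      # CONFIG_DEVFS_MOUNT; disable devpts
run make dep bzImage modules modules_install
run cp /boot/vmlinuz /boot/vmlinuz.old       # keep a backup kernel
run cp arch/i386/boot/bzImage /boot/vmlinuz
run lilo                                     # or your bootloader's update step
```

The backup copy of the old kernel is the safety net mentioned above: if the new kernel fails to boot, you can still fall back to vmlinuz.old from the bootloader.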
You are now ready to install devfsd. devfsd is the portion of devfs that manages permissions, symbolic links, compatibility issues and other miscellaneous things. While you are not required to run devfsd, it is highly recommended; if you do not run it, all of your software must be configured to point to the new locations in /dev. Go out and download the latest version of devfsd from Richard Gooch's web site. As of this writing, the latest version is 1.3.25. Compiling and installing devfsd is pretty typical. The only minor change is that if you do not keep your kernel source in /usr/src/linux, you should set the environment variable KERNEL_DIR to point to your kernel source directory. Extract and install devfsd:
tar -xzvf devfsd-v1.3.25.tar.gz
cd devfsd/
make && make install
After installing devfsd you will need to create a startup script and modify the devfsd.conf file to your liking. The startup script for devfsd should run before anything else, so any dæmon or process that accesses /dev in the old way will still run. See Listing 1 for a basic startup script. Installing the startup script is going to differ between distributions. For Debian GNU/Linux, copy the devfsd script to /etc/init.d and create a symbolic link to /etc/rcS.d/S01devfsd, so devfsd always gets started. You also will want to link shutdown scripts to /etc/rc1.d/K99devfsd and /etc/rc6.d/K99devfsd. Refer to your distribution's documentation on how and where to place new startup scripts.
Listing 1. Basic devfsd Startup Script
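The Debian-style installation described above can be rehearsed in a scratch directory before touching the real /etc. In this sketch, DESTDIR is a placeholder staging root, and the one-line script merely stands in for a real startup script such as Listing 1:

```shell
#!/bin/sh
set -e
# Rehearse the Debian-style devfsd script installation in a scratch
# directory; DESTDIR is a throwaway stand-in for the real root.
DESTDIR=$(mktemp -d)
mkdir -p "$DESTDIR/etc/init.d" "$DESTDIR/etc/rcS.d" \
         "$DESTDIR/etc/rc1.d" "$DESTDIR/etc/rc6.d"

# Stand-in for the real startup script (see Listing 1).
printf '#!/bin/sh\nexec /usr/sbin/devfsd /dev\n' \
    > "$DESTDIR/etc/init.d/devfsd"
chmod 755 "$DESTDIR/etc/init.d/devfsd"

# S01 so devfsd starts before anything else touches /dev;
# K99 so it is stopped last on halt (rc1) and reboot (rc6).
ln -s ../init.d/devfsd "$DESTDIR/etc/rcS.d/S01devfsd"
ln -s ../init.d/devfsd "$DESTDIR/etc/rc1.d/K99devfsd"
ln -s ../init.d/devfsd "$DESTDIR/etc/rc6.d/K99devfsd"

ls -l "$DESTDIR/etc/rcS.d/"
```

Once the layout looks right, repeat the same copy and ln -s commands against the real /etc as root.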
The next step to getting devfs up and running on your system is to configure devfsd. The configuration file for devfsd is located at /etc/devfsd.conf (see Listing 2). This file allows you to tweak devfsd to do almost anything relating to /dev.
I like to keep my devfsd configuration pretty simple and include only compatibility entries, module auto-loading and /dev permissions. The following two lines in devfsd.conf create compatibility symlinks to the old device names, so all your currently configured software still works:
REGISTER .* MKOLDCOMPAT
UNREGISTER .* RMOLDCOMPAT
If you want to enable the module auto-loading functionality, add this line:
LOOKUP .* MODLOAD
This brings us to something that frustrates a lot of people when they first start using devfs. How do I get my permissions to come back after a reboot? This question has many answers: you can create a tarball of all the changed inodes prior to shutdown and then untar them during startup; you can store your permissions on a disk-based /dev and have devfsd copy and save them when starting up and shutting down; or you could simply add PERMISSIONS entries to your /etc/devfsd.conf file. Managing the device permissions for devfs in devfsd.conf via PERMISSIONS entries is great--you can have one entry for an entire group of devices. The following are some basic permissions I set up on my workstation:
REGISTER ^cdroms/.* PERMISSIONS root.cdrom 0660
REGISTER ^pty/s.* PERMISSIONS root.tty 0600
REGISTER ^sound/.* PERMISSIONS root.audio 0660
REGISTER ^tts/.* PERMISSIONS root.dip 0660
What those entries do is fairly simple. All the devices that are found under /dev/cdroms now have root as the owning user and cdrom as the owning group, with 0660 permissions, or u+rw g+rw o-rwx. I find this to be the easiest way to manage permissions. Using devfsd to manage permissions also prevents you from doing a quick chmod on a device when you first install it, telling yourself that you will set up the permissions correctly later and, of course, quickly forgetting that promise.
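The equivalence between 0660 and u+rw g+rw o-rwx is easy to confirm on any file. This sketch uses an ordinary temporary file rather than a live device node, so it is safe to run anywhere:

```shell
#!/bin/sh
set -e
# 0660 (octal) and u+rw,g+rw,o-rwx (symbolic) yield the same mode;
# demonstrate on a throwaway file instead of a real device node.
f=$(mktemp)
chmod 0660 "$f"
octal=$(stat -c '%a' "$f")
chmod a-rwx "$f"                 # clear everything first
chmod u+rw,g+rw,o-rwx "$f"       # then apply the symbolic form
symbolic=$(stat -c '%a' "$f")
echo "octal form:    $octal"
echo "symbolic form: $symbolic"
rm -f "$f"
```

Both commands should report 660: owner and group get read/write, while everyone else gets nothing.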

Comments
It is disappointing to see a glaring error (possibly of omission) in the
second paragraph of this article.
It is possible to access normal UNIX devices (for read, write and ioctls)
even if the device nodes are on a read-only filesystem. I say that this
is possibly merely an error of omission because there is one common
operation on tty devices that is not possible on read-only nodes.
Many UNIX systems (including all mainstream Linux distributions) have
implementations of the login command (or PAM modules) that chown and
chmod a user's terminal as the user logs in. This particular feature would
not work on static device nodes on read-only media/filesystems. That
would mean that some programs (such as the mostly obsolete mesg
command) would not function properly (since it requires user ownership
of the terminal). Even that would only affect serial port and virtual console
logins. A modern Linux system with a modern glibc should be using
/dev/pts which is mounted like proc or devfs.
Seeing such an error (or omission) so early in this article seriously disrupted
my reading of the rest.
Other than that the article was mostly straightforward and reasonable.
The caveat about disabling devpts is well taken. The description of
devfsd's role is brief, perhaps too brief: is devfsd statically linked or
linked exclusively against things that the FHS guarantees to be under
/lib (NOT under /usr/lib)?
The suggestion regarding the use of "generic" versus explicit interface
names (in /etc/fstab) is spurious. If you want device-independent
mounting of devices, use labels. If you migrate your data from IDE to
SCSI drives, you'll probably have to change your fstab under any device
naming scheme you used; but if you migrate the data *and* set
labels, the problem will simply not come up.
Personally I think that the majority of mainstream Linux users and
professional system administrators should simply wait until the distribution
maintainers and developers integrate devfs into their core installation
and package management suites. I've played with devfs, and many
programmers, especially kernel and device driver developers, should
experiment with it and get used to its quirks. However, it will become
a management nightmare for sysadmins to make such fundamental
customizations of their production systems without a compelling reason.
This (devfs) is good in theory. However, devfs does not exist on its own. There are a lot of drivers and configuration files that know nothing about it.
In my own experience, I tried devfs for a diskless cluster and could not get rid of multiple warnings. And with a production system, one cannot afford to take risks.
So, devfs is nice, but I would rather wait until it is officially supported by my distribution.