Jim Mohr's SCO Companion


Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.

Be sure to visit Jim's great Linux Tutorial web site at https://www.linux-tutorial.info/

Configuring the Network

You may have noticed that this is the first chapter specifically intended to help you configure something. Up to this point, I have tried to avoid the issue of configuration. First, that's not the focus of this book; I want you to understand the complexities of each issue so that you can figure out how to do the configuration on your own. Second, that's what the manuals are there for. Despite what some people want to believe, the SCO manuals are well written. They contain what you need to configure your system, even if you don't know what you are doing.

So, why do we have a chapter on configuring your network? Well, it was necessary. The number of network sites is now well over a million and it is growing every day. Unless your office consists of a single server and a couple of terminals, you probably have some kind of network connection. Even if you do have a small site, the excitement of the Internet is something that you just can't miss out on. In order to get involved, you have to connect to the Internet.

There is also the issue that configuring a network is not an easy thing, especially for the uninitiated. You may understand the principles, but unless you have configured it before, you will probably be flipping through the manuals once or twice. Also, there are enough files out there that you may just make a typo or two.

In this chapter, we're going to cover some of the basics of configuring network connections. Along the way, we'll talk about some of the more common places where mistakes are made. If you know the mistakes other people make, you are less likely to make them yourself.

Installing and Configuring Network Software

One key issue when configuring your network is the issue of user access. If everyone will be connecting to other machines using telnet or ftp, then all you need to worry about is ensuring that they have an account on that machine. However, when you want to allow access through the remote commands, where a login is not always desirable, you need to think of something else. This is where the concept of user equivalence comes in.

As its name implies, user equivalence is a mechanism whereby users on two separate machines can be considered the same user. If configured, commands can be issued on a remote machine without having the user log in. One important point is that this equivalence is based on the user's name, not the UID.

User equivalence can be established in two ways. You can establish a system-wide equivalence by creating the file /etc/hosts.equiv. This file simply contains a list of remote hosts to which user equivalence has been established. Every user from the remote host who uses an "r" command (rlogin, rcmd) will be considered equivalent to his local counterpart, so a password will not be required. Keep in mind, however, that user equivalence is only one way. The existence of your machine in my /etc/hosts.equiv file means you can connect to me. It does not mean I can connect to you.

Unless you are in a very trusting environment, creating entries in a /etc/hosts.equiv file may not be a good idea. This opens up one machine to everyone on the other. On the other hand, setting up user equivalence in this fashion does not allow root access.

The second way of establishing user equivalence is on a user-to-user basis. By creating the file $HOME/.rhosts, you can allow specific accounts from specific machines access as this user. For example, my user (jimmo) on scoburg has a .rhosts file in his home directory. It contains four lines:

siemau jimmo

vesta jimmo

vesta tisha

vesta root

Here the user jimmo on the machines siemau and vesta is equivalent to the local user jimmo, just as the user root on vesta is. That means if I am logged in on siemau as jimmo, I can issue a remote command to scoburg. With the root and tisha accounts on vesta it is a little different. When you use a remote command, the default is to assume you are the same user on the remote side. If you aren't, you need to use the option -l <user_name>, where <user_name> is the user on the remote machine. For example, if I were on vesta, logged in as tisha, and wanted to run the date command on scoburg as jimmo, the command would be:

rcmd scoburg -l jimmo date

On scoburg the .rhosts file in jimmo's home directory is checked for the user tisha. Since tisha is listed, the command is successful.
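The lookup logic just described can be sketched in a few lines. This is only an illustration of the rule the r-command daemons apply (the real check is done inside the system's networking code, not by a script like this):

```python
# Sketch of the .rhosts lookup described above (illustrative only; the
# actual check is performed by the r-command daemons, not by this code).

def rhosts_allows(rhosts_text, remote_host, remote_user):
    """Return True if a .rhosts file with this content would let
    remote_user coming from remote_host in without a password."""
    for line in rhosts_text.splitlines():
        fields = line.split()
        # each line is: remote_host remote_user
        if len(fields) == 2 and fields[0] == remote_host and fields[1] == remote_user:
            return True
    return False

# jimmo's .rhosts on scoburg, as shown above:
rhosts = """siemau jimmo
vesta jimmo
vesta tisha
vesta root
"""

print(rhosts_allows(rhosts, "vesta", "tisha"))   # listed, so allowed
print(rhosts_allows(rhosts, "siemau", "tisha"))  # not listed
```

Note that the check is per host-and-user pair: tisha is trusted from vesta but not from siemau.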


Installing SCO network software (Custom, Netconfig)



Despite its appearance to the uninitiated, UUCP is fairly straightforward to configure. As I talked about previously, there are three primary files when dealing with UUCP connections. All three reside in /usr/lib/uucp and are simply text files that are edited to configure UUCP. Whether you use a text editor yourself or use uuinstall (the UUCP Manager in OpenServer simply calls uuinstall), all that is getting done is editing these files. Since it is these files that are getting edited, I will forgo a discussion of any of the configuration tools. Once you know how to configure it by hand, using the configuration tools is easy.

In a nutshell, we can say that the Systems file says which machines you know about, how to call them, and when is the right time to contact them; the Devices file says what ports to use, at what speeds, and how the device is connected to your system; and the Permissions file says who can get what file and from whom. In addition to configuring these files, there are a few hardware considerations before you can get UUCP to work.

One of the first things is another site to call. Granted, most of you already knew that, but I figured it had to be said. Next, you need a serial line/port. If all the serial ports on your machine are taken up, you cannot connect to the remote machine. Now, that might sound like I am being silly, but trust me, the question will come up. While in SCO Support, I talked with a customer who could not get his modem working. He tried several combinations of speeds, parity, and other settings, but it wouldn't even dial out. Finally he called SCO Support and got me. It turned out that he had a terminal on tty1a and was trying to get the modem working on tty1A. (If this means nothing to you, look back on our discussion of major and minor numbers.)

Note that you don't have to have a modem to get a UUCP connection. If you have two machines side by side and no other means of communicating other than serial cables or SNEAKER-net, then UUCP is a viable alternative. For details on SNEAKER-net see the chapter on networking basics.

For right now, let's assume that there is a modem between the machines. Later we'll get into connecting between two machines without a modem. Start by logging in as root and changing directories to /usr/lib/uucp. First, we'll edit the Systems file. As I mentioned a moment ago this is used to determine which sites are known to you and how to contact them. However, before you start editing, I suggest you gather the information about this connection. Let's make a list, using the fields in the Systems file entry as a base. Let's take the line that's used for SCO's bulletin board, sosco, which looks like this:

sosco Any ACU 300-2400 14084253502 -\r\d-ogin:-\K\d-ogin:-\K\d-ogin: uusls

The format of each line is:

site_name times_to_call device-type connection_speeds phone_no chat-script

First, we have the site_name. This is what you want to call the machine when using UUCP. This does not have to be the same as its TCP/IP name or any other name you previously gave it. Although it does make life simpler, you can call it whatever you like. The only consideration is that you may run into problems with machine names longer than eight characters, so I suggest you keep them shorter.
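Since the first five fields are whitespace-delimited and the chat script is everything that remains, splitting a Systems entry is easy to sketch. Python is used here purely for illustration; none of this is part of UUCP itself:

```python
# Splitting a Systems file entry into its six fields. The chat script is
# everything after the phone number, so we split at most five times.

line = r"sosco Any ACU 300-2400 14084253502 -\r\d-ogin:-\K\d-ogin:-\K\d-ogin: uusls"

site, times, device, speeds, phone, chat = line.split(None, 5)
print(site, device, speeds)   # sosco ACU 300-2400
```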

The next issue to consider is the times to call. In many situations, you will want to set it up to call at any time, in which case this field will be "Any". However, the site might be far enough away to be long distance. If the information you transfer is not all that urgent, then allowing connections only between 6PM-8AM is cheaper. One company I know uses UUCP to transfer the daily accounts from their point-of-sale system from the branch offices to the main office. To ensure this is done after all transactions have been completed, they only allow communication from the main office to the branch offices after midnight. This would require a Never in the branch offices and something like 0001-0800 in the main office.

I am aware of cases where connections are allowed only once a week. This too can be configured in the times_to_call field. Using the two letter abbreviations for the days of the week, you can come up with any combination you like. For example:

MoWeFr1230-1345

would allow you to call the remote machine only on Monday, Wednesday and Friday between 12:30 and 1:45 PM. The abbreviations for the days of the week are Mo, Tu, We, Th, Fr, Sa, and Su, plus Wk for every weekday (Mo-Fr). You can also use Wk in conjunction with both times and weekend days. For example, if we wanted to allow calling only between 8PM and 8AM on weekdays, but all day Saturday and Sunday, the entry might look like this:

Wk2000-0800,Sa,Su
Be careful when you have times that span midnight. Despite appearances, UUCP interprets the day and time as two separate entries. As a result, you might not get what you expect with the entry:

Wk2000-0800
This appears to mean Mo-Fr, 8PM-8AM, which it does, sort of. What happens on Friday? When we hit midnight on Friday, it is no longer a weekday, but rather Saturday; therefore the system will not be contacted after midnight on Friday. The reverse goes for Sunday after midnight. It is then Monday, a weekday, between 8PM-8AM. Therefore, the call will be made.
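The Friday-midnight surprise falls out of the fact that the day and the time are checked independently. Here is a small sketch of that evaluation logic (illustrative only; it handles just the simple one-subfield case, not full Systems syntax):

```python
# Sketch of how UUCP evaluates a schedule like "Wk2000-0800": the day
# and the time are checked independently, which is why Friday after
# midnight (i.e. Saturday) is not a legal calling time.

WEEKDAYS = ("Mo", "Tu", "We", "Th", "Fr")

def may_call(day, hhmm, schedule="Wk2000-0800"):
    dayspec, timespec = schedule[:2], schedule[2:]
    if dayspec == "Wk":
        if day not in WEEKDAYS:
            return False
    elif day != dayspec:
        return False
    start, end = timespec.split("-")
    if start <= end:
        return start <= hhmm <= end
    return hhmm >= start or hhmm <= end   # range spans midnight

print(may_call("Fr", "2300"))  # True: Friday evening
print(may_call("Sa", "0100"))  # False: it is Saturday, not a weekday
print(may_call("Mo", "0100"))  # True: Monday, between 8PM and 8AM
```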

You can also specify the minimum time to wait before trying the call again, should it fail. This is separated from the calling times with a semi-colon and is expressed in minutes. For example, to wait 5 minutes before trying a failed call in the above example, we would write it like this:

Wk2000-0800;5
Normally, UUCP uses what is called "exponential backoff". This means that the time UUCP waits before retrying after successive failures increases exponentially. By using this retry field, UUCP will wait that fixed amount of time instead of using the exponential backoff. Keep in mind that usually UUCP will make one retry immediately after a failed attempt. This is a retry from our point of view, but not for UUCP.
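The difference between the two retry policies can be sketched like this. The base interval and cap used by uucico are implementation details, so the numbers here are only assumptions for illustration:

```python
# Exponential backoff versus a fixed ";5"-style retry time, sketched.
# The base interval (5 minutes) and the cap (one day) are assumptions
# chosen for illustration, not uucico's actual constants.

def wait_before_retry(failures, fixed_retry=None, base=5, cap=1440):
    """Minutes to wait after the given number of consecutive failures."""
    if fixed_retry is not None:          # a ";5" entry in Systems
        return fixed_retry
    return min(base * 2 ** (failures - 1), cap)

print([wait_before_retry(n) for n in range(1, 6)])   # [5, 10, 20, 40, 80]
print(wait_before_retry(4, fixed_retry=5))           # always 5
```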

If the call fails the second time, there must be some mechanism to retry, as UUCP will not normally do it on its own. Instead you must either have polling set up (which we talk about later) or start uucico by hand (uutry). If the retry time has not been reached, you get an error to that effect; the system will not allow you to retry until it has. (To get around this, remove the status file for that system in /usr/spool/uucp/.Status.)

The next thing to look for is the kind of device you want to connect through (the device_type field). The two most common are ACU (Automatic Call Unit, i.e., a modem) and Direct (a direct connection). There are others, including TCP; however, they are not commonly used, so we will forgo discussing them. The device type is not really a predefined, carved-in-stone device, but rather an alias. It points to a particular entry (or group of entries) in the Devices file. Therefore, if you have several modems with varying speeds, you could define an alias for each one. This way you contact the remote side with a modem that has the same speed as the remote site. (We'll get to how this is done shortly.)

The next entry is the speed (or baud rate) to use when connecting to the remote site. Keep in mind that these are discrete values. That is, although we specified a range of 300-2400, there is no such thing as a baud rate of 734. When trying to connect, UUCP will try the fastest speed first and only try the slower speeds if it cannot connect. Also note that the speed does not have to be a range. For example, it makes sense to limit calls to a single speed if you don't want to make the call at all unless it can go through at the fastest available speed.
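Since the speeds are discrete and UUCP works from fastest to slowest, expanding a range like 300-2400 into a try order is easy to sketch (the list of standard rates below is an assumption for illustration):

```python
# UUCP tries the fastest matching speed first. A sketch of expanding a
# range like "300-2400" into the discrete rates to try, fastest first.
# STANDARD_RATES is an illustrative list, not taken from the system.

STANDARD_RATES = (300, 600, 1200, 2400, 4800, 9600, 19200, 38400)

def try_order(speed_field):
    if "-" in speed_field:
        lo, hi = (int(s) for s in speed_field.split("-"))
    else:
        lo = hi = int(speed_field)
    return [r for r in reversed(STANDARD_RATES) if lo <= r <= hi]

print(try_order("300-2400"))   # [2400, 1200, 600, 300]
print(try_order("9600"))       # [9600]
```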

Note that not all dialers or modems can handle every speed.

At first glance, it doesn't look like there is much to say about a phone number. Well, a phone number by itself is just a phone number. However, in this context you can use phone numbers in association with what are called "tokens". Just as a bus token represents the money you paid for your fare, phone number tokens represent phone numbers, or at least parts of them. In the spirit of self-documentation, I might want a way to indicate where I am calling. For example, say I have 100 sites throughout the country and don't want to have to remember every area code. A solution would be to use tokens indicating the area. I might create an entry in the phone number field that looks like this:

SantaCruz4253502
From this I immediately know I am calling to Santa Cruz. The translation is made from the file /usr/lib/uucp/Dialcodes, where each entry in the file has the format:

Dialarea Dialcode

So the entry for this example would look like:

SantaCruz 1408

Therefore, when the call is actually made, it is interpreted as 14084253502. I can also use the special character = to wait for a secondary dial tone, or - to pause for a second. For example, if I needed to dial nine to get an outside line, I could have made an entry like this:

SantaCruz 9=1408
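The expansion itself is just a prefix substitution: the leading letters of the phone number field are looked up in Dialcodes and replaced. A minimal sketch, with a dictionary standing in for /usr/lib/uucp/Dialcodes:

```python
# How a token in the phone number field is expanded from the Dialcodes
# file. The dictionary below stands in for /usr/lib/uucp/Dialcodes
# (illustrative only).

import re

dialcodes = {"SantaCruz": "9=1408"}   # Dialarea -> Dialcode

def expand(phone_field):
    m = re.match(r"([A-Za-z]+)(.*)", phone_field)
    if m and m.group(1) in dialcodes:
        return dialcodes[m.group(1)] + m.group(2)
    return phone_field

print(expand("SantaCruz4253502"))   # 9=14084253502
print(expand("5558672"))            # no token, passed through unchanged
```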

We now get to perhaps the most intimidating part of the Systems file: the chat script. As its name implies, this is a script to direct how the calling machine is supposed to "chat" with the remote machine. The chat script is composed of a sequence of characters that are expected from the remote machine and what should be sent in response. This is often talked about as Expect-Send pairs. If the expected characters do not arrive, there may be an Expect-Send sub-pair to fall back on. Each value is separated by a dash. To clear this up, let's look at the above example:

-\r\d-ogin:-\K\d-ogin:-\K\d-ogin: uusls

Here we have four Expect-Send pairs: expect nothing, send \r\d; expect ogin:, send \K\d; expect ogin:, send \K\d; expect ogin:, send uusls.

Here we don't expect anything before we send a carriage return (\r) followed by a delay of one second (\d). We then expect the characters ogin:. We ignore the first character for a couple of reasons. First, it may not have come through, and if we were expecting "login:" then the missing 'l' would cause it not to match. Second, some sites might have a login prompt that says "Login:" instead of "login:". That would fail to match as well. If it failed to match (for either of these reasons, or because it didn't arrive within the 10-second timeout), we go to the next sub-pair. This time we send a BREAK (\K) and the one-second delay. This has the effect of forcing the remote side to cycle through its speeds, if the port on the remote side is configured to do so. If there is still no match, we go to the third expect-send pair. If this doesn't match, the entire connection fails.

If we do match, we skip the remaining sub-pairs and send the last field, uusls. This is the login name of the account we want to use to log in. Here, there is no password. However, for accounts where passwords are expected, there would be another expect-send pair, perhaps like this:

ssword: RegnadKcin

Here we also ignore the first character. If we see "ssword", we send RegnadKcin. The reason there are no Expect-Send sub-pairs here is that once we get the ogin: part, we can be pretty sure that the "ssword" will come across correctly.
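The structure of a chat script is easy to see once you split it up: whitespace separates the alternating expect and send fields, and dashes inside an expect field separate the fallback sub-pairs. A sketch, using the login chat script above combined with the password pair (illustrative only, not how uucico is implemented):

```python
# Splitting a chat script into expect-send pairs. Within an expect
# field, dashes separate the fallback sub-pairs (send-expect, ...)
# tried after a timeout.

chat = r"-\r\d-ogin:-\K\d-ogin:-\K\d-ogin: uusls ssword: RegnadKcin"
fields = chat.split()              # alternating expect, send, expect, ...
expects, sends = fields[0::2], fields[1::2]

for exp, snd in zip(expects, sends):
    parts = exp.split("-")         # expect [-subsend-subexpect ...]
    print("expect", repr(parts[0]), "subpairs", parts[1:], "then send", snd)
```

Running this shows the first expect is empty (expect nothing), with ogin: retried twice after a \K\d, before uusls and then RegnadKcin are sent.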

One thing that always annoyed me is that chat scripts often require a lot of effort to get working correctly. You might have to try several different sequences before you get it right. If you are calling a site that others have successfully called, talking with the administrator of that site might give you some ideas on what to try.

You can also try the cu command to connect to the other site. With both cu and uucp, you can turn on debugging with -x9 and watch the connection progress. Even though you don't have a normal account, you can try to login using the UUCP account and password. If you get in that way, then you should be able to get in with UUCP, provided UUCP is configured correctly. Later on I will go through a UUCP session with -x9 debugging turned on.

Next comes the Devices file. Which entry in Devices is accessed is determined by the device-type entry in the Systems file. In the example from the Systems file, we referenced an ACU line. This entry might look like this:

ACU tty2A - 2400-9600 hayes2400

The general form of the line is:

device-type tty-device dialerline speed dialer <token dialer token...>

Here again we see the reference to the ACU device. It is this line that the device-type field in the Systems file points to. If you had multiple ACU lines here and one was busy, UUCP would try the next one. The tty-device entry is the serial port that you want to call out on. Note that we are referring to the modem control device (big A), since this is a modem connection. With a direct connection (more on that in a moment), we would use the non-modem control port (little a). For more on the difference, see the section on major and minor numbers and the serial(HW) man-page.

The dialer line should be a dash. Just leave it a dash. It's used to indicate the separate Devices line used by an 801-type calling device. What an "801-type calling device" is, is beyond the scope of this book. So, with that in mind, you can safely leave it as a dash. If you have an "801-type calling device", I apologize for not providing more information.

Next is the speed. In this case, we provided a range (2400-9600). If the modem could handle multiple speeds, but we only wanted to use one, we could enter a single speed. If you try to make a connection (with cu, for example) and specify a speed other than those specified, the system will reply that there are no devices available. This also applies to specifying speeds outside the range.

For example, say we want to connect to a machine at 2400, but the speed specified here was 9600. We issue a cu command specifying that speed, like this:

cu -s2400 5558672

We would end up with a message like this:

Connect failed: NO DEVICES AVAILABLE
This means what it says: there are no devices available. If they were all being used, we'd get this message as well. However, here it doesn't mean that someone is using the port, but rather that there are no devices at all that match that speed. Of course, the same thing would apply if you had ten modems, nine at 9600 and one at 2400, and the one at 2400 just happened to be busy at the time.

The last field might actually consist of multiple fields. Normally, there is only one, as in our case, and it represents the dialer (more in a moment). You might have a network where a modem is connected to a switch. You must first connect to the switch and then to the modem. The example Devices entry given in the SCO doc looks like this:

ACU tty14 - 1200 develcon vent ventel

Here we have the dialer-token pair "develcon vent" and a dialer ventel. The develcon entry points to a line in the /usr/lib/uucp/Dialers file which is used to access the switch. The vent field is a flag (token) which is passed to the develcon switch, telling it what device to connect to. Finally, we get to the ventel dialer, which may also be a line in Dialers.
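The shape of a Devices line, fixed fields followed by trailing dialer-token pairs, can be sketched like this (purely illustrative, not UUCP's own parser):

```python
# Splitting a Devices entry into its fixed fields and the trailing
# dialer-token pairs. Illustrative only.

line = "ACU tty14 - 1200 develcon vent ventel"
dev_type, tty, dialer_line, speed, *rest = line.split()

# Pair up dialer/token; a trailing dialer without a token gets None.
pairs = [(rest[i], rest[i + 1] if i + 1 < len(rest) else None)
         for i in range(0, len(rest), 2)]
print(pairs)   # [('develcon', 'vent'), ('ventel', None)]
```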

The Dialers file is perhaps the simplest way of connecting up a modem. It neither expects nor allows many of the options possible with the dialer binary programs. So, just what is a dialer? Well, it's just as its name implies: it dials the modem. There are several provided with your system, which also reside in /usr/lib/uucp. These are binary programs that access your modem, configure it according to what you have defined, and dial the number you want.

Instead of having a separate dialer, the entries in Dialers use a simple program internal to UUCP. Since UUCP is getting all of its configuration information from the Dialers file, the entries often come in pairs. The first one is used to prepare the modem for dialing out and the other is used to reinitialize the modem after the call completes. One of the more common entries is that for a Hayes Smartmodem 2400. By default, the two entries look like this:

hayes2400 =,-, "" ATQ0E0T&D2&C1S0=0X4S2=043\r\c OK\r ATDT\T\r\c Speed

&hayes2400 =,-, "" +++\dATQ0H OK\r ATE0&D2&C1X4S2=128 OK\r ATS0=1Q1\r

The general format is:

dialer-name translation-table expect-send-pairs

The dialer name is anything we want to give it, although both names must be the same, with the terminating (reinitialization) entry beginning with an ampersand (&). This is a one-to-one mapping with whatever dialer you use in the Devices file. That is, you can create a new dialer with a new initialization string as long as you use the same name in the Devices file.

The translation table is used to translate the dial tone/pulse codes used for pauses (= and -) into those appropriate for the particular device. In the above example, both types of pauses are translated to the comma, which is all the hayes2400 modem understands.

At the end of the line are sets of expects and sends, very similar to the chat script in the Systems file. Keep in mind that what is "sent" is not necessarily the exact characters the modem receives. It might be better to think of the send strings as information given to the dialer to tell it what to send. In both lines above, we have a set of double quotes ("") with nothing inside. This simply means that UUCP should expect nothing before it sends the first set of characters. In this case, what we are sending is a series of Hayes (AT) commands, followed by a carriage return (\r) without a new-line (\c). It then expects to see "OK" followed by a carriage return. This "OK" tells the dialer that everything went well and the modem was able to understand the command(s) just sent.

We next send the modem the AT command ATDT, which means to be ready to dial the number following. Instead of a telephone number, here we have a \T, which is used to represent the telephone number. What is actually sent to the modem as a phone number is what appears in the Systems file. If that number contains tokens, those are translated first, before the number is sent to the modem. Following the command to dial the phone number, we once again have the carriage return with no new-line. Table 0.1 contains a list of some of the escape characters used in the Dialers file.
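Putting the translation table and the \T escape together, what the dialer actually sends can be sketched as follows. The function name and the details here are illustrative assumptions, not the internal workings of uucico:

```python
# A sketch of what the dialer does with the Dialers translation table
# ("=,-,") and the \T escape: pause codes in the phone number are
# mapped through the table, then the result replaces \T in the send
# string. Illustrative only.

def dial_string(send, phone, table="=,-,"):
    # table is pairs of (code, replacement): here '=' -> ',' and '-' -> ','
    mapping = dict(zip(table[0::2], table[1::2]))
    translated = "".join(mapping.get(c, c) for c in phone)
    return send.replace(r"\T", translated)

print(dial_string(r"ATDT\T\r\c", "9=14084253502"))
# ATDT9,14084253502\r\c
```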

\p	pause (ca. 1/2 second delay)

\d	delay (2 seconds)

\D	phone number/token

\T	phone number with Dialcodes and character translation

\N	null byte

\E	turn on echo checking (for slow devices)

\e	turn off echo checking

\r	carriage return

\c	no new-line

\nnn	send octal number nnnn

Table 0.1 Dialers file escape sequences

If we decide we want some of the advanced features, then we are probably going to choose one of the existing binary dialers. These come in two flavors. The first has the format dialXYZ, where XYZ gives you some indication of what kind of modem it accesses. For example, dialHA24 is used to access a Hayes 2400 modem and the dialMUL dialer is used to access a Multitech modem. Each one of these is paired with a .c file, which, as you might have guessed, is the source code for that dialer. If you have access to a C compiler for your SCO system, you can modify the dialers.

The problem is that not everyone has access to a C compiler. Plus, just having a compiler does not mean that you have created a modem dialer before. Unless you are fairly familiar with how dialers interact with the modem, modifying one of the .c files is not easy.

Realizing the need to be able to configure modems, SCO introduced a new set of dialers. These have the format atdialXYZ, where again XYZ gives you some indication of what kind of dialer you are dealing with. The difference is that instead of being separate files, the atdialers, as they are called, are actually links to one another. Rather than having their configuration information hardcoded like the dialXYZ files, the atdialers get their configuration information from ASCII files that are easily edited.

In ODT, these files resided in /etc/default with the same name as the atdial file. For example, the atdialHAY dialer would read its configuration information from /etc/default/atdialHAY. In OpenServer, they have the same format and behavior, but now reside in /usr/lib/uucp/etc.

In essence, the information in the atdial configuration files is the same information that you provided when you modified the dialers yourself. Instead of having to recompile the binary, you only need to modify the configuration file. There is also a shell script that prompts you for the values you want to assign to each of the configuration strings: /usr/lib/uucp/make.dialer. This has the advantage that it not only writes the necessary information into the configuration file, it also links the base dialer (/usr/lib/uucp/atdial) to the name you specified. Personally, I find make.dialer more awkward; it's easier to make a copy of an existing file, make the changes, then do the link yourself. Unless you have one of the other modem brands listed, in most cases using the Hayes configuration file (atdialHAY) and changing it works well.

When configuring the strings, there is one thing I need to point out that is often a point of confusion: the AT commands themselves. The confusion lies in the name. Because they are referred to as AT commands, people often see each command as an AT followed by the code for the specific function. This is true; however, the AT is not required before every command. Actually, the AT is required only before the first command on each line. The function of the AT is to tell the modem that commands follow; it does not serve as a delimiter between commands. (Note that if you are communicating with the modem interactively, then more than likely there is one command per line and each must be preceded with AT.) Let's take as an example the setup string from the atdialUSR file:

AT&FX4Q0&D2&C1&B1S0=1S2=043&W
If the AT were required before every command, then this would be seen as a single, very long command. Instead, since the AT is only needed to get the modem's attention, we have nine separate commands: &F, X4, Q0, &D2, &C1, &B1, S0=1, S2=043, and &W.

Figuring that out isn't as straightforward as it first seems. There are several things that need to be kept in mind. First, the ampersand (&) indicates an "advanced" command, i.e., one that may not be on your modem. Knowing this, I can say that the previous command ends at the character just before the ampersand. Therefore, I can break this command into smaller tokens and have smaller chunks to deal with. The result could look like this:

&FX4Q0 &D2 &C1 &B1S0=1S2=043 &W

The next thing to keep in mind is that none of the AT commands consists solely of, or starts with, a number. Therefore &D2 and &C1 cannot be broken down any further. I also know that modems (normally) have memory locations (called registers) that can be read from or written to. These are the 'S' registers. Since they contain values and are not functions, I have to assign a value. Also, these are the only ones that can take on values, so I can work from the equal sign using the general format:

S#=<value>
where # is the S-register number and <value> is the value I am assigning. This gives me:

&FX4Q0 &D2 &C1 &B1 S0=1 S2=043 &W

So, we're almost there. All that's left is the very beginning of the command. One thing I can use is that the AT commands are almost exclusively single letters or single letters followed by a number. The exceptions are the S-registers, but those are easy to distinguish. I can then take a guess and break the strings down to:

&F X4 Q0 &D2 &C1 &B1 S0=1 S2=043 &W

Now, to make sure that I broke this down correctly, I should verify that there is a command matching each one of my tokens. Hopefully, your modem manual has a list of the AT commands that are applicable to your modem.
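The tokenizing rules just walked through (a new token starts at an ampersand or a letter; S-register assignments run through the equal sign and their value) can be captured in a rough sketch. Real AT parsing is more permissive than this regular expression, so treat it only as an illustration of the reasoning above:

```python
# A rough tokenizer for an AT setup string, following the rules above:
# strip the leading AT, start a new token at '&' or at a letter, and
# let S-register assignments (S#=value) run to the end of the value.
# Illustrative only; real modems accept more than this regex does.

import re

def at_tokens(setup):
    if setup.upper().startswith("AT"):
        setup = setup[2:]
    # either an optional '&' plus a non-S letter with optional digits,
    # or an S-register assignment
    return re.findall(r"&?[A-RT-Za-rt-z]\d*|S\d+=\d+", setup)

print(at_tokens("AT&FX4Q0&D2&C1&B1S0=1S2=043&W"))
# ['&F', 'X4', 'Q0', '&D2', '&C1', '&B1', 'S0=1', 'S2=043', '&W']
```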

To understand the behavior of the AT commands, there are some underlying concepts that we need to address first. Modems operate in four modes, and how they react to signals differs in each. When you first turn on a modem, it is in idle mode. It sits there waiting for you to tell it what to do and will remain in idle mode until it is told to change.

The first state it normally changes to is command mode. This is the state it moves into after you poke it in the ribs with "AT" to tell it to wake up and get ready for a command. While in command mode, the modem waits until it sees a carriage return before it will interpret the command. This is why a single line in the configuration file is considered one command, but in interactive mode you need the AT before each command.

&C1	Raise CD on carrier detect

&D2	Modem hangs up and returns to command mode when DTR drops

&F	Reset to factory defaults

&W	Write S-registers to memory

E0	Do not echo commands

E1	Echo commands

Q1	Do not display result messages

Q0	Display result messages

S0	Number of rings until autoanswer

S2	ASCII character used to exit data mode
Table 0.2 Common AT/Hayes Commands

When one modem has contacted another and a carrier signal has been established, the modem enters data mode. Once the modem is in data mode, it will remain there until something happens to pull it out. The most common is that the modem is simply told to leave data mode, which happens when it receives the exit sequence ("+++"). This also happens when one (or both) computers drop the DTR signal, saying that the computer is no longer attached to the modem. Another instance is when the carrier is lost.

If you connect directly to the modem (perhaps using cu), you can enter commands interactively. This is "interactive mode." This is often useful in testing the modem, as you can see immediately how it reacts to each command you give it. This is done by using the keyword dir in place of the phone number or system name. For example, to connect directly to a modem attached to tty1a, the command would look like this:

cu -l tty1a dir

There are two things to note here. First, we need to specify the line we want to go out on, in this case tty1a. Second, we need to specify the non-modem control port. This seems a little counterintuitive, since we are trying to connect to a modem. However, as I mentioned before, the significance of the modem control device is that it reacts to carrier detect, not necessarily that it is connected to a modem.

One file we haven't talked about yet isn't necessarily required to get a UUCP connection: the Permissions file, which, as we talked about earlier, controls which machines have access to which directories and what programs they can execute on our machine. Like the other UUCP files, you are provided with a Permissions file. This contains an entry that allows anyone with the correct password access. It looks like this:

MACHINE=samplesite LOGNAME=uucp \

COMMANDS=rmail:rnews:uucp \

READ=/usr/spool/uucppublic:/usr/tmp \

WRITE=/usr/spool/uucppublic:/usr/tmp \


Although this looks like several lines, it is actually one logical line. Like in many files, we can separate one logical line into several physical lines by placing a backslash at the end of each physical line. Therefore, we need to talk about fields rather than lines.
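The rule of joining backslash-continued physical lines into one logical line, and then reading off the NAME=value fields, can be sketched like this (illustrative only; this is not uucico's parser):

```python
# Rejoining the backslash-continued physical lines of a Permissions
# entry into one logical line, then splitting it into NAME=value
# options. Illustrative only.

raw = """MACHINE=samplesite LOGNAME=uucp \\
COMMANDS=rmail:rnews:uucp \\
READ=/usr/spool/uucppublic:/usr/tmp \\
WRITE=/usr/spool/uucppublic:/usr/tmp"""

logical = raw.replace("\\\n", " ")    # backslash-newline becomes a space
options = dict(field.split("=", 1) for field in logical.split())
print(options["COMMANDS"])   # rmail:rnews:uucp
```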

The first field, MACHINE=, is used to identify a machine that we are calling. This is matched against whatever entry we use in the Systems file. If there is no direct match, then the "samplesite" machine entry is used. Here comes the first big security question: since any site you call will be able to access your system through this entry, is this really what you want? Although the access they have is limited (we'll get to that in a moment), it is an issue to consider.

The LOGNAME field is used when a machine calls us. This is the name of the UUCP account used to log into our system, hence the name. The usual practice is to create a UUCP account for each machine, with a name similar to the machine name. For example, the machine siemau might get a LOGNAME of uusiemau. It is also often the case that there is a MACHINE-LOGNAME pair for each machine. In this way you can manage each machine individually, regardless of whether you call it or it calls you. However, you do not have to do it like this.

Note that there must be an account for UUCP users to log in under. It is advisable to create a separate account and not use the account uucp, which is the UUCP administrative user. Because you are creating a UUCP account, there is no "real" user logging in, and therefore you need a different login shell. For UUCP users this will be /usr/lib/uucp/uucico. For more details on creating accounts, see the section on users and security or the System Administrator's Guide.
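As a sketch, the /etc/passwd entry for such an account might look like this (the account name, UID and GID here are purely illustrative; the home directory follows the /usr/spool/uucplogins convention listed in the file table below):

```
uusiemau:x:105:14:UUCP account for siemau:/usr/spool/uucplogins/uusiemau:/usr/lib/uucp/uucico
```

The key point is the last field: instead of a normal shell, the account runs uucico directly when it logs in.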

Rather than having to list specific machine or account names, we can set either of these fields to OTHER. This will then match any instance that is not explicitly matched by something else.
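As a sketch, an entry using OTHER for both fields might look like this (the commands and directories are illustrative):

```
MACHINE=OTHER LOGNAME=OTHER \
COMMANDS=rmail \
READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic
```

Any machine or login not matched by a more specific entry would then get only these minimal permissions.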

For example, say you wish to limit access by machines that call you to a single account. The entry could look like this:

LOGNAME=uucp \

COMMANDS=rmail:rnews:uucp \

READ=/usr/spool/uucppublic:/usr/tmp \

WRITE=/usr/spool/uucppublic:/usr/tmp


This is identical to the previous example, except that there is no MACHINE field. It says that anyone logging into our system as the UUCP user uucp has the access privileges listed. Then, to limit access to machines we call, we create a separate MACHINE entry that looks like this (leaving the last three fields unchanged):

MACHINE=samplesite \

COMMANDS=rmail \

Here we made a slight change. Instead of allowing the machine we are calling access to the commands rmail, rnews and uucp, we simply allow it access to rmail. That means it can send us mail messages, but cannot use the rnews or uucp binaries on our machine (for example, via uux). On the other hand, since we are doing the calling, we know what each machine is, so we might want to open up the access a little more. We can increase the number of commands the machine we are calling can use. In fact, if we wanted, we could let the machines we call have access to anything on our system by having an entry like this:

MACHINE=samplesite \

COMMANDS=ALL

This means that any machine that we call will be able to run any command locally. Well, sort of. At first, this looks like a huge security hole: some malicious user could set up a uux job that uses fdisk to delete your partitions. Fortunately, uux jobs are executed as the user uucp (/usr/bin/uux is owned by uucp and is an SUID program). Therefore, even though we give every machine permission to run any command on our machine, it can only run those commands that the user uucp can.

Another way to use the MACHINE entry is to list multiple machines. For example:

MACHINE=siemau:scoburg:vesta LOGNAME=uucp \

In this example, the permissions that follow are valid for the machines siemau, scoburg and vesta, as well as when the user uucp logs in. Keep in mind that this does not mean that the three machines must use the name uucp to log in, or that only these three machines can use the LOGNAME uucp. It only means that the permissions for the three machines are the same as those for the user uucp. Potentially, we could have another entry for the user uusiemau that had more permissions than this one. If we called siemau, then the MACHINE=siemau entry would apply. However, if siemau called us with the LOGNAME uusiemau, then that entry would apply instead.

A lesser known, and therefore less used, field is MYNAME. This tells the remote site what your name is. When you call a remote site (or it calls you), it has to know what your machine name is. Otherwise, it cannot transfer files to you correctly. Keep in mind that transfer of files is done by site name and not by the LOGNAME or MACHINE name. Usually, the site name matches the entry in Systems, which matches the MACHINE entry in Permissions.

A problem arises when trying to contact a machine that has the same name as you, or when you have the same name as some other machine known to the one you are calling. Hopefully, this won't happen if you are calling a machine within your domain. However, if you are calling some machine outside of your domain, this is a possibility. UUCP was developed without an understanding of Internet domains; therefore, it sees only the machine name. To prevent problems, you can use the MYNAME field. If you do so, then when a connection is made, the remote side will think that your machine name is something different. For example, if I had it set like this:

MACHINE=vesta MYNAME=hilltop \

every time I would call vesta, it would think that my machine's name was hilltop.

Okay, we limit access for machines we call via the MACHINE field. We limit access for machines that call us via the LOGNAME field. Which programs each has access to on our machine is controlled by the COMMANDS field. Where each can read and write is controlled by the two fields that follow:

READ=/usr/spool/uucppublic:/usr/tmp \

WRITE=/usr/spool/uucppublic:/usr/tmp


The READ and WRITE fields are as one would expect: they limit which directories the remote machines can read from and write to. Actually, this is not completely true. It is more accurate to say that they set the root of the directory tree the remote machine can read or write. That means that in this example, if there were a subdirectory /usr/spool/uucppublic/data, the remote machine would have access to it. By using just the slash, as in:

READ=/ WRITE=/

you allow the remote system to read and write anywhere starting from the root directory. This looks like another security hole, but again, as with the COMMANDS field, access is limited by the permissions of the user uucp. Therefore, the above example does not give either the machine or the account permission to write into the root directory itself.

We can also limit access by specifying which directories cannot be read from or written to. For example, we might want to give a remote site READ access to /, but to keep it from getting the passwd or group files, we tell UUCP not to allow it to read /etc. This is done with the NOREAD variable. The counterpart is NOWRITE, which limits where a machine can write. Put together with READ and WRITE, it might look something like this:

READ=/ WRITE=/usr/spool/uucppublic \

NOREAD=/etc NOWRITE=/etc

We further restrict access to files by using the SENDFILES and REQUEST variables. These determine whether the remote machine can send us files or request files from us, respectively. Either can be set to yes, to allow that action, or no, to prohibit it. SENDFILES can also be set to 'call'. If so defined, the only time the local machine will send files is when it has called the other site.
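As a sketch, a fairly restrictive entry combining these might look like this (the machine and account names are illustrative):

```
MACHINE=siemau LOGNAME=uusiemau \
COMMANDS=rmail \
REQUEST=no SENDFILES=call
```

Here the remote side cannot request files from us at all, and we only send files on calls that we initiate.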

It is doubtful that we will ever call a machine actually named 'samplesite'; this name is in effect an alias meaning any machine. So, whenever we call a site that is not specifically listed in the Permissions file, the default 'samplesite' entry is used.

There are two other variables that can be used to increase security: CALLBACK and VALIDATE. The CALLBACK variable indicates that transactions cannot be initiated by the remote side calling you; you have to be the one to call the remote side. When CALLBACK=yes, requests are denied when the remote machine calls you, the call is stopped, and your side calls back. This adds more security, since you have a way of matching the LOGNAME to the MACHINE name: the LOGNAME calls you, and CALLBACK=yes; you call the other site, and now the MACHINE variable permissions take effect. Like both sides setting the time to call to Never, both sides setting CALLBACK=yes means that transactions can never take place. (Well, you can't get much more secure than that.)

The VALIDATE variable provides a very simplistic authentication mechanism in conjunction with the COMMANDS variable. It is set to a list of machines that are tied to a specific LOGNAME. You can then set up a UUCP account with a special password for your trusted machines. An entry might look like this:

LOGNAME=goodguys VALIDATE=siemau:scoburg:vesta

Keep in mind that anyone who gets the goodguys password can set MYNAME to one of these machines and easily masquerade as one of the good guys.

If you are going to be regularly transferring files, then you probably want to automate the process. You might have a site that is your mail server. However, they want you to pay for the call, so they have their time to call you set to Never in the Systems file. Therefore, you have to go get the files yourself. You could set up a crontab entry that ran uutry at regular intervals. However, there are mechanisms already built in.

As one might guess, the mechanism used to process UUCP requests at regular intervals is cron. By default, there is a cron file for the uucp user that already does most of what we would want. This is the file /usr/spool/cron/crontabs/uucp, which probably looks like this:

39,9 * * * * /usr/lib/uucp/uudemon.hour > /dev/null

10 * * * * /usr/lib/uucp/uudemon.poll > /dev/null

45 23 * * * ulimit 5000; /usr/lib/uucp/uudemon.clean > /dev/null

48 10,14 * * 1-5 /usr/lib/uucp/uudemon.admin > /dev/null

(For details of what each field is, see the crontab(C) man-page)

The first entry runs /usr/lib/uucp/uudemon.hour once every half hour. This has always bothered me, since the command is called uudemon.HOUR. In any case, each time it runs, the script starts two other programs, /usr/lib/uucp/uusched and /usr/lib/uucp/uuxqt, whose job it is to check the UUCP spool directories for work.

The second line runs the /usr/lib/uucp/uudemon.poll script once an hour. The first thing you should do is change it to uudemon.poll2. The reason is that uudemon.poll is much older and kludgier: it works by setting up a dummy control file (a C. file in /usr/spool/uucp/<sitename>). Instead, uudemon.poll2 calls uucico directly. Another difference is that uudemon.poll gets its polling information from /usr/lib/uucp/Poll, while uudemon.poll2 gets it from either Poll.day or Poll.hour, which you need to create yourself.

Which of the configuration files is used depends on whether you start uudemon.poll2 with the -d option or not. If you leave it off, Poll.hour will be read; otherwise, Poll.day. Both have the same basic format:

nodename <tab> hour1 hour2 hour3 ....

Entries can be followed by a 'w' to indicate that a call should be made only if there is work for the remote site. If the 'w' is not there, a call will be made regardless. This is useful if you want to regularly check your mail server. An example might look like this:

mailserver.siemau.com 8w 9 10 11 12 13 14 15 16 17 18 19w

Here we have it set to poll between 8 AM and 7 PM, but calls at the first and last hours are only made if there is work waiting for the remote site. One common question is: "Does this mean you can only poll once an hour?" No, it doesn't. Inside uudemon.poll2, a variable (NOW) is set to the current hour. If this matches one of the entries in Poll.hour, then uucico is started. Therefore, you could have the cron job run once every 20 minutes, and as long as uudemon.poll2 starts within the specified hour, the call will be made.
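The matching idea can be sketched in shell like this (this is an illustration of the logic, not the actual uudemon.poll2 code, and the variable names are only borrowed from the description above):

```shell
# A sketch of the hour-matching idea inside uudemon.poll2
NOW=10                       # the real script derives this from the current hour
set -- mailserver.siemau.com 8w 9 10 11 12
node=$1; shift               # first field: node name; the rest: polling hours
match=no
for h in "$@"; do
  # strip a trailing 'w' (the call-only-if-work marker) before comparing
  if [ "${h%w}" = "$NOW" ]; then match=yes; fi
done
echo "$node poll-now=$match"
```

With NOW=10 this prints poll-now=yes; started outside the listed hours, it would print no and uucico would not be run.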

The Poll.day file is similar; however, instead of hours, it lists days of the week, with 0 being Sunday and 6 being Saturday. The 'w' again means to call only if there is work for the remote side.
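For the same mail server, a Poll.day entry that calls on Monday, Wednesday and (only if work is waiting) Friday might look like this:

```
mailserver.siemau.com 1 3 5w
```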

New to OpenServer is the /usr/lib/uucp/Configuration file. With this file you can configure details of the connection that were previously fixed. For example, UUCP exchanges packets of a set size. Prior to OpenServer, there was no way of changing this. Now you can with the Configuration file.

Another aspect of the packet exchange is the number of packets sent before UUCP waits for an acknowledgment. This is called the window. In ODT, the window size was three packets. With the Configuration file you can adjust the window to suit your needs. If you have a reliable connection, there is nothing to prevent you from increasing this value, so that less time is spent waiting for acknowledgments and more time is spent sending packets. This increases your throughput.

The Configuration file allows you to make both site-specific and device-specific definitions. Both have the same format, with the first line defining whether this is a device or site definition. The general format is:

KEYWORD name

PROTOCOL STRINGS protocol_list

USE PROTOCOL protocol1

WITH parameter = value

WITH parameter = value

USE PROTOCOL protocol2

WITH parameter = value

WITH parameter = value

USE PROTOCOL protocoln

WITH parameter = value

WITH parameter = value

KEYWORD can be either SYSTEM for a system definition or DEVICE for a device definition. The name is the specific site name from Systems or device type from Devices. The protocol_list is a list of protocols that are supported by that site or device. If you wish to have different parameters defined for each of the specified protocols, there needs to be an entry for each parameter that is different from the default.

For example, to configure a site-specific definition, the entry might look like this:

SYSTEM siemau

PROTOCOL STRINGS Gg

USE PROTOCOL G

WITH window = 5

WITH packet_size = 1024

USE PROTOCOL g

WITH window = 7

Here we have configured UUCP so that it can communicate with siemau using either the G or g protocol. If we use the G protocol, the window is 5 packets and each packet is 1024 bytes. If we use the g protocol, the window is 7 packets, but each is the default size of 64 bytes.

Note that this does not force the use of a particular packet size or window; both sides must support the specified values for them to take effect. See the Configuration(F) man-page for more details.

Also new to OpenServer is the script /usr/lib/uucp/remote.unknown. This script is executed when a site whose name is not listed in the Systems file attempts to log in. It logs the conversation and prevents the remote side from logging in. If you want to allow unknown hosts to connect, simply change the permissions on /usr/lib/uucp/remote.unknown to 000.

/etc/default/dial* - Default files for /usr/lib/uucp/dial* programs.

/etc/uuinstall - Administer UUCP configuration files.

/usr/adm/ctlog - Log for /usr/bin/ct.

/usr/bin/ct - Spawns getty to a remote terminal.

/usr/bin/cu - Call UNIX (terminal program).

/usr/bin/uucp - UNIX-to-UNIX copy.

/usr/bin/uudecode - Decode a uuencoded binary file.

/usr/bin/uuencode - Encode a binary file.

/usr/bin/uulog - Query log for uucp or uuxqt transactions.

/usr/bin/uuname - List systems known through UUCP.

/usr/bin/uupick - Accept or reject transmitted files.

/usr/bin/uuto - Public UNIX-to-UNIX file copy.

/usr/bin/uustat - UUCP status and job control.

/usr/bin/uux - UUCP remote execution.

/usr/lib/uucp/Configuration - Protocol-oriented configuration file.

/usr/lib/uucp/Devices - Devices used for UUCP.

/usr/lib/uucp/Dialcodes - UUCP abbreviations (tokens) file.

/usr/lib/uucp/Dialers - UUCP Dialers file.

/usr/lib/uucp/Maxuuscheds - uusched limit file.

/usr/lib/uucp/Maxuuxqts - uuxqt limit file.

/usr/lib/uucp/Permissions - Permissions file for UUCP transfers.

/usr/lib/uucp/Poll - UUCP remote computer poll file.

/usr/lib/uucp/Sysfiles.eg - Example Sysfiles file.

/usr/lib/uucp/Systems - UUCP Systems known to you.

/usr/lib/uucp/dial* and /usr/lib/uucp/atdial* - UUCP dialers.

/usr/lib/uucp/dial*.c - Source code for dial* files.

/usr/lib/uucp/makefile - Makefile used to compile new dialers.

/usr/lib/uucp/remote.unknown - Script executed when an unknown host attempts to log in.

/usr/lib/uucp/uuchat - Dial a modem.

/usr/lib/uucp/uucheck - Check validity of UUCP configuration files.

/usr/lib/uucp/uucico - UUCP transport program.

/usr/lib/uucp/uuclean - Cleans UUCP spool directories.

/usr/lib/uucp/uucpd - UUCP server for network connections.

/usr/lib/uucp/uudemon.admin - Collects uustat data.

/usr/lib/uucp/uudemon.clean - Script to clean UUCP directory (calls uuclean).

/usr/lib/uucp/uudemon.hour - Checks UUCP spool directory.

/usr/lib/uucp/uudemon.poll - Controls polling of "passive" systems.

/usr/lib/uucp/uudemon.poll2 - Alternative (better) polling scheme.

/usr/lib/uucp/uugetty - UUCP getty program (just does an exec of /etc/getty and is there for backwards compatibility).

/usr/lib/uucp/uusched - UUCP scheduler program.

/usr/lib/uucp/uutry - Try to contact a remote system.

/usr/lib/uucp/uuxqt - Execute remote command requests.

/usr/spool/uucp - UUCP spool directory.

/usr/spool/uucp/.Admin - Administrative files (audit, error and transfer statistics).

/usr/spool/uucp/.Corrupt - Corrupt work and data files.

/usr/spool/uucp/.Log - Contains UUCP log directories.

/usr/spool/uucp/.Old - Old log files.

/usr/spool/uucp/.Sequence - Conversation sequence number files.

/usr/spool/uucp/.Status - Status of the last attempt to contact each remote system.

/usr/spool/uucp/.Workspace - Temporary workspace for transfers.

/usr/spool/uucp/.Xqtdir - Working directory for remote command execution (uuxqt).

/usr/spool/uucplogins - Default home directory for UUCP logins.

/usr/spool/uucppublic - Publicly accessible UUCP directory.

Table 0.3 Key UUCP Files


Configuring SLIP

As we talked about in the first part of the book, the basic concept behind SLIP is the idea of a link. The slattach command is used to create the SLIP link. Once the link is created, connections can be made to the remote side just like any other network connection, such as Ethernet or PPP. New to OpenServer is the ability to connect either via a tty device, as you did in ODT, or via a UUCP site name. This makes administration significantly easier. In ODT you needed to define the IP address for each side as you were creating the link. Also new in OpenServer is the ability to create dynamic incoming and outgoing links.

When the SLIP link is created, you have a new network interface that behaves the same as other types of connections. Although there is a one-to-one relationship between the two sides of the link (as these are passed to slattach as parameters), you still have "normal" network functionality. This includes multiple applications communicating across both ends of the connection.

Before SLIP will work, the SLIP STREAMS stack must have been installed and linked into the kernel. Note that this is not done by default when SLIP is installed.

Also new to OpenServer is the idea of packet filtering. As with PPP, packet filtering with SLIP allows you to let packets through the interface based on IP addresses, packet type and other criteria. The filter file is /etc/pppfilter, which, as one might guess, is the same file used by PPP. We'll get to packet filtering shortly.

With a dedicated link, the SLIP connection is the only thing that uses the line. Either side may initiate the link, but nothing else ever uses the line. With dynamic links, the line can be shared: either UUCP or PPP could use the line if it happened to be free at the moment. A dynamic incoming link is simply a shared line where the link was initiated by the remote host. A dynamic outgoing link is one where the local host initiates the call.

Although you could have a dedicated link for connections across modems, it is more appropriate for leased lines and other cases where you have a permanent connection. As with the other kinds of links, dedicated links are configured with the Network Configuration Manager. To add your first SLIP connection, select Add new WAN connection under the Hardware menu. Here you have the choice of adding either PPP or SLIP. If one is already installed, you are only given the choice of adding the other. To add new connections after the first one, you need to select Modify hardware configuration from the Hardware menu, then select Add.

You are then prompted to add a protocol to the SLIP connection. Here, your only choice is TCP/IP, so you create the chain TCP/IP->SLIP. When you get to the screen to select the type of connection you want, you may get confused. The SCO doc (and I) talk about dedicated, dynamic incoming and dynamic outgoing. However, when you get to the window where you add the protocol to the SLIP driver, you are given the choice of "Incoming link", "Outgoing link" and "Manual link". These are your dynamic incoming link, dynamic outgoing link and dedicated link, respectively.

With a dynamic incoming link, you configure the port to accept a login by enabling it, thereby getting getty to run. Because you actually go through a login similar to that for a UUCP connection, this type of link requires that you configure a login account on the local machine. On the remote machine, a system name must be configured in the UUCP Systems file, along with a chat script that logs you in. In addition, a dynamic outgoing link must be configured on the remote side. In essence, these two link types work together: you can't use one without configuring the other.

When you configure the incoming link, the Network Configuration Manager will create the SLIP account for you. You are prompted for the account name and the name of the script to use when the account logs in. As with UUCP accounts, the SLIP account gets a special login script instead of the standard login shell. It is this script that runs slattach to create the link. Keep in mind that the Network Configuration Manager does not create the script; you must do that by hand. The Manager only puts the correct entry in /etc/passwd.

As we mentioned, the reverse of the dynamic incoming link is the dynamic outgoing link. When you get through the menus to the point where you are inputting the IP addresses and names, there is something to watch out for. If you select a specific tty line, then the UUCP system name is grayed out. If you select "none" for the tty device, then you can enter a name in the UUCP system name field. This makes sense, because if you are specifying a line, then it is not dynamic. If you want to use the UUCP name and therefore have a dynamic connection, then you would not select a specific tty device. However, if you want to limit the SLIP system name to a particular device, you can do so in the Systems file: instead of using an ACU device, define a SLACU device that has an entry in Devices pointing to a single tty device.

Like incoming links, the Network Configuration Manager will only configure the connection so far. You will need to edit /etc/tcp by hand to add the slattach line:

slattach -d device_name uucp_name

where -d indicates that this is a dynamic line, device_name is the tty device the connection should be made on, and uucp_name is the name of the remote host as it appears in the UUCP Systems file. If you leave the -d off, the system will try to configure the line as dedicated. You can also add other options to the slattach line in /etc/tcp, such as turning on header compression, changing the maximum transmission unit, and so on. Although there is nothing requiring you to put the slattach line in /etc/tcp, it is a logical place, as that's where most of the TCP/IP related programs are started.
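Putting the pieces together, a sketch of such a line in /etc/tcp might look like this (the device, site name, and filter tag are illustrative; -p attaches a packet filter, as covered later in this chapter):

```
slattach -d -p slip1 /dev/tty1a vesta
```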

When I said that SLIP does packet filtering like PPP, I wasn't entirely accurate. PPP also allows you to filter the packets that are used to activate or keep up the connection; with SLIP, you can only filter "pass" packets. What this means is that you can use criteria such as packet type, destination and packet length as your filter criteria, but not for the packets used to bring up or keep up the link. Like PPP, the filter can be defined for each link separately. This can be useful if you wish to limit access on slower lines, while leaving faster ones more open.

Because dynamic links use the UUCP facilities, you can use UUCP to test the connection. For example, you might try to connect to the remote side using cu. If you can successfully connect using cu and the remote side accepts the login and password, the cause of a failed SLIP connection probably lies in an incorrect configuration of the SLIP half of the link. Note that if you log into a SLIP account, you are not given a normal login shell; instead, the script with the slattach is started. However, if you connect in this fashion, you have narrowed down the areas of potential problems. For more details on this, see the section on configuring UUCP.

If you have multiple network interfaces on your system, you may wish to configure it so that packets can be routed over the SLIP link. If so, you will need to establish this as a SLIP gateway. This is one of the options presented to you in the Network Configuration Manager.

If you configure a dedicated link, then a line will be created in /etc/tcp that establishes the link when the system boots. Remember that /etc/tcp is linked to /etc/rc2.d/S??tcp, so any changes to /etc/tcp affect /etc/rc2.d/S??tcp. If I wanted to make a SLIP connection between siemau and scoburg, the slattach line might look like this:

/etc/slattach /dev/tty1a \

siemau scoburg 255.255.255.0 38400

Like many configuration files, the logical line can be spread across two physical lines if you escape the end-of-line with a backslash. The syntax is:

slattach device source_ip destination_ip net_mask baud_rate

Note that we did not put the slattach in the background, as indicated in the SCO doc. Per SCO Support, this is a bug in the doc: you want SLIP to be running in the foreground on that port. Here, also, you see that we have hard-coded the line with the tty device and the baud rate.

When you configure a dynamic incoming link, the procedure is basically the same as for a dynamic outgoing link. Two differences are that you need to provide a logname for the SLIP account and then create the login script. The account name is governed by the same limitations as a "normal" account.

The login script is different from the line in /etc/tcp:

/etc/slattach /dev/tty1a siemau scoburg 38400

Here we see that the port is hard coded to /dev/tty1a, and the names of the source and destination machines are hard coded as well. This bothers me. If you have a dynamic incoming line, you may have several lines that people can dial into for a SLIP link. This mechanism implies one incoming line per telephone number. The number for this machine is specified in the Systems file and hooks up to a specific number, and therefore a specific modem, and therefore a specific tty line. However, some companies (including SCO) have a single number from the outside that is hooked to a set of other phone numbers. If one is busy, the system will search for a free line. When you dial one of these systems, there is no guarantee that you get the same tty device every time. However, hard coding it like this forces you to.

Packet Filtering

If you want packet filtering enabled, you have to do two things. First, create the /etc/pppfilter file. If you have already configured packet filtering for PPP, then this file should already exist (otherwise, packet filtering will not work for PPP, either). If you do not specifically use filtering, then all packets are passed. If the file does exist, then you can simply add entries to it. Or, if you want, you can use existing entries you've defined for PPP links. On the slattach line, you use the -p option to point to the line in /etc/pppfilter that you want to use. For example, if I did not want to allow connections like ntp, ftp or timed, I could block them like this:

slip1 pass !port ntp and !port ftp and !port timed

The general format is:

tag filter_type filter_criteria

This example has the tag slip1; therefore, I would reference it on the slattach command line as -p slip1. The filter_type is pass. Remember, with SLIP you can only filter the packets passing through, not the packets that bring up or keep up the link. Finally, we have the filter criteria. In this example, we are saying not to pass (because of the !) packets going to or coming from the ntp, ftp or timed ports.

At this point I am going to have to pass you on to the packetfilter(SFF) man-page. There are literally dozens of different filter criteria that you can use, including destination address and length of the packet, and you can even build specific combinations of these criteria, since the packetfilter syntax also supports Boolean operators.

You can also use the Network Configuration Manager to set the packet filter tag entry for outgoing SLIP links. Since a tag is just a pointer to a line in a file, there is nothing wrong with multiple SLIP links sharing the same tag line. For that matter, you could have 64 links, mixed SLIP and PPP, all using a single tag.

Any changes you make to the filter file will not go into effect until the link is reconnected. In fact, any changes you make to the link are not valid until you bring up the link again. Therefore, if you want the changes to take effect, you must bring the link down with ifconfig and then back up. Note that if you have a connection to the remote site, ifconfig does not stop it; it just brings the link down. Therefore, you might still be running up long distance charges.
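For example, if the SLIP link shows up as the network interface sl0 (the actual interface name on your system may differ), cycling it would look roughly like this:

```
ifconfig sl0 down
ifconfig sl0 up
```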

Changes can be made in two ways: by running the Network Configuration Manager (select the TCP/IP line under SLIP, then select Modify Protocol Configuration in the Protocol menu), or by running slconf directly. I would select the latter, since the Network Configuration Manager just runs slconf. If you change the name of the host to connect to for an outgoing link, make sure that you change the UUCP files if necessary.


Configuring PPP

Like the other network protocols, you must add the PPP protocol stack before configuring any PPP links. This is accomplished with the netconfig utility in ODT 3.0, or the Network Configuration Manager if you are running OpenServer. In ODT 3.0, this process is called adding a chain. In both instances, the PPP protocol stack is being added to the kernel, so you must relink and reboot before you can establish a link.

In ODT 3.0, PPP is configured using the netconfig utility; in OpenServer, you use the Network Configuration Manager. Because PPP in ODT 3.0 is limited in its configuration options, there isn't much to the netconfig configuration other than specifying the port name, the IP addresses and the PPP account name. However, the Network Configuration Manager allows you to configure packet filtering, authentication, timeout values, link and communications parameters, as well as what type of PPP connection this should be.

Once you have configured the PPP stack using the Network Configuration Manager, you can use the pppconf utility to add, modify, or remove the various PPP configuration options. Since pppconf is actually what the Network Configuration Manager calls, you save a little time by calling it yourself.

When you configure a link using the Network Configuration Manager, two changes are made. First, the necessary changes are made to /etc/ppphosts. Second, entries are added to /etc/hosts to reflect the new machines on the network.

PPP configuration is essentially the same as for SLIP. In both cases, netconfig is used in ODT and the Network Configuration Manager in OpenServer, and the concept is the same in both operating systems. Like SLIP, PPP is the network interface driver on which TCP/IP sits. A chain is necessary to create the network interface, and here you would be creating the chain sco_tcp->ppp. As with all the other types of network interfaces, once you use the appropriate configuration tool, the chain is written to /usr/lib/netconfig/chains.

A link configuration is necessary for every link, which is based on (at a minimum) an entry in the /etc/ppphosts file. This is one of the reasons why you can't simply add chains to /usr/lib/netconfig/chains and expect things to work. By running netconfig or the Network Configuration Manager (pppconf), you add the appropriate entries to /etc/ppphosts. Yes, potentially you could make the changes here by hand, but why bother? The tools work, and by editing the files yourself you run a greater risk of messing things up.

One thing I need to point out is that the ppphosts file is completely different in the two releases. You cannot copy the file from ODT to OpenServer and expect it to work. Let's take a quick look at each file to see what they look like. In ODT, ppphosts entries looked like this:

5 - siemau 10 3

5 - vesta 10 3

where the fields are

destination inactivity_timeout tty dest_uucp_name ACK_timeout max_retries

Although netconfig puts the destination IP address here, this can be the name of the system, provided there is a means to resolve the host name, such as /etc/hosts or DNS. The inactivity timeout is how long to wait with no activity on the line before the link is broken. If you are using a direct connection and not the UUCP facilities, then the tty field is the name of the device used for the direct connection. The ACK_timeout is the time to wait for a PPP acknowledgment. This is optional and defaults to 10 seconds. The max_retries is the number of times the system will retry each PPP protocol request. This, too, is optional and defaults to 3.
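To put those fields together, here is a hypothetical ODT-style entry for the vesta link. The destination address is made up for illustration, and the header line is annotation only, not literal file syntax:

```
destination    inactivity  tty  dest_uucp_name  ACK_timeout  max_retries
199.142.147.2  5           -    vesta           10           3
```

The "-" in the tty field means no direct line is used, so the UUCP facilities make the connection.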

So, what does the file look like in OpenServer? Let's take a look:

siemau:scoburg staticdev=/dev/tty2a speed=19200 mask= name=jmohr

vesta:ppp-scoburg uucp=vesta retry=5 mask=

*nppp local=ppvesta remote=scoburg

Immediately you can see that these are significantly different. In the OpenServer file, you have many different options, since there are several different kinds of links that you can establish. In every case, however, the definition of the machine names is the same. In general, the syntax is:


The first line is a dedicated line. It lists the tty device (staticdev), the connection speed (speed), the netmask (mask), and the name of the system used when authentication is enabled (name). In the second line we have a dynamic outgoing line. Since we are using a dynamic outgoing line, we are using the UUCP facilities; therefore, the name of the machine as it appears in the Systems file is defined here with uucp=. The retry= option is the number of times the system should try to call the remote host before it fails. Then, once again, we have the netmask.

If you have a dynamic incoming connection, then the entry is completely different, as you see in the third line. The first entry is the name of the account that the remote side will use to log in. Note that the name must be preceded with an asterisk (*) to indicate that this is an account name and not a machine name. The local machine name is defined by local= and the remote machine name is defined by remote=.

One very important thing to keep in mind is that the remote machine name must be resolvable by the local machine without using this link. That is, either you use /etc/hosts or the name server is on this side of the PPP link. Otherwise you have a Catch-22: in order to resolve the name, you have to bring up the link; however, to bring up the link, you have to resolve the name.

Note that all of these values were placed into /etc/ppphosts by the pppconf command (or through the Network Configuration Manager). If we make other configuration changes, such as enabling filtering or authentication, then the appropriate values are placed in here as well. For more information, see the ppphosts(SFF) man-page or the /etc/ppphosts file itself.

As I mentioned, before, in the SCO implementation in OpenServer, there are four different kinds of links that you can configure: dedicated, dynamic outgoing, dynamic incoming and manual outgoing.

If you have a dedicated serial line to the remote host, then a dedicated link may be advisable. Dedicated serial lines usually refer to such things as a serial cable connecting two machines directly or leased telephone lines. This does have some limitations in that dynamic allocation of IP addresses is not allowed and the serial line must be connected to a single remote host.

If the serial line is not static, then a dynamic incoming link configuration might be more applicable. This is what you would need if you wanted to share the same serial line with PPP and UUCP, or when the connection requires a "getty-type" login. The IP addresses can be specified by either the local or remote host.

If the local machine needs to initiate the connection and the local machine also assigns the IP address, you might want to use what is referred to as a "dynamic outgoing, transparent bringup" link. This type of link is used over dynamically acquired (possibly shared) serial lines.

If the remote host must be the one to assign the IP addresses, then a "dynamic outgoing, manual bringup" link is probably what you will configure. Here, too, the serial line is one that is dynamically acquired.

Because PPP operates over serial lines and therefore can be accessed via modems, it has a connection to the outside world that doesn't exist with other network media. As a result, an extra layer of security may be necessary. This is the concept of PPP authentication. Like UUCP authentication, PPP authentication is not a requirement; it is at the discretion of the administrator. If enabled, a host will require that the other host correctly identify itself. In essence, this is the same as the password and login of "real" users. The default is that no authentication is required.

As I mentioned earlier, SCO PPP provides two methods of authentication (aside from the authentication provided by the login): Password Authentication Protocol (PAP) and Challenge-Handshake Authentication Protocol (CHAP). If enabled, the authentication parameters must be included in the entry in /etc/ppphosts. The auth= parameter enables authentication and sets it to either pap or chap. Remember that the remote side must support this kind of authentication as well. By default, PPP waits one minute for the remote side to authenticate itself; however, this can be changed using the authtmout= parameter, which specifies the timeout in minutes.
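As a sketch of how these parameters fit together, the entries below show a link with PAP enabled and a two-minute authentication timeout. The host names, the password, and the exact layout of the pppauth line are assumptions for illustration, not taken from the SCO documentation:

```
# /etc/ppphosts -- a UUCP-based link with PAP authentication:
vesta:ppp-scoburg uucp=vesta auth=pap authtmout=2 name=scoburg

# /etc/pppauth -- the name/password pair checked during authentication:
name=scoburg password=Secr3t
```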

During authentication, the authenticating host must be able to identify the other host. "Generic" authentication is also possible if there is a default entry in /etc/pppauth. If not, the configuration entry in /etc/ppphosts must contain the host's ID. This is done with the name= variable, which is matched to an entry in /etc/pppauth containing a name= entry and a password= entry.

If PAP is used, pppd requests that the remote host send the corresponding name and password. These are checked against the entries in /etc/pppauth. If authentication is required by the remote host, then the local pppd sends the name and password. In both cases, if there is no match, the connection is refused. Keep in mind that the name and password are not encrypted in any way; therefore, PAP authentication provides very minimal security.

If you are using CHAP, pppd sends a CHAP request packet to the remote host, which contains the local host name and a random string generated by the local host. The remote host then computes a "result", using the request packet ID, the random string, and the password. The remote machine then sends the result back to the local machine, which generates a result based on the same values. If the two results match, the connection is allowed. If the remote side has CHAP authentication enabled, the process is the same, except that the remote side sends the first packet. Since the authentication is done with the calculated "result" and the password itself never crosses the line, CHAP is more secure than PAP.
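The arithmetic behind CHAP can be sketched in a few lines of shell. CHAP (RFC 1994) hashes the packet ID, the shared secret, and the challenge with a one-way function; both sides compute it independently and compare. The ID, secret, and challenge values here are made up, and md5sum stands in for the hash:

```shell
#!/bin/sh
# Sketch of the CHAP "result" computation: a one-way hash over the
# packet ID, the shared secret (password), and the random challenge.
chap_result() {
    id="$1" secret="$2" challenge="$3"
    printf '%s%s%s' "$id" "$secret" "$challenge" | md5sum | cut -d' ' -f1
}

# Both sides compute the result from the same values; only the hash
# travels over the line, never the password itself.
local_result=$(chap_result 1 Secr3t a1b2c3)
remote_result=$(chap_result 1 Secr3t a1b2c3)
[ "$local_result" = "$remote_result" ] && echo "link allowed"
```

A host that does not know the secret produces a different result, so the comparison fails and the connection is refused.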

SCO PPP also allows packet filtering. This controls the behavior of PPP depending upon what kind of packet is being sent. For example, PPP can be configured to behave one way if someone wants to make a telnet connection and some other way if they want to connect using ftp.

Figure 0481 Packet filtering

As I mentioned before, a PPP connection is not always up. Attempts to connect with the remote side prod pppd to go out and make the connection. PPP packet filtering is a way to say, for example, that if the outbound packet is ftp, pppd should not reinitiate the connection. Filtering can also be used to limit access to the link in general: even if the link is up, packet filtering can be used to eliminate all ftp packets. Since it is these packets that let PPP know the link is being used, packet filtering can also be used to limit what kinds of packets make the link think it is in use.

At first, this last bit of functionality might not seem too important. However, I have talked with administrators who have found that this is a major problem. Let's consider a dynamic link that is made through a modem. When the connection is initiated, pppd must first get the modem to call out, the remote host must answer, and the two sides must negotiate for a moment or two until the link is established. If the link is not used within the timeout period, the link is dropped.

Imagine you have an impatient employee who doesn't like to wait until the link is re-established. Since multiple connections are possible across a PPP link, he or she just might have a ping to the remote system running in the background. This keeps the link up, and there is no time lost re-establishing the connection, as the connection is always there.

Now imagine what happens if the remote site happens to be somewhere other than the local calling area. It doesn't have to be out of state, just somewhere where you are charged by the minute. What happens if this impatient employee goes on vacation for two weeks and forgets to turn off the ping?

Packets can be filtered at several different levels. The first level is the protocol type, such as tcp, udp, or icmp. You can also make the restriction more specific by preventing connections on specific ports. For example, we could configure it so that tcp packets were allowed in general, but deny access to connections on port 23, thus disabling telnet across that link. We could also restrict access to or from certain IP addresses. For example, if we didn't want our employees downloading megabytes' worth of games from an archive site, we could filter out all packets to or from the IP address of that archive.

As with other aspects of PPP, filtering is accomplished on a per-link basis. Therefore, I might allow telnet from one site but disallow it from another. Both SLIP and PPP use the same filter configuration file (/etc/pppfilter). For more details, see the section on packet filtering.

It is possible to let your existing UUCP configuration do most of the work in establishing a PPP link. This requires UUCP connectivity to be established first between the two sites, which may not always be possible. However, because UUCP connectivity is somewhat simpler than that for PPP, this is often a good test of the physical connection.

When you bring up a dynamic outgoing PPP link, pppd requests a UUCP link to the remote host. As with normal UUCP connections, communication characteristics such as baud rate, parity, and flow control are established. Keep in mind that the UUCP configuration is doing this, not PPP. Once the UUCP connection has been established, UUCP passes the tty device name to PPP, which then takes over the link.

In order to configure this feature, you need to include the uucp= variable in /etc/ppphosts. The value of the uucp= variable is the name of the host you are connecting to and must match the name of the host as it is defined in /usr/lib/uucp/Systems.
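To sketch the relationship between the two files, the fragments below show a uucp= value matched by a Systems entry. The phone number, chat script, and device details are placeholders, not working values:

```
# /etc/ppphosts -- uucp= names the UUCP site to dial:
vesta:ppp-scoburg uucp=vesta retry=5

# /usr/lib/uucp/Systems -- the site name in the first field must match:
vesta Any ACU 19200 5551234 "" \r ogin: nuucp
```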

If you are configuring a dynamic link, there are a couple of things to consider. First, note that you can configure a link on an IP-address-to-IP-address basis as well as hostname-to-hostname. If you specify both names and IP addresses, then the host name takes precedence and an entry is added to /etc/hosts that matches up the host name to the IP address. If you specify an IP address, then the remote system must agree to use that address. If this hasn't been agreed upon beforehand, the connection will be refused.

Although all these changes are carried out by the Network Configuration Manager, all that is really being done is making changes to /etc/ppphosts. Therefore, if you were so inclined, you could make the changes yourself. However, I would recommend that you use the tools provided as it's easier and you are less prone to mistakes. On the other hand, there is no problem with copying this file between machines.

On incoming calls, the first barrier the calling system has to get past is the login procedure. Like UUCP, a special account needs to be set up where the login shell is /usr/lib/ppp/ppp and not something like /bin/ksh. When you configure PPP with either ODT 3.0 or OpenServer, a PPP user account is set up. In OpenServer you are asked if you want to do this and are even given the choice of what user ID number you want. In ODT, however, the account is simply created for you.

When assigning names, you must be sure that the name doesn't match any other interface. For example, if you have an Ethernet card in your machine and it is associated with the name scoburg, then a good choice of name for the host on the PPP connection might be ppp-scoburg. It is close enough that users will (hopefully) know it is the same machine (plus the ppp in front makes it clear what interface it is going over). However, it is a different name, so there will be no conflicts. Again, if you enter both an IP address and a name, the name takes precedence and is written to /etc/ppphosts, and the name and IP address are written to /etc/hosts.

Leave the name field for the local machine blank if you want the remote system to assign the host name. You must also set the negotiation option to 'yes', which is done by clicking the advanced options button.

If you want the remote side to assign the address too, leave the address field blank as well. If you put an IP address here, the other host must accept this value. Otherwise the link won't be established.

If you (that is, the local machine) are to establish the remote name and IP address, this can be done through an IP address pool. An IP address pool is a set of IP addresses that are set aside for such dynamic connections. To enable this feature, enter the name of an IP address pool instead of a host name. Leave this field blank if you want the remote system to assign a host name. Also, set one of the advanced options for IP address negotiation to Yes.

An additional piece of information that you need to include when configuring a dedicated line is which serial port you are going to use. This shows up in ppphosts as the staticdev parameter. Although you are probably going to be using a modem, you should use a line that does not support modem control.

If you have multiple network interfaces in your system and want to be able to route packets through the PPP connection, you need to establish the PPP interface as a gateway. If not already done, this will turn on ipforwarding and ipsendredirects within the kernel. If you enable this, you need to relink the kernel and reboot. This does not have any effect on the ppphosts file.

Configuring the Domain Name System (DNS)

Earlier in the book, I discussed DNS as a means of centrally administering the files necessary for node-name-to-IP-address translation. Although the relationship of the files is pretty straightforward, they are rather intimidating to the uninitiated (myself included).

So, what do the DNS configuration files look like? Well, since the first file that named looks at is /etc/named.boot, that seems like a good place to start. Let's assume we wanted to set up a primary name server. We might have a file that looks like this:


; Boot file for Primary Master Name Server



; type domain source file or host


directory /etc/named.d

primary siemau.com siemau.forward

primary 147.142.199.in-addr.arpa siemau.rev

primary 0.0.127.in-addr.arpa named.local

cache . root.cache

Lines beginning with a semi-colon are considered comments and blank lines are ignored. The first line with configuration information is:

directory /etc/named.d

This tells named that if no path is specified, it should look for the other configuration files in the specified directory. In this case, /etc/named.d. Note that this is the default directory used in OpenServer. ODT used the default directory /usr/lib/named. However, since the named.boot file is read when named starts up, you could change it to anything you want.

The first primary line says that we are the primary name server for the domain siemau.com, and that the information to resolve forward requests is found in the file siemau.forward. Okay, so what are forward requests? Well, "forward requests" is my term. I use it for two reasons. First, the file containing the information often ends with .forward. Second, I think the primary function of a name server is to translate names to IP addresses; therefore, this is going forward. Translating IP addresses to names is going in reverse.

Note that you will often see the forward mapping file referred to as named.hosts or domain_name.host and the reverse mapping as named.rev or domain_name.rev. I like to call one .forward and one .rev so I know by looking at them what their function is. It doesn't matter what you call them, as long as they are pointed to in named.boot.

In order to be the primary server, we must say that we are. This is accomplished through the Start of Authority (SOA) record, which says we are the start of authority for the given domain. That is, when trying to find the answer to a query, the buck stops here; we have all the right answers when it comes to this domain. The SOA record is required and might look like this:

siemau.com. IN SOA siemau.siemau.com. jimmo.siemau.com. (

8675309 ; Serial

10800 ; Refresh

1800 ; Retry

3600000 ; Expire

259200 ) ; Minimum

The fields in the first line are: domain, data class, type of record, primary name server, responsible individual. The data class will always be IN for Internet. Often you will see root or postmaster as the person responsible for this domain. Here, I picked myself. Note that the format is jimmo.siemau.com and not jimmo@siemau.com as one might expect.

The Serial number is a way for secondary servers to keep track of the validity of their information. If the secondary has a serial number that is lower than that on the primary, it knows that its information is outdated. It will then pull over an updated copy.

The Refresh is how often (in seconds) the secondary servers should check the primary for updated information. In every implementation I have ever seen, this value is set to 10800 seconds, or three hours. You can change it if your site requires it.

The Retry is how often (in seconds) the secondary server should retry contacting the primary after a failed refresh. The value here of 1800 seconds (30 minutes) is also something I have seen in almost every case. Again, change it as you need.

The Expire time is how long the secondary will keep trying before it gives up and declares the data it has invalid. This is based on the attitude that no data is better than old data. Here we have 3600000 seconds: 1000 hours, or almost 42 days.

The Minimum is the value that other resource records should use as their time-to-live, if no other value is defined for them. The time-to-live is how long a given piece of information is considered valid.
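If you want to double-check what these values work out to, a couple of lines of shell arithmetic will do it. This is just a sanity check of the timers in the SOA record above; the variable names are only for readability:

```shell
#!/bin/sh
# Convert the SOA timer values from seconds into friendlier units.
refresh=10800 retry=1800 expire=3600000 minimum=259200
echo "refresh: $((refresh / 3600)) hours"    # 3 hours
echo "retry:   $((retry / 60)) minutes"      # 30 minutes
echo "expire:  $((expire / 3600)) hours"     # 1000 hours
echo "minimum: $((minimum / 86400)) days"    # 3 days
```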

At the end of each of these records you see a semi-colon. This is used in DNS database files as the start of a comment. Any text from the semi-colon to the end of the line is considered part of the comment. You will also see that many lines have semi-colons as their first character. In these cases, the whole line is a comment.

Note also that there is a dot after each .com entry. This indicates the end of the name. Remember I mentioned that the trailing dot indicates the root domain? In these cases, this dot is required. If you leave it off, the system will assume that it should tack the domain name onto the end; therefore, you might end up with the domain name twice. This behavior can actually come in handy, and we'll get to it shortly.

The SOA record is just one resource record that you find in DNS database files. There are several others that we will go through during the course of this discussion. Resource records have the general format:

name {ttl} data-class record-type record-specific-data

The name is simply the thing we are looking up. For example, we might have a machine name and be looking for the IP address: the machine name is our value, and on the far right is the record-specific data, the IP address. The ttl value is the time-to-live. This is an optional value, since we already defined a minimum in the SOA record. We could also have defined a ttl value for this SOA record, had we wanted. The data-class can be one of several values; however, only the IN (Internet) class is commonly used, therefore that is the only one we'll use here. The record-type tells us what kind of resource record we have. For example, SOA is one record type.

After the SOA record there is usually an entry saying which machines are name servers, such as:

siemau.com. IN NS siemau.siemau.com.

The value we have is siemau.com. For this record type, this value is the domain name. The domain is the same for the SOA record, as we are defining this machine to be the name server as well. The data-class, again, is IN for Internet. Since we are defining which machine is the name server, the record type is NS, for name server. Lastly, we get the FQDN of the machine (siemau.siemau.com.). Note that in both cases we had the dot at the end of each name.

One thing that I should point out here is that a good choice for the name server is a machine that is on multiple networks, that is, one that serves as a gateway. This is a good choice since it already has to know about multiple networks; it is said to be well connected. This also saves managing one machine as the gateway and another as the name server.

Next, we have the name to address mappings. Let's assume for simplicity's sake that I only have two other machines in my network. The entries for all my machines might look like this:

siemau.siemau.com. IN A 199.142.147.1

vesta.siemau.com. IN A

scoburg.siemau.com. IN A

The general format is:

machine-name data-class record-type IP-address

Note that despite the fact that siemau is our name server, we still need to include it here; otherwise there would be no way to translate its name to an address. The new piece of information here is the A record type. This simply says that this record is making the translation from machine name to IP address. Each entry is referred to as an address record, or address resource record. Note again the trailing dot (.).
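Because address records have this fixed column layout, it is easy to pull the name-to-address pairs out of a forward mapping file with awk. The zone contents below are a stand-in (vesta's address is made up for illustration); the real files live under /etc/named.d:

```shell
#!/bin/sh
# Build a small demonstration zone file, then list its A records.
cat > /tmp/demo.forward <<'EOF'
siemau.siemau.com.  IN A 199.142.147.1
vesta.siemau.com.   IN A 199.142.147.2
EOF

# Field 3 is the record type; print the name and address of each A record.
awk '$3 == "A" { print $1, "->", $4 }' /tmp/demo.forward
```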

We also need a mapping for the node "localhost". This is a special name for the local machine and is accessed using a special driver called the "loopback driver". Rather than accessing the network card, the loopback driver knows that this is the local machine and does not need to go out to the card. Certain functions on the system take advantage of the capabilities of this driver.

localhost IN A 127.0.0.1

One thing I need to point out is the dot (.) at the end of each FQDN. This says that the name stops here. Remember that the dot is used to indicate the root domain. By putting the dot here, we say that we have reached the root domain, so we won't go any further.

Leaving the dot off can be a mistake or intentional. In these examples it would be a mistake. In fact, in the time I was doing tech support at SCO, leaving off the dot was (perhaps) the most common mistake made when configuring the name server. However, we can leave it off intentionally in certain circumstances and have it be correct. We can use abbreviations (shortened forms) to indicate the machine name. For example, we could have written the first entry like this:

siemau IN A 199.142.147.1

Because we already defined the domain name in the named.boot file, the system knows what to append. Therefore, we can try to contact either siemau or siemau.siemau.com, and the name server will translate either one correctly.

We now need to make the translations from IP address to name. As I mentioned before, these are "reverse" translations. The data for these translations is in the file siemau.rev, as indicated by the line from named.boot:

primary 147.142.199.in-addr.arpa siemau.rev

In general, the format of the entry is similar to that of the forward entries. For our three examples they would look like this:

IN PTR siemau.siemau.com.
IN PTR vesta.siemau.com.
IN PTR scoburg.siemau.com.

There are a couple of new things here. First is the record type: here we have PTR, for pointer records. These point to the machine name. The next is the "in-addr.arpa" after the IP address. To understand this, we need to take a step back.

Assume we wanted to make the translation from machine name to IP address and we had no idea where that machine was. As I mentioned, there are name servers for all of the top-level domains that are aware of the name servers for all of the domains under them. For the .com domain, one such machine is kava.nisc.sri.com. So, if we wanted to find the machine vesta.siemau.com, we could ask the name server that is responsible for the .com domain (kava.nisc.sri.com). Since kava knows about the siemau domain, and knows that siemau.siemau.com is the name server for that domain, it tells you to go ask siemau yourself.

Now, let's go the other way. Question: is the domain whose IP address has a first octet of 199 a .com, .edu, or .gov domain? How can you tell? The answer is that there is no way to tell. IP addresses are not arranged by the type of organization. We can guess that network 199 is probably a class C network (since it is over 192), but it can just as easily be a .com, a .edu, or anything else. So, rather than trying to find the name server for every single domain and asking "Are you the right one?", a quicker way had to be developed.

The solution was to create a portion of the Internet name space that used the addresses as a name. This portion is considered a separate domain and is referred to as the in-addr.arpa domain. The names of both machines and sub-domains within the in-addr.arpa domain are simply the IP addresses. There are 256 sub-domains of the in-addr.arpa domain, 256 sub-domains of each of those domains and so on.

If you look, the "names" listed in the in-addr.arpa domain have the IP address reversed from the way we are accustomed to seeing it. This is in keeping with the idea that in names, the more specific parts are on the left and get more general as you move to the right. It also makes things easier to manage, since we can say that the 147.142.199.in-addr.arpa domain is administered by one organization. (This is because 199.142.147 is a separate class C network.)
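The reversal is mechanical enough that it can be sketched in one line of awk: swap the octets end for end and append the special domain (note the trailing dot for the root):

```shell
#!/bin/sh
# Build the in-addr.arpa "name" for an IP address.
reverse_name() {
    echo "$1" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa." }'
}

reverse_name 199.142.147.1    # 1.147.142.199.in-addr.arpa.
```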

Note also that there is a dot at the end of the reverse address. Here, too, this tells the name server where the end is. Since we already said what the in-addr.arpa domain was in the named.boot file, we can make a shortcut by listing only the host portion, just as we did with the FQDN. The entry for siemau would then look like this:

1 IN PTR siemau.siemau.com.

Note that here we have the dot at the end of the FQDN, but it wasn't at the end of the IP address in the address (A) record. This is because the dot comes at the end of a domain name. In in-addr.arpa notation, the IP address is part of a name; it just happens to look like an IP address, albeit a reversed one. Think of it this way: a period comes at the end of a sentence, which is a bunch of words. If you have a set of numbers, there is no period.

If we had a class B network, we could also make use of these abbreviations. For example, if siemau were on a class B network, its pointer (PTR) record could have been written like this:

1.147 IN PTR siemau.siemau.com.

This reminds me of the second most common error I saw in support, and that is using the abbreviations for the reverse address but not reversing them! That is, in the example above, writing it as:

147.1 IN PTR siemau.siemau.com.

Don't do that! A reverse domain has the IP address portion of the name reversed as well, no matter what part you include.

By writing the IP address reversed like this, we are essentially creating a new domain. The root domain is still dot (.); however, this time there is just the single top-level domain in-addr.arpa. This notation is often referred to as the reverse domain. Because we are defining a new domain in the siemau.rev file, we need a new Start of Authority record. We could copy the SOA record from the siemau.forward file, but the domain would be wrong. The domain is now 147.142.199.in-addr.arpa. So, all we need to do is replace the old domain name with the new one, and the entry would otherwise be identical. The first line would then look like this:

147.142.199.in-addr.arpa. IN SOA siemau.siemau.com. jimmo.siemau.com. (

We can now duplicate the remainder of the SOA record from the siemau.forward file. One thing I do to help keep things straight is to think of the NS record as part of the SOA record. In reality, they are separate records; however, if you think of them together, you won't forget and leave off the NS record. Since we are defining a new domain, we also need the NS record for this new domain. It would look like this:

147.142.199.in-addr.arpa. IN NS siemau.siemau.com.

However, I don't like the idea of two SOA records. There is the chance that I update the database files but forget to update one of the SOA records with the new serial number. To eliminate that problem, there is a directive you can give the name server to include another file while it's reading the information: the $INCLUDE directive. To include a single SOA record, we create a file, perhaps siemau.soa, and use the $INCLUDE directive in both the siemau.forward and siemau.rev files. The line would look like this:

$INCLUDE siemau.soa

Since we already defined the directory in the named.boot file, there is no need for a path here. However, we have a problem: the SOA record in siemau.forward is for a different domain (siemau.com) than the one in siemau.rev (147.142.199.in-addr.arpa). We can take advantage of a magic character: @. This is read as the current domain name, as defined by the origin. (The origin is the domain named in the named.boot line that points to this file.)

Let's create a single SOA file (i.e., siemau.soa) and make it identical to the others, with the exception of the domain name, which we replace with "@". Next, we remove the SOA records from the siemau.forward and siemau.rev files and replace them with the $INCLUDE directive above. When the name server reads the siemau.forward file, it gets to the $INCLUDE directive and sees that it needs to include the siemau.soa file. When it gets to the "@", the system translates it as siemau.com. Next, when the system reads the siemau.rev file, it sees the same directive and includes the same file; however, this time the "@" is interpreted as "147.142.199.in-addr.arpa".
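The shared siemau.soa file might then look like this, with the same values as the SOA record shown earlier and the domain name replaced by "@":

```
@ IN SOA siemau.siemau.com. jimmo.siemau.com. (
        8675309 ; Serial
        10800 ; Refresh
        1800 ; Retry
        3600000 ; Expire
        259200 ) ; Minimum
```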

There are still two lines in the named.boot file that we haven't covered. The first sets up this server as primary for the "local" domain. This is a special domain that refers to this host only. Remember from our discussion of IP addresses that the IP address of the local host is 127.0.0.1; the "network" that this host is on is 127.0.0. We always need to be the primary name server for this domain; therefore, we have this line in our named.boot:

primary 0.0.127.in-addr.arpa named.local

The named.local file could contain just two lines:

$INCLUDE named.soa
1 IN PTR localhost.

Note that here, too, we are including the named.soa file. When the system reads named.local, it includes named.soa and the "@" is translated to 0.0.127.in-addr.arpa as it should.

The last line tells us to read the cache file:

cache . root.cache

The root.cache file is the list of the root domain name servers. This file is provided by SCO and is as up to date as possible. Unless you are personally aware of any changes, I suggest leaving this file alone. You can obtain the most current list of root name servers using anonymous ftp from the machine ftp.rs.internic.net. The file is domain/named.root.

Let's assume we want vesta to be the secondary name server for this domain. We would then create a named.boot file on vesta that might look like this:

directory /etc/named.d

secondary siemau.com siemau.forward

secondary 147.144.199.in-addr.arpa siemau.rev

primary 0.0.127.in-addr.arpa named.local

cache . root.cache

If we look carefully, we see that the only difference is that for the forward and reverse files, we change "primary" to "secondary." Note that vesta is still the primary for the domain 0.0.127.in-addr.arpa (the local domain). The contents of the files are theoretically the same. This is where the concept of the serial number comes in. When the secondary loads its files, it compares the serial number to what it reads from the primary. Note also that the secondary lines contain the IP address of the primary server; in our case this is the machine siemau.

If we want a caching-only server, the named.boot file is a little simpler:

directory /etc/named.d

primary 0.0.127.in-addr.arpa named.local

cache . root.cache
We still specify the directory and the root.cache file. However, we are now the primary for just a single machine, ourselves.

In any of the example named.boot files, we could have included a line that simply said:

slave
That would be a name server, regardless of type, that forwards all requests it cannot satisfy to a list of predetermined forwarders. If this server does not have the answer, it will not interact with any other server, except for those listed as forwarders. Therefore, any time you have a slave server, you must also have a list of forwarders. The entry for the forwarders might look like this:
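The addresses themselves were not preserved here; with placeholder addresses on our 199.144.147 network (the addresses are hypothetical), a forwarders entry in named.boot might be:

```
forwarders 199.144.147.1 199.144.147.2
```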


The last kind of server is called a remote server in the SCO doc; I call it a client. Its configuration is the simplest. You need to create a file called /etc/resolv.conf and include a line defining the domain, followed by a list of the name servers (resolvers) that will resolve your queries. If we had siemau as the primary for this domain and vesta as the secondary, our file might look like this:

domain siemau.com
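The nameserver lines themselves did not survive here; with purely hypothetical addresses for siemau and vesta, the complete file might look like this:

```
domain siemau.com
; the two addresses below are hypothetical
nameserver 199.144.147.1
nameserver 199.144.147.2
```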



Note that if this file doesn't exist, your system will expect to get its information from the /etc/hosts file. Therefore, you can say that on the client side, if /etc/resolv.conf doesn't exist, you are not using DNS.

If you have a larger network with many different departments, you might have already decided to have multiple name servers. As I mentioned, it is a good idea to have your name servers also be the gateways, as they are "well connected." This also applies to the gateway you have to the Internet. To make life simpler, both for you trying to reach the outside world and for the people trying to get in, it is a good idea to have the Internet gateway also be the primary name server for your entire domain.

If your organization is large, then having the Internet gateway be a name server for your entire organization would be difficult to manage. Since you have already decided to break your network down by department, each department should have its own name server. One thing you could do is set up the domain name server as a secondary for the sub-domains. This is easy to set up (as we described above) and saves you from having to maintain a list of every machine within your organization.

What if you have a lot of traffic to and from the Internet? It's possible that your domain name server will get overburdened. Not only does it have to route packets and handle name service queries from both sides of the gateway, it also has to handle name queries from one sub-net looking for a machine in the other. For example, assume the machine cash.finance.siemau.com wants to contact buyer.sales.siemau.com (in two different sub-domains). The machine cash asks its name server how to get to buyer, which it doesn't know. However, that name server knows about the name server for siemau.com. So it goes and asks that server how to get to buyer. If the domain name server is secondary for all the sub-domains, it stores copies of the data for all those sub-domains. So the domain name server needs extra memory and spends time resolving the query. We can save both memory and time by making the domain name server a stub server for the sub-domains rather than a secondary. The stub entries in named.boot might look like this:

stub finance.siemau.com finance.stub

stub sales.siemau.com sales.stub

In order for us to know about the other name server, we have to have both NS and SOA records. They look the same as they would on the sub-domain name servers.
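Assuming, hypothetically, that cash and buyer act as the name servers for their respective sub-domains, the NS records might look like this:

```
finance.siemau.com.   IN  NS  cash.finance.siemau.com.
sales.siemau.com.     IN  NS  buyer.sales.siemau.com.
```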

When the finance name server fails to resolve a query for the sales domain, it asks the domain name server, which simply passes the buck to the sales name server. This saves time over either asking the sales name server itself or looking the answer up in its own database. (Well, it saves time from the perspective of the domain name server and not from the perspective of the client.)

There are still several record types that I haven't mentioned. One of them is used for machine aliasing. For example, you might have a machine that acts as your ftp server, your mail server and your World Wide Web server. Rather than requiring everyone accessing this machine to know that vesta.siemau.com is the server for all three of these functions, you can create aliases to make things easier. This is done with the CNAME (canonical name) record. Example entries would look like this:

ftp IN CNAME vesta

mailserv IN CNAME vesta

www IN CNAME vesta

Any reference to these three names is translated to mean the machine vesta. Keep in mind that if you use such an alias, this should be the only reference in your name server database. You should not have PTR records that point from an IP address to one of these aliases; instead, use the machine's canonical (real) name, vesta.

We can also use the name server database to store information about the machine itself. This is done through the HINFO (host information) resource record. We could have the entry for our machine, siemau, that looks like this:

siemau IN A

IN HINFO Pentium OpenServer

The record-specific data on the right is composed of two strings. The first is the hardware and the second is the operating system. The strings may contain spaces or tabs, but then you need to include them within quotes or the system will see them as separate strings. "Technically" these two strings should be "machine name" and "system name" and match one of the strings in RFC 1340, but this requirement is not enforced. There is also the problem that many newer machines won't be on the list.

One thing that seems to be missing is the machine name from the HINFO record. Well, this is another short cut. By leaving the name field out of any record, it defaults to the same value as the previous entry. Here, the previous entry is the A record for siemau. Therefore, the name field of the HINFO record is also siemau.

We can also use the name server to manage certain aspects of our users. For example, you can have mail systems (such as MMDF) read the name server information to determine what machine a particular user gets his or her mail on. This is done with the MB (mailbox) resource record. An example might look like this:

jimmo IN MB siemau.siemau.com.

In this domain, mail for the user jimmo should be sent to the machine siemau.siemau.com. Note that this only works if you have unique users within the domain. In addition, there must only be one MB record for each user.

You can make things easier by specifying a single machine as the mail server. This is done with an MX (mail exchanger) resource record. The MX record can also be expanded to include sub-domains. For example, the name server for the siemau.com domain has MX records for all the sub-domains under it. The record-specific information contains the preference, which is a numerical value used to determine the order in which mail is sent to different machines. The preference should be 0 unless you have multiple mail servers within the domain.

Let's assume that this is a large company and we have given each department its own domain (regardless of whether they have different IP sub-nets). We then decide that mail sent to anyone in a sub-domain goes to the mail server for that sub-domain, but any mail to the parent domain goes to a different server. Some entries might look like this:






























In this example, mail sent just to a user in the domain siemau.com will go to siemau.siemau.com. Mail sent to any of the other three domains (finance, sales, and market) will be sent to a machine in that respective domain. Note that there are two MX records listed for the sales.siemau.com domain. One has a preference of 0 and the other a preference of 1. Since the preference for buyer.sales.siemau.com (0) is lower than for cash.finance.siemau.com (1), the mail program will first try buyer. If buyer can't be reached, it will try cash. Keep in mind that the numbers only determine the order in which the machines are checked. We could have given one a preference of 45 and the other a preference of 66 and they would still have been checked in the same order.

Let's assume that we feel mail to the sales department is so important that we want it to try still another machine before it gives up. We could have a third MX record for sales.siemau.com that looks like this:

IN MX 2 siemau.siemau.com.

In this case, buyer is checked first and, if the mail message cannot be delivered, cash is checked. If cash cannot be reached, siemau is checked. If we changed the preference of siemau to 1, the same as the preference for cash, one of them would be chosen at random. This can be used if you want to spread the load across multiple machines.
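Written out as they might appear in the data file, the three MX records for the sales domain would be:

```
sales.siemau.com.   IN  MX  0  buyer.sales.siemau.com.
                    IN  MX  1  cash.finance.siemau.com.
                    IN  MX  2  siemau.siemau.com.
```

(Remember the shortcut mentioned above: the blank name field on the second and third lines defaults to the name in the previous entry.)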

There are a few other resource record types that we haven't discussed. They are not as commonly used as the others, so we will forgo talking about them. If you would like to learn more, check the SCO documentation or the book DNS and BIND by Paul Albitz and Cricket Liu from O'Reilly and Associates.

As I mentioned earlier, you can use the $INCLUDE directive to include a file containing the SOA record. However, you can use the $INCLUDE directive to include any file. This is very useful if your files have grown to unmanageable sizes and you need to break them apart. Assume your network contains 200 machines. There are A, PTR and possibly MX records for each machine. You could create a separate file for each of these record types. (Normally, A and PTR records are in separate files already.) You could then use the $INCLUDE directive to include the MX records in one of the other files.

Debugging The Name Server

Sorry, you're going to have to do it. Unless you are a flawless typist and have every step written down exactly, one day you are going to forget something. As a result, the name server won't function the way you expect. Hopefully, it's something simple (like forgetting a dot) and you can quickly make the change.

The problem is what to do after you've made the change. Remember, named reads the configuration information when it starts. To get named to re-read the configuration file, you could stop and restart TCP. However, this would not be taken too kindly by the users who have their connections suddenly drop. The solution is to poke named in the ribs and tell it to re-read the files. This is done by sending the named process a hang-up signal with "kill -1 <pid>", where <pid> is the PID of named. To find the PID, either grep through the output of ps -e or look in /etc/named.pid. This also has the side effect of making secondary name servers check the serial numbers, which can be used to force updates.
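To make the procedure concrete, here is a sketch you can try anywhere; a sleep process stands in for named so that nothing on the system is disturbed (with the real daemon, the PID would come from /etc/named.pid instead):

```shell
# Stand-in for named so these commands are safe to run anywhere;
# with the real daemon you would use: pid=`cat /etc/named.pid`
sleep 30 &
pid=$!

kill -1 "$pid"              # SIGHUP: named would re-read named.boot and its data files

kill -9 "$pid" 2>/dev/null  # clean up the stand-in process
wait 2>/dev/null
```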

If you want to have named dump its current database and cache, you can send named an interrupt signal (SIGINT, kill -2). This dumps the database into /usr/tmp/named_dump.db. By sending named SIGUSR1 (kill -16), you can turn on debugging, the output of which is sent to /usr/tmp/named.run. Subsequent SIGUSR1 signals sent to named will increase the debugging by one level. Sending it SIGUSR2 (kill -17) turns off debugging completely.

You can also get named to trace all incoming queries, which are logged to /usr/adm/syslog. This is done by sending SIGWINCH (kill -20). Be careful with this, however. Even on smaller networks, the amount of information logged in syslog can be fairly substantial. If you forget to turn it off, you can fill up your root file system. To turn off tracing, send SIGWINCH again. Note that all of these options can be enabled from the start-up script in /etc/rc2.d.

Perhaps the most useful debugging tool is nslookup (name server lookup). The nslookup command can be used either interactively or non-interactively to obtain information from different servers. Depending on how it's set, you can input an IP address and get a name back, or input the name and get the IP address back. If you are using the name server, either as a server or a client, nslookup can be used to gather information.

To start it interactively, simply type nslookup at the command line. You are then brought into the nslookup "shell," where you can issue commands and set the options needed to get the information you need. By using 'set all' you can view the current options. By default, nslookup will return the IP address of the input machine name (A forward query). For example, if we ran nslookup on vesta, nslookup would respond with something like this:

Default Server: siemau.siemau.com



This tells us what the default server is and shows that it is ready for the first command by displaying the > prompt.

Let's say we wanted to see what information the name server has when we run nslookup on siemau. We type in siemau and press return. This gives us:

> siemau

Server: localhost


Name: siemau.siemau.com


As we can see this is what we expect. Note that in the first case, the default server was siemau. However, when we run it on the name server itself, the server is "localhost."

One question that comes to my mind is whether it translates the IP address back to the host name correctly. Let's see. When we type in the IP address, we get:


Server: localhost


Name: vesta.siemau.com


We can list all the available information for a domain with the 'ls' command. The general syntax is:

ls [ option ] domain

where domain is the name of the domain we would like the information about. If we want, we can redirect the output to a file using either type of output redirection (> or >>). If we want to see it on the screen, we get:

>set all

Default Server: localhost


Set options:

nodebug defname search recurse

nod2 novc noignoretc port=53

querytype=A class=IN timeout=5 retry=4




If I want to see everything there is about a domain, I use the ls command. Keep in mind that a name typed by itself will be treated as a machine name, and the system will try to resolve it. Followed by the domain name, however, we get:

> ls siemau.com



server = siemau.siemau.com





However, this does not tell us anything about the mail exchanger or canonical names that we may have defined. To get everything, we use the -d option like this:

> ls -d siemau.com


siemau.com. SOA siemau.siemau.com jimmo.siemau.com. (60001 1080 0 1800 3600000 86400)

















Pentium OpenServer














As we can see, this gives us everything we could think of, including mailboxes, HINFO lines, canonical names, the SOA records and all of the address records. Note that there is only one MB record here. In reality, I probably would have had MB records for all the users on this system. If this network had been even a little larger, then this output would probably be too much to view. Therefore, you can use other options to limit what you see. For example, the -t option is used to specify a type of query. If we wanted to look for all the mail exchangers, we could use the command "ls -t MX siemau.com", which gives us:

siemau.com. 0 vesta.siemau.com

This gives us the domain, the preference of the mail exchanger and the name of the mail exchanger, which is all the information in the MX record.

We can also tell nslookup that we want to look for a particular kind of record. Say I want to look for the MX record for a particular machine. I could set the query type to MX and look for that machine, like this:

> set type=MX

> siemau.siemau.com

Server: localhost


siemau.siemau.com.siemau.com preference = 0, mail exchanger = vesta.siemau.com

siemau.siemau.com internet address =

Okay. This says that the mail exchanger for siemau is vesta. Are you sure? What nslookup is actually telling us is that vesta.siemau.com is the mail exchanger for siemau.siemau.com.siemau.com. Why? Because we didn't put the dot at the end of the domain name. Like other aspects of the name server, nslookup tacked the domain name onto the end of siemau.siemau.com to give us siemau.siemau.com.siemau.com. If I just use a machine name, the domain name is tacked on as well, but this time it comes out correctly:

> siemau

Server: localhost


siemau.siemau.com preference = 0, mail exchanger = siemau.siemau.com

siemau.siemau.com internet address =

The nslookup program also has a configuration file that can come in handy: the .nslookuprc file in your home directory. Like the .exrc file for vi, the .nslookuprc is read every time you start nslookup. The format is also like that of .exrc, with one entry per line. Assuming I wanted to set the query type to PTR records and set the timeout to 5 seconds, I could have these two lines in my .nslookuprc file, like this:

set querytype=ptr

set timeout=5

This would be the same as starting nslookup from the command line like this:

nslookup -query=ptr -timeout=5

Setting parameters is not the only thing that you can do from the command line. In fact, almost anything you can do from inside of nslookup, you can do from the command line. I could expand the above command to give me:

nslookup -query=ptr -timeout=5

Server: localhost


Name: vesta.siemau.com


So what is this all good for? The most important thing is tracking down problems you might be experiencing. For example, if a particular machine cannot be reached, nslookup might show you that there is no A record for that machine. Perhaps mail to a particular machine never ends up where it should. Checking the MX record for that machine might indicate that it ends up going to a completely different machine than you thought.

Unfortunately, I cannot list every problem that could arise and what nslookup would show. However, with the understanding of how DNS and nslookup work that you've gained in the last couple of sections, the best way to proceed is to look at what nslookup is telling you. Based on the way you think DNS is configured, is what nslookup reports correct? This may sound like an oversimplification. However, isn't that what problem solving really is? Knowing what should happen, and what would cause it to happen differently?


Routing

Just because you know the IP address of a particular machine does not mean you know how to get there. If the machine you are trying to reach is on the same network as yours, then there is normally no problem. If not, then you need to know what path, or route, to take to get to that machine.

Routes are added to and removed from the system using the route command. The general syntax is:

route <option> command destination gateway metric

The options include -f to flush all entries in the routing tables. The two commands used are add and delete. The destination is the IP address of the machine or network you want to reach. You can also use tokens for the network name by including entries in the /etc/networks file. This is an ASCII file containing two columns: the first is the name of the network and the second is the network address. You can then use that name in the route command.

The gateway is the IP address of the interface to which the packets need to be addressed. Keep in mind that the system must already know how to get to the gateway for this to work.

The metric is a value that normally indicates the number of intermediate machines (hops). The system uses this value in determining the shortest route to a particular machine.

For example, let's assume we have an entry in /etc/networks like this:

sco 132.147

Let's also assume that we know the IP address of the machine I need to use to access this route. I could then run the route command like this:

route add sco 0

This says that any packet destined for the sco network (as defined in /etc/networks) should go to that gateway's IP address, with a metric of 0. Normally, 0 is used when the IP address you specify is directly connected to your machine.

If you have a single machine that serves as your gateway to the rest of the world, you can specify default instead of a specific address or network as your destination. In the example above, if we wanted to use the same machine for all networks instead of just sco, the command would look like this:

route add default 0
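After adding a route, you can confirm it took effect with netstat, which displays the kernel routing tables. The gateway address below is a placeholder, since the real addresses were not given here:

```
route add default 199.144.147.254 0   # placeholder gateway address
netstat -rn                           # -r: routing tables, -n: numeric output
```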

In OpenServer, routes are created automatically when you use the Network Configuration Manager. When you add or remove a network interface, the system makes a note of what networks you are directly connected to. This information is used to create an entry in the routing tables. This way, your machine always knows what networks it is directly connected to.

As you move about the network, dynamic entries are created by the routing protocol that you use (most commonly routed). The routing protocol communicates with its counterpart on other machines and adds entries to the routing tables automatically.

When it starts, routed looks for the file /etc/gateways, which contains a list of gateways. (What else?) The general format for this file is:

<net|host> name gateway metric type

The first field specifies whether the gateway is to a specific machine or a network. The name field is the name of the destination host or network. This can be either an IP address or a token. If using a token, then the host name must either be located in /etc/hosts or be determinable through DNS. If through DNS, routed must be started after named. If a network, the name must be in /etc/networks.

The gateway field is the name or address of the gateway that is to be used. The metric is the same as for routes and indicates the number of hops. The type can be either passive, active or external. A passive gateway is one that is not expected to exchange routing information. Active gateways will exchange information and usually have routed running on them. External gateways are ones that are managed by another system, but for which alternate routes should not be installed.
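Following the format above, a sketch of /etc/gateways entries might look like this; the names and the address are placeholders, and you should verify the exact field layout against the routed documentation on your system:

```
net sco 199.144.147.254 1 passive
host vesta 199.144.147.254 1 active
```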

/etc/bootpd - Internet Boot Protocol server.

/etc/bootptab - Bootp configuration information.

/etc/cpd - Copy protection daemon.

/etc/default/tcp - TCP/IP default configuration file.

/etc/dig - send domain name query packets to name servers.

/etc/ethers - Ethernet address to hostname database.

/etc/fingerd - remote user information server.

/etc/ftpd - Internet File Transfer Protocol server.

/etc/gated - gateway routing daemon.

/etc/hosts - list of hosts on network.

/etc/hosts.equiv - list of trusted hosts.

/etc/if.ignore - data base of ignored network interfaces.

/etc/ifconfig - configure network interface parameters.

/etc/inetd - Internet 'super-server'.

/etc/inetd.conf - Inetd configuration file.

/etc/ldsocket - load socket configuration.

/etc/mkhosts - make node name commands.

/etc/named - Internet domain name server daemon.

/etc/named.b.* - Model DNS configuration files.

/etc/netconfig - Configure networking products.

/etc/networks - List of known networks.

/etc/ntpdate - Set date and time via NTP.

/etc/ntpq - Network Time Protocol query program.

/etc/pppd - PPP daemon.

/etc/pppauth - PPP authentication database.

/etc/pppfilter - PPP packet filtering configuration file.

/etc/ppphosts - PPP link configuration file.

/etc/ppppool - IP address pool for PPP links.

/etc/protocols - List of Internet protocols.

/etc/rarpd - Rarp daemon.

/etc/rexecd - Remote execution server.

/etc/ripquery - Query RIP gateways.

/etc/rlogind - Access rlogin requests.

/etc/route - Manually manipulate routing tables.

/etc/routed - Network routing daemon.

/etc/rshd - Remote shell server daemon.

/etc/rwhod - System status server daemon.

/etc/services - List of network services provided.

/etc/slattach - Create a SLIP link.

/etc/slink - Streams linker.

/etc/snmpd - SNMP daemon.

/etc/snmpd.conf - SNMP daemon configuration file.

/etc/sockcf - Socket configuration file.

/etc/strcf - STREAMS configuration file.

/etc/syslog.conf - Configuration file for syslogd.

/etc/syslogd - System log daemon.

/etc/talkd - Daemon accepting talk requests.

/etc/tcp - TCP start/stop script.

/etc/telnetd - Daemon accepting telnet requests.

/etc/tftpd - Daemon accepting tftp requests.

/etc/timed - Time server daemon.

/etc/timedc - timed control program.

/etc/traceroute - Trace packet routes to remote machines.

/etc/uucpd - UUCP server.

/usr/bin/arp - Delivers ARP information.

/usr/bin/biff - Notify users when mail arrives.

/usr/bin/finger - Find information about users.

/usr/bin/ftp - Network file transfer program.

/usr/bin/hostname - Delivers name of current host.

/usr/bin/logger - Make entries in the system log.

/usr/bin/netstat - Show network status.

/usr/bin/nslookup - Query name servers interactively.

/usr/bin/ping - Sends ICMP ECHO_REQUEST packets to network hosts.

/usr/bin/rcmd - Remote shell command execution.

/usr/bin/rdate - Notify time server that date has changed.

/usr/bin/rdist - Remote file distribution program.

/usr/bin/rlogin - Remote login program.

/usr/bin/ruptime - Show host status of local machines.

/usr/bin/rwho - Who is logged in on local network.

/usr/bin/talk - Talk to another user.

/usr/bin/telnet - Telnet remote login program

/usr/bin/tftp - User interface to TFTP protocol.

/usr/lib/named - Default directory for DNS information.

Table 0.4 Key TCP/IP Files


Note that in the discussion that follows, I assume that every software package we talk about has already been installed.

As I have mentioned on numerous occasions, many functions in ODT that were only possible from the command line or by editing files yourself are possible through the "managers" of SCO Admin. One of them is the Filesystem Manager, which allows you to add local filesystems (as you could in ODT with mkdev fs), but now you can add remote filesystems as well.

As I mentioned in the section on NFS in the first half of the book, there are two basic terms when talking about NFS filesystems: exporting and importing. The machine with the filesystem that is to be made available (the server) will "export" these filesystem to others. The machines that mount these remote filesystems (the clients) "import" the filesystems.

There are four ways you can mount a remote filesystem. The first is automatically mounting it when the system boots up. This usually requires running the Filesystem Manager in OpenServer or editing /etc/default/filesys. The /etc/default/filesys file contains not only the mount information for local filesystems, but for remote filesystems as well. For more details on this, see the chapter on filesystems or check out the filesys(F) man-page. (NOTE: I don't consider adding a line in some rc script that does a mount command to be automatically mounting the filesystem.)

If the remote mount is a one-time deal, the system administrator can also mount it by hand. Potentially, the administrator could create an entry in /etc/default/filesys that does not mount the filesystem at boot time, but rather allows it to be mounted later on. In either event, the system administrator would use the mount command. If necessary, the system administrator can also allow users to mount remote filesystems.

A client machine can also be configured to mount remote filesystems on an "as-needed" basis, rather than whenever the system boots up. This is done through the mechanism of the automount program. We'll get into a lot of the details about how automount works later on.

The syntax for using the mount command to mount a remote filesystem is basically the same as for local filesystems. The difference is that you specify the remote host along with the exported path. For example, if I want to mount the man-pages from scoburg, I could do it like this:

mount -f NFS [-o options] scoburg:/usr/man /usr/man

Here I told the mount command that I was mounting a filesystem of type NFS and that the filesystem was on the machine scoburg under the name /usr/man. I then told it to mount it onto the local /usr/man directory. There are a couple of things to note here. First, I don't have to mount the filesystem on the same place as it is exported from. I could have just as easily mounted it on /usr/doc or /usr/local/man. If I want, I can include other options, as with "normal" filesystems, such as read-only.

There are a couple of options to mount that are specific to NFS filesystems, so we should take a look at them. The first is the soft option, which tells the mount command to return an error if the server does not respond within a specified timeout. If we want the system to go on to other things rather than wait for the mount to complete, we can use the bg (for background) option. If we don't specify the soft option, then the mount attempt is considered hard and the system will retry indefinitely. By setting the intr option, you can kill the attempt to mount the filesystem. Also new to OpenServer is the tcp option. This allows you to use TCP as the protocol instead of UDP. Note that both sides have to be configured to use TCP.
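Putting those options together with the mount syntax shown above, a cautious version of the man-page mount might look like this (soft, bg and intr are the options just discussed; scoburg comes from the earlier example):

```
mount -f NFS -o soft,bg,intr scoburg:/usr/man /usr/man
```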

If you are a server, the primary configuration file is /etc/exports. This is a list of the directories that the server is making available for mounting, along with who can mount them and what permissions they have. In addition, the server needs a way to find the client's address; therefore mounting will fail if the name cannot be resolved, either by DNS or /etc/hosts. Likewise, the client depends on name resolution to access the server.
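As a sketch of what /etc/exports entries generally look like (the host names are hypothetical, and you should verify the exact option names against the SCO documentation):

```
# /etc/exports -- sketch only
# export the man pages read-only, to any host
/usr/man    -ro
# export a project directory only to the named hosts
/usr/data   -access=vesta:scoburg
```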

As we discussed briefly in the first part of the book, remotely mounted filesystems present a unique set of problems when dealing with user access rights. Because mismatches can have adverse effects on your system, it is necessary to have both user and group IDs unique across the entire network. If you don't, access to files and directories can be limited, or you may end up giving someone access to a file that they shouldn't have. Although you could create each user on every system, or copy the passwd files, the most effective method is using NIS, which we will get into shortly.

Normally, NFS is started automatically when the system goes into run-level 2 (multi-user mode), when init executes the /etc/rc2.d/S89nfs script. This should be linked to /etc/nfs, which allows the root user to stop or start NFS as needed. This is done simply as: nfs stop or nfs start. Note that since NFS is started and stopped through an rc script, moving to run-levels 0, 1 or 6 will stop NFS.

Configuring lockd

From our discussion of NFS at the beginning of the book, we know that the lockd daemon is responsible for maintaining file locks on NFS filesystems. When a process accesses a file that has a lock on it, the request is refused (blocked). Generally, such locks exist for a relatively short time before they are removed. By default, lockd will wait 5 seconds before attempting a retry. You can change the length of time lockd waits by using the -b flag and specifying the wait time in seconds. For example, to specify a wait time of 10 seconds, it would look like this:

/etc/lockd -b 10

Note that lockd should be started by the NFS startup script. Therefore, I would recommend that if you want to change the time-out, you change the lockd entry in /etc/rc2.d/S89nfs.

When a system has crashed and comes back up, there is a short period in which no lock requests are processed. This is to ensure the integrity of the locks and to ensure that the correct processes can reclaim their lost locks. By default, the system waits 40 seconds before any new lock requests are processed. This too can be altered; in this case it is done with the -g flag. Editing the entry in /etc/rc2.d/S89nfs, I could change the grace period to 60 seconds like this:

/etc/lockd -g 60

By default, there are two lockd daemons running, one for TCP and the other for UDP. If you have a busy system, you can increase this number by using the -t option to increase the number of TCP lockds or -u to increase the number of UDP lockds. For example:

/etc/lockd -t 2 -u 3

This allocates two lockd daemons for TCP and three for UDP. With no flag, the specified number of UDP lock daemons will be allocated. For example:

/etc/lockd 3

will allocate three lockd daemons for UDP.

More NFS Mount Options

If you decide to run the Filesystem Manager, the menu options are not always clear as to which options in /etc/default/filesys they refer to. Therefore, let's briefly go through the options so we know what the Filesystem Manager is doing, should we need to edit the filesys file.

After filling in the blanks on the first screen, you reach the advanced mount options by clicking on the button "Advanced Mount Options." (Do I hear an echo in here?) The first item is "Mount in background." If you select "Yes," the system will start the mount and go on to other business, retrying the mount in the background if necessary. A "No" means that the mount command will wait until the mount returns before continuing. This is the bg option in /etc/default/filesys.

The NFS mount type determines whether the mount will be a hard mount, a soft mount or a spongy mount. A spongy mount is one that is hard sometimes and soft at others. If set to hard mount, the system will keep trying indefinitely to complete an operation and never give up. This operation can be anything, such as reading from or writing to the filesystem; it does not have to be just the mount operation. If you specify a hard mount, I suggest that you also enable the interrupt option. That way, if you try an operation and it gets stuck, you can break out of it with the interrupt key.

With a soft mount, the system will try 'n' times before it reports an error, where 'n' is the number of times to retry as specified in the Filesystem Manager. This can also be specified in the filesys file using the retrans= option. The default is five times, which equates to retrans=5. If you consider a server "unreliable" or the connection is slow, soft mounts are a good idea. If you set the mount type to spongy, then the connection behaves like a soft mount on operations like stat(), fsstat(), readlink() or readdir(). In all other cases it behaves like a hard mount.

As we talked about a moment ago, by enabling the keyboard interrupt you can kill NFS operations that have hung while waiting for a response from a hard-mounted filesystem. I have never heard of a logical reason for turning it off, so I would recommend that you never do.

As I mentioned before, permissions are a problem when dealing with NFS mounted filesystems. This is compounded when dealing with SUID and SGID files. The options here are ignore and honor. If you ignore them, all programs executed on NFS mounted filesystems will be executed as if they have no SUID or SGID bits set. If you set it to honor, the programs will behave like they do on local filesystems. This puts the suid option in the mntopts of /etc/default/filesys.

The Cache attributes option is a Boolean variable that determines if file attributes will be cached. By default they are and I recommend you leave it so unless your application requires close synchronization with the server. If you change it to no, performance when accessing the mounted filesystem will be impaired. This is the noac option.

The read/write buffer size defines the size of the (what else?) read-write buffers in bytes. Through the Filesystem Manager you specify a single size, which is valid for both the read and write buffers. However, you can change this by editing /etc/default/filesys and setting the rsize value for the read buffer and the wsize value for the write buffer. The only time I know of to change this from the default of 8K is when you have a slow connection.

The timeout period defines (in seconds) how long each operation should wait before it is considered to have failed. This is the timeo option and is set to 300 seconds (5 minutes!) by default. The number of times to retry before reporting an error is valid only for soft mounts. This is because a hard mount will never give up. The default is five and is defined by the retrans= option.
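Pulling these options together, a hand-edited NFS entry in /etc/default/filesys might look something like the sketch below. The keyword set (bdev, mountdir, fstyp, mntopts and so on) follows the pattern used for local filesystem entries, and the host and directory names are invented; check the filesys(F) manual page for the exact fields your release expects.

```shell
# Hypothetical NFS entry in /etc/default/filesys:
# soft mount, 5 retries, 300-second timeout, 8K buffers, interruptible
bdev=siemau:/u1 mountdir=/u1 fstyp=NFS fsck=no rcmount=yes mntopts="soft,retrans=5,timeo=300,rsize=8192,wsize=8192,intr"
```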

Although you can't specify it in the Filesystem Manager, new to OpenServer is the ability to run NFS across TCP instead of UDP. This is done simply by using the tcp mount option. One thing I would like to point out is that if you use the Filesystem Manager and change any one of the advanced options, it will include every one of the options in /etc/default/filesys, even the ones that are unchanged.

One thing I need to point out about using the Filesystem Manager is that when you specify the remote host name, you must do so using the fully-qualified domain name. Otherwise, the system won't see it. You can also choose the name from a pull down list.

As one might guess, you can also use the Filesystem Manager to mount and unmount the filesystems, aside from just changing their mount characteristics. You can also use it to administer your /etc/exports file by allowing you to add to or modify the existing configuration. It also has an option for you to view your existing export configuration. Well, sort of. This is one of those "almosts" that I have grown to expect with OpenServer. When I specify in the /etc/exports file that a particular directory can only be accessed by a certain host, the Export Mode is reported as "read-mostly" and there is no indication of what the limitations are. In addition, changes to the file /etc/exports are reflected immediately in the output of the Filesystem Manager, without first running exportfs. This means the Filesystem Manager does not show the current state of your exported filesystems, but rather what it could be the next time you reboot or run exportfs.

One important function is the ability to define the UID of unknown users. If you are running NIS and all user accounts are distributed to all systems, then you really don't have this problem. If there are cases where a user is unknown, then you can set the anon= option to a particular user ID, potentially that of a guest account or someone else who has limited access. This is an extra safety precaution that you can take. Note that the root user is always considered unknown. Therefore, if the anon= option is set, root always gets mapped to this UID. You can also disable this by setting anon=-1, in which case unknown users will have no access.
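As an illustration, a hypothetical /etc/exports combining these options might look like this (the UID 200 would be, say, a guest account; all names here are invented):

```shell
# /etc/exports (hypothetical)
/usr/man                # anyone may mount, default options
/u1 -access=siemau      # only siemau may mount /u1
/u2 -anon=200           # unknown users (including root) become UID 200
/u3 -anon=-1            # unknown users get no access at all
```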

When Things Go Wrong

There are a couple of tools that can be used to specifically check NFS connections. Because NFS relies on the same mechanisms as other programs using TCP/IP, solving NFS problems starts with understanding the tools used for TCP/IP. Rather than repeating myself, I will point you to the section on configuring TCP/IP and the SCO doc.

If you want to see all the programs using RPC on a specific machine, I would run it as:

rpcinfo -p <hostname>

Which might give me something like this (the listing is only illustrative; the program numbers are the standard RPC assignments, but the versions and ports will vary from system to system):

100000 2 tcp 111
100000 2 udp 111
100005 1 udp 1027
100005 1 tcp 1028
100003 2 udp 2049
100003 2 tcp 2049
100024 1 udp 1029
100024 1 tcp 1030
100021 1 udp 1031
100021 1 tcp 1032
100008 1 udp 1033
100002 1 udp 1034
100002 2 udp 1034
The columns are:

Program-number version-number protocol port

The program number is the RPC number of that program. You can see what RPC number equates to what program name by looking in /etc/rpc. Here we see that all the NFS-related daemons are running. If we look carefully, we see that for each program (except walld and rusersd) there is a copy of the program using both UDP and TCP. If you find that one or more of these is not running, then stopping and restarting NFS might work. Otherwise, rebooting should correct the problem. Note that portmapper, mountd, nfs, and status are required.
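For example, to find out which program hides behind number 100021, I could do something like this (the exact layout of /etc/rpc varies slightly between systems):

```shell
# Look up RPC program number 100021 in /etc/rpc
grep 100021 /etc/rpc
# The matching line names the service, in this case nlockmgr
```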

If you want to check whether a particular program is running on a server, this can also be done with rpcinfo. The general syntax for this command is:

rpcinfo -u <server_name> <program_name>

For example, if I wanted to check to see if lockd was running on scoburg, I might try running it as:

rpcinfo -u scoburg lockd

If I did, I would end up with the message:

rpcinfo: lockd is unknown service

This is because the name of the service in RPC's eyes is nlockmgr. Therefore, the correct syntax would be:

rpcinfo -u scoburg nlockmgr

Which should give you the response:

program 100021 version 1 ready and waiting

If you don't, then run rpcinfo -p to find out what programs are registered.

If you want to find out what filesystems are being mounted or can be mounted, you can use showmount. On the server, showmount -a will show you which filesystems have been mounted and by whom. This will be in the form host:filesystem. If you want to find out what filesystems are being exported and with what permissions, use showmount -e. On the scoburg system, I get this:

export list for scoburg.siemau.com:

/usr/man (everyone)

/usr/lib (everyone)

/u1 access=siemau

/u2 (everyone)

Each of the filesystems listed is accessible from every other system, with the exception of /u1, which is only accessible from siemau. This is essentially what is contained in /etc/exports.

If a client simply stops responding, it may be because the server is down and you have configured a hard mount. If so, the client may wait indefinitely for the server to come back up. Once it does, the processes can continue as before. If the mount is soft, you will (should) get an error after the specified number of retries (5 by default).


In order to be able to mount a remote filesystem using automount, you would first need to be able to mount it using normal NFS. That is to say that there are no flags that you can set on the remote side (where the filesystem is being exported) that says either to explicitly allow or deny access via automount. The remote filesystem simply makes a resource available and you access it with whatever tool you see fit. Therefore, for the purpose of this discussion, we are going to simply assume that in each case, the remote host has given us permission to access that filesystem. For more details, see the previous section on NFS configuration.

We mentioned that automount is useful when remote filesystems are infrequently mounted, such as man-pages or help files. While at SCO, I had the automounter configured so I could access the filesystems containing the source code. I could then cd into /source/ODT3 and would instantly be at the top of the source tree for ODT 3.0.

The basic configuration unit with the automounter is one or more files called "maps." These map the filesystems you want to access to the directory where you want to mount them (the mount points). These map files are fairly easy to configure and can be edited with any text editor. All automount connections are based on references in the master map file. By default this is /etc/auto.master, so we will base our discussion on that. However, you can change this by starting automount with '-f <file_name>', where <file_name> is the name of the master map file. This can be done either from the command line or by editing the entry in /etc/rc2.d/*nfs.

So, what does a master map file look like? On my machine, I have three entries and the file looks like this:

/- /etc/auto.direct -r

/usr/doc /etc/auto.doc -r

/home/siemau -passwd

The general syntax of the file is:

mount-point map-file mount-options

The first line lists /- as the mount point. At first, you might think that I have a directory /-. This would make accessing it quite difficult, as many commands would think that the '-' is an option. Well, lucky for us, this is not what is happening. When you specify the mount point like this (/-), automount sees it as being a direct map. It will therefore use the mount point you specify in the map file. Here the map file is /etc/auto.direct. Note that you don't have to name the map file like this. I gave it the name auto.direct for two reasons. First, the convention is to name the files auto.whatever, with "whatever" usually referring to some characteristic of that mount. Since this is a direct mount, I named it auto.direct. Which is basically the second reason why I named it so: self-documentation.

In the second line, the mount point is an indirect map. Both automount and you can see that immediately because we have a full path name of the mount point. Before I go on, I want to re-address one point. One of the differences between direct and indirect maps is that the directory for indirect maps does not exist anywhere. If you are in the parent directory of the mount point you will not see the mount-point directory. On the other hand, the directory for direct maps does exist (potentially having been created by automount). If it already exists and automount is started, any files in that directory are hidden, just like a "normal" mount.

The map-name is the name of the file containing the direct or the indirect map, or the name of the built-in map, if you are using one. In the first line of the above example, the name of the map file is /etc/auto.direct. To ensure that automount gets the right file, I recommend that you always refer to the files by their absolute names. In the third line, you see that I am using a built-in map, whereby I am importing the home directories from another machine. Since the path of the mount point must include the system name, we see that the home directories will be mounted from "siemau". (This comes from the -passwd map.) Note that because of the built-in map, we do not specify a map file.

At the end of the line are the mount options. In the first two cases, I specified the filesystems as read-only (-r). Note that this only takes effect if there are no options in the map file that contradict it, as it is the map file that takes precedence. You can use any option that would apply to another NFS filesystem, except for the bg (background) and fg (foreground) options. This is simply because with automount, these options do not apply.

To find out the specifics of the filesystem to be mounted, automount looks in the map file. In general, the format of both indirect and direct map files is the same:

mount-point options remote-host:<path_name>

The key difference is the format of the mount-point entry. For a direct map, you use the full path name of the mount point. If the directory does not exist, automount will create it for you. However, you must include a full path. For example, my auto.direct file looks like this:

/usr/direct -rw siemau:/u1

Here I am saying that I want to mount the /u1 filesystem from the machine siemau onto my /usr/direct directory. Note the mount option here, -rw. In the auto.master file I specified that I wanted to mount it read-only. However, here I specified read-write; therefore the filesystem will be mounted read-write since, as I said, the options in the map file take precedence.

On the other hand, you specify a kind of relative path for the indirect maps. This is the name of the virtual directory underneath the directory specified in auto.master. Confused? Let's look at the example on my system:

man -r siemau:/usr/man

scohelp -r siemau:/usr/lib/scohelp

As we see, the basic format is the same as for the direct map. The only difference (which is a very important difference) is the absence of full paths for the mount points. If we look back to auto.master, we see that the mount point is /usr/doc. If I did a cd into, or otherwise accessed, the directory /usr/doc/man, then automount would kick in and I would be looking at /usr/man on siemau. What automount does is to append the directory listed in the map file to the directory listed in auto.master. Here, man or scohelp is appended onto /usr/doc.

If we look in the parent directory (/usr/doc) before we mount the filesystem, we see that it is empty. However, once we cross the mount point, there will be a symbolic link that points to the directory in /tmp_mnt. Remember, too, that automount only mounts filesystems that are being used. Therefore, siemau:/usr/lib/scohelp won't be mounted at this point (unless, of course, we already crossed that mount point).

This is where the virtualness of the mount point comes into play. The directory exists only as the link to /tmp_mnt and will disappear from the system if you don't access the mounted filesystem within the timeout (or the system gets rebooted).

In my example auto.master file, I have one built-in map, -passwd. As I mentioned, this points to the machine containing the home directories that should be mounted. In conjunction with NIS, the -passwd built-in map is very useful for allowing users to log into any machine, but still have access to a single home directory.

There are a couple of things to keep in mind when using the -passwd map. First, the machine name you specify is the name of the server on which all home directories physically exist. If you are using NIS, this does not have to be the same as the NIS server. Second, on the server you need to ensure that all users' home directories (as specified in /etc/passwd) are in the directory specified. For example, on each client I would need to ensure that the home directory was /home/siemau/<user_name>.
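In other words, with the /home/siemau -passwd entry from my auto.master, a user's passwd entry on every client would have to follow this pattern (the user name, UID and GID here are invented for the example):

```shell
# /etc/passwd entry on every client for a user served by siemau:
jimmo:x:12709:50:Jim Mohr:/home/siemau/jimmo:/bin/sh
```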

If the home directories were scattered across multiple systems, we could create direct maps for each of the users. In the auto.master file we could point to, let's say, a file /etc/auto.user. For example:

/- /etc/auto.user

This would then contain the direct mappings for each user's home directory. A few lines might look like this:

/u1/jimmo -rw siemau:/usr/jimmo

/u1/kenton -rw scoburg:/u/kenton

/u2/yadira -rw scoburg:/u/yadira

There are a couple of things to note. First is the fact that the mount points are not all in the same directory. Both jimmo and kenton have their home directories in /u1, while yadira has hers in /u2, even though her home directory physically resides on scoburg, like kenton's. The other thing is that we originally made the assumption that we are on the machine siemau. The home directory for jimmo is /usr/jimmo on siemau. This does not present a problem. There is nothing wrong with mounting a filesystem via NFS from the local machine. It is a little slower, since it has to go through all the TCP layers. However, it is easier to administer, since the auto.user file can then be distributed to each machine in the network. You could use this same technique for indirect maps as well.

Grabbing it all

Exported filesystems are there to be used. There may come a time when you want to access all the filesystems that a machine is making available to you. This is where the -hosts built-in map comes in. If you use it, then automount will configure itself to mount all exported filesystems of all known hosts. In this context, "known" means the same as elsewhere: if the host is listed in /etc/hosts or can be identified through the name server, then it is known.

Aside from saving you the hassle of listing each filesystem individually, this method allows you to update the list of exported filesystems without having to change all the clients. The disadvantage is that unlike direct or indirect maps, using the -hosts built-in map means that you have to use specific mount points.

By convention, the mount-point is a sub-directory under /net, although you can specify a different mount-point in auto.master. For example, the conventional way would be:

/net -hosts

However, I could write it like this:

/auto -hosts

Therefore, all mount-points would exist under /auto. Note that the filesystems are not mounted directly under this mount point, but rather under a sub-directory matching the name of the remote system. Therefore, if I were to have such an entry, there would be sub-directories for scoburg, vesta and siemau. Another difference is that when you mount a machine's filesystems this way, any time you cross any of its mount-points, every one of its filesystems gets mounted, all with the same mount options.
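So, with the /auto -hosts entry, simply referencing a path under a machine's sub-directory triggers the mounts. A hypothetical session (using the machines and exports from the examples above) might go like this:

```shell
# Crossing any of siemau's mount-points under /auto...
cd /auto/siemau/usr/man

# ...mounts all of siemau's exported filesystems, so this now
# shows each of them (e.g. usr, u1, u2):
ls /auto/siemau
```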

When you cross the mount point, automount catches this and determines the name of the remote system. It either queries the name server (if one is running) or looks in /etc/hosts. Automount then queries the remote mount service to get a list of exported filesystems. The list is then sorted by the length of the pathnames to ensure that sub-directories don't get mounted before their parents. With the sorted list, each directory is, in turn, created and the filesystems are mounted. If the name cannot be resolved, then the filesystems cannot be mounted, and therefore whatever command was used to access the mount point will fail.

If you are using a lot of different remote filesystems, you may want to ensure that certain directories do not get mounted. This is accomplished using the -null built-in map. This map tells automount not to mount any remote filesystems to this mount point. You usually see this when using NIS to distribute maps and one of those maps specifies a directory that you don't want covered. You can include this map either in the auto.master file or the direct map file. Why not in the indirect map file? Remember that the directories for indirect maps do not really exist. Therefore, you cannot tell automount not to mount a filesystem onto something that isn't there.
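For example, if an NIS-distributed map covered /usr/local, but on this one machine you wanted the real, local /usr/local left alone, an entry like this (in auto.master or a direct map) would do it:

```shell
# Tell automount to leave /usr/local alone on this machine
/usr/local -null
```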

Automount from the command line

So far, we have been talking about the configuration files as if they were the only way to configure automount. That's not the way it is. Like other types of NFS mounts (or any mount, for that matter), automount can be configured from the command line. The only caveat is that the map files must contain the necessary information. In other words, by running automount from the command line, you by-pass the auto.master file. However, all the information that is normally obtained from the master file is taken from the command line instead. Let's look at an example:

automount /- /etc/auto.data

This would be the same as if the auto.master file contained the line:

/- /etc/auto.data

Here we have a direct map contained in /etc/auto.data. No matter what is contained there, it will be mounted just as if we had referenced the map file in auto.master. You can also have multiple references on the command line:

automount /- /etc/auto.direct -r /usr/doc /etc/auto.doc -r /home/siemau -passwd

which would be equivalent to the three lines from the example auto.master file at the beginning of this chapter.

What happens when you modify one of the maps will depend on which map file you are modifying. The master map is a static entity. It is only read by automount when the automount daemon is started (usually at boot-time). Therefore, you need to restart automount to make any changes in the master file take effect.

The SCO doc says that "Rebooting your machine is the safest way of restarting automount" and I would tend to agree. I have tried both killing the automount process and restarting it, as well as stopping and restarting NFS. Both seemed to work a couple of times. However, it seems that once I have used the mount point, automount gets confused and neither of the above techniques works. What usually happens is that I get an error saying that the remote connection was refused, although I can still access the filesystem by hand. However, I have been successful bringing the system into init level 1 and then back up to multi-user mode. This seems to be effective since all mounted filesystems are unmounted and the automount daemon is killed. (Note: If you kill automount with -9, the entries in the mount-table do not get cleared. Therefore, it is recommended that if you do kill automount by hand, you use signal 15: kill -15.)
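Killing automount by hand might then look like this sketch (the brackets in the grep pattern are just a trick to keep grep from matching its own entry in the process list):

```shell
# Find automount's PID, then send it signal 15 (never -9, which
# leaves stale entries in the mount-table)
pid=`ps -ef | grep '[a]utomount' | awk '{print $2}'`
kill -15 $pid
```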

Since indirect maps are only accessed when you cross the mount point, they can be modified at any time and the changes will take effect immediately. Keep in mind that adding or removing an entry is also a modification, so those changes, too, take effect immediately.

Changes to the direct maps sort of fall between the two extremes. Not every change you make will take effect immediately; however, you can make some changes that do. Let's look at it this way: direct map filesystems are "mounted" when automount starts, although the connection is not established until you cross that mount point. Therefore, any changes you make to the mount point will not take effect until the next time automount is restarted. However, since the connection has not yet been made, changes that affect the connection can be made and are valid the minute you make them. For example, you can change the server that the filesystem is mounted from and the effects will be immediate.

Well, not exactly. If the filesystem is currently mounted, then the entry exists in the mount table. When you access files across the mount point, automount first checks the mount-table to see if the entry is there. If so, that means the filesystem is already mounted and it does not need to do any more work. To force automount to recognize the changes, you can stop and restart NFS. The alternative is to find the PID of the automount process and kill it with signal 15. However, you might experience some of the problems that I mentioned above.

In many regards, remotely mounted filesystems behave the same way as those that are mounted locally. One of the most apparent similarities is that the mount point is a flag to the filesystem access routines saying that the sub-directories under the mount-point are somewhere else. If you mount a filesystem onto a directory that contains files or directories, these will be "hidden" from you. When the mount-point is accessed, the system will start looking elsewhere for the files. For direct maps, this is the parent directory (which is also the mount-point). For indirect maps, the "virtual" mount-point is the parent directory. In our example above, any files that existed below /usr/doc would become hidden when automount was started.

What happens if the server containing the man-pages is down? Does this mean that you don't have access to them until the machine is brought back up? Well, it all depends on how your system is configured. If you reference only one machine with the man-pages, then yes: if that one machine goes down, then you lose your man-pages. Fortunately, it's possible to get automount to look for an alternate server if the primary is unreachable. These are referred to as redundant servers and can be specified in either a direct or an indirect map. The syntax for redundant servers is basically to list more than one location, for example:

/usr/man -r scoburg:/usr/man siemau:/usr/man vesta:/local/usr/man

To make it easier to read, we could escape the end of line and write the entry like this:

/usr/man -r scoburg:/usr/man \

siemau:/usr/man \

vesta:/local/usr/man
Note that in the last entry, the path name on the remote machine is different. This is possible since the connection is not made until the mount point is crossed. Therefore, it doesn't matter what it is called until the mount actually takes place. We can also take a short cut, since the directories on scoburg and siemau are the same. The resulting entry would look like this:

/usr/man -r scoburg,siemau:/usr/man vesta:/local/usr/man

There are two things to keep in mind about this. First, this list does not specify any particular order. Just because we list scoburg first does not necessarily mean it will be the one accessed, even if it is up; it is the first server that responds that will be connected to. Next, you need to be very careful when using redundant servers. Just because we accessed one server this time does not mean we will do so the next time. If we update a file on one server, but don't get back to the file for some time, we may find that the change is gone if another server was mounted. However, this only applies to read-write filesystems.

Syntax Short Cuts

A couple of quick notes about the syntax in your map files. There are two metacharacters that can be used when specifying entries in any of the map files. The first is the ampersand (&), which is used as a place holder for the mount-point. For example, my auto.doc file looks like this:

man -r jmohr:/usr/man

scohelp -r jmohr:/usr/lib/scohelp

Since the basename of each directory is the same as each mount point, I could have written the file like this:

man -r jmohr:/usr/&

scohelp -r jmohr:/usr/lib/&

When automount tries to mount the filesystem, it automatically translates the place holder. This translation occurs every time that automount sees the ampersand. Let's assume we had an indirect map file that looked like this:

siemau siemau:/home/siemau

scoburg scoburg:/home/scoburg

vesta vesta:/home/vesta

This says to mount the home directories that are being exported from each of these machines. Note that the name of each machine appears three times in each line: first as the mount point, then again as the system name, and finally as the name of the directory under /home. We could therefore rewrite the file like this:

siemau &:/home/&

scoburg &:/home/&

vesta &:/home/&

In both cases on each line, the ampersand is expanded to be the same as the mount point. Since it doesn't matter where in the line the ampersand appears, we could use it in this direct map:

/usr/man -r scoburg,siemau:& vesta:/local&

or this indirect map:

man -r scoburg,siemau:/usr/& vesta:/local/usr/&

Let's look at the direct map again. Note that there is no slash after the /local. This is because the ampersand replaces the entire string. Since the string we use is /usr/man, the slash comes with it. In this case, there is nothing wrong with including the slash; we simply end up with /local//usr/man. However, this is something you should be aware of.

We can also use an asterisk (*) to indicate "all known servers." If we look at the entries above for exporting the home directories, we see that they all have the same format. We can take advantage of that by replacing every line with just a single line:

* &:/home/&

In this example, the asterisk is first expanded to mean any system. The ampersand is then expanded to mean the /home/server_name directory from any server. Once the asterisk is matched, the rest of the map file is no longer parsed. For example,

/usr/man -r scoburg,siemau:& vesta:/local&

* &:/home/&

is ok. However, reversing the lines:

* &:/home/&

/usr/man -r scoburg,siemau:& vesta:/local&


causes automounter to ignore the /usr/man entry.

You might also have cases where you have a large map file or are combining local maps with those propagated with NIS (more on that later). In that case, you can flag an automount map to include another map file. For example, we might have a map containing both the documentation directories and a list of user home directories:

man -r siemau:/usr/man

scohelp -r siemau:/usr/lib/scohelp

+/etc/auto.homes
When automount reads this map file, it loads in the contents of /etc/auto.homes and behaves as if the contents were included in the first file. You could also have additional maps that were included; each would simply be read in turn.

/etc/nfs - NFS start-up and shutdown script.

/etc/portmap - Portmapper. Converts DARPA ports to RPC program numbers.

/etc/rpc - RPC number to program name map.

/etc/mountd - NFS mount request server.

/etc/nmountall - Mounts multiple NFS filesystems.

/etc/numountall - Unmounts multiple NFS filesystems.

/etc/nfsd - NFS daemon to handle client requests.

/etc/automount - The automount program.

/etc/auto.master - Default automount master file.

/etc/exports - List of exported NFS filesystems.

/etc/nfsclnt - Creates NFS client handles.

/etc/biod - Server for block I/O requests.

/etc/lockd - Network lock daemon.

/etc/pcnfsd - NFS authentication and print request server.

/etc/pcnfsd.conf - pcnfsd configuration file.

/etc/rusersd - Network user name server.

/etc/rwalld - Network wall server.

/usr/bin/nfsstat - Reports NFS statistics.

/usr/bin/rpcinfo - Reports RPC information.

/usr/bin/rusers - Reports information on network users.

/usr/bin/rwall - Write to all users on the network.

/etc/services - List of network services provided.

/usr/bin/showmount - Shows information on remotely mounted filesystems.

/etc/inetd - Internet 'super-server.'

Table 0.5 Key NFS Files


NIS is not something new to OpenServer; ODT had it as well. However, there were some major differences. The most significant was that there were no client machines in the ODT implementation. That means that requests were not answered across the net; everything was done locally. The only machines were servers of one form or another. Although that works, it really isn't what NIS is all about. You could accomplish the same goal by using rcp (remote copy) at regular intervals from some master server. Because of this, I am going to forgo discussion of the ODT implementation of NIS. If you are still running ODT and want NIS, then this is a good reason to upgrade. If you don't want to upgrade, then I must refer you to the ODT documentation.

One of the key aspects of NIS is the transfer of files from one machine to another. The file /etc/yp/YP_MAP_X_LATE is a listing of all the files that need to be transferred. If you want to transfer files other than the default ones, then this is where you need to add them. The relationship between the original file and what is transferred is referred to as a map. The maps in YP_MAP_X_LATE are created when you first configure NIS. This is done by running either mkdev nis or /etc/yp/ypinit. Which one you choose is really a matter of personal preference; the two are linked.

As you can see, the key to the whole mapping scheme is the /etc/yp/YP_MAP_X_LATE file. Any time you want to add files to the default list of those being propagated, you must add them to YP_MAP_X_LATE. This file consists of two columns. The first column is the long name of the map. The second column is the short name for the map. A few lines might look like this:

With the exception of the first line, the short name of the map really is shorter. The reason for this translation of long names to short ones is the filename length limitation under System V. Remember from our discussion of filesystems that with the AFS (as well as others), filenames can be no longer than 14 characters. If we added either the .pag or .dir extension to some of the mapping files, we would end up with names longer than 14 characters. By using the map file, the "real" name is converted to a "disk" name so that the extension can be added. (Remember, the .dir files are "directories" and the .pag files are the data files.) When the server exchanges information, it uses the YP_MAP_X_LATE file to make the translation. Note that the server reads this file dynamically. Therefore, changes can be made to this file even while the server is running. By default, automount files are included in the NIS maps. This is the reason why you need to remove the -m from the automount line in /etc/nfs.
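
To see why the translation is needed, here is a quick sketch of the arithmetic (my illustration, not SCO's code): it checks whether a few typical map names still fit in 14 characters once the .pag extension is added.

```shell
# Which NIS map names overflow a 14-character filename limit once
# the .pag extension is added? Illustration only.
too_long=""
for map in passwd.byname hosts.byaddr ypservers; do
    name="$map.pag"
    if [ ${#name} -gt 14 ]; then
        too_long="$too_long $map"
        echo "$name: ${#name} characters - needs a short disk name"
    else
        echo "$name: ${#name} characters - fits"
    fi
done
```

Both passwd.byname and hosts.byaddr overflow the limit once the extension is added, which is exactly the case YP_MAP_X_LATE exists to handle.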

Once you have configured your master, the maps are propagated the next time you go into multi-user mode. Afterwards, maps are not propagated automatically, but wait for the clients to request them from the server. The actual transfer is done by ypxfr. This is started on the machine receiving the maps and can be run periodically with cron, from the command line, or triggered by yppush. In each case, ypxfr checks the /etc/yp/map2ascii directory. This contains shell scripts (which you can therefore look at) with the same name as some of the maps to be transferred. If there is a file here, it is executed as the last step in the transfer process. It is these shell scripts that convert the maps back into their ASCII format and install them in the proper location.

If ypxfr is run non-interactively (i.e., through cron), all transfer activity is automatically logged in /usr/adm/nislog, assuming that the file exists. If so, the information is appended. If you run ypxfr interactively, then the transfer information is sent to the terminal.

If you decide to propagate the maps through a crontab, then you might want to consider different intervals for different maps. For example, a file like services.byname, which is the map for /etc/services, only changes when you add new networking software, so it may remain unchanged for months. Other files, like hosts.byaddr, which is the map for /etc/hosts, get changed any time a machine is added to or removed from the network. If you have an organization with a lot of user turnover, then a file like passwd.byname might change even more often.
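
For example, a pair of hypothetical crontab entries on a client might pull the volatile hosts map every hour but the sleepy services map only once a month (the times here are purely illustrative):

```
# pull hosts.byaddr at the top of every hour
0 0-23 * * *   /etc/yp/ypxfr hosts.byaddr
# pull services.byname at 2am on the first of the month
0 2 1 * *      /etc/yp/ypxfr services.byname
```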

An alternative to transferring all the maps at once or to complicated crontabs is a shell script that checks the age of files and propagates only those files newer than a given date. An even cleaner alternative is the three scripts in /etc/yp: ypxfr1phr, ypxfr1pdy and ypxfr2pdy. These run ypxfr hourly, daily and twice a day, respectively. Which maps are propagated by each script is based on how often each is expected to change. Even if your system changes at a different rate, these files will usually get changed at the same time. Therefore, instead of setting up your own cron jobs, you can simply change how often these scripts are run. Remember that it is the master that is changing; therefore, you will need to run these scripts on every client. However, stagger the times a bit so as not to overburden the server.
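
The age-check idea itself is simple. Here is a minimal sketch under assumed paths (a throw-away demo directory and a stamp file touched after each run); a real script would call ypxfr where this one just echoes:

```shell
# Propagate only maps that changed since the last run (sketch).
MAPDIR=/tmp/maps.demo
STAMP=$MAPDIR/.last_xfer
mkdir -p $MAPDIR
touch -t 202001010000 $STAMP      # pretend the last transfer was long ago
touch $MAPDIR/hosts.byaddr        # this map has changed since then
stale=$(find $MAPDIR -type f -newer $STAMP)
for map in $stale; do
    echo "would transfer: $(basename $map)"
done
touch $STAMP                      # record this run for next time
```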

Whereas ypxfr is used from the client side, yppush is used on the server to propagate (push) the maps to the clients. Since yppush requires a copy of the map to already exist on the client, you cannot use it to propagate new maps. Instead, ypxfr needs to run on each client.

Now we get to how the clients differ. There are three types of non-masters. A copy-only server can only copy the information itself to other machines, hence the name. It cannot accept requests to transfer the maps. Therefore, if the master becomes unavailable, the copy-only server sits on its tush and can do nothing. A slave server, in contrast, can satisfy requests if the master is unavailable. Copy-only servers are not updated by the master automatically, but must poll the master to get the updated information. A true client does not maintain any maps, but requests the information across the net. Keep in mind that if you make changes to the files on either of the non-master servers, the changes will be overwritten the next time they are updated from the master.

Before NIS services can be provided to a host, it must be bound. This process is called binding the host to the NIS domain. This is accomplished at system startup, when the ypbind daemon broadcasts for a server to satisfy its request. The machine binds to the first server in the domain to answer. Once the binding is made, processes on the local machine can communicate with the ypserv daemon, whose job it is to transfer or display maps, as well as provide other information about the NIS domain.

When dealing with NIS, there are two types of user accounts: distributed and local. Distributed accounts are propagated to remote machines and must be administered on the master server. Distributed accounts allow these users to gain access to any machine within the NIS domain. When changes are made to these accounts, the information is automatically propagated via the NIS maps.

Local accounts, as their name implies, are local to a specific machine and must, therefore, be administered locally. (Makes sense, huh?) Local accounts enable you to exempt certain accounts from the NIS domain. In other words, the account only exists locally. Therefore, the user can only log in on that one machine (or any other where a local account is set up). By default, pre-existing accounts become local accounts when NIS is initialized.

The NIS password information is maintained by several files in /etc. The /etc/passwd.yp file contains the password entries for distributed accounts. If during NIS initialization you said not to make the system accounts distributed, this file will not contain any of those system accounts. However, in other respects it is identical to the /etc/passwd file. The /etc/passwd.local file contains entries for local accounts. These are created when NIS is initialized on a system or added later using the Account Manager.

The /etc/passwd file is your normal password file and contains the information for all accounts, local and distributed. On the master server, the passwd file contains all entries from both passwd.yp and passwd.local. On slave and copy-only servers, it contains entries from passwd.local and ASCII translations of the password map, which it received from the master.

Note that entries in passwd.local take precedence over entries in passwd.yp. That is, if you have information for a specific user account in both passwd.local and passwd.yp, the information in passwd.local is incorporated into passwd, and the corresponding information in passwd.yp, as well as in the passwd map, is overridden.

Paired with each of these files are group files. These files serve the same function as their password counterparts. In addition, entries in the group.local file take precedence over entries in the group.yp file, just as for the password files. Each NIS domain has a unique group.yp file, which exists only on the master server.

On clients, the NIS maps can be used to augment both local password and group information. Just as on machines where NIS is not running, accounts under NIS in OpenServer can be administered using the command line tools: useradd, userdel, and usermod. You can also include NIS maps in both the /etc/passwd and /etc/group files. These entries are essentially the same format as "normal" entries, but are preceded with a '+'. These must be unique, in that the same user (or group) cannot exist on this system as a "non-NIS" user (or group). If there is an NIS map with this same user (or group), these entries override the maps.

If you decide that you want to have certain users that are valid only on the local machine, then you can include NIS maps in your existing /etc/passwd file. Remember when we were talking about including NIS maps in automount files? If the first character on a line in an automount map is a plus-sign (+), then automount includes the referenced file.

With NIS maps the situation is similar. A leading plus-sign tells the system to include an NIS map. Using this mechanism, you can either include individual entries or the entire passwd map. To include the entire NIS passwd map, the line simply looks like this:


To include individual entries, you precede each line with the plus-sign and then an entry very similar to that in /etc/passwd. For example, the user jimmo may look like this:

+ jimmo:x:::All around good guy:/usr/jimmo:/bin/ksh

Here the general form of each entry is:


There are a couple of things to note. First, neither the UID nor the GID is included. Remember, we are trying to make access to remote machines as transparent as possible. If UIDs and GIDs were assigned on a per-system basis, then things could get confused when transferring files or otherwise working with the system. Since all access is based on UID and GID, the system could potentially see you as two separate users.

Next, everything inside the square brackets ( [] ) is optional. Yes, everything. If you leave out an entry, then the corresponding information is taken out of the NIS map. If you wanted, you could have a line that looked like this:


However, this is just the same as:


Potentially, I could limit everyone coming into the system via an NIS map to a specific home directory or login shell. For example:


Here, all users that come in via an NIS map will have the ksh. (If they are going to use my system, then they had better use a decent shell.)
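
To make the override idea concrete, here is a toy sketch (mine, not SCO code) of what the system effectively does: fields left empty in the local '+' entry come from the NIS map, while a filled-in field, the shell in this case, wins.

```shell
# The entry as it appears in the NIS passwd map (made-up values):
nis_entry="jimmo:x:205:100:All around good guy:/usr/jimmo:/bin/sh"
# The local override forces only the 7th field, the login shell:
override_shell="/bin/ksh"
merged=$(echo "$nis_entry" | awk -F: -v sh="$override_shell" \
        'BEGIN { OFS=":" } { $7 = sh; print }')
echo "$merged"
```

The merged entry keeps jimmo's UID, GID, comment, and home directory from the map, but he logs in with ksh.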

As one might guess (okay, maybe not), the plus gives you access and the minus takes it away. Let's say I have a very nosy user that I don't want to give access to a sensitive system. Although every other user has access, I want to keep this user out. I could therefore create an entry like this:


(Yeah. I wouldn't trust him either.)


Creating network groups

You can also create network groups that are used for permission checking when making remote mounts, remote logins, etc. Network groups are defined in the /etc/netgroup file. Each line of the netgroup file defines a single group and has the format:

groupname user1 user2 ...

Members of a netgroup do not necessarily need to be user names. They can be what are called "triples," with the format:

(machine_name, user_name, domain_name)
This can be used to limit access to specific users or machines, as well as to define the group in broad categories. This is done by leaving out specific entries. For example, an entry like this would create a group that everyone is a member of:

world (,,)
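
Pulling these pieces together, a hypothetical /etc/netgroup might look like this (the host and user names are the ones used throughout this book; an empty position in a triple matches anything):

```
admin    (siemau,jimmo,) (scoburg,larry,)
vestans  (vesta,,)
world    (,,)
```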

We can say that only certain groups have access or that certain groups are denied access. To include a netgroup, the line is preceded by a plus- or minus-sign and then an at-sign (@) to indicate that it is a netgroup and not a user. Keep in mind that, as in other instances, the system will stop once it finds a match. For example, say jimmo is a member of the admin netgroup. I want all members of the admin netgroup to have access, except for jimmo. So I might be tempted to have something like this:



The problem is that jimmo is a member of the admin netgroup. The system sees that members of the netgroup are allowed access and lets jimmo in. It never even gets to the next entry. Therefore, you should keep all references denying access in front of those allowing access. Once the system finds a match denying access, access is denied and the system never continues looking for something that might grant access. However, if the entry granting access came first, the system would never find the reference denying access.

However, there is an exception. If you want to allow access to an individual, but not to the entire group, then the individual entries must come first. So, it would be better to expand the statement about ordering to say: individual names come first, followed by group names, and denying access comes before granting access.

Note that in the netgroup listing there is no information about passwords, comments, home directories, or shells. Since a lot of this is different for each user within the group, the system simply takes this information from the map and nothing is changed locally. Here again, whatever comes first takes precedence. If you want a particular user within a group to have something other than the default, their entry must come before the one for the group. Note also that all local entries must come first. Once you specify an NIS map, all subsequent local entries are ignored.

Some of the more observant readers may have noticed something. First, I haven't mentioned which tool you use to modify the entries in /etc/passwd to reflect the NIS maps. Well, yes, I have. It's called vi. The only way to make these changes is by using a text editor. Be very careful. If you trash your /etc/passwd file, then no one can log in.

The same thing can be done with your /etc/group file. If the first character on the line is a "+", then the system will interpret that as meaning it should read from an NIS map. As with the /etc/passwd file, the syntax for each entry is the same as for local entries. The difference is that the GID is not included. Here again, this is because the GID is taken from the NIS map, as it should be. To include every group in the NIS map, the syntax is


Note that the NIS group map is also used to add members to a local group. Assume I have a group called hackers that exists locally. There are people on other machines that I would like to include in this group. The syntax would be:


This would include the NIS users jimmo, larry, curly, and moe in the local group hackers.

Automount Maps

As I mentioned in the section on automount, the automount maps can be propagated using NIS. The automount maps are files, and files can be propagated to other machines using NIS. So, what's the problem? The first step, as with any other file, is to ensure that the file exists. So, on the master NIS server, you need to create the /etc/auto.master file. Next, change directories into /etc/yp and run

ypmake auto.master

to toss the auto.master file into the NIS system and get it propagated to the other machines. Then on both the server and the clients, find the line in /etc/nfs:

automount -m -f /etc/auto.master &

The -m keeps automount from looking in the NIS database. To get automount to use NIS, you simply remove the -m option here.

If there are automount maps that we want to propagate to every machine, but we still have some that are local, we can easily tell automount this. We can use the shortcut that a line in an automount map starting with a plus-sign (+) names a file to be included in the current map. The example was:

man -r siemau:/usr/man

scohelp -r siemau:/usr/lib/scohelp

+/etc/auto.homes

If we take away the path name from the include line, automount treats it as an NIS map. For example:

+auto.homes

In this case, automount will look through the NIS maps for the map auto.homes. This assumes that auto.homes is on the NIS server and that you propagated it with:

ypmake auto.homes

Starting NIS

Before you start NIS, you need to ensure several things. First, NIS has to be installed. I have talked to customers who complain that something is broken when they try to configure certain products, only to find that they had forgotten to install them. So, please, before you call SCO Support, make sure NIS is installed.

Next, make sure that the network is configured properly. Make sure that every machine running NIS can communicate. If the machines cannot communicate, either the maps cannot be propagated or NIS requests cannot be satisfied. Also check YP_MAP_X_LATE to make sure it includes the files you want to propagate. Add the ones you want that are missing and remove those you don't want. If the file name is short, then you don't have to worry about the 14-character file name length limit. Also, if all the other NIS machines are running ODT or OpenServer with an EAFS, DTFS or HTFS, then you won't have this length problem.

You should also check to ensure that the domain name on each machine matches. This does not necessarily need to be the Internet domain name. However, I have found that it is easier to manage things if names are consistent. Therefore, I keep the Internet and NIS names the same. You can use the domainname utility to check or modify the domain name. Without an argument, domainname returns the name of the current domain. With an argument, it sets the domain to that name. For example, to set the domain to siemau, I would use:

domainname siemau

Once NIS is configured, root will probably be using a lot of NIS commands. For the most part, these are all located in /etc/yp. Therefore, you might want to consider including /etc/yp in your path.
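
Adding the directory is a one-liner in root's .profile; a sketch:

```shell
# Append the NIS administration directory to root's search path.
PATH=$PATH:/etc/yp
export PATH
echo "$PATH"
```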

To get NIS going, you need to initialize each server within the domain. This ensures that the correct data files are created from their ASCII counterparts and are put in the right place to be propagated. In order to initialize a server, you need to be in maintenance (single-user) mode. Note that once the initialization is complete on the master server, it is safest to reboot the master and bring it into multi-user mode before configuring any of the other servers or clients.

You can configure NIS using either mkdev nis or ypinit. As the SCO Doc kindly points out:

"The functionality and the options for ypinit and mkdev nis are identical."

I would hope so, considering they're links. To make my typing easier, I will refer to the command as ypinit. If you decide to use mkdev nis, then every time I say ypinit, you read mkdev nis. Okay?

The first thing you are asked is what type of server you are configuring. Keep in mind that you should really configure the master server before anything else. If you are in a high security mode and try to configure the server, the system screams at you and aborts the configuration process. The options are:

m - master

s - slave

c - copy-only server

C - client

q - I changed my mind and want to go play video games

If you have already installed NIS, ypinit will catch that and ask you to verify that you want to re-initialize it. If ypinit doesn't catch this, then there is something wrong. If this is a new installation, ypinit will ask you for the domain name, if it hasn't been set. Keep in mind that this doesn't have to be the same as your Internet domain, but it is helpful to keep them the same.

Next, you get several questions about your configuration files. I am not going to repeat the doc by listing every question. You know the state of your system better than anyone; if you cannot answer the questions, then refer to the previous sections or look at the SCO Doc. One thing I would like to point out is that if this is a rebuild/reinitialization, then ypinit will overwrite (it says "destroy") the existing dbm files in /etc/yp/<domainname>. This is expected and nothing to have a heart attack over.

One question that was unclear to me at first was about the way ypbind is to run. Remember, the process of relating a domain name to the database files is called "binding." This is done by ypbind. If you run ypbind with the -s option, it runs in "secure mode." This is for environments where you trust everyone in the domain. However, it does not really do much.

If you want to allow anyone to change your binding, then use the -ypset option. Because (as I just said) anyone can change your bindings, this poses some very serious security risks. It should only be used in completely secure environments. To limit changes to the bindings to the local machine only, use the -ypsetme option. However, this is no guarantee, as unsecure networks, where individuals can do IP address spoofing, have the potential for breaching your security. You can also run ypbind without any options. Note that you can change the bindings on your machine later by running ypbind from the command line with the option you want.
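
In summary, the choice boils down to one of these start-up lines (shown side by side for comparison; you would run exactly one):

```
ypbind              # default: no remote rebinding
ypbind -s           # "secure mode"; little practical effect
ypbind -ypsetme     # only this machine may change its binding
ypbind -ypset       # anyone may change the binding - unsafe
```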

After this, you are prompted to add the names of all other servers in your domain. This includes master and slave servers as well as non-SCO servers. Here, too, you have the choice of propagating system accounts such as root. Although this does make things easier, I recommend against it. I have one network using NIS where the root password for each machine is easy to figure out once you know the scheme. In other words, if you know the root password for one machine, you can figure it out for all the others. Personally, I think this is a more secure scheme than using NIS. Remember, the root user is all-powerful. The less chance of letting people get root access, the better. Therefore, I say do not propagate any system accounts.

If this is a re-initialization, then you probably already have the /usr/lib/nisrt directory (remember it is used to store backup copies of the system files?). Therefore, don't be alarmed if ypinit catches this. Next, ypinit tells you that it is proceeding with the configuration, which may take several minutes. If this is a reinitialization and you have slave servers, then you will need to run yppush to push (propagate) the new information to the slave servers. At this point, you are done and can reboot.

To initialize a slave server, you follow the same procedure as for the master server. One key difference is that when prompted for the type of NIS machine you are configuring, you input 's' for slave instead of 'm' for master. (Did I really need to say that?) Note that even though this is a slave server, the same issues with ypbind apply. Also, you will be prompted to supply the name of the master server.

Initializing a copy-only server is the same as well (keeping in mind that you select 'c' for copy-only). The difference is that even if a copy-only server is running at a higher security level, you can configure NIS. However, NIS will not integrate certain sensitive maps.

When you configure an NIS client, things are a lot easier. Like the other NIS machines, you need to be in single-user mode and start ypinit. You choose a 'C' to be a client, which is capitalized; be careful of that. Also, since the client is in the domain and the maps must be bound, ypbind is running, so the same issues apply. Interestingly, that's it. Once the maps are bound to the domain, the client doesn't need any more information. Instead, everything is obtained at boot time.

Once NIS is up and running, you may want to include new maps. This is done on the server from the /etc/yp directory as:

ypmake <map_name>

If you have made changes to a mapped file, you need to propagate the maps to the other servers and clients. This is also done from /etc/yp on the server as:

yppush <map_name>

A couple of things to note here. If you create a new map with ypmake, you do not need to run yppush afterwards, as this is done automatically. If you don't supply any arguments to ypmake, then all outdated maps will be recreated. However, if you do specify a map name, it will be updated no matter what. Also, if you have the DevSys on your system, then ypmake will use the make command to recreate the database. Therefore, if you have a makefile in /etc/yp, you can create more complex schemes for updating the database. See the make(CP) man-page for more details.
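
For instance, a hypothetical rule in an /etc/yp makefile might rebuild the hosts map only when /etc/hosts has actually changed (the timestamp target name is illustrative, not a standard file):

```
hosts.time: /etc/hosts
	/etc/yp/ypmake hosts.byaddr
	touch hosts.time
```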

/etc/yp - NIS configuration directory

/etc/yp/Makefile - Makefile used to create NIS files.

/etc/yp/YP_MAP_X_LATE - NIS file map.

/etc/yp/makedbm - Make a Network Information Service (NIS) dbm file.

/etc/yp/map2ascii - NIS database conversion routines.

/etc/yp/map2ascii/netgroup - List of network groups.

/etc/yp/revnetgroup - Reverse the netgroup file.

/etc/yp/ypinit - Initialize NIS databases.

/etc/yp/ypmake - Rebuild NIS databases.

/etc/yp/yppoll - Report what version of an NIS map is at an NIS server host.

/etc/yp/yppush - Force propagation of a changed NIS map.

/etc/yp/ypset - Point ypbind at a particular server.

/etc/yp/ypxfr - Transfer an NIS map from an NIS server to the local machine.

/etc/yp/ypxfr1pdy - Daily transfer of NIS maps.

/etc/yp/ypxfr1phr - Hourly transfer of NIS maps.

/etc/yp/ypxfr2pdy - Twice-daily transfer of NIS maps.

/etc/ypbind - Store NIS server locations.

/etc/yppasswd - Change NIS password.

/etc/yppasswdd - Server for modifying the NIS password file.

/etc/ypserv - NIS server daemon.

/etc/ypxfrd - NIS transfer program.

/usr/bin/domainname - Set or display NIS domain name.

/usr/bin/passmgmt - Password files management.

/usr/bin/ypcat - Print information in an NIS database.

/usr/bin/ypmatch - Print the values of NIS map keys.

/usr/bin/yppassmgmt - Password file management for NIS.

/usr/bin/ypwhich - Returns name of which host is the NIS server or map master.

/etc/yp/ypshad2pwd - Install and update the /etc/passwd.yp passwd file

Table 0.6 Key NIS Files


Configuring MMDF

There are a couple of new features in this version of the Mail package. The first is the Mail Configuration Manager, which is a graphical interface to the mail system. Not only can it do everything the old mkdev mmdf script could, it extends beyond that to configure almost every aspect of mail. Two other managers support the Mail Configuration Manager. The Host Administration Manager is used to configure and administer mail hosts, and the Aliases Administration Manager is used to administer mail aliases.

Also new to OpenServer is MIME support (conformance). MIME stands for Multipurpose Internet Mail Extensions, which is a standard that allows the exchange of messages containing graphical, audio, video, multimedia and other non-text data.

Whether you run the Mail Configuration Manager on OpenServer or mkdev mmdf on ODT, the information that you need to provide will be the same. Because the mkdev mmdf script in ODT allows you to input information only in a particular order, let's run through the script and discuss the information we need to input. As we do, I'll talk about where you would need to input this information in the Mail Configuration Manager.

The first thing you see is a description of what will take place during the configuration and some assumptions that are made. Since we want to do the configuration, we should answer yes (y) to the question about whether we want to continue the configuration. The next question is what our fully qualified host name is. In OpenServer, this is the box labeled "Configuring MMDF to use this host name:". In both ODT and OpenServer you are presented with a default host name. This is based on other configuration files you may have set up. For example, if you have TCP configured, then this would be your fully qualified domain name, such as siemau.siemau.com. Otherwise, the domain defaults to uucp, as in siemau.uucp. The machine name is assigned to the MLNAME variable and the domain name is assigned to the MLDOMAIN variable, unless you have decided to "hide" the machine name (more about that in a moment).

Next, you are asked if you want mail to root redirected to a "real" user. In OpenServer, this is done by clicking on the button labeled "Redirection...". Here you can redirect mail for any system user and not just the three you can in ODT (root, mmdf, postmaster). By filling in the blank with the name of the account, or using the list that's provided for you, you select the account whose mail you want redirected. Pressing the "Select User(s).." button allows you to choose which user(s) will get the mail instead. Click on OK when you are finished setting up the aliases.

In the mkdev mmdf script, you are then asked questions about which channels you are going to use, based on what the script sees as the configuration on your system. If you answer yes to configuring UUCP, you will be asked to supply the host name of the site(s) you want to connect to. You must first have configured these sites in UUCP; at the very least, there must be entries for these sites in the Systems file, as that's where the mkdev script looks. You are also asked if the host name for the machine should be the same for UUCP. Whatever UUCP name you define will be assigned to the UUname variable.

In the Configuration Manager, you have check boxes that allow you to select which networks to configure. It isn't until you click on OK that you are prompted to select the names of the sites you want to contact. When you want to select a site, click on the site name and then the Add button. You will then be prompted for the fully qualified name. In ODT, you do all of this by answering the script's questions. In OpenServer, you can choose to configure a network without selecting any hosts. Note that what is being configured here are channel programs. To configure individual hosts, use the Host Manager.

In both cases, the system notices whether you are running a name server or not. If you are, MMDF will be automatically configured to use it. Therefore, there will be no need to configure your smtp hosts by hand.

Next, you are asked about configuring mail for a smart host. This is where you configure the baduser and badhosts channels. If you answer yes to either question, you are prompted for the name of that host. Keep in mind that you must be able to reach that host; the mkdev script checks the various channels to make sure. In the Mail Configuration Manager, this is accomplished by selecting the "Forwarding..." button. You are given the option of either returning the mail to the sender or forwarding it to the "smart host." If you select Forward, then you must input the name of a smart host. You can also select the host from the list of available hosts. Note: as of this writing, it appears that the only hosts you are presented with are those available via the smtp channel, if it is configured.

At this point, the mkdev script is done. However, there are a couple more options you can configure in the Configuration Manager. For example, there is a button marked "Mailboxes...". This determines whether the users' mailboxes should be in /usr/spool/mail or in their home directories. In ODT, this would be done by changing the MMBXNAME variable. Although you can edit the file by hand to whatever you like, the Configuration Manager only allows you to choose between the /usr/spool/mail directory (the default) and the user's home directory. This is done in the mmdftailor file by first setting the delivery directory (MDLVRDIR). If null (""), the user's home directory is used. Next, define the name of the mailbox (MMBXNAME). This is the name of the file that your mail will be stored in. This defaults to ".mailbox". If you want, you can also set the protection (permissions) on the mailbox with the MMBXPROT variable. For example:


MMBXPROT 0600

which is the default, means that only the user has read/write permission on their mailbox.
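As a quick illustration of what a 0600 mode means on the file system (this demo uses a throwaway file, not a real mailbox):

```shell
# Demonstrate what mode 0600 looks like: read/write for the owner only.
# This uses a scratch file; it is not MMDF code.
touch /tmp/mailbox.demo
chmod 0600 /tmp/mailbox.demo
ls -l /tmp/mailbox.demo | cut -c1-10    # prints: -rw-------
```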

Here, you can also choose to hide the machine name behind the domain. This option is not as obvious, as it is labeled "Select format for mail users address". You are given two choices: one that includes the machine name and one that does not. If you choose to include the machine name (the default), the beginning of mmdftailor looks like this:

MLNAME "siemau"

MLDOMAIN "siemau.com"

Here the machine name (MLNAME) is siemau and the domain (MLDOMAIN) is siemau.com. If we decide we want to hide the name, mmdftailor would look like this:

MLNAME "siemau"

MLDOMAIN "com"

MLOCMACHINE "siemau"

Here the machine name is still siemau, but the domain is just com. The reason is that the visible host name is simply the concatenation of the machine name onto the domain. In the second example, siemau is concatenated onto com to give us the "machine name" siemau.com, so the real machine name disappears behind the domain. In order to keep track of what the local machine's name really is, we set the MLOCMACHINE variable to the local machine name. Potentially, you could change these to create any domain you wanted.
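The concatenation itself can be sketched in a couple of lines of shell (the variable names mirror the mmdftailor keywords; this is only an illustration of the string handling, not MMDF code):

```shell
# How the visible host name is formed: MLNAME "." MLDOMAIN.
MLNAME="siemau"
MLDOMAIN="com"
echo "${MLNAME}.${MLDOMAIN}"    # prints: siemau.com
```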

Other Managers

As with many of the other aspects of your system, there is a Manager to help you configure MMDF. In ODT, this was the mkdev mmdf script. Although the script still exists in name on OpenServer, the appearance is significantly different and its functionality has been enhanced. Rather than walking you through each of these tools separately, I figured we would talk about them together. Using the information on the structure of MMDF from the first part of the book, we'll talk about what each part of these configuration tools means and what changes take place in the configuration files. Afterwards, I hope you will be able to not only run these utilities yourself, but also edit the files to make the changes without needing the tools at all.

New to OpenServer is the introduction of several Managers to aid in your mail configuration. These include managers to configure hosts, aliases, channels, domains, and tables. What is missing here is the Kitchen Sink Manager, which is scheduled to be included in the first maintenance supplement.

In order to understand the questions each of these managers asks, you need to understand the MMDF concepts that we talked about in the first part of the book. Because of that, I will assume that you have already read that section or are otherwise familiar with MMDF.

You can use the other configuration managers associated with MMDF to configure things that in ODT you needed to do by editing the mmdftailor file. One such manager is the Channel Manager, which allows you to change channel-specific parameters. For example, you can select which type of addressing you want on outgoing messages by clicking the "Addressing..." option button. Should you want addresses in the format @A:B@C, you would select RFC 822-style (the default). If you want addresses in the format B%C@A, you would select RFC 733-style. If the addresses are not to be reformatted, select none. In mmdftailor, this is done by editing the ap= entries on the MCHN line; these can be ap=822, ap=733, or ap=same (for no reformatting). This is necessary when contacting machines that can only handle one style of addressing or the other.

For example, setting smtp to RFC 733 style address, the line might look like this:

MCHN show="SMTP Delivery", name=smtp, que=smtp, tbl=smtpchn, pgm=smtp, ap=733

We also have several delivery options that we can select. For example, we can set the number of minutes to keep retrying a connection to a host before the system considers the connection to have failed. This is set by entering the value in the time-to-live field in the delivery options, or by setting ttl= in the MCHN entry in mmdftailor.

There are several delivery modes that you can select. These are defined in mmdftailor with the mod variable. Regular (mod=reg) means that the mail is queued, but will not be sent until deliver runs. Background-Only (mod=back) means that this channel can only be serviced by a background deliver process. Passive (mod=psv) means that other sites must pick up mail from this machine. Immediate (mod=imm) means deliver is invoked immediately, without waiting for the daemon's next run.
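For example, a hedged sketch of an MCHN line with the delivery mode set (the layout follows the smtp example above; the table and program names here are illustrative, not values from a real installation):

```
MCHN show="UUCP Delivery", name=uucp, que=uucp, tbl=uucpchn, pgm=uucp, mod=imm
```

With mod=imm, mail submitted to this channel is handed off right away rather than waiting for the next queue sweep.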

The smtp configuration string defines the smtp name and character set to be used. By default on my machine, this is:


The hostname is what you will see in the "Received:" lines of your mail messages. Because this is required by some smtp implementations, I recommend leaving it alone. The 7-bit character set is the standard as defined in RFC 821. The problem is with international mail, which often needs the 8-bit character set to gain access to characters not available with 7 bits. This shows up in the MCHN entry as:


You can also define logging on a channel-by-channel basis. This is done by selecting the "Log File" option button. Here you choose the path of the log file, or leave the default of /usr/mmdf/log/chan.log. You also get a choice of logging level. This, too, shows up in the MCHN entry, for example as:

log=/tmp/mmdflog, level=FTR

Interestingly enough, you can also add a new channel here if you want. When you select the Add item in the Channel menu, you can define your own channel. You are given the choice of which channel program and which tables to use, as well as the other configuration options available to the default channels. The channels available are simply the programs residing in /usr/mmdf/chans; copying a program into this directory makes it appear in the list of available channel programs. Or, if you want, you can create a new channel that uses an existing channel program.

The Domain Manager is basically a front end for adding entries to the .dom files. Here you can add or otherwise modify the domain files. You need to be careful, however: when you add or modify a domain, the pull-down list of available tables also includes the alias files. Be careful not to select one of those.

The Deliver Program

By default, the deliver daemon sweeps the queues every 10 minutes. To change the interval between deliver runs, start deliver with a different sleep time; see the deliver(ADM) man-page for the appropriate option.

NOTE: When you start deliver with only the -b option, one deliver process manages all configured channels. The deliver daemon periodically checks each channel's queue for mail to deliver. This is known as "sweeping the queues."

When you have a single deliver(ADM) program managing a number of channels, deliver goes through the channels individually and tries to deliver all the messages in a channel's queue before going on to the next channel. You can instead configure your system to start multiple deliver processes, each servicing a single channel. In this case, the deliver daemons work in parallel. This is a desirable configuration for a mail gateway machine because it increases the overall mail bandwidth of the machine.

Mail Aliases

Aliases have changed a fair bit in OpenServer. The whole aliasing scheme has changed and no longer consists of the three alias files alias.ali, alias.list, and alias.user. Instead, aliases are grouped according to how they are to be treated and what level of "trust" they have. For example, by default there is only one alias file: alias.n. Like the alias files in ODT, it is referenced by an MTBL entry in mmdftailor:

MTBL show="nobypass aliases", name=alias-n, file=alias.n

The alias.n file contains aliases for all of the system users, such as mmdf, sys, and uucp. Each one is aliased to the user _mailadmin, which is in turn aliased to root (at least by default). One thing to note, both in the show entry and in the alias.n file itself, is that these are "nobypass aliases." This means that you cannot bypass the alias mechanism for them. For example, if we look at the first few entries in the alias.n file, we see:

adm: _mailadmin

mmdf: _mailadmin

sys: _mailadmin

uucp: _mailadmin

_mailadmin: root
If I sent a mail message to adm, the alias would kick in and the message would be sent to _mailadmin, then the next alias would kick in and it would go to root. Normally, prefixing an address with a tilde (~) bypasses the aliasing mechanism, so mailing to ~adm would send the message directly to the user adm. Since no one ever logs in as adm, sending messages directly to these users is pointless, which is exactly why they are protected from the bypass. You could also alias root in this fashion to ensure that mail messages always get to a "real" user. The '.n' at the end of the file name indicates that this is a list of no-bypass aliases.
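The expansion chain can be sketched as a toy shell function (this is an illustration of the logic, not MMDF code; the alias table is hard-coded from the example above):

```shell
# Follow an address through the alias chain: adm -> _mailadmin -> root.
resolve() {
  addr="$1"
  while :; do
    case "$addr" in
      adm|mmdf|sys|uucp) addr="_mailadmin" ;;   # system users alias to _mailadmin
      _mailadmin)        addr="root"       ;;   # which in turn aliases to root
      *)                 break             ;;   # no further alias: deliver here
    esac
  done
  echo "$addr"
}
resolve adm    # prints: root
```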

Other characteristics you can apply to aliases include whether or not they are public; that is, whether remote sites can determine who is a member of the alias. If so, it is public, and public alias files have a '.p' in the suffix. Trusted aliases (those that are allowed to redirect to a pipe) have a '.t'. What if you have both public trusted aliases and nobypass public trusted aliases, but they are different lists? Quite simple: each combination gets its own file. For example, it's possible to have the following aliases (among others):

alias.n - nobypass

alias.npt - nobypass public trusted

alias.nt - nobypass trusted

alias.pt - public trusted

alias.t - trusted

In practice, this works the same way as aliases in ODT. When you mail to a user, MMDF first checks to see if the address is local. If so, it then checks to see if the recipient is an alias by looking through these tables. If a match is found, the alias is expanded and the mail is sent to the user(s) defined in the alias. Otherwise, it is sent to the user it was addressed to. (We're assuming here that we aren't using the bypass.)

All of these characteristics can be modified by the Alias Manager. If you want to configure a new alias, select New in the Alias menu. If you want to modify an existing alias, select Modify in the Alias menu or double-click on that alias name in the list. Here you can add new members to the alias, remove them, etc. This is also where you change whether the alias is public or not and whether to allow the address bypass.

If you have a long list of users that you want to make into an alias, you don't have to type them in individually or cut and paste. Instead, you can create a file containing the alias list. This is input in the Name field by using the standard input redirection symbol (<) and the path name of the file. I recommend using the full path here so that there is no doubt which file you mean. For example, if we create a list called managers, place it in /usr/mmdf/table, and create an alias (call it managers), then the "name" we would give it is </usr/mmdf/table/managers. This name then appears in this format (including the less-than symbol) in both the Alias Manager and in the file. The advantage of this is that the file is not read until mail is sent. You can therefore modify it at any time, without even bothering to tell MMDF.
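A sketch of what that looks like on disk (the directory is a scratch path here rather than /usr/mmdf/table, and the member names are made up):

```shell
# Build a file of list members, then show the alias entry that would
# reference it via input redirection. Paths and names are illustrative.
mkdir -p /tmp/mmdf-table
cat > /tmp/mmdf-table/managers <<'EOF'
jimmo
kaleth
EOF
echo 'managers: </tmp/mmdf-table/managers'
```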

Because there is a separate file for each combination of these characteristics, there has to be an MTBL entry for each of them in mmdftailor. If we had all the aliases described above, the MTBL entries would look like this:

MTBL show="nobypass aliases", name=alias-n, file=alias.n

MTBL show="trusted aliases", name=alias-t, file=alias.t

MTBL show="nobypass trusted aliases", name=alias-nt, file=alias.nt

MTBL show="nobypass public trusted aliases", name=alias-npt, file=alias.npt

MTBL show="public trusted aliases", name=alias-pt, file=alias.pt

If you later change your mind, you can go back and change the characteristics so that an alias ends up in another one of these files. If that combination didn't exist before, a new file is created. Keep in mind that these files are only ever created, never removed. If you change the characteristics of an alias so that there are no longer any aliases in a particular file, the file remains: it still contains its couple of lines of header, but no alias entries. In addition, the MTBL entries in mmdftailor also remain. Therefore, if you change alias characteristics, it's a good idea to check the alias files and mmdftailor to clean things up.
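A hypothetical helper for that cleanup check: it flags alias files that contain nothing but comment headers. The scratch directory and the assumption that header lines start with '#' are both illustrative:

```shell
# Flag alias.* files with no actual alias entries (comment lines only).
mkdir -p /tmp/alias-demo && cd /tmp/alias-demo
printf '# header line 1\n# header line 2\n' > alias.pt   # header only
printf '# header line 1\nroot: jimmo\n'     > alias.t    # has an entry
for f in alias.*; do
  grep -q '^[^#]' "$f" || echo "$f has no alias entries"
done
```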

Mailing Lists

A mailing list is a special kind of alias that is administered by a user, the mailing-list or alias "owner." This is the person responsible for adding or removing users. If mail sent to the mailing list is undeliverable, it is returned to the owner and not the originator.

Like other kinds of aliases, mailing lists can be kept in a file. By making that file writable by the owner, he or she can easily change the mailing list. Creating a mailing list is the exact same process as for other types of aliases, and you can also change the characteristics using the Alias Manager. One thing I would like to point out is that despite the new interface, the entries in the alias files for mailing lists are exactly the same as in ODT, with the exception of being in a different file:


alias.pt:list42:list42-outbound@list-processor

alias.pt:list42-request:jimmo

Problem Solving

One of my strongest (not fondest) memories of working in SCO Support is MMDF. For the newbie system administrator, MMDF is a valuable package since, first, it's free and, second, it's fairly easy to configure. One of the problems is that since it is easy to change things, it is also easy to mess them up. This becomes a serious problem when you have a customer who cannot leave well enough alone. I dreaded these customers. Since they had a support contract, we were obligated to help them. Regrettably.

One customer in particular made my life miserable, at least as far as supporting MMDF was concerned. For simplicity's sake, let's call him Mike. Whether that's his real name or not, only the two of us know for sure. (Plus a couple of other people in support who knew him as I did.) Mike was the system administrator for a small company. The problem was that the company was really too small to need its own administrator. The result was that Mike didn't have much to do. He personified the statement that idle hands are the devil's workshop. Boy, did he give me a devil of a time.

Mike liked to play. Since he had root access to the system, there were a lot of places he could play that others couldn't. He spent a lot of his time reading the manual or magazines, and when he found something interesting he would say, "Boy! I'd like to try that." (At least, that's what I imagine him doing.) The problem was that Mike was not an experienced UNIX administrator. I don't know what kind of background he had, but it wasn't UNIX. (He sounded young enough that this might have been his first job, so perhaps he didn't have any background at all.)

One of those things that he'd go "Boy!" about was MMDF. When he made a change that he couldn't correct (which happened literally every other week), he'd end up calling SCO Support. Being one of the few people who supported MMDF at the time, I tended to get a lot of his calls. The first couple of times were humorous. After the third time I told him to back up his MMDF configuration files before he started and he didn't, I started to get annoyed.

The real problem was not that he tried to do things and messed them up. Nor was it that, despite repeated instructions to do so, he never made backups first. No, the real problem was that Mike just didn't understand. He was the kind of guy who would flip through the manual, find something interesting, and then take the example and try to manipulate it to his needs without really understanding what he was doing. As a result, things got messed up and I had to fix them. It is with that that I dedicate this section to all the Mikes in the world.

If you discover that mail isn't getting delivered, there are several very useful tools. The first one is /usr/mmdf/bin/checkque which, as its name implies, checks the MMDF mail queues. This can tell you how many messages are waiting, how long they have been waiting, and in which queues. For each queue there will be several lines of output, so it is a good idea to pipe it through more. When I run it on my machine, I get:


Sun Sep 10 09:48: 3 queued msgs / 512 byte queue directory
                  3 Kbytes in msg dir

0 msg  0 Kb (local ) local : Local delivery
       deliver start   : Sun Sep  3 09:47
       deliver message : Sun Sep  3 09:46
       deliver end     : Sun Sep  3 09:46 / 168 hours

0 msgs 0 Kb (list  ) list : Mailing list processor
       deliver start   : Sun Sep  3 09:46
       deliver message : Sun Sep  3 09:46
       deliver end     : Sun Sep  3 09:46 / 168 hours

3 msgs 1 Kb (smtp  ) smtp : SMTP Delivery
       deliver start   : Sun Sep  3 09:47
       No deliver message
       *** OVERDUE **  deliver end     : Sun Sep  3 09:47 / 168 hours
       *** WAITING **  First message   : Sun Sep  3 09:45

0 msgs 0 Kb (uucp  ) uucp : UUCP Delivery
       deliver start   : Sun Sep  3 09:34
       deliver message : Sun Sep  3 09:34
       deliver end     : Sun Sep  3 09:34 / 168 hours

Table 0.7

Hmmm. In my smtp queue I have mail waiting. Looking at how long the messages have been waiting (about a week), I would say that something is wrong here. The first thing I want to check is my mmdftailor file. Unless I have sites that I only contact once a week, 168 hours is much too long to go without at least a warning that the message could not be delivered. I would check MWARNTIME and MFAILTIME to see how many hours they are set to.
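When checkque output gets long, you can scan it mechanically for the trouble markers. This grep is just an illustration, run here against a saved copy of the smtp section of the report:

```shell
# Save a snippet of checkque output, then pull out the problem markers.
cat > /tmp/checkque.out <<'EOF'
3 msgs 1 Kb (smtp ) smtp : SMTP Delivery
    deliver start : Sun Sep  3 09:47
    No deliver message
    *** OVERDUE **  deliver end   : Sun Sep  3 09:47 / 168 hours
    *** WAITING **  First message : Sun Sep  3 09:45
EOF
grep -E 'OVERDUE|WAITING' /tmp/checkque.out    # prints the two flagged lines
```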

At this point, I need to figure out where the messages were going. Fortunately, I can figure that out by looking in the MMDF spool directory, /usr/spool/mmdf/lock/home. In this case, I have three messages using the smtp channel, so let's look in q.smtp. Doing a long listing, I see:

total 6

-rw-rw-rw- 2 root sys 81 Sep 3 09:45 msg.aa01203

-rw-rw-rw- 2 root sys 81 Sep 3 09:46 msg.aa01226

-rw-rw-rw- 2 root sys 81 Sep 3 09:47 msg.aa01244
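The 2 in the second column is the link count: each queue entry exists in two places (the q.* directory and the addr directory, as noted in the file list at the end of this chapter), hard-linked to the same file. A quick sketch of how a hard link produces that count, using scratch paths rather than the real spool:

```shell
# Simulate the two-link layout with a scratch file and a hard link.
mkdir -p /tmp/spool/q.smtp /tmp/spool/addr
echo demo > /tmp/spool/q.smtp/msg.aa01203
ln /tmp/spool/q.smtp/msg.aa01203 /tmp/spool/addr/msg.aa01203
ls -l /tmp/spool/q.smtp/msg.aa01203 | awk '{print $2}'   # prints: 2
```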

So, here are the three files. Sure enough, the date is a week ago (assuming today is September 10th). If I look in these files, I can figure out who sent each message and where it is going:



- m smtp "scoburg.siemau.com" "root@scoburg.siemau.com"

Skipping past the first line (which is a time stamp), I see that the sender was root on the local machine. The third line tells me that it is using the smtp channel to send to the machine scoburg.siemau.com, to the user root@scoburg.siemau.com. This is a fair bit of information. I can then use another tool, /usr/mmdf/bin/checkaddr, to see if this is a valid address. If I run /usr/mmdf/bin/checkaddr root@scoburg.siemau.com, I get:

root@scoburg.siemau.com: OK

Unfortunately, this does not tell me much. Note that there is a common misconception here: users see the "OK" and assume the address is right. Without any options, checkaddr simply determines whether it can parse the address correctly and knows where to send it. So instead, I need to use the -w option to get checkaddr to tell me how it parses the address. This time I get:

root@scoburg.siemau.com: root@scoburg.siemau.com: queueing for smtp: via 'scoburg.siemau.com': 'root@scoburg.siemau.com' OK

This is as I expect. Since the message is being queued correctly, that is, for scoburg.siemau.com via the smtp channel, the problem does not appear to be in the address. One possible problem could be that the IP address is incorrect. If I have never sent mail to this site before, this is a common problem. The quickest way to check is to look in the smtp.chn file. If it's wrong, correct it, then run dbmbuild.

What if the IP address is correct, then what? Well, we know the address we are sending to is correct. We know that the IP address matches the machine we are sending to. What else could be the problem? Well, if you can't reach that IP address with other tools, then there is no way that MMDF can. Therefore, the first thing to do at this point is to check the physical connection. This can be done with ping, telnet, or any other such tool. Keep in mind that MMDF has already done the translation from name to IP address; therefore, make sure you are checking the connection to the IP address itself.

If you look in the q.smtp directory and it is empty, then the most logical conclusion is that the message was sent and it's the other guy's fault, right? No, not always. All this means is that MMDF did not queue the message for the smtp channel. So where did it go? Let's check the address with checkaddr. This time we get:

root@scoberg.siemau.com: root@scoberg.siemau.com: queueing for badhosts: via 'scoburg.siemau.com': 'root@scoberg.siemau.com' OK

Here's the problem. The machine is actually called scoburg and not scoberg. (This is because Coburg is a castle and not a mountain.) We simply made a typo in our mail address. If we run checkaddr again, this time on root@scoburg.siemau.com, we get:

root@scoburg.siemau.com: root@scoburg.siemau.com: queueing for smtp: via 'scoburg.siemau.com': 'root@scoburg.siemau.com' OK

Now we're cooking.

If you want an easier way to figure out the addresses, you need to take one more step. First, look at the appropriate queue file; in this case, let's take q.smtp/msg.aa01244. We can see the full text of the mail message by looking in the msg directory. The file there has the same name, so it would be msg/msg.aa01244. If we look at that, we get:

From: All around good guy <jimmo@siemau.siemau.com>

X-Mailer: SCO OpenServer Mail Release 5.0

To: kaleth@scoburg.siemau.com

Subject: paq

Date: Sun, 3 Sep 95 9:47:01 METDST

Message-ID: <9509030947.aa01244@siemau.siemau.com>

Qapla'. pltlh. ghltlh vlghltlhta'bogh DalaD'a'

batlh Daqawlu'taH


Here we see the complete message to my Klingon friend at scoburg.siemau.com. I can now use checkaddr -w to see where this message is supposed to go. (Note: just because you can read other people's mail doesn't mean you should. This is an administration tool to help users with mail problems, not to stick your nose in where it doesn't belong. Besides, do you really think it's safe to be reading a Klingon's mail?)

This same technique can be used on all the other queues to figure out where the messages are supposed to go and how they were addressed. One thing I would like to point out is that, if properly configured, the UUCP channel will immediately hand off the message to UUCP. That is, it creates a UUCP job and submits it. The message then leaves MMDF and becomes the responsibility of UUCP. Therefore, just because a message is out of the UUCP queue does not mean it has reached the other site. You need to check the UUCP spool directory as well.

Another thing to keep in mind is that configuring the UUCP channel to deliver messages at long intervals is not a good idea. For example, you could configure MMDF so that the UUCP channel is only serviced once a day, meaning that messages are handed off to UUCP once a day. If UUCP then contacts the remote site only once a day, it might be two days before a message gets delivered: it waits up to a day in the MMDF spool directory and up to another day in the UUCP spool directory. Therefore, you should have deliver hand messages to UUCP immediately and let UUCP worry about sending them further.

Another useful tool is deliver itself. Let's assume that we just ran checkque and got the above output. If we run deliver with the -w option to watch the delivery (/usr/mmdf/bin/deliver -w), we would get:

[ Checking for Local delivery (local) mail ]

sorting the queue.

[ Checking for Mailing list processor (list) mail ]

sorting the queue.

[ Checking for SMTP Delivery (smtp) mail ]

sorting the queue.

3 messages, 3 to be processed

Message 'msg.aa01203', from 'root@siemau.siemau.com'

Queued: 7 Days, 1 Hours

[ Accessing smtp (SMTP Delivery)]

Channel: smtp

root@scoburg.siemau.com via scoburg.siemau.com: trying...

connecting to []... can't...

destination not available, queuing for retry

Problem address ending: no valid addresses

End of processing for msg.aa01203

Message 'msg.aa01226', from 'root@siemau.siemau.com'

Queued: 7 Days, 1 Hours

root@scoburg.siemau.com via scoburg.siemau.com: dead host

End of processing for msg.aa01226

Message 'msg.aa01244', from 'root@siemau.siemau.com'

Queued: 7 Days, 1 Hours

root@scoburg.siemau.com via scoburg.siemau.com: dead host

End of processing for msg.aa01244

[ Checking for UUCP Delivery (uucp) mail ]

sorting the queue.

[ Checking for Smart-host Routing for hosts (badhosts) mail ]

sorting the queue.

/usr/mmdf/bin/deliver normal end

For the first two queues, local and the list processor, there were no messages. MMDF said it was sorting the queue, but in reality there was nothing to sort. When we get to the smtp queue, we see that when MMDF tried to send the first message, it could not connect to the IP address specified and therefore couldn't send the message. We can now look into why we cannot contact this IP address.

After sitting in the queue for a week, whatever it was I wanted to say is probably out of date. If I wanted to clean up the mail queues, I could remove all the files by hand, but fortunately I don't have to: MMDF has a utility to do so, /usr/mmdf/bin/cleanque. Normally this is run once a day out of MMDF's crontab; however, you can run it at any time to force MMDF to clean up the queues.

Something else you can use to see what's happening is the MMDF log files. They are kept in /usr/mmdf/log. The two to look at are chan.log and msg.log, which log, respectively, channel activity and message activity. How much is written to each file is determined by the values you set for the MCHANLOG and MMSGLOG variables: MCHANLOG determines how much channel logging is done and MMSGLOG determines how much message logging is done (or couldn't you guess?).

There are eight levels of logging, from reporting only fatal errors (FAT) to logging everything with a full trace (FTR). For most problems, setting both logs to FAT will provide the necessary information. Without knowing the complete syntax of all the components of the logs, you can still glean a fair amount from them, as errors are usually fairly obvious. On the other hand, it may be necessary to increase logging. If it comes to that, my suggestion is to turn it up all the way to FTR. You will get a lot of information, but you are sure not to miss anything.

I must warn you, however: if you ever turn on FTR logging, run a few tests, copy the log files somewhere safe, and then immediately set logging back to FAT. FTR logs everything. On a relatively quiet system, I have seen the log files grow to 30MB in just a few days! It is therefore imperative to watch these files and set logging only as high as necessary. You can also set the size of the log files through mmdftailor. An example would be:

MCHANLOG level=FTR, size=30

This limits the size of the chan.log file to 30 units of 25 blocks each (a block being 512 bytes), that is, 384,000 bytes. To find out more about the different logging levels, check out the mmdftailor(F) and logs(F) man-pages.
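The arithmetic, for the skeptical: 30 units of 25 blocks at 512 bytes per block.

```shell
# 30 units x 25 blocks/unit x 512 bytes/block = 384000 bytes.
echo $((30 * 25 * 512))    # prints: 384000
```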

Whenever you make changes to any of the table files or to mmdftailor, you need to run dbmbuild before the changes take effect. I recommend you use the -n option to dbmbuild. This takes a little longer, but it recreates the database rather than updating the old one.

$HOME/.mailrc - User specific mail configuration file.

/usr/bin/checkmail - Check for MMDF mail that has been submitted but not delivered.

/usr/bin/mail - User interface to mail system.

/usr/bin/rcp - Remote file copy.

/usr/bin/rcvalert - Notify user when mail is received.

/usr/bin/rcvfile - Put message into specified file.

/usr/bin/rcvprint - Print message automatically.

/usr/bin/rcvtrip - Notify mail sender that recipient is away.

/usr/bin/resend - Redistribute mail using the Resent- notation.

/usr/bin/rmail - Submit mail received via UUCP.

/usr/lib/mail/mailrc - Default mail configuration file.

/usr/bin/mailx - User interface to mail system.

/usr/lib/sendmail - Send mail over the Internet.

/usr/mmdf - Primary MMDF directory.

/usr/mmdf/bin/checkaddr - Check MMDF addresses.

/usr/mmdf/bin/checkque - Generate MMDF queue status report.

/usr/mmdf/bin/checkup - Check status of MMDF system.

/usr/mmdf/bin/cleanlck - Clean MMDF locks.

/usr/mmdf/bin/cleanque - Send warnings and return expired mail.

/usr/mmdf/bin/cnvtmbox - Convert XENIX-style mailboxes to MMDF format.

/usr/mmdf/bin/deliver - Manage MMDF mail delivery.

/usr/mmdf/bin/host - Look up host names using name server.

/usr/mmdf/bin/submit - Submit mail to the appropriate MMDF queue.

/usr/mmdf/chans - MMDF channel programs.

/usr/mmdf/log - MMDF log files.

/usr/mmdf/mmdftailor - MMDF configuration file.

/usr/mmdf/table - MMDF tables.

/usr/mmdf/table/dbmbuild - Build the MMDF database.

/usr/mmdf/table/dbmedit - Edit the MMDF database file.

/usr/mmdf/table/dbmlist - Report the contents of the MMDF database.

/usr/spool/mmdf/lock/home - MMDF spool directory.

/usr/spool/mmdf/lock/home/addr - Address information linked to files in q.*

/usr/spool/mmdf/lock/home/msg - Mail messages

/usr/spool/mmdf/lock/home/q.* - Address information linked to files in addr.
