Jim Mohr's SCO Companion
Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.
Be sure to visit Jim's great Linux Tutorial web site at http://www.linux-tutorial.info/
Talking to Other Machines
Long ago (at least in terms of the history of electronic data
processing) having two computers at the same site was something you
read about in science fiction novels. As systems became more common,
the time did come when a company or university would have two
computers. The need then arose that data be exchanged between the two
machines. This was the beginning of SNEAKER-Net (Slow,
Non-Electrical Activity, Keeping Everyone Running), which was
developed in the 1950s. With SNEAKER-Net, the technician would copy
data onto a tape or other media and using his sneakers, would run the
tape over to the other machine to be loaded. In many organizations,
SNEAKER-Net is still employed today as this is often the only type of
network they think they can afford.
In 1976, researchers at AT&T Bell Laboratories came to the
rescue. This was the development of a serial line protocol to
exchange data between UNIX machines, which came to be known as UUCP,
for Unix-to-Unix Copy. Over the years there were several changes,
upgrades, revisions, etc. In 1983, AT&T released a new version
that came to be known as HoneyDanBer UUCP, as it had been
developed by Peter Honeyman, David A. Nowitz,
and Brian E. Redman. This is the version
currently employed on SCO systems.
Although UUCP was a good thing, it was limited by the speed of the
serial line connecting the two computers. Since the system could only
be as fast as its slowest component, there needed to be a way
to speed up that slowest component. Well, serial line speeds
increased, but that still was not enough. In the 1970s, Xerox came
out with Ethernet, which made high speed communication between
computers possible. It was now possible for users to access remote
computers and expect response times comparable to being logged in
locally, rather than experiencing delays as was common with the
serial line communication of the day. (We'll get into more details
on Ethernet later.)
Today, both are still widespread. Although prices have dropped to
the point that Ethernet networks are commonplace (I even have one in
my house), UUCP is still used regularly when distances prevent other
types of connection, or when the connection is going to be quick or
short term and the administrator doesn't want the added hassles
of first installing the Ethernet cards.
We are going to talk about both in this chapter. First, we'll
talk about UUCP as this is a set of programs that everyone has on
their SCO system, even if they don't have ODT or OpenServer.
Later, we'll talk about networking principles, especially
TCP/IP and the treasure chest (or Pandora's box) of tools and
programs that go with it.
Perhaps the oldest form of UNIX computer networking is UUCP. UUCP
stands for UNIX-to-UNIX-copy and stems from the time when even the
ability to copy files directly between computers was a big deal.
Today, UUCP is still widely used in systems where more complex
networking is either not necessary or not possible. UUCP also
provides the ability to remotely execute jobs as well as
interactively log in to remote sites.
UUCP and most of its associated programs are batch oriented. This
means that transfers and remote execute requests are "spooled"
to be carried out later. The user submits a request and waits for the
request to be carried out. There is often little control over when
the transfer actually takes place. This is essentially the same way
the print spooler works. Jobs are done when the system gets around to
it. In most cases, communications between remote systems is carried
out according to specific schedules, usually when the phone rates are
lowest.
Because of its complex nature and the large number of support
programs, UUCP could also be considered a system service. UUCP is not
only the name of the package or service within SCO, but is also the
name of one of the programs within the package. The uucp
command is perhaps the most commonly used of the commands, with the
interactive login program (cu)
a close second. The remote execution command, uux,
also ranks in the top three.
Built into UUCP are several levels of security. The first level is
normal file permissions that everyone must adhere to, whether local
or remote. The next is the system level security that allows or
denies access to your system or to a remote system. There is also the
ability within UUCP to restrict access to specific directories and
commands. This means that although the permissions on a file or
directory might give you access to it and you have a valid UUCP
account on that system, you still cannot access it. With UUCP you
can not only limit access to certain directories, but also to certain
commands that can be executed. (Remember, UUCP also allows remote
execution of commands.)
The SCO UUCP package is essentially the same between ODT and
OpenServer. The differences really show themselves when you are
trying to configure and administer UUCP, so we will wait until we
talk about configuration to talk about these differences. The SCO
UUCP package provides two programs for copying files between systems:
uucp and uuto.
The uucp command is the
more common of the two and provides more flexibility. On the other
hand, uuto is a little
easier to use. Personally, I have never used uuto
other than to test a user's command to see if they were doing it
correctly.
For the purpose of this discussion, we're going to assume that there
is a valid uucp connection between the two machines. In the second
half of the book, when we talk about configuring and
troubleshooting uucp, we'll discuss problems and what to do about them. We'll
also assume that we have three machines, all of which use UUCP. The
local machine is scoburg. It connects via modem to the
machine siemau. Siemau is connected via a direct serial line to the
third machine, vesta.
The reason we are including this third machine is that it is possible to
transfer files to and from machines via UUCP to which there is no direct
connection. So, if you have a connection to one machine (in our case
scoburg can talk to siemau) and the second machine can talk to the
third (siemau can talk to vesta), you can use the second machine as a bridge
between the other two.
Before we start, we need a crash course in the basics. There are
three primary files, all of which are located in /usr/lib/uucp.
First, the Systems file contains a list of which sites (machines)
are known to you. The device that UUCP uses to connect to the remote
machine is defined in the Devices file. What each machine can do in
terms of copying or remotely executing commands is defined in the
local Permissions file. For example, what siemau is authorized to do
on scoburg is defined in scoburg's Permissions file regardless
of who called whom.
Let's assume that we were on scoburg and we wanted to transfer a file
to siemau. We start up UUCP, which looks in the Systems file to see if
it can find siemau. When it does, it then checks to see if this is
a proper time to call (also defined in Systems). It then looks
through the Devices file looking for an appropriate device (serial
port). Once it finds the device, UUCP can then begin calling the
remote site.
In order for scoburg to be able to log into siemau, an account has to
be established. For UUCP to log in, it goes through the same
procedures as "real" users: it needs both a valid logname
(an account) and a password. When UUCP on scoburg gives the correct
logname and password, UUCP is started up on siemau and the two UUCPs
begin to talk to each other. When they start talking, UUCP on scoburg
tells UUCP on siemau that it wants to send a file and where it wants
to put that file. UUCP on siemau checks its Permissions file to see
if scoburg has authorization to send files in the first place and,
if so, whether UUCP on siemau can send the file to the location specified. If it
can, then the file is copied. (Actually, the work is done by the
uucico program, which we'll get to later.)
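The sequence just described (look up the site in Systems, check the calling time, find a device in Devices, then check Permissions) can be sketched as a toy model in Python. The function name and the table layouts are invented for illustration; the real files have a much richer format, and the Permissions check actually happens on the receiving machine.

```python
def attempt_transfer(site, hour, systems, devices, permissions):
    """Toy model of the UUCP connection sequence described above.

    systems:     {site: (start_hour, end_hour)} -- allowed calling window
    devices:     list of free device names
    permissions: set of sites allowed to receive files

    All three tables are stand-ins for the real Systems, Devices,
    and Permissions files in /usr/lib/uucp.
    """
    if site not in systems:
        return "unknown system"
    start, end = systems[site]
    if not (start <= hour < end):
        return "wrong time to call"
    if not devices:
        return "no device available"
    if site not in permissions:
        return "permission denied"
    return "transfer ok"
```

Each check mirrors one of the files: a site missing from Systems fails immediately, an out-of-window call waits, and even a successful login can still be refused by Permissions.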
By default, ODT 3.0 and OpenServer provide a UUCP login account:
nuucp. The default Permissions file also allows every uucp login to
be able to write to the /usr/spool/uucppublic
directory. This directory is available on many different versions of
UNIX as a common, generally accessible directory for transferring
files. This allows you to easily transfer files with new sites,
without having to establish special accounts for them. On the other
hand, if every site shares the same UUCP account, they all have
access to the same files. Therefore, if security is a concern, then
you should consider individual accounts.
So let's get to it. Just keep in mind something I have been saying
all along: this book is not a tutorial. I want to show you
relationships and how things fit together. In order to do that I need
to establish some base level of understanding, which is what I am going
to do with UUCP. We're going to quickly go through some of the basic UUCP
commands to get a feel for how they work. Then we are going to go
into the details of what happens behind the scenes.
The primary (read: "most used") command in UUCP is uucp.
Surprised? This command allows you to copy files from one UNIX
machine to another. UUCP has spread itself so widely throughout the
computer world that many vendors supply UUCP for non-UNIX machines,
such as what is provided in the MKS Toolkit from Mortice-Kern Systems.
The syntax of the uucp command is essentially the same as the "normal"
UNIX copy command: command source destination. The only difference is
that normally you want to specify a machine name in the source or
destination name (or both). However, this is not a requirement. If
you want, you could leave the machine name out completely and use
UUCP to copy files on your local machine.
Let's say I wanted to copy a letter I have from my home directory on
scoburg and put in the uucppublic
directory on siemau. The syntax would be:
uucp /usr/jimmo/letter siemau!/usr/spool/uucppublic/letter
The first thing to notice is that the machine name (siemau) is
followed by an exclamation mark. This is a flag to UUCP to signify
that this is a machine name and not part of the file name. (If you
are using the C-Shell, you should switch to the ksh.) If you really
don't want to (or can't) then you need to be careful because
(remember the chapter on shell basics?) the exclamation mark means
something to the C-Shell. Therefore you need to 'escape' it. The
syntax for the C-Shell would be:
uucp /usr/jimmo/letter siemau\!/usr/spool/uucppublic/letter
As with other sections, I am not going to give you both csh
and (k)sh syntax. Instead,
I will just say that when you use csh you need to be careful about
the exclamation mark.
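The machine!path notation can be modeled with a tiny parser. This is purely an illustration of the addressing scheme, not anything UUCP itself provides; the helper name is invented.

```python
def split_address(addr):
    """Split a UUCP-style address into (machine, path).

    A missing machine part means the file is local. Multi-hop
    addresses like 'siemau!vesta!/tmp' keep the remaining hops
    in the path portion, to be resolved by the next machine.
    """
    if "!" not in addr:
        return (None, addr)          # purely local file
    machine, path = addr.split("!", 1)
    return (machine, path)
```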
Just like the normal cp command, the tilde (~) also has special
significance for UUCP. By itself, it refers to the
/usr/spool/uucppublic directory. Therefore, the above command could
be written as:
uucp /usr/jimmo/letter siemau!~/letter
Not only does this save time in typing, you don't have to worry (too
much) about misspelling anything. When I don't use the tilde, I
often end up with only one 'p' in the middle of uucppublic
and my UUCP request fails.
You can also use the tilde with a user name so it expands to mean
their home directory. Therefore, I could have written the command as:
uucp ~jimmo/letter siemau!~/letter
This saves me a whole four characters. However, on some systems I have
seen with hundreds of users, the home directories are scattered
across several filesystems. Using this mechanism, you don't have
to know what filesystem a user is on in order to get the file to or
from their home directory.
Keep in mind that although you can specify a pathname with
this short-cut, you do not automatically have access to that
directory, even if it is your own account. The problem is that UUCP
does not know that you are you. To UUCP these are two different
accounts, even if they have the same UID and GID. Also keep in mind
that you as the submitter of this job must have access to the
source file. However, it is UUCP that needs to have access to the
destination.
Another way of specifying the file is with a relative path, or no
path at all, provided I am specifying a file in the current
directory. If I were already in /usr/jimmo, the
above command could look like:
uucp letter siemau!~/letter
This also works for relative paths, such as:
uucp ./letter siemau!~/letter
When the job is submitted to UUCP it will create the
appropriate full path name for that file. Also, if you specify the
home directory for a user on a remote machine, that user need not
exist locally for the expansion to take place. The reason is that the
path name is not expanded locally. Rather, the system sends the path
name you specify to the remote machine. It is there that the
expansion takes place. Because there is no rule as to when and where
the expansion takes place, it would be best not to use relative paths
when specifying files on a remote machine.
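Assuming the rules above, the tilde expansion a receiving machine performs might be sketched like this. The function and the homes table are hypothetical stand-ins for the remote system's password file, written only to make the two forms of tilde concrete.

```python
PUBDIR = "/usr/spool/uucppublic"

def expand_tilde(path, homes):
    """Expand UUCP tilde shorthand the way the receiving machine would.

    ~/...     -> the public directory /usr/spool/uucppublic
    ~user/... -> that user's home directory (looked up remotely)

    'homes' stands in for the remote machine's password file, which
    is why the named user need not exist on the local machine.
    """
    if path == "~" or path.startswith("~/"):
        return PUBDIR + path[1:]
    if path.startswith("~"):
        user, _, rest = path[1:].partition("/")
        home = homes[user]            # an unknown user would be an error
        return home + ("/" + rest if rest else "")
    return path
```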
Just as you can copy files from one directory to another, although
neither is your current directory, you can copy files from one
machine to another, although neither is your local machine. For
example, assume our local machine is scoburg, which knows about both
the source machine, siemau, and the destination machine, vesta. We
can copy files from siemau to vesta with the following command:
uucp siemau!/etc/motd vesta!/tmp
We see from this example that the source machine is remote (remember
that we are assuming that the local machine is scoburg). We are
taking a file from siemau and copying it to vesta. What happens if we
left out the reference to a remote machine when we specified the
destination for the file? We would be taking a file from the remote
machine and copying it to our local machine. That is, we would be
copying from siemau to scoburg.
We can take this one step further and specify multiple machines in
our destination. For example:
uucp siemau!/etc/motd siemau!vesta!/tmp
Since the local machine (scoburg) does not know about vesta,
this is actually the way we would have to run the command. If we
tried the first example on scoburg, uucp would have complained
immediately that it did not know about vesta.
There are a couple of commonly used options to uucp. For example, you
can use the '-j' option to
print out the job-ID. If you later decide that you really don't want
to send the file (or remotely execute the command), you can use this
job-ID to cancel the request.
The '-m' option can be
used to send mail to the requester when the file has been transferred
or the job executed. If you want to notify someone on the remote
machine, you can use the '-n'
option followed by that user's name.
Depending on how your system is configured, the actual transfer of
the file might not take place immediately. As I mentioned before, the
Systems file contains information on what is an appropriate time to
call. Often calls are only permitted during certain hours to either
reduce the traffic on the line or to limit calls to time when the
phone rates are less expensive. It may also be the case that your
system is configured to never call the other system; the transfer
is then dependent on the other system calling you. Therefore, as a user,
if your request does not go through as quickly as you want, talk to
your administrator to find out how UUCP has been configured. If your
are an administrator and the request doesn't go through as quickly as
you expect, then you probably configured something incorrectly.
What happens to requests is that they are "spooled". This
is essentially the same as for the print spooler. There is a special
holding area for spooled uucp jobs. When UUCP decides to send
the job depends on your system configuration, and we'll get
to that a little later.
As with spooled print jobs, there will probably be a time when you
would like to find out the status of your spooled UUCP requests. To
check the status in the print spooler, you use the lpstat
command. To check the status in the UUCP spooler, you use the uustat
command. If I were to run uustat
after submitting a request:
uucp /etc/motd siemau!~/filename
the output of uustat would
look like this:
siemauN522d 05/28-14:45 S siemau jimmo 90 /etc/motd
In this example, there is only one output line. Like lpstat
when several print jobs are queued, uustat
will also show information on multiple UUCP jobs that are queued. The
entries in each line are:
job-ID date/time type_of_job remote_machine requester size_of_file file_name
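Assuming the field order above, a simple transfer line from uustat could be pulled apart like this. This is a sketch for illustration, not part of the UUCP package, and remote-execution entries (shown below) have a different layout that it does not handle.

```python
def parse_uustat_line(line):
    """Parse a simple uustat transfer line into named fields.

    Field order follows the text: job-ID, date/time, type of job,
    remote machine, requester, size of file, file name.
    """
    keys = ("jobid", "when", "type", "machine", "requester", "size", "file")
    return dict(zip(keys, line.split()))
```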
If we take a more complicated example, then the output of uustat gets
more complicated as well. For example, let's say we want to copy a
file from siemau to vesta. Assume there is nothing in the queue and
we issue the following uucp command:
uucp siemau!/etc/motd vesta!/tmp
When we run uustat, we now see...
nothing. Hmmm. What happened? Well, it has to do with the default
behavior of uustat. Just
as the default behavior of lpstat
is to only show us our pending jobs, uustat
only shows us the pending uucp jobs. As a result, we don't see
anything, because we are actually creating a local uux
job that sets up a uucp
job on vesta.
Instead we need to use the '-a'
option to uustat, which
says to show us all queues, not just uucp. This time we get:
S vesta uucp uucp -C siemau!/etc/motd !/tmp
The biggest change is that instead of a size and file name at the end
of the line, we have what appears to be a uucp command. This is
exactly what it is. What we said was that we wanted to copy a file from
siemau to vesta. What happens is that UUCP actually creates a job for
remote execution on vesta which says to go get the file from
siemau. This is why we don't get any output from uustat. It is no
longer considered a uucp request, but rather a uux request. If we
look in /usr/spool/uucp/vesta
we will see both a control file (C.) and a data file (D.).
Just as uucp allows you to
copy to and from different sets of machines, uux
allows you to execute commands across sets of machines. For example,
I want to print a file on a printer attached to vesta, but the file I
want to print is on siemau. I could do this with the command:
uux "vesta!lp siemau!/etc/motd"
This would actually create two UUCP jobs. The first would transfer
the file /etc/motd from
siemau to vesta. The second then prints out the file on vesta. In
principle we can use uux
to execute any command on a remote system. There are two limitations.
First, uux is an SUID
program, so it runs as the user uucp. Therefore, the user uucp
needs to have permissions to run the command. Second, the commands
that can be remotely executed may be limited by the Permissions file
on the remote machine.
Interactive UUCP Connections
There is one interactive program included in the UUCP package: cu
(call UNIX). Its purpose is to access the modem and to dial the
number specified, either through the Systems
file or directly on the command line. Once the remote modem has
answered, cu's work is
over. Granted, cu does
serve as a kind of terminal emulator, as it displays the output from
the remote system on your screen; however, it only responds to your
commands and does nothing on its own. Because cu
is simply echoing the output of the remote system, you don't even
have to be calling another UNIX system!
If the system you called was like UNIX, where you eventually got to a
command prompt, then the commands you input would actually be
executed on the remote system. Built into cu
is also the ability to execute commands on your local system, even
though you are attached to (logged into) the remote system.
The format of cu is:
cu [options] destination
As I mentioned above, this destination can either be a phone number or
the name of a system from the Systems file. If you use a telephone
number, you can use special characters to represent secondary
dial-tones (=) or pauses (-).
Let's assume the phone number for siemau was 555-6501. To use cu
to connect to it, I could do it in two ways:
cu siemau
cu 5556501
You can also tell cu to access a particular device using the -l
option, rather than trying to dial a phone. For example:
cu -l tty2a dir
This would connect me to the modem attached to /dev/tty2a (assuming
there is a modem there).
Built into cu are several commands that are accessed by preceding the
command with a tilde (~). Normally cu passes everything to the remote
machine. However, if the first character on a line is the tilde, the
line is interpreted as a command for cu.
A tilde followed by an exclamation mark (~!) tells cu
that you want to execute a shell on the local machine.
Although you are still connected to the remote machine, everything
you now input is executed on the local machine. To return to the cu
session, simply type exit. You can include a command name with the
~!. This will execute the command locally and immediately return you
to the cu session.
Two other important tilde commands are ~%take
and ~%put. These allow you
to take a file from the remote machine and copy it to a local
file, or put a file onto the remote system from the local
system. The syntax is:
~%command source destination
So, if I wanted to take the remote file file_name and copy it to
new_file on my local machine, the command would be:
~%take file_name new_file
Keep in mind that this really only works for ASCII (text)
files. The reason for this is that the cu command was designed for 7-bit
communication. Cu treats
every 7 bits as a character. However, the bytes that compose a binary
file are 8 bits long, so you end up having everything garbled. However,
there is a nifty trick that you can use with one of the other UUCP
programs: uuencode.
The primary function of uuencode
is to convert a set of 8-bit characters to 7-bit and then back again.
Well, to the uninitiated, the syntax for uuencode is a bit confusing.
The first thing to note is that by default, uuencode
sends the results of its work to stdout. You do give it two names as
arguments, but one is the destination name and not the place that the
output goes. Confused? Well, let's take an example.
Assume I have a program that I want to transfer named newprg.
To uuencode it I might be
tempted to input:
uuencode newprg newprg.uue
I am using the convention of .uue
as an extension to indicate that this is a uuencoded file. Well, if I
were to use this command, I would get screen after screen of characters
scrolling by. As I said, the output of uuencode
is stdout, not the filename we give as an argument. To understand
this, let's look at the first line of the output:
begin 711 newprg.uue
The first part of the line is the word 'begin'. When we finally get
around to decoding the file, this is a flag to tell uudecode
where to start. This allows us to include uuencoded
messages as part of email.
Next we have 711. These are the permissions the resulting file will
have. At the end of the line is newprg.uue.
This is what we input, expecting it to be the output file name. This is
actually the name of the resulting file when we decode the
file. In order to get this into a file so we can transfer it, the
command would actually be:
uuencode newprg newprg > temp.uue
Now we have a file (temp.uue)
that contains the output of the previous command. If we look at the
first line, we see that it is the same as the previous one except
that the resulting file name is now newprg
and not newprg.uue. We now
have a 7-bit file that we can transfer using either ~%put or ~%take.
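To see how the begin line, the mode, and the file name fit together, here is a minimal uuencoder sketch in Python. It mimics the output format described above using the standard binascii module; the function name is invented, and this is not the actual uuencode implementation.

```python
import binascii

def simple_uuencode(data, name, mode=0o644):
    """A bare-bones uuencoder, mimicking what the uuencode command
    produces: a 'begin <mode> <name>' header, the encoded body in
    45-byte chunks, and the closing '`' and 'end' lines. <name> is
    the file that uudecode will create."""
    out = ["begin %o %s" % (mode, name)]
    for i in range(0, len(data), 45):
        chunk = binascii.b2a_uu(data[i:i + 45])
        out.append(chunk.decode("ascii").rstrip("\n"))
    out.append("`")   # a zero-length line marks the end of the data
    out.append("end")
    return "\n".join(out) + "\n"
```

Running it over any binary file yields pure 7-bit text, which is exactly why the result survives a cu transfer or a mail message.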
When we are done, we can terminate the cu session by inputting ~.
(a tilde followed by a period).
The best way to learn about something is to do tech support for it.
That's how I learned about UUCP. One of the first calls I had with
UUCP was, fortunately for me, a very basic call. The user was simply
having trouble communicating with another machine.
We checked all the configuration files on the local machine and
everything made sense. That is, all the files had the right values in
the right place. One of the first things to try is cu,
which provides an interactive login. If cu
works, then you know that the physical connection between the two
sites and the software configuration is correct. That is, the modems
are set up correctly, you are going through the right device and you
have the correct system name and phone number.
We tried to call, but we just couldn't communicate. The biggest
problem was that the user didn't have access to the other machine. He
was relying on the fact that the administrator from the remote side
said that everything was configured correctly. However, we were still
unable to connect.
After about 20 minutes of checking our configuration and trying to
call a site with which we could communicate, we both agreed that it had
to be something on the remote side, despite what the other
administrator said. So, the user called him up on another phone and
had him watch and listen as we tried to communicate with him.
Well, we heard the phone ring on the remote side. It rang, and rang
and rang. Finally, UUCP gave us an error saying it couldn't connect.
So we had the other administrator turn the modem on and off to make
sure that it had not gotten into some "weird" state. We
tried it again. The phone rang and rang and rang.
Fortunately, there was a set of lights on the front of the modem. One
of them was the AA light, for Auto-Answer. The other administrator
failed to notice, or failed to tell us, that this light was off. This
meant that no matter how hard we tried, the modem wouldn't answer. We
were trying to communicate with another site that just wasn't
listening. After setting the modem to Auto-Answer, we could
communicate with no problems.
Oh, the organization with the communications problems, where one side
just wasn't listening? The United Nations.
The Flow of Things
Most UUCP files can be found in one of two directories. The
/usr/lib/uucp directory
contains the UUCP configuration files and many of the support
programs. The /usr/spool/uucp
directory is the working directory for UUCP. There are several
sub-directories here with various functions, which we'll get to in a
moment.
There are three key programs that manage as well as supervise UUCP
communication. The first, uucico,
is perhaps the workhorse of the UUCP package. It is the
responsibility of uucico
to determine which device should be used to make the connection,
start up the modem dialer programs (more on them in a minute), log
into the remote site, and perform the actual transfer of the data.
Figure 0-1 The
flow of UUCP
Part of the configuration process of UUCP is to establish UUCP
accounts. With these accounts, remote sites gain access through the
uucico program (among
others). When uucico calls a remote site, it logs in as a predefined
user. This user should have been established as a UUCP user on the
remote site. If so, that user does not have a login shell as we are
used to; instead, its shell is uucico.
When your uucico logs into
the remote site, the remote uucico
starts up and the two start to talk to each other.
To be able to communicate with each other, the two uucico
programs need to determine whether they both speak the same
language. The language they speak is called a protocol and the
process they go through is called negotiating the protocol.
The most common one used in SCO UNIX systems is the 'g' protocol.
This comes in two flavors: G and g.
The entire time it is attempting to log in and afterwards as it is
processing and carrying out requests, uucico
is maintaining its supervisory role. It is the responsibility of
uucico to ensure that
files and commands are only accessed if the system allows it.
To execute commands on a remote machine, you use the uux
command. It searches for requests for remote execution, but is also
concerned with security and can only execute them if the remote site
allows it.
If the connection cannot be made (we'll get into that later), the job
remains in the spool directory. Like the print spooler, the files in
the UUCP spool directory are the only record the system has of
pending jobs. If they get removed, then the job must be re-submitted.
This is obviously not a good thing if the files are removed by accident;
however, it does allow you to remove files that may be causing
problems.
to the remote computer is unavailable, the request remains queued in
the spool directory. This is also what happens when you either tell
uucp to specifically hold onto the job or the time permitted to call
hasn't been reached.
By default, there are several entries in the UUCP user's crontab that
are related to job scheduling. At 9 and 39 minutes after each hour,
cron will start the uudemon.hour
program. Why it is called uudemon.hour
when it runs every half hour is beyond me. Maybe uudemon.half-hour
was a bit too much to write. Actually, the reason is the number of
characters. UNIX file names are restricted to 14 characters on the
traditional UNIX filesystems. Therefore, they couldn't have a
file called uudemon.half-hour.
The uudemon.hour program
is actually a shell script that runs two programs: uusched and uuxqt.
The uusched program (often
called a daemon) checks for pending jobs and schedules them for
execution by uucico. The
uuxqt program is the
remote end of uux, in that
it is what executes the jobs that have been submitted for remote
execution. It is the uusched program that eventually starts up
uucico on the local machine.
To get a better understanding of what happens, lets go through an
example. Let's assume your local machine is scoburg and you wish to
send a file to the remote machine siemau. The command you would enter
is:
uucp file_name siemau!/tmp
Remember from our discussion on the various shells that the
exclamation mark means something to the C-shell. Therefore you have
to escape it by placing a backslash in front of it, as in:
uucp file_name siemau\!/tmp
Work files are created in the
/usr/spool/uucp/siemau directory. Each system will have its
own sub-directory under /usr/spool/uucp,
but these are not created until the first request is submitted.
There is a control file whose name has the format C.siemau#xyzx,
where # is the grade, or processing priority, of the request and xyzx
is the job number of that particular request. The grade is a single
alphabetic character (letter), and grades are processed in order from
A-Z and then from a-z. The default under SCO OpenServer is 'N'.
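Assuming the naming scheme just described ('C.' plus the system name, a one-character grade, and a four-character job number), a control-file name can be taken apart like this. The helper is purely illustrative; it also shows why A-Z is processed before a-z, since the grades sort in plain ASCII order.

```python
def parse_work_filename(name):
    """Take apart a control-file name such as 'C.siemauN5214'.

    Layout assumed here: 'C.' + system name + one-character grade +
    four-character job number.
    """
    assert name.startswith("C.")
    body = name[2:]
    return {"system": body[:-5], "grade": body[-5], "job": body[-4:]}
```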
After submitting the job above, we might have a file in
/usr/spool/uucp/siemau that looked like this:
-rw-r--r--  1 uucp  sys   41 May 27 10:49 C.siemauN5214
Since this file is readable by everyone, we could look at it no
matter who submitted the job. Its contents look like this:
S /usr/jimmo/file_name /tmp jimmo -dc D.0 644 jimmo
The format of this file is as follows:
Type of request. Here we have an 'S' for Send a file to the
remote machine. This could also be an 'R' to request a file. If we
want to remotely execute a job, this would be an 'S' as we are
sending a request for remote execution.
Source path name of the file. If we specified a relative path
(or no path) uucp will determine the full-path and use the full-path
here. If we had specified a path relative to the home directory of a
particular user (with the ~username), then the entry would also
appear here in that form. In this case we are sending the file
/usr/jimmo/file_name. If we are requesting remote execution, the
source file is the data file.
Destination path. Here too, if we specified a relative path
(or no path) uucp will determine the full-path and use the full-path
here. If we had specified a path relative to the home directory of a
particular user, then the entry would also appear here in that form.
Here we are simply sending it to the /tmp
directory. UUCP will ensure that it gets the right name, in this case
motd. This is the same behavior as the cp
command. If we are requesting remote execution, the
destination name is the same as the control file, but will have an X
instead of a C.
The login name of the sender.
List of command options specified to either uucp or uux. Here
we have -dc. These are the defaults. The -d says to create
directories necessary to copy the file. The -c says not to copy the
local file into the spool directory prior to transferring. Instead
the file will be copied directly from the source, in this case
/usr/jimmo/file_name. (Copy the file into the spool directory if you
are worried the original might change or disappear before the transfer
takes place.)
Name of the data file. Here is the default of D.0. This
means that UUCP will copy the file directly from the source. If I had
chosen to copy the file into the spool directory, this would point to
the copy in the spool directory.
Mode of the source file in octal. This is only used when
sending files. Check the chmod(C) man-page for details on the octal
format of file permissions.
Who to notify. The default is the sender; however, this can be
changed with either the -n or -m option.
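Putting the fields above together, a hypothetical control (C.) file entry for this transfer might look like the following. The exact layout can vary between UUCP versions, so treat this as a sketch of the field order just described, not an exact reproduction:

```
S /usr/jimmo/file_name /tmp/file_name jimmo -dc D.0 644 jimmo
```

Reading left to right: request type, source path, destination path, sender, options, data file, octal mode, and who to notify.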
In this example, I am copying the file directly from the source, in
this case /usr/jimmo/file_name.
If I had specified the -C option, uucp would have first copied the
file into the spool directory. Check the permissions here as well. You
see that they are 644. That means that everyone, including UUCP, has
the right to read the file. If UUCP did not have the right to read
the file, it would not matter whether I specified a copy or not: UUCP
would automatically make a copy of that file in the spool directory.
The file created by UUCP is referred to as the data file. Its name
begins with a D and has the format:
where XXXX is the
sequence/ID number of that job and is the same as the job number on
the control file, and zzz is a number used to identify any additional
data files associated with that job. This is used when there are
several data (D.) files created for a single control file. The
format of the files is a single character to identify the type of
data followed by the data itself. The meaning of each line is as
follows:
User's name and system.
File to transmit.
This normally only appears when sending mail since the D. file
is the file being transferred in other cases.
prevents mailing acknowledgment of completion of remote execution.
indicates the message should only be sent if the job failed.
Return address of
the requester. This is used by mailers that can forward to more
than one hop. This is the user mmdf (assuming you are using MMDF).
After completion of the work files, UUCP can now send the job. If
UUCP had been told to simply queue up the job (with the -r
option), the job will sit and wait until contact is initiated by
either side. It is also possible that the system administrator had
specified particular times to call that site and this is not one of
them. So, even if you don't specify the -r
option, the job will wait in the queue.
First, uucico checks the
file /usr/lib/uucp/Systems. If
the system you are trying to reach is not listed, or you are not
allowed to call at this time, you get an error message.
An entry in the Systems
file would look like this:
siemau Any ACU 300-2400 5551212
-\r\d-ogin:-\K\d-ogin:-\K\d-ogin: uusiemau asswd: NoSecret
The entries are:
system time-to-call device-type
modem-speed phone-number chat-script
The time-to-call field tells uucico
what times you can call the remote site. Here we have the entry Any,
which means calls are permitted at any time. We could specify specific
times of the day or even days of the week. We'll get into details on
this in Chapter 14.
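To make the field layout concrete, here is a minimal sketch in Python that pulls a Systems entry apart. The field names are taken from the description above; real Systems files also allow comments and other conventions not handled here:

```python
# Sketch: split a Systems-file entry into the six fields described
# above. Field names follow the text; real files may contain
# comments and variations this does not handle.
def parse_systems_line(line):
    parts = line.split(None, 5)  # the chat script itself contains spaces
    names = ["system", "time-to-call", "device-type",
             "modem-speed", "phone-number", "chat-script"]
    return dict(zip(names, parts))

entry = parse_systems_line(
    "siemau Any ACU 300-2400 5551212 "
    "-\\r\\d-ogin:-\\K\\d-ogin:-\\K\\d-ogin: uusiemau asswd: NoSecret")
```

Splitting with a maximum of five splits keeps the whole chat script together as the final field.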
The third entry in the Systems
file is the device type that should be used to call out. Using this
device type, it searches through the file /usr/lib/uucp/Devices
for a matching entry. For example, I have an entry in my Devices
file that looks like this:
ACU tty1A - 9600-19200
The entries are
device-type port dialer-line
Here, uucico is looking
for an ACU device type. This is an Automatic Call Unit or modem.
Since this is the same device as specified in the Systems
file, this line would match. The entry in the Devices
file also tells what physical device to use and what modem dialer to
use (A modem dialer is a program that is used to access the modem and
dial the appropriate number). Using the telephone number from the
Systems file, along with
the device and dialer from the Devices file, uucico can now call out.
In order to prevent other UUCP processes from accessing the same
device and to prevent them from trying to call the same system,
uucico creates two lock files in /usr/spool/uucp.
The port is locked with the file LCK..ttyxx,
where xx is the non-modem
control device. This is the convention used, although it is usually
the modem control device that you are calling out on. In our example,
we are calling out on device tty1A, therefore the lock file would be
LCK..tty1a. Check the
serial(HW) man-page or the section on device nodes for more
details on this.
The system lock has a similar form:
LCK..siemau. In both cases, the LCK file is ASCII and contains
the process ID of the process that created the lock. This is useful
when trying to figure out who's hogging the modem by downloading
dozens of image files when you are trying to transfer your own files.
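Since the lock file is just ASCII containing a PID, it is easy to inspect. The following Python sketch shows the idea; the path and padding are assumptions standing in for a real file such as /usr/spool/uucp/LCK..tty1a:

```python
# Sketch: read the process ID from a UUCP-style ASCII lock file
# to see which process holds the lock. A temporary file stands in
# for the real /usr/spool/uucp/LCK..tty1a here.
import os
import tempfile

def read_lock_pid(path):
    with open(path) as f:
        return int(f.read().strip())

# Create a stand-in lock file for demonstration.
fd, lock_path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("      1234\n")   # the PID is stored as (possibly padded) ASCII

pid = read_lock_pid(lock_path)
os.unlink(lock_path)
```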
Now uucico can begin
calling the remote site. At this point we will assume that the
connection can be made and the remote side answers. Uucico
will attempt to login using the values defined in the chat script
portion of the Systems
file. Essentially, the chat script tells the calling system what to
expect the remote system to send it and what it should send in
return. We'll get into more details about chat scripts later.
Assuming that the logname and password were correct, scoburg has now
successfully logged into siemau. One of the first things that siemau
needs to do is to create its own lock files. It would not be a good
thing if one process started to dial out on the port you were
calling in on. These have the same format on siemau as on scoburg;
however, the system lock (obviously) has the form LCK..scoburg.
As I mentioned before, the shell that uucp users have is uucico.
It starts up on siemau and the two uucico's
start to talk to each other. One of the first things they do is to
negotiate the protocol. Keep in mind that there is a lot more to the
actual login process than that. However, this covers the main steps
and provides a good background for understanding the flow of things.
The uucico on siemau needs to ensure that the uucp user
calling from scoburg is allowed access to where it wants to go. It
does so by checking the file /usr/lib/uucp/Permissions.
Once siemau has determined that the UUCP user calling from scoburg
has access, it begins transmitting the file in small "packets".
These packets are of a specific size. Changing that size in SCO ODT 3.0
and earlier was not as straightforward as with the Configuration
file in SCO OpenServer. Within each packet is a checksum. If siemau
determines that a packet was garbled during transmission, it will
ignore (drop) the packet. Since siemau acknowledges each packet it
receives successfully, scoburg will see that this packet was not
acknowledged and will resend it.
To ensure integrity of the destination file, the uucico
on siemau writes the file into a temporary file as it is receiving
it. This is in the /usr/spool/uucp/scoburg
directory, with the name TM.xyzx.
When the transfer is complete, the file is moved to the specified
destination, in this case /tmp/file_name.
(We hadn't specified the destination file name, so uucico
assumes that it is the same name.)
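The receive-into-a-temporary-file technique can be sketched in a few lines of Python. The paths are examples, not the real spool layout:

```python
# Sketch of the TM.-file technique: write incoming data under a
# temporary name and rename it into place only when the transfer is
# complete, so a partial file never appears under the final name.
import os
import tempfile

def safe_receive(data, final_path):
    tmp_path = final_path + ".TM"       # stand-in for a TM.xxxx name
    with open(tmp_path, "wb") as f:
        f.write(data)
    os.rename(tmp_path, final_path)     # atomic within one filesystem

dest = os.path.join(tempfile.mkdtemp(), "file_name")
safe_receive(b"transferred contents", dest)
with open(dest, "rb") as f:
    received = f.read()
```

The rename is what gives the guarantee: readers either see no file at all or the complete one.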
As this whole thing is going on, both sides are logging the
information. On scoburg there will be a file
/tmp/siemau. You can watch the progress of the transfer by running:
tail -f /tmp/siemau
The -f option tells tail to
keep reading from the file as new lines are appended to the end. If
you are siemau, there is a file /tmp/scoburg
which you can watch with tail as well. The information in these
files is for a particular session. Information is logged on a
semi-permanent basis in /usr/spool/uucp/.Log/uucp/<system_name>.
If scoburg has more files to send to siemau, the process continues
until there are none left. Scoburg tells siemau that it wants to
hang up. If siemau doesn't have any files to send to scoburg, then
they shut down communications. If siemau does have files, the roles
are reversed and now siemau is the master and scoburg is the slave.
The process continues until siemau no longer has files to transfer.
Siemau requests to hang up. It's possible that during the time siemau
was the master, a new job was submitted on scoburg. It then becomes
the master once again. In theory, this could go back and forth
indefinitely, although I have never seen more than two role reversals.
After the connection is stopped, the uucico on both sides can remove
the lock files for both the port and the system.
Look back at the beginning of the chapter when we covered the basics
of UUCP command syntax. There was an example where we copied a file
from a remote machine to another remote machine:
If we change it slightly so that the source is also on a remote
machine, we get:
UUCP recognizes the fact that both the source and the
destination are on a remote machine. It therefore creates a remote
execution command and not a simple file transfer. In other words, the
local uucp command creates
the necessary files so that another uucp
command is executed on the remote machine (siemau). The content of
this command is to copy the local file
/etc/motd to the destination "vesta!!/filename".
In this case it happens that there is another remote machine involved
(vesta). As a result, the file gets copied from siemau to vesta
(assuming all the Permissions, etc., are set up correctly on siemau and vesta).
Where Things Live
There are three directories that UUCP uses. The first one we've
talked about in great detail already: /usr/spool/uucp.
Just as the /usr/spool/lp
directory is the working directory for the print spooler, this
directory is the working directory
for UUCP. Here you'll find all the work files, locks, log
files, etc. There are several sub-directories here as well. These
include:
Administrative files. Contains information on transfer
statistics and accounting information.
Corrupt work and
execute files that could not be processed
Log files. One
sub-directory for each of uucp, uucico, uux, uuxqt.
Old log files
Status files. One
file per system.
For remote executions, there is one sub-directory for each system.
Most of the UUCP configuration programs and files are stored in
/usr/lib/uucp. Many of
them we have already mentioned when going through the example
transfer, but I would like to cover them briefly again.
The Systems file contains
a list of the remote systems that are known to you. This file
determines what kind of device can be used to contact a particular
site (modem, direct line or TCP), the times you are authorized to
call that site, the phone number to call, as well as the chat script,
which is used when logging into the remote system.
The Devices file contains
information used to access the physical device, usually a serial
(tty) device. This file contains such information as the type of
device, speed, and the dialer program to use.
The Permissions file
contains a list of permissions/authorizations when transferring files
or requesting remote execution. The entries in this file determine
what programs you can run on the remote side, which directories you
can access and whether or not the remote site can request or send
files to your site.
If you are running SCO OpenServer then there is a new file:
Configuration. This contains information that determines and
configures the protocol used when transferring data. This includes
such things as the packet size (how large each data packet is) and the
window size (how many packets can be sent before an acknowledgment is
required).
Before we talk about the details of networking, we should first talk
about the process of network communication. Let's take a network
program such as telnet.
The telnet program allows
you to login to a remote system. You end up with a shell just as if
you had logged in locally. Although you are inputting commands on your
local keyboard and the output is appearing on your local screen, all
other activity is happening on the remote machine.
For simplicity's sake, we can say that there is a telnet
program running on each computer. When you input something on local
keyboard, the local copy of telnet is accepting input. It passes the
information through the network to the telnet on the remote machine.
The command is executed and the output is handed to the remote
telnet. That information is passed back through the network to the
local telnet, which then
displays the information on your screen.
Although it may appear as if there is a constant flow of information
between your local machine and the remote one, this is not what is
happening. At any given time there may be dozens, if not hundreds, of
programs using the network. Since only one can use the network at a
time, there needs to be some mechanism to allow each program to have
its turn.
Think back on our discussion of the kernel. When we need something
from the hard disk, the system does not read everything at once. If
it did, one process could hog the computer if it needed to read in a
large file. Instead, disk requests are sent in smaller chunks and the
program only thinks that it gets everything it wants. Something
similar is done with network connections.
Computers are like human beings in that they need to speak the same
language in order to communicate. Regardless of how they are
connected, be it serial or Ethernet, the computers must know how to
talk to each other. The communication is carried out in a pre-defined
manner, called a protocol. Like the protocols diplomats and
politicians go through, computer protocols determine how each side
behaves and how it should react to behavior by its counterpart.
Roughly speaking, even the interaction between the computer and the
hardware, such as the hard disk, can be considered a protocol.
The most common protocol used by SCO is TCP/IP. However, it is more
accurate to call TCP/IP a protocol suite, or protocol family. This is
because TCP/IP actually consists of several different protocols. Even
the name consists of two different protocols as TCP/IP stands for
Transmission Control Protocol/Internet Protocol.
TCP/IP is often referred to as a protocol suite as it contains
many different protocols and therefore many different ways for
computers to talk to each other. However, TCP/IP is not the only
protocol suite. There are dozens, if not hundreds of different ones,
although only a small portion have gained wide acceptance. SCO only
uses a few itself, although the TCP/IP family is what is delivered by
default and most commonly used.
Although the name refers to two specific protocols, TCP/IP actually
means an entire suite of protocols and programs. The result of many
years of planning and discussion, the TCP/IP suite includes a set of
standards which specify how computers ought to communicate. By
following these standards, computers "speak" the same
language and can therefore communicate. In addition to the actual
means of communication, the TCP/IP suite defines conventions for
connecting different networks and routing traffic through routers,
bridges and other types of connections.
The TCP/IP suite is the result of a Defense Advanced Research Projects
Agency (DARPA) research project on network connectivity. However, its
availability has made it the most commonly installed network
software. Many versions provide source code which resides in the
public domain, allowing users to adapt it to many new systems. Most
vendors of network hardware (e.g. bridges, routers) support the
TCP/IP suite.
Whereas the data being transferred to and from the hard disk is
talked about in terms of blocks, the unit of information transfer
across a network connection is referred to as a packet.
Depending on the program you are using, this packet can be a
different size. In any event, the packets are small enough to cross the
network quickly, so that no one process hogs the network. In
addition, the packets go across the network so fast that you don't
notice that your data is broken into packets. This is similar to the
way the CPU manages processes. Each one gets a very small turn on the
processor. Because it switches so fast between processes, it only
seems like you have the processor to yourself.
If we take a step back and look at the process of network
communication more abstractly, we see each portion supported by and
supporting another. We can say that each portion sits on top of
another. Or, in other words, the protocols are stacked on top of
each other. Therefore, TCP/IP is often referred to as a protocol
stack.
Each portion of the stack is referred to as a layer. At the
bottom of the stack is the layer that is responsible for the physical
connection between the two computers. This is the physical layer.
Sitting on top of the physical layer is the layer that is responsible
for the network portion of the stack. That is, it ensures that packets
either stay on the network or get to the right network and at the
same time ensures that packets get to the right network address. This
is the network layer.
On top of the network layer is the layer that ensures that the
packets have been transmitted correctly. That is, there are no errors
and all packets have been received. This is the transport layer.
Finally, at the top of all of this is the layer that the user sees.
Since the programs that we use are often called applications, this
upper layer is called the application layer.
Conceptually, each layer is talking to its counterpart on the other
system. That is, telnet on
the local machine is passing data to telnet
on the remote machine. TCP on the remote machine sends an
acknowledgment to TCP on the local machine when it receives a packet.
IP on the local machine gets information from IP on the remote
machine that tells it that this packet is destined for the local
machine. Then there are the network interface cards that communicate
with each other using their specific language.
This communication between corresponding layers is all conceptual.
The actual communication takes place between the different layers on
each machine not the corresponding layers on both machines.
When the application layer has data to send, it prepends an
application header onto the data it needs to send. This header
contains information necessary for the application to get the data to
the right part of the application on the receiving side. The
application then calls up TCP to send the information along. TCP
wraps that data into a TCP packet, which contains a TCP header
followed by the application data (including header). TCP then hands
the packet (also called a TCP segment) to IP. Like the layers
before it, IP wraps the packet up and prepends an IP header,
to create an IP datagram. Finally, IP hands it off to the
hardware driver. If the hardware is Ethernet, this includes both an
Ethernet header and an Ethernet trailer. This creates an Ethernet frame.
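The wrapping at each layer can be sketched very simply. The header bytes below are placeholders to show the nesting, not real protocol formats:

```python
# Conceptual sketch of encapsulation: each layer wraps what it is
# handed with its own header (Ethernet also adds a trailer).
user_data   = b"user data"
app_message = b"[app-hdr]" + user_data                    # application layer
tcp_segment = b"[tcp-hdr]" + app_message                  # TCP segment
ip_datagram = b"[ip-hdr]"  + tcp_segment                  # IP datagram
eth_frame   = b"[eth-hdr]" + ip_datagram + b"[eth-trl]"   # Ethernet frame
```

On the receiving side the process runs in reverse: each layer strips its own header and hands the remainder up the stack.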
As we see, it is the TCP layer that the application talks to. TCP
sticks the data from the application into a kind of envelope (the
process is called encapsulation) and passes it to the IP
layer. Just as the operating system has a mechanism to keep track of
which area of memory belongs to what processes, the network has a
means of keeping track of what data belongs to what process. This is
the job of TCP. It is also the responsibility of TCP to ensure
that the packets are delivered with the correct contents and then to
put them in the right order.
Error detection is the job of the TCP "envelope" which
contains a checksum of the data contained within the packet. This
checksum information sits in the packet header and is checked on all
packets. If the checksum doesn't match the contents of the packet or
the packet doesn't arrive at all, it is the job of TCP to ensure that
packet is resent. On the sending end, TCP waits for an acknowledgment
that each packet has been received. If it hasn't received one within
a specific period, it will resend that packet. Because of this
checksum and the resending of packets, TCP is considered a reliable
protocol.
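As an illustration of this style of error detection, here is a sketch of a 16-bit one's-complement checksum of the kind TCP uses; the real TCP checksum also covers a pseudo-header, which is omitted here:

```python
# Sketch: 16-bit one's-complement checksum (the style TCP uses).
def inet_checksum(data):
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"hell"                      # small even-length example payload
check = inet_checksum(payload)
# Receiver's test: checksumming data plus its checksum yields zero.
verify = inet_checksum(payload + check.to_bytes(2, "big"))
```

If the payload is garbled in transit, the receiver's verification no longer comes out to zero, and the packet is dropped so the sender will resend it.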
Another protocol that is often used is the User Datagram Protocol
(UDP). Like TCP, UDP sits on top of IP. However, UDP provides a
connectionless transport between applications. Services, such
as the Network File Service (NFS), that utilize UDP, must provide
their own mechanism to ensure delivery and correct sequencing of
packets. Since it can be either broadcast or multicast, UDP also
offers one-to-many services. Because there is no checking by UDP it
is also considered unreliable.
Closest to the hardware level, IP is a protocol that provides the
delivery mechanism for the other protocols. The IP layer serves the same
function as your house address, telling the upper layers how to get
to where they need to go. In fact, the addresses used by IP to get the
pieces of information to their destination are called IP addresses.
However, IP does not guarantee that the packets arrive in the right
order or that they arrive at all. Just as a letter to your house
must be registered in order to ensure that it gets
delivered with the contents intact, IP depends on the upper layers to
ensure the integrity and sequencing of the packets. Therefore, IP is
considered unreliable.
Since the hardware, that is, the network cards, does the actual physical
transfer of the packets, it is important that the cards can be addressed
somehow. Each card has its own unique identifier. This is the Media
Access Control, or MAC, address. The MAC address is a 48-bit number
that is usually represented by 6 pairs of hexadecimal numbers,
separated (usually) by dashes or colons. Each manufacturer of network
cards is assigned a specific range of addresses, which is usually
specified by the first three pairs of numbers. Each card has its own
individual address: the MAC address.
When sending a packet, the IP layer has to figure out how to send the
packet. If the destination is on a different physical network, then
IP needs to send it to the appropriate gateway. However, if
the destination machine is on the local network, the IP layer uses
the Address Resolution Protocol (ARP) to determine the MAC
address of the Ethernet card with that IP address.
To figure this out, ARP will broadcast an ARP packet across the
entire network asking which MAC address belongs to a particular IP
address. Although every machine gets this broadcast, only the one
out there that matches will respond. This is then stored by the IP
layer in its internal ARP table. You can look at the ARP table at any
time by running the command:
This would give you a response similar to:
siemau 220.127.116.11 at
This has the general format:
<machine name> (IP address) at <MAC address>
Since the ARP table is cached, IP does not have to send out an ARP
request every time it needs to make a connection. Instead, it can
quickly look in the ARP table to make the IP-MAC translation. Then,
the packet is sent to the appropriate machine. As described above, the
mapping is established by the requesting machine sending out a special
message (a broadcast) to all machines in the network, asking for a
response from whoever "owns" a particular IP address. The machine with
that address will then respond with the MAC address of that card. This
mapping of IP address to MAC address is then stored within the
requesting machine's ARP cache and is then available for all
subsequent connections.
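The cache-then-broadcast logic can be sketched as follows. A lookup table stands in for the network, and all addresses are made up for illustration:

```python
# Sketch of ARP-cache logic: consult the cache first, and only
# "broadcast" (simulated here by a table standing in for the
# network) on a miss.
network_owners = {"10.0.0.5": "00:00:c0:12:34:56"}  # who would answer

arp_cache = {}

def resolve(ip):
    if ip not in arp_cache:                   # cache miss: ARP broadcast
        arp_cache[ip] = network_owners[ip]    # the owner replies with its MAC
    return arp_cache[ip]                      # later lookups hit the cache

mac = resolve("10.0.0.5")   # first call: goes out to the "network"
mac = resolve("10.0.0.5")   # second call: answered from the cache
```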
Status and error information is exchanged between machines through
the Internet Control Message Protocol (ICMP). This information can be
used by other protocols to recover from transmission problems or by
system administrators to detect problems in the network. One of the
most commonly used diagnostic tools, "ping", makes use of ICMP.
At the bottom of the pile is the hardware or link layer. As I
mentioned before, this can be represented by many different kinds of
physical connections: Ethernet, token-ring, fiber-optics, ISDN,
RS-232 to name a few.
This four layer model is common when referring to computer networks.
This is the model that SCO uses and the one that I will use throughout
the book. There is another model that consists of seven layers. This
is referred to as the OSI model, but we won't be using it here.
In the discussion above, I used the telnet
command as an example of one of the programs that uses TCP/IP.
However, there are many others which provide additional services such
as transferring files, electronic mail, network printing, and
access to remote filesystems. Products such as SCO OpenServer can
expand upon these basics to provide additional services.
One kind of service that SCO OpenServer provides is remote
administration. In contrast to previous versions of SCO products,
which required you to administer all machines locally, the SCOAdmin
suite of programs allows you to administer machines anywhere in your
network. Other products, such as database applications may have one
central machine containing all the data and access is gained from the
other machines via TCP/IP. Often this access is invisible to the user
who just sees the "front end" of the database.
This configuration, where one machine contains the data or resource
that another machine uses, is very common in computer networking. The
machine with the resource that it is providing to other machines is
referred to as the server, because it is serving the resource to the
other machine. The machine that is using the resource is called the
client. This model, where one machine is the server and the other is
the client is referred to as a client-server model. The client-server
model is the primary model used in SCO networks.
Another common network model is the peer-to-peer model. In
this model, there is no one central machine that has all the
resources. Instead, all machines have equal status. Oftentimes,
these two models blend together. In SCO UNIX networks, it is
possible to have multiple servers, each providing many of the same
resources. It can also happen that multiple machines all have
resources that the others need so everyone is acting as both a client
and a server, similar to peer-to-peer.
On SCO systems, there are dozens of resources available. Many of
them are well known, such as telnet;
others, such as ntp, are
more obscure. Like calling into a large office building with a
central switchboard, our server needs to know what numbers are
associated with which programs in order to make the proper
connection. In the same regard, you need to know what office you want
to reach before you call. In some cases you can call and say you want
a particular extension. In other cases, you say you want a particular
office. In an office building there is a list of available "services",
called a phone book. On a SCO UNIX system, the phone book is the file
/etc/services.
The /etc/services file
contains a list of what services a particular machine may have to
offer. The concept of a service is slightly different than the
concept of a resource. A machine may provide many resources in the
form of login shells to remote users; however, all of
them access the machine through the one service: telnet.
In addition to what service the machine provides, /etc/services
also lists the port. To understand the idea of a port, think about
this as being the telephone number. When I call in to a machine (say
using telnet), I am
connected to the telnet
program on the other side through a particular port. This is as if I
were calling a large office building with a single switchboard. When
I reach that switchboard, I tell the operator which office or person
I want to talk to. In the ancient history of telephones, that
operator had to make the connection between the incoming line and the
correct office extension.
A port can be thought of as the sockets that the operator plugs the
phone lines into. Like in that office building, there may be a set of
these sockets, or ports, that are directly connected to a specific
person (i.e. service). These are well-known ports. There may
be offices with their own operator (maybe just a receptionist) who
passes the incoming phone call to the right person or may even pick
someone themselves to take the call (such as when you call a
government agency with a generic question and there is no one person
responsible for that area).
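To make the phone-book analogy concrete, the following sketch resolves a service name to its port number the way the system consults /etc/services. The sample entries mirror the file's layout (name, port/protocol, optional aliases), but are a stand-in, not your actual system file:

```python
# Sketch: look up a service's port as the system does when it
# consults /etc/services. SAMPLE_SERVICES stands in for the file.
SAMPLE_SERVICES = """\
telnet   23/tcp
smtp     25/tcp    mail
ntp     123/udp
"""

def lookup_service(name, proto, text=SAMPLE_SERVICES):
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == name:
            port, p = fields[1].split("/")
            if p == proto:
                return int(port)
    raise KeyError("%s/%s not found" % (name, proto))

telnet_port = lookup_service("telnet", "tcp")
```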
On an SCO UNIX system using TCP/IP, the principle is the same. There
are dozens of services that one can connect to, but only one way into
the system: through your network interface card. In order for
you to be able to connect to the right service, there has to be
something like an operator to make the connection for you. This is
the program /etc/inetd.
This is the "Internet Daemon" and often referred to as a
"super server" since it is inetd's
responsibility to wait for requests to access the other
servers on your system and pass you along.
Like in our office building, you may know what number you want, that
is, which port. When you make the connection to inetd,
your process tells it what port you want to connect to and inetd
makes the connection. On the other hand, you may have a program that
does not have a well-known port. Therefore, a new port needs to get
assigned.
The inetd daemon "listens"
for the connections. You can say that it is listening on multiple
ports in the sense that it manages all the ports. However, it is
inetd that makes the
connection between the incoming connection and the local port, and
therefore to the local server. This mechanism saves memory since you
don't need to start up the servers you aren't going to use. This is
similar to having a central switchboard and not requiring every
office to have their own.
Normally, inetd is started
during system start up from a script in /etc/rc2.d.
When it starts, inetd
reads its configuration file (/etc/inetd.conf)
to obtain the necessary information to start the various servers. It
then builds the logical connection between the server and its
respective port. Kind of like laying the cable from the central
switchboard to the various offices. Technically it creates a socket,
which is bound to the port for that server.
When inetd gets a
connection request (the phone rings) for a connection-based port, it
"accepts" the incoming call which creates a new socket.
That is, there is a logical connection between the incoming request
and the server. Inetd can
now continue to listen on the original port for additional incoming
requests.
If the port is connection-less (UDP), the behavior is dependent on
entries in the /etc/inetd.conf
file. If inetd is told to
wait (there is a wait in
the fourth column), then the server that was called must process the
incoming message before inetd
can go on. If told not to wait (there is a nowait
in the fourth column), inetd
will continue to process incoming requests on that port. If you look
in /etc/inetd.conf, you see
that almost exclusively TCP ports are nowait and UDP ports are wait.
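To make the format concrete, entries in /etc/inetd.conf look something like the following sketch. The server paths vary between releases, so check your own file rather than copying these:

```
# service  socket  proto  wait/nowait  user  server         arguments
telnet     stream  tcp    nowait       root  /etc/telnetd   telnetd
tftp       dgram   udp    wait         root  /etc/tftpd     tftpd
```

Note that the TCP entry is nowait and the UDP entry is wait, matching the pattern just described.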
Before we do that, we need to side-step a little. We need to first
talk about what goes into making a standard. Without standards,
communication between computers of different types is very
difficult. Just as you have bus standards like ISA and PCI so
hardware can communicate with the CPU, you need some kind of
standard for network communication.
In the Internet community, standards are both suggested and
established through Requests for Comments, or RFCs. To some extent, these
are the law. If one product claims to comply with a particular RFC,
you know that any other application that does so should be able to
communicate with it. However, RFCs include other things such as lists
of previous RFCs and basic introductions to things like TCP.
Becoming a standard is a three-step process. Usually, the first few
paragraphs of an RFC will tell you to what stage it applies,
assuming, of course, that the RFC is part of a standards proposal. At
the first stage, the standard is proposed. Organizations then decide to
implement the proposed standard, and it requires three separate
implementations before the proposal moves on. Finally, it
becomes a standard. (This is an oversimplification of the process, since
there will also be a lot of discussion about the proposed standard.)
If you need information about a specific network standard, the first
place to look is the most current RFC index, which is also published
as an RFC. Not only does this list all the RFCs, but it will also tell
you if one RFC has been replaced and by which one.
Originally I had planned to include a list of the more commonly used
and significant RFCs. I eventually realized that this was an unending
task. When I started this book, there were just over 1700 RFCs. The
last time I checked before wrapping up this book, there were over
1800. Instead I will simply tell you where to get them.
The first place is the "central repository." The RFCs are
obtainable using ftp from
ftp.ds.internic.net. There is an rfc
directory, which contains the RFCs in ASCII as well as many in
PostScript format. If you know which RFC you want, it can be
obtained by sending an email message to firstname.lastname@example.org.
List each RFC you want in the format:
where XXXX is the number of the RFC. You can obtain the index
by including the entry:
In addition, the RFCs are available from archives all over the
Internet. However, rather than tying up Internet bandwidth with a lot
of copies of files you may not need, check out the Web page
www.cdrom.com or mail email@example.com. This is Walnut Creek CD-ROM,
and they sell a CD packed with thousands of documents related to the
Internet, including the RFCs. Another site with much the same
kind of offering is InfoMagic. They can be reached at
www.infomagic.com or firstname.lastname@example.org.
For SCO UNIX systems running TCP/IP, one of the most important
standards deals with Ethernet. The encapsulation (packaging) of IP
datagrams for Ethernet is defined in RFC 894. Developed in 1982 by
Digital Equipment Corporation (DEC), Intel, and Xerox, Ethernet
(spelled with a capital E) is a standard, rather than a physical
entity. Several years later, the 802 Committee of the Institute of
Electrical and Electronic Engineers (IEEE, or I-triple-E) published
standards of its own that differed in many ways from the original
Ethernet standard. Collectively, these are referred to as the IEEE
802 standards. The 802.3 standard covers networks similar to
Ethernet. The IEEE 802 encapsulation is defined in RFC 1042. Both of
these use an access method called Carrier Sense, Multiple Access with
Collision Detection, or CSMA/CD.
Both of these framing types (RFC 894 and RFC 1042) use a 48-bit
addressing scheme, generally referred to as the MAC or hardware
address. The six bytes of both the destination and source machines'
addresses are included in the header of both framing types; however,
the remainder of the frame is different. As we talked about earlier,
this layer is responsible for sending and receiving the IP
datagrams. It is also responsible for sending and receiving other
kinds of packets as well: packets from the Address Resolution
Protocol (ARP) and the Reverse Address Resolution Protocol (RARP).
We'll talk about both later on.
In today's world of interconnected computers, you may have a
connection to hundreds of thousands of other machines. Granted, there
is no single cable connecting all of these computers; however, there
is a logical connection in that you can use the telnet program from
your PC in California and connect to a machine in Germany. The
problem is, how do the packets get from one end to the other? Added
to that, how do you keep your local network in California from
getting overloaded with packets that are being sent between machines
in Germany, while at the same time making sure that those telnet
packets do get through? The answer is provided by the Internet
Protocol (IP).
Just as a street address is not always sufficient to get your letter
delivered, an IP address alone is not always sufficient to get the
packet delivered. If I sent you a letter, it could be sent to a
single, central post office, whose job it was to distribute mail
throughout the entire US. Because of the incredibly large number of
pieces of mail, this is impractical. Instead, there are thousands of
offices all over the country, whose job it is to route the mail for
us. If we lived in a small town, the local post office could catch a
letter destined for a local address before it goes further. Mail
with addresses outside the town could be sent on to other post
offices to be routed further.
A similar situation applies to IP addresses. In local, self-contained
networks, the IP address alone is sufficient. However, when multiple
networks are combined, machines spend more time trying to figure out
if the packet belongs to them than actually processing information.
The solution is a Network Mask. Just as a zip code tells a postal
worker whether to process a particular piece of mail locally or not,
the Network Mask (or netmask) tells machines whether or not they can
simply ignore a packet or need to process it further. How this works,
we'll get to in a moment.
Every machine on the network needs to have its own unique IP
address, just as every house has a unique mail address. If that
network is connected to the rest of the world, that address must not
only be unique within the local network, but unique within the rest
of the world, as well. As of this writing, IP addresses are 32-bit
values. They are usually represented by four sets of numbers, ranging
from 0-255 separated by dots (.). This is referred to as
dotted-decimal notation. In dotted-decimal notation, an
address might look like this:
Since each of these numbers ranges between 0-255, it can be
represented by eight bits and is therefore referred to as an octet.
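As a sketch (my own illustration, not taken from any SCO tool), the relationship between dotted-decimal notation and the underlying 32-bit value can be shown in a few lines of Python; the function names are mine:

```python
# A sketch of how a dotted-decimal IP address maps to a single
# 32-bit value: each of the four octets occupies eight bits.

def to_32bit(dotted):
    """Pack a dotted-decimal string such as '147.2.42.18' into one 32-bit integer."""
    octets = [int(part) for part in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet   # shift left one octet, append the next
    return value

def to_dotted(value):
    """Unpack a 32-bit integer back into dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_dotted(to_32bit("147.2.42.18")))   # round-trips to the same string
```

The shifting makes the "general to specific" ordering concrete: the first octet occupies the most significant bits of the 32-bit value.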
This IP address is often thought of as being composed of a network
portion (at the beginning) and a node (or machine) portion at the
end. This would be comparable to writing a street address as:
where 95061 is the zip code, Main Street is the street, and 42 is
the address on that street. The reason for writing the street address
in this fashion is that, like an IP address, it moves from the
general to the more specific.
Currently, there are three classes of networks, which are broken
down by both the range used in the first octet and the number of
octets used to identify the network. Class A networks are the largest
and use only the first octet as the network address. Networks in this
class have a first octet in the range 1-126. Class B networks use
the first two octets, with the first in the range 128-191. The
smallest networks, Class C, use the first three octets as the network
address, with the first in the range 192-223. How IP addresses are
broken down by the different network classes is shown in Table 0.1.
Class   Range within 1st octet   Possible networks   Possible hosts per network
A       1-126                    126                 16,777,214
B       128-191                  16,384              65,534
C       192-223                  2,097,152           254

Table 0.1 IP Address Breakdown by Network Class
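The class rules above amount to a simple range test on the first octet. This little Python sketch (my own illustration, not part of any SCO utility) makes the decision explicit:

```python
# A minimal sketch of the traditional class rules: the first octet
# alone determines the network class of an IP address.

def network_class(dotted):
    first = int(dotted.split(".")[0])
    if first == 127:
        return "loopback"          # 127 always means the local machine
    if 1 <= first <= 126:
        return "A"                 # first octet is the network address
    if 128 <= first <= 191:
        return "B"                 # first two octets are the network address
    if 192 <= first <= 223:
        return "C"                 # first three octets are the network address
    return "reserved"              # 224 and above: multicast and other uses

print(network_class("132.147.118.2"))   # -> B
```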
There are a couple of things I would like to point out about this
table. First, the network address 127 represents the local computer,
regardless of what network it is really on. This is helpful for
testing as well as many internal operations. Network addresses 224
and above are reserved for special purposes such as multicast
addresses. The terms "possible networks" and "possible
hosts per network" are those that are calculated mathematically.
In some cases, 0 and 255 are not acceptable values for either the
network address or the host address.
Keep in mind that a Class A address does not necessarily mean 16
million hosts on a single network. That many hosts would be
impossible to administer and would overburden most network
technologies. What normally happens is that a single entity, such as
Hewlett-Packard, is given a Class A address. They will then break it
down further into smaller sub-nets. We'll get into more details about
this shortly.
A network host uses the network ID and host ID to determine which
packets it should receive or ignore and to determine the scope of its
transmissions (only nodes with the same network ID accept each
other's IP-level broadcasts). Because the sender's IP address is
included in every outgoing IP packet, it is useful for the receiving
computer system to derive the originating network ID and host ID from
the IP address field. This is done by using subnet masks, as
described in the following section.
Subnet masks are 32-bit values that allow the recipient of IP
packets to distinguish the network ID portion of the IP address from
the host ID. Like an IP address, the value of a subnet mask is
frequently represented in dotted decimal notation. Subnet masks are
determined by assigning 1's to bits that belong to the network ID and
0's to the bits that belong to the host ID. Once the bits are in
place, the 32-bit value is converted to dotted decimal notation, as
shown in Table 0.2.
Class   Bits for subnet mask                    Subnet mask
A       11111111 00000000 00000000 00000000     255.0.0.0
B       11111111 11111111 00000000 00000000     255.255.0.0
C       11111111 11111111 11111111 00000000     255.255.255.0

Table 0.2 Default Subnet Masks for Standard IP Address Classes
The result allows TCP/IP to determine the host and network IDs of the
local computer. For example, when the IP address is 102.54.94.97 and
the subnet mask is 255.255.0.0, the network ID is 102.54 and the host
ID is 94.97.
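What the system actually does with the mask is a bitwise AND. Here is a short Python sketch of the idea (the function name is mine), using the Class B example just given:

```python
# A sketch of what the subnet mask does: a bitwise AND with the mask
# yields the network ID, and ANDing with the inverted mask yields the
# host ID. Addresses here are handled as lists of four octets.

def split_address(dotted_ip, dotted_mask):
    ip   = [int(o) for o in dotted_ip.split(".")]
    mask = [int(o) for o in dotted_mask.split(".")]
    network = [i & m for i, m in zip(ip, mask)]            # bits kept by the mask
    host    = [i & (~m & 0xFF) for i, m in zip(ip, mask)]  # bits the mask zeroes out
    return network, host

# Mask 255.255.0.0 applied to a Class B address.
net, host = split_address("102.54.94.97", "255.255.0.0")
print(net)    # [102, 54, 0, 0]  -> network ID 102.54
print(host)   # [0, 0, 94, 97]   -> host ID 94.97
```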
Keep in mind that all of this with the subnet masks is the principle
and not necessarily the practice. If you (meaning your company) have
been assigned a Class B address, then the first two octets are
assigned to you. You could then break down the Class B net into
Class C nets. If we take a look at Table 0.1, we see that there are
65,534 possible nodes in that network. That is really too many to
manage. However, if we considered each value of the third octet to
represent a sub-net of our Class B network, each sub-net would have
254 possible nodes, which is basically what a Class C net is anyway.
We can then assign each sub-net to a department or building and
assign one person to manage each of the Class C sub-nets, which is a
little easier to do.
To keep the different Class C sub-nets from interfering with each
other, we give each sub-net a Class C subnet mask, even though the
first octet is in the range for a Class B network. That way, machines
on a sub-net are only concerned with packets for that sub-net. We can
also break down the sub-nets physically, so that there is a gateway
or router between them. That way the physical network is not
overburdened with traffic from 65,534 machines.
Let's look at a concrete example, like SCO, which has been assigned
the Class B address 132.147.0.0. A department within SCO might be
assigned a Class C sub-net that looks like this: 132.147.144.0.
Although the first octet (132) says that this is a Class B address,
it is really the subnet mask that makes that determination. In this
case, our subnet mask would be 255.255.255.0. Therefore, any packet
destined for an address other than one starting with 132.147.144 is
not on this network.
It is the responsibility of IP to ensure that each packet ends up
going to the right machine. This is accomplished, in part, by
assigning a unique address to each machine. This address is referred
to as the Internet address or IP address. Each network gets a set of
these IP addresses that are within a specific range. In general,
packets that are destined for an IP address within that range will
stay within the local network. Only when a packet is destined for
somewhere outside of the local network is it "allowed" to leave.
In other words, IP is responsible for the delivery of the packet. It
functions much like the post office, whereby you have both a sending
and a receiving address. Oftentimes, you have many more letters than
a single mail bag can handle. The mail carrier (or someone else at
the post office) will break down the pile of letters into sets small
enough to fit in a bag. This is what IP does.
Since there are many people using the line at once, IP will break
down the TCP packets into units of a specific size. Although often
referred to as packets as well, the more correct terminology is to
refer to IP packets as datagrams. Just as bags of mail need to go
from one post office to the next to reach their final destination,
IP datagrams must often go through different machines to reach their
final destination.
Saying that IP routing can be accomplished completely in software
isn't entirely accurate. Although no physical router is needed, IP
can't send a packet someplace where there is no physical connection.
This is normally provided by an additional network card. With two (or
more) network cards, a single machine can be connected to multiple
networks. The IP layer on that one machine can then be used to route
IP packets between the two networks.
Once configured (how that's done, we'll talk about in the second part
of the book), IP maintains a table of routing information, called
(logically) a routing table. Every time the IP layer receives a
packet, it checks the destination address against this routing table
to determine where the packet should be sent.
Routing and IP Gateways
I mentioned a moment ago that IP is an unreliable, connectionless
protocol. That is, it contains no provision to ensure that the packet
arrives correctly at the destination, nor is there anything that
guarantees that when packets do arrive, they arrive in the correct
order. Although IP is responsible for ensuring that the packets get
to the right machine, it has essentially no understanding of the
physical connection between the two machines. IP will happily run
over anything from something as simple as a telephone wire to
something as complex as satellites. IP depends on some other means to
"physically" carry it across the network.
What this means is that the system administrator (or network
administrator) is responsible for laying out the "map" that is
used to define which network addresses go with which sets of machines
and which IP addresses are assigned to individual machines.
One important job that IP does is routing. That is, getting the
packet to the right machine. If the source and destination machines
are directly connected, that is on the same network, then routing is
easy. Essentially there isn't any. IP sees this fact and simply hands
the packets off to the data link layer. Otherwise, IP has to figure
out how and where to send it.
Usually the 'how' is over a router. A router is a piece of hardware
that acts like an air traffic controller, sending one packet off one
way and another off a different way. Often, routers are separate
pieces of equipment that can be configured in very detailed ways. The
disadvantage is that with power comes price. The ability to configure
a router in many different ways usually means a high price tag.
Fortunately, many operating systems, including SCO UNIX, allow IP to
serve as router software, thereby avoiding the cost of router
hardware.
In comparison to the router is the concept of a gateway. Like
a router, a gateway has knowledge of other networks and how to reach
them. In general, we can think of a router as a special piece of
hardware that does the work for us. In fact, there are companies that
sell equipment called routers. A gateway is more of a concept, in
that it is the means by which you go from one network to another.
Today, the distinction between a router and a gateway is blurred:
routers can serve as gateways, and gateways can serve as routers.
The path the packet takes from one machine to the next is called a
route. Although each machine can maintain static routes for
specific destinations, the default gateway is usually used to find
remote destinations. (The default gateway is needed only for
computers that are part of an internetwork.) If you have a gateway
connected to several other networks, there will (probably) be route
definitions for each of those other networks.
Let's look at this process as if we were sending a letter, as we did
a little while ago. Each letter we send has an envelope, which
contains a message. On the envelope we write the source and
destination addresses. When we mail the letter, it gets to the post
office and the person sorting the mail checks the destination zip
code. If it's the same as the local zip code, the envelope is sent to
one of the carriers for delivery. If the zip code is different, then
it is sent to some other location. Perhaps all non-local mail is sent
to the same place.
If you live across the country from me, the letter probably doesn't
go directly from my local post office to yours (assuming I don't
live in San Francisco and you don't live in New York). The same
applies to IP packets. My letter first goes to my local post office;
if it is destined for a local address, it is processed there. If not,
it is sent along to a larger post office. If I sent a letter from
Santa Cruz destined for Annsville, Pennsylvania, it would probably go
first to San Francisco and then to New York (or Philadelphia) before
being sent on to Annsville.
Again, the same applies to IP packets. If I were communicating with
a network on the other side of the country, my machine needs to know
how to get to the other one. This is the concept of a "gateway".
A gateway is the first step in the path, or "route", to the
remote machine. Just as there are a couple of post offices between
Santa Cruz and Annsville, there can be multiple gateways between
machines.
Since San Francisco is the closest "major" city to Santa
Cruz, it is possible that all mail bound for points beyond must first
go through there. What if I lived in Fresno, which is about halfway
between San Francisco and Los Angeles? If I sent a letter to
Annsville, it could go through Los Angeles or it could go through San
Francisco. To make things easy, it might always be sent through San
Francisco if not destined for a local address. What if the letter is
bound for Los Angeles? It seems silly to go through San Francisco
first when it is bound for LA. At the post office in Fresno, they
might have a special procedure that says all remote mail goes through
San Francisco, except for mail with a zip code in a special range.
Here, too, the same applies to IP addresses. One machine may be
defined as the "default" gateway, but if an IP packet is
bound for a particular network, the machine can be told to use a
completely different gateway. Which gateway to use to get to a
particular machine or network is the concept of "routes." If I
want all remotely-bound packets to use a particular route, I add
that route as the default on my machine. If packets bound for a
particular network are to go via a different route, I can add that
route as a static route.
When IP prepares to send a "message", it inserts the local
(source) and destination IP addresses in the IP header. It then
checks whether the network IDs of the destination and source
match (the zip codes). If so, the packet is sent directly to the
destination, since it is on the local network. If the network IDs
don't match, the routing table is examined for static routes. If none
are found, the packet is forwarded to the default gateway.
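The decision sequence just described can be sketched in a few lines of Python. The names and the routing-table layout here are hypothetical, my own illustration rather than SCO's implementation; the point is the order of the checks: local network first, then static routes, then the default gateway.

```python
# A sketch of the IP send decision: direct delivery on the local
# network, then static routes, then the default gateway.

def choose_next_hop(src_ip, dst_ip, netmask, static_routes, default_gateway):
    def network_id(ip, mask):
        # Bitwise AND of each octet with the mask yields the network ID.
        return tuple(i & m for i, m in zip(ip, mask))

    # Same network ID on both ends: deliver directly, no routing needed.
    if network_id(src_ip, netmask) == network_id(dst_ip, netmask):
        return "direct"

    # A static route for the destination network takes precedence.
    for route_net, route_mask, gateway in static_routes:
        if network_id(dst_ip, route_mask) == route_net:
            return gateway

    # Otherwise hand the packet to the default gateway.
    return default_gateway

routes = [((132, 147, 0, 0), (255, 255, 0, 0), "gateway-b")]
print(choose_next_hop((10, 0, 0, 5), (132, 147, 144, 9),
                      (255, 0, 0, 0), routes, "default-gw"))   # -> gateway-b
```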
The default gateway is a computer, connected to the local subnet and
other networks, that has knowledge of the network IDs for other
networks and how to reach them. Because the default gateway knows how
to reach the other networks, it can forward the packet, either to
other gateways or directly to the destination machine if the gateway
is on the same network as the destination. This process is known as
routing.
Obviously(?), if you only have a single network, there is no reason
to have a gateway, as each machine is directly connected to every
other. It's possible that you only want certain machines within your
network to go beyond the local net to the outside. In this case,
these machines can have a default (or static) route defined, while
the others have none. However, users can add routes themselves, using
the route command.
The telnetd daemon is a
server which supports the telnet
program. Makes sense, huh? Telnet is a terminal program that allows
you to work interactively with remote machines, just as you would
with the local machine. When inetd receives an incoming telnet
request, it invokes telnetd.
What you then see is no different than if you had logged in locally
to that machine (probably). You are presented with a login:
prompt, and you enter your logname and password. If these are
correct, you are then given a shell in which you can enter commands.
The way telnetd works is
that it allocates a pseudo-terminal device for you. This
pseudo-terminal has the same behavior as a "normal"
terminal, in that you input commands and see the results on your
screen. Internally, the pseudo-terminal is broken down into two
parts. The master portion is the side that you see. Since your side
is the one that is controlling things, your side is the master. The
master side accepts input from your telnet
program and passes it to telnetd on the remote side. As you might
guess, the side that has to listen to the master is the slave. The
slave side of the pseudo-terminal serves as stdin, stdout, and stderr
for the remote application.
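The master/slave pairing can be demonstrated with Python's pty module. This is only an illustration of the concept, not how telnetd itself is implemented: whatever is written to the master side shows up as terminal input on the slave side, just as keystrokes from telnet reach the remote shell.

```python
# A small demonstration of a master/slave pseudo-terminal pair.

import os
import pty

master_fd, slave_fd = pty.openpty()

# Data written to the master appears as input on the slave, the way
# keystrokes sent by telnet reach the application on the remote side.
os.write(master_fd, b"date\n")
received = os.read(slave_fd, 1024)
print(received)   # the slave sees the "typed" command

os.close(master_fd)
os.close(slave_fd)
```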
Similar in functionality to telnet
is rlogin. The server for
rlogin, is rlogind,
and like telnetd, is
started by inetd. One of
the primary differences is that, if configured, rlogind
can provide a connection without the normal login procedures.
The functionality of rlogind
is very similar to that of telnetd.
Pseudo-terminals are allocated and the slave portion becomes the
stdin, stdout, and stderr. During login, rlogind
uses an authentication procedure called "host equivalence",
which sets up remote machines as being "trusted". If
rlogind on the destination
machine authenticates the source machine, the user is automatically
logged in. If the authentication fails, the user must go through the
normal login procedure. How to set up host equivalence, we'll get to
shortly.
TCP connections are not the only ones that are managed by inetd.
Basically, all network connections are. This can be understood if we
go back to the telephone operator analogy. If the operator (inetd)
is also the receptionist, we can then think of TCP connections as
incoming telephone calls and UDP packets as incoming letters. Like
incoming phone calls, the receptionist is responsible for routing
the incoming mail to the right person. (This is a valid analogy
because, as with regular mail, there is nothing to guarantee the
delivery of the message in the letter, although with TCP connections
you can ask your partner to resend the message.) Like TCP daemons,
UDP daemons are "listening" on specific ports, and like TCP
connections, these well-known ports are listed in /etc/services.
One common UDP connection is the routing daemon, routed.
Routed supplies (as you might have guessed) routing information in
the form of routing packets. If your system is serving as a router,
then routed periodically sends copies of its routing tables to other
machines on the network.
One key difference is that routed
is not actually started by inetd.
Instead, it is normally started as the system is entering run-level
2, through the /etc/rc2.d/S85tcp
script. When it starts, routed
makes the assumption that it will forward packets between all
interfaces on the system. This includes only those that are "up"
and does not include the loopback driver. (The loopback driver is a
special TCP/IP interface that simply loops the packets back to the
local machine, hence the name.) Routed
then transmits a REQUEST packet on each of these interfaces and waits
for RESPONSE packets from any other hosts. Potentially, there are
other machines on the network that are also sending REQUEST packets,
so routed can respond to those as well.
The response routed gives
is based on information it has in its routing tables. This
contains information about known routes, including how far away the
destination machine is in terms of hops, or intermediary
machines. When routed receives a RESPONSE packet, it uses the
information contained in that packet to update its own routing
tables. Look at the routed(NADM)
man-page for more information.
DNS - Finding Other Machines
If you have TCP/IP installed, by default your machine is set up to
use the /etc/hosts file.
This is a list of IP addresses and the matching names of the
machines. When you try to connect to another machine, you can do it
either with the IP address or the name. If you use the name, the
system will look in the /etc/hosts file and
make the translation from name to IP address. The only real drawback
with this scheme is that every time a machine is added to or removed
from the network, you have to change the /etc/hosts
file on all the affected machines.
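From a program's point of view, this translation is done by the resolver library, which consults /etc/hosts (and DNS, when configured) so that applications never have to parse the file themselves. A minimal Python sketch of a lookup:

```python
# Name-to-address translation as a program sees it: the resolver
# consults /etc/hosts (and DNS, when configured) on its behalf.

import socket

# "localhost" is defined in /etc/hosts on virtually every system,
# so this lookup succeeds even without a name server.
address = socket.gethostbyname("localhost")
print(address)   # -> 127.0.0.1
```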
Those of you who have had to administer large networks know that
updating every /etc/hosts
file like this can be a real pain. There is always at least one that
you forget, or you mis-type the name or address and have to go back
and change it on every machine. Fortunately, there is hope.
Provided with both ODT and OpenServer is a hostname/IP address
database called the Berkeley Internet Name Domain (BIND) service.
Instead of updating every machine in the network, there is a Domain
Name System (DNS) server that maintains the database and provides the
client machines with information about both addresses and names. If
machines are added or removed, there is only one machine that needs
to be changed: the name server. (Note: The SCO documentation
translates DNS as Domain Name Server. Most every other reference I
have found calls it the Domain Name System. I have seen some
references call it Domain Name Service. Since we know what it is,
I'll just call it DNS.)
So, when do you use DNS over the /etc/hosts
file? Well, it's up to you. The first question I would ask is "Are
you connecting to the Internet?" If the answer is "yes",
"maybe" or "someday" then definitely set up DNS.
DNS functions somewhat like directory assistance from the phone
company. If your local directory assistance doesn't have the number,
you can contact one in the area you are looking for. If your name
server doesn't have the answer, it will query other name servers for
that information (assuming you told it to do so).
If you are never going to connect to the Internet, then the answer
is up to you. If you only have two machines in your network, the
trouble of setting up DNS is not worth it. On the other hand, if you
have a dozen or more machines, then setting it up makes life easier
in the long run.
There are several key concepts that need to be discussed before we
dive into DNS. The first is that DNS, like so many other aspects of
TCP/IP, is client-server oriented. We have the name server,
containing the IP addresses and names, which serves information to
the clients.
Next, we need to think of DNS as operating in an environment similar
to a directory tree. All machines that fall under DNS can be thought
of as files in this directory tree structure. These machines are
often referred to as nodes. As with directories and file names, there
is a hierarchy of names within the tree. This is often referred to as
the domain name space.
A branch of the DNS tree is referred to as a domain. A domain is
simply a collection of computers that are managed by a single
organization. This organization can be a company, university or even
a government agency. The organization has a name by which it is known
to the outside world. In conjunction with the domains of the
individual organizations, there are things called top-level
domains. These are broken down by the function of the domains
under them. The top-level domains are:
COM - Commercial
EDU - Educational
GOV - Government
MIL - Military
NET - Network providers
ORG - Non-profit organizations
Each domain will fall within one of these top-level domains. For
example, there is the domain sco, which falls under the commercial
top-level domain. It is thus designated as sco.COM or sco.com. The
domain assigned to the White House is whitehouse.gov. The domain
assigned to the University of California at Santa Cruz is ucsc.edu.
(Note that a dot is used to separate the individual components of
the machine's domain and name.)
Keep in mind that these domains are used primarily within the US.
While a foreign subsidiary might belong to one of these
top-level domains, for the most part the top-level domain within
most non-US countries is the country code. For example, the
geographical domain of Germany is indicated by the domain
abbreviation de (for Deutschland). There are exceptions, however; I
know of some German companies within the com domain. There are also
geographic domains within the US, such as ca.us for California, as
compared to just .ca for Canada. These are often used for very small
domains or non-organizations, such as individuals.
Within each domain, there may be sub-domains, although there
don't have to be. You usually find sub-domains in larger domains, in
an effort to break the administration down into smaller units. For
example, if SCO Tech Support had a sub-domain, it might be
support.sco.com.
Keep in mind that these are just the domain names, not the machine,
or node, names. Within a domain there can be (in principle) any
number of machines. A machine sitting on the desk in the Oval Office
might be called boss1. Its full name, including domain, would be
boss1.pres.whitehouse.gov. A machine in SCO Support called darkstar
would then be darkstar.support.sco.com. This might be extended
further if there were a further division within support, so that if
the TEAM support analysts had their own sub-domain, the machine might
be darkstar.team.support.sco.com. In principle, this can go on quite
a ways. Up to now, I have only seen machine names with five
components: the machine name, two sub-domains, the company domain and
then the top-level domain. On the other hand, if there were no
support sub-domain and everything was under the sco.com domain, the
machine's name would be darkstar.sco.com.
You may often see the fully-qualified domain name (FQDN) of a machine
listed like this:
including the trailing dot (.). That dot indicates the root domain,
which has no name other than "root domain" or . (read
"dot"), very similar to the way the root directory has no name
other than root or /. In some cases this dot is optional; however,
there are cases where it is required, and we'll get to those later.
Like files, it is possible for two machines to have the same name.
The only criterion for files is that their full paths be unique; the
same applies to machines. For example, there might be a machine
darkstar at the White House. (Maybe Bill's a closet Dead Head.) Its
FQDN would be darkstar.whitehouse.gov. This is obviously not the same
machine as darkstar.sco.com, any more than 1033 Main Street in Santa
Cruz is the same as 1033 Main Street in Annsville. Even something
like darkstar.support.sco.com is different from
darkstar.sales.sco.com.
A zone is a grouping of machines that may, or may not, be the same as
a domain. This is the set of machines over which a particular name
server has authority and maintains the data. In our example above,
there might be a zone for support, even if there was no
sub-domain. On the other hand, there might be a team.support.sco.com
domain, but the zone is still sco.com. Therefore, zones can be
subordinate or superior to domains. Basically, zones are used to
make the job of managing the name server easier, so what constitutes
a zone depends on your specific circumstances.
In the SCO implementation of DNS, there are five types of servers:
primary, secondary, caching-only, slave and client. Although the SCO
doc lists the last one as a server (they also call it a remote
server), I hesitate to use that designation. A remote server doesn't
serve anyone but itself. It must use the DNS software to resolve all
requests through another server.
A primary server is the master server for one or more DNS zones. It
maintains the database files and is considered the authority for
each of these zones. It may also periodically transfer data to a
secondary server, if one exists for a zone.
DNS functions are carried out by the Internet domain name server,
named. When it starts, named reads its configuration file to
determine what zones it is responsible for and in which files the
data is stored. By default, the configuration file is
/etc/named.boot. However, named can be started with the -b
option to specify an alternate configuration file. Normally, named
is started from /etc/rc2.d/S85tcp.
SCO DNS provides a stub server. This is used when a
primary server delegates a portion of its name space (like a
sub-zone) to another server. The primary server for the zone needs to
know the name service information about the machines within that
sub-zone. One way is serving as a secondary server to that sub-zone;
another is serving as a stub server. Note that the concept of a stub
server is new to OpenServer, so you cannot designate an ODT machine
as a stub server. A stub server is like a secondary server, except
that it only caches the NS and SOA records. With this information it
knows which machine is the name server and how long the information
is valid.
For example, the primary server for the sco.com domain needs to know
about the machines within the support.sco.com domain. It could serve
as a secondary server to the support.sco.com domain, whereby it would
maintain all the records for the machines within that sub-domain. If,
on the other hand, it serves as a stub server, the primary for
sco.com need only know how to get to the primary for the
support.sco.com sub-domain. Note here that it is possible for
a server to be primary in one zone and secondary in another.
By moving responsibility to the sub-zone, the administrator of the
parent zone does not need to concern him or herself with changing
the configuration files when a machine is added or removed within
the sub-zone. As long as the address of the sub-zone's primary server
matches the stub server entry, all is well.
A secondary server takes over for the primary, should the primary go
down or be otherwise inaccessible. A secondary server maintains
copies of the database files, and "refreshes" them at
predetermined intervals. If it cannot reach the primary to refresh
its files, it will keep trying at (again) predetermined intervals.
If after another predetermined time the secondary still cannot
reach the primary, the secondary considers its data invalid and
discards it.
Caching-only servers save data in a cache file only until that data
expires. The expiration time is based on a field within the data that
is received from another server. This is called the time-to-live.
Time-to-live is a regularly occurring concept within DNS.
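The refresh, retry and expire intervals just mentioned, along with the default time-to-live, all come from the zone's SOA record. As a sketch (host names and the specific values are hypothetical), it might look like this in a zone data file:

```
@   IN  SOA  iguana.sco.com. root.iguana.sco.com. (
        9612001    ; serial - bumped whenever the zone data changes
        10800      ; refresh - secondaries check every 3 hours
        3600       ; retry - wait 1 hour after a failed refresh
        604800     ; expire - data invalid after 1 week unreachable
        86400 )    ; minimum - default time-to-live of 1 day
```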
A slave server can be a primary, secondary, or caching-only server.
If it cannot satisfy the query locally, it will pass, or forward, the
request to a fixed list of forwarders (forwarding servers), rather
than interacting directly with the primary name servers of other
zones. These requests are recursive, which means that the forwarder
must answer either with the requested information or with a statement
that it doesn't know. The requesting machine then asks the next server,
then the next and then the next until it finally runs out of servers to
check or gets an answer. Slave servers never attempt to contact
servers other than the forwarders.
The concept of a recursive request is in contrast to an iterative
request. Here the queried server either gives an answer or tells the
requesting machine where it should look next. For example, darkstar
asks iguana, the primary server for support.sco.com, for some
information. In a recursive query, iguana asks boomer, the primary
server for sco.com, and passes the information back to darkstar. In an
iterative query, iguana tells darkstar about boomer, and darkstar
then goes and asks boomer. This process of asking name servers for
information, whether recursive or iterative, is called resolution.
Keep in mind that there is client software running on the server.
When an application needs information, the DNS client asks the
server for the information, despite the fact that the server is
running on the same machine. Applications don't access the server
directly.
There is also the concept of a root server. These are servers
positioned at the top, or root, of the DNS hierarchy, and maintain
data about each of the top-level domains.
Your Own IP Address
If you have a network that is completely disconnected from the rest
of the world, then there is no need for you to adhere to any of these
conventions. You might be a commercial organization, but still want
to use the EDU domain. Nothing prevents you. There is also nothing
preventing you from using IP addresses that are used by some other
organization. However, once you decide to connect to another
organization or the Internet at large, you need to ensure that both
your names and IP addresses are unique.
To ensure you use a unique name and network, the best thing is to
contact the Network Information Center, or NIC. Via email, they can be
contacted at email@example.com. The telephone numbers are 1-800-365-3642 and
1-703-802-4535. By regular mail, use:
DDN Network Information Center
Park Meadow Drive, Suite 200
If you are not in the United States, the NIC can still provide you
with a contact in your area.
You may want one machine that connects to the Internet, while your
other machines do not. One solution is to use the
IP addresses defined in RFC 1597. This RFC describes the need for
"private" addresses and lists a range of class A, B and C
addresses that can be used internally within a company.
Some routers will filter out these addresses automatically; others
require explicit configuration. This allows you not only to limit
access to and from the Internet, but also to limit the need for unique
addresses. If you only have a handful of machines that need Internet
access, some Internet providers will sub-net a Class C address and
assign you a small block of addresses. See the RFC for more details.
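For reference, the private address blocks set aside by RFC 1597 are:

```
10.0.0.0    - 10.255.255.255    (one class A network)
172.16.0.0  - 172.31.255.255    (16 contiguous class B networks)
192.168.0.0 - 192.168.255.255   (256 contiguous class C networks)
```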
The Network File System (NFS) is an industry standard means of being
able to share entire filesystems among machines within a computer
network. As with the other aspects of networking, the machines
providing the service (in this case the filesystem) are the servers
and the machines utilizing the service are the clients. Files
residing physically on the server appear as if they are local to the
client. This enables file sharing without the hassle of copying the
files and worrying about which one is the more current.
One difference that NFS filesystems have over "conventional"
filesystems is that it is possible to allow access to a portion of a
filesystem, rather than the entire one. The directories you want the
server to make available are said to be exported.
Normally, filesystems under NFS are mounted just like any other. The
same options apply, and they can be mounted either automatically at
boot through entries in /etc/default/filesys
or manually by the system administrator. This can be a problem
sometimes, as the server might not be active when the client boots.
Even if the server is active, mounting NFS filesystems on boot-up
slows down the boot process.
Figure 0-7
A solution that SCO has provided is automount,
which, as its name implies, can automatically mount NFS filesystems.
Once configured, any command or program that accesses a file or
directory on the remote machine within the exported directory forces
the mounting to occur. The exported directory remains mounted until
it is no longer needed.
To aid in preventing conflicting access requests to files, SCO
provides the Network Lock Manager (NLM). This consists of a device
driver and several daemon programs that support record and file
locking. Remote Execution (REX) is a service that allows users to
export their user environments to a remote server in order to execute
commands. Such commands can access files in the user's current
directory, and REX also allows the execution of interactive processes.
Keep in mind that NFS is not a stand-alone protocol. It relies on the
TCP/IP protocol suite in order to function correctly. NFS uses UDP
and not TCP. Therefore, it is up to the programs using NFS to ensure
the reliability of the data and not just assume the information
coming across is correct. NFS will tell you if the data did not make
it by setting the appropriate failure indicators. This causes your
program (such as cp) to fail.
Under ODT 3.0, filesystems are made available for remote mounting by
using the exportfs
command. Normally this happens at boot, however the system
administrator can run it at any time. When run, exportfs uses
information contained in the
/etc/exports file to determine which directories to export and with
what options.
The /etc/exports file is a
simple ASCII file, and additions or changes can be made with any text
editor. There is one line for each directory you want to export. The
left side is the full path of the directory you want to export
and the right side is the options you want to apply. For example, you can
limit access to the directory to just one machine or make the
directory read-only. If changes are made, exportfs
needs to be run again to make the changes take effect.
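As a sketch, an /etc/exports file using hypothetical machine names and directories might look like this (the -ro and -access= options are the ones most commonly used at this level):

```
# directory     options
/usr/data       -access=scoburg:darkstar   # only these two clients may mount
/usr/man        -ro                        # anyone may mount, read-only
/u/home         -access=scoburg
```

With no options at all on a line, any machine that can reach the server may mount the directory, which is exactly the exposure described below.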
If your site is on the Internet, the /etc/exports
file can be a problem. If you export a filesystem, but do not
specify who has access, then everyone does. If you are on the
Internet, everyone who can access your machine can mount your
filesystems. Therefore, I recommend that you explicitly list which
machines can mount the exported filesystems.
OpenServer provides a graphical interface to this
functionality in the form of the Filesystem Manager of SCOAdmin.
SCOAdmin can also be used to start the NFS daemons if they are not
already running as well as mount and unmount the remote filesystems.
I learned the hard way that configuring a remote filesystem to
be mountable through /etc/default/filesys
and automount are mutually exclusive. If both are configured,
automount is disabled. This caused me quite a few headaches as I was
first learning NFS. You can also use automount with the Network
Information System (NIS), in that automount maps are exported along
with the other NIS maps. This ensures that each NIS system
automatically gets access to the same NFS filesystems. (NIS is a
method of automatically exchanging files between different machines,
which we'll get into in a moment.)
The term exporting is used to describe how NFS makes local
directories available to remote systems. These directories are then
said to be exported. Therefore, an exported directory is a
directory that has been made available for remote access. Sometimes
the term importing is used for the process of remotely
mounting filesystems, although mounting is more commonly used.
If you can access a filesystem under OpenServer, you can access it
under NFS. This is because access to the file is a multi-step
process. Say you first access a file, perhaps opening a text file to
edit it. The local system first determines that this is an NFS
mounted filesystem. NFS on the local system then goes to NFS on the
remote system to get the file. On the remote system, NFS tries to
read the file that is physically on the disk. It is at this point
that it needs to go through the filesystem drivers. Therefore, if the
filesystem is supported on the remote system, NFS should have no
problem accessing it.
Once a filesystem has been exported, the client sees the filesystem
as an NFS filesystem, and therefore what type it really is, is
irrelevant. The filesystems supported by OpenServer as of this writing
are: AFS and EAFS, DOS, DTFS, High Sierra (CD-ROM), HTFS, ISO-9660
(CD-ROM), SCO Gateway for NetWare, Rockridge CD-ROM, S51K and XENIX.
There are a couple of limitations with NFS. First, although you might
be able to see the device nodes on a remote machine, you cannot
access the remote devices. Think back to the discussion on the
kernel. The device node is a file that is opened to gain access to
the physical device through a device driver. It has a major and minor
number that point to, and pass flags to, the device driver. If you open
a device node on a remote system, the major and minor numbers for
that device node point to drivers in the local kernel.
Also, NFS-ity is not transitive. That is, you cannot access an NFS
filesystem that another machine has itself mounted via NFS. Keep in
mind, too, that the newer filesystems in OpenServer can have more
inodes than those in earlier releases. Since the number of possible
inodes is smaller on older systems, an ODT machine mounting an
OpenServer filesystem via NFS might not see every file.
The Flow of Things
There are two daemon processes that provide the NFS services on the
server. These are mountd
and nfsd. Mountd
is responsible for checking access permissions to the exported
filesystem. When a client tries to mount a filesystem, mountd
returns a pointer to the filesystem if the client has permission to
mount it.
The workhorse on the server side is the nfsd
daemon. It has the responsibility of handling all filesystem requests
from the clients. Once a filesystem has been mounted, all access to
the data on that remote filesystem is made through nfsd.
Remember that you could be exporting directories and not just entire
filesystems. Therefore it's better to say that access to the mount
point and below is made through nfsd.
On the client side is biod,
which handles block reads. Biod
is the Block I/O Daemon. Handling just block I/O is an important
aspect. If a process is doing operations that do not require block
I/O, then there is little performance loss by having them each do their
own RPC (remote procedure call) system calls. However, when you start
moving blocks around, NFS needs to ensure things don't get jammed up
and will use the kernel buffer cache. (RPC is a way to execute
procedures or functions on a remote machine as if they were local to
the calling machine.)
Also key to this whole process is the portmapper, portmap.
The portmapper converts TCP/IP port numbers to RPC program numbers.
What this means is that when NFS starts up, it registers its port
with the local portmap process. The clients access the server by
asking the portmapper on the server for the port numbers of nfsd
and mountd. The port
numbers are then used on all subsequent RPC calls.
In principle, mounting a remote filesystem is like mounting a local
one. The general syntax is:
mount -f NFS [options] <machine>:<remote pathname> <local mount point>
One of the primary differences is that since this is an NFS
filesystem, we have to explicitly tell mount by using the '-f NFS'
option. We can also include other options such as '-r' for read-only.
Let's assume that we have our two machines scoburg and siemau. On
siemau is an NFS filesystem that we want to mount from scoburg onto
the local directory /data.
Assuming that the proper entries exist in the /etc/exports file on
siemau, the command on scoburg might look like this:
mount -f NFS siemau:/usr/data /data
Like other filesystems, the local mount
command parses the command into tokens and ensures that entries don't
already exist in the mount table (/etc/mnttab)
for either the filesystem or the directory. Realizing that this is a
remote filesystem, mount
gets the IP address for siemau (by whatever means are configured on
the local machine) and gets the port number of mountd
on siemau. The mount
command then passes mountd
the pathname of the requested directory (/usr/data).
Now it's the server's turn. To make sure it can service the request,
mountd must first check /etc/exports
for the requested filesystem, in this case /usr/data.
If scoburg is permitted, mountd passes back what is called a file
handle, or pointer. Now the mount back on scoburg uses that file
handle and the mount point (/data)
as arguments to the mount()
system call. Finally, an entry is placed in the local mount table.
There are two primary NFS configuration files: /etc/exports
and /etc/default/filesys.
The /etc/exports file
exists on the server and lists those files and directories that can be
accessed by remote hosts. It can also be configured to allow or deny
access to specific hosts. Since these are filesystems, you can manage
anything mounted by NFS through /etc/default/filesys.
This allows you to mount remote filesystems at boot or in any way you
can with a "normal" filesystem. One advantage NFS has over
local filesystems is that you can configure them to be mounted only
when you need them. That is, if the files in the directories are not
used, the connection is not made. However, if the files are needed,
the connection is automatically made. This is the concept of
automounting, which we will get into later.
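As a sketch, an NFS entry in /etc/default/filesys might look something like this, following the same field layout as the local entries already in that file (the host and directory names are hypothetical):

```
bdev=siemau:/usr/data \
        mountdir=/data fstyp=NFS \
        fsck=no fsckflags= \
        rcmount=yes mountflags=-r
```

The key differences from a local entry are that bdev names the remote machine and directory rather than a device node, fstyp is NFS, and fsck is disabled, since consistency checking is the server's job.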
Accessing files across a network presents a whole new set of problems
that need to be addressed. One of the important things, particularly
when using NFS, is that each user and group has to be unique
throughout the network. Well, to be quite honest, saying "has to
be unique" is not entirely true. "Should be" would be
more accurate. Remember that an SCO system only sees the UID as a
number. If you have the same LOGNAME on two machines, but two
different UIDs, both systems will see these as two separate users.
This can run you into trouble when copying files between the two
systems. When systems are running NFS, NIS comes in handy to prevent
such mismatches.
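A quick way to spot such mismatches is to compare the passwd files of two machines. This sketch (the file names passwd.scoburg and passwd.siemau are hypothetical copies fetched from each machine) prints any LOGNAME that appears in both files with different UIDs:

```shell
# Compare two copies of /etc/passwd and report logins whose UIDs differ.
# Fields are LOGNAME:password:UID:GID:comment:home:shell
awk -F: '
    NR == FNR { uid[$1] = $3; next }        # first file: remember each UID
    ($1 in uid) && uid[$1] != $3 {          # second file: compare UIDs
        printf "%s: %s vs %s\n", $1, uid[$1], $3
    }
' passwd.scoburg passwd.siemau
```

Any login it prints should be given the same UID everywhere before you rely on NFS file ownership.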
SCO NFS also gives you the ability to lock files to prevent multiple
users from accessing them simultaneously. The mechanism used allows
the system to more effectively clean up in the event of a system crash
or other abnormal shutdown. This is handled by lockd
and statd. Lockd
is the Network Lock Manager and is responsible for file and record
locking. Lockd runs on
both the client and the server, processing both lock requests and
lock releases, as well as recovering (or at least attempting to
recover) in the event of a system crash.
It is the responsibility of the Network Status Monitor (statd)
to provide application processes with host status information. Like
lockd, statd runs on both the client and server machines to monitor
each other's status (as well as their own). Part of the status that
is monitored is that of locks. When a state change occurs, lockd
needs to be told, which is done by the local statd.
Let's assume that a client goes down improperly, by
crashing or because the network connection dropped. Well, it's too
late now for the client statd
to do anything. However, when it comes back up, the client statd
can tell the server statd
"Hey, I just crashed." Since any process that had a lock on
a server file no longer exists, the server lockd can free those
locks. When the server comes back up after a crash, its statd
needs to tell the statd on
the clients. The client statd's
need to tell their lockd's,
which can then "reclaim" the locks still on the server.
The /etc/exports file contains entries for directories that can be
exported to NFS clients. This file is read automatically by the
exportfs(NADM) command. If you change this file, you must run
exportfs(NADM) for the changes to affect the mount daemon's operation.
Only when this file is present at boot time does the NFS startup script
run exportfs(NADM) and start the NFS file-system daemon, nfsd(NADM),
and the mount daemon, mountd(NADM).
Automount provides you with the ability to mount NFS
filesystems automatically, only when you need them. They are
automatically mounted by automount, hence the name. Actually,
conventional NFS-mounted filesystems can also be mounted
automatically, in the sense that you can configure them in
/etc/default/filesys so they are automatically mounted as the system
boots. Automount
filesystems, on the other hand, are mounted when a user tries to
access files or directories under the mount point. Also, if the files
or directories are not accessed within a specific time (five minutes
by default), they are unmounted, thereby saving network resources.
When booting you also save time, since the system is not waiting to
connect to a remote machine that could possibly be down.
One disadvantage that automount has compared to traditional NFS
mounts is that automounted directories cannot be dynamically added.
When changes are made to the configuration, automount must be
restarted. Now, this doesn't require a reboot of the system, however
it is one extra step. In my opinion this is more than compensated for
by the fact that whereas traditional NFS mounts are hard-coded (meaning
one machine:one mount point), automount can be configured to attempt
to mount a secondary system if the first one is not available.
Keep in mind that the server side is oblivious to the fact that the
request is coming from automount. As far as it knows, it is just your
normal, everyday NFS mount; therefore automounter can be used with
systems that don't know about it.
Don't think of automount as your only means of mounting NFS
filesystems just because of its advantages. If you are constantly
accessing certain filesystems, then you gain nothing by making them
automounted. In fact, you might lose something, since each time the
connection is made, you need to wait. If mounted in the conventional
manner, then you only need to wait once. If you have filesystems that
are accessed regularly, but others that are accessed only on
occasion, you simply mount some at boot and the rest via automount.
A common use of automount is with NIS. Filesystems mounted via
automount are maintained in files, called maps. NIS is used to
distribute configuration files from the NIS server across the net to
the NIS clients. Why not include the automount maps in the set of
files that is being distributed? This could be useful if you wanted
to have all the documentation on a single machine to save space, with
access to the doc made through automount. Since the doc is not being
constantly accessed, this avoids the problem of having the filesystem
containing the doc be continually mounted. Another use is when you
want each user to have the same home directory no matter where they
are. If mounted by automount and distributed via NIS, every time they
logged in, no matter on what machine, they would have the same home
directory. Granted, there is the problem of not being able to access
their home directory if the server is down. However, that problem
still applies when logging into a single machine.
The automount daemon is normally started at boot-up via the NFS
start-up script, /etc/rc2.d/S89nfs
(the name may be slightly different); however, it can be started by
hand. Usually, automount is started with the -f flag and the name of
its configuration file, which by convention is /etc/auto.master.
This contains a list of mount points and where the configuration
information for that mount is found. (We'll talk more about these
files in the chapter on configuring your network.)
In reality, automount behaves very similarly to traditional NFS
mounts. The system knows that the specified directory is an NFS mount
point. When something is accessed on the other side of the mount
point, the automount daemon reacts to the request basically the same
way nfsd does with a normal NFS filesystem. The automount daemon then
checks the mount table (/etc/mnttab)
to see if the filesystem is already mounted, and mounts it if it isn't.
Once the filesystem is mounted, requests are handled normally.
One point of interest is the mount points themselves.
Filesystems mounted via automount are not where you think they are.
Instead, all automount filesystems are mounted to sub-directories
under /tmp_mnt, and
symbolic links point to these directories from what we see as the
mount points. Because of the use of symbolic links, you need to be
careful when changing directories up through the mount point.
Like other filesystems, an entry is maintained in the system mount
table (/etc/mnttab) for
all filesystems that have been mounted with automounter. When the
timeout has expired (five minutes by default), automounter removes
the entry from /etc/mnttab,
but still maintains a copy in its memory. This copy is updated
whenever a filesystem is mounted or unmounted.
There are four types of automount maps: master, direct, indirect
and built-in. At first, they are a source of a fair bit of
confusion, at least for me and many people I know. A direct mount
points to a real directory. The direct maps are used to specify
direct mounts. Each of these maps contains a separate entry for each
direct mount point. If you need to (or are looking for something to
do), you can create nested maps. This is where direct maps point to
other maps. The contents of each entry are the full path of the mount
point, the mount options and the path to the remote directory (or
nested map).
An indirect mount points to a virtual directory, which is
managed by automount and doesn't really exist. At least that's the
way the SCO doc describes it. So, what does it mean? Let's take an
example of each, and hopefully that will clarify things.
First, let's look at what happens with a direct mount. Let's say we
want to set up the system so that the help files are accessed through
automount from the machine scoburg. This would then be the filesystem,
and for simplicity's sake, let's say we are mounting it on the local
directory /usr/lib/scohelp. When a local user runs scohelp, the mount
point is crossed and automount goes into action.
The first thing it does is create the mount point if it doesn't
already exist. Remember, the real mount point is not where we think it
is. Instead, in this case, it would be
/tmp_mnt/usr/lib/scohelp, which automount creates if
necessary. Next, the filesystem is mounted onto
/tmp_mnt/usr/lib/scohelp, as if you had issued the command:
mount -f NFS scoburg:/usr/lib/scohelp /tmp_mnt/usr/lib/scohelp
Since it may be the case that /usr/lib/scohelp
does not yet exist, automount would then create a symbolic link from
/tmp_mnt/usr/lib/scohelp to /usr/lib/scohelp. Now,
whenever you change directories or access a file in
/usr/lib/scohelp, you are
really accessing them on scoburg.
An indirect mount works differently. (Obviously, since why have two
things that behave the same?) Let's say that I not only want to have
the help files on a remote machine, but the man-pages as well, and we
wished to mount them under
/usr/lib, as /usr/lib/man
(so we know where everything is, for example). We could create
multiple direct mounts, or we could use the concept of indirect
mounts. Basically, what appears to happen is the same as with direct
mounts. First, a mount point is created under /tmp_mnt,
if necessary. Next, automount mounts the remote directory onto the
mount point under /tmp_mnt.
Finally, the symbolic link is created. Note that automount only does
this when something under the mount point is accessed. If, in our
example, /usr/lib/man was
accessed, only it would get mounted, but /usr/lib/scohelp
would not.
The interesting thing is that with indirect mounts, the directories
/usr/lib/man and /usr/lib/scohelp do not
exist. If you were to do a listing of
/usr/lib, these two directories would not be there. Because
these directories don't really exist, you can make changes
to indirect maps without having to restart automounter.
As I mentioned before, the master map points to the files containing
the other three kinds of maps. It contains the name of the mount
point and which file to look in for the configuration information, as
well as any mount options. By default this is /etc/auto.master.
Because the mount point directory does not exist for indirect
mounts, the map file contains just the name of the parent and not the
full path. Therefore, the indirect map is dependent on the master
map to tell it where to mount the filesystem.
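Putting this together, a sketch of the map files for the examples above might look like the following. The host names and paths are illustrative, and the format follows the conventional automount map layout of key, options and location:

```
# /etc/auto.master - mount point, map file, default options
/-         /etc/auto.direct    -ro
/usr/lib   /etc/auto.lib       -ro

# /etc/auto.direct - direct map: full path of each mount point
/usr/lib/scohelp    scoburg:/usr/lib/scohelp

# /etc/auto.lib - indirect map: keys are names under /usr/lib
man        scoburg:/usr/man
scohelp    scoburg:/usr/lib/scohelp
```

Note how the indirect map entries (man, scohelp) carry only the name relative to the mount point given in the master map, while the direct map entry spells out the full path.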
Built-in automount maps are used to save time and effort if many
machines are going to be accessed. One of the maps is -hosts, which
is used to mount all exported filesystems from all known hosts.
Using this option you cannot choose where you want the filesystems
mounted, as they are all mounted as a unit. All are mounted under a
single mount point, which is /net
by convention, but you can choose whatever one you want.
All directories mounted from a single server are mounted in the same
fashion. First, there is a sub-directory based on the name of the
server from which the filesystems are exported. For example, the
filesystems mounted from scoburg would be under /net/scoburg.
Next, all directories from that server are mounted at the same time
whenever any directory is accessed. Lastly, every directory is
mounted with the same options, or the default options if none are
specified.
The -passwd map is a way of getting automount to automatically mount
each user's home directory on the machine where they log in.
The limitation is that all home directories must reside on the same
machine. In order to create automounted home directories that are on
different machines, you will need to create either direct or indirect
maps.
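For example, an indirect map for home directories spread across two machines (the host names and users are hypothetical) might look like this, with /u listed against it in /etc/auto.master:

```
# /etc/auto.home - indirect map mounted on /u
jim        siemau:/u/jim
anna       scoburg:/u/anna
```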
One of the major problems administrators have with networked systems
is the administration of users and user accounts. In the "before
time", the system administrator had to create accounts on
multiple systems if users needed access to each of them. Shell
scripts could be written to automate the task, but there was always
the possibility of a user not being copied to every system. The
administrator then had to go back and make sure that the user in
question got copied everywhere.
Then along came the Network Information Service, or NIS. NIS is a
system service that forms part of SCO's NFS package. Although the two
supplement each other, running NIS does not mean you have to mount
remote filesystems, nor does mounting remote filesystems mean you
have to run NIS. Like DNS, the primary function of NIS is to ensure
consistency of information across the machines within a network.
Also like DNS, there is a central machine that is the "master"
of all the others, in that all the information is maintained on this
one system and then propagated to the others.
As we progress through the inner workings on NIS, you'll notice that
many of the utilities and databases begin with the prefix 'yp.' This
is historical and dates from the time when NIS was referred to as Sun
"Yellow Pages" or simply "YP." The problem is
that "Yellow Pages" is a registered trademark in Great
Britain and can't be used without permission, so Sun (the developer)
renamed it to NIS.
Like NFS, NIS works in a client-server fashion to propagate the
information. The server, or master, has the source copies of several
key files that it then copies to the other servers. Servers that
simply receive the files for their own purposes and do not propagate
them further are referred to as slaves or "copy-only"
servers. In the following discussion I will refer to both slaves
and copy-only servers as "clients." First, I get bugged by
the term slave, and second, it's easier to type. If there is a
distinction to be made, I will be sure to make it. Keep in mind that
true clients do not manage any information themselves. That is, they
do not receive any files from a server.
One important aspect is the ability to expand the list to cover other
files. By default, the following files are propagated:
Under NIS, there is the concept of a domain. While this is comparable
to an Internet domain, an NIS domain simply defines a group of
machines sharing information. An NIS domain may be composed of
machines entirely within a single Internet domain or even within an
MMDF domain. However, there is no other connection between these
types of domains, other than the label. NIS domains can spread across
or encompass the other kinds of domains as well. Normally the domain
name is established when NIS is first configured, but it can also be
set using the domainname command. If the Internet domain name matches
the NIS domain name, this is just a coincidence.
When you initialize NIS, the information for the various files is
compiled into several databases. These databases are referred to as
maps, which are then propagated to the clients. Once on the client
side, these maps are then converted back to their original ASCII form.
Each of these maps is a database itself, called an ndbm database, as
they are created using system calls to the ndbm library. When the
maps are received by the client (slave or copy-only server), they are
converted back to ASCII files and are available locally. These
files exist in pairs in the /etc/yp/<domainname>
directory. The .dir files are "directories": these are
bitmaps of indices used by the DBM database. The .pag files are the
data files. Because of the nature of the files, you cannot simply
copy them from one machine to another by hand. The databases contain
both a time stamp and the identity of the source. Using the
timestamp, the client can determine if the information it gets is
newer than what it already has. By identifying the master as well,
the client can request an update directly.
When NIS is initialized, the original files with the same names as the
NIS files are backed up in sub-directories under /usr/lib/nisrt.
For example, there will be a /usr/lib/nisrt/etc
directory that contains the files (among others) group
and passwd. Certain
binaries are copied into these "parallel" directories as
well. One such file is /bin/passwd.
This is copied into
/usr/lib/nisrt/bin, and then /bin/passwd is linked to the NIS-aware
version.
Security and NIS
For the most part, this is a contradiction of terms. Where NIS is
used to open up the network, security is used to close it. This means
that NIS is substantially more difficult to administer when using
either high or improved security. This is because the TCP will not
allow NIS to modify either /etc/passwd
or /etc/group in either of
these modes. Because you are propagated password and group
information, systems running NIS cannot really be C2 secure. Servers,
for that reason, cannot be configured at the higher security levels
and run NIS. On the other hand, clients and copy-only servers can.
They can receive maps, but not propagate them. If operating in a
higher security level, the maps cannot be translated into their ASCII
equivalents.
If you were familiar enough with the way the TCB is put together, you
could come up with a way to override these safeguards, just as you
could if you were not running NIS. The big question is "Why?"
If you implement a higher level of security, then you want to use it.
Circumventing the built-in restriction to "make things easier"
is comparable to having root's password taped to the console
monitor. If you do decide to do it and either C2 or NIS breaks, you
are on your own. This is unsupported and you were told not to do it.
Serial Network Protocols
Serial Line Internet Protocol (SLIP)
The first protocol in the TCP family to be developed to run across a
serial line was the aptly named Serial Line Internet Protocol (SLIP).
One advantage that SLIP provides over PPP is simply its age. There
are still many computers that are running comparatively old systems
and some do not understand PPP. For them, the only way to make a
serial line TCP/IP connection is SLIP.
Basic to SLIP is the concept of a link, or SLIP link. A SLIP link is
a serial communication path between the two computers. It is over
this link that the data is transferred as serialized IP
packets. That is, the system sends the IP packets to the SLIP
driver, which then encodes them to be able to be sent across the
serial line. In principle, just like PPP. Also like PPP, SLIP can be
configured to use either a single serial line connecting the two
machines, or the machines can be connected via modem across telephone lines.
Improvements have been made to SLIP between ODT and OpenServer.
Originally, you were required to have specific IP addresses defined
for each end of the link. This did not mean that each side had to have
a permanent IP address, but rather that the IP addresses had to be
defined as the link was being created. The SLIP implementation in
OpenServer provides for dynamic IP address allocation at connect
time. It still supports dedicated IP address allocation, as well.
Links are established using the slattach command which takes as
parameters the local connection (either as a tty name or a UUCP site
name), the IP address of both sides of the connection and other IP
and serial line attributes for this link. When the link is
established, SLIP has created a network interface, which
functions in principle the same way as the network interface you have
with an Ethernet card, or PPP for that matter. Like the other types of
network interfaces, a SLIP interface can be monitored with netstat
and accessed using telnet,
or any other network utility.
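As a sketch, a dedicated link might be brought up along these lines; the device name, addresses and exact argument order are illustrative, so check the slattach manual page on your release before relying on them:

```sh
# Attach SLIP to tty1a with a local and a remote address
# (hypothetical addresses; argument order per your man-page).
slattach /dev/tty1a 192.168.1.1 192.168.1.2

# The new SLIP interface now shows up alongside the others.
netstat -i
```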
The SCO implementation of SLIP in OpenServer provides for up
to 64 simultaneous serial network links. This includes both SLIP and
PPP (which we'll talk about shortly). Each of these connections can
be initiated from either side and on an "as-needed" basis.
Therefore, if you only have one serial line, this can be used for
either UUCP, SLIP or PPP.
There are two ways to make the link "inactive", but only
one way I would recommend. The first is to simply mark it down using
ifconfig. The problem is
that this simply makes the link inaccessible, but does not remove it.
Since the link is still there, the connection does not stop.
Therefore if this link is on a phone line, you can experience some
unexpectedly high telephone bills. The safer alternative actually
removes the link. This is done by simply killing (i.e. with kill
-15) the slattach
process. This removes the SLIP interface, so please make sure that
there are no more processes accessing the interface as you are liable
to have angry users banging on your door.
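The difference between the two approaches might look like this (the interface name and PID are placeholders):

```sh
# Not recommended on a dial-up line: the interface is marked down,
# but the slattach process -- and the phone connection -- live on.
ifconfig sl0 down

# The safer teardown: terminate slattach itself, which removes
# the SLIP interface and drops the line.
ps -ef | grep slattach    # note the process ID
kill -15 12345            # SIGTERM to that PID (placeholder)
```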
The primary command when using SLIP is slattach
or SLIP Link attach. It attaches SLIP to a serial port and creates
either a dedicated, dynamic incoming or dynamic outgoing link. Each
end of the link is given a network (IP) address, based on the
parameters passed to slattach.
Once communication is established, the link functions essentially the
same as any other network connection. Applications on both sides of
the link can communicate, just as if it were Ethernet, for example.
Unlike UUCP, which also uses serial lines, multiple processes can be
using the connection at the same time.
The newest SCO implementation of SLIP provides a means to identify
and filter out specific types of IP packets. This is accomplished
through the /etc/pppfilter
file, which, as you might have guessed from the name, is shared
with PPP. This file is created either when the SLIP STREAMS stack is
installed or by PPP and, by default, only contains comments until you
add the filters you want.
Like the other network protocols, SLIP resolves names either by the
/etc/hosts file or using
the Domain Name Service (DNS). The newest implementation also can be
configured to connect to sites listed in /usr/lib/uucp/Systems.
A dedicated SLIP connection is one in which communication is made
between hosts over a specific (dedicated) link. Either side may
initiate the link; however, this is (usually) across the same line.
This type of connection can only be used with a serial line that is
always up and dedicated to a single connection. This can include
regular serial cables or leased telephone lines.
A dynamic incoming line is paired with a dynamic outgoing line on the
remote side (or vice versa). The incoming line must accept the login
and password information forwarded by UUCP to establish the
connection. This type of connection allows the local system to accept
incoming SLIP connection requests just like normal logins. That is,
the calling system connects to the destination machine and must
get past getty. Therefore, this type of SLIP link requires a login
account on the side being called. When SLIP is configured using the
Network Configuration Manager, a slip account is created. Rather than
a normal shell, this account gets a special shell script that
initiates the slattach.
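The script the Network Configuration Manager installs will differ in its details, but the idea is simply a login "shell" that hands the line over to slattach; a purely hypothetical sketch:

```sh
# Hypothetical login script for the slip account: instead of an
# interactive shell, the login line itself becomes the SLIP link.
# Addresses and options are examples only.
exec slattach /dev/tty 10.1.1.1 10.1.1.2
```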
On the calling machine, that is, the side initiating the SLIP link, an
entry must exist for the destination machine in the
/usr/lib/uucp/Systems file. This must include the slip account name
and password for the remote machine.
SLIP Packet Filtering
By default, all packets are allowed through a
particular interface. Packet filtering allows you to control which
packets pass through the SLIP interface and which do not. It is
possible to filter out packets on several different criteria. For
example, you may choose to allow only TCP or ICMP packets or you may
want to disallow TCP and UDP packets. You could also choose to filter
out packets going to a particular destination or those coming from a
particular source. Depending on your needs, you can configure
filtering for each link individually.
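The filter expressions themselves live in /etc/pppfilter. The real expression syntax is documented in the pppfilter manual page; the fragment below is purely hypothetical and only meant to show the shape of such rules:

```
# /etc/pppfilter -- hypothetical fragment, not real syntax.
# Each link tag is paired with an expression describing which
# packets may pass on that link.
#
#   sl-link    pass tcp or icmp
#   sl-link    block host 10.9.8.7
```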
Point-to-Point Protocol (PPP)
SCO PPP is another means by which network connections can be made
across serial lines. Despite the limitations that might be imposed by
serial lines, PPP supports the same programs under TCP/IP that are
supported with other media. PPP has certain advantages over faster
network connections, such as Ethernet, in that it does not require
the specialized hardware. In addition, PPP can be configured to dial
up across a modem and make the connection only when necessary. This
allows network connections to anywhere in the world without the cost
of other methods. PPP provides advantages over other serial line
protocols like UUCP in that multiple, concurrent sessions are
possible, and programs such as telnet can be used across the link at
the same time.
The way PPP works is that the serial lines are linked to the IP layer
through what is called the PPP STREAMS stack. This stack consists of
several different modules: the packet filter module, the PPP driver,
the asynchronous HDLC protocol module, and the STREAMS
message-to-clist conversion module.
Like the other protocols at the physical layer, PPP is responsible
for encapsulating the IP packets for transmission and unpacking
them upon receipt. Each connection (local host to remote host) is
referred to as a PPP link. Because this link represents the physical
connection, each PPP link requires a unique IP address on both
sides of the link. Since this is a network link, both sides need to
be on the same network.
Like any other network connection, each layer of PPP must be
installed. This includes starting the PPP daemon (pppd).
Normally, pppd is started
by the /etc/rc2.d/S85tcp script
(which is linked to /etc/tcp).
However, the PPP configuration file (/etc/ppphosts)
must be present for pppd
to start. Starting pppd
does not necessarily mean that connections are made.
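In effect, the boot sequence does something like the following (a sketch only; the actual logic in /etc/tcp and the path to pppd may differ on your release):

```sh
# pppd is only started if its configuration file exists.
if [ -f /etc/ppphosts ]; then
    /usr/lib/ppp/pppd     # path is an assumption; check your system
fi
```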
Use of the same PPP link requires only that the applications use the
same IP address to specify the remote host. By using different IP
addresses to specify the same remote host, two applications can also
use individual PPP links to the same host, provided a separate serial
line is available for each separate address and both addresses have
unique PPP network interfaces. Once a PPP link is established, it
remains active until the administrator marks the interface down or
the idle timer expires. You can use ifconfig to mark an interface
down. The idle timer is set in the PPP link configuration.
The version of PPP provided with OpenServer has many
improvements over its predecessor. These include support for up to 64
simultaneous links; communication initiated in either direction;
dynamic acquisition of the ports (accessed as needed, thereby
allowing lines to be shared with UUCP); dynamic IP addressing,
including IP address pooling; dynamic reconfiguration, which does not
require relinking the kernel or rebooting the machine; packet
filtering; negotiation of link and IP parameters; authentication of
the local host by the remote and of the remote host by the local; and
PPP MIB support (for SNMP).
PPP provides two methods of authentication in the form of two
protocols: Password Authentication Protocol (PAP) and
Challenge-Handshake Authentication Protocol (CHAP). If authentication
is enabled, the administrator can choose either of these methods
(provided the other side also supports it). The
authentication is on a per-link basis, so you can enable it for some
links and disable it for others, as well as choose the authentication
method you prefer.
If you haven't made changes and are using mail locally, then you are
using MMDF. MMDF stands for Multi-channel Memorandum Distribution
Facility and is the default mail system used on SCO UNIX systems. It
is a collection of programs that allows relatively complex mail
configurations to be quickly established and easily maintained.
SCO has a built-in mechanism (mkdev
mmdf) that allows you to automatically configure MMDF. Often
the limitations of this automatic procedure require administrators to
go beyond the basics. When this happens, you need to know how the
parts of MMDF fit together and interact with the rest of the system.
In this section, we're going to talk about the basics of MMDF. This
is more than just an overview of the functionality. We're going to
get into the interactions between the various files so that if
something goes wrong, you will be in a much better position to find
the cause of the problem.
Before we get into things we need to first talk about a couple of
terms: Mail Transport Agent (MTA) and Mail User Agent
(MUA). The user agent is the program that the user sees, such as
/bin/mail. It hands off
the message to the transport agent, which does the processing and
sends it on its way. The default transport agent on SCO UNIX systems
is MMDF, and it is independent of which user agent is being used. (I
will be using the terms "transport agent" and "user agent"
rather than the abbreviations, which I feel tend to be distractions.)
MMDF provides transport through several different methods. The method
used is referred to as a channel. Sets of machines are grouped
together in domains, which may or may not be equivalent to the
Internet domains we talked about earlier. Usually (by default), the
MMDF domains are based on what channel is used to get to each
machine. However, other criteria, such as geographical location, can also be used.
When a user wants to send a message, he or she starts up whatever
program is normally used (the user agent), composes the message and
sends it off. The user agent hands the message off to the submit
program. Submit then finds
the destination machine by looking through the configured domains and
then sends the message into the appropriate channel. Once in the
appropriate channel, the deliver program starts the program that will
actually do the transfer.
The Flow of MMDF
When mail reaches the other end the process is reversed. The
receiving end of the transport agent hands the message to deliver to
be processed further. If the message is destined for a user on that
machine, it is placed in that machine's local channel. If not, it is
processed further and sent through the channel appropriate for the
destination.
There are three channels that are primarily used on SCO
systems: local, uucp and smtp. The local channel is used for mail
between users on the same machine and is configured by default, even
if you've never touched MMDF. The uucp channel (logically) uses UUCP
to transfer mail between remote machines. MMDF hands the message to
UUCP along with a remote execution command to send the job through
the remote mail system. The smtp (Simple Mail Transfer Protocol)
channel is used on systems running TCP/IP.
Prior to SCO UNIX Version 3.2 Release 4.0, all of the configuration
files had to be created by hand. This meant that there were the
inevitable errors and calls to support. With SCO UNIX 3.2 Release
4.0, a new utility was introduced to ease the MMDF configuration:
mkdev mmdf.
Despite its relatively simple nature, the mkdev
mmdf script is able to configure MMDF with very little
trouble. Even if you have a relatively complex setup, mkdev
mmdf can be used to configure your system.
The mkdev mmdf
script is fairly straightforward in what it does. You are prompted
for information about your local system, such as its name, whom
root's mail should be sent to, and so on. You are then asked questions on
the different channels you intend to use. As you answer, the mkdev
mmdf script is storing the information you input and will use
it later to build the appropriate files. In OpenServer, the mkdev
mmdf script actually starts the MMDF Manager. This is much
more extensive than the old mkdev
mmdf script. Since mkdev
mmdf in OpenServer calls the MMDF Manager, I will just refer
to the script for simplicity's sake.
Knowing what to answer is not always easy. However, going through the
mkdev mmdf step-by-step
will do little for your understanding of what is happening, nor will
it allow you to go beyond the basic configuration or troubleshoot
problem systems. In order to do that, we need to look at how MMDF is
laid out and how the files interact.
You might want to log in as either root or mmdf and go into the
/usr/mmdf directory. This
way you have complete access to all the files we need to look at.
Let's start first with an overview of the directory structure. There
are two places you will find MMDF files: /usr/mmdf
and /usr/spool/mmdf. You
only need to look at the spool files when problems arise and you need
to start tracking it down. For that reason, I am going to put off
talking about them until later.
The /usr/mmdf directory is
the heart of the MMDF sub-system. This contains all the binaries and
configuration files that MMDF uses, plus the log files it writes to
as it does its work. In this directory is at least one file:
mmdftailor. (If you've
run mkdev mmdf at least
once then there is also mmdftailor-).
This is the central MMDF configuration file. Most of the information
provided by this file is a result of our responses to questions when
we run mkdev mmdf.
Although this file is built by mkdev
mmdf, it is important to know the structure of this file
should this be misconfigured or you want to enhance your system.
Therefore, we are going to take a quick look at it. I will be
addressing the entries that you are likely to find on your system and
not give you extensive details on what could be there. For
details on each of the individual entries, take a look at the
mmdftailor man-page.
By default, the mmdftailor file
is broken down into four functional areas. There is no absolute order
in which things must appear, provided related entries appear in the
correct order relative to each other. In fact, in SCO UNIX Release 3.2
Version 2, the arrangement was much different than in the current
release. The order I will be describing below is simply the
convention used by SCO. OpenServer changed the layout of mmdftailor
once again: entries are grouped by their type and not by channel.
However, the concepts we will address here are the same.
The top portion is the administrative portion. Here, we define the
description of our system, such as the host and domain name. It is
here that you would define such things as an alternate location for
your users' mailboxes (normally it's /usr/spool/mail)
or set up name hiding. Name hiding is where you hide the name of the
machine where the mail originated and simply indicate the domain. We
get into this more in the section on configuring MMDF.
The next section is used for aliasing. This is where you define the
files, or tables, to look through when aliasing user names and
addresses. This is useful when you want to send mail to people at
remote sites without having to remember long addresses. The MTBL
lines tell us the names of the alias files to use. The name=
entry defines a kind of alias that will be used in the ALIAS
lines to configure that particular alias file. Keep in mind that the
name= line must
appear before the others, as this defines the alias entry that will be
referenced.
The ALIAS lines point to files (through the name=
entry) that are to be used when establishing user aliases. If you
look, each line points back to a different alias file. Here only the
name of the file is listed, as the system knows to look in the
/usr/mmdf/table directory. In
addition to the file name, there are configuration options for the
alias files themselves.
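Putting the two entry types together, an aliasing section might look like the following; the file name and options are illustrative, though they follow the pattern of the default SCO mmdftailor:

```
MTBL    name=alias, file="alias.user", show="User aliases"
ALIAS   table=alias, trusted, public
```

The MTBL line names the table and the file it lives in; the ALIAS line then refers back to the table by its name= alias.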
The next section is where the individual channels are defined and
configured. Like the aliases, there are MTBL entries for each channel,
with aliases defined by the name=
entries. There is one entry defining the domain file and one
defining the channel file. The behavior is the same as for the alias
files: an alias is defined for each of the domain and channel files
and references are made to that alias. Along with the file name entries
there are entries defining the domain specific characteristics (MDMN)
and the channel specific characteristics (MCHN).
Even if configured correctly, there are two channels that do not have
a corresponding domain. These are the badhosts channel and the
baduser channel. The badhosts channel is used whenever you have
addressed mail to an address that your local machine does not know
about. If configured, the system will try to send to a machine that
you have defined as your badhosts machine. This is not necessarily a
bad thing. All it means is that your local machine does not know how
to get there. If your badhosts machine does, then all is well.
This is commonly used when your email is routed through another host.
Your machine need not know how to deliver anything other than local
mail (or mail internal to your company). There may be only one
machine in your company that knows how to get to the outside. All mail
directed to addresses outside the company goes through this one
machine. If that machine is listed as your badhosts machine and it
knows how to get to the rest of the Internet, you will have no problems.
The baduser machine works in the same fashion, but for users that the
local machine does not know about. If you were to mail to a user
(without any address) that your machine didn't know about, MMDF could
be configured to send it to a baduser machine. This, however, is
normally only used within a company. Often there is a central mail
server that knows about all the users (and maybe all the machines)
in the company. Mail to any user that the other machines don't know
about ends up going here. This machine could also be the badhosts
machine for the company, which in turn has a badhosts machine outside
of the company.
At the bottom of the mmdftailor
file is where logging and locking are normally configured. Locking is
used to prevent concurrent access to a user's mailbox in order to
keep the mailbox consistent. Logging is just as the name suggests.
Although details of each of these are listed in mmdftailor,
I must mention one aspect of logging. By default, logging is only
turned on for fatal errors (FAT). This is normally sufficient. If you
are having mail problems, you may often need to turn on full trace
(FTR) logging. Be careful! If this is turned on, a single correctly
addressed message will increase the log file by 10K! If you have a
busy system, you can quickly have log files that are several
megabytes in size.
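A logging entry in mmdftailor might look something like this; the keyword and parameters are from memory of the typical defaults, so verify them against your own file and the man-page before editing:

```
MMSGLOG level=FAT, size=20
```

Changing level=FAT to level=FTR would turn on the full trace, with the log-growth consequences described above.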
When doing some MMDF testing on my system while I was in
support, I came in one Monday morning to find some very disturbing
messages on my screen. Sometime over the weekend my hard disk had
filled up and I was now getting messages that the root filesystem was
full. When I finally got in, I discovered that I had a 45MB MMDF log
file.
That about covers the basic entries that you find in
mmdftailor. There are dozens
of values that can be set, most of which are not set by default or
have default values defined within MMDF. Since they are all listed in
the man-page, I won't go into details about them here. Note that in
OpenServer, no logging or locking is defined by default, so each
takes on a default value.
Within the /usr/mmdf
directory, there are several sub-directories. As you would expect,
the bin directory is where
the binary programs are kept. This is where both the deliver
and submit programs
reside. The remaining programs serve various functions as MMDF runs
its course. Two important ones are checkque
and checkaddr, both
of which we discuss in more detail in the section on configuring
MMDF.
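Both live in /usr/mmdf/bin and are worth knowing early; typical invocations look like this (the address is an example):

```sh
# Summarize what is waiting in each MMDF queue.
/usr/mmdf/bin/checkque

# Ask MMDF whether it can resolve a given address.
/usr/mmdf/bin/checkaddr jimmo@siemau.com
```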
The chans directory
contains the channel programs that deliver
calls to send messages along through the correct channel. The
log directory is, as its name implies, where MMDF writes logging
information. Also
as its name implies, the tmp
directory is for temporary files that MMDF creates as it's going
about its business.
The last directory, table,
is where the address oriented configuration information is kept.
There are three types of files of importance here: alias files,
domain files and channel files. As it processes mail, MMDF uses
the contents of these files to determine where mail should go.
In our discussion of mmdftailor,
we talked about the ALIAS, MCHN and MDMN entries. As you see from
their corresponding MTBL entries, these relate to the files in this
directory. For example, let's take the MTBL entry:
name=lalias, file="alias.list", show="Mailing list"
This refers to a file alias.list.
Within the /usr/mmdf/table
directory there is a file alias.list.
In this line:
name=locdom, file="local.dom", show="Local Domain"
There is a reference to the file local.dom.
There is also a file