Jim Mohr's SCO Companion


Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.

Be sure to visit Jim's great Linux Tutorial web site at https://www.linux-tutorial.info/

The Computer Itself

Hardware is my life. I love working with it. I love installing it. I love reading about it. I am by no means the kind of expert who can tell you about every chip on the motherboard. In fact, I enjoy being a "jack-of-all-trades." Of all the trades I am a jack of (of which I am a jack?), I enjoy hardware the most.

It's difficult to say why. There is, of course, the fact that without the hardware, nothing works. Software, without hardware, is just words on a page. However, it's something more than just that. I like the idea that it all started out as rocks and sand and now it can send men to the moon and look inside atoms.

I think that this is what it's all about. Between the hardware and the operating system (I also love operating systems) you've pretty much got the whole ball of wax.

During the several years I spent on the phone, it was common to have people call in with no idea of what kind of computer they had. I remember one conversation with a customer where he answered "I don't know" to every question I asked about his hardware. Finally, he got so frustrated he said, "Look! I'm not a computer person. I just want you to tell me what's wrong with my system."

Imagine calling your mechanic to say there is something wrong with your car. He asks you whether your car has 4 or 8 cylinders, whether it has fuel injection or not, whether it is automatic or manual, and whether it uses unleaded or leaded gas. You finally get frustrated and say, "Look. I'm not an engine person. I just want you to tell me what's wrong with my car."

The solution is to drive your car to the mechanic and have it checked. However, you can't always do that with your computer system. You have dozens of people who rely on it to do their work. Without it, the business stops. In order to better track down and diagnose hardware problems, you need to know what to look for.

This section should serve as background for many issues we cover elsewhere. This chapter is designed more to familiarize you with the concepts than to make you an expert on any aspect of the hardware. If you want to read more about PC hardware, a good place to start is the Winn Rosch Hardware Bible from Brady Books.

Basic Input/Output Services and the System Bus

A key concept for this discussion is the bus. So, just what is a bus? Well, in computer terms it has a similar meaning as your local county public transit. It is used to move something from one place to another. For the county transit bus, what it moves is people. For a computer bus, what it moves is information.

The information is transmitted along the bus as electric signals. If you have ever opened up a computer, you probably saw that there was one central printed circuit board with the CPU, the expansion cards and loads of chips sticking out of it. The electronic connections between these parts are referred to as a bus.

The signals that move along a computer bus come in two basic forms: control and data. Control signals do just that: they control things. Data signals are just that: data. How this happens and what each part does, we will get to as we move along.

In today's PC market, there are several buses which have many of the same functions but approach things quite differently. In this section, we are going to talk about what goes on between the different devices on the bus, what the main components are that communicate along the bus, and then cover the different bus types.

Despite differences in bus types, certain aspects of the hardware are common among all PCs. The Basic Input Output System (BIOS), interrupts, Direct Memory Access (DMA) channels and base addresses are just a few. Although SCO UNIX almost never needs the system BIOS once the kernel is loaded, understanding its function and purpose is useful in understanding the process that the computer goes through from the time you hit the power switch to when SCO UNIX has full control of the hardware.

The BIOS is the mechanism DOS uses to access the hardware. DOS (or a DOS application) makes BIOS calls, which then transfer the data to and from the devices. Except for the first few moments of the boot process and the last moments of a shutdown, SCO UNIX may never use it again.

The "standard" BIOS for PCs is the IBM BIOS, but that's simply because "PC” is an IBM standard. However, "standard" does not mean "most common," as there are several other BIOS vendors, such as Phoenix and AMI.

DOS or a DOS application makes device-independent calls to the BIOS in order to transfer data. The BIOS then translates these into device-dependent instructions. For example, DOS (or the application) requests that the hard disk read a certain block of data. The application does not care what kind of hard disk hardware there is, nor should it. It is the job of the BIOS to make the translation into something the specific hard drive can understand.

In SCO UNIX, on the other hand, there are special programs called device drivers that handle the functions of the BIOS. As we talked about in the section on the kernel, device drivers are sets of routines that directly access the hardware, just as the BIOS does. It is important to note that although the SCO UNIX kernel accesses devices primarily through device drivers, there are circumstances where the BIOS is accessed. For example, certain video card drivers use it, as does the SCO UNIX kernel itself when rebooting the system after you issue the reboot or shutdown command.

The fact that SCO UNIX bypasses the BIOS and goes directly to the hardware is one reason why some hardware will work under DOS but not under SCO UNIX. In some instances, the BIOS has been specially designed for the machine it runs on. Because of this, it can speak the same dialect of "machine language" that the rest of the hardware speaks. However, since UNIX does not speak the same dialect, things get lost in the translation.

The Intel 80x86 family of processors (on which SCO runs) has an I/O space that is distinct from memory space. What this means is that memory (or RAM) is treated differently from I/O. Other machine architectures, such as the Motorola 68000 family, see accessing memory and I/O as the same thing. On the 80x86, although the addresses for I/O devices appear as "normal" memory addresses and the CPU performs a read or write as it would to RAM, the result is completely different.

When accessing an I/O device, either for a read or a write, the CPU uses the same address and data lines as it does when accessing RAM. The difference lies in the M/IO# line on the CPU. For those not familiar with digital electronics, this can also be described as the Memory/Not-IO line. That is, if the line is high, the CPU is addressing memory. If it is low, it is addressing an I/O device.

Although the SCO UNIX operating system is much different from DOS, it still must access the hardware in the same fashion. There are assembly language instructions that allow an operating system (or any program, for that matter) to access the hardware correctly. By passing these instructions the base address of the I/O device, the program tells the CPU to keep the M/IO# line low and therefore access the device and not memory.

You can see the base address of each device on the system every time you boot. The hardware screen shows you the devices it recognizes along with certain values such as the base address, the interrupt vector and the DMA channel. You can also see this same information by running the hwconfig command.

Although there are 16 I/O address lines coming from the 80386, some PCs only have 10 of these wired. So instead of having 64K of I/O address space (2^16), there is only 1K (2^10). When the system detects this, you see the message "10 bits of I/O address decoding" when the system is booting. Some machines have 11 or more address lines and, therefore, have a larger I/O space.

If your motherboard only uses 10 address lines, devices on the motherboard that have I/O addresses (such as the DMA controller and PIC) will appear at their normal addresses as well as at "image" addresses. This is because the high 6 bits are ignored, so any 16-bit address whose lower ten bits match will show up as an "image" address. Since there are 6 bits that are ignored, there are 63 possible "image" addresses (64 minus the one for the "real" address).

These "image" addresses may cause conflicts with hardware that has I/O addresses higher than 0x3FF (1023), which is the highest address possible with only 10 address lines. Therefore, if your motherboard only has 10 bits of I/O address decoding, you shouldn't put devices at addresses higher than 0x3FF.

When installing, it is vital that no two devices have overlapping (or identical) base addresses. Whereas you can share interrupts and DMA channels on some machines, you can never share base addresses. If you attempt to read a device that has an overlapping base address, you may end up getting information from both devices.

If you are installing a board whose default base address is the same as that of one already in the system, one of them needs to be changed before both can work. Additionally, you are almost always asked for the base address of a card during its installation, so you will need to keep track of this. See the section on troubleshooting for tips on maintaining a notebook with this kind of information.

Table 0.1 contains a list of the more common devices and the base address ranges that they use:

Address range    Device
0x000-0x0FF      Motherboard devices (DMA Controller, PIC, timer chip, etc.)
0x1F0-0x1F8      Fixed disk controller (WD10xx)
0x278-0x27F      Parallel port 2
0x2F8-0x2FF      Serial port 2
0x378-0x37F      Parallel port 1
0x3B0-0x3BF      Monochrome display and parallel port 2
0x3C0-0x3CF      EGA or VGA adapter
0x3D0-0x3DF      CGA, EGA or VGA adapter
0x3F0-0x3F7      Floppy disk controller
0x3F8-0x3FF      Serial port 1

Table 0.1 Common hex addresses

The Expansion Bus

It is generally understood that the speed and capabilities of the CPU are directly related to the performance of the system as a whole. In fact, the CPU is a major selling point of PCs, especially among less experienced users. One aspect of the machine that is less understood, and therefore less likely to be an issue, is the expansion bus.

The expansion bus, simply put, is the set of connections and slots that allow users to add to, or expand, their system. Although not really an "expansion" of the system, you often find video cards and hard disk controllers attached to the "expansion bus."

Anyone who has opened up their machine has seen parts of the expansion bus. The slots used to connect cards to the system are part of this bus. Note that people will often refer to this bus as the bus. While it will be understood what is meant, there are other buses on the system. Just keep this in mind as you go through this chapter.

Most people are aware of the differences in CPUs. This could be whether the CPU is 16-bit or 32-bit, what the speed of the processor is, whether there is a math co-processor, and so on. The concepts of BIOS and interrupts are also very commonly understood.

One part of the machine's hardware that is somewhat less known and often causes confusion is the bus architecture. This is the basic way in which the hardware components (usually on the motherboard) all fit together. There are three different bus architectures on which SCO operating systems will currently run. (Note: Here I am referring to the main system bus, although SCO can access devices on other buses.)

The three major types of bus architectures used are the Industry Standard Architecture (ISA), the Extended Industry Standard Architecture (EISA), and the Micro-Channel Architecture (MCA). Both ISA and EISA machines are manufactured by a wide range of companies, but only a few (primarily IBM) manufacture MCA machines.

In addition to the three mentioned above, there are a few other bus types that can be used in conjunction with, or supplementary to, the three. These include the Small Computer System Interface (SCSI), the Peripheral Component Interconnect (PCI) and the Video Electronics Standards Association Local Bus (VL-Bus or VLB).

Both PCI and VLB exist as separate buses on the computer motherboard. Expansion cards exist for both these types of buses. You will usually find either PCI or VLB in addition to either ISA or EISA. Sometimes, however, you can also find both PCI and VLB in addition to the primary bus. In addition, it is possible to have machines that only have PCI, since it is a true system bus and not an expansion bus like VLB. However, as of this writing few machines provide PCI-only expansion buses.

SCSI, on the other hand, complements the existing bus architecture by adding an additional hardware controller to the system. There are SCSI controllers (more commonly referred to as host adapters) that fit into ISA, EISA, MCA, PCI or VL-Bus slots.

Industry Standard Architecture (ISA)

As I mentioned before, most people are generally aware of the relationship between CPU performance and system performance. However, every system is only as strong as its weakest component. Therefore, the expansion bus also sets limits on system performance.

There were several drawbacks with the original expansion bus in the IBM PC. First, it was limited to only 8 data lines. This meant that only 8 bits could be transferred at a time. Second, the expansion bus was, in a way, directly connected to the CPU, so it operated at the same speed as the CPU. This meant that in order to improve CPU performance, the expansion bus had to be altered as well. The result would have been that existing expansion cards became obsolete.

In the early days of PC computing, IBM was not known to want to cut its own throat. It had already developed quite a following for the IBM PC among users and developers. If it decided to change the design of the expansion bus, developers would have to re-invent the wheel and users would have to buy all new equipment. There was the risk that, instead of sticking with IBM, users and developers would switch to another platform.

Rather than risking that, IBM decided that backward compatibility was a paramount issue. One of the key changes was severing the direct connection between the expansion bus and CPU. As a result expansion boards could operate at a different speed than the CPU. This allowed users to keep existing hardware and allowed manufacturers to keep producing their expansion cards. As a result, the IBM standard became the industry standard and the bus architecture became known as the Industry Standard Architecture, or ISA.

In addition to this change, IBM added more address and data lines. They doubled the data lines to 16 and increased the address lines to 24. This meant that the system could address up to 16 megabytes of memory, the maximum that the 80286 CPU (Intel's newest central processor at the time) could handle.

When the 80386 came out, the connection between the CPU and bus clocks was severed completely, since no expansion board could operate at the 16MHz or more of the 80386. The bus speed does not need to be an exact fraction of the CPU speed, but an attempt has been made to keep it so, since keeping the bus and CPU synchronized makes it easier to transfer data. The CPU will only accept data when it coincides with its own clock. If an attempt is made to speed up the bus a little, the data must wait until the right moment in the CPU's clock cycle before it can be passed. Therefore, nothing is gained by making the bus faster.

One method used to speed up the transfer of data is Direct Memory Access or DMA. Although DMA existed in the IBM XT, the ISA bus provided some extra lines. DMA allows the system to move data from place to place without the intervention of the CPU. In that way, data can be transferred from, let's say, the hard disk to memory while the CPU is working on something else. Keep in mind that in order to make the transfer, the DMA controller must have complete control of both the data and the address lines, so the CPU cannot be accessing memory itself at this time.

Figure 02831 - Direct Memory Access

Let's step back here a minute. It is somewhat of a misnomer to say that a DMA transfer occurs without intervention from the CPU, as it is the CPU that must initiate the transfer. However, once the transfer is started, the CPU is free to continue with other activities. DMA controllers on ISA-Bus machines use "pass-through" or "fly-by" transfers. That is, the data is not latched, or held internally, but rather simply passes through the controller. If it were latched, two cycles would be needed: one to latch the data into the DMA controller and a second to pass it to the device or memory (depending on which way it was headed).

Devices tell the DMA controller that they wish to make DMA transfers through the use of one of three "DMA Request" lines, numbered 1-3. Each of these lines is given a priority based on its number, with 1 being the highest. The ISA-Bus includes two sets of DMA controllers: four 8-bit channels and four 16-bit channels. The channels are labeled 0-7, with 0 having the highest priority.

Each device on the system that is capable of doing DMA transfers is given its own DMA channel. The channel is set on the expansion board, usually by means of jumpers. The pins that these jumpers are connected to are usually labeled DRQ, for DMA Request.

The two DMA controllers (both Intel 8237s), each with 4 DMA channels, are cascaded together. The master DMA controller is the one that is connected directly to the CPU. One of its DMA channels is used to connect to the slave controller. Because of this, there are actually only seven channels available.

Everyone who has had a baby knows what an interrupt driven operating system like SCO UNIX goes through on a regular basis. Just like a baby when it needs its diaper changed, when a device on the expansion bus needs servicing, it tells the system by generating an interrupt. For example, when the hard disk has transferred the requested data to or from memory, it signals the CPU by means of an interrupt. When keys are pressed on the keyboard, the keyboard interface also generates an interrupt.

Upon receipt of such an interrupt, the system executes a set of functions commonly referred to as an Interrupt Service Routine, or ISR. Since the reaction to a key being pressed on the keyboard is different from the reaction when data is transferred from the hard disk, there need to be different ISRs for each device. Although the behavior of ISRs is different under DOS than under UNIX, their functionality is basically the same. For details of how this works under SCO, see the chapter on the kernel.

On the CPU there is a single interrupt request line. This does not mean that every device on the system is connected to the CPU via this single line. Just like there is a DMA controller to handle DMA requests, there is also an interrupt controller to handle interrupt requests. This is the Intel 8259 Programmable interrupt controller, or PIC.

On the original IBM PC, there were six "Interrupt Request" lines, numbered 2-7. Here again, the higher the number, the lower the priority. (Interrupts 0 and 1 are used internally and are not available for expansion cards.)

The ISA-Bus also added an additional PIC, which is "cascaded" off the first one. With this addition, there were now 16 interrupt values on the system. However, not all of these were available to devices. Interrupts 0 and 1 were still used internally, but so were interrupts 8 and 13. Interrupt 2 was something special. It, too, was reserved for system use, but instead of being assigned to a device of some kind, an interrupt on line 2 actually means that an interrupt is coming from the second PIC, similar to the way cascading works on the DMA controller.

A question that I brought up when I first started learning about interrupts is "What happens when the system is servicing an interrupt and another one comes in?" Well, there are two mechanisms for helping with this.

Remember that the 8259 is a "programmable" interrupt controller. There is a machine instruction called Clear Interrupt Flag, or CLI. If a program is executing what is called a critical section of code (one that should not be stopped in the middle), the programmer can issue the CLI instruction and disable acknowledgment of all incoming interrupts. As soon as the critical section is left, the program should execute a Set Interrupt Flag, or STI, instruction in a timely manner.

I say "should" because the programmer doesn't have to. There could be a CLI instruction in the middle of a program somewhere, and if the STI is never called, no more interrupts will be serviced. Nothing, aside from common sense, prevents him or her from doing this. Should the program take too long before it calls the STI, interrupts could get lost. This is what happens on busy systems when characters typed at the keyboard "disappear".

The second mechanism is that the interrupts are priority based. The lower the interrupt request level, or IRQ, the higher the priority. This has an interesting side effect, since the second PIC (or slave) is bridged off the first PIC (or master) at IRQ 2. The interrupts on the first PIC are numbered 0-7 and on the second PIC 8-15, but interrupt 2 is where the slave PIC is attached to the master. Therefore, the actual priority is 0, 1, 8-15, 3-7.

Table 0.2 contains a list of the standard interrupts.

IRQ    Device
0      system timer
1      keyboard
2      2nd level interrupt
3      serial port 2
4      serial port 1
5      Printer 2
6      floppy disk controller
7      Printer 1
8      real-time clock
9      not assigned
10     not assigned
11     not assigned
12     not assigned
13     math co-processor
14     Hard Disk
15     Hard Disk

Table 0.2 - Default Interrupts

One consideration needs to be made when dealing with interrupts. On XT machines, IRQ 2 was a valid interrupt. On AT machines, IRQ 2 is bridged to the second PIC. So, in order to ensure that devices configured to IRQ 2 worked properly, the IRQ 2 pin on all the expansion slots was connected to the IRQ 9 input of the second PIC. In addition, all the devices attached to the second PIC have an IRQ value associated with where they are attached to that PIC, plus the fact that they generate an IRQ 2 on the first PIC.

The PICs on an ISA machine are edge-triggered. This means that they react only when the interrupt signal is transitioning from low to high. That is, it is on a transition edge. This becomes an issue when you attempt to share interrupts, which is where two devices use the same interrupt.

Assume you have a serial port and a floppy controller both at interrupt 6. If the serial port generates an interrupt, the system will "service" it. If the floppy controller generates an interrupt before the system has finished servicing the interrupt for the serial port, the interrupt from the floppy gets lost. There is another way to react to interrupts, called "level-triggered," which we will get to shortly.

As I mentioned earlier, a primary consideration in the design of the AT Bus (as the changed PC bus came to be called) was that it maintain compatibility with its predecessors. It maintains compatibility with PC expansion cards but takes advantage of 16-bit technology. In order to do this, connectors were not changed, only added. Therefore, a card designed for the 8-bit PC bus could slide right into a 16-bit slot on the ISA-Bus and no one would know the difference.

Micro-Channel Architecture (MCA)

The introduction of IBM's Micro Channel Architecture (MCA) was a redesign of the entire bus architecture. Although IBM was the developer of the original AT architecture, which later became ISA, there were many companies producing machines that followed this standard. The introduction of MCA meant that IBM could produce machines to which it alone had the patent rights.

One of the most obvious differences is the smaller slots required for MCA cards. ISA cards are 4.75 x 13.5 inches, compared with the 3.5 x 11.5 inches of MCA cards. As a result, more cards can fit into the same area. The drawback is that ISA cards cannot fit into MCA slots and MCA cards cannot fit into ISA slots. Although this might seem like IBM had decided to cut its own throat, the changes made in creating MCA made it very appealing.

Part of the decrease in size was a result of surface mount components, or surface mount technology (SMT). Previously, cards used "through-hole" mounting, where holes were drilled through the circuit board (hence the name). Chips were mounted in these holes or in holders that were mounted in the holes. Surface mount does not use holes and, as a result, looks "flattened" by comparison. This not only saves space but also time and money, as SMT cards are easier to produce. In addition, the spacing between the pins on the card (0.050") corresponds to the spacing on the chips. This makes designing the boards much easier.

Micro Channel also gives increases in speed, since there is a ground on every fourth pin. This reduces interference, and as a result the MCA bus can operate at ten times the speed of non-MCA machines and still comply with FCC regulations on radio frequency interference.

Another major improvement was the expansion of the data bus to 32 bits. This meant that machines were no longer limited to 16 megabytes of memory, but could now access 4 gigabytes.

One of the key changes in the MCA architecture was the concept of hardware-mediated bus arbitration. With ISA machines, devices could share the bus, and the OS was required to arbitrate who got a turn. With MCA, that arbitration is done at the hardware level, freeing the OS to work on other things. This also enables multiple processors to use the bus. To implement this, there are several new lines on the bus, including four lines that determine the arbitration bus priority level, which allows 16 different priority levels that a device could have. Who gets the bus depends on the priority.

From the user's perspective, installation of MCA cards is much easier than for ISA cards. This is due to the introduction of the Programmable Option Select, or POS. With this, the entire hardware configuration is stored in the CMOS. When new cards are added, you are required to run the machine's reference disk. In addition, each card comes with an options disk which contains configuration information for the card. With the combination of reference disk and options disk, conflicts are all but eliminated.

Part of the MCA spec is that each card has its own unique identifying number encoded into the firmware. When the system boots, the settings in the CMOS are compared to the cards that are found on the bus. If one has been added or removed, the system requires you to boot using the reference disk to ensure things are set up correctly.

As I mentioned, each options disk holds the necessary configuration information. This is contained within the Adapter Description File (ADF). The ADF contains all the information necessary for the expansion card to be recognized by your system. Because it is only a few kilobytes in size, many ADF files can be stored on a single floppy. This is useful in situations like the one we had in SCO Support. There were several MCA machines in the department, with dozens of expansion cards, each with their own ADF file. Rather than having copies of each of the diskettes, the analysts who supported MCA machines (myself included) each had a single disk with all the ADF files. (Eventually that too became burdensome, so we copied the ADF files into a central directory where we could copy them as needed.) Any time we needed to add a new card to our machines for testing, we didn't need to worry about the ADF files, as they were all in one place.

Since each device has its own identification number and this number is stored in the ADF, the reference diskette can find the appropriate one with no problem. All ADF files have names such as @BFDF.ADF, so it isn't obvious what kind of card an ADF file is for just by looking at the name. However, since the ADF files are simply text files, it is easy to figure out by looking at the contents.

Unlike ISA machines, the MCA architecture allows for interrupt sharing. Since many expansion boards are limited to a small range of interrupts, it is often difficult, if not impossible, to configure every combination on your system. Interrupt sharing is possible on MCA machines because they use something called level-triggered interrupts, or level-sensitive interrupts.

With the edge-triggered interrupts, or edge-sensitive interrupts, that the standard ISA bus uses, an interrupt is generated and then drops. This sets a flag in the PIC, which figures out which device generated the interrupt and services it. If interrupts were shared with edge-triggered interrupts, any interrupt that arrived between the time the first one was generated and serviced would be lost. This is because the PIC has no means of knowing that a second one occurred. All it sees is that an interrupt occurred.

Figure 02832 Interrupt signal

With level-triggered interrupts, when an interrupt is generated, it is held high until the PIC forces it low after the interrupt has been serviced. If another device were on the same interrupt line, the PIC would try to pull the line low; however, the second device would keep it high. The PIC would then see that it was still high and would be able to service the second device.

Despite the many obvious advantages of MCA, there are a few drawbacks. One of the primary drawbacks is the interchangeability of expansion cards between architectures. MCA cards can only fit in MCA machines. However, it is possible to use an ISA card in an EISA machine, and EISA machines are what we will talk about next.

Extended Industry Standard Architecture (EISA)

In order to break the hold that IBM had on the 32-bit bus market with the Micro-Channel Architecture, a consortium of computer companies, led by Compaq, issued their own standard in September 1988. This new standard was an extension of the ISA bus architecture and was (logically) called the Extended Industry Standard Architecture (EISA). EISA offered many of the same features as MCA, but with a different approach.

Although EISA provides some major improvements, it has maintained backward compatibility with ISA boards. Therefore, existing ISA boards can be used in EISA machines. In some cases, such boards can even take advantage of the features that EISA offers.

In order to maintain this compatibility, EISA boards are the same physical size as their ISA counterparts and provide connections to the bus in the same locations. The original design called for an extension of the bus slot, similar to the way the AT slots were an extension of the XT slots. However, this was deemed impractical, as some hardware vendors had additional contacts that extended beyond the ends of the slots. There was also the issue that in most cases the slots would extend the entire length of the motherboard. This meant that the motherboard would need to be either longer or wider to handle the longer slots.

Instead, the current spec calls for the additional connections to be "intertwined" with the old ones and extending lower. In what used to be gaps between the connectors, there are now leads to the new connectors. Therefore, EISA slots are deeper than those for ISA machines. By looking at EISA cards you can easily tell them from ISA cards by the two rows of connectors.

Figure 02833 shows what the ISA and EISA connections look like. Note that this is not to scale.

Figure 02833 Comparison of ISA and EISA connections

Another major improvement of EISA over ISA is bus arbitration. Bus arbitration is the process by which devices "discuss" whose turn it is on the bus and then let one of them go. In XT and AT class machines, control of the bus was completely managed by the CPU. EISA includes additional control hardware to take this job away from the CPU. This does two important things: first, the CPU is now free to carry on with more important work; second, the CPU gets to use the bus only when its turn comes around.

Hmmm. Does that sound right? Since the CPU is the single most important piece of hardware on the system, shouldn't it get the bus whenever it needs it? Well, yes and no. The key issue of contention is the use of the word "single." EISA was designed with multi-processing in mind, that is, computers with more than one CPU. If there is more than one CPU, which one is more important?

The term used here is bus arbitration. Each of the six devices that EISA allows to take control of the bus has its own priority level. A device signals its desire for the bus by sending a signal to the Centralized Arbitration Control (CAC) unit. If conflicts arise (i.e., multiple requests), the CAC unit resolves them according to the priority of the requesting devices. Certain activity, such as DMA and memory refresh, has the highest priority, with the CPU following close behind. Such devices are called "bus mastering devices" or "bus masters," as they become the master of the bus.

The EISA DMA controller was designed for devices that cannot take advantage of the bus mastering capabilities of EISA. The DMA controller supports ISA, with ISA timing and 24-bit addressing as the default mode. However, it can be configured by EISA devices to take full advantage of the 32-bit capabilities.

Another advantage that EISA has is the concept of dual buses. Since cache memory is considered a basic part of the EISA specification, the CPU can often continue working for some time even if it does not have access to the bus.

A major drawback of EISA (as compared with MCA) is that in order to maintain compatibility with ISA, EISA speed improvements cannot extend into memory. This is because the ISA bus cannot handle the speed requirements of high-speed CPUs. Therefore, EISA requires a separate memory bus. This results in every manufacturer having its own memory expansion cards.

In our discussion on ISA we talked about the problems with sharing edge-triggered interrupts. MCA, on the other hand, uses level-triggered interrupts, which allow interrupt sharing. EISA uses a combination of the two. Obviously, it needs to support edge-triggered interrupts to maintain compatibility with ISA cards. However, it allows EISA boards to configure a particular interrupt as either edge or level triggered.

As with MCA, EISA allows each board to be identified at boot-up. Each manufacturer is assigned a prefix code to ease identification of the board. EISA also provides a configuration utility, similar to the MCA reference disk, to allow configuration of the cards. This utility is often referred to as the EISA-config, EISA Configuration Utility, or ECU. In addition, EISA supports automatic configuration, which allows the system to recognize the hardware at boot-up and configure itself accordingly. This can present problems for SCO systems, as drivers in the kernel rely on the configuration remaining constant. Since each slot on an EISA machine is given a particular range of base addresses, it is necessary to modify your kernel prior to making such changes.

VESA Local Bus (VLB)

As I've said before and will say again, the system is only as good as its weakest part. With computer systems, that weakest link has been the I/O subsystem for many years. CPUs got faster, but the system was still limited by slow communication with the outside world. The 32-bit buses of MCA and EISA made significant advances and increased throughput by a factor of 5 or more. However, this was not enough.

The Video Electronics Standards Association, or VESA (a consortium of over 120 companies), came up with an immediate solution to this problem. Although originally intended as a means of speeding up video transfer, the VESA local bus, or VL-Bus, can achieve data transfer speeds that make it a worthy partner to fast 80386 and 80486 CPUs and even the Intel Pentium.

Like EISA, the VL-Bus is a hybrid. That is, it is not a complete change from ISA as MCA was. Whereas EISA interleaves the new connections with the old, the VL-Bus extends the existing slots, something EISA decided not to do. Because of the load the VL-Bus puts on the system, usually only three slots on the motherboard have the VL-Bus extension. The others remain plain ISA, EISA, or MCA.

The reason for the three-card limit is one of performance. There is a slight cost increase for adding the extra connectors and traces; however, the lure of increased performance would outweigh the cost. Alas, things are not that easy. The CPU directly accesses the control, address, and data pins of the VL-Bus cards (that's why it's called "local"). However, unless you want to reduce the speed of the CPU (ya, right), the CPU just can't handle more than three external loads. In practice, this means that although there are three slots, the CPU can't handle more than one or two at speeds greater than 33MHz.

On the other hand, it is relatively inexpensive to change an existing ISA or EISA design into a VL-Bus design. There are a few new chips, a couple of new traces on the motherboard, and two or three new connectors. There isn't even a change to the BIOS.

VL-Bus is not intended as a replacement for ISA, although MCA and EISA sell themselves as such (or as a replacement for each other, depending on whose literature you read). Current technology doesn't seem to allow it. As I mentioned, you can only have one or two VL-Bus devices before you have to consider reducing your CPU speed. Therefore, you have to have some other kind of bus slots as well.

ISA/EISA slots are the same length, with VL-Bus slots hanging down "below" them. Because the VL-Bus slots are an extension of the existing slots, it is not necessary to leave those slots empty if you have only one or two VL-Bus cards. In fact, all the slots with the VL-Bus extension can be filled with other cards (ISA, EISA, or MCA).

Watch out for machines that are advertised as "local bus." It is true that they might be, but there is a catch. Sometimes they have an SVGA chip or hard disk controller built onto the motherboard. These are connected directly to the CPU and are therefore "local," but they do not adhere to the VL-Bus spec.

Peripheral Component Interconnect (PCI)

More and more machines on the market today include PCI local buses. Among the advantages PCI offers over the VL-Bus are higher performance, automatic configuration of peripheral cards, and superior compatibility. A major drawback of the other bus types (ISA, EISA, MCA) is the I/O bottleneck. Local buses overcome this by accessing memory using the same signal lines as the CPU. As a result, they can operate at the full speed of the CPU as well as utilize the 32-bit data path. Therefore, I/O performance is limited by the card and not the bus.

Although PCI is referred to as a local bus, it actually lies somewhere "above" the system bus. It is therefore often referred to as a "mezzanine bus" and has electronic "bridges" between the system bus and the expansion bus. As a result, the PCI bus can support up to 5 PCI devices, whereas the VL-Bus can only support two or three. In addition, the PCI bus can reach transfer speeds four times that of EISA or MCA.

Despite PCI being called a mezzanine bus, it could replace ISA, EISA, or MCA buses, although in most cases PCI is offered as a supplement to the existing bus type. If you look at a motherboard with PCI slots, you will see that they are completely separate from the other slots, whereas VL-Bus slots are extensions of the existing slots.

PCI offers additional advantages over the VL-Bus, as the VL-Bus cannot keep up with the speed of the faster CPUs, especially if there are multiple VL-Bus devices on the system. Because PCI works together with the CPU, it is much better suited to multi-tasking operating systems like UNIX, whereas the CPU cannot work independently while a VL-Bus device is running.

Like EISA and MCA, PCI boards have configuration information built into the card. As the computer is booting, the system can configure each card individually based on system resources. This configuration is done "around" existing ISA, EISA and MCA cards on your system.

To overcome a shortcoming PCI has when transferring data, Intel (designer and chief proponent of PCI) has come up with PCI-specific chip sets, which allow data to be stored on the PCI controller, freeing the CPU to do other work. Although this may delay the start of the transfer, once the data flow starts, it should continue uninterrupted.

A shortcoming of PCI (at least from SCO's perspective) is that while ISA and EISA cards can be swapped for VL-Bus cards without any major problems, this is not so for PCI cards. Significant changes need to be made to both the kernel and device drivers to account for the differences.

The Small Computer Systems Interface (SCSI)

The SCSI bus is an extension of your existing bus. A controller card, called a host adapter, is placed into one of your expansion slots. A ribbon cable, containing both data and control signals, then connects the host adapter to your peripheral devices.

There are several advantages to having SCSI in your system. If you have a limited number of bus slots, then a single SCSI host adapter allows you to add up to seven more devices while taking up only one slot on older SCSI systems, and up to 15 devices with Wide SCSI. SCSI has higher throughput than either IDE or ESDI. SCSI also supports many more different types of devices.

There are five different types of SCSI. The original SCSI specification is commonly referred to as SCSI-1. The newer specification, SCSI-2, offers speed and performance increases over SCSI-1 as well as adding new commands. Fast SCSI increases throughput to over 10MB/second. Fast-Wide SCSI provides a wider data path, throughput of up to 40MB/second, and up to 15 devices. The last type, SCSI-3, is still being developed as of this writing; it will provide the same functionality as Fast-Wide SCSI as well as support longer cables and more devices.

Each SCSI device has its own controller and can send, receive, and execute SCSI commands. As long as it communicates with the host adapter using proper SCSI commands, internal data manipulation is not an issue. In fact, most SCSI hard disks have an IDE controller with a SCSI interface built onto them.

Because there is a standard set of SCSI commands, new and different kinds of devices can be added to the SCSI family with little trouble, whereas IDE and ESDI are limited to disk-type devices. Because the SCSI commands need to be "translated" by the device, there is a slight overhead. This is compensated for by the fact that SCSI devices are intrinsically faster than non-SCSI devices. SCSI devices also have higher data integrity than non-SCSI devices. The SCSI cable consists of 50 pins, half of which are ground. Since every signal has its own ground, the cable is less prone to interference and therefore has higher data integrity.

On each SCSI host adapter there are two connectors. One is at the top of the card (opposite the bus connectors) and is used for internal devices. A flat ribbon cable is used to connect each device to the host adapter. On internal SCSI devices, there is only one connector on the device itself. Should you have external SCSI devices, there is a connector on the end of the card (where it attaches to the chassis). Here SCSI devices are "daisy-chained" together.

The SCSI bus needs to be closed in order to work correctly. By this I mean that each end of the bus must be terminated. There is usually a set of resistors (or slots for resistors) on each device. The devices that are physically at either end of the SCSI bus need to have such resistors. This is referred to as terminating the bus, and the resistors are called terminating resistors.

It's fine to say that the SCSI bus needs to be terminated. However, that doesn't do much to help your understanding of the issue. As with other kinds of devices, SCSI devices react to commands sent along the cable to them. Unless otherwise impeded, the signals reach the end of the cable and bounce back. There are two possible outcomes, both of which are undesirable: either the bounced signal interferes with the valid one, or the device reacts to a second (in its mind, unique) command. By placing a terminator at the end of the bus, the signals are "absorbed" and, therefore, don't bounce back.

Figure 02836 and Figure 02837 show examples of how the SCSI bus should be terminated. Note that Figure 02836 says that it is an example of "all external devices." Keep in mind that the principle is still the same for internal devices. If all the devices are internal, then the host adapter would still be terminated, as would the last device in the chain.

Figure 02836 Example of SCSI Bus with all external devices

Figure 02837 Example of SCSI Bus with both external and internal devices

If you don't have any external devices (or have only external devices), then the host adapter is at one end of the bus. Therefore, it too must be terminated. Many host adapters today can be terminated in software, so there is no need for terminating resistors (also known as resistor packs).

Each SCSI device is "identified" by a unique pair of addresses. The first is the controller address, also referred to as the SCSI ID, and is usually set by jumpers or DIP switches on the device itself. Keep in mind that the ID is something that is set on the device itself and is not related to its location on the bus. Note that in Figure 02836, above, the SCSI IDs of the devices are ordered 0, 6, and 5.

Care must be taken when setting the SCSI ID. It is important that you are sure of what the setting is; otherwise, the system will not be able to talk to the device. OpenServer supports SCSI host adapters with multiple buses, in which case the address is a triplet of numbers rather than a pair. This increases the possibility of mistakes by 50%.

This sounds pretty obvious, but some people don't make sure. They make assumptions, based on what they see on the device, about how the ID is set and do not fully understand what it means. For example, I have an Archive 5150 SCSI tape drive. On the back are three jumpers, labeled 0, 1 and 2.

I have had customers call in with similar hardware with their SCSI tape drive set at 2. After running 'mkdev tape' and rebooting, they still cannot access the tape drive. Nothing else is set at ID 2, so there are no conflicts. The system can access other devices on the SCSI bus, so the host adapter is probably okay. Different SCSI devices can be plugged into the same spot on the SCSI cable, so it's not the cable. The SCSI bus is terminated correctly, so that's not the problem.

Rather than simply giving up and saying that it was a hardware problem, I suggested that the customer change the SCSI ID to 3 or 4 to see if that works. Well, he can't. The jumpers on the back only allow him to change the SCSI ID to 0, 1 or 2. Then it dawns on me what the problem is. The jumpers on the back are in binary! In order to set the ID to 2, the jumper needs to be on jumper 1 and not jumper 2. Once we switched it to jumper 1 and rebooted, all was well. (Note: I had this customer before I bought the Archive tape drive. When I got my drive home and wanted to check the SCSI ID, I saw only three jumpers. I then did something that would appall most users: I read the manual! Sure enough, it explained that the jumpers for the SCSI ID were binary.)

Figure 02838 Examples of binary for SCSI IDs

An additional problem with this whole SCSI ID business is that manufacturers are not consistent. Some might label the jumpers (or switches) 0, 1 and 2. Others label them 1, 2 and 4. Still others label them ID0, ID1, ID2. I have even seen some with a dial on them with 8 settings, which makes configuration a lot easier. The key is that no matter how they are labeled, the three pins or switches are binary, and their values are added to give you the SCSI ID.

Let's look at Figure 02838. This represents the jumper settings on a SCSI device. In the first example, none of the jumpers are set, so the SCSI ID is 0. In the second example, the jumper labeled 1 is set. This is 2^1, or 2, so the ID here is 2. In the last example, the jumpers labeled 2 and 0 are set. This is 2^2 + 2^0 = 4 + 1, or 5.

On an AT-bus, the number of devices added is limited only by the number of slots (granted, the AT-bus is limited in how far a slot can be from the CPU and is therefore limited in the number of slots). However, on a SCSI bus, there can be only seven devices in addition to the host adapter. Whereas devices on the AT-bus are distinguished by their base address, devices on the SCSI bus are distinguished by their ID number.

ID numbers range from 0-7, and unlike base addresses, the higher the ID, the higher the priority. Therefore, the ID of the host adapter should always be 7. Since it manages all the other devices, it should have the highest priority. On the newer Wide SCSI buses, there can be up to 15 devices plus the host adapter, with SCSI IDs from 0-15.

Now back to our story...

The device address is known as the logical unit number (LUN). On devices with embedded controllers, such as hard disks, the LUN is always 0. All the SCSI devices directly supported by SCO UNIX have embedded controllers. Therefore, you are not likely to see devices set at LUNs other than 0.

In theory, a single-channel SCSI host adapter can support 56 devices. There are devices called bridge adapters that connect devices without embedded controllers to the SCSI bus. Devices attached to a bridge adapter have LUNs between 0-7. If there are 7 bridge adapters, each with 8 LUNs (relating to 8 devices), there are 56 total devices possible.

The original SCSI-1 spec only defined the connection to hard disks. The SCSI-2 spec has extended this to devices such as CD-ROMs, tape drives, scanners and printers. Provided these devices all adhere to the SCSI-2 standard, they can be mixed and matched, even with older SCSI-1 hard disks.

One common problem with external SCSI devices is the fact that the power supply is external as well. If you boot your system with the power to an external device turned off, once the kernel gets past the initialization routines for that device (the hardware screen), it can no longer recognize that device. The only solution is to reboot. To prevent this problem, it is a good idea to have all your SCSI devices internal. (This doesn't help for scanners and printers, but since SCO doesn't yet have drivers for them, it's a moot point.)



There are two ways a computer stores the data it works with. Both are often referred to as memory. Long-term memory, the kind that remains in the system even if there is no power, is called non-volatile memory and exists in such places as hard disks or floppies. This is often referred to as secondary storage. Short-term, or volatile, memory is stored in memory chips called RAM, for Random Access Memory. This is often referred to as primary storage.

There is a third class of memory that is often ignored, or at least not often thought of. This is memory that exists in hardware on the system but does not disappear when power is turned off. This is called ROM, or Read-Only Memory.

We need to clarify one thing before we go on. Read-only memory is as it says: read-only. For the most part it cannot be written to. However, like random-access memory, the locations within it can be accessed in a "random" order, that is, at the discretion of the programmer. Also, read-only memory isn't always read-only, but that's a different story that goes beyond this book.

The best way of referring to memory to keep things clear (at least the best way in my opinion) is to refer to the memory we traditionally call RAM as "main" memory. This is where our programs and the operating system actually reside.

There are two broad classes of memory: Dynamic RAM, or DRAM (read dee-ram), and Static RAM, or SRAM (read es-ram). DRAM is composed of tiny capacitors that can hold their charge only a short while before they require a "boost." SRAM is static because it does not require this constant recharging to keep its contents. As a result of the way it works internally, SRAM is faster and more expensive than DRAM. Because of the cost, the RAM that composes main memory is typically DRAM.

DRAM chips hold memory in ranges from 64K up to 16Mb and more. In older systems, individual DRAM chips were laid out in parallel rows called banks. The chips themselves were called DIPPs, for Dual In-line Pin Package. These look like your average, run-of-the-mill computer chip, with two parallel rows of pins, one on each side of the chip. If memory ever went bad in one of these banks, it was usually necessary to replace (or test) dozens of individual chips. Since the maximum for most of these chips was 256 kilobits (32KB), it took 32 of them for each megabyte!

On newer systems, the DIPP chips have been replaced by Single In-line Memory Modules, or SIMMs. Technological advances have decreased the size considerably. Whereas a few years ago you needed an area the size of a standard piece of binder paper to hold just a few megabytes, today's SIMMs can squish twice that much into an area the size of a stick of gum.

SIMMs come in powers of 2 (1, 2, 4, 8, etc.) megabytes and are generally arranged in banks of four or eight. Because of the way the memory is accessed, you sometimes cannot mix sizes. That is, if you have four 2Mb SIMMs, you cannot simply add an 8Mb SIMM to get up to 16Mb. Bear this in mind when ordering your system or ordering more memory. You should first check the documentation that came with the motherboard or check with the manufacturer.

Many hardware salespeople are not aware of this distinction. Therefore, if you order a system with 8Mb that's "expandable" to 128Mb, you may be in for a big surprise. True, there are 8 slots that can contain 16Mb each. However, if the vendor fills all eight slots with 1Mb SIMMs to give you your 8Mb, you may have to throw everything out if you ever want to increase your RAM.

However, this is not always the case. My motherboard has some strange configurations. The memory slots on my motherboard consist of two banks of four slots each (this is typical of many machines). Originally, I had one bank completely full with four 4Mb SIMMs. When I installed OpenServer this was barely enough. Once I decided to start X Windows and Wabi, this was much too little. I could have increased this by 1Mb by filling the first bank with four 256K SIMMs and moving the four 4Mb SIMMs to the second bank. However, if I wanted to move up to 20Mb, I could use 1Mb SIMMs instead of 256K. So, here is one example where everything does not have to match. In the end, I added four 4Mb SIMMs to bring my total up to 32Mb. The moral of the story: read the manual!

Another issue that needs to be considered with SIMMs is that the motherboard design may require you to put in memory in either multiples of two or multiples of four. The reason for this is the way the motherboard accesses that memory. Potentially, a 32-bit machine could read a byte from each of four SIMMs at once, essentially reading the full 32 bits in one read. Keep in mind that the 32 bits are probably not being read simultaneously. However, being able to read them in succession is faster than reading one bank and then waiting for it to reset.

Even so, this requires special circuitry for each of the slots, called address decode logic. The address decode logic receives a memory address from the CPU and determines which SIMM it is in and where on the SIMM. In other words, it decodes the address to determine which SIMM is needed for a particular physical address.

This extra circuitry makes the machine more expensive as this is not just an issue with the memory, but rather the motherboard design as well. Accessing memory in this fashion is called "page mode" as the memory is broken up into sets of bytes, or pages. Because the address decode logic is designed to access memory in only one way, the memory that is installed must fit the way it is read. For example, my motherboard requires each bank to be either completely filled or completely empty. Now, this requires a little bit of explanation.

As I mentioned earlier, DRAM consists of little capacitors, one for each bit of information. If the capacitor is charged, the bit is 1; if there is no charge, the bit is 0. Capacitors have a tendency to drain over time, and for capacitors this small, that time is very short. Therefore, they must be regularly (or dynamically) recharged.

When a memory location is read, there must be some way of determining if there is a charge in the capacitor or not. The only way of doing that is to discharge the capacitor. If it can be discharged, that means there was a charge to begin with and the system knows the bit was a 1. Once discharged, internal circuitry recharges the capacitor.

Now, assume the system wants to read two consecutive bytes from a single SIMM. Since there is no practical way for the address decode logic to tell that the second read is not just a re-read of the first byte, the system must wait until the first byte has recharged itself. Only then can the second byte be read.

By taking advantage of the fact that programs run sequentially and rarely read the same byte more than once at any given time, the memory subsystem can interleave its reads. That is, while the first bank is recharging, it can be reading from the second; while the second is recharging, it can be reading from the third; and so on. Since subsequent reads must wait until previous ones have completed, this method is obviously not as fast as simultaneous reads. This is referred to as "interleaved" or "banked" memory.

Figure 02839 Comparison of 30-pin and 72-pin SIMMs

Since all of these issues are motherboard dependent, it is best to check the hardware documentation when changing or adding memory. Additionally, settings, or jumpers, may need to be adjusted on the motherboard to tell it how much RAM you have and in what configuration.

Another issue that affects speed is the physical layout of the SIMM. SIMMs are often described as being arranged in a "by-9" or "by-36" configuration. This refers to the number of bits that are immediately accessible. So, in a by-9 configuration, 9 bits are immediately accessible, with one used for parity. In a by-36 configuration, 36 bits are available, with 4 bits for parity (1 for each 8 bits). The by-9 configuration comes on SIMMs with 30 pins, whereas the by-36 comes on SIMMs with 72 pins. The 72-pin SIMMs can have 32 bits read simultaneously, so they are even faster than 30-pin SIMMs at the same rated speed.

There are also different physical sizes for the SIMM. The SIMMs with 30 pins are slightly smaller than those with 72 pins. The larger, 72-pin variety are called PS/2 SIMMs as they are used in IBM's PS/2 machines. Aside from being slightly larger, these have a notch in the center so it is physically impossible to mix up the two. In both cases there is a notch on one end. This fits into a key in the slot on the motherboard, which makes putting the SIMM in backwards almost impossible.

SIMMs come in several different speeds; the most common today are between 60-80 nanoseconds. Although there is usually no harm in mixing speeds, there is little to be gained. However, I want to emphasize the word usually. Mixing speeds has been known to cause panics. Therefore, if you mix speeds, it is best to keep all the SIMMs within a single bank at a single speed. If your machine does not have multiple banks, then it is best not to mix speeds at all. Even if you do, remember that the system is only as fast as its slowest component.

Cache Memory

Based on the principle of locality, a program is more likely to spend its time executing code around the same set of instructions. This is demonstrated by the fact that tests have shown that most programs spend 80% of their time executing 20% of their code. Cache memory takes advantage of that.

Cache memory, or sometimes just cache, is a small set of very high-speed memory. Typically it uses SRAM, which can be up to ten times more expensive than DRAM, a cost that usually makes it prohibitive for anything other than cache.

When the IBM PC first came out, DRAM was fast enough to keep up with even the fastest processor. However, as CPU technology increased, so did its speed. Soon, the CPU began to outrun its memory. The advances in CPU technology could not be utilized unless the system was filled with the more expensive, faster SRAM.

The solution to this was a compromise. Using the locality principle, manufacturers of fast 386 and 486 machines began including a set of cache memory consisting of SRAM but still populated main memory with the slower, less expensive DRAM.

To better understand the advantages of this scheme, let's cover the principle of locality in a little more detail. For a computer program, we deal with two types of locality: temporal (time) and spatial (space). Since programs tend to run in loops (repeating the same instructions over and over), the same set of instructions must be read over and over. The longer a set of instructions is in memory without being used, the less likely it is to be used again. This is the principle of temporal locality. What cache memory does is allow us to keep those regularly used instructions "closer" to the CPU, making access to them much faster.

Spatial locality is the relationship between consecutively executed instructions. I just said that a program spends most of its time executing the same set of instructions. Therefore, in all likelihood, the next instruction the program will execute lies in the next memory location. By filling the cache with more than just one instruction at a time, the principle of spatial locality can be taken advantage of.

Is there really such a major advantage to cache memory? Cache performance is evaluated in terms of cache hits. A hit occurs when the CPU requests a memory location that is already in the cache (so it does not have to go to main memory to get it). Since most programs run in loops (including the OS), the principle of locality results in a hit ratio of 85%-95%. Not bad!

On most 486 machines, two levels of cache are used. They are called (logically) first level cache and second level cache. First level cache is internal to the CPU. Although nothing (other than cost) prevents it from being any larger, Intel has limited the first level cache in the 486 to 8k.

Figure 028310 Level-1 and Level-2 caches

Second level cache is the kind that you buy extra with your machine. This is often part of the ad you see in the paper and is usually what people are talking about when they say how much cache is in their system. This kind of cache is external to the CPU and can be increased at any time, whereas first level cache is an integral part of the CPU and the only way to get more is to buy a different CPU. Typical sizes of second level cache range from 64K-256K. This is usually in increments of 64K.

A major problem exists when dealing with cache memory: the issue of consistency. What happens when main memory is updated and the cache is not? What happens when the cache is updated and main memory is not? This is where the cache's write policy comes in.

The write policy determines if and when the contents of the cache are written back to memory. Write-through cache simply writes the data through the cache directly into memory. This slows things down on writes, but you are assured that the data is consistent. Buffered write-through is a slight modification of this, where data is collected and everything is written at once. Write-back improves cache performance by writing to main memory only when necessary. Write-dirty writes to main memory only when the data has been modified.

Cache (or main memory, for that matter) is referred to as "dirty" when it is written to. Unfortunately, the system has no way of telling whether anything has actually changed, just that the location was written to. Therefore it is possible, though not likely, that a block of cache is written back to memory even if it is not "really" dirty.

Another aspect of cache is its organization. Without going into detail (that would take most of a chapter itself) we can generalize by saying there are four different types of cache organization.

The first kind is fully associative. This means that every entry in the cache has a slot in the "cache directory" indicating where it came from in memory. Usually these are not individual bytes, but chunks of four bytes or more. Since each "slot" in the cache has a separate directory slot, any location in RAM can be placed anywhere in the cache. This is the simplest scheme, but also the slowest, since each cache directory entry must be searched until a match (if any) is found. Therefore, this kind of cache is often limited to just 4K.

Direct-mapped or 1-way set associative cache requires that only a single directory entry be searched. This speeds up access time considerably. The location in the cache is related to the location in memory and is usually based on blocks of memory equal to the size of the cache. For example, if the cache could hold 4K 32-bit (4-byte) entries, then the block that each entry is associated with is also 4K x 32 bits. The first 32 bits in each block are read into the first slot of the cache. The second 32 bits in each block are read into the second slot, and so on. The size of each entry, or line, usually ranges from 4 to 16 bytes.

There is a mechanism called a tag to tell us which of the blocks this came from. Also, because of the very nature of this method, the cache cannot hold data from multiple blocks for the same offset. If, for example, slot 1 was already filled with the data from block 1 and a program wanted to read the data at the same location from block 2, the data in the cache would be overwritten. Therefore, the shortcoming in this scheme is that when data is read at intervals that are the size of these blocks, the cache gets constantly overwritten. Keep in mind that this does not occur too often, due to the principle of spatial locality.
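The collision just described is easy to see with a little arithmetic. In this Python sketch (sizes chosen to match the 4K example above; the function name is mine), two addresses exactly one block apart land in the same slot:

```python
# Direct-mapped placement: the cache slot is fixed by the address,
# so two addresses one "block" apart collide on the same slot.
LINE_SIZE = 4                         # bytes per cache line
NUM_SLOTS = 1024                      # a 4K cache of 4-byte lines
BLOCK_SIZE = LINE_SIZE * NUM_SLOTS    # the 4K block that maps onto the cache

def slot_and_tag(addr):
    line = addr // LINE_SIZE          # which cache line the address falls in
    return line % NUM_SLOTS, addr // BLOCK_SIZE

print(slot_and_tag(0x0000))   # (0, 0)
print(slot_and_tag(0x1000))   # (0, 1)  same slot, different tag: collision
```

Reading alternately from 0x0000 and 0x1000 would evict the other's data on every access, which is the pathological case the text warns about.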

The third type is an extension of the 1-way set associative cache, called the 2-way set associative. Here there are two entries per slot. Again, data can end up in only a particular slot, but there are two places to go within that slot. Granted, the system is slowed a little by having to look at the tags for both slots. However, this scheme allows data at the same offset from multiple blocks to be in the cache at the same time. This is also extended to 4-way set associative cache. In fact, the cache internal to the 486 and Pentium is a 4-way set associative cache.

Although this is interesting stuff (at least to me), you may be asking yourself "Why is this memory stuff important to a system administrator?" Well, first, knowing about the differences in RAM (main memory) can aid you in making decisions about your upgrade. Also, as I mentioned earlier, it may be necessary to set switches on the motherboard if you change memory configuration.

Knowledge about cache memory is also important for the same reason, but also because this may be adjustable by you. On many machines, the write policy can be adjusted through the CMOS. For example, on my machine I have a choice of Write-Back, Write-Through and Write-Dirty. Depending on the applications you are running, you may want to change this to improve performance.

Odds and Ends

In most memory today, an extra bit is added for each byte. This is a parity bit. Parity is a simple way of detecting errors within a memory chip (among other things). If there is an odd number of bits set, the parity bit will be set to make the total number of bits set an even number. (Most memory uses even parity) For example, if three bits are set, the parity bit will also be set to make the total bits set four.

When data is written, the number of set bits is calculated and the parity bit set accordingly. When the data is read, the parity bit is also read. If the total number of bits set is even, all is well. However, if there is an odd number of data bits set and the parity bit is not set, or if there is an even number of data bits set and the parity bit is set, things are not the way they ought to be. A parity error has just occurred.
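The parity calculation itself is simple enough to sketch in a few lines of Python (the function names are mine):

```python
# Even-parity sketch: the parity bit makes the total number
# of set bits (data bits plus parity bit) an even number.

def parity_bit(byte):
    return bin(byte).count("1") % 2   # 1 if an odd number of data bits set

def check(byte, stored_parity):
    return parity_bit(byte) == stored_parity   # False means a parity error

p = parity_bit(0b00000111)    # three data bits set, so the parity bit is set
print(p)                      # 1
print(check(0b00000111, p))   # True: all is well
print(check(0b00000101, p))   # False: a bit flipped, parity error
```

Note that parity can only detect an odd number of flipped bits; if two bits flip at once, the check still passes, which is one reason ECC memory (mentioned below) exists.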

When a parity error occurs in memory, the state of the system is uncertain. In order to prevent any further problems, the parity checking logic generates a Non-Maskable Interrupt (NMI) and the CPU immediately jumps to special code called the NMI service routine.

When SCO UNIX is interrupted with an NMI as the result of a parity error, it too realizes things are not good and the system panics. The panic causes the system to stop everything and shut down. Certain machines support ECC RAM, which corrects parity problems before they kill your system.

Even as I wrote this section, the computer industry was shifting away from the old SIMMs toward extended data out RAM, or EDORAM. Although as of this writing (NOV 1995), EDORAM is somewhat more expensive than SIMMs, it is expected that by early 1996, the demand for EDORAM will be such that the price difference will disappear.

The principle behind EDORAM is an extension of fast page mode (FPM) RAM. With FPM RAM, you rely on the fact that memory is generally read sequentially. Since you don't "really" need to wait for each memory location to recharge itself, you can read the next location without waiting. Since you have to wait until the signal is stabilized, there is still some wait. However, this is much less than waiting for the memory to recharge. At CPU speeds greater than 33 Mhz, the CPU is requesting memory faster than memory can deliver it and the CPU needs to wait.

EDORAM works by "latching" the memory, which means that secondary memory cells are added. These detect the data being read out from memory and store the signals so the CPU can retrieve them. This works at bus speeds of 66Mhz. This process can be sped up even further by including "burst" EDORAM. This extends the locality principle even further: since we are going to read sequentially, why not anticipate the processor and read more than just that single location? In some cases the system will read 128 bits at once.

Keep in mind, however, that you cannot just install EDORAM in your machine and expect it to work. You need a special chip-set on your motherboard. One such chip-set is the Intel Triton chip-set.

The Central Processing Unit

Sometimes you get people that just don't understand. At first, I thought that they "didn't have a clue," but that wasn't really the problem. They had a clue, but a single clue doesn't solve a crime, nor does it help you run an SCO UNIX system.

It seems like a simple thing. You use doscp to copy a program from a DOS diskette onto an SCO UNIX system. In all likelihood the permissions are already set to be executable. So you type in the name of the program and press enter. Nothing happens, or you get an error about incorrect format. Hmmm. The software says it runs on a 386 or higher (which you have), a VGA monitor (which you have) and at least 2 Mb of hard disk space (which you have). Why doesn't it work?

Yes, this is a true story. A customer called in saying that our operating system (SCO UNIX) was broken. This customer had a program that worked fine on his DOS PC at home. It, too, was a 386, so there shouldn't be a problem, right? Unfortunately, wrong. Granted, in both cases the CPU is reading machine instructions and executing them. In fact, they are the same machine instructions. They have to be.

The problem is comparable to German and English. Although both use (basically) the same alphabet, words (sets of characters) written in German are not understandable by someone reading them as English, and vice versa. Sets of machine instructions that were designed to be interpreted under DOS are not going to be understood under SCO UNIX. (Actually, the problem is a little more complicated, but you get the basic idea.)

Just like your brain has to be told (taught) the difference between German and English, a computer needs to be told the difference between DOS and UNIX programs.

In this section we talk about the CPU, the brains of the outfit. It is perfectly reasonable for users and administrators alike to have no understanding of what the CPU is doing internally. However, a basic knowledge of some of the key issues is important, in order to completely understand some of the issues I get into elsewhere.

It's like trying to tune-up your car. Now you don't really need to know how oxygen mixes with the gasoline in order to be able to adjust the carburetor. However, knowing about it makes adjusting the carburetor that much easier.

I don't go into details about the instruction cycle of the CPU, that is how it gets and executes instructions. While I like things like that and would love to talk about them, it isn't really necessary to understand what we need to talk about here. Instead we are going to talk mostly about how the CPU enables the operating system to create a scheme whereby many programs can be in memory simultaneously. These are the concepts of paging and multi-tasking.

Although it is an interesting subject, the ancient history of microprocessors is not really important to the issues at hand. It might be nice to learn how the young PC grew from a small, budding 4-bit system to the gigantic, strapping 64-bit Pentium. However, there are many books that cover this subject and unfortunately I don't have the space. Besides, you can read it elsewhere, and SCO UNIX only runs on Intel 80386 (or 100% compatible clones) and higher processors.

So, instead of setting the Way-Back machine to Charles Babbage and his Analytic Engine, we leap ahead to 1985 and the introduction of the Intel 80386. Even compared to its immediate predecessor, the 80286, the 80386 (386 for short) was a powerhouse. Not only could it handle twice the amount of data at once (now 32-bits), its speed rapidly increased well beyond that of the 286.

New advances were added to increase the 386's power. Internal registers were added and their size increased. Built into the 386 was the concept of virtual memory, a way to make it appear as if there was much more memory on the system than there actually was. This substantially increased system efficiency. Another major advance was the inclusion of a 16-byte pre-fetch cache. With this, the CPU could load instructions before it actually processed them, thereby speeding things up even more. Then the most obvious speed increase came from raising the speed of the processor from 8Mhz to 16Mhz.

Although the 386 had major advantages over its predecessors, at first its cost seemed relatively prohibitive. In order to allow users access to the multi-tasking capability and still make the chip fit within their customers' budgets, Intel made an interesting compromise: by making a new chip where the interface to the bus was 16-bits instead of 32-bits, Intel made the chip a fair bit cheaper.

Internally this new chip, designated the 80386SX, is identical to the standard 386. All the registers are there and are the full 32-bits wide. However, data and instructions are accessed 16-bits at a time, therefore requiring two bus accesses to fill the registers. Despite this "short-coming", the 80386SX is still faster than the 286.

Perhaps the most significant advance of the 386 for SCO is its paging abilities. We talked a little about paging in the section on operating system basics, so you already have a general idea of what it's about. We will also go into more detail about paging in the section on the kernel. However, we need to talk about it a little here to fully understand the power that the 386 has given us and to see how the CPU helps the OS.

SCO does have a product, SCO XENIX, that runs on 286s. In fact, there was even a version of SCO XENIX that ran on the 8086. Because SCO UNIX was first released for the 386, we are not going to go into any more detail about the 286 or the differences between the 286 and 386. Instead, I will just describe the CPU used by SCO UNIX as sort of an abstract entity. In addition, since most of what I will be talking about is valid for the 486 and Pentium as well as the 386, I will simply call it "the CPU" instead of 386, 486, or Pentium.

(Note: SCO UNIX will also run on non-Intel CPUs. However, the issues we are going to talk about are all common to Intel-based or Intel-derived CPUs.)

I need to take a side-step here for a minute. On PC buses, multiple things are happening at once. The CPU is busily processing while much of the hardware is being accessed via DMA. Although these are multiple tasks that are occurring simultaneously on the system, this is not what is referred to by "multi-tasking".

When we talk about multi-tasking we are referring to multiple processes being in memory at the same time. Because the time it takes the computer to switch between these processes, or tasks, is much faster than the human brain can recognize, it appears as if they are running simultaneously. In reality, what is happening is that each process gets to use the CPU and other system resources for a brief time and then it's someone else's turn.

As it runs, the process could use any part of the system memory it needed. The problem with this is that a portion of RAM that one process wants may already contain code from another process. Rather than allowing each process to access any part of memory it wants, protections are needed to keep one program from overwriting another one. This protection is built-in as part of the CPU and is called, quite logically, "protected mode." Without it, SCO UNIX could not function.

Note, however, that just because the CPU is in protected mode, does not necessarily mean that the protections are being utilized. It simply means that the operating system can take advantage of the built in abilities if it wants.

Although this capability is built into the CPU, it is not the default mode. Instead, the CPU starts up in what I like to call "DOS compatibility mode." However, the correct term is "real mode." Real mode is a real danger to an operating system like UNIX. In this mode, there are no protections (which makes sense, since protections exist in protected mode). A process running in real mode has complete control over the entire system and can do anything it wants. Therefore, trying to run a multi-user system on a CPU in real mode would be a nightmare. All the protections would have to be built into the process, as the operating system couldn't prevent a process from doing what it wanted.

Also built in is a third mode, called "virtual mode." In virtual mode, the CPU behaves, to a limited degree, as if it were in real mode. However, when a process attempts to directly access registers or hardware, the instruction is caught, or trapped, and the operating system is allowed to take over.

Let's get back to protected mode as this is what makes multitasking possible.

When in protected mode, the CPU can use virtual memory. As I mentioned, this is a way to trick the system into thinking there is more memory than there really is. There are two ways of doing this. The first is called swapping. Here, the entire process is loaded into memory. It is allowed to run its course for a certain amount of time. When its turn is over, another process is allowed to run. What happens when there is not enough room for both processes to be in memory at the same time? The only solution is that the first process is copied out to a special part of the hard disk called the swap space or swap device. Then, the next process is loaded into memory and allowed its turn.

Because it takes such a large portion of the system resources to swap processes in and out of memory, this can be very inefficient, especially when you have a lot of processes running. Let's take this a step further: what happens if there are too many processes and the system spends all of its time swapping? Not good.

In order to avoid this problem, a mechanism was devised whereby only those parts of the process that are needed are in memory. As it goes about its business, a program may only need to access a small portion of its code. In fact, empirical tests show that a program spends 80% of its time executing 20% of its code. So why bother bringing in those parts that aren't being used? Why not wait and see if they are used?

To make things more efficient, only those parts of the program that are needed (or expected to be needed) are brought into memory. Rather than accessing memory in random units, it is divided into 4K chunks, called pages. Although there is nothing magic about 4K per se, this value is easily manipulated. In the CPU, data is referenced in 32-bit (4-byte) chunks, and 1K (1024) of them makes a page (4096 bytes). Later you will see how this helps things work out.
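Because the page size is a power of two, splitting an address into its page number and offset is just integer arithmetic. Here is a small Python sketch (the sample address is an arbitrary one of mine):

```python
# With 4K (2**12) pages, any address splits into a page number
# and a byte offset within that page.
PAGE_SIZE = 4096                       # 2**12 bytes per page

addr = 0x00403A7C                      # an arbitrary example address
page_number = addr // PAGE_SIZE        # which page: same as addr >> 12
offset = addr % PAGE_SIZE              # where in the page: addr & 0xFFF

print(hex(page_number), hex(offset))   # 0x403 0xa7c
# Putting the pieces back together recovers the original address.
assert page_number * PAGE_SIZE + offset == addr
```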

As I mentioned, only that part of the process currently being used needs to be in memory. When the process wants to read something that is not currently in RAM, the system needs to go out to the hard disk to pull in the other parts of the process. That is, it goes out and reads in new pages. This process is called "paging". When the process attempts to read from a part of the process that is not in physical memory, a "page fault" occurs.

One thing we must bear in mind is the fact that a process can jump around a lot. Functions are called which send the process off somewhere completely different. It is possible, likely for that matter, that the page containing the memory location to which the process needs to jump is not currently in memory. Since it is trying to read a part of the process not in physical memory, this too is called a page fault. As memory fills up, pages that haven't been used in some time are replaced by new ones. (Much more on this whole business later.)

Assume that a process has just made a call to a function somewhere else in the code and the page needed is brought into memory. Now there are two pages of the process from completely different parts of the code. Should the process take another jump or return from the function, it needs to know whether where it is going is in memory or not. The operating system could keep track of this. However, it doesn't need to. The CPU will keep track for it.

Stop here for a minute! This is not entirely true. The OS must first set up the structures that the CPU uses. However, it is the CPU that uses these structures to determine if a section of a program is in memory or not. Although they reside not in the CPU but rather in RAM, the CPU administers RAM utilization through page tables. As their name implies, they are simply tables of pages. In other words, they are memory locations in which other memory locations are stored.

Confused? I was at first, so let's look at this concept another way. Each running process has a certain part of its code currently in memory. The system uses these page tables to keep track of what is currently in memory and where it is physically located. To limit the amount of work the CPU has to do, each of these page tables is only 4K, or one page, in size. Since each contains a set of 32-bit addresses, a page table can contain only 1024 entries.

Although this would imply that a process can only have 4K*1024, or 4Mb loaded at a time, there is more to it. Page tables are grouped into page directories. Like the page table, the entries in a page directory point to memory locations. However, rather than pointing to a part of the process, page directories point to page tables. Again, to reduce the work of the CPU, a page directory is only one page. Since each entry in the page directory points to a page, this means that a process can only have 1024 page tables.

Is this enough? Let's see. A page is 4K or 4096 bytes, which is 2^12. Each page table can refer to 1024 pages, which is 2^10. Each page directory can refer to 1024 page tables, which is also 2^10. Multiplying this out, we have:

page_size * pages_in_page_table * page_tables_in_page_directory


(2^12) * (2^10) * (2^10) = 2^32

Since the CPU is only capable of accessing 2^32 bytes, this scheme allows access to every possible memory address that the system can generate.
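If you'd rather let the machine do the multiplication, this quick Python check confirms the arithmetic:

```python
# Page size, times entries per page table, times page tables per
# directory, covers the entire 32-bit address space.
page_size = 2**12                 # 4096 bytes per page
pages_per_table = 2**10           # 1024 entries in a page table
tables_per_directory = 2**10      # 1024 entries in the page directory

total = page_size * pages_per_table * tables_per_directory
print(total == 2**32)             # True: every address is reachable
```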

Are you still with me?

Inside of the CPU is a register called the Control Register 0 or CR0 for short. There is a single bit in this register that turns on this paging mechanism. If turned on, any memory reference that the CPU gets is interpreted as a combination of page directories, page tables and offsets, rather than an absolute, linear address.

Built into the CPU is a special unit that is responsible for making the translation from the virtual address of the process to physical pages in memory. It's called (what else?) the Paging Unit. To understand more about the work the Paging Unit saves the operating system or other parts of the CPU, let's see how the address is translated.

Figure 028311 Translation of Virtual to Physical Address

When paging is turned on, the Paging Unit receives a 32-bit value that represents a virtual memory location within a process. The Paging Unit takes these values and translates them as shown in Figure 028311. At the top we see that the virtual address is handed to the paging unit, which converts it to a linear address. This is not the physical address in memory. As you see, the 32-bit linear address is broken down into three components. The first 10 bits (22-31) are the offset into the page directory. The location in memory of the page directory is determined by the Page Directory Base Register (PDBR).

The page directory entry is 4 bytes (32 bits) and points to a specific page table. The entry in the page table, as you see, is determined by bits 12-21. Here again we have 10 bits, so a page table holds 1024 entries, each of which is 32 bits. These 32 bits point to a specific page in physical memory. Which byte we are referencing in physical memory is determined by the offset portion of the linear address, bits 0-11. These twelve bits represent the 4096 (4K) bytes in each physical page.
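The bit-slicing in Figure 028311 can be sketched in a few lines of Python (the function name and sample address are mine, chosen only to show which bits go where):

```python
# Sketch of how the Paging Unit splits a 32-bit linear address:
# bits 22-31 index the page directory, bits 12-21 index the page
# table, and bits 0-11 are the byte offset within the 4K page.

def split_linear(addr):
    directory = (addr >> 22) & 0x3FF   # top 10 bits
    table     = (addr >> 12) & 0x3FF   # middle 10 bits
    offset    =  addr        & 0xFFF   # low 12 bits
    return directory, table, offset

d, t, o = split_linear(0xC0103ABC)
print(d, t, o)    # 768 259 2748
```

Each of the two indices is at most 1023 and the offset at most 4095, matching the 1024-entry tables and 4K pages described above.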

Keep in mind a couple of things. First, page tables and page directories are not part of the CPU. They can't be. If a page directory were full, it would contain 1024 references to 4K chunks of memory; you would need 4Mb just for the page tables! This would require a CPU hundreds of times larger than it is, so page tables and directories are stored in RAM.

Next, page tables and page directories are abstract concepts that the CPU knows how to utilize. They occupy physical RAM, and operating systems such as SCO UNIX know how to switch on this capability within the CPU. All the CPU is doing is the "translation" work. When it starts, SCO UNIX turns on this capability and sets up all the structures. These structures are then handed off to the CPU, where the Paging Unit does the work.

As I just said, a process with all of its page directory entries full would require 4Mb just for the page tables. However, this would imply that the entire process is somewhere in memory. Since each of the page table entries points to a physical page in RAM, you would need 4Gb of RAM. Not that I would mind having that much RAM, but it is a bit costly, and even with 16Mb SIMMs you would need 256 of them.

Like pages of the process, it's possible that a linear address passed to the Paging Unit translates to a page table or even a page directory that is not in memory. Since the system is trying to access a page (one that contains a page table, not part of the process) that is not in memory, a page fault occurs and the system must go get that page.

Since page tables and the page directory are not really part of the process, but are important only to the operating system, a page fault on one of them causes these structures to be created rather than read in from the hard disk or elsewhere. In fact, as the process is starting up, all is without form and void. No pages, no page tables and no page directory.

The system accesses a memory location as it starts the process. The system translates the address as we described above and tries to read the page directory. It's not there. A page fault occurs and the page directory must be created. Now that the directory is there, the system finds the entry that points to the page table. Since no page tables exist, the slot is empty and another page fault occurs. So, the system needs to create a page table. The entry in the page table for the physical page is found to be empty, therefore another page fault occurs. Finally, the system can read in the page that was referenced in the first place.

Now this whole process sounds a bit cumbersome, but bear in mind that this amount of page faulting only occurs as the process is being started. Once the table is created for a given process, it won't page fault again on that table. Based on the principle of locality, the page tables will hold enough entries for a while, unless of course the process goes bouncing around a lot.

The potential for bouncing around brings up an interesting aspect of page tables. Since page tables translate to physical RAM in the same way all the time, virtual addresses in the same area of the process end up in the same page tables. Therefore, page tables get filled up since the process is more likely to execute code in the same part of a process than elsewhere (this is spatial locality).

There is quite a lot there, huh? Well, don't get up yet as we're not finished. There are a few issues that we haven't addressed.

First, I often referred to page tables and the page directory. Each process has a single page directory (it doesn't need any more). Although the CPU supports multiple page directories, only one is in use for the entire system at any one time. When a process needs to be switched out, the entries in the page directory for the old process are overwritten by the ones for the new process. The location of the page directory in memory is maintained in Control Register 3 (CR3) in the CPU.

There is something here that bothered me in the beginning and may still be bothering you. As I described above, each time a memory reference is made, the CPU has to look at the page directory, then a page table, and then calculate the physical address. This means that for every memory reference, the CPU has to make two more references just to find out where the next instruction or data is coming from. I thought that was pretty stupid.

Well, so did the designers of the CPU. They included a functional unit called the Translation Lookaside Buffer, or TLB. The TLB contains 32 entries, and just as the internal and external caches point to sets of instructions, the TLB points to pages. If a page that is being looked for is in the TLB, a TLB hit occurs (just like a cache hit). As a result of the principle of spatial locality, there is a 98% hit rate using the TLB.

When you think about it, this makes a lot of sense. The CPU does not just execute one instruction for a program and then switch to something else. It executes hundreds or even thousands before it is someone else's turn. If each page contains 1024 instructions and the CPU executes 1000 before it's someone else's turn, all 1000 will most likely be in the same page. Therefore, they are all TLB hits.

Now, let's take a closer look at the page table entries themselves. Each is a 32-bit value pointing to a 4K location in RAM. Since it is pointing to an area of memory larger than a byte, it does not need all 32 bits to do it. Therefore, it has some bits left over. Since the page table entry points to an area containing 2^12 bytes (4096 bytes = 1 page), there are 12 bits that it doesn't need. These are the low-order 12 bits, and the CPU uses them for other purposes related to that page. A few of them are unused and the operating system can, and does, use them for its own purposes. There are also a couple reserved by Intel that should not be used.

One of the bits, the 0th bit, is the present bit. If this bit is set, the CPU knows that the page being referenced is in memory. If not set, the page is not in memory and if the CPU tries to access it, a page fault occurs. Also, if this is not set, none of the other bits have any meaning. (How can you talk about something that's not there?)

An important bit is the accessed bit. Should a page be accessed for either read or write, the CPU sets this bit. Since the page table entry is never filled in until the page is being accessed, this seems a bit redundant. If that were all there was to it, you'd be right. However, there's more.

At regular intervals the operating system goes around and clears the accessed bit. If a particular page is never used again, the system is free to reuse that physical page if memory gets short. When that happens, all that needs to get done is to clear the present bit, and the page is now considered "invalid."

Another bit used to determine how a page is accessed is the dirty bit. If a page has been written to, it is considered dirty. Before the system can make a dirty page available, it must make sure that whatever was in that page is written to disk. Otherwise the data is inconsistent.

Finally, we get to the point that all this protected mode stuff is all about. The protection in protected mode essentially boils down to two bits in the page table entry. One bit, the user/supervisor bit, determines who has access to a particular page. If the CPU itself is running at user level, then it only has access to user level pages. If the CPU is at supervisor level, it has access to all pages.

I need to say here that this is the maximum access a process can have. There are other protections that may prevent a user level or even supervisor level process from even getting this far. However, these are implemented at a higher level.

The other bit in this pair is the read/write bit. As the name implies, this determines whether a page can be written to or not. This is a single bit, so it is really just an on-off switch. If the page is there and you have the right to access it (that is, either you are a supervisor-level process or the page is a user page), you can read it. However, if the write ability is turned off, you can't write to it, even as a supervisor.

If you have a 386 CPU, then all is well. If you have a 486 and decided to use one of those bits that I told you were reserved by Intel, you are now running into trouble. Two bits that were not defined in the 386 are now defined in the 486: Page Write Through (PWT) and Page Cache Disable (PCD).

PWT determines the write policy (see the section on RAM) for external cache in regards to this page. If set, then this page has a write-through policy. If clear, a write-back policy is allowed.

PCD decides whether this page can be cached at all. If set, this page cannot be cached. If clear, caching is allowed. Note that I said "allowed." Having this bit clear does not mean that the page will be cached. There are other factors involved that really go beyond what I am trying to get across here.
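As a sketch of how those low-order bits might be picked apart, here is a Python snippet. The bit positions follow the layout described above; the dictionary and function names are my own:

```python
# A page table entry is 32 bits: the top 20 bits give the physical
# page, and the low 12 bits carry the flags discussed above.
FLAGS = {
    0: "present",          # page is in memory
    1: "read/write",       # page may be written to
    2: "user/supervisor",  # user-level code may access the page
    3: "PWT",              # write-through policy for this page (486)
    4: "PCD",              # caching disabled for this page (486)
    5: "accessed",         # page has been read or written
    6: "dirty",            # page has been written to
}

def decode(entry):
    frame = entry & 0xFFFFF000      # physical page address (top 20 bits)
    set_flags = [name for bit, name in FLAGS.items() if entry & (1 << bit)]
    return hex(frame), set_flags

# An example entry: page at 0x403000, present, writable, user,
# accessed and dirty (the value 0x67 sets bits 0, 1, 2, 5 and 6).
print(decode(0x00403067))
```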

Well, we've talked about how the CPU helps the OS keep track of pages in memory. We also talked about how the CR3 register helps keep track of which page directory needs to get read, and about how pages can be protected by the use of a couple of the bits in the page table entry. However, there is one more thing missing to complete the picture: keeping track of which process is currently running. This is done with the Task Register (TR).

The TR is not where most of the work gets done. It is simply used by the CPU as a pointer to where the important information is kept. This is the Task State Descriptor, TSD. Like the other descriptors that we've talked about, the TSD points to a particular segment. This segment is the Task State Segment, or TSS. The TSD also contains, among other things, the privilege level that this task is operating at. Using this information along with that in the page table entry, you get the protection that "protected mode" allows.

The TSS contains essentially a snapshot of the CPU. When a process's turn on the CPU is over, the state of the entire CPU needs to be saved so that the program can continue where it left off. This information is stored in the TSS. This functionality is built into the CPU. When the OS tells the CPU a task switch is occurring (a new process is getting its turn), the CPU knows to save this data automatically.

If we put all of these components together, we get an operating system working together with the hardware to provide a multi-tasking, multi-user system. Unfortunately, what we talked about here are just the basics. You could spend a whole book just talking about the relationship between the operating system and the CPU and still not be done.

There is one thing I didn't talk about, and that was the difference between the 80386, 80486 and Pentium. With each new processor came new instructions. The 80486 added an instruction pipeline to improve performance to the point where the CPU could average almost one instruction per cycle. The Pentium has dual instruction paths (pipelines) to increase the speed even further. It also contains branch prediction logic, which is used to "guess" where the next instruction should come from.

Hard disks

You've got to have one. I mean it's one of the minimum hardware requirements to install an SCO system. I guess that with a little trickery, you could get a system up and running from a floppy and RAM disk. I know I could, but what's the point? Life is much better with a hard disk. The larger the better. Right?

A hard disk is composed of several disks of (probably) aluminum coated with either an "oxide" media (the stuff on the disks) or "thin film" media. Since "thin film" is thinner than oxide, the more dense (read: larger) hard disks are more likely to have thin film. Each of these disks is called a platter and the more platters you have the more data you can store.

Platters are usually the same size as floppies. Older ones were 5.25" in diameter and the newer ones are 3.5" in diameter. (If someone knows the reason for this, I would love to hear it.) In the center of each platter is a hole, through which the spindle sticks. In other words, as they rotate, the platters rotate around the spindle. The functionality is the same as with a phonograph record. (Remember those?)

The media that is coated onto the platters is very thin, about 30 millionths of an inch. The media has magnetic properties that can change its alignment when exposed to a magnetic field. That magnetic field comes in the form of the hard disk's read/write heads. It is the change in alignment of this magnetic media that allows data to be stored on the hard disk.

As I said, there is a read/write head that does just that: it reads and writes. There is usually one head per surface of the platters (top and bottom). That means there are usually twice as many heads as platters. However, this is not always the case. Sometimes the topmost and bottommost surfaces do not have heads.

The head is moved across the platters, which are spinning at several thousand revolutions per minute. (At least 60 times a second!) The gap between head and platter is smaller than a human hair, smaller than a particle of smoke. For this reason, hard disks are manufactured and repaired in rooms where the number of particles in the air is less than 100 per cubic meter.

Because of this very small gap and the high speed at which the platters rotate, should the head come into contact with the surface of a platter, the result is (aptly named) a head crash. More than likely this will cause some physical damage to your hard disk. (Imagine burying your face into an asphalt street going 'only' 20 MPH.)

The heads are moved in and out across the platters by means of the older stepping motor or the newer, more efficient voice-coil motor. Stepping motors rotate and monitor their movement based on notches or indentations. Voice-coil motors operate on the same principle as a stereo speaker. A magnet inside the speaker causes the speaker cone to move in time with the music (or with the voice). Since there are no notches to determine movement, one of the surfaces of the platters is marked with special signals. Because the head above this surface has no write capability, this surface cannot be used for any other purpose.

The voice-coil motor allows finer control and is not subject to the problems of heat expanding the disk, since the marks expand right along with it. Another fringe benefit is that since the voice-coil operates on electricity, once power is removed, the head moves back to its starting position, as it is no longer resisting a "retaining" spring. This is "automatic head parking."

Physically, data is stored on the disk in concentric rings. The head does not spiral in like a phonograph needle, but rather moves in and out across the rings. These rings are called tracks. Since the heads move in unison across the surface of their respective platters, data is usually stored not in consecutive tracks on the same surface, but rather in the tracks that are positioned directly above or below it. The set of all tracks that are the same distance from the spindle is called a cylinder. Therefore, hard disks read from successive tracks on the same cylinder and not the same surface.

Think of it this way. As the disk is spinning under the head, it is busy reading data. If it needs to read more data than fits on a single track, it has to (obviously) get it from a different track. Assume data was read from consecutive tracks. When the disk finished reading from one track, it would have to move in (or out) to the next track before it could continue. Since tracks are rings and the end is the beginning, the delay in moving out (or in) one track causes the beginning of the next track to spin past the position of the head before you can start reading it. Therefore, it must wait until the beginning comes around again. Granted, you could stagger the start of each track, but this makes seeking to a particular spot much more difficult.

Let's now look at the case where data is read from successive tracks on the same cylinder (that is, it reads one complete cylinder before it goes on). Once the disk has read the entire contents of a track and has reached the end, the beginning of the track just below it is just now spinning under the head. Therefore, by switching the head it is reading from, it can begin to read (or write) as if nothing were different. No movement needs to take place and the reads occur much faster.
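For the curious, the mapping from a linear block number to a (cylinder, head, sector) position can be sketched in a few lines of Python. Notice that consecutive blocks walk through all the heads of one cylinder before the cylinder number changes, which is exactly the behavior just described. (Sector numbering here is 0-based for simplicity; real disks number sectors from 1.)

```python
def linear_to_chs(block, heads, sectors_per_track):
    """Translate a linear block number into cylinder/head/sector."""
    cylinder = block // (heads * sectors_per_track)
    head = (block // sectors_per_track) % heads
    sector = block % sectors_per_track
    return cylinder, head, sector

# With 2 heads and 17 sectors/track, block 17 is the start of the
# next track on the SAME cylinder -- only the head changes.
```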

Each track is broken down into smaller chunks, called sectors. The number of sectors that each track is divided into is referred to as sectors per track, or sectors/track. Although any value is possible, common values for sectors/track are 17, 24, 32 and 64.

Each sector contains 512 bytes of data. However, each sector can contain up to 571 bytes of information. Each sector contains information indicating the start and end of the sector, which is only ever changed by a low-level format. In addition, space is reserved for a checksum contained in the data portion of the sector. If the calculated checksum does not match the checksum in this field, the disk will report an error.

Figure 028312 Logical components of a hard disk

This difference between the total number of bytes per sector and the actual amount of data has been the cause of a fair amount of grief. For example, when trying to sell you a hard disk, the salesperson might praise the tremendous amount of space that the hard disk has. You might be amazed at the low cost of a one gigabyte drive.

There are two things to watch out for. Computers count in twos, humans count in tens. Despite what the salesperson wants you to believe (or believes himself), a hard disk with 1 billion bytes is not a 1 gigabyte drive. It is only 10^9 bytes. One gigabyte means 2^30 bytes. A hard disk with 10^9 (1 billion) bytes is only about 950 megabytes. This is almost seven percent smaller!
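You can check the arithmetic yourself:

```python
decimal_drive = 10**9    # what the salesperson calls "1 GB"
true_gigabyte = 2**30    # what the computer calls 1 GB

megabytes = decimal_drive / 2**20               # about 953.7 true megabytes
shortfall = 1 - decimal_drive / true_gigabyte   # about 0.069, i.e. roughly 7%
```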

The next thing is that the seller will often state the unformatted storage capacity of a drive. This is the number that you would get if you multiplied all the sectors on the disk by 571 (see above). Since it is the formatted size that counts, the unformatted size is irrelevant to almost all users. Typical formatted MFM drives give the user 85% of the unformatted size and RLL drives give the user about 89%. (MFM and RLL are formatting standards, the specifics of which are beyond the scope of this book.)
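The 512-of-571 ratio is, in fact, where the roughly 89% figure for RLL drives comes from. A quick sanity check:

```python
BYTES_PER_SECTOR_TOTAL = 571   # data plus header, trailer and checksum
BYTES_PER_SECTOR_DATA  = 512

data_fraction = BYTES_PER_SECTOR_DATA / BYTES_PER_SECTOR_TOTAL  # ~0.897

# So a "1.2 GB unformatted" drive formats out to just under 1.1
# billion bytes of usable space:
formatted = 1_200_000_000 * data_fraction
```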

On SCO UNIX 3.2.4 and ODT 1.1 and earlier, this distinction became a very important issue. In these earlier releases, the SCSI commands could only access disks up to 1 Gb (that is, 2^30 bytes). Since drive manufacturers often report the unformatted size, a 1 Gb formatted drive is a 1.2 Gb unformatted drive. Certain drives would be reported as problems: these were the ones that were 1.2 GB unformatted. A customer sees that this larger drive is a problem, and since his 1.1 GB (formatted) drive is smaller than the 1.2 GB, he thinks he's safe. He installs, and eventually his data gets corrupted.

This brings up an interesting question. If the manufacturer is telling us the unformatted size and the formatted size is about 85% for MFM and 89% for SCSI/IDE (using RLL), how can I figure out how much usable space there really is? Elementary, my dear Watson. It's called multiplication. (Sarcastic, ain't I?)

Let's start at the beginning. Normally when you get a hard disk, it comes with some reference that indicates how many cylinders, heads and sectors per track there are (among other things). The set of all tracks at the same distance from the spindle is a cylinder. The number of cylinders is simply the number of tracks on a single surface, since a track is on one surface and a cylinder is all tracks at the same distance. Since you can only use those surfaces that have a head associated with them, we can calculate the total number of tracks by multiplying cylinders times heads. In other words, take the number of tracks on a surface and multiply it by the number of surfaces. This gives us the total number of tracks.

From our discussion of tracks, we know that each track is divided into a specific number of sectors. To find the total number of sectors, we simply multiply the total number of tracks that we calculated above times the sectors per track. Once we have the total number of sectors, we multiply this by 512 (the number of bytes of data in a sector). This gives us the total number of bytes on the hard disk. To figure out how many megabytes this is, simply divide this number by 1,048,576. (1024 x 1024 = 1 MB)

For those of you who need it as an equation (Yeah, I always hated word problems myself):

capacity (in bytes) = cylinders x heads x sectors/track x 512
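The whole calculation also fits in a throwaway Python function; the sample geometry in the comment is made up purely for illustration:

```python
def disk_capacity_mb(cylinders, heads, sectors_per_track):
    """Total usable capacity, following the steps above."""
    total_tracks = cylinders * heads
    total_sectors = total_tracks * sectors_per_track
    total_bytes = total_sectors * 512       # 512 data bytes per sector
    return total_bytes / (1024 * 1024)      # 1,048,576 bytes per MB

# A hypothetical drive with 1010 cylinders, 12 heads and 55
# sectors/track works out to about 325.5 MB.
```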

All PC-based operating systems need to break down the hard disk into units called partitions. A partition can be anywhere from just a couple of megabytes to the entire disk. Each partition is defined in a partition table that appears at the very beginning of the disk. This partition table contains information about what kind of partition it is, where it starts and where it ends. This table is the same whether you have a DOS based PC, UNIX or both.

Since the table is the same for DOS and UNIX, there can be only four partitions total as there are four entries in the table. DOS gets around this by creating logical partitions within one physical partition. This is a characteristic of DOS, not the partition table. Both DOS and UNIX must first partition the drive prior to installing the operating system and both provide the mechanism during the installation process in the form of the fdisk program. Although their appearance is very different, the DOS and SCO UNIX fdisk commands perform the same function.
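As an illustration of how small this table is, here is a sketch in Python that pulls the four entries out of the first sector of a disk. The layout (table at offset 446, four 16-byte entries, 0x55AA signature at the end) is the classic PC scheme; the dictionary keys are my own names, not anything official:

```python
import struct

def read_partition_table(sector0):
    """Parse the four partition entries from a 512-byte boot sector."""
    if len(sector0) != 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("not a valid boot sector")
    entries = []
    for i in range(4):
        offset = 446 + 16 * i
        active = sector0[offset]        # 0x80 means bootable
        ptype = sector0[offset + 4]     # partition type byte
        start, size = struct.unpack_from("<II", sector0, offset + 8)
        entries.append({"active": active, "type": ptype,
                        "start": start, "sectors": size})
    return entries
```

An empty entry is simply all zeros, and since there are exactly four 16-byte slots, there is simply no room for a fifth partition.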

When you run the SCO UNIX fdisk utility, the values you see and input are all in tracks. To figure out how big each fdisk partition is, simply multiply that value by 512 times the number of sectors per track. (Remember that each sector holds 512 bytes of data.)
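That conversion is simple enough to sketch (the 63 sectors/track in the comment is just an example geometry):

```python
def fdisk_tracks_to_mb(tracks, sectors_per_track):
    """Convert an fdisk size given in tracks into megabytes."""
    return tracks * sectors_per_track * 512 / (1024 * 1024)

# 1000 tracks on a 63 sectors/track drive:
# 1000 * 63 * 512 = 32,256,000 bytes, a bit under 31 MB
```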

Under SCO UNIX, each partition is broken down even further into filesystems or divisions. Each partition can contain up to seven filesystems, although a single filesystem can span the entire drive. Since a partition can span (almost) the entire drive, a filesystem that spans the whole partition can span the entire drive as well. (There is a subtle difference between a division and a filesystem. We get into details about this in the section on filesystems.)

Comparable to the partition table, the division table contains entries for each of the filesystems within the partition. Each has a name, as well as a starting and ending block. The filesystem is the basic unit by which files are grouped under SCO UNIX. (Here, too, we go into more detail in the section on filesystems.)

To physically connect itself with the rest of the computer, the hard disk has five choices: ST506/412, ESDI, SCSI, IDE and the newest Enhanced IDE (EIDE). However, the interface the operating system sees for ST506/412 and IDE is identical, and there is no special option for an IDE drive. At the hardware level there are some differences that need to be covered for completeness.

To be quite honest, only ESDI and ST506/412 are really disk interfaces. SCSI and IDE are referred to as "system-level interfaces" and they incorporate ESDI into the circuitry physically located on the drive.

The ST506/412 was developed by Seagate Technologies (hence the ST) for its ST506 hard disk, which had a whopping 5Mb formatted capacity. (Hey! Be fair. This was 1980, when 360K was a big floppy.) Seagate later used the same interface in their ST412, which doubled the drive capacity. (Still less hard disk space than I have RAM. Oh, well.) Other drive manufacturers decided to incorporate this technology and over the years it has become a standard. One of its major drawbacks is that it is 15-year-old technology. It can no longer compete with the demands of today's hard disk users.

In 1983, the Maxtor Corporation established the Enhanced Small Device Interface (ESDI) standard. The enhancements provided by ESDI were higher reliability, as they had built the encoder/decoder directly into the drive and therefore reduced the noise; high transfer rates; and the ability to get drive parameters directly from the disk. This means that users no longer had to run the computer setup routines to tell the CMOS what kind of hard disk they had.

One drawback that I have found with ESDI drives is the physical connection between the controller and the drive itself. Two cables were needed: a 34-pin control cable and a 24-pin data cable. Although the cables are different sizes and can't be (easily) confused, the separation of control and data is something I was never a big fan of. The connectors on the drive itself were usually split into two unequal halves. In the connector on the cable, a small piece of plastic, called a key, prevented the connector from being inserted improperly. Even if the key is missing, you can still tell which end is which by the fact that the pins on the hard disk are labeled and the #1 line on the cable has a colored stripe down its side. (That may not always be the case, but I have never seen one that isn't.)

Another drawback that I have found is that the physical location on the cable determines which drive is which. The primary drive is located at the end of the cable, with the secondary in the middle. The other issue is the number of cables. ESDI hard disk drives require three separate cables. Each drive has its own data cable and they share a common control cable.

Although originally introduced as the interface for hard cards (these were hard disks directly attached to expansion cards), the IDE (Integrated Drive Electronics) interface has grown in popularity to the point where it is perhaps the most commonly used hard disk interface today (though it is rapidly being replaced by SCSI). As its name implies, the controller electronics are integrated onto the hard disk itself. The connection to the motherboard is made through a relatively small adapter, commonly referred to as a "paddle board." From here, a single cable is used to attach two hard disks in a daisy-chain. This is similar to the way floppy drives are connected, and often IDE controllers have connectors and control electronics for floppy drives as well.

IDE drives often play tricks on systems by presenting a different face to the outside world than is actually the case on the disk. For example, since IDE drives are already pre-formatted when they reach you, they can have more physical sectors in the outer tracks, thereby increasing the overall amount of space on the disk that can be used for storage. When a request is made to read a particular block of data on the drive, the IDE electronics translate this to the actual physical location.

Because IDE drives come pre-formatted, you should never low-level format an IDE drive, unless specifically permitted by the manufacturer. This has the potential for wiping out the entire drive to the point where it must be returned to the factory for "repair." Certain drive manufacturers, such as Maxtor, provide low-level format routines that accurately and safely low-level format your drive. Most vendors that I am aware of today, simply "zero" out the data blocks when doing a low-level format. However, don't take my word for it! Check the vendor.

The next great advance in hard disk technology was SCSI. SCSI is not a disk interface, but rather a semi-independent bus. More than just hard disks can be attached to a SCSI bus. Because of its complex nature and the fact that it can support such a wide range of devices, I talked in more detail about SCSI earlier. However, there are a few specific SCSI issues that relate to hard disks in general and the interaction between SCSI and other types of drives.

The thing to note is that the BIOS inside the PC knows nothing about SCSI. Whether this is an oversight or intentional, I don't know. The SCSI spec is over ten years old, so there has been plenty of time to include it. On the other hand, the BIOS is for DOS. DOS makes BIOS calls. In order to be able to access all the possible SCSI devices through the BIOS, it would have to be several times larger. Therefore, every PC-based operating system needs to have extra drivers to be able to access SCSI devices.

Since the BIOS does not understand SCSI, in order to boot from a SCSI device, you have to trick the PC's BIOS a little. By telling the PC's BIOS that there are no drives installed as either C: or D:, we force it to quit before it goes looking for any of the other types. Once it quits, this gives the BIOS on the SCSI host adapter a chance to run.

The SCSI host adapter obviously knows how to boot from a SCSI hard disk and does so wonderfully. This is assuming that you enabled the BIOS on the host adapter. If not, you're hosed.

There is also the flip side of the coin. The official SCO doctrine says that if you have a non-SCSI boot drive, then you have to disable the SCSI BIOS, as this causes problems. However, I know customers who have IDE boot drives and still leave the SCSI BIOS enabled. SCO simply reacts as if the SCSI BIOS were not enabled. So, what to do? My suggestion is to see what works. The only thing I can add is that if you have multiple host adapters, then only one should have the BIOS enabled.

Another thing is that once the kernel boots from a SCSI device, you lose access to other kinds of drives. Just because it doesn't boot from the IDE (or whatever), does this mean you cannot access it at all? Unfortunately, yes. This is simply the way the kernel is designed. Once the kernel has determined that it has booted off of a SCSI hard disk, it can no longer access a non-SCSI one.

The newest member of the hard disk family is Enhanced IDE, or EIDE. The most important aspect of this new hard disk interface is its ability to access more than 504 megabytes. This limitation exists because the IDE interface can access only 1024 cylinders, 16 heads and 63 sectors per track. If you multiply this out using the formula I gave you above, you get 504 Mb.
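The 504 Mb figure falls straight out of those three limits:

```python
MAX_CYLINDERS = 1024
MAX_HEADS = 16
MAX_SECTORS_PER_TRACK = 63

limit_bytes = MAX_CYLINDERS * MAX_HEADS * MAX_SECTORS_PER_TRACK * 512
# 528,482,304 bytes -- the "528 Mb" you see when counting in decimal,
# but exactly 504 true (2^20-byte) megabytes
limit_mb = limit_bytes / (1024 * 1024)
```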

EIDE also has other advantages, such as higher transfer rates, the ability to connect more than just two hard disks, as well as the ability to attach more than just hard disks. One of the drawbacks that EIDE had at the beginning was part of its very nature. In order to overcome the hard disk size limit that DOS had, EIDE drives employ a method called logical block addressing (LBA). This ended up breaking backwards compatibility with the standard IDE/ST-506 interface that SCO used. As a result, up until OpenServer, EIDE was not supported. Although you could disable the logical block addressing (setting the drive to standard IDE) to allow both DOS and ODT to see the drive, you end up losing the additional size.

The idea behind LBA is that the system's BIOS would "re-arrange" the drive geometry so that drives larger than 528Mb could still be booted. Because SCO does not use the BIOS to access the hard disk, the fact that the BIOS could handle the EIDE drive meant nothing. New drivers needed to be added to account for this. This was done for OpenServer.

A couple of things to note when dealing with EIDE drives. First, support for multiple drives where some use LBA and some do not, is machine dependent. SCO recommends that either all the drives have LBA enabled or all of them do not. This eliminates many potential problems.

If you want to use LBA and the drive is new, there is no problem. There is also no problem if the drive was previously used with LBA enabled. Problems arise when it was previously used with LBA disabled and you now want to install SCO with LBA enabled. In order to do this correctly, you need to delete the disk parameter table from the hard disk.

Boot the system with the install floppy and type tools at the Boot: prompt. Select the option: "Execute a shell on a ramdisk filesystem." When you get to a prompt type:

dd if=/dev/zero of=/dev/rhd00 bs=1b count=1

When the command completes, reboot from the install floppy. What this command does is write a single 512-byte block of zeros to the very beginning of the hard disk, which invalidates the drive information.

Floppy Drives

A customer once called in to SCO Support with a system that would not boot. For some unknown reason, the system crashed and would no longer boot from the hard disk. It got to a particular point in the boot process and hung. Even the copy of unix.old hung in the same way.

Fortunately the customer had an emergency boot floppy which allowed them to boot and get access to the hard disk. We stuck the floppy in the drive and pressed the reset button. After a moment, there was the "Boot:" prompt. Since we wanted to make sure things were fine before we continued, I decided to first boot from the floppy to see if we could access the hard disk. So, we pressed enter.

After a moment, the familiar dots went running across the screen. All seemed to go fine until suddenly the dots stopped. The customer could hear the floppy drive straining and then came the dreaded "floppy read error." Rather than giving up, I decided to try it again. Same thing.

At that point I started to get concerned. The hard disk booted, but the kernel hung. The floppy booted, but somewhere in the middle of loading the kernel, there was a bad spot on the floppy. This was not a happy thing.

The floppy disk was brand new and they had tested it out immediately after they made it. The most logical thing that could have caused this problem was having the floppy too close to a magnetic field. Nope! That wasn't it either. They were told to keep this floppy in a safe place and that's what they did.

What was that safe place? They had tacked it to the bulletin board next to the monitor. Not through the hub or at one of the corners, but right through the floppy itself. They were careful not to stick the pin through the media access hole, as they were told never to touch the floppy media itself.

In this section, we're going to talk about floppy disks, lovingly referred to as floppies. They come in different sizes and shapes, but all floppies serve the same basic functions. Interaction with floppies can be a cause of great heartache for the unprepared. So, we're going to talk about what they are like physically, how they are accessed and what kinds of problems you can have with them.

Although they hold substantially less data, floppies appear and behave very much like hard disks. Like hard disks, floppies are broken down into sectors, tracks, and even cylinders. Like hard disks, the number of tracks tells us how many tracks are on a given surface. Therefore, a floppy described as 40 tracks (such as a 360Kb floppy) actually contains 80 tracks, or 40 cylinders.

Other common characteristics are the header and trailer of each sector, resulting in 571 bytes per sector, of which 512 are data. Floppy disks almost universally use MFM data encoding.

SCO floppy drivers support a wide range of floppies: from the ancient 48 tracks per inch/8 sectors per track, 5.25" floppies to the newest 135 tracks per inch/36 sectors per track, 3.5" floppies that can hold almost 3Mb of data. More commonly, however, the floppy devices found on systems today are somewhere in between.

Because they are as old as PCs themselves, floppies have changed little except for their size and the amount of data that can be stored on them. As a result, very few problems are encountered with floppies. One of the most common problems is that customers are unsure of which floppy device goes to which type of drive. Sometimes customers do know the difference and try to save money by forcing the floppy to format in a density higher than it was designed for. That's where I come in. I carry a keyboard.

It was Thursday, and the weather was warm (as always) in Santa Cruz. We were working the afternoon shift on OSD. The boss's name is David; second tier was Wyliam. My name's Mohr. James Mohr.

We had gotten a call from an irate customer claiming there was a bug in our software. She insisted that she immediately talk to an analyst. She wanted help to get back data that was apparently on the floppy that she claimed our floppy driver had toasted. Even without a support contract, a customer reporting a bug always gets through. It's part of our job to ensure a quality product.

When I first picked up the phone, the customer seemed angry that it took so long to reach someone. I explained that the queue was on a first come, first served basis. Rather than satisfying her, that seemed to enrage her more.

"There must be a lot of bugs in your software if so many people call in with problems," she said.

I took a sip of coffee and said, "I'm sorry you feel that way ma'am. What can I do to help you?"

The customer continued her tirade on how sloppy the software was and that it was amazing we managed to stay in business so long with such a shoddy product and how we were legally responsible for helping her get her data back.

"Just the facts, ma'am," I said. "Tell me what happened."

"Well, I have been saving data on this floppy to take to our other office. All of a sudden there are errors all over the place."

"It could be a bad floppy," I suggested calmly.

"No sir!" she insisted, "These are high quality floppies. Guaranteed 100% error free."

"How old are they, ma'am? You know, floppies do lose their ability to store data over time."

"These are brand new floppies, young man, and don't you go trying to put the blame on the floppies. It's your floppy driver that's the problem."

I took another sip of coffee, realizing that this would not be an easy call. "Could be ma'am, but I need some more information to make that determination. What size floppy is it?"

"Five and a quarter."

"I see. And what device did you use to format it with?"

"I use /dev/rfd096ds15. That's a high density 5 1/4, you know!"

"Yes, ma'am. Does the floppy have a reinforcement ring in the middle? I mean is there a ring in the center of the floppy that sort of sticks up a little."

"Why, yes, but I don't see what that has to do with anything."

"Well, you see ma'am. That's a low density floppy. You can't format those at high density."

"What do you mean? We've been doing this for a long time and we've never had problems before. My boss said that we can do this to save money."

"No, ma'am, you can't. Up to now, you've been lucky. Keep in mind ma'am that floppies can only be formatted as high as the manufacturer allows."


"Have a good day, ma'am."

The story you have just read is true. The names were changed to protect the innocent. Actually the opposite is true. I kinda glorified the story a bit, but the names are true.

The truth of the matter is that you can't format floppies higher than you're supposed to. That is, higher than the manufacturer specifies. To some extent you might get away with punching holes in single-sided floppies to make them double-sided. However, forcing a floppy to format at a higher density (even if it works) isn't worth risking your data on.

In order to understand why this is so, we need to talk about the concept of coercivity. That is, how much energy (how strong the magnetic field) must be used in order to make a proper recording on a disk. Older floppies had a lower coercivity and therefore required a weaker magnetic field to hold the signal. That is, less energy was required to "coerce" them into a particular pattern.

This seems somewhat contradictory, but look at it another way. As densities increased, the magnetic particles got closer together and started to interfere with each other. The result was to make the particles weaker magnetically. Because the particles are weaker magnetically, a stronger force is needed to "coerce" them into the proper patterns to hold data. Therefore, high density disks have a higher coercivity.

As the capacity of drives increased, the tracks became narrower. The low density 5.25" floppies had 48 tracks per inch and could hold 360K of data. The high density 5.25" floppies have twice as many tracks per inch and can hold 1.2Mb. (The added increase is also due to the fact that they have 15 sectors per track instead of nine.) Since there are more tracks in a given space, they are therefore thinner. Problems arise if you use a disk formatted at 360K in a 1.2Mb drive. Because the 1.2Mb drive writes the thinner tracks, not all of the track of the 360K floppy is overwritten. This may not be a problem in the 1.2Mb drive, but if you ever try to read that floppy in a 360K drive, the data will run together. That is, the wider head will read data from more than one track.

Formatting a 360K floppy as a 1.2Mb usually fails miserably, because of the different number of tracks, so you usually can't get yourself into trouble. However, with 3.5" floppies the story is a little different. For both the 720Kb and 1.44Mb floppies, there are 80 tracks per side. The difference is that the 1.44Mb floppies are designed to handle 18 sectors per track instead of just 9. As a result, formatting appears to go fine. It is only later that you discover that the data is not written correctly.
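The four common floppy formats all fall out of the same cylinders x heads x sectors arithmetic used for hard disks:

```python
def floppy_kb(tracks_per_side, sides, sectors_per_track):
    """Floppy capacity in kilobytes, at 512 data bytes per sector."""
    return tracks_per_side * sides * sectors_per_track * 512 // 1024

# 5.25" low density:  floppy_kb(40, 2, 9)  -> 360
# 5.25" high density: floppy_kb(80, 2, 15) -> 1200 (1.2 Mb)
# 3.5"  low density:  floppy_kb(80, 2, 9)  -> 720
# 3.5"  high density: floppy_kb(80, 2, 18) -> 1440 (1.44 Mb)
```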

The reason for this is that the magnetic media for the lower-density 720Kb floppies is less sensitive. By formatting it as 1.44Mb, you subject it to a stronger magnetic field than you should. After a while, this "overdose" causes the individual magnetic fields to begin interfering with one another. Since high-density, 1.44Mb floppies are well below $1.00 apiece, it's not worth risking data by trying to force low-density to high-density to save money.

While on the subject of money, buying unformatted floppies to save money is becoming less and less the smart thing to do. If you figure that formatting floppies takes at least two minutes apiece and the cost difference between a package of ten formatted floppies and ten unformatted ones is $2, then it would only make sense (or cents) to have someone format them if that person were making only $6.00 an hour. Rarely does a company have someone whose sole job is to format floppies. It usually falls on the people who use them, and most of them earn more than $6.00 an hour.
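The break-even arithmetic, for the skeptical:

```python
minutes_per_floppy = 2
floppies_per_pack = 10
price_difference = 2.00    # a formatted pack costs $2 more

labor_hours = floppies_per_pack * minutes_per_floppy / 60   # 1/3 hour
break_even_wage = price_difference / labor_hours            # $6.00/hour
```

Anyone earning more than that is cheaper to supply with pre-formatted disks.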

(I actually did some consulting work for a company whose president insisted that they buy unformatted floppies. Since the only people who used the floppies were his programmers and system administrators, they earned well above $6.00 an hour. In one case, I calculated that turning a package of ten unformatted floppies into formatted ones worked out to costing twice as much as buying formatted ones in the first place. That didn't faze him a bit, as the system administrators were on salary and were getting paid no matter what. By saving a few dollars buying unformatted ones, his profit margin looked better. At least it did on paper.)

Tape Drives

For the longest time, tape drives were literally a mental block for me. Although I understood the basic concept (writing to a tape similar to a music cassette), there were "just so many" different kinds that it took me quite a while before I felt comfortable with them.

Because this device has the potential for saving your data (or opening up career opportunities for you to flip burgers), knowing how to install and use tape drives is an important part of your job as a system administrator. Since the tape device node is usually read/write, regular users can also back up their own data with it.

The first tape drives supported under SCO UNIX were quarter inch cartridge tapes, or QIC tapes. QIC is not just an abbreviation for the size of the media, but is also a standard.

In principle, a QIC tape is like a music cassette. Both consist of a long tape with two layers. The "backing" is usually made of cellulose acetate (photographic film) or polyester (1970s leisure suits), with polyester being more common today. The "coating" is the actual media that holds the magnetic signals.

The difference is in the way the tapes are moved from the supply reel to the take-up reel. In cassette tapes, movement is accomplished by a capstan, and the tape is pinched between two rollers. QIC tapes spread the driving pressure out over a larger area by means of a drive belt. Additionally, more care is taken to ensure that the coating touches only the read/write heads. Another major difference is the size: QIC tapes are much larger (a little smaller than a VHS video tape).

The initial size of QIC tape was 300 feet, which held approximately 30Mb of data. This is the DC300 tape. Next came the DC600, which was 600 feet long and could hold about 60Mb. As with other technologies, tape drives got better, tapes got longer, and both were able to hold more data. The technology advanced to the point where the same tapes could be used in newer drives and store up to twice as much as they could before.

There are currently several different QIC standards for writing to tapes, depending on the tape and tape drive being used. Older, 60Mb drives use the QIC-24 format when writing to 60Mb tapes. Newer drives use the QIC-525 format to write to several different kinds of tapes. As a result, different tapes yield different capacities depending on the drive on which they are written.

For example, I have an Archive 5150 tape drive that is "officially" designed to work with 150Mb tapes (DC6150). However, I can get 120Mb onto a DC600. Why? The DC600 is 600 feet long and the DC6150 is just 20 feet longer. However, a drive designed for DC600 tapes writes only 9 tracks, whereas a drive designed for DC6150s (like mine) writes 15 tracks. In fact, there are many different combinations of tapes and drives that can be used.

One thing I would like to point out from a technical standpoint is that there is no difference between 150Mb QIC tape drives and 250Mb QIC drives. When the QIC standard was enhanced to include 1,000-foot tapes, 150Mb drives automagically became 250Mb drives. (I wish I had known this before I went out and bought so many DC6150 tapes. Oh, well. Live and learn.)

A similar thing happened with 320Mb and 525Mb tapes. The QIC-320 standard was based on 600-foot tapes. However, the QIC committee decided to go with the QIC-525 standard, based on 1,000-foot tapes. That's why a 600-foot tape written with the QIC-525 standard holds 320Mb.
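The pattern in these last two paragraphs can be sketched as a quick calculation, assuming capacity scales linearly with tape length for a given recording format:

```python
# Rough capacity of a tape written in a given QIC format, assuming the
# capacity scales linearly with tape length (track count and recording
# density are fixed by the format). Illustrative only.

def capacity_mb(format_capacity_mb, format_length_ft, tape_length_ft):
    return format_capacity_mb * tape_length_ft / format_length_ft

# QIC-525 is rated at 525Mb on a 1,000-foot tape, so a 600-foot tape
# written in the same format holds roughly:
print(capacity_mb(525, 1000, 600))   # 315.0 -- marketed as "320Mb"
```

The same scaling explains why a drive rated for 150Mb tapes becomes a "250Mb" drive when fed a 1,000-foot tape.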

Notice that this entire time I never referred to QIC-02 tapes. That's because QIC-02 is not a tape standard, but a controller standard.

An interesting side note is how the data is actually written to the tape. QIC tape drives use a system called "serpentine recording." Like a serpent, it winds its way back and forth along the length of the tape. The drive starts at one end and writes until it reaches the other end. It then reverses direction and begins to write back toward the end where it started.

Also common are QIC-40 and QIC-80 tape drives, which provide 40Mb and 80Mb, respectively, and offer an inexpensive backup solution. These tape drives are connected to standard floppy controllers and, in most cases, the standard floppy cables can be used. The tapes used by this kind of drive are about the same size as a pack of cigarettes.

Aside from using the same type of controller, QIC-40/80 tape drives have other similarities with floppy drives. Both use modified frequency modulation (MFM) when writing to the device. Sectors are assigned in a similar fashion, and each tape has the equivalent of a file allocation table to keep track of where each file is on the media.

QIC-40/QIC-80 tapes need to be formatted prior to use, just like floppies. Because the amount of data stored is substantially greater than on a floppy, formatting takes substantially longer. Depending on the speed of the tape drive, formatting can take up to an hour. Pre-formatted tapes are also available and, like their floppy counterparts, are only slightly more expensive than unformatted ones.

Because these tape drives run off the floppy controller, it is often a choice between a second floppy drive and a tape drive. The deciding factor is the floppy controller: normally, floppy controllers can handle only two drives, so this is usually the limit.

However, this limit can be circumvented if the tape drive supports soft select (sometimes called "phantom select"), whereby the software chooses the device number for the tape drive when it is using it. The ability to do "soft select" is dependent on the drive. While more and more floppy tape drives support this capability, many of the older drives do not. We get into more details about this in the second part of the book when we talk about installing and using tape drives.

Similar in size and functionality are Irwin tape drives. Although almost identical to QIC-40/QIC-80 tape drives, Irwins behave somewhat differently with regard to the operating system. Therefore, a special driver is needed to access Irwin tape drives, and there is a special option in the 'mkdev tape' script for them: "Mini-Cartridge." We also talk about this later in the book.

On larger systems, neither QIC nor mini-tapes can really handle the volume of data being stored. While some QIC tapes can store up to 1.3 Gb, they cannot compare to digital audio tape (DAT) devices. Such devices use Digital Data Storage (DDS) media. Rather than storing signals similar (or analogous) to those coming across the bus, DDS stores the data as a series of numbers or digits on the tape. Hence, the name "digital." The result is much higher reliability.

Physically, DAT tapes are the smallest that SCO supports. The actual media is 4mm wide, hence DAT tapes are sometimes referred to as 4mm tapes.

Hewlett-Packard DAT tapes can be divided into multiple logical tapes. This is useful when making backups if you want to store different filesystems to different "tapes" and don't want to use any extra physical tapes. Device nodes are created to represent these different logical tapes. DAT tape drives can quickly scan for the location of subsequent partitions (as they are called), making searches much faster than on single-tape backups. For more details, see the dat(HW) man-page.

One thing to watch out for is that data written to DAT tapes is not as standardized as data written to QIC tapes. Therefore, it is possible that data written on one DAT drive cannot be read on another.

There are two reasons for this problem. The first is the blocking factor: the minimum space each file will take up. A 1Kb file with a blocking factor of 20 will have 19Kb of wasted space. Such a situation is faster in that the tape drive streams more, but there is a lot of wasted space. DAT tape drives use either a variable or a fixed block size. Each drive has a default blocking factor that is determined by the drive itself.
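As a rough sketch of the blocking-factor arithmetic (assuming, as in the example above, 1Kb blocks and that every write is rounded up to a whole multiple of the blocking factor):

```python
import math

# Space a file occupies on tape with a fixed blocking factor: writes
# are rounded up to a whole number of blocks. As in the example above,
# blocks are assumed to be 1Kb each. Illustrative only.

def space_used_kb(file_size_kb, blocking_factor):
    block_kb = blocking_factor                       # Kb per write
    return math.ceil(file_size_kb / block_kb) * block_kb

used = space_used_kb(1, 20)
print(used, "Kb used,", used - 1, "Kb wasted")   # 20 Kb used, 19 Kb wasted
```

With many small files, the wasted space adds up quickly, which is the trade-off for the faster streaming.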

Another problem is data compression which, if done, is performed at the hardware level. Since there is no standard for data compression, it is very unlikely that two drives from different manufacturers that both do data compression will be able to read each other's tapes.

These are just a couple of the reasons why SCO doesn't provide tape installation media other than QIC tapes.


SCO installation media is becoming more and more prevalent on CD-ROM. A CD-ROM takes a lot less space than 50 floppies or even a quarter-inch cartridge (QIC) tape, so the media is easier to handle. Added to this, CD-ROMs are significantly faster than either floppy or tape media. The CD-ROM media is also cheaper. You will save a substantial amount of cash by ordering the CD-ROM media, since ODT and OpenServer are several hundred dollars cheaper on CD-ROM than on floppy. If you already have a supported SCSI host adapter in your system, the money you save by ordering CD-ROM media literally pays for the cost of the CD-ROM drive. Once installed, the CD-ROM drive is yours to keep.

OpenServer is available on almost 150 floppies and costs an extra $300. By not paying that much extra for the floppies and buying the CD-ROM version, you can spend the $300 on a CD-ROM drive. Besides, installing from so many floppies would probably take all day and you can install from CD-ROM in less than two hours.

Another important aspect of CDs as installation media is their size: it is possible to get a large number of products on a single CD. You can then ship one CD to a customer and have them pull off what they need (i.e., what they paid for). If they later decide they want an additional product, they don't have to wait for the media to be shipped.

CD-ROMs, in fact CD technology in general, have always fascinated me. It amazes me that you can get so much information into such a small space and still have such quick access to your data.

The basic principle behind data storage on a CD is really nothing more than Morse code. A series of light and dark areas (dots and dashes) compose the encoded information on the disk. Commercial CDs, whether music or data, almost universally have data on one side of the disk. Although there is nothing technologically preventing a CD from having a flip side, convention limits data to just a single side. This is enough when you consider that you can get over 600Mb of data on a single CD. As the technology improves, that amount is steadily increasing. In addition, certain manufacturers are working on dual-sided CDs.

On the surface of the disk is a series of "dents" or holes, called "pits." The areas between the pits are called "lands." A laser is projected onto the surface of the disk, and the light is either reflected by the lands or scattered by the pits. If reflected, the light reaches a light-sensing receptor, which then sends an electrical signal that is received by the control mechanism of the CD-ROM drive. Just as the pattern of alternating dots and dashes forms the message when using Morse code, it is the pattern of reflected light and no light that indicates the data being stored on the disk.

When I first thought about CD-ROMs, I conceptualized them as being like WORM (Write-Once Read-Many) drives. Which they are, sort of. I visualized them as a read-only version of a hard disk. However, the more closely you look at how CDs store their data, the less they have in common with hard disks.

If you remember our discussion of hard disks, each surface is composed of concentric rings called tracks, and each track is divided into sectors. The disk spins at a constant speed as the heads move in and out across its surface. Therefore, the tracks on the outer edge are moving faster than those on the inside.

Take, for example, a track that is half an inch away from the center of the disk. The diameter of the circle representing the track is one inch, so the circumference of that circle is approximately 3.1415 inches. Spinning 60 times a second, a point on that track moves at a speed of about 190 inches per second. Now take a track one inch from the center, or twice as far out. The circumference of the circle representing that track is 6.2830 inches. It, too, is going around 60 times per second. However, since it has to travel twice as far in each revolution, it has to be going twice as fast.
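The arithmetic above can be checked with a few lines of code (60 revolutions per second corresponds to a 3,600 rpm drive):

```python
import math

# Linear speed of a point on a spinning disk: circumference times
# revolutions per second. 60 revolutions per second is a 3,600 rpm drive.

def linear_speed(radius_inches, revs_per_second=60):
    circumference = 2 * math.pi * radius_inches   # inches per revolution
    return circumference * revs_per_second        # inches per second

print(round(linear_speed(0.5)))   # 188 -- about 190 inches per second
print(round(linear_speed(1.0)))   # 377 -- twice as far out, twice as fast
```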

A CD-ROM isn't like that. CD-ROMs rotate in a manner called "constant linear velocity." The motor keeps the data moving past the reader at the same speed regardless of where on the CD it is reading. Therefore, as the light detector moves inward, the disk speeds up so that the same amount of data passes under the head every second.

Let's look at hard disks again. They are divided into concentric tracks that are divided into sectors. Since the number of sectors per track remains constant, the sectors must get smaller toward the center of the disk. (This is because the circumference of the circle representing the track is getting smaller as you move in.)

Again, a CD-ROM isn't like that. Actually, there is no reason why it should be. Most CD-ROMs are laid out in a single spiral, just like a phonograph record. There are no concentric circles, so there is no circumference to get smaller. As a result, the sectors on a CD can remain the same size throughout. Because the sectors remain the same size, more of them fit on the disk, and therefore there is more data for the user.
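As a rough sketch of what constant-size sectors buy you: the figures below (75 sectors per second, 2,048 user bytes per sector, 74 minutes) are the common ones for data CDs and are not taken from the text, so treat them as illustrative:

```python
# Capacity of a CD when every sector holds the same amount of data:
# (sectors per second) x (playing time) x (bytes per sector). These
# are the usual figures for a 74-minute data CD, assumed here.

SECTORS_PER_SECOND = 75    # sectors read per second at normal speed
BYTES_PER_SECTOR = 2048    # user data per sector ("Mode 1")

def cd_capacity_mb(minutes=74):
    sectors = SECTORS_PER_SECOND * minutes * 60
    return sectors * BYTES_PER_SECTOR / (1024 * 1024)

print(round(cd_capacity_mb()))   # 650 -- the "over 600Mb" mentioned above
```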

ODT only supports SCSI CD-ROM drives that are attached to supported host adapters. However, most newer SCSI CD-ROMs adhere to the SCSI-2 standard. Since the SCO CD-ROM driver (Srom) issues standard SCSI commands, there is no reason why SCSI CD-ROMs that are not "officially" supported should not work. In fact, there are very few cases where they don't, as long as they are attached to a supported SCSI host adapter. In addition, OpenServer supports ATAPI/EIDE CD-ROM drives.

A little-known fact about SCO CD-ROM devices is that there are actually two different kinds. The first is the kind that everyone is familiar with: it contains a DOS or UNIX filesystem and can be mounted like any other. These have the ISO-9660 (referred to as High Sierra under SCO) filesystem format. The second kind is referred to as a CD-Tape device. This is the device being accessed by the system when doing an install. However, this kind cannot be mounted.

It is relatively easy to understand how this is accomplished since, as we mentioned before, the CD is a single spiral. If we were to unwind this spiral, we would have a long "strip" of data, just as if it were a tape. The logic behind this is one of practicality. SCO had been using tapes as its installation media for some time before it started using CDs as installation devices. There were already well-documented procedures for creating installation tapes, and there was nothing yet in place to make installation CD-ROMs. Rather than almost completely redesigning the installation process, a much simpler change was made to create the CD-ROM tape driver.

In addition to the CD-Tape format, SCO supports several CD-ROM formats, including UNIX filesystems, ISO-9660, and Rock Ridge (which is similar to ISO-9660 but supports things like long file names). Although there are no problems mounting another UNIX filesystem, adding an ISO-9660 filesystem has a unique set of problems.

When installing a CD-ROM drive, you have two options. The first is to install a normal CD-ROM, and the other is to install a CD-Tape device. The CD-Tape device has one purpose: installing SCO software. The standard CD-ROM allows you to access both UNIX filesystems and the ISO-9660 format.

In order to access the ISO-9660 format, you need to add the right driver to the system. Since this is necessary to access the standard CD-ROM, you are asked during the installation whether you want to add it.

Magneto-Optical Drives

Although similar in operation to hard disks, magneto-optical (or just MO) drives are very different in the way data is written to and read from the drive. One of the key differences is that MO drives write to a much smaller area than a regular hard disk. To accomplish this, they rely on a laser to get the data area down to a much smaller size than on a hard disk.

Because the media they are written to has a higher coercivity, the data on MO disks has a much longer lifetime than on normal hard disks. Manufacturers of MO drives estimate that the magnetic fields holding the data will remain at "useable" levels for 10-15 years. (Interesting speculation when you consider that MO technology is not that old.)

Ever placed a floppy near a magnetic field (e.g., a stereo speaker) and had it lose data? Because of the MO disk's very high coercivity, this is much less likely to happen. Remember that coercivity is basically the force needed to put the magnetic field into the correct pattern to represent data. If it takes a lot of force to turn it into data, it takes a lot of force to turn it into garbage. Therefore, MO disks are much less affected by stray magnetic fields.

There is a problem here as well: how do you get a strong enough magnetic field to whip the MO disk into shape? Well, there is an interesting aspect of magnetism called the Curie temperature. Each medium has a specific Curie temperature. Close to this temperature, it takes almost no force to change the magnetic field. That is, the coercivity drops to nearly zero.

By using the laser to heat a spot to near the Curie temperature, a weaker magnetic field is needed. Even though the field created by the head covers a wide area, only the pin-point spot heated by the laser is hot enough to be affected.

MO drives have one major disadvantage compared to conventional hard disks: a longer access time when writing data. This is because the design of the MO drive cannot change the orientation (polarity) of the field fast enough. As a result, the drive must make two passes over the same area. The first pass aligns the field in the area to be written in a single direction. The second pass then aligns it the way the data would have it. Add the slower rotational speed, and you get access times that are two to three times longer than on conventional hard disks.

Reading the disk is slightly different. Whereas the writing process uses a self-generated magnetic field to orient the particles, reading uses the laser. The photons in the laser are aligned with respect to each other. When they enter the magnetic field generated by the disk, their alignment changes. This change can be detected and is what allows us to read the data.

Serial Ports

Most machines sold today come with two serial ports attached. These can be built into the motherboard, part of a serial/parallel card, or part of an "all-in-one" card that has serial ports, parallel ports, a game port, and even hard disk and floppy controllers.

A serial board is an expansion card that translates bus signals, in which at least eight bits arrive simultaneously, into signals that are sent one bit at a time. These bits are encapsulated into groups of one byte. The encapsulation contains other signals that represent the start and end of the byte, as well as a parity bit. Additionally, the number of bits used to represent data can be either 7 or 8.

Parity is the mechanism by which single-bit errors can be detected during transmission. The number of bits set to one is counted and, based on whether even or odd parity is used, the parity bit is set. For example, if even parity is used and 3 bits are set, then the parity bit is also set to make the total number of set bits even. However, if odd parity is used, the number of set bits is already odd, so the parity bit is left unset. When you are using some other means to detect errors, parity can be turned off, and you are said to be using no parity. This is the default for modems under SCO UNIX.
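The parity calculation can be sketched in a few lines:

```python
# Computing the parity bit for a byte: count the 1 bits, then choose
# the parity bit so that the total count comes out even (or odd).

def parity_bit(byte, mode="even"):
    ones = bin(byte).count("1")
    if mode == "even":
        return ones % 2           # set only if the count is currently odd
    return (ones + 1) % 2         # odd parity: set if the count is even

# 0b00000111 has three bits set:
print(parity_bit(0b00000111, "even"))   # 1 -- brings the total to four (even)
print(parity_bit(0b00000111, "odd"))    # 0 -- three is already odd
```

The receiver repeats the same count; if the received parity bit doesn't match, a single-bit error occurred somewhere in transmission.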

Serial communication parameters must be agreed upon by both ends. These parameters are often referred to in triplets, such as 8-1-N (read "eight-one-none"). In this instance, there are eight data bits, one stop bit, and no parity. This is the default for SCO UNIX systems.
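One consequence of this framing is worth a quick calculation: every data byte costs extra bits on the wire. A sketch:

```python
# Bits actually sent on the wire for each data byte. With 8-1-N (eight
# data bits, one stop bit, no parity), each byte costs ten bits:
# one start bit + eight data bits + one stop bit.

def bits_per_byte(data_bits=8, stop_bits=1, parity=False):
    return 1 + data_bits + (1 if parity else 0) + stop_bits

print(bits_per_byte())        # 10
print(8 / bits_per_byte())    # 0.8 -- only 80% of the line carries data
```

This is why a "9600 bits per second" line delivers roughly 960 characters per second, not 1200.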

One of the key elements of a serial board is the Universal Asynchronous Receiver-Transmitter, or UART. The transmitter portion takes a byte of parallel data written by the serial driver to the card and transmits it one bit at a time (serially). The receiver does just the opposite. It takes the serial bits and converts them into parallel data that is sent down the bus and is read by the serial driver.

Although SCO only provides drivers for standard serial ports, intelligent serial boards are often installed to allow many more logins (or other connections) to the system. The most significant difference is that intelligent serial boards (often referred to as smart serial boards) have a built-in CPU. This allows the board to take all of the responsibility for processing the signals away from the system CPU.

In addition, intelligent serial boards can better buffer incoming signals, which keeps them from getting lost. With non-intelligent boards, the system may be so busy that it does not get around in time to read the characters off the board. Although the 16550 UART common on most serial boards today contains 16-byte buffers, this is often not enough. Under heavy load, the serial driver does not react fast enough and characters are overwritten.

Intelligent boards also increase serial performance. Since signals are buffered and sent in large chunks, there is less overhead on a per-character basis. With non-intelligent boards, single characters are often transmitted, so the per-character overhead is much larger. In fact, most non-intelligent boards generate an interrupt, and the associated overhead, with each character.

Because there is a lot of processing done on the board itself, intelligent serial boards require special drivers from the manufacturer. Since SCO does not have access to the drivers and cannot determine what is and what is not correct behavior, support for these devices is often difficult. As with other devices, if the device driver is not included with the product, then SCO is under no obligation to support it. This doesn't mean they won't. SCO has very good official and non-official relationships with many vendors and both sides are willing to help out the other.

It is possible to obtain supported serial boards that have multiple ports. Although such boards have multiple UARTs, they do not have the performance of intelligent boards, but do provide a low cost alternative. For a discussion on the device nodes used for such boards, see the section on the device directory.

Originally designed to connect mainframe computers to modems, the RS-232 standard is what is used for serial ports on PCs. Two kinds of devices are defined by RS-232: Data Terminal Equipment (DTE) and Data Communication Equipment (DCE). DTE is the serial port side and DCE is the modem side.

Two types of connectors are used: DB-25 (with 25 pins) and DB-9 (with 9 pins). Although they serve the same basic function, the numbering of the pins is slightly different. Below is a table of the main pins, their functions, and the mnemonics commonly used to refer to them:

Function                 Pin    Mnemonic
Transmit Data             2     TD
Receive Data              3     RD
Request to Send           4     RTS
Clear to Send             5     CTS
Data Set Ready            6     DSR
Signal Ground             7     SG
Carrier Detect            8     CD
Data Terminal Ready      20     DTR
Ring Indicator           22     RI

Table 0.3 Common Pins on DB-25 connector


Function                 Pin    Mnemonic
Carrier Detect            1     CD
Receive Data              2     RD
Transmit Data             3     TD
Data Terminal Ready       4     DTR
Signal Ground             5     SG
Data Set Ready            6     DSR
Request to Send           7     RTS
Clear to Send             8     CTS
Ring Indicator            9     RI

Table 0.4 Pins on DB-9 Connector

Figure 028313 - Physical layout of pins on serial cables

Note that on a DB25 connector, pin 1 is chassis ground, which is different from signal ground. Chassis ground ensures that both serial connectors are operating at the same electric potential and keeps you from getting a shock.

In order to communicate properly, the DTE device must say that it is ready to work by sending a signal on the DTR line. The DCE device must also do the same on the DSR line.

One side indicates that it has data by sending a signal on the RTS line (it is requesting to send data). If ready, the other side says so by sending a signal on the CTS line (the sender is clear to send the data). What happens when the receiving side can't keep up, that is, when the sending side is sending too fast? If the receiving side needs a pause (perhaps a buffer is full), it drops the CTS signal (meaning the sender is no longer clear to send data). This causes the sending side to stop. This is referred to as hardware handshaking, hardware flow control, or RTS/CTS flow control.

Problems arise when connecting other types of devices. Some devices, such as printers, are themselves DTE devices. If you tried to connect them with a standard RS-232 cable, TX would be connected to TX, RX to RX, DSR to DSR, and DTR to DTR. The result: nothing happens. The solution is a cross-over cable, which internally swaps the appropriate signals and makes sure they end up going to the right place.

If you have a terminal, things are easier. First off, although the data is going in both directions, the data coming from the terminal will never exceed the speed of the serial port (I'd like to see you type at 240 characters per second). Data heading toward the terminal is displayed on the screen, which displays it as fast as it comes. Therefore, you only need three signals: transmit, receive, and ground.

Should the terminal be displaying the data too fast for you to read, you can stop it by sending an XOFF character back to the system. This is usually a CTRL-S and, unless this behavior has been turned off, it will stop incoming data. To turn the flow of data back on, you send the system an XON (CTRL-Q) character. This type of flow control is called software flow control or XON/XOFF flow control. In some cases, depending on how the terminal is configured, sending any character will restart the flow of data.
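A toy model of XON/XOFF flow control may help. (In real life the XOFF travels from the terminal back to the host rather than sitting in the data stream, so this is purely illustrative.)

```python
# A toy model of XON/XOFF flow control. CTRL-S (XOFF) pauses output,
# CTRL-Q (XON) resumes it; here the control characters are embedded in
# the stream purely for illustration.

XOFF = "\x13"   # CTRL-S
XON = "\x11"    # CTRL-Q

def deliver(stream):
    """Return the characters that actually reach the screen."""
    paused = False
    shown = []
    for ch in stream:
        if ch == XOFF:
            paused = True        # stop displaying incoming data
        elif ch == XON:
            paused = False       # resume the flow
        elif not paused:
            shown.append(ch)
    return "".join(shown)

print(deliver("abc" + XOFF + "lost" + XON + "def"))   # abcdef
```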

Both the serial(HW) man-page and the System Administrator's Guide provide additional information on serial ports, especially how they work as terminal devices. We will also be talking more about serial devices later when we talk about modems.

Parallel Ports

Parallel ports are a common way printers are attached to an SCO UNIX system. Although many different problems arise with printers attached to parallel ports, there are not many issues with the parallel ports themselves.

First, let's take a look at how parallel ports work.

One of the key differences between parallel and serial ports is the way data is sent across. From our discussion of serial ports, you know that data goes across a serial line one bit at a time, on a single data line. Parallel ports send data a byte (eight bits) at a time, across eight data lines.

Another key difference is the cable. Looking at the computer end, it is easily confused with a serial connector: both have 25 pins in the same layout. The printer end is where things are different. Here there is a special 36-pin connector called a Centronics connector, named after the printer manufacturer Centronics. A cable that has a 25-pin D-shell connector on one end and a 36-pin connector on the other is called a Centronics or parallel cable. Unlike serial cables, there are no different kinds of cables (like straight-through or crossed). Because of this, all that usually needs to be done is to plug in the cable at both ends and go.

Figure 028314 Comparison of Centronics and DB-25 connectors

Although some devices allow communication in both directions along a parallel port, SCO UNIX does not support any of these. In fact, the only thing that SCO directly supports on parallel ports is printers.

Because there is no guarantee that all the data bits arrive at the port at the same time, there needs to be some way of signaling the printer that the data is ready. This is done with the strobe line. Once a character (or any byte of data) is ready, the system sends a signal along the strobe line. The use of the strobe line also prevents characters from being read more than once.

Oftentimes the printer cannot keep up with the data flow from the parallel port. Just like RTS/CTS flow control on serial ports, parallel ports also need a way to be told to stop. This is done with the busy line. Actually, the busy line is set after every character, in case the printer cannot process the character fast enough. Once the character is processed, the printer turns the busy signal off.

However, this is not enough to get the parallel port to send the next character. The printer must first tell the parallel port it has received the character by sending a signal along the acknowledge line. Note that this acknowledge occurs after every character.
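The strobe/busy/acknowledge sequence can be sketched as a toy simulation. The line names follow the description above; the code is illustrative, not a driver:

```python
# A sketch of the per-character parallel handshake described above:
# strobe says "data is valid," busy says "wait," and acknowledge says
# "ready for the next character."

def send_to_printer(data):
    printed = []
    signals = []
    for ch in data:
        signals.append("strobe")   # port: the data lines are valid now
        signals.append("busy")     # printer: processing, don't send more
        printed.append(ch)         # printer consumes the character
        signals.append("ack")      # printer: done, send the next one
    return "".join(printed), signals

text, signals = send_to_printer("hi")
print(text)                    # hi
print(signals.count("ack"))    # 2 -- one acknowledge per character
```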

There are other control lines the printer uses. One is the select line, which indicates that the printer has been selected, or is "on-line." There is also a special line to say when the paper source is empty: the paper empty line. If the problem is unknown, the printer can send a signal along the fault line that basically says "something" is wrong.

One thing that comes up regularly is confusion as to which physical parallel port corresponds to which lp device. In order to work correctly, your parallel ports must be configured according to the following table:

Device name    Address    Interrupt
/dev/lp0       0x3BC      7
/dev/lp1       0x378      7
/dev/lp2       0x278      5

Table 0.5 Default Parallel Port Devices

If a parallel port is configured with the correct address but the wrong interrupt, the port will be recognized but will not work. Note that the hardware screen and hwconfig show you the interrupt that the driver expects the port to have, not what the port is actually configured to use. This is true for all devices.

The use of two devices with the same interrupt is not supported on ISA machines. This means that a system with both /dev/lp0 and /dev/lp1 installed is not supported. The result could be any one (or more) of the following:

- attempting to access either port generates the error message "cannot create" or "no such device or address"

- attempting to access either port hangs with no error message

- attempting to access both ports at the same time causes either one or both ports to hang or to print extremely slowly

On the other hand, I have heard of systems that work with all three parallel ports enabled, provided they are not all accessed simultaneously.

Video Cards and Monitors

Without a video card and monitor, you don't see anything. In fact, every PC that I have ever seen won't even boot unless there is a video card in it. Granted, your computer could boot and even do work without being attached to a monitor (and I have seen those), but it's no fun unless you get to see what's going on.

When PCs first hit the market, there was only one kind of video system. High resolution and millions of colors were something you read about in science-fiction novels. Times changed and so did graphics adapters. The first dramatic change was the introduction of color with IBM's color graphics adapter (CGA), which required a completely new (and incompatible) video sub-system. In an attempt to integrate color and monochrome systems, IBM came out with the enhanced graphics adapter (EGA).

But we're not going to talk about those. Why? First off, no one buys them any more, and I doubt that anyone still makes them. If you could find one, there would be no problem at all installing it and getting it to work. The second reason I am not going to talk about them is that they are simply not that common. Since "no one" uses them any more, the time I spent telling you why I won't tell you about them is already too much.

What are we going to talk about instead? Well, the first thing is VGA. VGA (Video Graphics Array) is the standard on which virtually all video card manufacturers base their products. Although an enhancement to VGA (Super VGA, or SVGA) exists, it is all based on VGA.

When talking about VGA, we first need to talk about some basics of video technology. The first issue is just how things work. Digital signals are sent by the operating system to the video card, which sends them through a digital to analog converter (DAC). Usually there is a single chip that contains three DACs, one for each color (Red, green and blue or RGB). The DAC has a lookup table that determines the voltage to be output on each line for the respective color.

The voltage that the DAC has found for a given color is sent to the three electron guns at the back of the monitor's cathode ray tube (CRT). Again, there is one for each color. The intensity of the electron stream is a result of this voltage.

The video adapter also sends a signal to the magnetic deflection yoke, which aims the electron beams at the right place on the screen. This signal determines how far apart the dots are as well as how often the screen is redrawn. The dots are referred to as pixels, the distance between them is the pitch, and how often the screen is redrawn is the refresh rate.

In order to keep the beams precisely aligned, they first pass through a shadow mask, a metal plate containing hundreds of thousands of little holes. The dot pitch is how closely spaced the holes are. The closer together the holes, the finer (lower) the pitch. A finer pitch means a sharper image.

The electrons from the electron guns strike the phosphors on the inside of the monitor screen and make them glow. Three different phosphors are used, one for each color. The stronger the beams, the more intense the color. Colors other than RGB are created by changing the amount each of these three colors is displayed. That is, by changing the intensity of each color. For example, purple would be created by exciting red and blue phosphors, but no green. After the beam stops hitting the phosphor, it will continue to glow for a short time. To keep the image on the screen, the phosphor must be recharged by the electron beam again.

The electron beams are moved across the screen by changing the deflection yoke. When the beams reach the other side, they are turned off and returned to the starting side, just below the line where they left off. When the guns reach the last line, they move back up to the top. This is called raster scanning and it is done approximately 60 times a second.

Some monitor manufacturers try to save money by using less expensive components. The trade-off is that the beams cannot scan every line during each pass. Instead, they scan every other line during the first pass, then the lines they missed during the second pass. This is called interlacing, as the scan lines are interlaced. Although this provides higher resolutions in less expensive monitors, the images will "flicker" as the phosphors begin to fade before they can be recharged. (This flickering gives me, and other people, a headache.)

For most users, the most important aspect is the resolution. Resolution determines the total number of pixels that can be shown on the screen. In graphics mode, standard VGA has a resolution of 640 pixels horizontally and 480 pixels vertically. By convention, you say that your resolution is 640-by-480.

A pixel is actually a set of three phosphors rather than just a single phosphor. So, in essence, a pixel is a single spot of color on the screen. What color is shown at any given location is an interaction between the operating system and the video card. In the before time, the operating system (or program) had to tell the video card where each dot on the screen was. It had an internal array (or table) of pixels, each containing the appropriate color values. Today, some video cards can be told to draw. They don't need to know that there is a row of red dots between points A and B. Instead, they are simply told to draw a red line from point A to point B. This results in faster graphics, as much of the work is taken over by the video card.

In other cases, the system still needs to keep track of which colors are where. If we had a truly monochrome video system, then any given pixel would either be on or off. Therefore a single bit can be used to store that information. If we go up to 16 colors, we need 4 bits, or half a byte, of information (2^4=16). If we go to a whole byte, then we can have 256 colors at once (2^8). Many video cards use three bytes to store the color data, one for each of the primary colors (RGB). In this way they can get over 16 million(!) colors.
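The relationship between bits per pixel and the number of simultaneous colors is just a power of two. A quick sketch (the function name is mine, not from the text):

```python
# Number of simultaneous colors for a given number of bits per pixel.
def colors_for_bits(bits_per_pixel):
    return 2 ** bits_per_pixel

assert colors_for_bits(1) == 2          # monochrome: on or off
assert colors_for_bits(4) == 16         # half a byte
assert colors_for_bits(8) == 256        # one whole byte
assert colors_for_bits(24) == 16777216  # three bytes: over 16 million
```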

Now, 16 million colors seems like a lot, and it is. However, it's actually too much. Humans cannot distinguish that many colors, so much of the ability is wasted. Added to that, most monitors are limited to just a few hundred thousand colors. So, no matter what your friends tell you about how wonderful their video card is because it does 16 million colors, you need not be impressed. The odds are the monitor can't handle them and you certainly can't see them.

However, don't go thinking that the makers of video cards are trying to rip us off. In fact, it's easier to design cards that use multiples of whole bytes. If we had an 18-bit display (needed to get the 250K of colors that monitors could handle), we would either use six bits of three different bytes or two whole bytes and two bits of a third. Either way, bits are wasted and you spend time processing them. If you know that you have to read three whole bytes, one for each color, then there is not as much processing.

How many pixels and how many colors a video card can show are interdependent. When you bought it, your video card came with a certain amount of memory. The amount of memory it has limits the total number of pixels and colors you can have. If we take the standard resolution of a VGA card of 640x480 pixels, that's 307,200 pixels. If we want to show 16 colors, that's 307,200 x 4 bits, or 1,228,800 bits. Dividing this by eight gives you the 153,600 bytes needed to display 640x480 in 16 colors. Since memory is usually produced in powers of two, the next larger size is 256 kilobytes. Therefore, a video card with 256K of memory is needed.

Maybe this is enough. For me, I don't get enough on the screen with 640x480, and only 16 colors looks terrible (at least to me). However, if you never run any graphics applications on your machine, such as X-Windows, then there is no need for anything better. Operating in text mode, your video card does fine.

As I said, I am not happy with this; I want more. If I want to go up to the next higher resolution (800x600) with 16 colors, I need 240,000 bytes. I am still under the 256K I needed for 640x480 and 16 colors. If, instead, I want 256 colors (which requires 8 bits per pixel), I need at least 480,000 bytes. I now need 512K on the video card.

Now I buy a great big monitor and want something close to "true color". Let's not get greedy, but say I wanted a resolution of 1024x768 (the next step up) and "only" 65,536 colors. I now need 1,572,864 bytes of memory. Since my video card has only 1MB of memory, I'm out of luck!
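The memory calculations above all follow one formula: pixels across, times pixels down, times bits per pixel, divided by eight to get bytes. A small sketch (function name is mine):

```python
# Video memory needed: width * height * bits-per-pixel, converted to bytes.
def video_memory_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

assert video_memory_bytes(640, 480, 4) == 153600      # 16 colors: a 256K card
assert video_memory_bytes(800, 600, 4) == 240000      # still under 256K
assert video_memory_bytes(800, 600, 8) == 480000      # 256 colors: needs 512K
assert video_memory_bytes(1024, 768, 16) == 1572864   # 65,536 colors: over 1MB
```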

But wait a minute! Doesn't the VGA standard only support resolutions up to 640x480? True. However, the Video Electronics Standards Association (VESA) has defined resolutions above 640x480 as Super VGA. In addition to the ones mentioned previously (800x600 and 1024x768), SVGA also includes 1280x1024 and 1600x1200.

Okay. The mere fact that you have a video card that handles SVGA resolutions does not mean you are going to get a decent picture (or at least not the picture you want). Any system is only as good as its worst component, and this also applies to your video system. It is therefore important to understand a characteristic of your monitor: pitch. I mentioned this briefly before, but it is important to talk about it further.

When shopping for a monitor, you will often see that among the characteristics used to sell it is the pitch. The values you see could be something like .39 or .28. This is the spacing between the holes in the shadow mask, measured in millimeters. Therefore, a pitch of .28 is just over one-quarter of a millimeter. The lower the pitch, the closer together the holes and the sharper the image. Even if you aren't using any graphics-oriented programs, it's worth the few extra dollars to get a lower pitch and the resulting sharper image.


Modems

Up to this point, we've talked about things that almost every computer user has and that almost always come with the system. We are now moving into a new area: getting one computer to talk to another. In my opinion, this is where computers begin to show their true power.

Perhaps the earliest means of getting computers to talk to one another (at least over long distances) was the modem. Modem stands for Modulator/Demodulator, which describes its basis of operation. It takes the digital signals that come across the bus from the CPU and converts them into a signal that can be carried by the telephone wires. This process is called modulation. On the other end, the signals are converted back from telephone signals into computer signals by a process called demodulation.

Underlying the transmission of data across the telephone line is the concept of a carrier. A carrier is a signal running along the phone line that is at a constant strength (amplitude), frequency and phase. Because all of these are known values, changes in them can be detected. It is the changes that are used to encode the data.

Figure: Transfer of data across modems

When sending data at relatively low speeds, the exact timing of the data being sent is not important. Markers are used within the transmitted data to indicate the beginning and the end of each piece. These are the start and stop bits. (Note: you could have two stop bits.) If each modem is set to the same values, it knows when one piece of data stops and the next one begins. This is called asynchronous transfer.

How big is that piece of data? Usually just a single character. All the modems that I have ever seen use either seven or eight bits for data. That means that there are seven or eight bits between the start-stop bit pairs. This, too, is something that both modems need to agree upon.

Another setting both modems must agree on is the parity bit, an extra bit used to detect transmission errors. Parity works like this: let's assume that a specific byte has three bits set. If you are using even parity, then the parity bit would be set to make the total number of set bits four, which is an even number. (I hope you knew that.) If you are using odd parity, the number of set bits is already odd (3) and the parity bit would not be set.

When determining the settings your modem needs, the order is usually the number of data bits, followed by the number of stop bits, then the parity. By default, SCO UNIX uses eight data bits, one stop bit and no parity. It is common to refer to this as "eight, one and none" or 8-1-N. Another might be 7-2-E for seven data bits, two stop bits and even parity.
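The parity calculation described above can be sketched in a few lines (the function name is mine, not part of any modem interface):

```python
# Compute the parity bit for a byte. Even parity makes the total count
# of set bits even; odd parity makes it odd.
def parity_bit(byte, mode):
    ones = bin(byte).count("1")
    if mode == "even":
        return ones % 2        # set the bit only if the count is currently odd
    else:                      # odd parity
        return 1 - ones % 2

assert parity_bit(0b00010101, "even") == 1  # three bits set -> make it four
assert parity_bit(0b00010101, "odd") == 0   # already odd, leave it clear
```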

Another important characteristic is the speed at which the modem transmits the data. Although the exact timing is not critical, signals have to be received within a certain time or there are problems. (You could be waiting for months if the connection suddenly dropped.)

Now, let's go back to the modulated carrier wave. The term for the number of changes in the carrier wave per second is baud, named after the French telegraph expert J.M.E. Baudot. One way of encoding data is based on the changes to the carrier wave. This is called frequency shift keying, or FSK. The number of changes that can take place per second is the number of bits of information that can be sent per second (one change = one bit).

Let's take a modem connection operating at 2400 baud, 8 data bits, one stop bit and no parity. This gives us a total of 10 bits used for each character sent. (Did you forget the start bit?) At one bit per baud, 2400 baud means 2400 bits per second, so 240 characters can be sent per second.

Other encoding methods result in getting more bits per baud. For example, the Bell 212A standard operates at 300 baud. However, since it gets four bits of data per baud, it gets 1200 bits per second for those 300 baud. This rate is accomplished by changing more than just the frequency. If we changed both frequency and amplitude, we have four distinct values that we could use.

Have you ever had someone tell you that you have a 9600 baud modem? Don't believe them! There is no such thing. In fact, the fastest baud rate is only 2400. So what are people talking about when they say their modem goes 9600 or 14400? They are talking about bits-per-second (bps). If you get one bit per baud, then these terms are synonymous. However, all 9600 modems get more than that. They operate at 2400 baud, but use a modulation technique that yields 4 bits per baud. Thus a 2400 baud modem gives 9600 bits per second.
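The arithmetic behind baud, bps and characters-per-second is worth a quick sketch (function names are mine):

```python
# bps = baud rate * bits encoded per baud.
def bits_per_second(baud, bits_per_baud):
    return baud * bits_per_baud

# Characters per second divides out the framing overhead:
# one start bit, the data bits, and the stop bits.
def chars_per_second(bps, data_bits=8, stop_bits=1):
    frame_bits = 1 + data_bits + stop_bits
    return bps // frame_bits

assert bits_per_second(2400, 1) == 2400   # one bit per baud
assert bits_per_second(2400, 4) == 9600   # a "9600" modem really runs at 2400 baud
assert bits_per_second(300, 4) == 1200    # the Bell 212A example
assert chars_per_second(2400) == 240      # 10-bit frames at 8-1-N
```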

Modem Standards

As with all the other kinds of hardware we've talked about, modems need to have standards in order to be useful. Granted you could have a modem that can only communicate with another from the same manufacturer, but even that is a kind of standard.

Modem standards are like opinions: everyone has one. There are the AT&T standards, the Bell standards, the International Telecommunications Union (ITU) standards (formerly the Comité Consultatif International Télégraphique et Téléphonique - CCITT) and the Microcom Networking Protocol (MNP) standards.

As of this writing, the two most common standards are the CCITT and MNP. The MNP standards actually work in conjunction with modems that adhere to the other standards and for the most part define technologies rather than speeds or other characteristics.

The CCITT/ITU standards define (among other things) modulation methods that allow speeds up to 9600 bps for the V.32 standard and 14,400 bps for the V.32bis standard. The newer V.34 standard supports 28,800 bps. One of the newer standards, V.42, is accepted world-wide and provides error-correction enhancements to V.32 and V.32bis. The V.42 standard also incorporates the MNP 4 standard, allowing a modem that supports V.42 to communicate with another that supports MNP 4. (For many more details on the different standards, look at The Winn L. Rosch Hardware Bible, Third Edition, and the Modem Reference, Second Edition, by Michael Banks. Both are published by Brady Books.)

One standard we need to go into is the Hayes command set. This was developed by, and named for, the modem manufacturer Hayes and is used by almost every modem manufacturer. It consists of dozens of commands that are used to modify the functionality of your modem as well as read its characteristics. Most of the commands in this set begin with AT (which is short for "attention"), so this is often referred to as the AT command set. Note that the AT and almost every other letter is capitalized.

Several AT commands can be combined in a single string, and this is often used to initialize the modem prior to use. This can set the default speed, whether the modem should automatically answer when someone calls in, and even how many rings to wait for. We'll talk about these in more detail later when we talk about configuring modems.
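As a sketch of how such strings are put together, here are a few common Hayes commands chained after a single AT prefix (exact behavior varies from modem to modem; check your manual): Z resets the modem to its stored profile, E0 turns off command echo, and S0=2 tells it to auto-answer after two rings.

```python
# Build a Hayes init string: one "AT" prefix, commands concatenated after it.
def init_string(*commands):
    return "AT" + "".join(commands)

assert init_string("Z") == "ATZ"                 # simple reset
assert init_string("E0", "S0=2") == "ATE0S0=2"   # echo off, answer on 2nd ring
```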

Modems come in two forms: internal and external. Because a modem is a serial device (it communicates serially as opposed to in parallel), it will always take up a serial port. With an external modem, you have to physically make the connection to the serial port, so you are more conscious of the fact that the modem is taking up a port. With internal modems, you are still taking up a serial port; however, this fact is less obvious since you don't actually see the modem. Some users miss the fact that they no longer have a COM1 (or COM2).

External modems are usually connected to the computer via a 25-pin RS-232 connector. Some computers have only a 9-pin serial port, so you need to get an adapter to convert from 25 pins to 9, since every modem I have ever seen has a 25-pin connector.

So, what happens when I want to dial into another site or send an email message to my sister in Washington? Well, the communications software (maybe cu or uucp) sends a signal (an increase in voltage) along pin 20 (Data Terminal Ready - DTR) to tell the modem that it is ready to transmit data. On the modem, the equivalent pin is #6 (Data Set Ready - DSR).

The modem is told to go "off hook" via the Transmit Data line (TX, line 2). Shortly thereafter, the system sends the AT commands to have the modem start dialing, either with pulses (ATDP) or with tones (ATDT). Commands are acknowledged by the modem via line 3 (Receive Data - RX).

The modem dials just like a phone and tries to connect to some device on the other end. This is probably another modem and, if auto-answer is enabled, the modem being called should answer, or pick up, the call. When the connection is made, the calling modem sends a high-pitched signal to tell the receiving modem that a modem is calling. The receiving modem sends a higher-pitched acknowledgment. (You can hear this if your modem has a speaker.)

The carrier signal is then established between the two modems and is kept at a steady, pre-determined frequency. It is this signal that is then modulated to actually transmit the data. When the modem has begun receiving this carrier signal, it sends another signal back to the system via line 8 (Carrier Detect - CD). This is held active for the duration of the call.

The two modems must first decide how they will transmit data. This negotiation is called a handshake. The information exchanged includes many of the things that are defined in the different standards we talked about earlier.

When the system is ready to send data, it first raises line 4 (Request to Send - RTS). If ready, the modem says that it's okay by raising line 5 (Clear to Send - CTS). Data is then sent out on line 2 and received on line 3. If the modem cannot keep up, it can drop the CTS line to tell the system to stop for a moment.

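The dial-out sequence just described can be summarized as an ordered table of signals on the 25-pin connector (a sketch of the sequence as described above, not a complete RS-232 pin-out):

```python
# The dial-out handshake, step by step: (pin, signal, direction).
DIAL_SEQUENCE = [
    (20, "DTR", "computer -> modem"),  # terminal ready to transmit
    (6,  "DSR", "modem -> computer"),  # modem ready
    (2,  "TX",  "computer -> modem"),  # off hook, then ATDP/ATDT dial commands
    (3,  "RX",  "modem -> computer"),  # commands acknowledged
    (8,  "CD",  "modem -> computer"),  # carrier detected; held for the call
    (4,  "RTS", "computer -> modem"),  # request to send data
    (5,  "CTS", "modem -> computer"),  # clear to send; dropped to pause output
]

# The signal names in order, for reference.
assert [step[1] for step in DIAL_SEQUENCE] == [
    "DTR", "DSR", "TX", "RX", "CD", "RTS", "CTS"]
```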

Printers

Although more and more companies are trying to transform into a "paperless office," you will undoubtedly see a printer somewhere. Even if the office is paperless internally, it will have to use paper of some kind to communicate with the rest of the world.

Printers come in many different shapes, sizes, formats, means of connection to the system, ways of printing characters, speeds, and so on, ad nauseam. The two most common ways of connecting printers are by serial or parallel ports. SCO UNIX also supports Hewlett-Packard LaserJet printers equipped with JetDirect cards. These are cards that allow the printer to be attached directly to a network, thereby increasing its speed. We'll talk more about these later. In addition, although not supported by SCO as of this writing, SCSI printers have shown themselves on the market.

In previous sections, we talked about serial and parallel connections, so I don't need to go into details about them. I do talk about these connections in more details in the second part of the book when we talk about installing and configuring printers.

There are two kinds of printers that, although once very common, are now making way for their more advanced brethren. These are daisy-wheel and chain printers. What distinguished these printers is that they had pre-formed characters.

In the case of a daisy-wheel printer, printing was accomplished by means of a wheel, where the characters were at the ends of thin "leaves," which gave the wheel its daisy shape. The wheel was rotated very fast, and as the appropriate letter came into position it was struck with a "hammer," which forced the leaf with the character on it into the ink ribbon, which then struck the paper. This mechanism works on the same principle as a normal typewriter. In fact, there are typewriters that use the same daisy-wheel principle.

Chain printers also have pre-formed letters. However, instead of a wheel, the letters are on a long strip, called a chain. Instead of rotating, the chain moves back and forth to bring the appropriate letter into position.

Although these printers are fairly quick, they are limited in what they can print. You could get pretty tricky in what characters you use and come up with some rather cute pictures. However, they don't have the ability to do anything very detailed.

The next step was impact dot-matrix printers. These too have hammers, but rather than striking pre-formed letters, it is the hammers themselves that strike the ink ribbon. Instead of a single hammer, there is a column of usually 9 or 24 hammers, or pins. Such printers are called 9-pin or 24-pin.

As the printer prints, the head is moved across the page, printing out columns of dots. Depending on what character is to be printed, some of the pins do not strike the ink ribbon. For example, when printing a dash, only the middle pin(s) will strike the ribbon. When printing a more complex character like an ampersand (&), the pins strike at different times as the print head moves across the page.

As with monitors, the more dots you have, the sharper the image. Therefore, a printer with 24 pins can produce a sharper image than one with only 9. In most cases, it is obvious the moment you see something that it was printed with a 9-pin printer. Some 24-pin printers require a little closer look before you can tell.

Next, imagine getting rid of the ink ribbon and replacing the pins with little sprayers connected to a supply of ink. Instead of striking something, these sprayers squirt a little dot of ink onto the paper. The result is the same as with an impact dot-matrix printer. This is what an ink jet printer does.

There are two advantages that ink jets have over impact dot-matrix printers. First is the noise. Since there are no pins striking the ink ribbon, the printer is a lot quieter. Second, by extending the technology a little, you can increase the number of jets in each row. Instead of just squirting out black, you could squirt out color, which is how many color printers work.

The drawback is the nature of the print process itself. Little sprayers squirting ink all over the place can be messy. Without regular maintenance, ink jets can clog up.

Using a principle very similar to video systems, laser printers can obtain very high resolution. A laser inside the printer (hence the name) scans across a rotating drum that has been given a static electric charge. When the laser hits a spot on the drum, that area loses its charge. Toner is then spread across the drum and sticks to those areas that retain their charge. Next, the drum rolls across the paper, smashing the toner into the paper. Finally, the toner is fused into the paper by means of a heating element.

Although the output may appear as a solid image, laser printers still work with dots. The dots are substantially smaller than those of a 24-pin dot-matrix printer, but they are still dots. As with video systems, the more dots, the sharper the image. Because a laser is used to change the characteristics of the drum, the areas affected are very small. Therefore, with laser printers you can get resolutions of 300 dots-per-inch even on the least expensive printers. Newer ones are approaching 1200 dpi, which is comparable to photographs.

Some laser printers, like HP's LaserJet III, use a technology called resolution enhancement. Although there are still a limited number of dots-per-inch, the size of each dot can be altered, thereby changing the apparent resolution.

Keep in mind that printers have the same problem with resolution as do video systems. The more dots that are desired, the more memory is needed to process them. An 8 1/2"x11" page with a resolution of 300 dpi takes almost a megabyte of memory.
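The page-size figure follows from the same kind of arithmetic as the video memory calculation: dots across, times dots down, divided by eight bits per byte. A sketch (function name is mine):

```python
# Bytes needed for a one-bit-deep bitmap of a full page at a given dpi.
def page_bitmap_bytes(width_inches, height_inches, dpi):
    dots_across = int(width_inches * dpi)   # 8.5" * 300 = 2550 dots
    dots_down = int(height_inches * dpi)    # 11"  * 300 = 3300 dots
    return dots_across * dots_down // 8

# Just over 2^20 (1,048,576) bytes -- the "almost a megabyte" in the text.
assert page_bitmap_bytes(8.5, 11, 300) == 1051875
```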

With printers such as daisy-wheel and chain printers, you really don't have this issue. Even a buffer as small as 8K is more than sufficient to hold a whole page of text including control characters that can change the way the other characters appear. While such control characters may cause the text to be printed bold or underlined, they are relatively simple in nature. For example, underlining normally consists of printing the character, backing up one space, then printing an underline.

Multiple character sets or fonts are something that this kind of printer just can't handle. Different character sets (e.g. German) or changes to their form (e.g. italic) can easily be accomplished when the letter is created "on-the-fly" with dot-matrix printers. All that is needed is to change the way the dots are positioned. This is usually accomplished by using escape sequences. First, an escape character (ASCII 27) is sent to the printer to tell it that the next character (or characters) is a command to change its behavior.

Different printers react differently to different escape sequences. Although there is a wide range of sets of escape sequences, the two most common are those for IBM Proprinters and Epson printers. Most dot-matrix printers can be configured to behave like one of these. Some, like my Panasonic KX-P1123, can be configured to behave like either one.
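As an illustration, here are a few Epson-style escape sequences (a sketch; the exact codes your printer honors depend on its emulation mode, so check its manual): ESC E and ESC F turn emphasized (bold) printing on and off, and ESC - followed by a 1 or 0 controls underlining.

```python
ESC = chr(27)  # the escape character, ASCII 27

BOLD_ON = ESC + "E"            # emphasized printing on
BOLD_OFF = ESC + "F"           # emphasized printing off
UNDERLINE_ON = ESC + "-" + chr(1)   # underline on (chr(0) turns it off)

# Wrap a piece of text in bold-on/bold-off sequences.
def emphasize(text):
    return BOLD_ON + text + BOLD_OFF

assert emphasize("total") == "\x1bEtotal\x1bF"
```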

The shortcoming with this is that you are limited to a small range of character types and sizes. Some printers, like mine, can get around this limitation because they can print in graphics mode as well. By viewing the page as one complete image composed of thousands of dots, they can print any font, at any size, with any attribute (assuming the software can handle this). This is how printers like mine can print charts, tables and, to some extent, pictures.

Viewing the page as a complete image works when you have graphics or diagrams; however, it's a waste of memory when dealing with straight text. Therefore, most laser printers operate in character-mapped mode. The characters are stored in memory and the dots are generated as the page goes through the printer.

Printers are controlled by other means than just escape sequences or treating the page as a single image. One of the most widely used is Adobe Systems' Postscript page description language. It is as much a language as the programming languages C or Pascal, with its own syntax and vocabulary. In order to utilize it, both the software and the printer have to support it. However, the advantage is that many applications allow you to print Postscript to a file. That file can then be transferred to a remote site with a Postscript printer. The file is then sent to the printer (as raw data) and the output is the same as if it were printed directly from the application. The nice thing is that the remote side does not even have to have the same application, so long as its printer is Postscript capable.

Hewlett-Packard has its own language: Printer Control Language, or PCL. PCL is a lot simpler than Postscript and was easily incorporated into SCO's printer interface scripts, which are used by the operating system to control printers. (More on those in the section on printers.)


Mice

Although a mouse can be used with default SCO UNIX programs such as vi and sysadmsh, it really begins to show its stuff when you have a graphical user interface (GUI) such as X-Windows. Although some GUIs (like Microsoft Windows) allow you to run without a mouse, SCO's implementation of X-Windows does not. If you ever want to install X-Windows, or even if you just want to use a mouse with sysadmsh, then you will need to know about them.

The basic principle is that by moving the mouse, the cursor (pointer) on the screen is moved in the same manner. Actions can be carried out by clicking one of up to three buttons on the mouse.

As the mouse is moved across a surface, a ball underneath rolls along with it. This ball turns small wheels (usually three of them) inside of the mouse. The amount each of these wheels turns can be measured, and it is this movement that is translated into movement of the cursor.

Because the ball underneath needs to roll in order to make the mouse work, it has to remain on a flat surface. The surface must also have a certain amount of friction for the ball to roll. Although you can get a certain amount of movement by shaking the mouse, picking it up and expecting the cursor to move is a waste of your time. (Despite what I have seen some users do)

Originally, mice were connected to the computer by a thin cable. As technology progressed, the cable was done away with and replaced with a light-emitting diode (LED) on the mouse and a photodetector near the computer. This had the advantage that the cable couldn't get buried under a pile of papers and thereby limit the mouse's movement. The disadvantage is that the LED must remain within line-of-sight of the photodetector in order to function. Some manufacturers have overcome this by using a form of transmission that is not dependent on line-of-sight: radio.

Another major problem with all of these kinds of mice is desk space. My desk is not neat. Space is at a premium. Even the small space needed for a mouse pad is a luxury that I rarely have. Fortunately, companies such as Logitech have heard my cries and come to the rescue. The solution is, as an old UNIX guru called it, a dead mouse.

This is a mouse lying on its back, with its feet (or at least the ball) sticking up in the air. Rather than moving the mouse to move the ball to move the wheels to move the cursor, you simply move the ball. The ball is somewhat larger than the one inside of a mouse, which makes it a lot easier to move. Such a mouse is called a trackball and is very common with laptop computers. Provided the signals sent to the operating system are the same, a trackball behaves functionally the same as a mouse.

The mouse's interface to the operating system can take one of three forms. The mouse is referred to, based on this interface, as either a serial mouse, a bus mouse or a keyboard mouse.

As its name implies, a serial mouse is attached to your computer through a serial port. Bus mice have their own interface card that plugs into the bus. Keyboard mice, despite their name, usually do not plug into the keyboard. Although I have seen some built into the keyboard, these were actually serial mice. Instead, a keyboard mouse is plugged into its own connector, usually next to the keyboard connector, which is then attached directly to the motherboard. These are usually found on IBM PS/2 and some Compaq computers; however, more and more computer manufacturers are providing a connector for a keyboard mouse.

When talking about the movement of the mouse, you often hear the term resolution. For a mouse, resolution is referred to in terms of clicks per inch, or CPI. A click is simply the signal sent to the system to tell it that the mouse has moved. The higher the CPI, the higher the resolution. Both mice and trackballs have a resolution, since both rely on the movement of a ball to translate into movement of the cursor.

Keep in mind that, despite the way it appears at first, a mouse with a higher resolution is not more precise. In fact, almost the opposite is true. Higher resolution means that the cursor moves further for each given movement of the ball. The result is that the movement is faster, not more precise. Since precision is really determined by your own hand movements, experience has shown me that you get better precision with a mouse that has a lower resolution.

Uninterruptable Power Supplies

New to OpenServer is direct support for uninterruptable power supplies (UPS). In previous releases, UPSs were supported by third-party products such as PowerChute Plus from American Power Conversion (APC), which would interface directly with APC's various UPSs. Despite the fact that a UPS is not intended to replace your primary power supply, it does provide an interim power source, which allows you to shut down gracefully.

The first thing I want to address is the concept of uninterruptable power. If we take that term literally, then a power supply that goes out at all has been interrupted. In that case, many UPSs are not correctly named, since there is a brief moment (ca. 30 milliseconds) between the time they notice power has gone out and the time the battery kicks in. This time is too short for the computer to notice, but it is there. (Normally, power must be out for at least 300 milliseconds.) As a result, most UPSs should really be referred to as stand-by power supplies (SPS), since they switch to the battery when the primary supply shuts off. Since Underwriter's Laboratories uses UPS to describe both, that's what I will do here.

The basic UPS provides limited power conditioning (keeping the voltage within a specific range) but no protection against surges and spikes. This is useful if the power goes out, but doesn't protect you if the voltage suddenly jumps (such as the result of a lightning strike). A double-conversion model not only provides power when the main power fails, but also protects against surges. This is done by first passing the power through the batteries. Although this does provide the protection, it is less efficient, since power is constantly being drawn from the battery.

Next: Installing and Upgrading


Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.

Be sure to visit Jim's great Linux Tutorial web site at https://www.linux-tutorial.info/