APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

Adding more space

© December 1998 Tony Lawrence

This article is fairly specific to SCO Unix, but the general concepts of moving data and creating a symbolic link do apply to any Unix and Linux.

Sooner or later it happens: you type "df -v" and see that the "%used" is pushing 99%. Or, worse, you get awful messages on the console: No space on device 1/40.
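Rather than waiting for that console message, you can watch for trouble with a small script. This is a sketch of my own, not part of the article: it parses POSIX "df -P" output, and SCO's "df -v" columns differ, so adjust the awk fields for your system.

```shell
#!/bin/sh
# warn_full.sh - print a warning for each filesystem above a usage threshold.
# Assumes POSIX "df -P" column layout (fifth column is Capacity/Use%,
# sixth is the mount point); SCO's "df -v" layout differs.
check_usage() {
    threshold="${1:-90}"
    awk -v t="$threshold" 'NR > 1 {
        pct = $5                        # fifth column, e.g. "99%"
        gsub(/%/, "", pct)              # strip the percent sign
        if (pct + 0 >= t) printf "WARNING: %s at %s%% used\n", $6, pct
    }'
}

# Typical use, perhaps from a nightly cron job:
#   df -P | check_usage 90
```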

(If you are out of space right now, follow this link for suggestions on what to do immediately).

So, it's time to add more disk space. There are two basic ways to approach this problem. You could replace the current disk with something larger, or simply add another disk. Both methods have their own positive and negative aspects. Let's consider some of them:

Replace the current disk

This can be the simplest solution of all, particularly if you are using emergency boot and recovery disks such as can be made by Microlite Edge or other similar products. Do a backup, pop out the old disk, in with the new, boot with the Recovery disks, and very quickly you have a running system.

Another way to do that is detailed at Transfer data and make bootable secondary (SCO Unix).

But this assumes that you either have some other use for your current disk, or that it's old and worthless enough that you are willing to toss it out. That may be the case, but you may face a further problem if the replacement disk uses a different architecture: you may have an old IDE disk, for example, and want to replace it with a new SCSI RAID system. In those situations, the Emergency Recovery disks may not be enough, and you may have to start with a fresh install of the OS and proceed from there.

Add another disk

Adding storage can, of course, involve similar trade-offs. You could add a SCSI drive to an existing IDE drive, for example, but generally not vice-versa. You also need to decide what information you are going to store on that new space, and how you are going to get it there. Some programs have the ability to make use of new space fairly simply, but more often than not, you are going to have to do some data rearranging.

See Adding a disk drive for details on how to add the disk.

Dividing the disk

Whichever way you decide to go, you need to start thinking about partitions before you do anything. Partitions, as you may know, are the areas that your hard drive is divided into. It can be a little confusing, because there are really two kinds of divisions: "fdisk" partitions and "divvy" partitions (which SCO calls "divisions", but some other Unixes call "partitions", which is why it gets confusing).

If you have an older version of SCO, you may have to be concerned about fdisk partitions for very large drives. More modern versions are able to support just about anything you could conceivably purchase (assuming proper drivers, of course). If you have SCO 3.2v4.2, you will probably need or want uod429a for EIDE or large SCSI drives. This supplement has some confusing aspects to it, so read the README file carefully and completely.

I'll assume for the moment that you have no issues that cannot be resolved by proper drivers or supplements, and that you will be creating one Unix fdisk partition. It's now time to think about divvy divisions.

These divisions divide your Unix partition into filesystems or raw disk partitions. Raw partitions are strictly for the use of programs like Informix that are written to use these partitions, generally to obtain higher performance. Without such programs, your divisions will be used for ordinary Unix filesystems. Older systems usually had just a root division or possibly a root and a /u (The "root" division is the one that shows up as "/" mounted on "/dev/root"). Release 5 systems usually have a root and a small /stand. You need to decide what partitions you will create, and how large they will be.

On older releases, the OS constrains your choice to some degree because 3.2v4.2 and below simply cannot handle a filesystem larger than 2 gigabytes. If you had a raw partition, that could be larger, but any filesystem prior to 3.2v5.x has to be 2 gigabytes or less.

Why divide?

Assuming Release 5, or a 4.2 system with a 2 gig drive, why not dispense with extra divisions and simply have one large file system?

You could do that. With release 5, you'll probably still want a small (15-20MB) /stand filesystem, because Unix can't boot from an HTFS filesystem, and that's what you would want the root to be. The /stand would be an (older) EAFS filesystem, which Unix can boot from, and the root would be the higher performance and more reliable HTFS type.

Even on 3.2v4.2, you could have one large root filesystem (up to 2 gigabytes). It's very convenient to do this; you don't have to worry about how much space to allocate to what, you just let the whole disk be one big filesystem.

For several reasons, you probably don't want to do that. On the older releases (and even on the newer, under some conditions), big file systems can take a long time to clean and repair in the event of an unexpected crash. It is also true that damage to a hard drive usually confines itself to one file system. In other words, if a defective hard drive trashes your root filesystem, a separate /u file system may escape damage entirely. This can make recovery simpler and more complete, because the data on /u may be current right up to the crash.

Deliberately limiting space

You may also want to create file system divisions to specifically limit how much space can be used. At first, this may seem silly, but consider the often seen case where a naive user keeps submitting the same large print job over and over, not understanding that it is not printing for some other reason (printer offline?) and that each submission is using up more disk space. Consider also a runaway program that keeps creating a larger and larger file. Wouldn't it be nice to let a separate file system run out of space before the whole drive is used up? In the case of the printing, you could create a small division and mount it at /var/spool/lp/temp.

Note that you have to be a little bit careful about creating partitions that will be used for system directories, like /usr, and on Release 5, the mess of symbolic links means you have to tread even more carefully. With enough care and forethought, you can do whatever you need to do. You have to watch out for things that need to be present when the system is booting; if you are planning on moving anything like that elsewhere, you will have to leave some of it behind, ultimately to be hidden beneath your new file system. This can get tricky, so be careful or hire competent help.

Deciding what to move

You may have one large area that will continue to grow. For example, /usr/mas90, /appl/filepro, and /usr/rwc65 are common SCO programs that have defined directory structures. If you have or can identify such an area, then this is the data you will move.

You may, however, have two or more areas to move. You can solve this by creating multiple file systems in the new space, or by using one file system and symbolic links to point to the new data areas.

There are things you cannot move. You cannot, for example, move /usr or any part of it that needs to be present as the machine boots or in single user mode. DO NOT MOVE ANYTHING THAT ISN'T YOURS and you'll be safe.

Moving Data

If you are installing a replacement drive, you are not concerned with moving data. If, for example, your old system had a large /data filesystem, you could simply specify a larger division when divvy'ing the new disk. The restore will simply put the old data in the new, larger, division.

But when adding a new hard drive, you may want to move the contents of a directory to the new space. The following procedure is useful:

Create the new space with "mkdev hd" as required. Relink, and reboot. Rename the directory you are going to move, using the "mv" command. For example:

mv /usr/data /usr/data.safe

Note the ownership and permissions of /usr/data.safe, and create a new /usr/data directory with the same permissions and ownership. Example:

# cd /usr
# mv data data.safe
# ls -ld data.safe
drwxr-xr-x   2 root     sys          512 Nov 16 18:47 data.safe
# mkdir data
# chown root data
# chgrp sys data
# chmod 755 data
# ls -ld data
drwxr-xr-x   2 root     sys          512 Nov 16 18:47 data

Run "mkdev fs", giving the name of the directory you want to move as the mount point (/usr/data). Then type "mountall" to mount the new space at /usr/data (or simply type "mount /usr/data").
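One reason "mountall" and "mount /usr/data" work without further arguments is that "mkdev fs" records the new filesystem in /etc/default/filesys. The stanza below is only a rough illustration of what such an entry typically looks like; the device names and exact fields are assumptions that vary by release, so don't copy it verbatim.

```
bdev=/dev/u cdev=/dev/ru mountdir=/usr/data \
        desc="New data filesystem" \
        rcmount=yes fsckflags= rcfsck=no
```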

Type the following to copy the data:

# cd /usr/data.safe
# find . -depth -print | cpio -pdlmv ../data

After this is done, and you are completely satisfied that everything is working as it should be, you can "rm -r /usr/data.safe"
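Before that final "rm -r", a belt-and-suspenders check (my suggestion, not part of the original procedure) is to compare file counts between the old and new trees:

```shell
#!/bin/sh
# count_files - count regular files under a directory tree.
# A sanity-check sketch; the paths in the usage comment are the
# article's examples, so substitute your own.
count_files() {
    find "$1" -type f -print | wc -l | tr -d ' '
}

# Typical use after the cpio copy:
#   [ "$(count_files /usr/data.safe)" = "$(count_files /usr/data)" ] \
#       && echo "counts match - safer to remove /usr/data.safe"
```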

If you are using symbolic links, the procedure is similar. First, create a new directory on the other drive:

mkdir /bigdrive/newdata

Assign appropriate permissions and ownership to newdata.

Copy the data as before, then move the old name aside:

cd /usr/data
find . -depth -print | cpio -pdlmv /bigdrive/newdata
mv /usr/data /usr/data.sf

Now create the symbolic link:

ln -s /bigdrive/newdata /usr/data
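Once the link is in place, programs that open /usr/data follow it transparently. A quick confirmation (my own check, using the article's example path) is the POSIX "-h" test:

```shell
#!/bin/sh
# is_symlink - true if the argument is a symbolic link.
is_symlink() {
    [ -h "$1" ]            # POSIX test operator for a symbolic link
}

# Typical use:
#   is_symlink /usr/data && echo "/usr/data is now a symbolic link"
```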

Again, when you are satisfied that everything is working, come back and "rm -r /usr/data.sf".

See also:

Got something to add? Send me email.


-> Adding more diskspace - and then what?




Mon Dec 17 21:20:31 2012: 11560   ToddPorter


I have many SCO OS 5.0.5-5.0.7 boxes that I administer but must admit that I'm not quite an expert yet. I have a 5.0.7 box that has a built-in SATA controller which is enabled, but the hd and tape drive are on an Adaptec SCSI card. I need more space to work with on the system and tried 2 things:
1. I added a SCSI hd to the system, went through the mkdev hd steps, relinked and rebooted, but the second run of mkdev hd (fdisk/divvy) says:
"Disk already configured as disk number 1 (/dev/dsk/1s0)" and then
"/etc/fdisk: cannot open /dev/rdsk/1s0 for reading: No such device or address (error 6)
/etc/fdisk failed."
2. I tried adding a SATA drive; the system will boot to the kernel but then is not able to mount root to continue the boot process

I DO have the wd driver update for SATA ports (cd-rom is working fine)
current hd is ID 0
current tape drive is ID 2
I have tried adding the HD with both ID 4 and 8 (changing the jumper on the hd)
I have had a few secondary SCSI drives on the system before, for customer data recovery, and they have worked fine after running mkdev hd and selecting the SCSI bus/ID/LUN. Those secondary drives have since been removed and I am trying to re-add another "second" drive. Do I have to remove those old 2nd drives somewhere, because SCO is seeing the old 2nd drive parameters? I have this in /etc/conf/cf.d/mscsi:

*ha attach number ID lun bus
ad320 Sdsk 0 0 0 0
wd Srom 0 0 0 0
ad320 Sdsk 0 4 0 0
ad320 Sdsk 0 8 0 0
ad320 Stp 0 2 0 0

Mon Dec 17 21:27:24 2012: 11561   TonyLawrence


It has been so long since I have done anything with SCO at that level that I simply do not remember enough to trust myself to offer any help.

SCO is dead. Get off it.


