Jim Mohr's SCO Companion


Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.

Be sure to visit Jim's great Linux Tutorial web site at http://www.linux-tutorial.info/

Shells and Basic Utilities

Interactively Editing Files with vi


SCO UNIX provides a handful of programs used to edit files. Depending on the circumstances, each one can be useful. The two I use most frequently are the full-screen editor vi and the stream editor sed.

The choice of an editor like vi is almost religious. Often the reason people choose one over another is purely a matter of personal taste. Each offers its own advantages and functionality. Some versions of UNIX provide other editors, such as emacs. However, the nice thing about vi is that every dialect of UNIX has it. You can sit down at any UNIX system and edit a file. For this reason more than any other, I think it is worth learning.

One of the problems vi has is that it is very intimidating. I know; I used to not like it either. I frequently get into discussions with people who have spent less than 10 minutes using it and rant about how terrible it is. Often I see them spending hours trying to find a free or relatively cheap add-on so they don't have to learn vi. The problem with that approach is that if you spent as much time learning vi as you do trying to find an alternative, you could actually become quite proficient with vi.

There is more to vi than just its availability on different UNIX systems. To me, vi is magic. Once you get over the initial intimidation, you will see that there is a logical order to the way the commands are laid out and fit together. Things fit together in a pattern that is easy to remember. So, before we get into it, let me tempt you a little.

Among the "magical" things vi can do, you can:

  • Automatically correct words that you misspell often

  • Create your own vi commands

  • Insert the output of UNIX commands into the file you are editing

  • Automatically indent each line

  • Shift sets of lines left or right

  • Check for pairs of {}, () and [] (Great for programmers)

  • Automatically wrap around at the end of a line

  • Cut and paste between documents

I am not going to mention every single command. Instead, I am going to show you a few and how they fit together. At the end of this section, there is a table containing the various commands you can use inside of vi. You can then apply the relationships to the commands I didn't mention.

In order to see what is happening when you enter commands, why don't you find a file that you can poke around in. Make a copy of the termcap file (/etc/termcap) in a scratch directory and then edit it (cd /tmp; cp /etc/termcap . ; vi termcap).

Before we can really jump into the more advanced features of vi, I need to cover some of the basics. Not command basics, but rather some behavior basics. In vi, there are two modes: command mode and input mode. While you are in command mode, every keystroke is considered part of a command. This is where you normally start when you first invoke vi. The reverse is also true: while in input mode, everything is considered input.

Well, that isn't entirely true, and we'll talk about it in a minute. However, just remember that there are the two modes. If you are in command mode, you go into input mode using a command that gets you there, such as append or insert (we'll talk about these in a moment). If you want to go from input mode to command mode, you press ESC.

When vi starts, it goes into full-screen mode (assuming your terminal is set up correctly) and essentially clears the screen. If we started the command as vi search, at the bottom of the screen you see "search" [New file]. Your cursor is at the top left-hand corner of the screen, and there is a column of tildes (~) down the left side to indicate that these lines are non-existent.

Basic Editing and Movement Commands

Most editing and movement commands are single letters and are almost always the first letter of what they do. For example, to insert text at your current cursor position you press 'i'. To append text you press 'a'. To move forward to the beginning of the next word, you press 'w'. To move back to the beginning of the previous word you press 'b'.

The capital letter of each has a similar behavior. An 'I' will start you inserting at the beginning of the line. An 'A' starts the append from the end of the line. To move by "real" words, use a 'W' to move forward and a 'B' to move back.

Real words are those terminated by whitespace (space, tab, newline). Assume we wanted to move across the phrase 'static-free bag'. If we start on the 's', pressing 'w' will move us to the '-'. Pressing 'w' again, we move to the 'f' and then to the 'b'. If we are on the 's' and press 'W', we jump immediately to the 'b'. That is, to the next "real" word.

Moving in vi is also accomplished in other ways. Depending on your terminal type, you can use the traditional arrow keys. If vi doesn't like your terminal type, you can use the keys h-j-k-l. If we want to move to the left, we press 'h'. This makes sense, since it is on the left end of these four characters. If we want to move right, we press 'l'. Again, this makes sense, as the 'l' is on the right end of the row.

Movement up and down is not as intuitive. One of the two remaining characters (j and k) will move us up and the other will move us down. But which one? Unfortunately, I don't have a very sophisticated way of remembering. If you look at the two letters physically, maybe it helps. If you imagine a line running through the middle of these characters, you see that the 'j' hangs down below that line; therefore, use the 'j' to move down. On the other hand, the 'k' sticks up above the middle, so you use the 'k' to move up. However, in most cases the arrow keys work, so you don't need to remember them. Still, it is nice to know them, as you can then leave your fingers on the keyboard.

As I mentioned, some keyboard types will allow you to use the arrow keys. However, you might be surprised by their behavior in input mode. This is especially true if you are used to a word processor where the arrow and other movement keys behave the same all the time. The problem lies in the fact that most keyboards actually send more than one character to indicate something like a left-arrow or page-up key. The first of these is normally an escape (ESC). When you press one of these keys in input mode, the ESC is interpreted as your wish to leave input mode.

If we want to move to the first character on a line, we press '0' (zero) or '^'. To move to the last character, press '$'. Now, these are not all that intuitive. However, if you think back to our discussion on regular expressions, you'll remember that the '^' (caret) represents the beginning of a line and the '$' (dollar sign) represents the end of a line. Although these two characters do not necessarily have an intuitive logic, they do fit in with other commands and programs that you find on an SCO UNIX system.

We can also take advantage of the fact that vi can count as well as combine movement with this ability to count. By pressing a number before the movement command, vi will behave as if we had pressed the movement key that many times. For example, 4w will move us forward four words or 6j will move us 6 lines down.

If we want to move to a particular line, we input the number and then G. So, to move to line 42, we would press 42G, kind of like 42-Go! If instead of G we pressed <ENTER>, we would move ahead that many lines. For example, if we were on line 85, pressing 42<ENTER> would put us on line 127. (No, you don't have to count lines; vi can display them for you, as we'll see in a minute.)

As you might have guessed, we can also use counts in conjunction with the editing commands (all except CTRL-u and CTRL-d). So, to delete everything from your current location to line 83, you would input d83G. (Note that delete begins with 'd'.) Or, to change everything from the current cursor position down 12 lines, you would input c12+ or c12<enter>.

There are a couple of other, special editing commands. Pressing dd will delete the entire line you are on; 5dd would then delete 5 complete lines. If you want to open up a line for editing, press o to open one after the line you are currently on and O for one before it. The letter x will delete the character the cursor is on, and it too can be used with numbers.

When we want to move something we just deleted, we put the cursor on the spot where we want it. Then we either press p to put that text after the current cursor position or P to Put it before the current position. A nice trick that I always use to swap characters is xp. The x deletes the character you are on and the p immediately puts it back after the cursor. The result is that you swap characters. So if I had typed the word 'into' as 'inot', I would place the cursor on the 'o' and type xp, which would swap the 'o' and the 't'.

If we wanted to repeat the edit we just did, be it deleting 18 lines or inputting "I love you", we could do so by pressing '.' (period) from command mode. In fact, any edit command can be repeated with the dot.

Changing Text

To make a change, you press c followed by a movement command, or a number and a movement. For example, if we wanted to change everything from where we were to the end of the current word, we would press cw. To change everything from where we were to the end of the line, we would press C or c$. If you do that, a dollar sign will appear indicating how far the change extends.

If we go back into command mode (press ESC) before we reach the dollar sign, then everything from our current position to the dollar sign is removed. When you think about this, it is actually logical. If you press 'C', you tell vi that you want to change everything to the end of the line. When you press ESC, you are basically saying that you are done inputting text; however, the change should still extend to the end of the line, thereby deleting the rest of it.

If we wanted to undo the last edit, what do we press? Well, what's the first letter of the word 'undo'? Keep in mind that pressing u only undoes the last change. For example, let's assume we enter the following:

o to open a new line and go into input mode

I love

ESC to go back to command mode

a to append from current location

 you

ESC to return to command mode

The result of what we typed was to have a new line with the text "I love you". We see it as one change, however, from the perspective of vi, there were two changes. First we entered 'I love', then we entered 'you'. If we were to press u, only the 'you' would be removed. However, if u undoes that last change, what command do you think returns the line to its original state? What else, U. As you are making changes, vi keeps track of the original state of that line. When you press U, the line is returned to that original state.

Searching and Replacing

If you are trying to find a particular piece of text, you can get vi to do that for you. You tell vi that you want to enter a search pattern by pressing '/'. This will bring you down to the bottom line of the screen, where you will see the '/'. You then type in what you want to look for. When you press <enter>, vi will start searching from your current location down toward the bottom of the file. If you press '?' instead of '/', vi will search from your current location toward the top of the file.

If the search is successful, that is, the string is found, you are brought to that point in the text. If you decide that you want to search again, you have three choices. You can press '/' or '?' and input the search string again. Or, you could press 'n', which is the first letter of the word 'next'. Or you could simply press '/' or '?' with no text following it, and vi would continue the search in the applicable direction. If you wanted to find the next matching string, but in the opposite direction, what do you think the command would be? (Hint: what is the capital form of 'n'?)

Once you have found what you are looking for, you can edit the text all you want and then continue searching. This is because the search string you entered is kept in a buffer. So, when you press '/', '?', 'n', or 'N', the system remembers what you were looking for.

You can also include movement commands in these searches. First you enclose the search pattern with the character used to search (/ or ?), then add the movement command. For example, if you wanted to search forward for the phrase "hard disk" and then move up a line, you would enter /hard disk/-. If you wanted to search backward for the phrase "operating system" and then move down three lines, you would enter ?operating system?+3.

All this time, I have been referring to it as a search string. As you just saw, you can actually enter phrases. In fact, you can use any regular expression you want when searching for patterns. For example, if you wanted to search for the pattern SCO, but only when it appears at the beginning of a line, you would enter /^SCO. If you wanted to search for it at the end of the line, you would enter /SCO$.

You can also do more complicated searches such as /^new [Bb][Oo][Aa][Tt], which will search for the word 'new' at the beginning of a line, followed by the word 'boat' with each letter in either case.
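Character classes like this are ordinary regular expressions, so you can experiment with them outside vi as well. Here is a sketch using grep, which shares the same bracket-expression syntax; the sample lines are made up for illustration:

```shell
# Three invented sample lines fed straight to grep.
# The pattern is the same one vi would use with /^new [Bb][Oo][Aa][Tt]
printf 'new boat\nnew Boat\nold boat\n' | grep '^new [Bb][Oo][Aa][Tt]'
```

Only the first two lines match, since 'old boat' does not begin with 'new'.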

No good text editor would be complete without the ability to not only search for text but to replace it as well. One way of doing this is to search for the pattern and edit the text. Obviously, this starts to get annoying after the second or third instance of the pattern you want to replace. Instead, you could combine several of the tools you learned so far.

For example, let's say that everywhere in the text you wanted to replace 'Unix' with 'UNIX'. First, search for Unix with /Unix, tell vi that you want to change that word with cw, input UNIX, and press ESC. Now search for the pattern again with /. Then simply press '.'; remember, that repeats your last edit. Now do the search and press the period again.

Actually, this technique is good if you have a pattern that you want to replace, but not every time it appears; you want to do it selectively. You can just press 'n' (or whatever) to continue the search without carrying out the replacement.

What if you know that you want to replace every instance of the pattern with something else? Are you destined to search and replace all 50 occurrences? Of course not. Silly you. There is a way.

Here we introduce what is referred to as escape or ex-mode, since the commands you enter are the same as in the ex editor. To get to ex-mode, you press ':' (colon). As with searches, you are brought down to the bottom of the screen; this time you see the ':'. The syntax is:

: <scope> <command>

An example of this would be:

:45,100s/Unix/UNIX/
This tells vi the scope is lines 45 through 100. The command is s/Unix/UNIX/, which says you want to substitute (s) the first pattern (Unix) with the second pattern (UNIX). Normally in English we would say "substitute UNIX for Unix". However, the order here is in keeping with the UNIX pattern of source first, then destination (or what it was first and what it will become second, like mv source destination).

Note that this only replaces the first occurrence on each line. In order to get all of them, we need to include a 'g' for global at the end of the command. Like this:

:45,100s/Unix/UNIX/g
A problem arises if you want to modify only some of the occurrences. You can then add the modifier c for confirm. The command would then look like this:

:45,100s/Unix/UNIX/gc
This causes vi to ask for confirmation before it makes the change.

If you wanted to do the search and replace on every line in the file, you could specify every line, such as :1,48, assuming there were 48 lines in the file. (Oh, by the way, use CTRL-g to find out what line you are on and how many lines there are in the file.) Instead of having to check how many lines there were each time, you could simply use the special character $ to indicate the end of the file. (Yes, it also means the end of the line, but in this context it means the end of the file.) So, the scope would look like :1,$.

Once again, the developers of vi made life easy for you. They realized that making changes throughout the file is something that is probably done a lot. They included a special character to mean the entire file: %. Therefore: % = 1,$.
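Since sed, the other editor mentioned at the start of this section, uses the same s/old/new/ syntax and applies it to every input line, you can sketch what :%s/Unix/UNIX/g does from the shell. The sample lines here are invented for illustration:

```shell
# Roughly the shell equivalent of vi's :%s/Unix/UNIX/g --
# sed runs the substitution on every line, and the trailing g
# catches every occurrence on each line, not just the first.
printf 'Unix is fun\nI like Unix and Unix\n' | sed 's/Unix/UNIX/g'
```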

Here again, the search patterns can be regular expressions. For example, if we wanted to replace every occurrence of boat (in either case) with the word 'ship', the command would look like this:

:%s/[Bb][Oo][Aa][Tt]/ship/g
As with regular expressions in other cases, you can use the asterisk (*) to mean any number of the preceding character, or a period (.) to mean any single character. So, if you wanted to look for the word boat (again in either case), but only when it was at the beginning of the line and only if it was preceded by at least one dash, the command would look like this:

:%s/^--*[Bb][Oo][Aa][Tt]/ship/g
The reason you have two dashes there is that the search criteria specified at least one dash. Since the asterisk can mean any number, including zero, you have to consider the case where it means zero: with a single dash and an asterisk, boat at the beginning of the line with no dashes at all would also match. If you didn't care what the character was, as long as there was at least one, you could use the fact that in a search context a dot means any single character. The command would look like this:

:%s/^..*[Bb][Oo][Aa][Tt]/ship/g
This ex-mode also allows you to do many things with the file itself. Among them are:

  • :w to write the file to disk

  • :q to quit the file (:q! if the file has been changed and you don't want to save the changes)

  • :wq to write the file and quit

  • :e to edit a new file (or even the same file)

  • :r to read in a file starting at the current location


Remember when we first started talking about searching, I mentioned that whatever expression you were looking for was held in a buffer. The same thing can happen here: whatever is matched by /[Bb][Oo][Aa][Tt] can be held in a buffer. We can then use that buffer as part of the replacement expression. For example, if we wanted to replace every occurrence of UNIX with SCO UNIX, we could do it like this:

:%s/UNIX/SCO UNIX/g
The scope of this command is defined by the %, the shortcut way of referring to the entire text. Or, you could first save UNIX into a buffer, then use it in the replacement expression. To place something into a buffer, we enclose it within a matching pair of \( and \), which defines the extent of the buffer. You can even have multiple pairs that define the extent of multiple buffers. These are referenced by \#, where # is the number of the buffer.

In this example:

:%s/\(UNIX\)/SCO \1/g

The text UNIX is placed into the first buffer. You then reference this buffer with \1 to tell vi to plug in the contents of the first buffer. Since the entire search pattern is the same as the buffer's contents, you could also have written it like this:

:%s/\(UNIX\)/SCO &/g

The ampersand represents the entire search pattern.
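sed understands the ampersand the same way, so we can check the behavior from the shell; the sample text is invented:

```shell
# The & in the replacement stands for whatever the search pattern matched.
printf 'I run UNIX\n' | sed 's/UNIX/SCO &/'
```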

This obviously doesn't save much typing. In fact, in this example, it requires more typing to save UNIX into the buffer and then use it. However, if what you wanted to save was longer, you would save time. You also save time if you want to use the buffer twice. For example, assume you have a file with a list of other files. Some are C language source files, all of which end in '.c'. You now want to change just the names of the C files so the ending is '.old' instead of '.c'. To do this, you have to insert an mv at the beginning of each line as well as produce two copies of the file name, one with .c and one with .old. You could do it like this:

:%s/^\(.*\)\.c/mv \1.c \1.old/g

In English, this line says:

  • For every line (%)

  • substitute (s)

  • for the pattern starting at the beginning of the line (^) consisting of any number of characters ( \(.*\) ) (placing this pattern into the buffer #1) followed by a .c

  • and use the pattern mv followed by the contents of buffer #1 (\1) followed by a .c, which is again followed by the contents of buffer #1 (\1) followed by .old

  • and do this for every occurrence on the line (g, i.e. globally)

Now each line is of the form:

mv file.c file.old

We can now change the permissions to make this a shell script and execute it, which would move all the files as described above.
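The same \( \) buffers work in sed, so here is a sketch of generating that mv script from the shell; the file names are hypothetical:

```shell
# For each name ending in .c, emit 'mv name.c name.old'.
# \(.*\) saves the base name into buffer 1, referenced later as \1.
printf 'main.c\nutil.c\n' | sed 's/^\(.*\)\.c/mv \1.c \1.old/'
```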

Using numbered buffers like this is useful if there is more than one search pattern that you want to process. For example, assume that we had a three-column table and wanted to change the order in which the columns appear. For simplicity's sake, let's also assume that each column is separated by a single space, so as not to make the search pattern too complicated.

Before we start, we need to introduce a new concept to vi, but one that you have seen before: [ ]. As in the shell, the [ ] pair inside vi is used to define sets of characters. Inside of these brackets, the caret (^) takes on a new meaning. Rather than indicating the beginning of a line, here it negates the set of characters we are searching for. So we could do:

%s/\([^ ]*\) \([^ ]*\) \([^ ]*\)/\3 \1 \2/g


Here we have three regular expressions all referring to the same thing: \([^ ]*\). As we discussed above, the \( \) pairs delimit each of the buffers, so everything inside is the search pattern. Here, we are searching for [^ ]*, which is any number of matches to the set enclosed within the brackets. Since the brackets define a set, the set is ^ followed by a space; and since the ^ indicates negation, we are placing any number of characters that are not spaces into the buffer. In the replacement pattern, we told vi to print pattern 3, a space, pattern 1, another space, then pattern 2.

In the first two instances, we followed the pattern with a space. As a result, those spaces were not saved into any of the buffers. We do this because we may want to define our column separator differently; here, we just used another space.
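Here too, sed accepts the identical expression, so you can watch the columns move; the three-column sample line is made up:

```shell
# Swap three space-separated columns: print column 3, then 1, then 2.
# Each \([^ ]*\) captures a run of non-space characters into a buffer.
printf 'one two three\n' | sed 's/\([^ ]*\) \([^ ]*\) \([^ ]*\)/\3 \1 \2/'
```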

I have often had occasion to want to use the pattern buffers more than once. Since they are not cleared after use, you can use them as many times as you want. Using the example above, if we change it to:

%s/\([^ ]*\) \([^ ]*\) \([^ ]*\)/\3 \1 \2 \1/g


We would get pattern 3, then pattern 1, then pattern 2, and at the end pattern 1 again.

Believe it or not, there are still more buffers. In fact, there are dozens that we haven't touched on. The first set is the numbered buffers, numbered 1-9. These are used when we delete text, and they behave like a stack. That is, the first time we delete something, say a word, it is placed in numbered buffer 1. If we next delete a line, it is placed in buffer 1 and the word that was in buffer 1 moves to buffer 2. Once all the numbered buffers are full, any new deletion pushes the oldest one out the bottom of the stack, and it is no longer available.

To access them, we first tell vi that we want to use one of the buffers by pressing the double-quote ("), then the number of the buffer, say 6, then either p or P to put it, as in "6p. When you delete text and then do a put without specifying any buffer, it automatically comes from buffer 1.

There are some other buffers, in fact, 26 of them, that you can use by name. These are the named buffers. If you can't figure out what their names are think about how many of them there are (26). With these buffers, we can intentionally and specifically place something into one of these buffers. First, we say which buffer we want by preceding its name with a double-quote ("), for example, "f. This says we want to place some text in the named buffer f. Then, we place the data in the buffer, for example by deleting the whole line with dd or deleting two words with d2w. We can later put the contents of that buffer with "fp. Until we place something new in that buffer, it will contain that deleted line.

Now if you want to put something into a buffer without having to delete it, you can. You do this by "yanking it". Okay, folks, another pop quiz. What letter do we use to "yank" text?

To yank an entire line, you could do one of several things. First, there is yy. Next, Y. Then you could use y followed by a movement command, as in y+4, which would yank the next 4 lines (including the current one), or y/expression, which would yank everything from your current position up to expression and put it into the default buffer (numbered buffer 1).

If you want to place the text into a named buffer, it is the same procedure as deleting into one. For example, if we wanted to yank the next 12 lines into named buffer h, we would enter "h12yy. Now those 12 lines are available to us. Keep in mind that we do not have to store full lines; inputting "h12yw will put the next 12 words into buffer h.

Some of the more observant readers might have noticed that since there are 26 letters, each with both an upper- and a lowercase form, we could have 52 named buffers. Well, up to now the uppercase version of a command has always done something slightly different; if the uppercase letters were merely different buffers, that wouldn't follow the pattern. Have no fear, it does.

Instead of being different buffers from their lowercase brethren, the uppercase letters refer to the same buffers. The difference is that yanking or deleting something into the uppercase buffer appends to its contents rather than overwriting them.

You can also have vi keep track of up to 26 different places within the file you are editing. These function just like bookmarks in other word processors. Pop quiz: if there are 26 of them, what are their names?

To mark a spot, move to that place in the file and type m for mark (what else?) followed by the letter you want to use for this bookmark. To go back to that spot, press the backquote (`) followed by the appropriate letter. So, to assign bookmark q to a particular spot, you would enter mq; to return to it, you would enter `q. Keep in mind that reloading the current file or editing a new one makes you lose the bookmarks.

vi Magic

I imagine that long before now, you have been wondering how to turn on all that magic I said that vi could do. Okay, let's do it.

The first thing I want to talk about is abbreviations. You can tell vi that when you type in a specific set of characters it is supposed to automagically change it to something else. For example, we could have vi always change USA to United States of America. This is done with the abbr command.

To create a new abbreviation you need to get into ex-mode by pressing the colon (:) in command mode. Next, type in abbr followed by what you want to type in, then what vi should change it to. For example:

:abbr USA United States of America

Note that the abbreviation cannot contain any spaces, since vi interprets everything after the second word as being part of the expansion.

If we later decide we don't want that abbreviation any more, we enter:

:unabbr USA

Since it is likely that we will sometimes want to type the word USA itself, it is not a good idea to use an abbreviation that occurs normally, such as USA. It would be better, instead, to use an abbreviation that doesn't occur normally, like Usa. Keep in mind that abbreviations only apply to complete words; therefore, something like the name Sousa won't get translated to SoUSA. The change is made when your abbreviation is followed by a space, tab, ENTER, or ESC.

Let's take this one step further. What if we were always spelling 'the' as 'teh'. We could then create an abbreviation:

:abbr teh the

Every time we misspelled 'the' as 'teh', vi would automatically correct it for us. What if we had a whole list of words that we regularly misspelled and created similar abbreviations? Then every time we entered one of these misspelled words, it would get replaced with the correctly spelled word. Wouldn't that be automatic spell correction?

If we ever want to "force" the spelling to be a particular way (that is, turn off the abbreviation momentarily), we simply follow the abbreviation with a CTRL-V. This tells vi to ignore the special meaning of the following character. Since the next character is a whitespace character that would force the expansion of the abbreviation (which makes the whitespace special in this case), "turning off" the whitespace keeps the abbreviation from being expanded.

We can also use vi to re-map certain sequences. For example, I have created a command so that all I need to do to save a file is do CTRL-W for write. If I want to save the file and quit, I enter CTRL-X. This is done with the 'map' command.

The most common maps I have seen use control characters, since most of the other characters are already taken. Therefore, we need to sidestep a moment. First, we need to know how to access control characters from within vi. This is done in either command mode or input mode by first pressing CTRL-V, then pressing the control character we want. So to get CTRL-W, I would type CTRL-V then CTRL-W. This appears on the screen as ^W. It looks like two characters, but if you inserted it into the text and moved over it with the cursor, you would realize that vi sees it as only one. Note that although I pressed the lowercase 'w', it appears as uppercase on the screen.

So, if we wanted to map CTRL-W so that every time we pressed it, we would write our current file to disk, the command would be:

map ^W :w^M

This means that when we press CTRL-W, vi interprets it as if we had actually pressed :w followed by an <ENTER> (the CTRL-M, ^M). The <ENTER> at the end of the command is a good idea, since you usually want the command to be executed right away; otherwise, you would have to press ENTER yourself.

Also keep in mind that this can be used with the function keys. Since I am used to many Windows and DOS applications where the F2 key means save, I map F2 by pressing CTRL-V and then F2. It then looks like this:

map ^[[N :w^M (The ^[[N is what the F2 key displays on the screen)

If we want, we can also use shifted function keys. Therefore, we can map SHIFT-F2 to something else. Or, for that matter, we can use shifted and control function keys as well.

It has been my experience that, for the most part, if you use shift and control with non-function keys, vi only sees the control and not the shift. Also, the ALT may not work since on the system console, ALT plus a function key tells the system to switch multiscreens.

I try not to use the same key sequences that vi already does. First, it confuses me, since I often forget that I remapped something. Second, the real vi commands are then inaccessible. However, if you are used to a different command set (that is, from a different editor), you can "program" vi to behave like that other editor.
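Abbreviations and maps entered this way disappear when you leave vi. To make them permanent, they normally go into the ~/.exrc file, which vi reads at startup. A minimal sketch (the particular entries are just examples, and the ^W and ^M must be entered as real control characters using CTRL-V):

```
" Sample ~/.exrc -- read by vi when it starts
abbr teh the
abbr Usa United States of America
map ^W :w^M
```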

Never define a mapping that contains its own name, as this ends up recursively expanding the abbreviation. The classic example is :map! n banana. Every time you typed an n, it would expand to banana, whose own n's would expand again, and so on. Depending on what version you are running, vi will catch the fact that this is an infinite translation and stop.

Inserting Command Output

It often happens that we want the output of UNIX commands in the file we are editing. The sledgehammer approach is to run the command, redirect it to a file, and then edit that file. If a file containing the command's output already exists, we can use :r from ex-mode to read it in. But what if it doesn't yet exist? For example, I often want the date in text files as a log of when I input things. This is done with a combination of the :r (for read) from ex-mode and a shell escape.

A shell escape is where we jump out of one program (escape) to a shell. Our original program is still running, but we are now working in a shell that is a child process of that program.

To do a shell escape, we need to be in ex-mode. Next, press the exclamation mark followed by the command. For example, if we want to see what time it is, we could do :!date. We then get the date at the bottom of the screen with a message to press any key to continue. Note that this didn't change our original text; it just showed us the output of the date command.

In order to read in that output, we need to include the :r command, as in: :r!date. Now, the output of the date is read into the file (it is inserted into the file). We could also have it replace the current line by pressing '!' twice as in '!!date'. Note that we are brought down to the last line on the screen and there is a single '!'.

If we want, we can also read in the output of other commands. What is happening is that vi sees the output of the command as a file. Remember that :r <file_name> reads a file into the one we are editing. Why not read from the output of a command? With pipes and redirection, both stdin and stdout can be files, so why not here as well? In fact, that is basically what is happening.

We can also take this one step further. Imagine we are editing a file containing a long list. You know that many lines are duplicated and you also want the list sorted. We could do :%!sort; if we remember from our discussion above, '%' is a special symbol meaning all the lines in the file. These lines are then sent through the command on the other side of the '!'. Now we can do:

:%!uniq

to remove all the duplicate lines.

Remember that this is a shell escape. From the shell we can combine multiple commands using pipes. We can here as well. So to save time, we could enter:

:%!sort | uniq

which would sort all the lines and remove the duplicates. If we only wanted to sort a set of lines, we could do that like this:

:45,112!sort

which would sort lines 45 through 112. We can take this one step further by either writing lines 45-112 to a new file with :45,112w file_name or reading in a whole file to replace lines 45-112 with :45,112r file_name.
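Outside of vi, the same sort | uniq pipeline behaves identically; here is a quick sketch you can try in the shell (the file name is just an example):

```shell
# build a small list with duplicates, then sort it and strip the repeats
printf 'pear\napple\npear\nbanana\napple\n' > /tmp/fruit.list
sort /tmp/fruit.list | uniq
```

Note that uniq only collapses *adjacent* duplicate lines, which is why the sort has to come first.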

More vi magic

If we want, we can start editing a file at a point part of the way through it, rather than at the beginning. This is done from the command line like:

vi +# filename

where # is the line number to start at. If we leave off the number, vi will start editing at the end of the file. This is great for handwritten logs and the like. If we want vi to first find a particular phrase and jump there, that is just as easy: vi +/expression. If we need to, we can also edit multiple files. This is done like:

vi file1 file2 file3

Once we are editing, we can switch between files with :n for the next file and :p for the previous one. Keep in mind that the file names do not wrap around. In other words, if we keep pressing :n and get to file3, doing it again does not wrap around and bring us back to file1. If we know the name of the file, we can jump directly there with the ex-mode edit command, as in

:e file3

The ability to edit multiple files has another advantage. Do you remember those numbered and named buffers? They are assigned for a single instance of vi and not on a per-file basis. Therefore you can delete or yank text from one file, switch to the next and then insert it. This is a crude, but effective cut and paste mechanism between files.

You can specify line numbers to set your position within a file. If you switch to editing another file (using :n or :e), or reload the original file (using :rew!), the contents of the deletion buffers are preserved so that you can cut and paste between files. The contents of all buffers are lost, however, when you quit vi.

Odds and Ends

As I found while writing this book, you often re-use the same vi commands over and over again. Here, too, vi can help. Since the named buffers are simply sequences of characters, you can store commands in them for later use. For example, when editing files in vi I need to mark new paragraphs in some way, since my word processor normally sees all the end-of-line characters as new paragraphs. Therefore, I created a command that entered a paragraph marker for me.

First, I need to create the command. I do this by opening up a new line in my current document and typing in the text:

IPara^[

where ^[ stands for the Escape character (entered literally in insert mode by pressing Ctrl-V followed by Esc). Had I typed this from command mode, it would have inserted the text 'Para' at the beginning of the line. I then load it into a named buffer with "pdd. This deletes the line and loads it into buffer 'p'. To execute it, I enter @p. The at-sign (@) is what tells vi to execute the contents of the buffer.

Keep in mind that many of the commands, abbreviations, etc., are transitive. For example, when I want to add a new paragraph, I don't write "Para" as the only characters on the line. Instead, I use something less common: {P}. I am certain that I will never have {P} at the beginning of a line; however, there are contexts where I might have "Para" at the beginning of a line. So I have an abbreviation "Para" that translates to {P}.

Now I can type in Para at the beginning of a line in input mode and it will be translated to {P}. When I execute the command I have in buffer 'p', it inserts "Para" which is then translated to: {P}

So why don't I just have {P} in the buffer 'p'? Because the curly brackets are one set of movement keys that I did not mention yet. The '{' moves you back to the beginning of the paragraph and the '}' moves you forward. Since paragraphs are defined by vi as being separated by blank lines or delimited by nroff macros (nroff is an old UNIX text processing language), I never use them. Because vi sees the brackets as something special in command mode, I need to use this transitivity.

If you are a C programmer, you can take advantage of a couple of nifty vi tricks. The first is the ability to show you matching pairs of parentheses ( ), square brackets [ ], and curly brackets { }. In ex mode (:), type set showmatch. Afterwards, every time you enter a closing one ( ')', ']', or '}' ), you are bounced back to its match. This is useful in checking whether or not you have the right number of each.

We can also jump back and forth between these pairs by using the percent sign (%). No matter where we are within a { } pair, pressing % once moves us to the first (opening) one. Press % again and we are moved to its match (the closing one). We can also place the cursor on the closing one and press % to move to the opening one.

As programmers, we may like to indent blocks of code to make things more readable. Sometimes changes within the code make us want to shift blocks to the left or right to keep the spacing the same. To do this, we use << (two less-than signs) to move the text one "shiftwidth" to the left and >> (two greater-than signs) to move it one "shiftwidth" to the right. A "shiftwidth" is defined in ex mode with set shiftwidth=n, where n is some number. When we shift a line, it moves left or right n characters.

If we want to shift multiple lines, we input a number before the shift. For example, if we input 23>>, we shift the next 23 lines one shiftwidth to the right.

There are a lot of settings that can be used with vi to make our lives easier. These are done in ex mode, using the set command. For example, use :set autoindent to have vi automatically indent. This, along with other set commands, can be abbreviated. See the vi(C) man-page for more details.

Other useful set commands:

  • wrapmargin=n

Automatically "word wrap" when you get to within n spaces of the end of the line.

  • showmode

Tell you whether or not you are in insert mode.

  • number

Display line numbers at the left-hand edge of the screen.

  • autowrite

Save any changes that have been made to the current file when you issue a :n, :rew, or :! command.

  • ignorecase

Ignore the case of text while searching.

  • list

Print end-of-line characters as ``$'' and tab characters as ``^I''. These characters are normally invisible.

  • tabstop=n

Set the number of spaces between each tab stop on the screen to n.

  • shiftwidth=n

Set the number of spaces << and >> shift each line.

Table 0.2

Configuring vi

Ever since we started talking about vi, I have mentioned that there are a lot of things we can do to configure it: mappings, abbreviations, and settings that we can control. The problem is that once we leave vi, everything we added is lost.

Fortunately, there is hope. Like many programs, vi has its own configuration file: .exrc (note the dot at the front). Normally this file does not exist by default, so vi just takes its standard settings. If this file resides in our home directory, it will be valid every time we start vi. The exception is when we have an .exrc file in our current directory; this then takes precedence. Having multiple .exrc files is useful when doing programming as well as editing text. When writing text, I don't need line numbers or autoindent like I do when programming.

The content and syntax of the lines are exactly the same as in vi; however, we don't have the leading colon. Part of the .exrc file in my text editing directory looks like this:

map! ^X :wq

map x :wq

map! ^W :w

map w :w

set showmode

set wm=3

abbr Unix UNIX

abbr btwn between

abbr teh the

abbr refered referred

abbr waht what

abbr Para {P}

abbr inot into

The Next Step

No one can force you to learn vi, just as no one can force you to do backups. However, in my opinion, doing both will make you a better administrator. There will come a time when having done regular backups saves your career. There may also come a time when knowing vi saves you the embarrassment of having to tell your client or boss that you can't accomplish a task because you need to edit a file and the only editor is the one that comes by default with the system: vi.

Non-Interactively Editing Files with sed

Suppose you have a file that you need to make some changes in. You could load up vi and make the changes that way. But what if what you want to change is the output of some command before you send it to a file? You could first send it to a file and then edit that file. Or you could use sed, which is a Stream EDitor specifically designed to edit data streams.

If you read the previous section or are already familiar with either the search-and-replace mechanism in vi or the editor 'ed', you already have a jump on learning sed. Unlike vi, sed is non-interactive, but it can handle more complicated editing instructions. Since it is non-interactive, commands can be saved in text files and used over and over again. This makes debugging the more complicated sed constructs that much easier. For the most part, sed is line-oriented, which allows it to process files of almost any size. However, this has the disadvantage that sed cannot do editing that depends on relative addressing.

Unlike the section on vi, I am not going to go into as much detail about sed. However, sed is a useful tool and I use it very often. The reason I am not going to cover it in too much detail is three-fold. First, much of what is true about pattern searches, addressing, etc., in vi is also true in sed; therefore, I don't feel a need to repeat it. Second, it is not that important that you become a sed expert to be a good system administrator. There are a few cases where scripts on an SCO UNIX system use sed; however, they are not that difficult to understand, provided you have a basic understanding of sed syntax. Lastly, sed is like any programming language: you can get by with simple things, but to get really good you need to practice, and we just don't have the space to go beyond the basics.

In this section we are going to talk about the basics of sed syntax, as well as some of the more common sed commands and constructs. If you want to learn more in order to write your own or just because you are curious, I recommend getting sed & awk by Dale Dougherty from O'Reilly and Associates. This will also help you in the section on awk coming up next.

The way sed works is that it reads input a line at a time and then carries out whatever editing changes you specify. When it has finished making the changes, it writes them to stdout. Like commands such as grep and sort, sed acts as a filter. However, with sed you can create very complicated programs. Since I normally use sed at one end of a pipe, most of the sed commands that I use have the following structure:

first_cmd | sed <options> <edit_description>

This is useful when the edit descriptions that you are using are fairly simple. However, if you want to perform multiple edits on each line, this way is not really suitable. Instead, you can put all of your changes into one file and start up sed like this:

first_cmd | sed -f editscript

or

sed -f editscript <inputfile

As I mentioned before, the addressing and search replace mechanism within sed is basically the same as within vi. It has the structure:

[address1[,address2]] edit_description [arguments]

As with vi, addresses do not necessarily need to be line numbers, but can be regular expressions that sed needs to search for. If you leave the address off, sed will make the changes globally, as applicable. The edit_description tells sed what changes to make. There are several arguments that can be used, and we'll get to them as we move along.

As sed reads the file, it copies each line into its 'pattern space'. The pattern space is a special buffer that sed uses to hold the line of text as it processes it. As soon as it has finished reading the line, sed begins to apply the changes to the pattern space based on the edit description.

Keep in mind that although sed reads every line into the pattern space, it will only make changes to lines that match the addresses specified. Sed does not print any warnings when this happens. In general, sed either silently ignores errors or terminates abruptly with an error message as the result of a syntax error, not because there were no matches. If no lines contain the pattern, no lines match and the edit commands are simply not carried out.

Because you can have multiple changes on any given line, sed will carry them each out in turn. When there are no more changes to be made, sed sends the result to its output. The next line is read in and the whole process starts over. As it reads in each line, sed will increment an internal line counter, which keeps track of the total number of lines read, not lines per file. This is an important distinction if you have multiple files that are being read. Assume you had two 50 line files. From sed's perspective, line 60 is the 10th line in the second file.
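You can see this cumulative line numbering from the shell; here is a small sketch (the file names are made up) where line 3 of the combined stream turns out to be the first line of the second file:

```shell
# sed numbers lines across the whole input stream, not per file
printf 'alpha\nbeta\n' > /tmp/part1
printf 'gamma\ndelta\n' > /tmp/part2
sed -n '3p' /tmp/part1 /tmp/part2    # prints "gamma", line 1 of the second file
```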

Each sed command can have 0, 1, or 2 addresses. A command with no addresses specified is applied to every line in the input. A command with one address is applied to all lines that match that address. For example:

/mike/s/john/fred/

replaces the first instance of ``john'' with ``fred'' only on those lines containing ``mike''. A command with two addresses is applied to the first line that matches the first address, then to all subsequent lines until a match for the second address has been processed. An attempt is made to match the first address on subsequent lines, and the process is repeated. Two addresses are separated by a comma.

For example:

50,100s/john/fred/

replaces the first instance of ``john'' with ``fred'' on lines 50 to 100 inclusive. (Note that there should be no space between the second address and the s command.) If an address is followed by an exclamation mark (!), the command is applied only to lines that do not match the address. For example:

50,100!s/john/fred/

replaces the first instance of ``john'' with ``fred'' everywhere except lines 50 to 100 inclusive.
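To see a one-address substitution in action without creating a file, you can feed sed a couple of lines on stdin; a minimal sketch:

```shell
# the substitution only happens on the line that also contains "mike"
printf 'mike and john\njohn went home\n' | sed '/mike/s/john/fred/'
```

The first line becomes "mike and fred" while the second line, which does not match the /mike/ address, is passed through untouched.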

Sed can also be told to do input and output based on what it finds. The action it should perform is identified by an argument at the end of the sed command. For example, if we wanted to print out lines 5-10 of a specific file, the sed command would be:

cat file | sed -n '5,10p'

The -n is necessary so that every line isn't output in addition to the lines that match.
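A quick way to convince yourself of the -n behavior is to generate numbered test input; a sketch using seq:

```shell
# without -n, sed would echo every line AND print lines 5-10 a second time
seq 12 | sed -n '5,10p'    # prints only lines 5 through 10
```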

Remember the script we created in the first section of this chapter where we wanted just lines 5-10 of every file? Now that we know how to use sed, we can change it to be a lot more efficient. It would now look like this:

find ./letters/taxes -print | while read FILE

do

echo $FILE

cat $FILE | sed -n '5,10p'

done
Rather than sending the file through head and then the output of that through tail, we send the whole file through sed. It can keep track of what line it is on and then print the necessary lines.

In addition, sed allows you to write the lines that match out to a file. For example, if we wanted all the comments in a shell script to be output to a file, we could use sed like this:

cat filename | sed -n '/^#/w filename'

Note that there must be exactly one space between the w and the name of the file. If we wanted to read in a file, we could do that as well: instead of a w to write, we use an r to read. The contents of the file will be appended after the lines specified in the address. Also keep in mind that writing to or reading from a file is independent of what happens next. For example, if we write every line containing the name John to a file, but in a subsequent sed command change John to Chris, the file will still contain references to John. This is logical, since sed works on each line in turn, and the lines are already in that file before the changes are made.
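Here is a small, self-contained sketch of the w argument (the file names are invented):

```shell
# collect every comment line of a script into a separate file
cat > /tmp/demo.sh <<'EOF'
#!/bin/sh
# first comment
echo hello
# second comment
EOF
sed -n '/^#/w /tmp/comments.out' /tmp/demo.sh
cat /tmp/comments.out
```

Note that the #!/bin/sh line is written out too, since it also begins with a #.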

Keep in mind that every time a line is read in, the contents of the pattern space are overwritten. In order to save certain data across multiple commands, sed provides what is called the "hold space." Changes are not made to the hold space directly; rather, the contents of either one can be copied into the other for processing. The contents can even be exchanged if needed. Table 0.3 contains a list of the more common sed commands, including the commands used to manipulate the hold and pattern spaces.
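As a tiny illustration of the hold space, this sketch prints two lines in reverse order: h saves line 1, and on line 2 G appends the saved line back before printing:

```shell
# 1h     : copy line 1 into the hold space
# 2{G;p;}: on line 2, append the hold space to the pattern space and print
printf 'one\ntwo\n' | sed -n '1h; 2{G;p;}'
```

The output is "two" followed by "one".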


a - append text to the pattern space

b - branch to a label

c - change text

d - delete the pattern space

D - delete all the characters from the start of the pattern space up to and including the first newline

g - overwrite the pattern space with the hold space

G - append the hold space to the pattern space, separated by a newline

h - overwrite the hold space with the pattern space

H - append the pattern space to the hold space, separated by a newline

i - insert text

l - list the contents of the pattern space

n - read the next input line into the pattern space

N - append the next input line to the pattern space, separated by a newline

p - print the pattern space

P - print from the start of the pattern space up to and including the first newline

r - read in a file

s - substitute patterns

t - branch only if a substitution has been made to the current pattern space

w - write to a file

x - exchange the contents of the pattern space and the hold space (the maximum number of addresses is two)

Table 0.3 Sed Commands

Programming with AWK

Another language that SCO provides, and which is standard on many (most?) UNIX systems, is awk. The name awk is an acronym composed of the first letters of the last names of its developers: Alfred Aho, Peter Weinberger, and Brian Kernighan. Like sed, awk is an interpreted, pattern-matching language. In addition, awk, like sed, can read stdin as well as be passed the name of a file containing its instructions.

One of the most useful aspects of awk (at least for me and the many SCO scripts that use it) is its idea of a field. Like sed, awk reads whole lines, but awk can immediately break each line into segments (fields) based on some criterion. The fields are separated by a field separator; by default, this is a space. By using the -F option on the command line or the FS variable within an awk program, you can specify a new field separator. For example, if you specified a colon (:) as the field separator, you could read in the lines from the /etc/passwd file and immediately break each one into fields.
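For example, with a passwd-style line (sample data here, not the real file), -F: splits on the colons:

```shell
# in passwd format, $1 is the login name and $7 is the login shell
echo 'root:x:0:0:Superuser:/:/bin/sh' | awk -F: '{ print $1, $7 }'
```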

A programming language in its own right, awk has become a staple of UNIX systems. The basic purpose of the language is the manipulation and processing of text files. However, awk is also a useful tool when combined with the output of other commands, allowing you to format that output in ways that might be easier to process further. One of the major advantages of awk is that it can accomplish in a few lines what would normally require dozens of lines in an sh or csh shell script, or might even require writing something in a lower-level language like C.

The program awk appears in three forms on SCO UNIX systems. The binary /usr/bin/awk is the program that ought to be called from scripts or from the command line and may be linked to either /usr/bin/oawk (Old awk) or /usr/bin/nawk (New awk). On SCO OpenServer systems, all three are linked together. Although there are slight differences between the behavior with oawk and nawk, they do not interfere with the basic functionality of the language. Therefore, I am not going to address them. If you want details about the differences, check out the awk(C) man-page.

The basic layout of an awk command is:

pattern { action }

where the action to be performed is included within the curly braces. Like sed, awk reads input a line at a time, but awk sees each line as a record broken up into fields. Fields are separated by an input field separator (FS), which by default is a tab or a space. The FS can be changed to something else, for example a semi-colon, with FS=";". This is useful when we want to process text that contains blanks. For example, data of the form:

Blinn, David;42 Clarke Street;Sunnyvale;California;95123;33

Dickson, Tillman;8250 Darryl Lane;San Jose;California;95032;34

Gibberson, Suzanne;102 Truck Stop Road;Ben Lomond;California;96221;26

Holder, Wyliam;1932 Nuldev Street;Mount Hermon;California;95431;42

Nathanson, Robert;12 Peabody Lane;Beaverton;Oregon;97532;33

Richards, John;1232 Bromide Drive;Boston;Massachusetts;02134;36

Shaffer, Shannon;98 Whatever Way;Watsonville;California;95332;24

Here we have name, address, city, state, zip code, and age. Without using ';' as the field separator, "Blinn," and "David;42" would be the first two fields. Here, we want to treat each name, address, city, etc., as a single unit rather than as multiple fields.

The basic format of an awk program (or awk script, as it is sometimes called) is a pattern followed by a particular action. Like sed, awk checks each line of the input to see if it matches that particular pattern. Both sed and awk do well when comparing string values. However, whereas checking numeric values is difficult with sed, this functionality is an integral part of awk.

If we wanted, we could output only the names and cities of those people under 30. First we need an awk script, called awk.scr, that looks like this:

BEGIN { FS = ";" }

$6 < 30 { print $1, $3 }

(The BEGIN pattern, which we will come back to shortly, simply runs its action before any input is read; here it sets the field separator.)

Next, assume that we have a data file containing the seven lines of data above, called awk.data. We could process the data file in one of two ways:

awk -f awk.scr awk.data

The -f option tells awk that it should read its instructions from the file that follows, in this case awk.scr. At the end, we have the file from which awk is to read its data. Alternatively, we could start it like this:

cat awk.data | awk -f awk.scr
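Putting the two pieces together, here is a runnable sketch of that invocation (the /tmp paths and the cut-down data file are illustrative):

```shell
# the age filter as a script file
cat > /tmp/awk.scr <<'EOF'
BEGIN { FS = ";" }
$6 < 30 { print $1, $3 }
EOF
# two sample records; only one person is under 30
cat > /tmp/awk.data <<'EOF'
Blinn, David;42 Clarke Street;Sunnyvale;California;95123;33
Shaffer, Shannon;98 Whatever Way;Watsonville;California;95332;24
EOF
awk -f /tmp/awk.scr /tmp/awk.data
```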

We can even make string comparisons as in:

$4 == "California" { print $1, $3 }

Although it may make little sense, we could make string comparisons on what would normally be numeric values, as in:

$6 == "33" { print $1, $3 }

This prints out fields 1 and 3 from only those lines where the sixth field equals the string "33".

Not to be outdone by sed, awk also allows you to use regular expressions in your search criteria. A very simple example is one where we want to print every line containing the characters 'on'. (Note: these have to be adjacent and in the appropriate case.) This line would look like this:

/on/ {print $0}

However, the regular expressions that awk uses can be as complicated as those in sed. One example would be:

/[^s]on[^;]/ {print $0}

This says to print every line containing 'on', but only if it is neither preceded by an 's' nor followed by a semi-colon (;). The trailing semi-colon eliminates the two town names ending in "on" (Boston and Beaverton), and the leading 's' eliminates all the names ending in 'son'. When we run awk with this line, our output is:

Gibberson, Suzanne;102 Truck Stop Road;Ben Lomond;California;96221;26

Hmmm. Doesn't the name "Gibberson" contain "son"? Shouldn't it be ignored along with the others? Well, yes. However, that's not what matched. The reason this line was printed out was because of the 'on' in Ben Lomond.
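You can check this reasoning with a few sample lines of your own; a sketch:

```shell
# only the 'on' in "Ben Lomond" survives both character-class tests:
# Boston's 'on' is followed by ';', Gibberson's 'on' is preceded by 's'
printf '%s\n' 'Boston;Massachusetts' 'Ben Lomond;California' 'Gibberson, Suzanne' |
    awk '/[^s]on[^;]/'
```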

We can also use addresses as part of the search criteria. Assume we wanted to print out only those lines where the first field (i.e., the person's last name) is in the first half of the alphabet. Since this list is sorted, we could look for all the lines between those starting with 'A' and those starting with 'M'. Therefore, we could use a line like this:

/^A/,/^M/ {print $0}

When we run it, we get nothing at all.

Hmmm. What happened? There certainly are several names in the first half of the alphabet. Why didn't this print anything? Well, it printed exactly what we told it to print. Like the addresses in both 'vi' and sed, awk searches for a line that matches the criteria we specified. So what we really said was "Find the first line that starts with an 'A' and then print all the lines up to and including the next one starting with an 'M'." Since there was no line starting with an 'A', the start address didn't exist. Instead, the line to get what we really want would look like this:

/^[A-M]/ {print $0}

This prints all the lines whose first character is in the range A-M. Since this checks every line and isn't looking for starting and ending addresses, we could have even used an unsorted file and still gotten all the lines we wanted. The output then looks like this:

Blinn, David;42 Clarke Street;Sunnyvale;California;95123;33

Dickson, Tillman;8250 Darryl Lane;San Jose;California;95032;34

Gibberson, Suzanne;102 Truck Stop Road;Ben Lomond;California;96221;26

Holder, Wyliam;1932 Nuldev Street;Mount Hermon;California;95431;42

If we did want to use a start and end address, we would have to specify start letters that actually exist in our file. For example:

/^B/,/^H/ {print $0}

Since printing is a very useful aspect of awk, it's nice to know that there are actually two ways of printing with awk. The first we just mentioned. However, if you use printf instead of print, you can get much more detailed control over your output format. If you are familiar with the C programming language, you already have a head start, as the format of this printf is essentially the same as in C. However, there are a couple of differences that you will notice immediately if you are a C programmer.

For example, we could print out both the name and age with this line:

$6 >30 {printf"%20s %5d\n",$1,$6}

The output would look like this:

        Blinn, David    33

    Dickson, Tillman    34

      Holder, Wyliam    42

   Nathanson, Robert    33

      Richards, John    36
The space used to print each name is 20 characters wide, followed by 5 characters for the age.
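You can reproduce one row of that output from the shell; a sketch with a single data line:

```shell
# %20s right-justifies the name in 20 columns, %5d the age in 5
printf 'Blinn, David;42 Clarke Street;Sunnyvale;California;95123;33\n' |
    awk -F';' '$6 > 30 { printf "%20s %5d\n", $1, $6 }'
```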

Since awk reads each line as a single record and the blocks of text in each record as fields, it needs to keep track of how many records there are and how many fields. These are held in the NR and NF variables, respectively.

Another useful place for awk is at the end of a pipe. You may have multiple-line output from one command or another, but only want one or two fields from each line. For example, we may only want the permissions and file name from ls -l output. We would then pipe it through awk like this:

ls -l | awk '{ print $1" "$9 }'

The output might look something like this:

-rw-r--r-- mike.letter

-rw-r--r-- pat.note

-rw-r--r-- steve.note

-rw-r--r-- zoli.letter

This brings up the concept of variables. Like other languages, awk allows you to define variables. A couple are already pre-defined and come in handy. For example, what if we didn't know off the top of our head that there were 9 fields in the ls -l output? Since we know that we want the first and the last field, we can use the variable that holds the number of fields. The line would then look like this:

ls -l | awk '{ print $1" "$NF }'

In this example, the space enclosed in quotes is necessary; otherwise awk would print $1 and $NF right next to each other.

Another variable that awk maintains keeps track of the number of records read so far: NR. This can be useful, for example, if you only want to see a particular part of the text. Remember our example at the beginning of the section where we wanted to see lines 5-10 of a file? (This was to look for an address in the header.) In the last section, we showed you how to do it with sed; now I'll show you with awk.

We can use the fact that the NR variable keeps track of the number of records; since each line is a record, NR also keeps track of the number of lines. So we tell awk to print each line between 5 and 10, like this:

cat datafile | awk 'NR >= 5 && NR <= 10'

This brings up four new issues. The first is the NR variable itself. The second is the use of the double ampersand (&&). As in C, this means a logical "and": both the left side and the right side of the expression must be true in order for the entire expression to be true. In this example, a line matches if the value of NR is greater than or equal to 5 (i.e., we have read in at least 5 lines) and the number of lines read is no more than 10. The third issue is that there is no print statement. The default action of awk, when it doesn't have any additional instructions, is to print out each line that matches the pattern.

The last issue is the use of the variable NR. Note that here there is no dollar sign ($) in front of the variable. This is because we are looking for the value of NR, not what it points to. You do not need to prefix a variable with $ unless it is a field variable. Confused? Let's look at another example.
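Using seq to generate numbered test input, the whole thing fits on one line:

```shell
# the NR test is a pattern, so the default action (print) applies
seq 12 | awk 'NR >= 5 && NR <= 10'    # prints lines 5 through 10
```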

Let's say we wanted to print out only the lines where there were more than nine fields. We could do it like this:

cat datafile | awk 'NF > 9'

Compare this to:

cat datafile | awk '{ print $NF }'

which prints out the last field in every line.
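The distinction is easy to see side by side; a short sketch:

```shell
# NF as a pattern tests the field count of each line
printf 'one two three\ntwo fields\n' | awk 'NF > 2'    # prints "one two three"
# $NF is the value of the last field
echo 'a b c d' | awk '{ print $NF }'                   # prints "d"
```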

Up to now, we've been talking about one-line awk commands. These all performed a single action on each line. However, awk has the ability to do multiple tasks on each line, as well as tasks before it begins reading and after it has finished.

We use the BEGIN and END pair as markers. These are treated like any other pattern: anything appearing after the BEGIN pattern is done before the first line is read, and anything after the END pattern is done after the last line is read. Let's look at this script:

BEGIN { FS=";"}

{printf"%s\n", $1}

{printf"%s\n", $2}

{printf"%s, %s\n",$3,$4}

{printf"%s\n", $5}

END {print "Total Names:" NR}

Following the BEGIN pattern we define what the field separator is; this is therefore done before the first line is read. Each line is then matched by the four middle pattern-action pairs, each printing a different set of fields. When we are done, we have output that looks like this:

Blinn, David

42 Clarke Street

Sunnyvale, California

95123

Dickson, Tillman

8250 Darryl Lane

San Jose, California

95032

Gibberson, Suzanne

102 Truck Stop Road

Ben Lomond, California

96221

Holder, Wyliam

1932 Nuldev Street

Mount Hermon, California

95431

Nathanson, Robert

12 Peabody Lane

Beaverton, Oregon

97532

Richards, John

1232 Bromide Drive

Boston, Massachusetts

02134

Shaffer, Shannon

98 Whatever Way

Watsonville, California

95332

Total Names:7

Aside from having a pre-defined set of variables, awk allows us to define variables ourselves. If in the last awk script we had wanted to print out, let's say, the average age, we could add a line in the middle that looks like this:

{total = total + $6 }

Since $6 is the age of each person, every time through the loop it is added to the variable total. Unlike other languages, such as C, we don't have to initialize the variables; awk does that for us. Strings are initialized to the null string and numeric variables are initialized to 0.

After the END, we can include another line to print out our result, like this:

{print "Average age: " total/NR}
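Putting those pieces together, a complete version of the script might look like this. The data file here is hypothetical, with the age in the sixth ;-separated field as described above:

```shell
# Hypothetical data file: the age is the sixth ;-separated field
cat > ages.list <<'EOF'
Blinn, David;42 Clarke Street;Sunnyvale;California;;40
Dickson, Tillman;8250 Darryl Lane;San Jose;California;;30
EOF

awk 'BEGIN { FS=";" }
{ total = total + $6 }              # accumulate the age from each record
END {
    print "Total Names:" NR
    print "Average age: " total/NR  # NR is the number of records read
}' ages.list
```

With the two sample records (ages 40 and 30), this prints Total Names:2 followed by Average age: 35.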




<	less than

<=	less than or equal to

==	equal to

!=	not equal to

>=	greater than or equal to

>	greater than

Table 0.4 Awk Comparison Operators
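As a quick illustration of a comparison operator (the file name and field layout are the same hypothetical ones as before), we could print only the people over 35 by comparing against the age field:

```shell
# Hypothetical ;-separated data with the age in the sixth field
cat > over.list <<'EOF'
Blinn, David;42 Clarke Street;Sunnyvale;California;;40
Dickson, Tillman;8250 Darryl Lane;San Jose;California;;30
EOF

# Print the name ($1) of every record whose age ($6) is greater than 35
awk -F";" '$6 > 35 { print $1 }' over.list
```

Only "Blinn, David" satisfies the pattern, so only that name is printed.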





Variable	Meaning	Default

ARGC	number of command-line arguments

ARGV	array of command-line arguments

FILENAME	name of current input file

FNR	record number in current file

FS	input field separator	space or tab

NF	number of fields in the current record

NR	number of records read

OFMT	numeric output format	%.6g

OFS	output field separator	space

ORS	output record separator	newline

RS	input record separator	newline

Table 0.5 Default Values of awk Built-in Variables
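A few of these built-in variables can be seen at work in one short command. The file name here is hypothetical; the point is that NR and NF change with each record, and that OFS controls what print puts between its arguments:

```shell
# A hypothetical two-line file with a different number of fields per line
printf 'one two three\nfour five\n' > fields.txt

# NR is the record number, NF the field count for that record;
# setting OFS=":" makes print join its arguments with colons
awk '{ OFS=":"; print NR, NF, $1 }' fields.txt
```

This prints 1:3:one for the first record and 2:2:four for the second.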

Is that all there is to it? No. In fact, we haven't even scratched the surface. awk is a very complete programming language and there are dozens more topics that we could address. Built into the language are mathematical functions, if and while constructs, the ability to create your own functions, string and array manipulation, and much more.
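Just to hint at what those features look like, here is a small sketch combining a user-defined function with a while loop. This assumes a newer awk (nawk) that supports user-defined functions:

```shell
# A user-defined function and a while loop, run entirely in a BEGIN block
# (no input file is needed when only BEGIN actions are present)
awk 'function square(x) { return x * x }
BEGIN {
    i = 1
    while (i <= 3) {
        print i, square(i)
        i++
    }
}'
```

This prints each number from 1 to 3 alongside its square.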

Unfortunately, this is not a book on UNIX programming languages. Some readers may be disappointed that I do not have the space to cover awk in more detail; so am I. However, I have given you a basic introduction to the constructs of the language, to enable you to better understand the more than 100 scripts on your system that use awk in some way.

Putting Things Together

Since I wasn't trying to make you a shell or awk programming expert, there are obviously things that we didn't have a chance to cover. However, hopefully I have given you the basic tools to create your own tools and to configure at least your shell environment the way you need or want it.

Like any tool or system, the way to get better is to practice. Therefore, my advice is that you play with the shell and the programs on the system to get a better feeling for how they behave. By creating your own scripts, you will become more familiar with both vi and shell script syntax, which will help you create your own tools and understand the behavior of the system scripts. As you learn more, you can add awk and sed components to your scripts to make some very powerful commands and utilities.

Next: Users and User Accounts

