APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

Some UNIX tools I love

by Girish Venkatachalam

Girish Venkatachalam is a UNIX hacker with more than a decade of networking and crypto programming experience. His hobbies include yoga, cycling, and cooking, and he runs his own business.



UNIX is known for its ever-expanding tool chest. There is no end to the repertoire of tools at the disposal of UNIX programmers and sysadmins.

Without the clean design and power of the UNIX OS, the ecosystem and atmosphere for programs that do great things could not have existed.

Wherever the computing world has seen greatness, most of the time we find some UNIX tool behind it. Take, for instance, the first Internet worm, written by Robert Tappan Morris.

The finger daemon was one of the holes it exploited.

That is a bad example, perhaps, but wherever power is manifested we find UNIX behind it in some form or another.

This article is a freewheeling attempt to outline some of the nifty tools that have grabbed my attention during my long sojourn with UNIX.

I can be described as a UNIX power user: I love getting more and more efficient, and I tend to reuse what other people have done instead of doing everything myself.

I normally look around quite a bit before I dive into doing something on my own.

99 times out of 100 I find that someone else has already done what I wanted to do, and most of the time their work is quite adequate.

Anyway, let us now get to the first tool that interested and benefited me.

1) dump and restore

It is not one tool but two that complement each other.

I am talking about the complete OS imaging/ghosting toolset found in the BSD world. A close cousin is the dd tool, which I am sure most of you have heard of.

After using these tools for a long time, I know reasonably well which tool to use for what, and which switches make each tool perform better.

A simple example is the bs parameter given to dd. If you give a small block size (bs), your dd will become painfully slow!

Such revelations come only by experience.
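As a rough illustration (the file names and sizes here are just examples, not from any real system), copying the same 10 MB with two different block sizes shows why bs matters:

```shell
# Same 10 MB of zeroes, two block sizes.  With bs=512, dd makes
# 20480 separate read/write calls; with bs=1048576 (1 MB) it makes
# only 10, which is dramatically faster on a real disk.
dd if=/dev/zero of=/tmp/small-bs.img bs=512 count=20480
dd if=/dev/zero of=/tmp/large-bs.img bs=1048576 count=10
```

Run each under time(1) against a real disk and the difference is hard to miss.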

Coming back to the dump/restore toolset: dump writes out a file system in its entirety, and restore recreates it from that dump. Both are usable only by root.

So you dump a filesystem like this:

# dump af foo.bin /dev/wd0a

This copies the disk partition /dev/wd0a into the file "foo.bin".

For restoring the contents on a different machine, you would enter this:

# restore rf foo.bin

But before this step, you need to create a partition, make a file system on it with newfs, mount it, and run restore from inside the mounted directory.

If you instead run restore in some other directory, all the files will show up under that directory, which may not be what you want.

You can also avoid the intermediate file like this:

# newfs sd0a
# mount /dev/sd0a /target
# cd /target

# dump af - /dev/wd0a | restore rf -

This dumps the hard disk (wd0a) straight onto the USB stick (sd0a), with no intermediate file.

This is the beauty of UNIX. It allows you to do great things with minimal difficulty.

Now let us get to dd a bit.

dd can do what dump can do, but it is dumber in one way and smarter in another.

Dumber, because it understands nothing about file systems: it copies every block on the disk regardless of whether a block contains useful data or not.

Smarter, because it is quicker, sometimes much quicker.

Smarter is not quite the right term, but we will make an exception here.

Let us now get on to the next tool in my favorites.


2) sipcalc

Here is a sample output of this command:

$ sipcalc 123.201.137.176/28

-[ipv4 : 123.201.137.176/28] - 0

Host address            - 123.201.137.176
Host address (decimal)  - 2076805552
Host address (hex)      - 7BC989B0
Network address         - 123.201.137.176
Network mask            - 255.255.255.240
Network mask (bits)     - 28
Network mask (hex)      - FFFFFFF0
Broadcast address       - 123.201.137.191
Cisco wildcard          - 0.0.0.15
Addresses in network    - 16
Network range           - 123.201.137.176 - 123.201.137.191
Usable range            - 123.201.137.177 - 123.201.137.190

There you go!

You have the entire IP range now. Given an address and its netmask, you immediately know the network, the broadcast, and the usable range.
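Under the hood this is just bit arithmetic. Here is a sketch of the same computation in plain shell, using the host address from the sample output above (decimal 2076805552, i.e. 123.201.137.176):

```shell
# Derive the network and broadcast addresses of a /28 by hand.
ip=2076805552        # 123.201.137.176 as a 32-bit integer
prefix=28
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))   # 255.255.255.240
net=$(( ip & mask ))                                     # network address
bcast=$(( net | (~mask & 0xFFFFFFFF) ))                  # broadcast address
printf 'network   %d.%d.%d.%d\n' $((net>>24&255)) $((net>>16&255)) $((net>>8&255)) $((net&255))
printf 'broadcast %d.%d.%d.%d\n' $((bcast>>24&255)) $((bcast>>16&255)) $((bcast>>8&255)) $((bcast&255))
# prints:
# network   123.201.137.176
# broadcast 123.201.137.191
```

Doable, but fiddly and error-prone; that is exactly the work sipcalc saves you.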

There are other tools like aggregate and ipcalc but sipcalc is the best.

Ever since I found this, my brain has been freed up for more useful work.

We will wrap up this article with one more tool.

3) p7zip

All of us need to compress or decompress data at some point.

We can't be sending raw data around, wastefully gobbling up bandwidth.

p7zip, the UNIX port of 7-Zip, offers some of the most efficient compression around today (its default algorithm is LZMA). It is available for both Windows and UNIX.

It is slow at compressing (it eats up a lot of memory and CPU), but it decompresses really fast, and I have achieved as much as a tenfold improvement in compression. Which is to say that I could compress a 1 GB file to 100 MB.

It is normally used like this:

$ 7za a foo.bin.7z foo.bin

to compress foo.bin.

And to decompress:

$ 7za e foo.bin.7z

This recreates the file foo.bin.

Like zip, it can hold multiple files in one archive, but as an archiver it is not as complete as tar.

It also allows you to read from stdin and write to stdout with the -si and -so switches respectively.

Have fun with UNIX as always.

Got something to add? Send me email.


More Articles by © Girish Venkatachalam
