I have sometimes seen people use a pipeline that includes "sort | uniq". The result is no different from just adding the -u flag to "sort", and it requires strictly more time and processing power - not that it usually matters: unless the input is humongously long, you'd need to run both through "time" to spot any difference. So why use "uniq"?
For cases like that, where there is no difference in the output, it's probably just habit - you may be accustomed to using "uniq" for other jobs and reach for it automatically. I'll argue that it's a good habit to have: if you are in the habit of using "sort -u", you may tend to forget about "uniq", and that could cause you to do something much more difficult and clumsy when a job needs something that "uniq" does well.
However, it's also true that "sort" has tricks that "uniq" lacks, so if you only know about "uniq", you again could make your life more difficult.
One of the helpful abilities that "sort" has is the ability to specify the field separator. Let's take a sample file:
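The sample file itself isn't shown here, so this is a hypothetical reconstruction - made up, but chosen to be consistent with the "uniq -f1" output shown further down (the author's actual file may have differed):

```shell
# Hypothetical sample data: colon-separated fields, one duplicate line
# (reconstructed for illustration; not the author's original file)
cat > file <<'EOF'
a:b:c:d
A:B:C:D
a:b:x:d
f:a:x:d
foo:b:x:d
t:b:c:d
a:b:c:d
EOF
```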
If all we cared about was removing duplicate lines, we could use "sort -u file" or "sort file | uniq". But what if we want to sort by the second field?
We can do that directly with "sort -t: -k 2 -u", but it's much harder with "uniq" because you can't tell it what the separator character is. You can partially work around that with "tr" or "sed", translating the colons to spaces or tabs, but that's clumsy. Even after translating, "uniq" only lets you skip fields, so you don't get quite the same output:
$ sort -t: -k 2 -u file
$ tr ":" " " < file | sort | uniq -f1
A B C D
a b c d
a b x d
f a x d
foo b x d
t b c d
We could argue about which output truly represents unique lines when sorted on field 2, but the point to understand is that skipping fields isn't the same as what "sort" does.
You can also lock down fields with "sort":
$ sort -t: -u -k2,2 file
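The ",2" matters when combined with -u: plain "-k2" makes the sort key run from field 2 all the way to the end of the line, while "-k2,2" restricts the key to field 2 alone. A quick illustration with made-up two-line input:

```shell
# -k2: the key is "b:2" vs "b:1" (field 2 through end of line),
# so the keys differ and -u keeps both lines
printf 'x:b:2\ny:b:1\n' | sort -t: -u -k2

# -k2,2: the key is just "b" both times, so -u keeps only one line
printf 'x:b:2\ny:b:1\n' | sort -t: -u -k2,2
```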
Because "uniq" can only skip leading fields and can't restrict the comparison to a single field, it's much harder to get these results with it. However, "uniq" has tricks that "sort" lacks: it can skip a specific number of characters in addition to skipping fields, and it can show you only the unique lines or only the lines that were repeated:
$ sort file | uniq -u # only the unique, non-repeated lines
$ sort file | uniq -d # repeated lines
Either of those is extremely convoluted without "uniq", and the need for one or the other does come up surprisingly often.
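The character-skipping trick mentioned above is "uniq -s N", which ignores the first N characters of each line when comparing. A small made-up example, ignoring a 3-character prefix:

```shell
# uniq -s 3 compares only what follows the first 3 characters:
# "AA:x" and "BB:x" both reduce to "x", so the second is dropped;
# "CC:y" reduces to "y" and survives
printf 'AA:x\nBB:x\nCC:y\n' | uniq -s 3
```

As with plain "uniq", the duplicates still have to be adjacent for -s to catch them, so you'd usually sort first.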
Somebody thought that we could use "sort" and "uniq" in one program: Sortu is the result.
The sortu program is a replacement for the sort and uniq programs. It is common for Unix script writers to want to count how many separate patterns are in a file. For example, if you have a list of addresses, you may want to see how many are from each state. So you cut out the state part, sort these, and then pass them through uniq -c. Sortu does all this for you in a fraction of the time.
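The pipeline that description refers to might look like this - the address data here is invented for illustration, with the state abbreviation in the third colon-separated field:

```shell
# Hypothetical address list; field 3 is the state abbreviation
printf 'Smith:Portland:OR\nJones:Austin:TX\nLee:Salem:OR\nPark:Houston:TX\nKim:Dallas:TX\n' > addresses

# The classic cut | sort | uniq -c pipeline that sortu collapses
# into one program: count addresses per state, busiest state first
cut -d: -f3 addresses | sort | uniq -c | sort -rn
```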
I think by the time I figured out how to use "sortu" I could have already done the job another way, but you might find it interesting anyway.
I think the important thing is to realize that "sort" and "uniq" have both overlapping and complementary abilities. Don't tie yourself in pipeline knots with either of them; learn to use each appropriately and your scripts will be simpler.
Got something to add? Send me email.
© 2009-11-17 Anthony Lawrence