Computers are never fast enough. No matter how much money you spend, there's always some job that would just make everyone so very happy if it were only a little faster. Or sometimes the computer really isn't fast enough: it used to be, but lately it seems sluggish. You know what needs to be done: you need to tune your computer.
Unfortunately, tuning a computer is not like tuning a car (or maybe it is, if you are thinking of high performance racing tuning). It's complex: what looks like a CPU problem can really be a disk problem. What looks like a disk problem might be caused by the network. You aren't going to figure out what the real problem is by scratching your head; you need to collect data so that you can identify bottlenecks and find out exactly where the problems really are.
The tools you need to collect that data are shipped with the system: all you have to do is run /usr/lib/sa/sar_enable -y (if you haven't done it already) to start sar collecting data. Sar will quietly collect performance information at regular intervals, and will store days or even a full month's worth of data in the /var/adm/sa directory for your later review.
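Under the hood, sar_enable just arranges for cron to run the collection scripts. The typical SysV-style entries in the sys user's crontab look something like the fragment below (the exact times and flags vary by release, so treat this as an illustration, not your system's actual schedule):

```
# Typical sys crontab entries for sar data collection (approximate):
# take a snapshot every hour, every day
0 * * * 0-6 /usr/lib/sa/sa1
# extra snapshots every 20 minutes during business hours on weekdays
20,40 8-17 * * 1-5 /usr/lib/sa/sa1
# generate the daily ASCII report in the evening
5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A
```

If you want finer-grained data during a troubleshooting period, you can tighten these schedules; just remember to put them back afterward.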
Don't turn sar on when you have a problem; turn it on when your system is running normally. The baseline data you collect is valuable: it shows you what your system should look like.
Sar doesn't make your system slow, as some people think. In the first place, it only runs every twenty minutes (or less often) by default. Secondly, when it does run, it's over and done with quickly. Try:
timex /usr/lib/sa/sa1 to verify that for yourself.
Nor do you have to be concerned about using up all your disk space. Sar will use about 1000 blocks for a whole week's worth of data, it normally only stores two weeks' worth, and it won't keep more than a month. It is entirely self-limiting.
But reviewing the data is your problem. Sar collects it all for you, and will happily report anything it knows (see man sar), but you have to determine what those reports mean. Sar just delivers the news; it doesn't interpret it.
By default, on current versions, the script /usr/lib/sa/sa2 removes files older than 7 days from /var/adm/sa. You can modify that script by commenting out the "find" command at the end of it. This will cause a full month's worth of sar data to be kept. Because sar files are named with the day of the month only, new data will overwrite old each new month, and the directory will not continue to grow.
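For reference, the cleanup line at the end of /usr/lib/sa/sa2 looks something like this on typical SysV systems (the exact filename patterns and syntax vary by release, so check your own copy before editing):

```
# Tail of /usr/lib/sa/sa2 (approximate). This is the line to comment
# out if you want a full month of data retained:
# find /var/adm/sa \( -name 'sar*' -o -name 'sa*' \) -mtime +7 -exec rm {} \;
```

With that line disabled, the day-of-month file naming does the rotation for you automatically.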
To analyze today's activity, just type "sar" followed by any flags desired. For example, "sar -r" reports memory usage. To analyze a different day, look in /var/adm/sa for files named sa01, sa02, etc. Those represent data from the day of the month indicated by the numeric part of the name. Type "sar -r -f sa01" to analyze data from the 1st, for example.
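Because the files are named strictly by day of month, it's easy to build the path for any given day in a script. A minimal sketch (sar_file is a hypothetical helper name, not part of sar itself):

```shell
# Hypothetical helper: build the path to a given day's sar data file.
# sar files live in /var/adm/sa and are named sa01..sa31 (day of month).
sar_file() {
    # zero-pad the day to two digits, matching how sa1 names its files
    printf '/var/adm/sa/sa%02d\n' "$1"
}

sar_file 1     # -> /var/adm/sa/sa01
sar_file 15    # -> /var/adm/sa/sa15
```

You could then run something like sar -r -f "$(sar_file 1)" to pull the memory report for the 1st without typing the path by hand.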
The "sar01", "sar02" files are complete reports run with all of sar's options turned on. They are ASCII text; you can view them or print them directly.
Interpreting that data is a complex subject. If you don't already have a good grounding in it, I'd suggest reading Brian Wong's Configuration and Capacity Planning for Solaris Servers (it is Sun specific, but the concepts apply to any Unix system). You could devote a lot of time to really becoming expert at tuning.
If you want to do that, I do recommend Brian Wong's book referenced above, and also don't overlook the System Performance Guide in your on-line documentation. There is also a book on SCO Performance Tuning but it is only useful on older versions.
Or, you could hire someone like Brian Wong or Adrian Cockcroft (another performance guru). That would set you back a fair hunk of cash, and you'd probably be distressed to learn that for maximum effectiveness, they'd need to come back and repeat the process regularly as your systems changed.
Or... you could try sarcheck. What sarcheck does is analyze sar (and ps) data and try to make intelligent recommendations based on what it sees.
Now I'm not saying that running sarcheck is like having Wong and Cockcroft hovering over your shoulder. This is just a computer program, and it's not terribly hard to fool it into making bad recommendations. But let's be honest here: it's not all that hard to fool me, either. Maybe sometimes I might see something that sarcheck wouldn't, but the converse is just as true. So while running sarcheck might not be as wonderful as having professional tuning gurus at your beck and call, it might just be as good or better than turning your average consultant loose on the problem.
I like sarcheck's philosophy: it uses the built-in tools (sar and ps) to collect data. That keeps it uninvolved with your kernel, which means it can't possibly screw anything up in that regard. Unlike some other products, sarcheck doesn't make any changes. It recommends changes, and it explains why the recommendations are made. It's also very nice that the driver for the analyzer is a well-documented shell script (/usr/local/bin/sarcheck).
Sarcheck's analysis is pretty complete. It spots memory leaks and run-away processes. It analyzes disk access, cpu bottlenecks and the usual buffers and tables. In addition to making recommendations for improving performance, it also estimates how much load you can add given your current resources. That's handy for growing companies.
Got something to add? Send me email.