The saying, “you can’t
manage what you don’t measure,” is especially true of
system and workgroup performance. Here are some ways to gauge your
workgroup’s performance against the “Guidelines” earlier in this section.
Checking Disk Load with sar and iostat
To see how disk
activity is distributed across your disks, run sar -d with a time interval and frequency, for example:
sar -d 5 10
This runs sar -d ten times
with a five-second sampling interval. The %busy column shows the percentage of time the disk (device) was busy during the sampling interval.
Compare the numbers for each of the disks the
shared file systems occupy (note the Average at the end of the report).
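The comparison can be scripted. The sketch below scans sar -d output for the Average lines and flags any disk busier than 50%; the sample text is fabricated for illustration, and the assumed field positions (device in column 2, %busy in column 3) should be checked against your release's actual sar -d layout.

```shell
# Flag disks whose average %busy exceeds a threshold in sar -d output.
# NOTE: the sample below is fabricated for illustration; the layout
# assumed here is "Average <device> <%busy> <avque> <r+w/s> <blks/s> ...".
sar_sample='
Average   c0t6d0     58.2    1.2    45   310    5.0   10.2
Average   c1t2d0      8.4    0.5     6    40    4.1    6.3
'
echo "$sar_sample" | awk '$1 == "Average" && $3 > 50 {
    printf "%s is %.1f%% busy: consider moving a shared file system\n", $2, $3
}'
```

On a live server, pipe the real report in instead: `sar -d 5 10 | awk ...`.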
Another way to sample disk activity is to run iostat with a time interval, for example:
iostat 5
This reports activity every five seconds.
Look at the bps and sps columns for the disks (device) that hold shared file systems. bps shows the number of kilobytes transferred per second during the
period; sps shows the number of seeks
per second (ignore msps).
If some disks with shared file systems are consistently
much busier than others, you should consider redistributing the load.
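If you capture several iostat samples, averaging bps and sps per disk makes the busiest disks easy to spot. A minimal sketch, assuming the device/bps/sps/msps column order described above; the sample data is made up:

```shell
# Average bps (KB/s) and sps (seeks/s) per disk across iostat samples.
# Sample lines are fabricated; real input would come from e.g. "iostat 5".
iostat_sample='
c0t6d0   120  35.0   1.0
c1t2d0    15   4.0   1.0
c0t6d0   140  41.0   1.0
c1t2d0    25   6.0   1.0
'
echo "$iostat_sample" | awk 'NF == 4 {
    bps[$1] += $2; sps[$1] += $3; n[$1]++
}
END {
    for (d in n)
        printf "%s: avg %.0f KB/s, avg %.1f seeks/s\n", d, bps[d]/n[d], sps[d]/n[d]
}'
```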
See HP-UX System Administrator’s Guide: Logical Volume Management.
NOTE: On disks managed by the Logical Volume Manager (LVM), it can be hard to keep track of which file systems reside on which disks. It’s a good idea to create hardcopy diagrams of your servers’ disks; see HP-UX System Administrator’s Guide: Logical Volume Management.
Checking NFS Server/Client Block Size
In the case of an HFS file system, the
client’s NFS read/write block size should match the block size
for that file system on the server.
On the NFS client, use HP SMH to check the read/write block size: go to Tools, Disks and File Systems, File Systems, and select each imported file system in turn to view its read and write buffer sizes. Refer to the Detailed View at the bottom of the page under Mount Options.
Read Buffer Size and Write Buffer Size should match
the file system’s block size on the server.
If they do not, you can use HP SMH to change them.
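The same check can be made without SMH: nfsstat -m on the client reports the mount options currently in effect, and /etc/fstab records what will be used at the next mount. The sketch below pulls rsize/wsize options out of an fstab-style entry; the server name, path, and sizes are hypothetical examples.

```shell
# Extract rsize/wsize mount options from an fstab-style NFS entry.
# The entry below is a made-up example; on a live client, read /etc/fstab
# or compare against "nfsstat -m" output.
fstab_sample='server1:/export/home /home nfs rsize=8192,wsize=8192,defaults 0 0'
echo "$fstab_sample" | awk '$3 == "nfs" {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
        if (opts[i] ~ /^(rsize|wsize)=/) print $1, opts[i]
}'
```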
Modify NFS Server/Client Block Size
1. Log in to the HP SMH Homepage as root.
2. Select Tools, Disks and File Systems, File Systems.
3. Unmount the file system by clicking on the Unmount/Remove... action on the right side of the page.
4. Check the Unmount box and click on the Unmount/Remove button at the bottom of the page. The file system will be unmounted.
5. Click on the Done button to return to the File Systems page. The file system should still be selected. Click on the Modify NFS... action on the right side of the page. This will display the Modify NFS File System page.
6. Enter the desired Read and Write buffer sizes, select Mount now and save configuration in /etc/fstab, and click on the Modify NFS button.
7. Click on the Done button. You will be returned to the File Systems page. The selected file system will be remounted with the new buffer sizes.
Checking for Asynchronous Writes
Asynchronous writes tell the NFS server to send the client an immediate acknowledgment
of a write request, before writing the data to disk. This improves
NFS throughput, allowing the client to post a second write request
while the server is still writing out the first.
This involves some risk
to data integrity, but in most cases the performance improvement is
worth the risk.
You can use HP SMH to see whether asynchronous
writes are enabled on a server’s shared file systems.
1. Log in to the HP SMH Homepage as root.
2. Select Tools → Network Services Configuration → Networked File Systems → Share/Unshare File Systems (Export FS). The Share page will be displayed.
3. Select the desired file system; a table of shared file properties will be displayed. Check to see that Asynchronous Writes are allowed.
If needed you can change the setting of the Asynchronous Writes flag, while the file system
is still mounted and shared.
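From the command line, the same setting can be seen in the server's export configuration. On releases that list exports in /etc/exports, asynchronous writes appear as the async option; the entry below is a hypothetical example (the path and access list are made up), and releases that share through /etc/dfs/dfstab express options via the share command instead.

```
/export/projects -async,access=wkgroup1
```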
Measuring Memory Usage with vmstat
vmstat produces a wealth of information; use the -n option to make
it more readable on an 80-column display.
The column to watch most closely is po. If it is not zero, the system is paging. If
the system is paging consistently, you probably need more RAM.
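A quick way to watch for paging is to scan the po column over several intervals. In the sketch below the sample lines are fabricated and po is assumed to be the ninth field; check the header line of vmstat on your system, since the layout varies between releases.

```shell
# Warn when any vmstat interval shows page-outs (po > 0).
# Sample data is fabricated; po is assumed to be field 9
# (r b w avm free re at pi po ...). Verify against your vmstat header.
vmstat_sample='
1 0 0 2000 5000 5 0 0 0 0 0 0
2 0 0 2100 1200 7 0 3 12 0 0 9
'
echo "$vmstat_sample" | awk 'NF == 12 && $9 > 0 {
    print "paging: po =", $9, "(consider adding RAM if this persists)"
}'
```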
Checking for Socket Overflows with netstat -s
Although many different processes use sockets, and can
contribute to socket overflows, regular socket overflows on an NFS
server may indicate that you need to run more nfsd processes. The command,
netstat -s | grep overflow
will show you a cumulative number for socket overflows
(since the last boot). If you see this number rising significantly,
and NFS clients are seeing poor response from this server, try starting
more nfsds; see “Increasing the Number of nfsd Daemons”.
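Because netstat -s reports a cumulative count, the useful signal is how fast the number grows. The sketch below diffs two readings; the canned strings stand in for `netstat -s | grep overflow` output taken some minutes apart on a live server, and the counts are made up.

```shell
# Report how many socket overflows occurred between two readings.
# The two strings below are fabricated stand-ins for
# "netstat -s | grep overflow" run at two different times.
before='145 socket overflows'
after='162 socket overflows'
n1=$(echo "$before" | awk '{print $1}')
n2=$(echo "$after" | awk '{print $1}')
echo "socket overflows in interval: $((n2 - n1))"
```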
Checking for Network Overload with netstat -i
If you have followed all
the “Guidelines” and
are still seeing poor response time, the problem may be with the network
itself: either with a particular piece of hardware or with the configuration
of the network.
To see cumulative statistics on a server, run
netstat -i
If your system has been running for a long time,
the numbers will be large and may not reliably reflect the present
state of things. You can run netstat iteratively, for example:
netstat -I lan0 -i 5
In this case (after the first line), netstat reports activity every five seconds.
Input and output errors should be very low in
relation to input and output packets (much less than 1%). A higher
rate of output errors on only one server may indicate a hardware problem
affecting the server’s connection to the network.
Collisions (colls) should be less than 5%; a higher rate indicates heavy network use
which your users are probably experiencing as poor performance. Network
traffic and configuration may be beyond your control, but you can
at least raise a flag with your network administrator.
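The 1% and 5% rules of thumb above are easy to compute from the counters. A sketch assuming the netstat -i column order Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll; the interface counters below are made up for illustration:

```shell
# Compute input-error, output-error, and collision rates from a
# netstat -i style line. Counters are fabricated; the column order
# assumed is Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll.
ni_sample='lan0 1500 10.0.1.0 10.0.1.5 1800000 20 1650000 15 120000'
echo "$ni_sample" | awk '{
    printf "%s: in-err %.3f%%  out-err %.3f%%  coll %.1f%%\n",
           $1, 100*$6/$5, 100*$8/$7, 100*$9/$7
}'
```

In this fabricated sample the collision rate works out to about 7%, over the 5% threshold, which is the kind of result worth raising with your network administrator.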