14.6. NFS server logging

Solaris 8 introduces the new NFS Server Logging utility. This utility enables the system to log file transfer operations between an NFS server and any of its clients. It was created to provide logging facilities to sites that publish their archives via NFS within the intranet, and via WebNFS over the Internet. The NFS Server Logging utility provides the system administrator with the tools to track file downloads and uploads, as well as directory modification operations, on NFS-exported filesystems. Be careful not to confuse this functionality with UFS logging.[36]

[36] UFS logging is the process of storing transactions (changes that make up a complete UFS operation) in a log before the transactions are applied to the filesystem. Once a transaction is stored, it can later be applied to the filesystem. This prevents filesystems from becoming inconsistent, eliminating the need to run fsck.

The NFS Server Logging utility is not intended to serve as a debugging tool that can be turned on to peek at filesystem traffic during a short period of time and then be turned back off. NFS Server Logging is most useful when it is enabled before the filesystem is shared for the first time, and remains enabled the entire time the filesystem is exported. It needs to run continuously in order to monitor all NFS filesystem activity on the server; otherwise, important path mapping information may not be captured. This is discussed in more detail in Section 14.6.5, "Filehandle to path mapping".

This utility provides functionality different from that provided by the public domain tools previously discussed. Those tools generate records of individual RPC transactions, whereas NFS Server Logging generates records of conceptual file operations. Network sniffer tools such as Ethereal and snoop report a file copy as a sequence of distinct NFS read operations of a certain length and offset performed by the client. In contrast, the NFS Server Logging utility generates a single record specifying the total transfer size and the duration of the transfer. The NFS Server Logging utility reports accesses at the conceptual level (file uploads or downloads), whereas network sniffers report the details of the RPC and NFS operations. Consequently, the logs generated by the NFS Server Logging utility are orders of magnitude smaller and more manageable than sniffer output.

The NFS server logs can be useful to determine the frequency with which files in the archives are accessed, or to determine which NFS clients have accessed the files. These logs can be used to manually or programmatically track access to objects within the exported filesystem in the same way that FTP logs are used. As previously pointed out, the information recorded in the NFS log is not intended to serve as a debugging tool; the network sniffer tools previously described are a better choice for that.

Consider the case where the server zeus exports a filesystem with NFS logging enabled. The client rome then copies the file /net/zeus/export/foo.tar.Z to its local disk. The NFS Server Logging utility records the access with a single record of the form:

Fri Jul 28 09:27:12 2000 0 rome 136663 /export/foo.tar.Z b _ o r 32721 nfs 0 *

This entry indicates that on Fri Jul 28 2000 at 09:27:12 in the morning, a file was downloaded by the host rome. The file was 136663 bytes in length and was located on the server at /export/foo.tar.Z. The file was downloaded by user ID 32721 using nfs. The meaning of each field is explained in detail later in this section.
In contrast, the snoop utility generates multiple transactions for the same file transfer:

  1   0.00000      rome -> zeus    NFS C LOOKUP3 FH=0222 foo.tar.Z
  2   0.00176      zeus -> rome    NFS R LOOKUP3 OK FH=EEAB
  3   0.00026      rome -> zeus    NFS C ACCESS3 FH=0222 (lookup)
  4   0.00125      zeus -> rome    NFS R ACCESS3 OK (lookup)
  5   0.00018      rome -> zeus    NFS_ACL C GETACL3 FH=EEAB mask=10
  6   0.00139      zeus -> rome    NFS_ACL R GETACL3 OK
  7   0.00026      rome -> zeus    NFS C ACCESS3 FH=EEAB (read)
  8   0.00119      zeus -> rome    NFS R ACCESS3 OK (read)
  9   0.00091      rome -> zeus    NFS C READ3 FH=EEAB at 0 for 32768
 10   0.00020      rome -> zeus    NFS C READ3 FH=EEAB at 32768 for 32768
 11   0.00399      zeus -> rome    UDP IP fragment ID=56047 Offset=0 MF=1
 12   0.02736      zeus -> rome    UDP IP fragment ID=56048 Offset=0 MF=1
 13   0.00009      rome -> zeus    NFS C READ3 FH=EEAB at 65536 for 32768
 14   0.00020      rome -> zeus    NFS C READ3 FH=EEAB at 98304 for 32768
 15   0.00017      rome -> zeus    NFS C READ3 FH=EEAB at 131072 for 8192
 16   0.03482      zeus -> rome    UDP IP fragment ID=56049 Offset=0 MF=1
 17   0.02740      zeus -> rome    UDP IP fragment ID=56050 Offset=0 MF=1
 18   0.02739      zeus -> rome    UDP IP fragment ID=56051 Offset=0 MF=1

A single user-level copy command translates into multiple NFS operations. The NFS client must first find the object via the LOOKUP3 operation, determine access rights to the object via the ACCESS3 and GETACL3 operations, and then finally read the information from the server via multiple READ3 operations.

The NFS Server Logging mechanism was designed to emulate the FTP logging mechanism found in many public domain FTP implementations. The log generated is specifically compatible with the log format generated by the popular Washington University FTP daemon (WU-ftpd). The WU-ftpd log format was chosen because of the popularity of this particular FTP service, as well as the availability of a number of public domain and home-grown utilities that already consume WU-ftpd logs. Each NFS log record contains the following space-separated fields: the timestamp of the transfer, the transfer time, the name of the remote host, the size of the transfer in bytes, the pathname of the file, the transfer type, special action flags, the direction or type of the operation, the access mode, the user ID of the requester, the service name, the authentication method, and the authenticated user ID.
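Because the records follow the WU-ftpd layout, ordinary text tools can process them. As a minimal sketch, the following awk one-liner totals the bytes downloaded per client host from a basic-format log at the default location; the field positions follow the record layout described above:

# Total bytes downloaded (direction "o") per client host from a basic-format NFS log.
# Field positions: $7 = remote host, $8 = transfer size, $12 = direction (i upload, o download).
awk '$12 == "o" { bytes[$7] += $8 } END { for (h in bytes) print h, bytes[h] }' /var/nfs/nfslog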
14.6.1. NFS server logging mechanics

There are three main components involved in the logging process. First, the share command is used to enable NFS logging on the filesystem. Second, the kernel stores enough information about each RPC operation in a temporary work buffer file. Third, the nfslogd daemon processes the RPC information stored in the temporary work buffer file, consolidates the operations into file transfer operations, and generates the final NFS log file.

Table 14-2 lists the various files involved in the logging process, the information contained in them, who or what program creates and modifies them, and who consumes their contents. The /var/nfs/nfslog file contains the actual NFS transaction log records. The /etc/nfs/nfslog.conf and /etc/default/nfslogd files specify various logging configuration values. The /var/nfs/fhtable file contains the filehandle-to-path mapping information. The remaining two files are temporary and only needed to help construct the NFS transaction log records. Each file is discussed in more detail throughout this chapter.

Table 14-2. NFS server logging files

  /etc/nfs/nfslog.conf -- logging tags and file locations; maintained by the system administrator, read by the share command and the nfslogd daemon.
  /etc/default/nfslogd -- nfslogd tuning parameters; maintained by the system administrator, read by the nfslogd daemon.
  /var/nfs/nfslog -- the NFS transaction (file transfer) log records; written by the nfslogd daemon, read by the administrator and log-processing tools.
  /var/nfs/fhtable -- the filehandle-to-pathname mapping table; created and used by the nfslogd daemon.
  /var/nfs/nfslog_workbuffer[_in_process] -- temporary per-RPC information; written by the kernel, processed and removed by the nfslogd daemon.
  /etc/nfs/nfslogtab -- locations of work buffer files awaiting processing; updated by the share command, read by the nfslogd daemon.
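Since the share command is the component that turns logging on, you can check which filesystems currently have it enabled by inspecting the active share options. A minimal sketch (the grep is a loose pattern match, so review its output):

# List active shares and pick out those exported with the log option;
# /etc/dfs/sharetab records the options of every filesystem currently shared.
share | grep log
grep log /etc/dfs/sharetab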
14.6.2. Enabling NFS server logging

Before enabling logging on a filesystem, make sure to first define the default directory where the NFS log and working files are to be created. Solaris ships with the default directory set to /var/nfs. Make sure you have enough disk space available in /var/nfs, or set the default directory to a different partition. Instructions on how to change the default directory and how to spread the logs and files across multiple partitions are provided in Section 14.6.3, "NFS server logging configuration".

Once the location of the files has been specified, logging NFS traffic on a filesystem is simple. First, export the filesystem using the -o log directive. Second, start the nfslogd daemon if it is not yet running. The NFS log file will be created a few minutes later in the directory previously specified, after the kernel has gathered enough information to generate the NFS transaction records. Note that setting the -o log directive in the /etc/dfs/dfstab file causes the nfslogd daemon to be started automatically the next time the machine is booted. The daemon automatically detects when other filesystems are shared with logging enabled. The rest of the chapter explains the specifics of how the NFS Server Logging mechanism works, its main components, and its configuration parameters. Enabling logging is straightforward; unfortunately, cleaning up working files after logging has been disabled requires some manual work. We explain this in Section 14.6.9, "Disabling NFS server logging".

To enable NFS Server Logging on a filesystem, the filesystem must first be exported with the -o log[=<tag>] directive:

# share -o log /export

When no tag is specified, the kernel records the temporary RPC information in the default work buffer file /var/nfs/nfslog_workbuffer_in_process. Again, this temporary file does not contain any information useful to the user; instead it is used by the NFS logging mechanism as a temporary buffer. It is the nfslogd daemon that reads this work buffer, processes its information, and generates the NFS log file. By default, the NFS log file is stored in /var/nfs/nfslog.

The nfslogd daemon must be running in order to generate the NFS log file. Note that the daemon is started at boot time only when one or more filesystems in /etc/dfs/dfstab have the -o log directive specified. If you share a filesystem manually with logging enabled and the nfslogd daemon had not previously been started, you must invoke it manually:

# /usr/lib/nfs/nfslogd

To ensure that the nfslogd daemon is started after a reboot, make sure to specify the -o log directive in /etc/dfs/dfstab.
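A dfstab entry that accomplishes this could look like the following sketch; the exported path /export matches the example above, and the -F nfs flag is the usual filesystem-type specifier:

# Illustrative /etc/dfs/dfstab entry: share /export with NFS logging enabled
# so that nfslogd is started automatically at the next boot.
share -F nfs -o log /export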
14.6.3. NFS server logging configuration

By default, the NFS log file, the temporary work buffer files, and the filehandle mapping tables are created in the /var/nfs directory. These defaults can be overridden by sharing the filesystem with specific logging parameters associated with a logging tag. Logging tags are defined in the /etc/nfs/nfslog.conf file. Each entry in the file consists of a mandatory tag identifier and one or more logging parameters. The following is a sample /etc/nfs/nfslog.conf configuration file:

# NFS server log configuration file.
#
# <tag> [ defaultdir=<dir_path> ] \
#       [ log=<logfile_path> ] [ fhtable=<table_path> ] \
#       [ buffer=<bufferfile_path> ] [ logformat=basic|extended ]

global      defaultdir=/var/nfs \
            log=logs/nfslog \
            fhtable=workfiles/fhtable buffer=workfiles/nfslog_workbuffer
eng         log=/export/eng/logs/nfslog
corp        defaultdir=/export/corp/logging
extended    logformat=extended log=extended_logs/nfslog

The global tag specifies the default set of values to be used when no tag is specified in the share command. Note that the eng, corp, and extended tags do not specify all possible parameters; the global values are used unless they are specifically replaced in the tag. Take for example:

# share -o log=eng /export/eng

where the NFS log file will be named nfslog and located in the /export/eng/logs directory. The work buffer file and filehandle table (explained later) remain under /var/nfs/workfiles. Any of the global values can be overridden by specific tags. The following describes each parameter in the configuration file:

  defaultdir=<dir_path>       The directory prefix applied to any relative paths given by the other parameters.
  log=<logfile_path>          The location and basename of the NFS log file.
  fhtable=<table_path>        The location and basename of the filehandle-to-pathname mapping table.
  buffer=<bufferfile_path>    The location and basename of the temporary work buffer file written by the kernel.
  logformat=basic|extended    The format of the log records: basic (the default, WU-ftpd compatible) or extended.

Table 14-3 summarizes where the logging files end up when the filesystem is shared with each of the tags defined above.
Table 14-3. Logging files under different tags

  share -o log            NFS log: /var/nfs/logs/nfslog; work buffer and filehandle table under /var/nfs/workfiles
  share -o log=eng        NFS log: /export/eng/logs/nfslog; work buffer and filehandle table under /var/nfs/workfiles
  share -o log=corp       NFS log: /export/corp/logging/logs/nfslog; work buffer and filehandle table under /export/corp/logging/workfiles
  share -o log=extended   NFS log: /var/nfs/extended_logs/nfslog (extended format); work buffer and filehandle table under /var/nfs/workfiles

The temporary work buffers can grow large in a hurry, so it may not be a good idea to keep them in the default directory /var/nfs, especially when /var is fairly small. It is recommended to either spread them out among the filesystems they monitor, or place them in a dedicated partition. This leaves space in your /var partition available for other administration tasks, such as storing core files, printer spool directories, and other system logs.
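As a sketch of the dedicated-partition approach, a tag along these lines could be added to /etc/nfs/nfslog.conf; the tag name biglogs and the mount point /bigdisk are assumed for illustration:

# Illustrative nfslog.conf tag: keep the log, filehandle table, and work buffer
# on a dedicated partition mounted at /bigdisk rather than under /var/nfs.
biglogs     defaultdir=/bigdisk/nfslogging \
            log=logs/nfslog fhtable=fhtable buffer=nfslog_workbuffer

The filesystem would then be shared with share -o log=biglogs.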
14.6.3.1. Basic versus extended log format

Logging using the basic format only reports file uploads and downloads. Logging using the extended format, on the other hand, provides more detailed information about filesystem activity, but may be incompatible with existing tools that process WU-ftpd logs: tools that expect a single-character identifier in the operation field will not understand the multicharacter description used by the extended format. Home-grown scripts can easily be modified to understand the richer format. Logging using the extended format reports directory creation, directory removal, and file removal, as well as file reads (downloads) and file writes (uploads). Each record indicates the NFS version and protocol used during the access.

Let us explore the differences between the two logs by comparing the logged information that results from executing the same sequence of commands against the NFS server zeus. First, the server exports the filesystem using the extended tag previously defined in the /etc/nfs/nfslog.conf file:

zeus# share -o log=extended /export/home

Next, the client executes the following sequence of commands:

rome% cd /net/zeus/export/home
rome% mkdir test
rome% mkfile 64k 64k-file
rome% mv 64k-file test
rome% rm test/64k-file
rome% rmdir test
rome% dd if=128k-file of=/dev/null
256+0 records in
256+0 records out

The resulting extended format log on the server reflects the corresponding NFS operations:

zeus# cat /var/nfs/extended_logs/nfslog
Mon Jul 31 11:00:05 2000 0 rome 0 /export/home/test b _ mkdir r 19069 nfs3-tcp 0 *
Mon Jul 31 11:00:33 2000 0 rome 0 /export/home/64k-file b _ create r 19069 nfs3-tcp 0 *
Mon Jul 31 11:00:33 2000 0 rome 65536 /export/home/64k-file b _ write r 19069 nfs3-tcp 0 *
Mon Jul 31 11:00:49 2000 0 rome 0 /export/home/64k-file->/export/home/test/64k-file b _ rename r 19069 nfs3-tcp 0 *
Mon Jul 31 11:00:59 2000 0 rome 0 /export/home/test/64k-file b _ remove r 19069 nfs3-tcp 0 *
Mon Jul 31 11:01:01 2000 0 rome 0 /export/home/test b _ rmdir r 19069 nfs3-tcp 0 *
Mon Jul 31 11:01:47 2000 0 rome 131072 /export/home/128k-file b _ read r 19069 nfs3-tcp 0 *

Notice that the mkfile operation generated two log entries: a zero-byte file create, followed by a 64 KB write. The rename operation lists the original name followed by an arrow pointing to the new name. File and directory deletions are also logged. The nfs3-tcp field indicates the protocol and version used: NFS Version 3 over TCP.

Now let us compare against the basic log generated by the same sequence of client commands. First, let us reshare the filesystem with the basic log format. It is highly recommended never to mix extended and basic log records in the same file; this makes post-processing of the log file much easier. Our example places extended logs in /var/nfs/extended_logs/nfslog and basic logs in /var/nfs/logs/nfslog:

zeus# share -o log /export/home

Next, the client executes the same sequence of commands listed earlier. The resulting basic format log on the server only shows the file upload (an incoming operation, denoted by i) and the file download (an outgoing operation, denoted by o). The directory creation, directory removal, and file rename are not logged in the basic format. Notice that the NFS version and protocol type are not specified either:

zeus# cat /var/nfs/logs/nfslog
Mon Jul 31 11:35:08 2000 0 rome 65536 /export/home/64k-file b _ i r 19069 nfs 0 *
Mon Jul 31 11:35:25 2000 0 rome 131072 /export/home/128k-file b _ o r 19069 nfs 0 *
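Because the extended format records the operation name as its own field, simple text tools can audit specific kinds of activity. A minimal sketch, assuming the extended log is at the path used in the example above, pulls out all deletions:

# Illustrative: list file removals and directory removals recorded in an
# extended-format NFS log (operation names appear as a whitespace-delimited field).
egrep ' (remove|rmdir) ' /var/nfs/extended_logs/nfslog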
14.6.4. The nfslogd daemon

It is the nfslogd daemon that generates the ultimate NFS log file. The daemon periodically wakes up to process the contents of the work buffer file created by the kernel, performs hostname and pathname mappings, and generates the file transfer log records. Since the filesystem can be reshared with logging disabled, or simply be unshared, the nfslogd daemon cannot rely on the list of exported filesystems to locate the work buffer files. So how exactly does the nfslogd daemon locate them? When a filesystem is exported with logging enabled, the share command adds a record to the /etc/nfs/nfslogtab file indicating the location of the work buffer file, the filesystem shared, the tag used to share the filesystem, and a 1 to indicate that the filesystem is currently exported with logging enabled. This system table keeps track of the location of the work buffer files so they can be processed at a later time, even after the filesystem is unshared or the server is rebooted. The nfslogd daemon uses this system file to find the location of the next work buffer file that needs to be processed. After processing, the daemon removes the /etc/nfs/nfslogtab entry for the work buffer file if the corresponding filesystem is no longer exported; the entry is kept if the filesystem remains exported.

The nfslogd daemon removes the work buffer file itself once it has processed the information, and the kernel creates a new work buffer file when more RPC requests arrive. To be exact, the work buffer file currently accessed by the kernel has the _in_process string appended to its name (the base name is specified by the buffer parameter in /etc/nfs/nfslog.conf). Once the daemon is ready to process the buffer, it asks the kernel to rename it to the name specified in the configuration file. At this point the kernel again creates a new buffer file with the _in_process string appended and starts writing to the new file. This means that the kernel and the nfslogd daemon are always working on their own work buffer file, without stepping on each other's toes.

You will notice that log records do not show up in the log immediately after a client accesses a file or directory on the server. This is because the nfslogd daemon waits for enough RPC information to gather in the work buffer before it processes it. By default it waits five minutes. This time can be shortened or lengthened by tuning the value of IDLE_TIME in /etc/default/nfslogd.
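For example, to make records appear in the log sooner you could lower IDLE_TIME in /etc/default/nfslogd. This is a sketch only; it assumes the value is expressed in seconds, consistent with the five-minute default mentioned above, so verify against nfslogd(1M):

# Illustrative /etc/default/nfslogd entry: have nfslogd check the work buffer
# every two minutes instead of the default five (value assumed to be in seconds).
IDLE_TIME=120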
14.6.4.1. Consolidating file transfer information

The NFS protocol was not designed to be a file transfer protocol; instead it was designed to be a file access protocol. NFS file operations map nicely to Unix filesystem calls and, as such, its file data access and modification mechanisms operate on regions of files. This enables NFS to minimize the amount of data transferred between server and client when only small portions of the file are needed. The NFS protocol allows reads and writes of an arbitrary number of bytes at any given offset, in any given order. NFS clients are not required to read a file on an NFS server in any particular order; they may start in the middle and read an arbitrary number of bytes at any given offset. This random byte access, added to the fact that NFS Versions 2 and 3 do not define an open or close operation, makes it hard to determine when an NFS client is done reading or writing a file. Despite this limitation, the nfslogd daemon does a decent job of identifying file transfers, using various heuristics to determine when to generate the file transfer record.

14.6.5. Filehandle to path mapping

Most NFS operations take a filehandle as an argument, or return a filehandle as a result of the operation. In the NFS protocol, a filehandle serves to identify a file or a directory. Filehandles contain all the information the server needs to distinguish an individual file or directory. To the client, the filehandle is opaque; the client simply stores filehandles for use in later requests. It is the server that generates the filehandle. Consider packets 1, 2, and 9 from the snoop trace presented earlier in this chapter:

  1   0.00000      rome -> zeus    NFS C LOOKUP3 FH=0222 foo.tar.Z
  2   0.00176      zeus -> rome    NFS R LOOKUP3 OK FH=EEAB
  ...
  9   0.00091      rome -> zeus    NFS C READ3 FH=EEAB at 0 for 32768
  ...

The client must first obtain the filehandle for the file foo.tar.Z before it can request to read its contents. This is because the NFS READ procedure takes the filehandle as an argument, not the filename. The client obtains the filehandle by first invoking the LOOKUP procedure, which takes as arguments the name of the file requested and the filehandle of the directory where it is located. Note that the directory filehandle must itself first be obtained by a previous LOOKUP or MOUNT operation.

Unfortunately, NFS server implementations today do not provide a mechanism to obtain a filename given a filehandle. This would require the kernel to be able to obtain a path given a vnode, which is not possible today in Solaris. To overcome this limitation, the nfslogd daemon builds a table mapping filehandles to pathnames by monitoring all NFS operations that generate or modify filehandles. It is from this table that it obtains the pathname for the file transfer log record. This filehandle-to-pathname mapping table is by default stored in the file /var/nfs/fhtable. This can be overridden by specifying a new value for fhtable in /etc/nfs/nfslog.conf.

In order to successfully resolve all filehandles, the filesystem must be shared with logging enabled from the start. The nfslogd daemon will not be able to resolve all mappings when logging is enabled on a previously shared filesystem for which clients have already obtained filehandles. The filehandle mapping information can only be built from the RPC information captured while logging is enabled on the filesystem. This means that if logging is temporarily disabled, a potentially large number of filehandle transactions will not be captured, and the nfslogd daemon will not be able to reconstruct the pathname for every filehandle. If a filehandle cannot be resolved, the filehandle itself is printed in the NFS log transaction record in place of the corresponding (but unknown) pathname.

The filehandle mapping table needs to be backed by permanent storage, since it has to survive server reboots. There is no limit on the amount of time that NFS clients may hold on to filehandles.
A client may obtain a filehandle for a file, read the file today, and read it again five days from now without having to reacquire the filehandle (although this is not encountered often in practice). Filehandles are even valid across server reboots. Ideally, the filehandle mapping table would only go away when the filesystem is destroyed. The problem is that the table can get quite large, since it could potentially contain a mapping for every entry in the filesystem, and not all installations can afford to reserve this much storage space for a utility table. Therefore, in order to preserve disk space, the nfslogd daemon periodically prunes the oldest contents of the mapping table, removing filehandle entries that have not been accessed since the last time the pruning process was performed. This process is automatic; the nfslogd daemon prunes the table every seven days by default. This can be overridden by setting PRUNE_TIMEOUT in /etc/default/nfslogd; the value specifies the number of hours between prunings. Making this value too small increases the risk that a client may hold on to a filehandle longer than the PRUNE_TIMEOUT and perform an NFS operation after the filehandle has been removed from the table. In such a case, the nfslogd daemon will not be able to resolve the pathname, and the NFS log will include the filehandle instead of the pathname. Pruning of the table can effectively be disabled by setting PRUNE_TIMEOUT to INT_MAX. Be aware that this may lead to very large tables, potentially causing problems by exceeding the database maximum values. It is therefore highly discouraged, since in practice the chance of NFS clients holding on to filehandles for more than a few days without using them is extremely small. The nfslogd daemon uses ndbm[37] to manage the filehandle mapping table.

[37] See dbm_clearerr(3C).

14.6.6. NFS log cycling

The nfslogd daemon periodically cycles the logs to prevent an individual file from becoming extremely large. By default, the ten most current NFS log files are located in /var/nfs and named nfslog, nfslog.0, through nfslog.9. The file nfslog is the most recent, followed by nfslog.0, with nfslog.9 being the oldest. The log files are cycled every 24 hours, saving up to 10 days' worth of logs. The number of logs saved can be increased by setting MAX_LOGS_PRESERVE in /etc/default/nfslogd, and the cycle frequency can be modified by setting CYCLE_FREQUENCY in the same file.

14.6.7. Manipulating NFS log files

Sometimes it may be desirable to have the nfslogd daemon close the current file and log to a fresh new file. The daemon holds an open file descriptor to the log file, so renaming it or copying it somewhere else may not achieve the desired effect. Make sure to shut down the daemon before manipulating the log files. To shut down the daemon, send it a SIGHUP signal; this gives the daemon enough time to flush pending transactions to the log file. You can use the Solaris pkill command to send the signal to the daemon. Note that the daemon can take a few seconds to flush the information:

# pkill -HUP -x -u 0 nfslogd

Sending it a SIGTERM signal instead will simply close the buffer files; pending transactions will not be logged to the file and will be discarded.
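A rough procedure for switching to a fresh log file by hand might therefore look like the following sketch; it assumes the default log location /var/nfs/nfslog and reuses the pkill invocation shown above:

# Illustrative manual log rotation: stop nfslogd cleanly, move the log aside,
# then restart the daemon so it begins writing a fresh /var/nfs/nfslog.
pkill -HUP -x -u 0 nfslogd       # flush pending transactions and stop the daemon
sleep 10                         # give it a few seconds to finish flushing
mv /var/nfs/nfslog /var/nfs/nfslog.saved
/usr/lib/nfs/nfslogd             # restart the daemon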
14.6.8. Other configuration parameters

The configuration parameters in the /etc/default/nfslogd file tune the behavior of the nfslogd daemon. The nfslogd daemon reads the configuration parameters when it starts; therefore, any changes to the parameters take effect the next time the daemon is started. The parameters already encountered in this section are:

  IDLE_TIME            How long the daemon waits for RPC information to gather in the work buffer before processing it (five minutes by default).
  CYCLE_FREQUENCY      How often the NFS log files are cycled (every 24 hours by default).
  MAX_LOGS_PRESERVE    How many old NFS log files to preserve when cycling (ten by default).
  PRUNE_TIMEOUT        The number of hours between prunings of the filehandle mapping table (seven days by default).
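As a sketch, an /etc/default/nfslogd tuned for longer retention and less frequent pruning might contain entries like these; the exact value formats are assumptions, so check nfslogd(1M) before relying on them:

# Illustrative /etc/default/nfslogd settings (values assumed; verify against nfslogd(1M)).
CYCLE_FREQUENCY=24       # cycle the NFS log files every 24 hours
MAX_LOGS_PRESERVE=30     # keep 30 old log files instead of the default 10
PRUNE_TIMEOUT=336        # prune the filehandle table every two weeks (hours)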
14.6.9. Disabling NFS server logging

Unfortunately, disabling logging requires some manual cleanup. Unsharing a filesystem, or resharing it without the -o log directive, stops the kernel from storing information into the work buffer file. You must then allow the nfslogd daemon enough time to process the remaining contents of the work buffer file before shutting it down; the daemon notices that it needs to process the work buffer file once it wakes up after its IDLE_TIME has been exceeded. Once the work buffer file has been processed and removed by the nfslogd daemon, the daemon can be shut down manually by sending it a SIGHUP signal. This allows the daemon to flush the pending NFS log information before it is stopped. Sending any other type of signal may cause the daemon to be unable to flush the last few records to the log.

There is no way to distinguish between a graceful server shutdown and the case where logging is being completely disabled. For this reason, the mapping tables are not removed when the filesystem is unshared or the daemon is stopped. The system administrator needs to remove the filehandle mapping tables manually to reclaim the filesystem space, once it is certain that logging is being permanently disabled for this filesystem.[38]

[38] Keep in mind that if logging is later reenabled, there will be some filehandles that the nfslogd daemon will not be able to resolve, since they were obtained by clients while logging was not enabled. If the filehandle mapping table is removed, the problem is aggravated.
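A minimal cleanup sketch, assuming the filesystem was logged under the default configuration so the mapping table lives under /var/nfs with the basename fhtable (the wildcard covers the ndbm backing files):

# Illustrative: after unsharing the filesystem and letting nfslogd drain and
# remove the work buffer, stop the daemon and delete the mapping table by hand.
pkill -HUP -x -u 0 nfslogd       # flush pending records and stop nfslogd
rm /var/nfs/fhtable*             # remove the ndbm files backing the mapping table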
Copyright © 2002 O'Reilly & Associates. All rights reserved.