13.3.2. Name Server Startup (BIND 9, Debug Level 1)
Here's what a BIND 9 name server looks like starting up:
1) Sep 15 15:34:53.878 starting BIND 9.1.0 -d1
2) Sep 15 15:34:53.883 using 1 CPU
3) Sep 15 15:34:53.899 loading configuration from './named.conf'
4) Sep 15 15:34:53.920 the default for the 'auth-nxdomain' option is now 'no'
5) Sep 15 15:34:54.141 no IPv6 interfaces found
6) Sep 15 15:34:54.143 listening on IPv4 interface lo, 127.0.0.1#53
7) Sep 15 15:34:54.151 listening on IPv4 interface eth0, 192.249.249.3#53
8) Sep 15 15:34:54.163 command channel listening on 0.0.0.0#953
9) Sep 15 15:34:54.180 now using logging configuration from config file
10) Sep 15 15:34:54.181 dns_zone_load: zone 0.0.127.in-addr.arpa/IN: start
11) Sep 15 15:34:54.188 dns_zone_load: zone 0.0.127.in-addr.arpa/IN: loaded
12) Sep 15 15:34:54.189 dns_zone_load: zone 0.0.127.in-addr.arpa/IN: dns_journal_rollforward: no journal
13) Sep 15 15:34:54.190 dns_zone_maintenance: zone 0.0.127.in-addr.arpa/IN: enter
14) Sep 15 15:34:54.190 dns_zone_maintenance: zone version.bind/CHAOS: enter
15) Sep 15 15:34:54.190 running
The first difference you probably noticed between BIND 9's
debugging output and BIND 8's is BIND 9's terseness.
Remember that BIND 8 has been around for three years, and the authors
have had plenty of time to add debugging messages to the code. BIND 9
is brand-spanking-new, so there aren't as many debugging
messages yet.
You probably also noticed that BIND 9 includes a timestamp for each
debugging message, which can be handy if you're trying to
correlate messages to real-world events.
Lines 1 and 3 show the version of BIND we're running (9.1.0)
and the configuration file it's reading. As with the previous
example, we're using named.conf in the
current directory. Line 2 tells us we're using only one
CPU -- to be expected on a box with just one processor.
Line 4 gives us a simple warning that the default for the
auth-nxdomain substatement (covered in Chapter 10, "Advanced Features") has changed. Line 5 reminds us that our host
doesn't have any IP Version 6 network interfaces; if it did,
BIND 9 could listen on those interfaces for queries.
Lines 6 and 7 show the name server listening on two network
interfaces: lo, the loopback interface, and
eth0, the Ethernet interface. BIND 9 displays
the address and port in the format address#port,
unlike BIND 8, which uses [address].port. Line 8
shows named listening on port 953, the default
port, for control messages.
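If you're writing a script to pull addresses out of both servers' logs, the two endpoint formats are easy to tell apart. Here's a minimal Python sketch (our own illustration, not part of BIND) that parses either style:

```python
def parse_endpoint(s):
    """Parse a BIND-style address/port string.

    BIND 9 logs endpoints as "address#port" (e.g., "127.0.0.1#53");
    BIND 8 uses "[address].port" (e.g., "[127.0.0.1].53").
    """
    if s.startswith("["):
        # BIND 8 style: [address].port
        addr, _, port = s[1:].partition("].")
    else:
        # BIND 9 style: address#port
        addr, _, port = s.partition("#")
    return addr, int(port)
```

Either way, parse_endpoint returns the same (address, port) pair, so the rest of the script doesn't need to care which version of BIND produced the log.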
Lines 10-12 show the name server loading 0.0.127.in-addr.arpa. The
start and loaded messages
are self-explanatory. The no journal message
indicates that no journal was present. (A journal, described in Chapter 10, "Advanced Features", is a record of dynamic updates the name
server received for the zone.)
Finally, lines 13 and 14 show the name server doing maintenance on
the 0.0.127.in-addr.arpa and
version.bind zones.
(version.bind is a built-in
CHAOSNET zone that contains a single TXT record, attached to the
domain name version.bind.)
Zone maintenance is the process that schedules periodic tasks, such
as SOA queries for slave and stub zones or NOTIFY messages.
13.3.6. A Slave Name Server Checking Its Zone (BIND 8, Debug Level 1)
In addition to tracking down problems
with name server lookups, you may need to figure out why a slave
server isn't loading zone data from its master. You can often
find the cause simply by comparing the zone's SOA serial numbers
on the two servers using nslookup or
dig, as we'll show in Chapter 14, "Troubleshooting DNS and BIND". If your problem is more elusive, you may have
to resort to looking at the debugging information. We'll show
you what the debugging information should look like if your server is
running normally.
This debugging output was generated on a "quiet" name
server -- one not receiving any queries -- to show you exactly
which lines pertain to zone maintenance. Remember that a BIND 4 or 8
slave name server uses a child process to transfer the zone data to
the local disk before reading it in. While the slave logs its
debugging information to named.run, the
slave's child process logs its debugging information to
xfer.ddt.PID. The PID
suffix, by default the process ID of the child process, may be
changed to ensure that the filename is unique. Beware -- turning
on debugging on a slave name server will leave
xfer.ddt.PID files lying around, even if you are
only trying to trace a lookup. Our trace is at debugging level 1, and
we turned on the BIND 8 logging option
print-time. Debug level 3 gives you more
information, more than you may want if a transfer actually occurs. A
debugging level 3 trace of a zone transfer of several hundred
resource records can create an xfer.ddt.PID file
several megabytes in size.
21-Feb 00:13:18.026 do_zone_maint for zone movie.edu (class IN)
21-Feb 00:13:18.034 zone_maint('movie.edu')
21-Feb 00:13:18.035 qserial_query(movie.edu)
21-Feb 00:13:18.043 sysquery: send -> [192.249.249.3].53 dfd=5
nsid=29790 id=0 retry=888048802
21-Feb 00:13:18.046 qserial_query(movie.edu) QUEUED
21-Feb 00:13:18.052 next maintenance for zone 'movie.edu' in 2782 sec
21-Feb 00:13:18.056 datagram from [192.249.249.3].53, fd 5, len 380
21-Feb 00:13:18.059 qserial_answer(movie.edu, 26739)
21-Feb 00:13:18.060 qserial_answer: zone is out of date
21-Feb 00:13:18.061 startxfer( ) movie.edu
21-Feb 00:13:18.063 /usr/etc/named-xfer -z movie.edu -f db.movie
-s 26738 -C 1 -P 53 -d 1 -l xfer.ddt 192.249.249.3
21-Feb 00:13:18.131 started xfer child 390
21-Feb 00:13:18.132 next maintenance for zone 'movie.edu' in 7200 sec
21-Feb 00:14:02.089 endxfer: child 390 zone movie.edu returned
status=1 termsig=-1
21-Feb 00:14:02.094 loadxfer( ) "movie.edu"
21-Feb 00:14:02.094 purge_zone(movie.edu,1)
21-Feb 00:14:30.049 db_load(db.movie, movie.edu, 2, Nil)
21-Feb 00:14:30.058 next maintenance for zone 'movie.edu' in 1846 sec
21-Feb 00:17:12.478 slave zone "movie.edu" (IN) loaded (serial 26739)
21-Feb 00:17:12.486 no schedule change for zone 'movie.edu'
21-Feb 00:42:44.817 Cleaned cache of 0 RRs
21-Feb 00:45:16.046 do_zone_maint for zone movie.edu (class IN)
21-Feb 00:45:16.054 zone_maint('movie.edu')
21-Feb 00:45:16.055 qserial_query(movie.edu)
21-Feb 00:45:16.063 sysquery: send -> [192.249.249.3].53 dfd=5
nsid=29791 id=0 retry=888050660
21-Feb 00:45:16.066 qserial_query(movie.edu) QUEUED
21-Feb 00:45:16.067 next maintenance for zone 'movie.edu' in 3445 sec
21-Feb 00:45:16.074 datagram from [192.249.249.3].53, fd 5, len 380
21-Feb 00:45:16.077 qserial_answer(movie.edu, 26739)
21-Feb 00:45:16.078 qserial_answer: zone serial is still OK
21-Feb 00:45:16.131 next maintenance for zone 'movie.edu' in 2002 sec
Unlike the previous traces, each line in this trace has a timestamp.
The timestamp makes it clear which debug statements are grouped
together.
This name server is a slave for a single zone, movie.edu. The line with time
00:13:18.026 shows that it is time to check with the master server.
The server queries for the zone's SOA record and compares
serial numbers before deciding to load the zone. The lines with times
00:13:18.059 through 00:13:18.131 show you the zone's serial
number (26739), tell you the zone is out of date, and start a child
process (pid 390) to transfer the zone. At time 00:13:18.132, a timer
is set to expire 7200 seconds later. This is the amount of time the
server allows for a transfer to complete. At time 00:14:02.089, you
see the exit status of the child process. The status of 1 indicates
that the zone data was successfully transferred. The old zone data is
purged (time 00:14:02.094), and the new data is loaded.
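The "out of date" decision boils down to comparing the master's serial (26739) with the slave's (26738, visible in the -s argument to named-xfer). DNS serials use sequence-space arithmetic (described in RFC 1982) so that they can wrap around past zero. Here's a rough Python sketch of that comparison; it's an illustration of the rule, not BIND's actual code:

```python
def serial_newer(master, slave):
    """Return True if the master's 32-bit SOA serial is newer
    than the slave's, per DNS serial number arithmetic
    (RFC 1982): a serial is "newer" if it is ahead of the
    other by less than 2**31, which lets serials wrap
    around zero safely.
    """
    if master == slave:
        return False
    return (master - slave) % 2**32 < 2**31
```

Here, serial_newer(26739, 26738) is true, so the slave starts a zone transfer; had the serials matched, it would simply have rescheduled the next check.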
The next maintenance (see time 00:14:30.058) is scheduled for 1846
seconds later. For this zone, the refresh interval is 3600, but the
name server chose to check again in 1846 seconds. Why? The name
server is trying to avoid having its refresh timer become
synchronized. Instead of using 3600 exactly, it uses a random time
between half the refresh interval (1800) and the full refresh
interval (3600). At 00:45:16.046, the zone is checked again and this
time it is up to date.
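That jittered schedule can be sketched in a few lines of Python (our own illustration of the idea, not BIND's actual code):

```python
import random

def next_refresh_check(refresh):
    """Pick the delay, in seconds, until the next SOA serial check.

    Rather than using the zone's refresh interval exactly,
    choose a random value between half the interval and the
    full interval, so that refresh queries for different zones
    (and from different slaves) don't become synchronized.
    """
    return random.randint(refresh // 2, refresh)
```

With a refresh interval of 3600, this yields a delay between 1800 and 3600 seconds, consistent with the 1846- and 3445-second delays in the trace.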
If your trace ran long enough, you'd see more lines like the
one at 00:42:44.817 -- one line each hour. What's happening
is that the server is making a pass through its cache, freeing any
data that has expired to reduce the amount of memory used.
The master server for this zone is a BIND 4 name server. If the
master were a BIND 8 name server, the slave would have been notified
when the zone changed rather than waiting for the refresh interval to
pass. The slave server's debug output would look almost exactly
the same, but the trigger for checking the zone's status would be a
NOTIFY message rather than the refresh timer:
rcvd NOTIFY(movie.edu, IN, SOA) from [192.249.249.3].1059
qserial_query(movie.edu)
sysquery: send -> [192.249.249.3].53 dfd=5
nsid=29790 id=0 retry=888048802