
Performance Monitoring

This chapter describes the Performance Monitoring components of the Cisco Internet OSS for VoIP: Infrastructure Manager (Cisco VoIP: Infrastructure Manager) Solution.

Overview

Performance and accounting are two of the five elements in the Fault, Configuration, Accounting, Performance, Security (FCAPS) model of traditional Network Management.

Performance Management measures and records resource utilization and quality of service throughout the network. Accounting Management deals specifically with the collection and analysis of data to support billing. The Cisco CNS Performance Engine (Cisco CNS PE) application supports these functions by acting as the single point of collection for both types of data. The Cisco CNS Performance Engine application is not intended to be a high-level analysis or correlation platform; those functions should be performed by upstream Performance Management and Billing applications. However, Cisco CNS Performance Engine is required to perform call leg correlation and data reduction, as well as Threshold Crossing Alert (TCA) generation, based upon raw data from the network.

In the Cisco VoIP: Infrastructure Manager Solution, Cisco CNS PE is an Element Management System (EMS) data collection application that complements the provisioning and fault monitoring applications and services of the total Cisco VoIP: Infrastructure Manager Solution.

This chapter is an installation and initial configuration guide for an original deployment of the Cisco CNS PE application. In many instances, it is excerpted from the Cisco CNS Performance Engine User Guide and the Cisco CNS Performance Engine Installation Guide. The intention of this chapter is to present a step-by-step introduction to the Cisco CNS PE. It presents a certain network scenario and takes you through the steps required to install and configure the Cisco CNS PE for that network. If you wish to understand the full capabilities of the Cisco CNS PE application, refer to the Cisco CNS Performance Engine User Guide at:

http://www.cisco.com/univercd/cc/td/doc/product/rtrmgmt/cnspe/1_0/user_gd/index.htm

and the Cisco CNS Performance Engine Installation Guide at:

http://www.cisco.com/univercd/cc/td/doc/product/rtrmgmt/cnspe/1_0/install/index.htm

Audience

This document is aimed at system engineers and customer support engineers, as well as integration partners intending to implement network management for service provider voice solutions. In order to best utilize the information in this chapter, you should be familiar with network management fundamentals such as SNMP, AAA, RADIUS, and Syslog. Most of the NMS applications are deployed on Sun Solaris hardware/software, so familiarity with UNIX administration is also helpful.

Application Description

The primary purpose of the Cisco CNS PE application is to provide an efficient data collection interface that is close to the network element and to hide the complexity of multi-component billing and performance data from third parties and customers. It provides a single point for issuing performance data for voice solutions. It also includes the generation of Threshold Crossing Alerts (TCAs) for performance data. TCAs are in the form of SNMP Traps and Cisco CNS notifications.

Target Market

The Cisco CNS PE application is designed to function in service provider Voice over IP networks. It services medium to large scale networks. It can be deployed in a distributed architecture and can scale to larger networks by adding new instances of the Cisco CNS PE application. With larger networks, the Cisco CNS PE application may be deployed in a hierarchical fashion, with the top tier performance engine collecting and correlating data from the bottom tier performance engines.

Applications and Services

The Cisco CNS PE application is a component application of the Cisco VoIP: Infrastructure Manager Solution for Voice over IP networks. As a performance data collection device and fully functional RADIUS server, it can also be deployed as a standalone performance/accounting application. Cisco CNS PE does not contain a report creation tool, so a third-party reporting tool is necessary for graphically viewing the collected performance data.

Operating in conjunction with the Service Provider NMS suite of applications, the Cisco CNS PE application can be enabled to send threshold crossing alerts to the NMS fault processing engine (Cisco Info Center). A collector can be configured on Cisco CNS PE to collect performance data from the PGW2200 (SS7 Interconnect) through the Cisco MGC Node Manager (CMNM).

Scope of the Solution

Cisco CNS PE is the single application in the Cisco VoIP: Infrastructure Manager Solution responsible for the collection and correlation of accounting and performance data from the network elements.

Solution Architecture

This section describes the Cisco VoIP: Infrastructure Manager Solution architecture.

NMS Architecture

Figure 3-1 shows the component architecture for the Cisco VoIP: Infrastructure Manager Solution. The Cisco CNS Performance Engine application is shown on the left of the diagram in the EMS layer. The Cisco CNS Performance Engine application is shown collecting data from network elements and uploading or publishing normalized performance and fault data to the upstream NMS applications.

Other Cisco VoIP: Infrastructure Manager Solution components and component applications are documented in other chapters of this document.


Figure 3-1: Cisco Internet OSS for VoIP: Infrastructure Manager Solution Architecture


NMS Performance Architecture

Figure 3-2 highlights the performance component of the Cisco VoIP: Infrastructure Manager Solution, illustrating in more detail the transport and protocol specifics of the performance component. The Cisco CNS PE application can collect data either by polling SNMP MIB variables, singly or in bulk, or by receiving uploads as a RADIUS or FTP server.

Data arrives at the Cisco CNS PE application by means of a variety of transport methods depending upon the kinds of collector and data handlers configured. These Cisco CNS PE application components are described in the Cisco CNS Performance Engine User Guide. The Cisco CNS PE application also provides examples and templates to aid network operators in configuring the collector for different collection methods and data types.

This chapter is a subset of the Cisco CNS Performance Engine User Guide. It details a particular network scenario and designs certain recommended collectors, data handlers, and devices in order to produce a basic level of performance and accounting data collection. For more detailed or functionally specific configuration information, refer to the Cisco CNS Performance Engine User Guide.


Figure 3-2: Performance Management Component Architecture


Functional Overview

The Cisco CNS PE application collects selected network data from various data sources such as MIBs, bulk MIB files, flat files, and call history detail records received through RADIUS. It transports the collected data in a normalized format to other systems, programs, and management applications, on demand or at scheduled intervals, using the Cisco CNS Integration Bus or another file transport protocol. Using this schema eliminates the complexity of access to the data by building a homogeneous and consistent interface on top of a complex, Cisco-provided, packet-based network environment.

Within the FCAPS model, the Cisco CNS PE application covers Accounting and Performance.

The primary purpose of the Cisco CNS PE application is to provide an efficient data collection interface that is close to the network elements (NE) and transparent to the network operator. It also provides a single point of collection of performance data for service provider voice networks.

Data Formats and Transport Protocols

The Cisco CNS PE application accepts data as a bulk File Transfer Protocol (FTP) upload, network response data from the IOS SAA agents, RADIUS call data, as well as polling network elements for SNMP variables. The processing engine within Cisco CNS PE processes response data, correlates call record data, and transparently passes threshold crossing alerts data to an upstream fault processing application (Cisco Info Center). Correlated call record data is passed either through bulk FTP or XML over the Cisco CNS Integration Bus to an upstream, third party reporting application. A graphical representation of this functionality is shown in Figure 3-3.

Cisco CNS PE is not designed as a persistent data store; it relies on upstream applications for historical storage. In a large service provider network, Cisco CNS PE processes enough data that it is recommended you purge the stored data on a frequent (default 24 hours) and regular basis. Because Cisco CNS PE supports FTP file upload and download, if you wish to retain more than one or two days' worth of data, you can easily configure an export function to store that data on a separate storage server.
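As a hedged sketch of such an export (the data and archive paths here are illustrative assumptions, not documented defaults, and the transfer to the storage server is only indicated in a comment), a nightly job could bundle the data directory before the purge cycle removes it:

```shell
#!/bin/sh
# Hypothetical nightly export job. DATA_DIR and ARCHIVE_DIR are assumed
# locations for illustration; substitute the paths used by your install.
DATA_DIR=${DATA_DIR:-/tmp/cnspe-demo/data}
ARCHIVE_DIR=${ARCHIVE_DIR:-/tmp/cnspe-archive}
mkdir -p "$DATA_DIR" "$ARCHIVE_DIR"   # demo directories; a real install has these
stamp=$(date +%Y%m%d)
archive="$ARCHIVE_DIR/cnspe-data-$stamp.tar"
# Bundle the collected data before the default 24-hour purge runs.
tar -cf "$archive" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
# An ftp or scp transfer of "$archive" to the storage server would go here.
echo "exported $archive"
```

Run from cron shortly before the purge window, this keeps an off-box copy while letting Cisco CNS PE's own purge keep the local database small.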


Figure 3-3: Cisco CNS Performance Engine Functional Architecture


Interconnection/Interoperability

The subsystems of the Cisco CNS PE can interact within one instance of the Cisco CNS PE application or across multiple instances, depending upon the Cisco VoIP: Infrastructure Manager Solution deployment. Interconnection to elements on a network is done through SNMP, FTP, and Telnet to and from a network device interface, and the results are pushed through the collection subsystems in order to perform correlation if needed. The information is then made available to the application interface, thus allowing it to be collected and displayed by third-party applications.

Solution Deployment Scenario

This chapter details a sample network scenario in which you are instructed about:

    1. Initial installation of the Cisco CNS PE application with recommended hardware, Operating System, and disk partitioning requirements.

    2. Startup and shutdown of the Cisco CNS PE processes and the use of the GUI and CLI administrative interfaces.

    3. Creation of the Cisco CNS PE elements (for example, devices, collectors, data handlers) to facilitate collection, processing, and transport of performance and accounting data from a representative array of voice network elements.

    4. Data export.

    5. Procedures for upgrading the Cisco CNS PE software.

    6. Some troubleshooting and monitoring procedures to analyze the Cisco CNS PE application's performance.

Network Elements

Figure 3-4 shows the sample network to be used for configuring the Cisco CNS PE application. Collectors that process performance data from the PGW2200 complex will be configured, as will collectors that process and correlate RADIUS accounting stop records, SAA voice-related data, and SNMP variable and table elements from IOS network elements.

For ease of use, this chapter configures the Cisco CNS PE to collect data from specific devices with specific IP addresses. These are fictitious addresses and are included for consistency in device specific data required in collectors. In this manner, you can use the examples and templates in this chapter and included in the Cisco CNS PE distribution, to build collectors and data handlers for your own specific networks.

This chapter also instructs you how to build the most common collectors and data handlers. Using this chapter in conjunction with the more comprehensive Cisco CNS Performance Engine User Guide should enable you to be operational in a reasonably short amount of time. Further customizations to accommodate specific or unique network requirements can be added to the configuration incrementally.


Figure 3-4: Model Network Diagram


Limitations and Caveats

One Cisco CNS PE application, installed on the hardware shown in Figure 3-4, is targeted to support a maximum of 300 fully loaded E1s worth of call data with an average call hold time of three minutes. Factors such as E1 loading and average call duration contribute to performance limits. Performance numbers are being refined as early deployments provide more guidelines, and this chapter will be updated as new data is received. Because the Cisco CNS PE is a distributed application, scaling can be achieved by adding a new instance of the application when increases in network traffic warrant.
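As a back-of-envelope check on that figure (assuming the standard 30 voice channels per E1, a telephony convention the chapter does not state explicitly), the stated maximum works out as follows:

```shell
#!/bin/sh
# Rough sizing arithmetic for the 300-E1 capacity figure. The 30-channel
# E1 assumption is standard telephony, not a number taken from this chapter.
e1_count=300
channels_per_e1=30
hold_time_min=3
concurrent=$((e1_count * channels_per_e1))          # simultaneous calls
calls_per_hour=$((concurrent * 60 / hold_time_min)) # each channel turns over 20 calls/hour
echo "$concurrent concurrent calls, roughly $calls_per_hour call records per hour"
```

That is 9000 concurrent calls generating roughly 180,000 call records per hour, which illustrates why longer hold times (fewer call setups per hour) ease the load while heavier E1 loading increases it.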

Solution Related Tasks

This section describes the tasks you are required to perform in order to use the performance related functionality provided by this Cisco VoIP: Infrastructure Manager Solution.

Installation

The installation information presented here is taken directly from the Cisco CNS Performance Engine Installation Guide and is included here only for completeness in accomplishing the tasks described above.

Hardware Recommendations

The recommended platform for the Cisco CNS PE application is a Sun Netra20 (T4) or equivalent. Sun's suggested configuration for the T4 is:

Disk Layout

It is recommended you install the Cisco CNS PE database on one hard disk and the data on another. There should be at least twice as much swap space as there is RAM. The executable can be stored on either disk.

An example layout might look like:

# df -k

Filesystem kbytes used avail capacity Mounted on

/dev/dsk/c1t1d0s0 13838253 1852670 11847201 14% /

/proc 0 0 0 0% /proc

fd 0 0 0 0% /dev/fd

mnttab 0 0 0 0% /etc/mnttab

/dev/dsk/c1t1d0s7 1984564 8690 1916338 1% /var

swap 2393816 16 4883800 1% /tmp

/dev/dsk/c1t2d0s0 15121031 9 14969812 1% /data

/dev/dsk/c1t1d0s3 15121031 9 14969812 1% /db

/dev/dsk/c1t2d0s3 19887394 2133 19686388 1% /opt

Preparation

Verify network connectivity between the Cisco CNS PE application and the elements it is intended to manage. Remove any previous installations before installing this instance. Refer to section "Uninstalling Cisco CNS Performance Engine" for uninstall instructions.


Note   Cisco CNS PE, version 1.0, required a dasadmin user for installation. It is not recommended, and therefore unlikely, that you would install only version 1.0 without the version 1.0.1 patch. However, should that occasion arise, you must be aware of the dasadmin user requirement. This chapter does not detail that option, as it is not recommended.

Assuming you are installing version 1.0.1 or later, it is necessary to install the application as user root.

The distribution is contained in a tar file available from Cisco either on a CD-ROM or from CCO. Accompanying the tar file is a readme.txt file that contains information about the release. One of the items in the readme.txt file is the checksum value of the distribution tar file. Verify that the checksum value of the tar file matches the listing in the readme.txt file. If it does not, the file may have become corrupted and you must obtain a fresh copy of the distribution file.

Open the readme.txt file, and run the cksum utility against the distribution tar file:

perf-tme# cd /opt

perf-tme# ls

CNS-PE CNS-PEimage SUNWconn SUNWebnfs SUNWits SUNWrtvc

lost+found

perf-tme# cd CNS-PEimage

perf-tme# ls

CNS-PE-solaris-sparc-1.0.0.4.tar patch setup

das.tar readme.txt

perf-tme# more readme.txt

Date: 5/13/2002

File: Cisco CNS PE1.0.0.4 readme

Content: I) cksum, II) Installation, III) Documentation IV) Resolved bugs

I. Before proceeding please make sure the cksum of the Cisco CNS PE release tar file is:

=============================================================================

cksum CNS-PE-solaris-sparc-1.0.0.4.tar

2271916327 77045760 CNS-PE-solaris-sparc-1.0.0.4.tar

---------------remaining text of readme suppressed----------------

Check your distribution file against the checksum listed in the readme.txt file. In this example the values match, so all is well.

perf-tme# cksum CNS-PE-solaris-sparc-1.0.0.4.tar

2271916327 77045760 CNS-PE-solaris-sparc-1.0.0.4.tar


Note   If your tar file is numbered or named differently, the checksum value will be different. Check the readme.txt file for the correct checksum value.
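The manual comparison above can also be scripted. This sketch demonstrates the same cksum comparison using a scratch file in place of the real distribution tar; in practice you would substitute the actual file names (for example, CNS-PE-solaris-sparc-1.0.0.4.tar and readme.txt):

```shell
#!/bin/sh
# Demonstrates the readme-vs-distribution checksum comparison, using a
# scratch file so the example is self-contained; names are illustrative.
tmpdir=$(mktemp -d)
printf 'example distribution contents\n' > "$tmpdir/dist.tar"
# Record the checksum in the same "cksum size filename" form readme.txt uses.
cksum "$tmpdir/dist.tar" > "$tmpdir/readme.txt"
# Later, compare the recorded value against a fresh cksum of the file.
expected=$(awk '{print $1}' "$tmpdir/readme.txt")
actual=$(cksum "$tmpdir/dist.tar" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
    status="checksum OK"
else
    status="checksum MISMATCH: expected $expected, got $actual"
fi
echo "$status"
rm -rf "$tmpdir"
```

A mismatch means the download is corrupt and, as the text above states, you must obtain a fresh copy before installing.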

Installing the Software

This section provides a list of steps you must follow to install the Cisco CNS PE software.


Step 1   Log in to the Cisco CNS PE host as user root.

Step 2   Change directory to where the installation file was loaded:

perf-tme# cd /opt/CNS-PEimage

Step 3   Untar the CNS-PE-solaris-sparc-1.0.0.4.tar file (your particular tar file may be numbered differently):

perf-tme# tar -xvof CNS-PE-solaris-sparc-1.0.0.4.tar

Step 4   Run the setup script:

perf-tme# ./setup

Step 5   Enter the absolute directory path where the Cisco CNS PE software should be installed:

perf-tme# Directory to install Cisco CNS PE: /opt/CNS-PE

Step 6   Enter the absolute directory path where the Cisco CNS PE database should be created, or press Enter to accept the default. It is recommended you provide a separate partition for the database.

perf-tme# Directory for database [path_to_db]: /db

Step 7   Enter the absolute directory path where the Cisco CNS PE data should be created, or press Enter to accept the default. It is recommended you provide the data directory its own partition.

perf-tme# Directory for data [path_to_data]: /data

$DAS_HOME is an environment variable set by the installation script. This variable specifies the absolute path to where the Cisco CNS PE application is installed. Once the Cisco CNS PE application is installed, you can optionally change the default properties in two properties files. Consult the Cisco CNS Performance Engine Installation Guide for instructions on changing the properties files.
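For interactive shells that do not inherit the installer's setting, $DAS_HOME can be set by hand to the install path chosen above (this mirrors, rather than replaces, what the installation script does):

```shell
#!/bin/sh
# Set DAS_HOME for the current shell; /opt/CNS-PE is the install
# directory chosen in Step 5 above.
DAS_HOME=/opt/CNS-PE
export DAS_HOME
echo "administration scripts live in $DAS_HOME/bin"
```

With $DAS_HOME exported, later commands in this chapter such as $DAS_HOME/bin/backup.sh resolve correctly.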

The Cisco CNS PE version 1.0 did not support the Auto-restart feature (where the Cisco CNS PE application restarts automatically when its host machine restarts). To remedy this, a patch for version 1.0 was created. This patch supports two main changes in the application: Auto-restart and changing the HTTP port number. The Auto-restart feature requires you to be root for the install. Consult the readme.txt file for additional details. The following dialog details the installation of the version 1.0.1 patch.

The readme.txt file that accompanies the version 1.0.1 patch is similar to the one that accompanies the main distribution. This file also contains a checksum value, against which you must compare your patch tar file as was done previously.

Step 8   Go to the directory where the Patch tar file is stored:

perf-tme# cd patch

Step 9   List the directory's contents:

perf-tme# ls

CNS-PE-solaris-sparc-1.0.1-patch.tar install_patch
das_patch.tar.Z readme_1.0.1.txt

Step 10   Read the contents of the readme_1.0.1.txt file:

perf-tme# more readme_1.0.1.txt

Date: 5/28/2002
File: CNS-PE1.0.1 readme
Content: I) cksum, II) Installation, III) Documentation, IV) Resolved bugs

I. Before proceeding please verify the checksum of the CNS-PE patch release tar file as follows:

cksum CNS-PE-solaris-sparc-1.0.1.tar
3256364822 14755840 CNS-PE-solaris-sparc-1.0.1-patch.tar

----------------remainder of content suppressed-------------------------------------------

Step 11   Compare the checksum values:

perf-tme# cksum CNS-PE-solaris-sparc-1.0.1-patch.tar

3256364822 14755840 CNS-PE-solaris-sparc-1.0.1-patch.tar

Step 12   If the values agree, continue.

Step 13   Log in as user root.

Step 14   If the Cisco CNS PE application is running, stop it, then check for any remaining rvd daemon:

perf-tme# /opt/CNS-PE/bin/stop.sh all

perf-tme# ps -ef | grep rvd

root 430 1 0 May 21 ? 22:37 /opt/ssng/bin/rvd -listen 7500

Step 15   Back up the Cisco CNS PE 1.0 files. You need the backup files to restore the Cisco CNS PE 1.0 files only if the patch installation aborts and corrupts the original install:

perf-tme# $DAS_HOME/bin/backup.sh <absolute_path_backup_directory>

Step 16   Create a cnspepatch directory in /tmp:

perf-tme# mkdir /tmp/cnspepatch

Step 17   Download the Patch tar file to the /tmp/cnspepatch directory and extract the contents of the Patch tar file:

perf-tme# cd /tmp/cnspepatch

perf-tme# ls

CNS-PE-solaris-sparc-1.0.1-patch.tar

perf-tme# tar -xof CNS-PE-solaris-sparc-1.0.1-patch.tar

Step 18   Start the Cisco CNS PE Patch installation script:

perf-tme# ./install_patch

Cisco Systems, Inc.

Cisco CNS Performance Engine (Cisco CNS PE) patch installation

The Cisco CNS Performance Engine must be stopped before installation of the patch.

Step 19   At the "Directory of Cisco CNS PE:" prompt, enter the absolute path to where you installed the Cisco CNS PE 1.0 software.

For example,

Directory of CNS-PE:/opt/CNS-PE

Temporary directory for installation files [/tmp/cnspepatch/das_temp]:

Step 20   If the path is correct, press Enter; otherwise, specify another path and then press Enter.

Uncompressing and extracting files...

Patching 1.0 with 1.0.1

Proceed? [y/n] y

Removing temporary files...

CNS-PE patch installation completed.

Use /opt/CNS-PE/bin/start.sh to start CNS-PE.


Starting the Cisco CNS Performance Engine Application

This section describes how to start the Cisco CNS PE application.


Step 1   As user root, change to the directory where Cisco CNS PE is installed and list the directory's contents:

perf-tme# cd /opt/CNS-PE

perf-tme# ls -l

total 48

drwxrwxr-x 2 root staff 13312 Mar 18 15:34 MIBS

drwxr-xr-x 2 root staff 512 Mar 18 15:35 bin

drwxr-xr-x 3 root staff 512 Mar 18 15:35 classes

drwxrwxr-x 4 root staff 512 Apr 30 09:40 config

drwxrwxr-x 6 root staff 512 Mar 18 15:35 data

drwxr-xr-x 2 root staff 512 May 17 10:56 db

drwxr-xr-x 5 root staff 512 Mar 18 15:35 examples

drwxr-xr-x 2 root staff 512 May 17 09:36 logs

drwxr-xr-x 2 root staff 512 Mar 18 15:35 schema

drwxrwxrwx 2 root staff 512 May 1 09:21 tmp

drwxr-xr-x 6 root staff 512 Mar 18 15:35 tomcat

drwxr-xr-x 14 root staff 512 Mar 18 15:35 tools

Step 2   Run the Cisco CNS PE start script:

perf-tme# bin/start.sh

start.sh: Tue May 28 09:30:44 PDT 2002

start.sh: Tue May 28 09:35:43 PDT 2002

start.sh: Guessing DAS_HOME from start.sh to /opt/CNS-PE

Step 3   Read the readme.txt file for more details on the Auto-restart feature and the http port change options.


Checking for Proper Operation

This section describes how to check to ensure the Cisco CNS PE application has started properly.


Step 1   Check for the application processes by searching for their process-name strings. The first search string is java. Other strings to search for are rvd for the Cisco CNS Integration Bus daemon, and dbeng and nco for the Cisco CNS probe, if it is installed and started.

perf-tme# ps -ef | grep /opt/CSCOdas

root 390 1 0 Jun 12 ? 1:19

/opt/CSCOdas/tools/java1.3.1/bin/../bin/sparc/native_threads/java -Dtomcat.home

root 411 1 0 Jun 12 ? 2:36

/opt/CSCOdas/tools/java1.3.1/bin/../bin/sparc/native_threads/java -Dorg.omg.COR

perf-tme# ps -ef | grep rvd

root 13511 12468 0 14:41:36 pts/3 0:00 grep rvd

root 424 1 0 Jun 12 ? 6:41 rvd -listen tcp:7500

perf-tme# ps -ef | grep dbeng

root 403 1 0 Jun 12 ? 6:54 dbeng7 -n DASDBServer -ti 0 -m -ch

50P DASmain.db -hn 7
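The process checks above can be rolled into one loop. This sketch uses the same process-name strings this section greps for (the optional nco probe check is left out):

```shell
#!/bin/sh
# Quick liveness summary for the Cisco CNS PE process names checked above:
# the java processes, the rvd bus daemon, and the dbeng database engine.
report=""
for proc in java rvd dbeng; do
    if ps -ef | grep "$proc" | grep -v grep > /dev/null; then
        report="${report}${proc}=up "
    else
        report="${report}${proc}=down "
    fi
done
echo "$report"
```

Any process reported down after a start indicates a failed startup; check the dasadmin.log entries described next.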

Operations that occur when the Cisco CNS PE application is started are recorded in the dasadmin.log file. A typical startup produces an entry such as:

start.sh: Tue May 28 09:35:43 PDT 2002

Database options chosen: -ch 50P

Java options chosen: -Xms64M -Xmx128M

start.sh: java -version

java version "1.3.1"

Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1-b24)

Java HotSpot(TM) Client VM (build 1.3.1-b24, mixed mode)

start.sh: Starting web server...

Using classpath: /opt/CNS-PE/tomcat/lib/*:/opt/CNS-PE/tools/java1.3.1/lib/tools.jar:.:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/ant.jar:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/jasper.jar:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/jaxp.jar:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/parser.jar:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/servlet.jar:/opt/das/CSCOdas/tools/jakarta-tomcat-3.2.3/lib/test:/opt/CNS-PE/tools/jakarta-tomcat-3.2.3/lib/webserver.jar:/opt/CNS-PE/tools/tibrv/lib/tibrvj.jar

start.sh: Starting database...

start.sh: Starting RVA...

start.sh: Starting CNS-PE...

2002-05-28 09:35:45 RVA: Started rva

2002-05-28 09:35:45 RVA: Reading configuration file /opt/CNS-PE/config/rva.config

2002-05-28 09:35:45 RVA: Using ticket file /opt/CNS-PE/tools/tibrv/bin/tibrv.tkt

2002-05-28 09:35:45 RVA: Host: perf-tme 20

2002-05-28 09:35:45 RVA: IP Address: 172.19.49.10

2002-05-28 09:35:45 RVA: Http interface: http://perf-tme 20:7680/

2002-05-28 09:35:45 RVA: Listen: 7600

2002-05-28 09:35:45 RVA: HTTP Tunnel: disabled

2002-05-28 09:35:45 RVA: RVD service: NULL

2002-05-28 09:35:45 RVA: RVD network: NULL

2002-05-28 09:35:45 RVA: RVD daemon: NULL

2002-05-28 09:35:45 RVA:7600: Active

start.sh: completed script at Tue May 28 09:35:46 PDT 2002

Starting tomcat. Check logs/tomcat.log for error messages

perf-tme #


Starting and Logging into the Cisco CNS Performance Engine

This section describes how to start and login to the Cisco CNS PE application.


Step 1   Start a web browser and enter http://<CNS-PE_hostname>:8080 as the URL, where <CNS-PE_hostname> is the name or IP address of the host machine where Cisco CNS PE is installed and running.

The Cisco CNS PE web page appears in the web browser window. Refer to the Cisco CNS Performance Engine User Guide for information about the web browser interface.

When you start the Cisco CNS PE application, the Cisco CNS Performance Engine Login window, shown in Figure 3-5, appears.


Figure 3-5: Cisco CNS Performance Engine Login Window


Step 2   Log in to the Cisco CNS PE application. The default password is cisco.

Once you log in, other options become available, such as changing the password (which is good security practice). Upon successfully logging in, the Cisco CNS PE Home window appears, as shown in Figure 3-6.


Figure 3-6: Cisco CNS Performance Engine Home Window



Stopping the Cisco CNS Performance Engine

When you stop the Cisco CNS PE software, you can stop just the Cisco CNS PE application, or you can stop both the application and the web server. If you stop only the Cisco CNS PE application, you can still use the web browser to log in to Cisco CNS PE and restart the application. If you stop all Cisco CNS PE processes, including the web server, you cannot use a web browser to log in to Cisco CNS PE; you must restart it from the CLI.

You can start and stop Cisco CNS PE from the web browser or in the Command Line Interface (CLI). The CLI commands are shown below.


Step 1   To stop only the Cisco CNS PE application:

perf-tme# cd /opt/CNS-PE

perf-tme# bin/stop.sh

Step 2   To stop all Cisco CNS PE processes, including the web server, enter the following:

perf-tme# cd /opt/CNS-PE

perf-tme# bin/stop.sh all

Operations that occur when the Cisco CNS PE software is started and stopped are recorded in the dasadmin.log file.


Directory Structure

The following directory structure is created under the $DAS_HOME directory when the Cisco CNS PE software is installed:

Port Reservations

The port reservations shown in Table 3-1 are required or recommended for the Cisco CNS PE:


Table 3-1: Cisco CNS Performance Engine Port Reservations

8080 (web server): Configurable with the start sequence perf-tme# bin/start.sh <websrvport>

2638 (database): Mandatory.

7600 (RVA): Mandatory.

7680 (HTTP interface for RVA): Mandatory.

7500 (RVD/RVRD): Optional, but highly recommended.

7580 (HTTP interface for RVD): Optional, but highly recommended.

1813 (RADIUS collector): Default, but can be set by the <acctPort> tag.
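Before starting the application, the reservations in Table 3-1 can be checked for conflicts. This sketch uses netstat, which is present on Solaris; the exact LISTEN-line format varies by platform, so treat the match pattern as an assumption to adapt:

```shell
#!/bin/sh
# Report which of the Cisco CNS PE ports from Table 3-1 already have a
# listener; a port reported "in use" must be freed or reconfigured.
checked=0
for port in 8080 2638 7600 7680 7500 7580 1813; do
    if netstat -an 2>/dev/null | grep "LISTEN" | grep -q "[.:]$port "; then
        echo "port $port: in use"
    else
        echo "port $port: free"
    fi
    checked=$((checked + 1))
done
echo "$checked ports checked"
```

Only 8080 and 1813 are reconfigurable (via the start sequence and the <acctPort> tag, respectively); conflicts on the mandatory ports must be resolved on the host itself.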

Uninstalling Cisco CNS Performance Engine

The Cisco CNS PE application does not provide an uninstall script. To uninstall the Cisco CNS PE, follow these steps:


Note   Back up all configuration and data files you might need before uninstalling Cisco CNS PE.


Step 1   Change directory to /opt/CNS-PE:

perf-tme# cd /opt/CNS-PE

Step 2   Enter the command to stop the Cisco CNS PE software and the web server:

perf-tme# bin/stop.sh all

Step 3   Remove the database and data directories:

perf-tme# /bin/rm -rf db/* data/*

Step 4   Change directory up one level and then remove the $DAS_HOME directory.

perf-tme# cd ..

perf-tme# /bin/rm -rf /opt/CNS-PE


Initial Configuration: Building XML Facilities

Overview of the Cisco CNS PE Graphical User Interface

The Cisco CNS PE application is installed with an empty configuration. In order to begin processing call data, element health, and other network usage and condition variables, Cisco CNS PE must be initially configured. Configuration is accomplished by sending XML-based code to the Cisco CNS PE, through either the web-based interface (http://<CNS-PE_hostname>:8080) or a third-party configuration application that interfaces to the Cisco CNS PE's API.

This chapter does not detail the use of a third-party application. It does detail the use of the web-based programming interface. Configurations are downloaded to the Cisco CNS PE by selecting the Send XML tab in the Cisco CNS PE home window, shown in Figure 3-6.

There are several ways that XML code can be configured and downloaded to the web interface:

The last two methods leverage already written XML instead of starting from scratch. The plan employed in this chapter is to list the XML code required to collect data from a defined network of various elements, shown in Figure 3-4, so you can copy and paste the code from the text of this chapter. The various collectors, devices, and schedules are all listed in their corresponding sections of this document. The entire listing, with the required delimiters, is listed again, without commentary, in the "Comprehensive Configuration Task List" section.

Operational Directives - Control and Notification

The configuration components of the Cisco CNS PE application comprise action verbs, target devices, collectors, and collection schedules. The action verbs are specified as control and notification components and include the XML tags listed in Table 3-2.


Table 3-2: XML Tags

create: Create components.

add: Add components to a collector, or add attributes to a threshold.

remove: Remove components from a collector, or remove attributes from a threshold.

get: Get system configuration, system status, and data from collectors.

start: Start collectors.

stop: Stop collectors.

load: Load configurations or MIBs.

unload: Unload MIBs.
The operational possibilities of these actions are fully explained in the Cisco CNS Performance Engine User Guide and are not repeated in this chapter.

This chapter describes how to:

To configure the CNS-PE host, use the action commands listed in Table 3-2.

The following screen captures and descriptions serve as an introduction to generating XML code by pasting (or entering by hand) the XML for collecting performance data from our sample network.

All operations are initialized with the beginning and ending XML commands:

<das>

...one or more opening action tags, as required (<add>, <load>, <create>, <remove>, and so on)...

...the matching closing tags (</add>, </load>, </create>, </remove>)...

</das>

As detailed in the "Operational Components" section, the first operation to accomplish is to load the MIBs that will be accessed in the Cisco CNS PE MIB and BulkMIB collectors. Determining which MIBs to select is dependent upon the network to be monitored. For this example, the following MIBs are loaded:

The CISCO-RTTMON-MIB is another MIB that Cisco CNS PE loads automatically (you are not required to load it explicitly). This MIB also has a list of predecessors, which are loaded as dependencies of the CISCO-RTTMON-MIB.

These predecessors are:

Predecessors for any specific MIB can be determined by reading the IMPORTS section of the MIB text. The relevant text from the CISCO-RTTMON-MIB is excerpted here (found by searching for the text IMPORTS):

IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE, NOTIFICATION-TYPE,
    Integer32, Counter32, Gauge32, TimeTicks
        FROM SNMPv2-SMI
    MODULE-COMPLIANCE, OBJECT-GROUP
        FROM SNMPv2-CONF
    DisplayString, TruthValue, RowStatus,
    TimeInterval, TimeStamp, TEXTUAL-CONVENTION
        FROM SNMPv2-TC
    ciscoMgmt
        FROM CISCO-SMI
    OwnerString
        FROM IF-MIB
    ;

Counting the seven MIBs you will load through the XML interface and the MIBs that are loaded automatically, a total of thirteen MIBs exist from which Cisco CNS PE can extract data.
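Reading the IMPORTS section can also be done mechanically. The sketch below is illustrative only; it assumes MIB text laid out in the standard SMI style shown above and simply collects the module names that follow each FROM keyword:

```python
import re

def mib_predecessors(mib_text):
    """Return the modules a MIB imports from, that is, the names
    following each FROM keyword in its IMPORTS section."""
    imports = re.search(r"IMPORTS(.*?);", mib_text, re.DOTALL)
    if not imports:
        return []
    return re.findall(r"FROM\s+([\w-]+)", imports.group(1))

sample = """IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE FROM SNMPv2-SMI
    OwnerString FROM IF-MIB
    ;"""
predecessors = mib_predecessors(sample)
```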

The web interface GUI is shown in the following pages, illustrating the loading of MIBs as an example.

Loading MIBs is accomplished with the load command. The load command is somewhat unusual in that the <mib> element has no separate closing tag. Because a MIB entry has no inner components, each MIB can be listed on one self-closing line of XML code. An example of this is:

<load>
  <mib name="CISCO-GATEKEEPER-MIB.my"/>
</load>

Notice the mib element is closed at the end of the line with the "/" character. Other components, such as collectors, are slightly different; this is explained as we proceed. The next section details the XML syntax using the action of loading MIBs.
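Because each <mib> entry fits on one self-closing line, a load request can be assembled from a plain list of file names. The helper below is a hypothetical sketch of that pattern:

```python
def build_load_request(mib_files):
    """Build a <das>/<load> request with one self-closing <mib>
    element per MIB file name."""
    lines = ["<das>", "<load>"]
    lines += ['  <mib name="{}"/>'.format(name) for name in mib_files]
    lines += ["</load>", "</das>"]
    return "\n".join(lines)

xml = build_load_request(["CISCO-GATEKEEPER-MIB.my", "CISCO-PROCESS-MIB.my"])
```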

The windows shown in Figure 3-7 and Figure 3-8 illustrate the commands as entered and then the feedback received upon successfully loading the MIBs.


Figure 3-7:
Configuration Commands for Loading Seven MIBs



Figure 3-8:
Response From Cisco CNS PE Upon Success


When configured, Cisco CNS PE returns the status of the configuration attempt with a response such as that shown in Figure 3-8. Cisco CNS PE also has an excellent error reporting mechanism should the configuration attempt fail. The configuration changes are halted at the point of failure and none of the configuration items that were part of that change are accepted. You should take actions necessary to correct the situation and re-enter the XML.

Some typical error messages are detailed below. For instance, if you try to load a MIB that is already loaded, the application returns an error such as the one shown in Figure 3-9. In this case, the commands shown in Figure 3-7 were reentered, with the status being the configuration changes were halted at the first MIB load because the MIB was already loaded.


Figure 3-9: Trying to Load Already Loaded MIB(s)


The configuration changes/updates were halted at the first attempt to load a MIB.

The Cisco CNS PE XML syntax requires all configuration commands to be started and stopped with the command set:

<das>
  <action verb>
  </action verb>
</das>

In Figure 3-9, the same GATEKEEPER-MIB is loaded without the necessary preamble, with the response to that attempt shown in Figure 3-10.


Figure 3-10: Error Response for not Preceding Commands with <load>


The actual error response does not pinpoint the problem; however, it will most likely become very familiar to you as the configurator of the Performance collector (this is probably the most common beginner's error). By examining the error response, you can see the response has the beginning and ending <das> </das> pair required for proper syntax, while the actual XML did not. The error message in its entirety is:

<failure>ConfigurationException[error(line 2): Element type "load" must be
declared. error(line 3): Element type "mib" must be declared. error(line 3):
Attribute "name" must be declared for element type "mib". ]</failure>

None of these messages identifies the real mistake. Therefore, when you see an error that flags line after line of other errors, the first thing to check is whether you have properly begun the instruction, namely with:

<das>
  <action>
    <action variables/>
  </action>
</das>
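A quick local check catches this class of error before the XML is submitted. The sketch below (an illustration, not a Cisco CNS PE utility) verifies that a request parses and is rooted at <das>:

```python
import xml.etree.ElementTree as ET

def check_request(xml_text):
    """Return 'ok' if the request is well-formed XML rooted at <das>;
    otherwise return a short description of the problem."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as err:
        return "not well-formed: {}".format(err)
    if root.tag != "das":
        return "root element is <{}>, expected <das>".format(root.tag)
    return "ok"

# A <load> without the <das> envelope is well-formed XML, but the
# root check flags the missing preamble.
status = check_request('<load><mib name="CISCO-GATEKEEPER-MIB.my"/></load>')
```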

The remainder of this section details the order of operations for configuring the rest of the Cisco CNS PE application to gather statistics and threshold crossing alerts for our example network. To save space, unless there is something unique about a command section, this chapter describes the operation and foregoes the screen captures.

One of the motivations for loading the configuration in sections is the fact that errors, should they occur, can be isolated more quickly, as an error in any part of a command set causes the entire command set to be discarded. Also, certain operations require the use of other objects (for example, collectors require schedules and devices), so the required pieces must be defined before the object that requires it.

Refer to the "Comprehensive Configuration Task List" section for the complete configuration file. The subsections correspond to the order of operations presented in this section and can be copied and pasted into the GUI window. Of course, the device files and other objects that have network specific data embedded must be changed to reflect the actual parameters of the network you are configuring.

The example code in the "Comprehensive Configuration Task List" section corresponds to the code required to implement the sample network outlined in this chapter in Figure 3-4.

Operational Components

This section details the Extensible Markup Language (XML) code required to monitor performance and to be alerted to threshold exceptions in the sample network shown in Figure 3-4. XML commands are executed through the GUI shown in Figure 3-5. Following is a list of the facilities built in this chapter and the order in which they are created.

A prerequisite to the creation of collectors is the definition of devices and schedules and possibly data handlers and purgers.

Devices

Schedules

Collectors

Once the above elements are developed, operations on the collectors are performed in conjunction with data handlers. XML code will be developed to create the following components:

Data Handlers

Thresholds

Thresholds for memory and disk usage and other critical processes.

Operations

Order of Operations

The Cisco CNS PE application may be added to or modified at any time through its GUI. This chapter takes you through the operations required to complete the functional requirements for a performance/accounting collection device. The operations are ordered for convenience and readability.

Cisco CNS PE can be configured by entering XML commands through its GUI or through a third party configuration utility associated with a performance or accounting application. This chapter details configuration by entering commands through the Cisco CNS PE GUI. The following sections provide example XML for the model network shown in Figure 3-4. Most of the variables (XML fields) listed in the following sections for configuring devices, collectors, and so on, are required; some are optional. This chapter is intended to help you become operational in a short period of time. The Cisco CNS Performance Engine User Guide provides detailed information on the required and optional fields and field defaults and should be consulted for more information.

Cisco is currently supporting partners that are developing third party reporting applications that interface with Cisco CNS PE. Along with creating customizable performance reports, these applications also automate the creation of XML code required to configure the Cisco CNS PE. The upstream applications create templates to facilitate the creation of devices, data handlers, collectors, etc., and create the XML code required to implement them in Cisco CNS PE. In this chapter, the XML code is generated with the aid of examples and templates that are contained as part of the Cisco CNS PE application. Therefore, if you encounter errors while following the examples, they are typically due to spelling errors. As the error messages are quite understandable, they can help a great deal in troubleshooting.

Examples and Templates

The Cisco CNS PE distribution files contain templates and examples to aid in understanding the XML syntax and to help streamline the design process. These files are located under the DAS_HOME directory: examples are in $DAS_HOME/examples/xml and templates are in $DAS_HOME/config/template. Refer to the Cisco CNS Performance Engine User Guide for details on their purpose and use.

perf-tme 20% pwd
/opt/CNS-PE/examples/xml
perf-tme 20% ls
adddevice.xml           cmnm.xml          radiusCollector.xml         use_templates.xml
bams.xml                mib_bulkmib.xml   remove_template_config.xml
callhistCollector.xml   overview.xml      trap.xml

perf-tme 20% pwd
/opt/CNS-PE/config/template
perf-tme 20% ls
AAAserver.xml   DS1Stats.xml   Gatekeeper.xml   Resource.xml   common.xml

Loading MIBs

Because SNMP is a major protocol used in collecting performance information, typically, the MIBs required in the collectors are loaded first. The operation is straightforward. The following code loads the seven MIBs mentioned earlier. This is presented so you can try out sending XML to Cisco CNS PE using the GUI. Before configuring, the MIBs will be unloaded so as to start with a clean, empty configuration.

With this in mind, copy and paste the following XML into the Send XML window of the
Cisco CNS PE GUI:

<das>
  <load>
    <mib name="CISCO-GATEKEEPER-MIB.my"/>
    <mib name="CISCO-PROCESS-MIB.my"/>
    <mib name="CISCO-MEMORY-POOL-MIB.my"/>
    <mib name="CISCO-POP-MGMT-MIB.my"/>
    <mib name="CISCO-AAA-SERVER-MIB.my"/>
    <mib name="CISCO-DIAL-CONTROL-MIB.my"/>
    <mib name="CISCO-VOICE-DIAL-CONTROL-MIB.my"/>
  </load>
</das>

Note the response to this action. A successful load of MIBs is represented in Figure 3-8. The next several sections detail the devices, collectors, and other facilities to be created in XML. It is not intended you copy and paste from the following sections. The XML is presented as a means of illustrating how the XML is used and as an aid in understanding the objective of the various XML facilities used. The entire suite of facilities created in these upcoming sections is reproduced in the "Comprehensive Configuration Task List" section without explanations. The preambles are in place and the facilities are separated into workable groups. The "Comprehensive Configuration Task List" section also has the code required to remove all of the created facilities.

Read the following sections for detailed explanations of the schedules, devices, and collectors to be created.

Creating Devices

The design and configuration examples for network devices, shown in Figure 3-11, follow the order of operations outlined above.


Figure 3-11: Model Network Diagram


The network diagram shows a directory gatekeeper, two gatekeepers, two gateways, one CMNM, one PGW2200 machine, a billing machine, and the Cisco CNS PE machine. XML will be designed to create device files for these network elements.

The options required for a device are specific to the collector collecting data from the device. For example, for the MIB collector, SNMP community string properties should be specified in a device. For the CMNM (and other UNIX applications) collector, Telnet and FTP properties should be specified in the device. For the RADIUS collector, a RADIUS server key is configured globally in the das.properties file. The following example XML code creates all of the devices required for the model network. For gateways and gatekeepers, the device file contains the device type, IP address, and SNMP community strings configured in the device.

XML: Directory Gatekeeper

<device name="dgk">
  <type>3660</type>
  <snmp>
    <ipaddress>172.19.49.24</ipaddress>
    <readCommunity>public</readCommunity>
    <writeCommunity>private</writeCommunity>
  </snmp>
</device>

XML: Gatekeepers

<device name="emeaGK">
  <type>3660</type>
  <snmp>
    <ipaddress>172.19.49.21</ipaddress>
    <readCommunity>public</readCommunity>
    <writeCommunity>private</writeCommunity>
  </snmp>
</device>

<device name="usGK">
  <type>3660</type>
  <snmp>
    <ipaddress>172.19.49.28</ipaddress>
    <readCommunity>public</readCommunity>
    <writeCommunity>private</writeCommunity>
  </snmp>
</device>

XML: Gateways

<device name="emeagw">
  <type>5350</type>
  <snmp>
    <ipaddress>172.19.49.13</ipaddress>
    <readCommunity>public</readCommunity>
    <writeCommunity>private</writeCommunity>
  </snmp>
</device>

<device name="usgw">
  <type>5350</type>
  <snmp>
    <ipaddress>172.19.49.5</ipaddress>
    <readCommunity>public</readCommunity>
    <writeCommunity>private</writeCommunity>
  </snmp>
</device>

XML: CMNM Host

The collectors for the following devices (Solaris based applications and the billing server) require the SNMP community strings as well as the FTP and Telnet access information needed to upload and download files. Time zone details should correspond to the date settings on the machine. The machines referred to below are in the Pacific time zone, configured as -8 hours from GMT (Pacific Standard Time).

<device name="cmnmtme">
  <type>solaris</type>
  <timezone>-08:00</timezone>
  <telnet>
    <ipaddress>172.19.49.3</ipaddress>
    <username>tme</username>
    <password>cisco</password>
    <prompt>cmnm-tme%</prompt>
  </telnet>
  <ftp>
    <ipaddress>172.19.49.3</ipaddress>
    <username>tme-ftp</username>
    <password>cisco</password>
  </ftp>
</device>

XML: PGW2200 Host

<device name="pgwtme">
  <type>solaris</type>
  <timezone>-08:00</timezone>
  <telnet>
    <ipaddress>172.19.49.4</ipaddress>
    <username>tme</username>
    <password>cisco</password>
    <prompt>pgw-tme%</prompt>
  </telnet>
  <ftp>
    <ipaddress>172.19.49.4</ipaddress>
    <username>tme-ftp</username>
    <password>cisco</password>
  </ftp>
</device>

XML: Billing Server

<device name="billtme">
  <type>nt2000</type>
  <timezone>-08:00</timezone>
  <telnet>
    <ipaddress>172.19.49.7</ipaddress>
    <username>tme</username>
    <password>cisco</password>
    <prompt>bill-tme%</prompt>
  </telnet>
  <ftp>
    <ipaddress>172.19.49.7</ipaddress>
    <username>tme-ftp</username>
    <password>cisco</password>
  </ftp>
</device>

XML: Cisco CNS PE Host

<device name="perftme">
  <type>solaris</type>
  <timezone>-08:00</timezone>
  <telnet>
    <ipaddress>172.19.49.10</ipaddress>
    <username>tme</username>
    <password>cisco</password>
    <prompt>perf-tme%</prompt>
  </telnet>
  <ftp>
    <ipaddress>172.19.49.10</ipaddress>
    <username>tme-ftp</username>
    <password>cisco</password>
  </ftp>
</device>


Note   While this method is somewhat time consuming at first, it also allows you to define specific access attributes that can be modified while the system is operational. The ability to configure individual access parameters increases the security potential against unauthorized access. Device files can be copied and pasted, and then all that is required is to modify the specifics.
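The copy-and-modify step mentioned in the Note above can itself be scripted. The sketch below is hypothetical (the host names, types, and addresses are placeholders from this chapter's sample network) and renders one SNMP <device> block per host:

```python
SNMP_DEVICE = """<device name="{name}">
  <type>{dtype}</type>
  <snmp>
    <ipaddress>{ip}</ipaddress>
    <readCommunity>{read}</readCommunity>
    <writeCommunity>{write}</writeCommunity>
  </snmp>
</device>"""

def snmp_devices(hosts, read="public", write="private"):
    """Render one SNMP <device> definition per (name, type, address)
    entry, sharing the same community strings."""
    return "\n".join(
        SNMP_DEVICE.format(name=name, dtype=dtype, ip=ip, read=read, write=write)
        for name, dtype, ip in hosts)

xml = snmp_devices([("dgk", "3660", "172.19.49.24"),
                    ("usgw", "5350", "172.19.49.5")])
```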

Creating Schedules

Schedules have a start and stop time, an interval, and a delay between multiple operations. Refer to the Cisco CNS Performance Engine User Guide for the default values. The schedules necessary for the performance requirements of the model network are created in this section.

5 Second Schedule

<schedule name="S5sec">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT5S</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

10 Second Schedule

<schedule name="S10sec">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT10S</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

1 Minute Schedule

<schedule name="S1min">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT1M</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

3 Minute Schedule

<schedule name="S3min">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT3M</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

5 Minute Schedule

<schedule name="S5min">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT5M</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

15 Minute Schedule

<schedule name="S15min">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT15M</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>

60 Minute Schedule

<schedule name="S60min">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>PT60M</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>
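The seven schedules differ only in name and interval, so they lend themselves to the same generation approach as devices. A hypothetical sketch, reusing the start and end dates and the ISO 8601 duration strings (PT5S, PT15M, and so on) from the blocks above:

```python
SCHEDULE = """<schedule name="{name}">
  <start>2001-08-31T00:00:00.000-08:00</start>
  <end>2002-12-25T00:00:00.000-08:00</end>
  <interval>{interval}</interval>
  <interTaskDelay>PT0.1S</interTaskDelay>
</schedule>"""

def schedules(specs):
    """Render one <schedule> block per (name, ISO 8601 interval) pair."""
    return "\n".join(SCHEDULE.format(name=name, interval=interval)
                     for name, interval in specs)

xml = schedules([("S5sec", "PT5S"), ("S15min", "PT15M"), ("S60min", "PT60M")])
```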

Creating Collectors

The following sections detail the operations of and the XML code required to configure the following collectors:

Bulk MIB Collectors

Many Cisco devices support SNMP for performance data retrieval, but retrieving large tables with SNMP requires many getbulk requests and carries correspondingly high overhead. To mitigate this, many Cisco devices support bulk file transfer for retrieving large tables. The device must be configured to indicate the objects to be retrieved using the bulk transfer facility; when the transfer is requested, the device writes these objects to the specified file and transfers the file using FTP to the requested host.

The BulkMIB collector limits bulk file transfers to only tables (the object identifier (OID) given in the configuration should be that of a table). The BulkMIB collector uses the CISCO-BULK-FILE-MIB.my and CISCO-FTP-CLIENT-MIB.my MIBs at the device. During each collection period, the BulkMIB collector configures the bulk MIB and FTP Client MIB, requests the file be created, and requests the file be sent using FTP. It then removes the configuration at the device. Therefore, certain overhead is associated with Cisco CNS PE setting up bulk MIBs for each collection and the device must generate and FTP the file for each collection. Due to these reasons, it is recommended bulk MIB transfer be used only for retrieving large tables with a reasonable collection frequency. Except for data collection, the BulkMIB collector and MIB collector behave similarly with respect to threshold processing, data export, and data purge.

You must set the limits for the tables in these MIBs appropriately (cbfDefineMaxFile, cbfDefineMaxObjects, cbfStatusMaxFiles, cfcRequestMaximum). You must also set the bulkmib.ftp.username and bulkmib.ftp.password properties appropriately in the das.properties file. These properties are stored in the CISCO-FTP-CLIENT-MIB.my MIB. The user account used for FTP should have write privileges to the $DAS_HOME/tmp directory.

Different Bulk MIB collectors are created for different classes of devices. In this example, three different Bulk MIB collectors are created; one for gatekeeper zone information, one for DS0/DS1 usage statistics, and one for AAA server statistics. The Bulk MIB collectors make use of previously created schedules. Collection intervals are aligned with how many devices are being accessed and the size of the storage buffers in the devices. You must find an interval that is frequent enough so the data in the devices is not overwritten, yet not so frequent you overload the network with management traffic.
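The sizing trade-off can be framed with a simple bound: if a device buffers at most N records and fills them at rate r, data is lost unless the collection interval stays below N / r. The arithmetic might be sketched as follows (the numbers are illustrative only, not device specifications):

```python
def max_safe_interval(buffer_entries, entries_per_minute, margin=0.5):
    """Longest collection interval, in minutes, that avoids a device
    buffer wrapping before it is polled. The margin of 0.5 polls
    twice as fast as the bare minimum."""
    return (buffer_entries / float(entries_per_minute)) * margin

# A 600-entry buffer filling at 100 entries per minute wraps in
# 6 minutes; with a 50 percent margin, poll at least every 3 minutes.
interval = max_safe_interval(600, 100)
```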

Because the various MIBs from which Cisco CNS PE collects data have been loaded into memory on the Cisco CNS PE machine, the tables being monitored can be referred to without the accompanying OID string common to SNMP variables.

XML: Gatekeeper Bulk MIB Collector

<collector name="GateKeeperBulkMibCollector">
  <schedule name="S15min"/>
  <BulkMibCollector>
    <oid>cgkZoneTable</oid>
    <oid>cgkLocalZoneTable</oid>
  </BulkMibCollector>
</collector>

XML: DS1 Stats Bulk MIB Collector

<collector name="DS1StatsBulkMibCollector">
  <schedule name="S15min"/>
  <BulkMibCollector>
    <oid>cpmDS0UsageTable</oid>
    <oid>cpmDS1DS0UsageTable</oid>
  </BulkMibCollector>
</collector>

XML: AAA Server Bulk MIB Collector

<collector name="AAAServerBulkMibCollector">
  <schedule name="S60min"/>
  <BulkMibCollector>
    <oid>casStatisticsTable</oid>
  </BulkMibCollector>
</collector>

MIB Collectors

MIB collectors are the SNMP counterpart to Bulk MIB collectors. MIB collectors target individual MIB variables as opposed to entire MIB tables. This example configures three MIB collectors that correspond to the three Bulk MIB collectors and one additional MIB collector that does not have a matching Bulk MIB collector. Note the MIB variables being collected are found in the previously loaded MIBs.

The choice of variables to collect is configurable within the collector XML code. The variables detailed in the example code are just suggestions and should be customized to the particular needs of the network involved.

As with the Bulk MIB tables, individual SNMP variables can be referred to with just the variable name. Because the number of MIBs loaded into memory is relatively small, (seven in this example), it is possible to find the desired data without the aid of the entire variable's object identifier (OID).

XML: Gatekeeper MIB Collector

The Gatekeeper MIB collector collects individual zone information from the gatekeepers and directory gatekeepers. The Gatekeeper MIB is queried every fifteen minutes. Gatekeeper MIB statistics are collected from the Cisco Gatekeeper MIB:

ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-GATEKEEPER-MIB.my

<collector name="GateKeeperMibCollector">
  <schedule name="S15min"/>
  <MibCollector>
    <snmpGetMany>
      <oid>cgkZoneZoneName</oid>
      <oid>cgkZoneDomain</oid>
      <oid>cgkZoneAddressLookupFailures</oid>
      <oid>cgkZoneEndpointTimeouts</oid>
      <oid>cgkZoneOtherFailures</oid>
      <oid>cgkZoneLRQs</oid>
      <oid>cgkLZoneACFs</oid>
      <oid>cgkLZoneARJs</oid>
      <oid>cgkLZoneTotalBandwidth</oid>
      <oid>cgkLZoneAllocTotalBandwidth</oid>
      <oid>cgkLZoneInterzoneBandwidth</oid>
      <oid>cgkLZoneAllocInterzoneBandwidth</oid>
    </snmpGetMany>
  </MibCollector>
</collector>

XML: DS0/DS1 Statistics MIB Collectors

The DS1 statistics MIB collects usage information for DS0 and DS1 connections. It collects MIB variables every fifteen minutes. DS0 and DS1 usage statistics are available in the Cisco PoP Management MIB:

ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-POP-MGMT-MIB.my

<collector name="DS1StatsMibCollector">
  <schedule name="S15min"/>
  <MibCollector>
    <snmpGetMany>
      <!-- The following are DS0 stats. -->
      <oid>cpmDS1SlotIndex</oid>
      <oid>cpmDS1PortIndex</oid>
      <oid>cpmChannelIndex</oid>
      <oid>cpmConfiguredType</oid>
      <oid>cpmDS0CallType</oid>
      <oid>cpmL2Encapsulation</oid>
      <oid>cpmCallCount</oid>
      <oid>cpmTimeInUse</oid>
      <oid>cpmInOctets</oid>
      <oid>cpmOutOctets</oid>
      <oid>cpmInPackets</oid>
      <oid>cpmOutPackets</oid>
      <oid>cpmAssociatedInterface</oid>
      <!-- The following are DS1 stats. -->
      <oid>cpmDS1UsageSlotIndex</oid>
      <oid>cpmDS1UsagePortIndex</oid>
      <oid>cpmDS1ActiveDS0s</oid>
      <oid>cpmDS1ActiveDS0sHighWaterMark</oid>
      <oid>cpmDS1TotalAnalogCalls</oid>
      <oid>cpmDS1TotalDigitalCalls</oid>
      <oid>cpmDS1TotalV110Calls</oid>
      <oid>cpmDS1TotalV120Calls</oid>
      <oid>cpmDS1TotalCalls</oid>
      <oid>cpmDS1TotalTimeInUse</oid>
      <oid>cpmDS1CurrentIdle</oid>
      <oid>cpmDS1CurrentOutOfService</oid>
      <oid>cpmDS1CurrentBusyout</oid>
      <oid>cpmDS1InOctets</oid>
      <oid>cpmDS1OutOctets</oid>
      <oid>cpmDS1InPackets</oid>
      <oid>cpmDS1OutPackets</oid>
    </snmpGetMany>
  </MibCollector>
</collector>

XML: AAA Server MIB Collector

The AAA Server MIB collector retrieves data from the Network Access Server (NAS). The AAA Server MIB variables are collected every sixty minutes. The AAA Server MIB is detailed here:

ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-MIB.my

<collector name="AAAServerMibCollector">
  <schedule name="S60min"/>
  <MibCollector>
    <snmpGetMany>
      <oid>casAuthenRequests</oid>
      <oid>casAuthenRequestTimeouts</oid>
      <oid>casAuthenUnexpectedResponses</oid>
      <oid>casAuthenServerErrorResponses</oid>
      <oid>casAuthenIncorrectResponses</oid>
      <oid>casAuthenResponseTime</oid>
      <oid>casAuthenTransactionSuccesses</oid>
      <oid>casAuthenTransactionFailures</oid>
      <oid>casAuthorRequests</oid>
      <oid>casAuthorRequestTimeouts</oid>
      <oid>casAuthorUnexpectedResponses</oid>
      <oid>casAuthorServerErrorResponses</oid>
      <oid>casAuthorIncorrectResponses</oid>
      <oid>casAuthorResponseTime</oid>
      <oid>casAuthorTransactionSuccesses</oid>
      <oid>casAuthorTransactionFailures</oid>
      <oid>casAcctRequests</oid>
      <oid>casAcctRequestTimeouts</oid>
      <oid>casAcctUnexpectedResponses</oid>
      <oid>casAcctServerErrorResponses</oid>
      <oid>casAcctIncorrectResponses</oid>
      <oid>casAcctResponseTime</oid>
      <oid>casAcctTransactionSuccesses</oid>
      <oid>casAcctTransactionFailures</oid>
      <oid>casState</oid>
      <oid>casCurrentStateDuration</oid>
      <oid>casPreviousStateDuration</oid>
      <oid>casTotalDeadTime</oid>
      <oid>casDeadCount</oid>
    </snmpGetMany>
  </MibCollector>
</collector>

XML: Resource MIB Collector

The Resource MIB collector gathers statistics related to system resource utilization found in the Cisco Memory MIB:

ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-MEMORY-POOL-MIB.my

as well as the Cisco Process MIB:

ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-PROCESS-MIB.my

The schedule for the Resource MIB collector is every fifteen minutes.

<collector name="ResourceMibCollector">
  <schedule name="S15min"/>
  <MibCollector>
    <snmpGetMany>
      <oid>cpmCPUTotalPhysicalIndex</oid>
      <oid>cpmCPUTotal5sec</oid>
      <oid>cpmCPUTotal1min</oid>
      <oid>cpmCPUTotal5min</oid>
      <oid>ciscoMemoryPoolName</oid>
      <oid>ciscoMemoryPoolValid</oid>
      <oid>ciscoMemoryPoolUsed</oid>
      <oid>ciscoMemoryPoolFree</oid>
      <oid>ciscoMemoryPoolLargestFree</oid>
    </snmpGetMany>
  </MibCollector>
</collector>

RADIUS Collector

The RADIUS collector receives RADIUS accounting requests and returns an accounting response (an acknowledgement) for each request, back to the gateway. Cisco CNS PE acts as a RADIUS Accounting Server and the gateway acts as the Client. Transactions between gateways and the RADIUS Accounting Server are authenticated through the use of a shared secret which is never sent over the network. A RADIUS accounting packet is encapsulated in a User Datagram Protocol (UDP) datagram. The default UDP port on the Cisco CNS PE host, as well as the RADIUS Accounting server, is 1813. The UDP port is also configurable through the XML interface.

A Cisco VoIP call may consist of several call legs, where each leg represents a part of the path of a complete call. Cisco CNS PE can receive both start and stop accounting requests, however, it only processes stop requests that are sent when a call for a call leg is completed. Start accounting records are discarded by Cisco CNS PE. Each call processed through a gateway consists of an incoming and an outgoing call leg. Call legs 1 and 2 represent incoming and outgoing call legs for the originating gateway, while call legs 3 and 4 represent incoming and outgoing call legs for the terminating gateway of a four leg call.

The RADIUS collector correlates the RADIUS stop data and Call History data (if a CallHistory collector is configured). Only one RADIUS collector and one CallHistory collector can be configured on a single Cisco CNS PE host. By default, the correlation is performed based on the schedule assigned to the RADIUS collector. If the CallHistory collector is configured, the recommended value for the correlationInterval parameter should be at least twice the value of the collection interval configured for CallHistory.
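The "at least twice" rule can be computed directly from the duration strings used in schedules. A minimal sketch, handling only the simple PT<n>M and PT<n>S forms that appear in this chapter:

```python
import re

def recommended_correlation(collection_interval):
    """Return an ISO 8601 duration twice the given CallHistory
    collection interval (simple PT<n>M or PT<n>S forms only)."""
    match = re.fullmatch(r"PT(\d+)([MS])", collection_interval)
    if not match:
        raise ValueError("unsupported duration: " + collection_interval)
    return "PT{}{}".format(2 * int(match.group(1)), match.group(2))

# A CallHistory collector on the S3min schedule implies a
# correlation interval of no less than PT6M.
recommended = recommended_correlation("PT3M")
```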

The RADIUS collector is configured in two places on Cisco CNS PE. The first is in the Cisco CNS PE configuration file which is located at $DAS_HOME/config/das.properties and contains the following:

radius.req.threads=1            # number of request processing threads
radius.ack.threads=1            # number of ack processing threads
radius.cdr.threads=1            # number of cdr writer threads
radius.secret=XXXXX             # key to generate the authenticator
radius.cdr.filename=radius      # prefix name for radius data files
radius.file.close.interval=3    # when to generate a new radius data file (minutes)
radius.aging.interval=30        # when to age out old cdrs to a radius data file (seconds)

Possibly the most important attribute is radius.secret, as it must match the secret configured on the network elements sending RADIUS information. If it does not, Cisco CNS PE does not collect the RADIUS information. All network devices sending RADIUS data to a single Cisco CNS PE must be configured with the same RADIUS key for that Cisco CNS PE server.

The externally configurable parts to the RADIUS collector are the port upon which it listens for RADIUS information and how often it correlates RADIUS data. These values are specified in the XML code for the collector.

XML: RADIUS Collector

<collector name="radiusCollector">
  <schedule name="S15min"/>
  <radiusCollector>
    <acctPort>1646</acctPort>
    <fileInterval>PT7M</fileInterval>
    <ageInterval>PT2M</ageInterval>
  </radiusCollector>
</collector>

AAA broadcast accounting in the Cisco Voice Gateways enables AAA accounting records to be transmitted to both a Cisco CNS PE and a RADIUS AAA server. All gateways sending accounting requests to Cisco CNS PE should be configured with the same RADIUS secret.

In Cisco CNS PE, the RADIUS collector collects RADIUS accounting packets, correlates RADIUS Call Detail Records (CDRs), and writes the correlated RADIUS CDRs to a data file. The data is generated in Comma Separated Values (CSV) format. The name of this file is passed to the VoipCorrelator module, which correlates RADIUS and Call History call data and saves the results in flat files in the $DAS_HOME/data/VoipCorrelator directory. The upstream performance management or NMS application should retrieve files from the $DAS_HOME/data/VoipCorrelator directory and delete them after retrieval.

The format of the filenames is: <Cisco CNS PE-name>_<timestamp>.

For example:

perf-tme_D20011219T193632Z

perf-tme_D20011219T193732Z
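An upstream application retrieving these files can recover the host name and timestamp from the filename itself. A sketch, assuming the <name>_D<date>T<time>Z layout shown above:

```python
from datetime import datetime

def parse_correlated_filename(filename):
    """Split a correlated-data filename into the Cisco CNS PE host
    name and the UTC timestamp encoded after the last underscore."""
    host, _, stamp = filename.rpartition("_")
    return host, datetime.strptime(stamp, "D%Y%m%dT%H%M%SZ")

host, stamp = parse_correlated_filename("perf-tme_D20011219T193632Z")
```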


Note   The RADIUS collector logs CDR statistics to the $DAS_HOME/logs/cdr.log file. This file identifies how many CDRs are written to each data file.

Cisco CNS PE purges the correlated files based on the purge parameters specified in the das.properties file. However, a purger can be configured for the RADIUS collector. In usual operations, the performance manager or the NMS application should delete the files after they are retrieved.


Note   Hotspot polling, threshold processing, on demand data export using XML interface, and periodic scheduling of data export are not supported for this collector. In order for VoipCorrelation to work correctly, the RADIUS collector must be started before the Call History collector.

The RADIUS collector does not support overloading of RADIUS attribute 44 (Acct-Session-Id). You must enable Vendor Specific Attributes (VSAs) by using the gw-accounting h323 vsa command on voice gateways.

Call History MIB Collector

The CallHistory collector retrieves call history information at the originating and terminating voice gateways. The CallHistory collector accomplishes this task by polling the following MIBs:

CISCO-DIAL-CONTROL-MIB: ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-DIAL-CONTROL-MIB.my

CISCO-VOICE-DIAL-CONTROL-MIB: ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-VOICE-DIAL-CONTROL-MIB.my

CISCO-VOICE-COMMON-DIAL-CONTROL-MIB: ftp://ftpeng.cisco.com/pub/mibs/v2/CISCO-VOICE-COMMON-DIAL-CONTROL-MIB.my

The variable names listed in the MIBs and the variable names listed in the collector do not appear to be the same. There is an inferred mapping of MIB variable names to collector variable names. This mapping is detailed in Appendix B of the Cisco CNS Performance Engine User Guide and in the "Attribute Mapping for the CallHistory Collector" section of this chapter.

Gateways have a configurable buffer size to hold the call history information and time allotted to retain this data. Cisco CNS PE should be configured so that it retrieves the call history information while it is still available at the device.


Note   The CallHistory collector should not be used with the IOS12.2(XU) release.

The CallHistory collector retrieves the telephony and VoIP call legs from the Call History tables, correlates these two legs for each call, and writes the resulting data to a temporary file in the $DAS_HOME/data/callhistory directory.

The CallHistory collector should be used only in conjunction with the RADIUS collector, so that its results can be appended and correlated to the data collected through RADIUS CDRs. A third-party application must ensure that only one CallHistory collector is configured on each Cisco CNS PE system. The CallHistory collector automatically determines how many entries to poll by retrieving the maxTableEntries value from the gateway.

The following XML example shows a typical configuration for a Call History collector. The connection-id and h323-call-origin attributes must be specified in the configuration; depending on the load of the network and the impact of polling on the device, other attributes can be omitted. Because only one Call History collector can be configured, the collection schedule must reflect the needs of the busiest gateway. The collector below uses a three-minute schedule so that the Call History table of a fully loaded AS5850 does not wrap and get overwritten.

XML: Call History Collector

<collector name="callHistoryCollector">

<schedule name="S3min"/>

<callHistoryCollector>

<callHistAttrs>

<generic>

<attr>h323-setup-time</attr>

<attr>peer-address</attr>

<attr>peer-subaddress</attr>

<attr>peer-id</attr>

<attr>peer-if-index</attr>

<attr>logical-if-index</attr>

<attr>disconnect-cause</attr>

<attr>disconnect-text</attr>

<attr>h323-connect-time</attr>

<attr>h323-disconnect-time</attr>

<attr>h323-call-origin</attr>

<attr>charged-units</attr>

<attr>info-type</attr>

<attr>bytes_out</attr>

<attr>paks_out</attr>

<attr>bytes_in</attr>

<attr>paks_in</attr>

</generic>

<voip>

<attr>connection-id</attr>

<attr>h323-remote-address</attr>

<attr>remote-udp-port</attr>

<attr>round-trip-delay</attr>

<attr>selected-qos</attr>

<attr>session-protocol</attr>

<attr>session-target</attr>

<attr>ontime-rv-playout</attr>

<attr>gapfill-with-prediction</attr>

<attr>gapfill-with-interpolation</attr>

<attr>gapfill-with-redundancy</attr>

<attr>hiwater-playout-delay</attr>

<attr>lowater-playout-delay</attr>

<attr>receive-delay</attr>

<attr>vad-enable</attr>

<attr>coder-type-rate</attr>

<attr>h323-voice-quality</attr>

<attr>lost-packets</attr>

<attr>early-packets</attr>

<attr>late-packets</attr>

</voip>

<telephony>

<!-- Telephony Attributes -->

<attr>connection-id</attr>

<attr>tx-duration</attr>

<attr>voice-tx-duration</attr>

<attr>fax-tx-duration</attr>

<attr>coder-type-rate</attr>

<attr>noise-level</attr>

<attr>acom-level</attr>

<attr>session-target</attr>

<attr>img-pages-count</attr>

</telephony>

</callHistAttrs>

</callHistoryCollector>

</collector>

SAA Collectors

SAA collectors collect performance data measured by the Service Assurance Agent (SAA) within an IOS device. They provide different types of measurements by simulating different types of protocols. SAA is used to measure performance statistics between two routers: the source router is where SAA operations are configured, and the target router is usually configured as the SAA responder. For the jitter and UDP echo operations, the target device must have rtr responder configured in the CLI.
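On the target router, enabling the responder is a single CLI command. The sketch below also includes a verification step; the show command is an assumption based on typical IOS 12.x syntax, which varies by release:

```
! On the target router: enable the SA Agent responder
rtr responder
! Verify the responder is running (command syntax varies by IOS release)
show rtr responder
```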

Three types of SAA collectors can be configured: ICMP echo, jitter, and UDP echo.

For details about all SAA collectors, refer to the Network Monitoring Using Cisco Service Assurance Agent section of the Cisco IOS Configuration Fundamentals Configuration Guide, Release 12.2 at the following URL: http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/ffun_c/fcfprt3/fcf017.htm

and the Cisco Service Assurance Agent Commands section of the Cisco IOS Configuration Fundamentals Configuration Guide, Release 12.2 at the following URL:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/ffun_r/ffrprt3/frf017.htm.

All three SAA collectors retrieve measurements about their respective operations and store the data in the database. The data is exported in comma-separated values (CSV) format, either at a specified frequency or on demand using the XML interface (<get> <data> tags). Data export is detailed in the "Data Handlers" section.

For data format specifications, refer to Chapter 5 in the Cisco CNS Performance Engine User Guide.

Both purging and hotspot polling are supported by the SAA collectors. If a collector is configured for hotspot polling, Cisco CNS PE does not store the data internally and does not process it after collection; as soon as the data is retrieved, it is written to the Cisco CNS Integration Bus according to the configuration details.

Because SAA operations are point to point, multiple collectors of the same type may be configured in order to track performance from various source/target pairs. The source device is listed as
<device name="device-name"/> (line 3 below) and the target device is referred to by its IP address (line 5 below). If rtr responder is not configured on the target IOS device, the collector creation may succeed, but the collector will not work. You may wish to create multiple collectors that are identical except for the source/target pairs, in order to measure the same variables (for example, jitter) between different portions of the network.

You could also create the same kind of SAA collector (for example, jitter) with the same source/target pair but with different packet sizes, port numbers, or numbers of packets. The three collectors below are only starting examples. Because the sample network is a single subnet, there is little reason to measure jitter or round-trip delay here; typically these collectors are created between end points in the VoIP network so the results are meaningful. Several SAA collectors of the same type may be configured with source and target devices chosen throughout the network.

XML: SAA ICMP Collector

<collector name="SAAicmp">

<schedule name="S1min"/>

<device name="usgw"/>

<SaaIcmpEchoCollector>

<targetAddress>172.19.49.12</targetAddress>

<timeout>PT5S</timeout>

<packetSize>512</packetSize>

</SaaIcmpEchoCollector>

</collector>

XML: SAA Jitter Collector

In the SAAjitter collector, the jitter between UDP packets is calculated. The rtr responder CLI command must be configured on the target router.

<collector name="SAAjitter">

<schedule name="S1min"/>

<device name="usgw"/>

<SaaJitterCollector>

<targetAddress>172.19.49.12</targetAddress>

<targetPort>101</targetPort>

</SaaJitterCollector>

</collector>

XML: SAA UDP Echo Collector

The SAAudpEcho collector uses the SA Agent on IOS routers to set up a UDP Echo operation that measures the round trip time of a UDP packet sent to a responder.

The rtr responder CLI command must be configured on the target router.

<collector name="SAAudpEcho">

<schedule name="S5min"/>

<device name="usgw"/>

<SaaUdpEchoCollector>

<targetAddress>172.19.49.12</targetAddress>

<targetPort>100</targetPort>

</SaaUdpEchoCollector>

</collector>

CMNM Collector

The Cisco MGC Node Manager (CMNM) collector works with a CEMF machine running the CMNM application to collect performance data about the devices managed by that Element Manager. The CMNM application is detailed in "Provisioning." CMNM collects fault and accounting data from the network elements that comprise the PGW2200. The collected data is transported periodically to the Cisco CNS PE for processing. The CMNM collector facilitates this process.

The following criteria must be met to use the CMNM collector:

    1. You must turn on CEMF performance polling using the CMNM Element Manager (see Chapter 7 in http://www.cisco.com/univercd/cc/td/doc/product/access/sc/rel8/cmnmgr/ramb15.pdf for instructions).

    2. There must be a Telnet account on the CMNM machine with permissions to run CEMF command line programs.

    3. The Telnet account on the CMNM machine used by the Cisco CNS PE software must use bash as the default shell. The UNIX shells, sh, ksh, and csh, have a limitation on the command length. The history admin export command length exceeds this limitation and therefore, the Cisco CNS PE software cannot execute this command. The bash shell overcomes this limitation.

    4. There must be an FTP account on the CMNM machine.

The CMNM collector makes use of previously configured thresholds and devices. Because the data handler is unique to the CMNM host, shown in Figure 3-4, it is just as efficient to include it in the collector as to create a separate data handler. The only mandatory element in the collector is the history criteria; the remaining elements have default values, detailed in the Cisco CNS Performance Engine User Guide.

XML: Cisco MNM Collector

<das>

<create>

<!-- CEMF machine -->

<device name="cmnm-tme.cisco.com">

<timezone>-00:00</timezone>

<telnet>

<ipaddress>172.19.49.3</ipaddress>

<username>tme</username>

<password>tme-cisco</password>

<prompt>&gt;</prompt>

</telnet>

<ftp>

<ipaddress>172.19.49.3</ipaddress>

<username>tme</username>

<password>tme-ftp</password>

</ftp>

</device>

<!-- Attribute Threshold Crossing Alert -->

<threshold name="thr1">

<!-- CEMF-specific attribute name -->

<attribute name="mgcController:RFC1213-MIB.udpOutDatagrams">

<raiseOperation>LT</raiseOperation>

<raiseValue>817200</raiseValue>

<clearOperation>GT</clearOperation>

<clearValue>817200</clearValue>

<level>critical</level>

</attribute>

</threshold>

<!-- FTP Data Handler sends data from CNS-PE to storage machine-->

<dataHandler name="ftpDH1">

<ftp>

<urlPrefix>ftp://tme:tme-cmnm@storage.cisco.com/tmp/</urlPrefix>

</ftp>

</dataHandler>

<!-- CNS-PE Collector -->

<collector name="CMNMTest">

<schedule name="S15min"/>

<device name="cmnm-tme.cisco.com">

<threshold name="thr1"/>

</device>

<dataHandler name="ftpDH1">

<schedule name="S15min"/>

</dataHandler>

<!-- CMNMCollector-specific -->

<cmnmCollector>

<cemf-home>/opt/cemf</cemf-home>

<export-file>/tmp/dumpFile</export-file>

<max-export-file-size>-1</max-export-file-size>

<command-delay>30</command-delay>

<history-criteria>hostAPCHistoryCriteria</history-criteria>

</cmnmCollector>

</collector>

The CMNM collector exports data in comma-separated values (CSV) format. The first line of each exported file contains the column headings separated by commas. Each subsequent line contains the following data:

The CMNM collector supports Threshold Crossing Alerts (TCAs). TCAs are configured for specific attributes. The collector checks for TCAs after each collection of performance data.


Note   Attribute names for TCAs are CEMF-specific.

The CMNM collector supports purging of collected performance data. This is done on a schedule configured for all of Cisco CNS PE or based on an individual purger.

Use the following information when you configure this collector:

XML Threshold Crossing Alerts

Once data is collected from polled MIBs, Cisco CNS PE can check against the thresholds set for a variable by upper layer applications. If a threshold is crossed, Cisco CNS PE uses a notifier to send out a Threshold Crossing Alert (TCA) message.

Cisco CNS PE supports OverThreshold and ClearThreshold alerts. For example, suppose a CPU utilization threshold is set as follows:
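The original threshold values were not preserved in this excerpt. A definition consistent with the scenario described below would be the following sketch, in which the raise value of 80, the clear value of 50, and the threshold name are assumptions for illustration (the attribute name is taken from Table 3-3):

```xml
<threshold name="cpuTHR">
  <attribute name="sys.cpu.loadPercentage">
    <!-- Raise an alert when utilization exceeds 80% -->
    <raiseOperation>GT</raiseOperation>
    <raiseValue>80</raiseValue>
    <!-- Clear the alert when utilization falls below 50% -->
    <clearOperation>LT</clearOperation>
    <clearValue>50</clearValue>
    <level>critical</level>
  </attribute>
</threshold>
```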

During collection, process utilization is 85%, so an OverThreshold alert is raised. In the next poll, utilization is 80%; no action is taken because an OverThreshold alert has already been raised. In the next poll, utilization is 65%; again, no action is taken. In the next poll, utilization is 45%; a ClearThreshold alert is sent because an OverThreshold alert is in effect (otherwise, no ClearThreshold alert would be sent).


Note   The management system must be aware that it needs to correlate the ClearThreshold alert to the OverThreshold alert.

Cisco CNS PE allows grouping of threshold attributes together and dynamic addition and removal of attributes to and from this group of threshold attributes. This provides great flexibility for the OSS to dynamically add and remove attributes or change attribute values. Cisco CNS PE also provides the capability to set a default set of thresholds for a given collector and override these attributes for some devices.
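Dynamically adding an attribute to an existing threshold group might look like the following sketch. The <add> syntax is an assumption by analogy with the <add> operations shown later in this chapter, and the attribute name and values are illustrative only:

```xml
<das>
  <add>
    <!-- Add one more attribute to the existing packetTHR threshold group -->
    <threshold name="packetTHR">
      <attribute name="ifOutErrors">
        <raiseOperation>GT</raiseOperation>
        <raiseValue>10</raiseValue>
        <clearOperation>LT</clearOperation>
        <clearValue>5</clearValue>
        <level>minor</level>
      </attribute>
    </threshold>
  </add>
</das>
```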

XML: Definition of Thresholds

In the following XML sample, thresholds packetTHR and usgwPTHR are created. packetTHR can be used as the default set for the collector, and usgwPTHR is used as the device-specific threshold for device usgw in that collector.

<das>

<create>

<threshold name="packetTHR">

<attribute name="ifMtu">

<raiseOperation>GT</raiseOperation>

<raiseValue>4000</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>2000</clearValue>

<level>critical</level>

</attribute>

<attribute name="ifInErrors">

<raiseOperation>GT</raiseOperation>

<raiseValue>10</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>5</clearValue>

<level>critical</level>

</attribute>

</threshold>

<threshold name="usgwPTHR">

<attribute name="ifMtu">

<raiseOperation>GT</raiseOperation>

<raiseValue>5000</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>3000</clearValue>

<level>critical</level>

</attribute>

<attribute name="ifInErrors">

<raiseOperation>GT</raiseOperation>

<raiseValue>15</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>6</clearValue>

<level>critical</level>

</attribute>

</threshold>

</create>

</das>

The following collector makes use of the fact that the device threshold overrides the collector threshold:

<das>

<create>

<collector name="packetcoll">

<schedule name="S1min"/>

<device name="usgw">

<threshold name="usgwPTHR"/>

</device>

<threshold name="packetTHR"/>

<notifier name="cnsnotifier"/>

<MibCollector>

<snmpGetMany>

<oid>ifMtu</oid>

<oid>ifInErrors</oid>

</snmpGetMany>

</MibCollector>

</collector>

</create>

</das>

The following is sample XML for configuring several threshold crossing alerts referenced in the System collector. Refer to Table 3-3 for the keyword-to-attribute mapping.

<threshold name="diskTHRdb">

<attribute name="sys.disk.db.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>35</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>30</clearValue>

<level>minor</level>

</attribute>

</threshold>

<threshold name="diskTHRdata">

<attribute name="sys.disk.data.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>35</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>30</clearValue>

<level>minor</level>

</attribute>

</threshold>

<threshold name="sysTHRmem">

<attribute name="sys.memory.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>85</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>75</clearValue>

<level>critical</level>

</attribute>

</threshold>

<threshold name="cpuTHRload">

<attribute name="sys.cpu.loadPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>90</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>75</clearValue>

<level>critical</level>

</attribute>

</threshold>

XML: System Collector

The System collector monitors the health of the Cisco CNS PE host, including disk, CPU, and memory usage. The disks this collector monitors are the disks used for the data, the database, and $DAS_HOME. In addition, based on its configuration, it can also send out fatal system errors and informational messages during system startup and shutdown. It is recommended that every Cisco CNS PE host be configured with a System collector. No device is referenced in the System collector because it monitors the host itself; the <SystemCollector/> element tells Cisco CNS PE that it is referring to itself.

<collector name="sys">

<schedule name="S1min"/>

<threshold name="diskTHRdb"/>

<threshold name="diskTHRdata"/>

<threshold name="sysTHRmem"/>

<threshold name="cpuTHRload"/>

<notifier name="cnsnotifier"/>

<SystemCollector/>

</collector>

The System collector does not have internal storage; however, it supports threshold processing on the attributes listed in Table 3-3.


Table 3-3: System Collector Thresholding Attributes (Attribute: Description)

sys.memory.availableMega: Memory available on the machine (in megabytes).

sys.memory.usedPercentage: Memory used on the machine (percentage).

sys.disk.home.availableMega: Disk space available for the home directory (in megabytes).

sys.disk.home.usedPercentage: Disk space used for the home directory (percentage).

sys.disk.db.availableMega: Disk space available for the database directory (in megabytes).

sys.disk.db.usedPercentage: Disk space used for the database directory (percentage).

sys.disk.data.availableMega: Disk space available for the data directory (in megabytes).

sys.disk.data.usedPercentage: Disk space used for the data directory (percentage).

sys.cpu.loadPercentage: CPU load on the machine (percentage).

java.memory.availableMega: Memory available on the Java virtual machine (in megabytes).

java.memory.usedPercentage: Memory used on the Java virtual machine (percentage).

VoIP DAS Collector

The VoIPDAS collector collects correlated call data from multiple Cisco CNS PE hosts and correlates four call legs for each call. Because of the distributed nature of Cisco CNS PE, one Cisco CNS PE may not have the call detail record for all four call legs. For example, call legs one and two are collected by Cisco CNS PE "A", while call legs three and four are collected by Cisco CNS PE "B". In cases such as this, each Cisco CNS PE correlates the data it has (that is, Cisco CNS PE "A" correlates call legs one and two while Cisco CNS PE "B" correlates call legs three and four.)

In this case, the VoipDAS collector is similar to any other collector in Cisco CNS PE and treats the underlying Cisco CNS PE systems as the data sources for call-correlated data. Multiple Cisco CNS PE hosts are specified in the configuration as urlPrefixes; each URL prefix identifies the username and password used to access a particular Cisco CNS PE host and the data directory in which its correlated files are stored. The VoipDAS collector picks up files from this directory for correlation and deletes them from the data source after they are retrieved. Because the sample network has only one instance of Cisco CNS PE, a VoipDAS collector is not configured in this network and is included here only as an example.
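Although no VoipDAS collector is configured in the sample network, a sketch of one follows. The element and tag names are assumptions modeled on the other collector configurations in this chapter (consult the Cisco CNS Performance Engine User Guide for the exact schema); the hosts, credentials, and paths are placeholders:

```xml
<collector name="voipDas">
  <schedule name="S15min"/>
  <voipDasCollector>
    <!-- Each urlPrefix identifies a second-tier CNS PE host and its correlated-data directory -->
    <urlPrefix>ftp://user:passwd@cnspe-a.example.com/data/VoipCorrelator/</urlPrefix>
    <urlPrefix>ftp://user:passwd@cnspe-b.example.com/data/VoipCorrelator/</urlPrefix>
  </voipDasCollector>
</collector>
```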

Data Handlers

Data handlers are used for data publication to the Cisco CNS Integration Bus or for periodic data export.

In the following example, the <urlPrefix> tag specifies the transfer protocol, user name, password, host name, directory path, and file prefix for the data files. Only the FTP protocol is supported at this time. The sample network requires an FTP data handler for the export of CMNM performance data. If other host-specific data handlers are required, you need only customize the details in the <urlPrefix> line.

<dataHandler name="dhcustom">

<ftp>

<urlPrefix>ftp://ftpuser:ftppasswd@ftphost/tmp/dh-</urlPrefix>

</ftp>

</dataHandler>

The next example shows a data handler used for hotspot polling. This data handler specifies the Cisco CNS subject name for publishing data onto the Cisco CNS Integration Bus. For the details of the format of the data published, refer to the appropriate collector documentation. The CNS data handler is used for monitoring (that is, low volume of data and high frequency of collection.)

<dataHandler name="dhcns">

<cns>

<subject>cisco.mgmt.das.perf-tme</subject>

</cns>

</dataHandler>

Notifiers

Cisco CNS PE can send notifications to the Cisco CNS Integration Bus or it can send traps. For Cisco CNS notifications, the subject should be supplied for the notifier. For traps, the target IP address and port should be configured for the notifier.

<notifier name="cnsnotifier">

<cns>

<subject>cisco.mgmt.das.notifier-listener</subject>

</cns>

</notifier>

<notifier name="trapnotifier">

<trap>

<ipaddress>trapman.cisco.com</ipaddress>

<port>162</port>

</trap>

</notifier>

In a trap notifier, the <port> tag is optional; if you do not specify it, traps sent from the Cisco CNS PE host default to port 162. For Cisco CNS notifications, the schema of the notification is available in $DAS_HOME/schema/event.xsd.

Purgers

Cisco CNS PE can be set up to purge data from all supported collectors at periodic intervals. Alternatively, you can configure a purger for an individual collector, using XML similar to the following example:

<purger name="SixHoursPurger">

<time>2001-12-01T00:00:00.000Z</time>

<interval>PT6H</interval>

<delay>PT1H</delay>

</purger>
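A purger defined this way can then be referenced from a collector by name, in the same way thresholds and data handlers are referenced in the examples above. The following sketch assumes the reference syntax matches those other by-name references, and the collector name is hypothetical:

```xml
<collector name="purgedColl">
  <schedule name="S1min"/>
  <device name="usgw"/>
  <!-- Reference the purger by name, as with thresholds and data handlers -->
  <purger name="SixHoursPurger"/>
  <MibCollector>
    <snmpGetMany>
      <oid>ifInErrors</oid>
    </snmpGetMany>
  </MibCollector>
</collector>
```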

Hot Spot Polling

In some cases, an upper layer application may need to collect data for certain variables or sets of variables more frequently (for example, once every five seconds for real-time troubleshooting). This means Cisco CNS PE must send the data as soon as it is collected, which is referred to as hotspot polling. When upper layer applications enable hotspot polling for a variable or a set of variables, Cisco CNS PE sends the data to the upper layer application immediately after each poll by publishing it on the Cisco CNS subscribe/publish Bus. Because the real-time polling frequency is typically short, no more than thirty variables should be enabled for hotspot polling at the same time.

When a collector is configured for hotspot polling, Cisco CNS PE does not process the data any further; that is, there is no internal storage, no threshold processing, and no query support. The sample code below polls the in and out octets on the gateways and gatekeepers in the sample network.

XML: Hotspot Polling

The following is a sample configuration for hotspot polling. This hotspot poller collects two variables, ifInOctets and ifOutOctets, every ten seconds. It sends the collected data directly to the Cisco CNS Integration Bus by way of the CNS data handler. Devices will be added to this collector in the next section; the devices added will be polled for these two variables, and the results published to the Cisco CNS Integration Bus in CSV format with the subject set to cisco.mgmt.das.hotspot.

<das>

<create>

<dataHandler name="hotspot">

<cns>

<subject>cisco.mgmt.das.hotspot</subject>

</cns>

</dataHandler>

<collector name="mchotspot">

<schedule name="S10sec"/>

<dataHandler name="hotspot"/>

<MibCollector>

<snmpGet>

<oid>ifInOctets</oid>

<oid>ifOutOctets</oid>

</snmpGet>

</MibCollector>

</collector>

</create>

</das>

Data Export

Under typical operating conditions, Cisco CNS PE collects data through its various collection mechanisms and temporarily stores it in the /data and /db directories. Under the /data directory are four additional directories. The /data/radius and /data/callhistory directories contain files used internally by the Cisco CNS PE application and are not for viewing by network operators.

The /data/VoipDasCollector directory contains correlated files from the VoipDAS collector. These correlated files are found only in a first-tier Cisco CNS PE that collects and correlates voice-call data from one or more second-tier Cisco CNS PE collectors (which collect data from network elements). This chapter does not detail the setup or configuration of a first-tier or second-tier Cisco CNS PE installation and therefore does not show how to access data from this directory.

The /data/VoipCorrelator directory is where the correlated voice CDRs are temporarily stored. These files are named according to the convention:

<CNS-PE name>_<timestamp>, for example, perf-tme_D20020813T193035Z

Closer examination of the files in the /data/VoipCorrelator directory reveals the interval configured for correlation of RADIUS and Call History CDRs, typically fifteen minutes. These are text files and can be read with a text editor while they remain on the host machine. These are the files an upstream application imports for reporting purposes.

There is another call record related file directory in which you can determine how many different calls were processed in each of the files described above. This file is in the ????MISSING INFO ??????

Removing Components

There are commands to undo configuration operations. The <load> command can be undone with the <unload> command. The <create> command can be undone with the <remove> command. If an object is in use (that is, it is referred to by another configured object), the application does not allow it to be removed. Therefore, the order of removal is the inverse of the order of adding or applying the objects in the configuration.

Both as an example of undoing a configuration item and so you can start with a clean slate, the previously loaded MIBs will now be removed (unloaded). The configuration commands to do this are shown in Figure 3-12. It is recommended that you view the configuration after each operation. The configuration to this point is shown in Figure 3-13.

As far as the <unload> command is concerned, MIBs are perhaps not the best example, because the contents of the MIBs are discovered dynamically and the "MIB name" line is already complete (that is, the MIB name line ends with a slash (/)).

The typical object (device, collector, notifier, and so on) has values and variables defined within the body of the object (in XML), and the definition ends with a line such as </device> or </collector>. This has ramifications for the remove procedure: a <remove> statement does not list the values embedded within the object; instead, it adds a trailing slash (/) to the object name.

The values and variables of the object are left out of the <remove> statement in XML. A device is defined like this:

<das>

<create>

<device name="dgk">

<type>3660</type>

<snmp>

<ipaddress>172.19.49.21</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

</create>

</das>

is removed with an XML statement such as:

<das>

<remove>

<device name="dgk"/> <!-- note the trailing "/" -->

</remove>

</das>

As an exercise, you can unload the previously loaded MIBs (shown in Figure 3-7), and then you can view the empty configuration file. If you did not yet load the MIBs, you can do so now and then unload them as shown below.


Figure 3-12: Unloading MIBs



Figure 3-13: Viewing Empty Configuration after MIBs were Successfully Unloaded


By viewing an empty configuration, you can see the necessary preambles are already present. The second nested <das> has a corresponding </das>. If you begin your configuration XML with <das>, it must go within the nested <das> pair and must match up with the closing operation for <load>, <create>, <add>, or <start>.

Order of Operations

Referencing the complete configuration in the "Comprehensive Configuration Task List" section, the operations should be performed following the order listed in Table 3-4 (certain collectors require devices and schedules, so ordering is important).

It is instructional to select View config after each operation to understand how Cisco CNS PE organizes the configuration information. Starting with a blank slate, illustrated in Figure 3-12, after having customized the configuration files to correspond to your own network parameters, perform the operations listed in Table 3-4.


Table 3-4: Ordered Set of Configuration Operations

    1. Loading MIBs (see "Loading MIBs").

    2. Creating IOS Devices (see "Creating IOS Devices"). In Cisco CNS PE version 1.0 only, certain characters are not allowed in device names, most notably a "-".

    3. Creating Solaris Host Devices (see "Creating Solaris Host Devices").

    4. Creating Schedules (see "Creating Schedules").

    5. Creating Bulk MIB Collectors (see "Creating Bulk MIB Collectors").

    6. Creating MIB Collectors (see "Creating MIB Collectors").

    7. Creating RADIUS and Call History Collectors (see "Creating RADIUS and Call History Collectors"). With version 1.1 (version 1.0.0.4 with patch), if you remove the RADIUS collector, you must shut down and restart the server before you can reload it. Version 1.0 has a bug regarding this operation.

    8. Creating SAA Collectors (see "Creating Sample SAA Collectors"). Because the sample network is a single LAN, these collectors are shown only as examples; typically, you would test with SAA to remote targets. rtr responder must be configured on the targets of the SAAudpEcho and SAAjitter collectors.

    9. Creating Thresholds (see "Creating Thresholds").

    10. Creating a CMNM Collector (see "Creating a CMNM Collector").

    11. Creating Notifiers (see "Creating Notifiers").

    12. Creating Data Handlers (see "Creating Data Handlers").

    13. Creating Purgers (see "Creating Purgers").

As you configure the Cisco CNS PE application section by section, check that each operation returns a successful response. To configure Cisco CNS PE, go to the "Comprehensive Configuration Task List" section, modify the device-specific parameters (for example, IP addresses and host names) for your network, and then open the Cisco CNS PE GUI and enter the XML through the Send XML window.

Preparing Cisco CNS PE for Data Collection

Assuming you have entered all of the XML (after modifying the specific network device and IP address variables), Cisco CNS PE is ready to begin collecting data.

At this point, the MIBs are loaded, the devices, schedules and collectors are created, and the thresholds, data handlers, notifiers, and purgers are defined. The final tasks are to add devices to the collectors, add attributes to the thresholds, and start all of the collectors.

Adding Devices to Collectors

You have now created collectors that poll MIBs and bulk poll MIBs. Each of these collectors must be associated with a device or set of devices. Different MIBs correspond to different devices. The gatekeepers should be added to the Gatekeeper MIB collector and so on. An example of adding the gatekeepers to the Gatekeeper MIB collector follows:

<das>

<add>

<collector name="GateKeeperBulkMibCollector">

<device name="dgk"/>

<device name="emeaGK"/>

<device name="usGK"/>

</collector>

</add>

</das>

<das>

<add>

<collector name="GateKeeperMibCollector">

<device name="dgk"/>

<device name="emeaGK"/>

<device name="usGK"/>

</collector>

</add>

</das>

For the remaining collectors that require devices, refer to the "Comprehensive Configuration Task List" section, where all of the additions of devices to collectors are fully detailed. Note that the collectors were created with specific schedules contained within them; if schedules had not been created within the collectors, you would have to add them in the same way the devices were added.

In the "Comprehensive Configuration Task List" section, gateways are added to gateway-specific collectors and gatekeepers to gatekeeper-specific collectors. All of the IOS devices are added to the Resource MIB collector, and the gateways are added to the Call History MIB collector. Note that it is not necessary to add devices to the RADIUS collector: because RADIUS is a push protocol, as long as the gateway knows the correct IP address and port number for the Cisco CNS PE host, it sends the data as the calls occur.

Finally, source devices are added to the SAA collectors. When the SAA collectors were created, their target devices were configured within them; these additions give each collector a source device from which to send the SAA data. Only three SAA collectors were created here, while a typical operation may configure many more. The intent is to provide an example rather than a complete set, which differs with each operation and each operator.

Starting the Collectors

All that remains is to start the various collectors. This is done with the <start> directive and is detailed in the "Starting All Collectors" section.
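A sketch of the <start> directive follows, by analogy with the <create> and <add> operations shown earlier; the exact form is given in the "Starting All Collectors" section, and the element layout and collector names here are assumptions for illustration:

```xml
<das>
  <start>
    <!-- Start collectors by name (names are examples from this chapter) -->
    <collector name="sys"/>
    <collector name="packetcoll"/>
  </start>
</das>
```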

Upgrading and Backing Up Cisco CNS PE

This section describes how to back up and restore Cisco CNS PE information.

Backing Up Cisco CNS Performance Engine Information

Cisco CNS PE can, and should, be backed up periodically so that no important information is lost before a third-party application gathers it from Cisco CNS PE. A backup can be performed with Cisco CNS PE either running or shut down, and requires only the location where the backup data should be stored. To perform a backup operation, invoke the following command:

# <CNS-PE_base_directory>/bin/backup.sh <backupdirectory>

If the <backupdirectory> already exists, you are asked whether all information in it can be deleted, after which the backup takes place. If the directory does not exist, it is created and the backup is performed. Using a new directory for each run thus allows periodic backups to be performed without losing previous backup data.
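Because each run can name its own directory, dated periodic backups are straightforward to schedule. A minimal sketch, assuming cron is available on the Cisco CNS PE host and using illustrative paths (substitute your actual CNS-PE base directory and backup root):

```
# Crontab entry: nightly backup at 01:30 into a date-stamped directory,
# preserving earlier backups. Paths here are illustrative, not defaults.
30 1 * * * /opt/cnspe/bin/backup.sh /var/backups/cnspe/pe-`date +\%Y\%m\%d`
```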

Restoring Cisco CNS Performance Engine Data

Data should be restored only when the Cisco CNS PE application has been stopped. To perform a restore operation, invoke the following command:

# <CNS-PE_base_directory>/bin/restore.sh <backupdirectory>

The <backupdirectory> directory specified should be the directory where the data was previously backed up.

Related Cisco CNS Performance Engine Documents

The documentation provided with the Cisco CNS PE application is available at the following URL:

http://www.cisco.com/univercd/cc/td/doc/product/rtrmgmt/cnspe/index.htm.

Attribute Mapping for the CallHistory Collector

This section contains attribute mapping tables from the MIB attribute name to the XML attribute name. The tables include attributes for the CallHistory collector, VoIP, and telephony.

CallHistory Collector

Table 3-5 maps the CallHistory related MIB attribute names to their corresponding XML attribute names.


Table 3-5: CallHistory Collector - MIB Attribute Name to XML Attribute Name

MIB Attribute Name               XML Attribute Name
cCallHistorySetupTime            h323-setup-time
cCallHistoryPeerAddress          peer-address
cCallHistoryPeerSubAddress       peer-subaddress
cCallHistoryPeerId               peer-id
cCallHistoryPeerIfIndex          peer-if-index
cCallHistoryLogicalIfIndex       logical-if-index
cCallHistoryDisconnectCause      disconnect-cause
cCallHistoryDisconnectText       disconnect-text
cCallHistoryConnectTime          h323-connect-time
cCallHistoryDisconnectTime       h323-disconnect-time
cCallHistoryCallOrigin           h323-call-origin
cCallHistoryChargedUnits         charged-units
cCallHistoryInfoType             info-type
cCallHistoryTransmitPacket       paks_out
cCallHistoryTransmitBytes        bytes_out
cCallHistoryReceivePacket        paks_in
cCallHistoryReceiveBytes         bytes_in

VoIP Attributes

Table 3-6 maps the VoIP related MIB attribute names to their corresponding XML attribute names.


Table 3-6: VoIP - MIB Attribute Name to XML Attribute Name

MIB Attribute Name                          XML Attribute Name
cvVoIPCallHistoryConnectionId               connection-id
cvVoIPCallHistoryRemoteIPAddress            h323-remote-address
cvVoIPCallHistoryRemoteUDPPort              remote-udp-port
cvVoIPCallHistoryRoundTripDelay             round-trip-delay
cvVoIPCallHistorySelectedQoS                selected-qos
cvVoIPCallHistorySessionProtocol            session-protocol
cvVoIPCallHistorySessionTarget              session-target
cvVoIPCallHistoryOnTimeRvPlayout            ontime-rv-playout
cvVoIPCallHistoryGapFillWithSilence         gapfill-with-silence
cvVoIPCallHistoryGapFillWithPrediction      gapfill-with-prediction
cvVoIPCallHistoryGapFillWithInterpolation   gapfill-with-interpolation
cvVoIPCallHistoryGapFillWithRedundancy      gapfill-with-redundancy
cvVoIPCallHistoryHiWaterPlayoutDelay        hiwater-playout-delay
cvVoIPCallHistoryLoWaterPlayoutDelay        lowater-playout-delay
cvVoIPCallHistoryReceiveDelay               receive-delay
cvVoIPCallHistoryVADEnable                  vad-enable
cvVoIPCallHistoryCoderRateType              coder-rate-type
cvVoIPCallHistoryIcpif                      h323-voice-quality
cvVoIPCallHistoryLostPackets                lost-packets
cvVoIPCallHistoryEarlyPackets               early-packets
cvVoIPCallHistoryLatePackets                late-packets

Telephony Attributes

Table 3-7 maps the telephony related MIB attribute names to their corresponding XML attribute names.


Table 3-7: Telephony - MIB Attribute Name to XML Attribute Name

MIB Attribute Name              XML Attribute Name
cvCallHistoryConnectionId       connection-id
cvCallHistoryTxDuration         tx-duration
cvCallHistoryVoiceTxDuration    voice-tx-duration
cvCallHistoryFaxTxDuration      fax-tx-duration
cvCallHistoryCoderTypeRate      coder-type-rate
cvCallHistoryNoiseLevel         noise-level
cvCallHistoryACOMLevel          acom-level
cvCallHistorySessionTarget      session-target
cvCallHistoryImgPageCount       img-pages-count

Comprehensive Configuration Task List

This section provides a comprehensive configuration task list along with sample XML code.

Loading MIBs

This section describes how to load MIBs.

<das>

<load>

<mib name="CISCO-GATEKEEPER-MIB.my"/>

<mib name="CISCO-PROCESS-MIB.my"/>

<mib name="CISCO-MEMORY-POOL-MIB.my"/>

<mib name="CISCO-POP-MGMT-MIB.my"/>

<mib name="CISCO-AAA-SERVER-MIB.my"/>

<mib name="CISCO-DIAL-CONTROL-MIB.my"/>

<mib name="CISCO-VOICE-DIAL-CONTROL-MIB.my"/>

</load>

</das>

Creating IOS Devices

This section describes how to create IOS devices.

<das>

<create>

<device name="dgk">

<type>3660</type>

<snmp>

<ipaddress>172.19.49.24</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

<device name="emeaGK">

<type>3660</type>

<snmp>

<ipaddress>172.19.49.21</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

<device name="usGK">

<type>3660</type>

<snmp>

<ipaddress>172.19.49.28</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

<device name="usgw">

<type>5300</type>

<snmp>

<ipaddress>172.19.49.5</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

<device name="emeagw">

<type>5300</type>

<snmp>

<ipaddress>172.19.49.13</ipaddress>

<readCommunity>public</readCommunity>

<writeCommunity>private</writeCommunity>

</snmp>

</device>

</create>

</das>

Creating Solaris Host Devices

This section describes how to create Solaris host devices.

<das>

<create>

<device name="billtme">

<type>nt2000</type>

<timezone>-08:00</timezone>

<telnet>

<ipaddress>172.19.49.9</ipaddress>

<username>tme</username>

<password>cisco</password>

<prompt>billtme%</prompt>

</telnet>

<ftp>

<ipaddress>172.19.49.9</ipaddress>

<username>tmeftp</username>

<password>cisco</password>

</ftp>

</device>

</create>

</das>

Creating Schedules

This section describes how to create schedules.

<das>

<create>

<schedule name="S5sec">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT5S</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S10sec">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT10S</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S1min">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT1M</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S3min">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT3M</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S5min">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT5M</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S15min">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT15M</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

<schedule name="S60min">

<start>2001-08-31T00:00:00.000-08:00</start>

<end>2002-12-25T00:00:00.000-08:00</end>

<interval>PT60M</interval>

<interTaskDelay>PT0.1S</interTaskDelay>

</schedule>

</create>

</das>
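In these schedules, <start> and <end> are ISO 8601 timestamps with a UTC offset, while <interval> and <interTaskDelay> are ISO 8601 durations: PT5S is five seconds, PT15M is fifteen minutes, and PT0.1S is a tenth of a second. As an illustration only (no such schedule is used in this example network), a once-a-day schedule would look like:

```
<das>
<create>
<schedule name="S24hr">
<start>2001-08-31T00:00:00.000-08:00</start>
<end>2002-12-25T00:00:00.000-08:00</end>
<interval>PT24H</interval>
<interTaskDelay>PT1S</interTaskDelay>
</schedule>
</create>
</das>
```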

Creating Collectors, Thresholds, Notifiers, Data Handlers, and Purgers

This section describes how to create collectors, thresholds, notifiers, data handlers, and purgers.

Creating Bulk MIB Collectors

This section describes how to create Bulk MIB collectors.

<das>

<create>

<collector name="GateKeeperBulkMibCollector">

<schedule name="S15min"/>

<BulkMibCollector>

<oid>cgkZoneTable</oid>

<oid>cgkLocalZoneTable</oid>

</BulkMibCollector>

</collector>

<collector name="DS1StatsBulkMibCollector">

<schedule name="S15min"/>

<BulkMibCollector>

<oid>cpmDS0UsageTable</oid>

<oid>cpmDS1DS0UsageTable</oid>

</BulkMibCollector>

</collector>

<collector name="AAAServerBulkMibCollector">

<schedule name="S60min"/>

<BulkMibCollector>

<oid>casStatisticsTable</oid>

</BulkMibCollector>

</collector>

</create>

</das>

Creating MIB Collectors

This section describes how to create MIB collectors.

<das>

<create>

<collector name="GateKeeperMibCollector">

<schedule name="S15min"/>

<MibCollector>

<snmpGetMany>

<oid>cgkZoneZoneName</oid>

<oid>cgkZoneDomain</oid>

<oid>cgkZoneAddressLookupFailures</oid>

<oid>cgkZoneEndpointTimeouts</oid>

<oid>cgkZoneOtherFailures</oid>

<oid>cgkZoneLRQs</oid>

<oid>cgkLZoneACFs</oid>

<oid>cgkLZoneARJs</oid>

<oid>cgkLZoneTotalBandwidth</oid>

<oid>cgkLZoneAllocTotalBandwidth</oid>

<oid>cgkLZoneInterzoneBandwidth</oid>

<oid>cgkLZoneAllocInterzoneBandwidth</oid>

</snmpGetMany>

</MibCollector>

</collector>

<collector name="DS1StatsMibCollector">

<schedule name="S15min"/>

<MibCollector>

<snmpGetMany>

<!-- The following are DS0 stats. -->

<oid>cpmDS1SlotIndex</oid>

<oid>cpmDS1PortIndex</oid>

<oid>cpmChannelIndex</oid>

<oid>cpmConfiguredType</oid>

<oid>cpmDS0CallType</oid>

<oid>cpmL2Encapsulation</oid>

<oid>cpmCallCount</oid>

<oid>cpmTimeInUse</oid>

<oid>cpmInOctets</oid>

<oid>cpmOutOctets</oid>

<oid>cpmInPackets</oid>

<oid>cpmOutPackets</oid>

<oid>cpmAssociatedInterface</oid>

<!-- The following are DS1 stats. -->

<oid>cpmDS1UsageSlotIndex</oid>

<oid>cpmDS1UsagePortIndex</oid>

<oid>cpmDS1ActiveDS0s</oid>

<oid>cpmDS1ActiveDS0sHighWaterMark</oid>

<oid>cpmDS1TotalAnalogCalls</oid>

<oid>cpmDS1TotalDigitalCalls</oid>

<oid>cpmDS1TotalV110Calls</oid>

<oid>cpmDS1TotalV120Calls</oid>

<oid>cpmDS1TotalCalls</oid>

<oid>cpmDS1TotalTimeInUse</oid>

<oid>cpmDS1CurrentIdle</oid>

<oid>cpmDS1CurrentOutOfService</oid>

<oid>cpmDS1CurrentBusyout</oid>

<oid>cpmDS1InOctets</oid>

<oid>cpmDS1OutOctets</oid>

<oid>cpmDS1InPackets</oid>

<oid>cpmDS1OutPackets</oid>

</snmpGetMany>

</MibCollector>

</collector>

<collector name="AAAServerMibCollector">

<schedule name="S60min"/>

<MibCollector>

<snmpGetMany>

<oid>casAuthenRequests</oid>

<oid>casAuthenRequestTimeouts</oid>

<oid>casAuthenUnexpectedResponses</oid>

<oid>casAuthenServerErrorResponses</oid>

<oid>casAuthenIncorrectResponses</oid>

<oid>casAuthenResponseTime</oid>

<oid>casAuthenTransactionSuccesses</oid>

<oid>casAuthenTransactionFailures</oid>

<oid>casAuthorRequests</oid>

<oid>casAuthorRequestTimeouts</oid>

<oid>casAuthorUnexpectedResponses</oid>

<oid>casAuthorServerErrorResponses</oid>

<oid>casAuthorIncorrectResponses</oid>

<oid>casAuthorResponseTime</oid>

<oid>casAuthorTransactionSuccesses</oid>

<oid>casAuthorTransactionFailures</oid>

<oid>casAcctRequests</oid>

<oid>casAcctRequestTimeouts</oid>

<oid>casAcctUnexpectedResponses</oid>

<oid>casAcctServerErrorResponses</oid>

<oid>casAcctIncorrectResponses</oid>

<oid>casAcctResponseTime</oid>

<oid>casAcctTransactionSuccesses</oid>

<oid>casAcctTransactionFailures</oid>

<oid>casState</oid>

<oid>casCurrentStateDuration</oid>

<oid>casPreviousStateDuration</oid>

<oid>casTotalDeadTime</oid>

<oid>casDeadCount</oid>

</snmpGetMany>

</MibCollector>

</collector>

<collector name="ResourceMibCollector">

<schedule name="S15min"/>

<MibCollector>

<snmpGetMany>

<oid>cpmCPUTotalPhysicalIndex</oid>

<oid>cpmCPUTotal5sec</oid>

<oid>cpmCPUTotal1min</oid>

<oid>cpmCPUTotal5min</oid>

<oid>ciscoMemoryPoolName</oid>

<oid>ciscoMemoryPoolValid</oid>

<oid>ciscoMemoryPoolUsed</oid>

<oid>ciscoMemoryPoolFree</oid>

<oid>ciscoMemoryPoolLargestFree</oid>

</snmpGetMany>

</MibCollector>

</collector>

</create>

</das>

Creating RADIUS and Call History Collectors

This section describes how to create RADIUS and Call History collectors.

<das>

<create>

<collector name="radiusCollector">

<schedule name="S15min"/>

<radiusCollector>

<acctPort>1646</acctPort>

</radiusCollector>

</collector>

<collector name="callHistoryCollector">

<schedule name="S3min"/>

<callHistoryCollector>

<callHistAttrs>

<generic>

<attr>h323-setup-time</attr>

<attr>peer-address</attr>

<attr>peer-subaddress</attr>

<attr>peer-id</attr>

<attr>peer-if-index</attr>

<attr>logical-if-index</attr>

<attr>disconnect-cause</attr>

<attr>disconnect-text</attr>

<attr>h323-connect-time</attr>

<attr>h323-disconnect-time</attr>

<attr>h323-call-origin</attr>

<attr>charged-units</attr>

<attr>info-type</attr>

<attr>bytes_out</attr>

<attr>paks_out</attr>

<attr>bytes_in</attr>

<attr>paks_in</attr>

</generic>

<voip>

<attr>connection-id</attr>

<attr>h323-remote-address</attr>

<attr>remote-udp-port</attr>

<attr>round-trip-delay</attr>

<attr>selected-qos</attr>

<attr>session-protocol</attr>

<attr>session-target</attr>

<attr>ontime-rv-playout</attr>

<attr>gapfill-with-prediction</attr>

<attr>gapfill-with-interpolation</attr>

<attr>gapfill-with-redundancy</attr>

<attr>hiwater-playout-delay</attr>

<attr>lowater-playout-delay</attr>

<attr>receive-delay</attr>

<attr>vad-enable</attr>

<attr>coder-type-rate</attr>

<attr>h323-voice-quality</attr>

<attr>lost-packets</attr>

<attr>early-packets</attr>

<attr>late-packets</attr>

</voip>

<telephony>

<!-- Telephony Attributes -->

<attr>connection-id</attr>

<attr>tx-duration</attr>

<attr>voice-tx-duration</attr>

<attr>fax-tx-duration</attr>

<attr>coder-type-rate</attr>

<attr>noise-level</attr>

<attr>acom-level</attr>

<attr>session-target</attr>

<attr>img-pages-count</attr>

</telephony>

</callHistAttrs>

</callHistoryCollector>

</collector>

</create>

</das>

Creating Sample SAA Collectors

This section describes how to create SAA collectors.

<das>

<create>

<collector name="SAAicmp">

<schedule name="S1min"/>

<SaaIcmpEchoCollector>

<targetAddress>172.19.49.5</targetAddress>

<timeout>PT5S</timeout>

<packetSize>512</packetSize>

</SaaIcmpEchoCollector>

</collector>

<collector name="SAAjitter">

<schedule name="S1min"/>

<SaaJitterCollector>

<targetAddress>172.19.49.13</targetAddress>

<targetPort>101</targetPort>

</SaaJitterCollector>

</collector>

<collector name="SAAudpEcho">

<schedule name="S5min"/>

<SaaUdpEchoCollector>

<targetAddress>172.19.49.5</targetAddress>

<targetPort>100</targetPort>

</SaaUdpEchoCollector>

</collector>

</create>

</das>

Creating Thresholds

This section describes how to create thresholds.

<das>

<create>

<threshold name="diskTHRdb">

<attribute name="sys.disk.db.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>35</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>30</clearValue>

<level>minor</level>

</attribute>

</threshold>

<threshold name="diskTHRdata">

<attribute name="sys.disk.data.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>35</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>30</clearValue>

<level>minor</level>

</attribute>

</threshold>

<threshold name="sysTHRmem">

<attribute name="sys.memory.usedPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>85</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>75</clearValue>

<level>critical</level>

</attribute>

</threshold>

<threshold name="cpuTHRload">

<attribute name="sys.cpu.loadPercentage">

<raiseOperation>GT</raiseOperation>

<raiseValue>90</raiseValue>

<clearOperation>LT</clearOperation>

<clearValue>75</clearValue>

<level>critical</level>

</attribute>

</threshold>

<threshold name="thr1">

<!-- CEMF-specific attribute name -->

<attribute name="mgcController:RFC1213-MIB.udpOutDatagrams">

<raiseOperation>LT</raiseOperation>

<raiseValue>817200</raiseValue>

<clearOperation>GT</clearOperation>

<clearValue>817200</clearValue>

<level>critical</level>

</attribute>

</threshold>

</create>

</das>

Creating a CMNM Collector

This section describes how to create a CMNM collector.

<das>

<create>

<!-- CEMF machine -->

<device name="cmnmtme.cisco.com"/>

<!-- Attribute Threshold Crossing Alert -->

<threshold name="thr1"/>

<!-- CEMF-specific attribute name -->

<!-- FTP Data Handler sends data from CNS-PE to storage machine -->

<dataHandler name="ftpDH1">

<ftp>

<urlPrefix>ftp://tme:tmecmnm@cnote-tme.cisco.com/tmp/</urlPrefix>

</ftp>

</dataHandler>

<!-- CNS-PE Collector -->

<collector name="CMNMTest">

<schedule name="S15min"/>

<device name="cmnmtme.cisco.com"/>

<threshold name="thr1"/>

<dataHandler name="ftpDH1">

<schedule name="S15min"/>

</dataHandler>

<!-- CMNMCollector-specific -->

<cmnmCollector>

<cemf-home>/opt/cemf</cemf-home>

<export-file>/tmp/dumpFile</export-file>

<max-export-file-size>-1</max-export-file-size>

<command-delay>30</command-delay>

<history-criteria>hostAPCHistoryCriteria</history-criteria>

</cmnmCollector>

</collector>

</create>

</das>

Creating Notifiers

This section describes how to create notifiers.

<das>

<create>

<notifier name="cnsnotifier">

<cns>

<subject>cisco.mgmt.das.notifier-listener</subject>

</cns>

</notifier>

<notifier name="trapnotifier">

<trap>

<ipaddress>trapman.cisco.com</ipaddress>

<port>162</port>

</trap>

</notifier>

</create>

</das>

Creating Data Handlers

This section describes how to create data handlers.

<das>

<create>

<dataHandler name="dhcns">

<cns>

<subject>cisco.mgmt.das.perf-tme</subject>

</cns>

</dataHandler>

<dataHandler name="dhftp">

<ftp>

<urlPrefix>ftp://ftpuser:ftp-tme@cnote-tme/tmp/dh-</urlPrefix>

</ftp>

</dataHandler>

</create>

</das>

Creating Purgers

This section describes how to create purgers.

<das>

<create>

<purger name="SixHoursPurger">

<time>2001-12-01T00:00:00.000Z</time>

<interval>PT6H</interval>

<delay>PT1H</delay>

</purger>

</create>

</das>

Adding Devices to Collectors

This section describes how to add devices to collectors.

<das>

<add>

<collector name="GateKeeperBulkMibCollector">

<device name="dgk"/>

<device name="emeaGK"/>

<device name="usGK"/>

</collector>

<collector name="GateKeeperMibCollector">

<device name="dgk"/>

<device name="emeaGK"/>

<device name="usGK"/>

</collector>

<collector name="DS1StatsBulkMibCollector">

<device name="usgw"/>

<device name="emeagw"/>

</collector>

<collector name="AAAServerBulkMibCollector">

<device name="usgw"/>

<device name="emeagw"/>

</collector>

<collector name="DS1StatsMibCollector">

<device name="usgw"/>

<device name="emeagw"/>

</collector>

<collector name="AAAServerMibCollector">

<device name="usgw"/>

<device name="emeagw"/>

</collector>

<collector name="ResourceMibCollector">

<device name="usgw"/>

<device name="emeagw"/>

<device name="dgk"/>

<device name="usGK"/>

<device name="emeaGK"/>

</collector>

<collector name="callHistoryCollector">

<device name="usgw"/>

<device name="emeagw"/>

</collector>

<collector name="SAAicmp">

<device name="usgw"/>

</collector>

<collector name="SAAjitter">

<device name="usgw"/>

</collector>

<collector name="SAAudpEcho">

<device name="usgw"/>

</collector>

</add>

</das>

Starting All Collectors

This section describes how to start all collectors.

<das>

<start>

<collector name="GateKeeperBulkMibCollector"/>

<collector name="DS1StatsBulkMibCollector"/>

<collector name="AAAServerBulkMibCollector"/>

<collector name="GateKeeperMibCollector"/>

<collector name="DS1StatsMibCollector"/>

<collector name="AAAServerMibCollector"/>

<collector name="ResourceMibCollector"/>

<collector name="radiusCollector"/>

<collector name="callHistoryCollector"/>

<collector name="SAAicmp"/>

<collector name="SAAjitter"/>

<collector name="SAAudpEcho"/>

</start>

</das>

Stopping All Collectors

This section describes how to stop all collectors.

<das>

<stop>

<collector name="GateKeeperBulkMibCollector"/>

<collector name="DS1StatsBulkMibCollector"/>

<collector name="AAAServerBulkMibCollector"/>

<collector name="GateKeeperMibCollector"/>

<collector name="DS1StatsMibCollector"/>

<collector name="AAAServerMibCollector"/>

<collector name="ResourceMibCollector"/>

<collector name="radiusCollector"/>

<collector name="callHistoryCollector"/>

<collector name="SAAicmp"/>

<collector name="SAAjitter"/>

<collector name="SAAudpEcho"/>

</stop>

</das>

Removing Collectors, Thresholds, Devices, Schedules, Notifiers, Data Handlers, Purgers, and MIBs

This section describes how to remove collectors, thresholds, devices, schedules, notifiers, data handlers, purgers, and MIBs.

Removing Bulk MIB Collectors

This section describes how to remove Bulk MIB collectors.

<das>

<remove>

<collector name="GateKeeperBulkMibCollector"/>

<collector name="DS1StatsBulkMibCollector"/>

<collector name="AAAServerBulkMibCollector"/>

</remove>

</das>

Removing MIB Collectors

This section describes how to remove MIB collectors.

<das>

<remove>

<collector name="GateKeeperMibCollector"/>

<collector name="DS1StatsMibCollector"/>

<collector name="AAAServerMibCollector"/>

<collector name="ResourceMibCollector"/>

<collector name="mchotspot"/>

</remove>

</das>

Removing RADIUS and Call History Collectors

This section describes how to remove RADIUS and Call History collectors.

<das>

<remove>

<collector name="radiusCollector"/>

<collector name="callHistoryCollector"/>

</remove>

</das>

Removing SAA Collectors

This section describes how to remove SAA collectors.

<das>

<remove>

<collector name="SAAicmp"/>

<collector name="SAAjitter"/>

<collector name="SAAudpEcho"/>

</remove>

</das>

Removing Thresholds

This section describes how to remove thresholds.

<das>

<remove>

<threshold name="diskTHRdb"/>

<threshold name="diskTHRdata"/>

<threshold name="cpuTHRload"/>

<threshold name="sysTHRmem"/>

<threshold name="thr1"/>

</remove>

</das>

Removing a CMNM Collector

This section describes how to remove a CMNM collector.

<das>

<remove>

<collector name="CMNMTest"/>

</remove>

</das>

Removing IOS Devices

This section describes how to remove IOS devices.

<das>

<remove>

<device name="dgk"/>

<device name="emeaGK"/>

<device name="usGK"/>

<device name="usgw"/>

<device name="emeagw"/>

</remove>

</das>

Removing Solaris Host Devices

This section describes how to remove Solaris host devices.

<das>

<remove>

<device name="billtme"/>

</remove>

</das>

Removing Schedules

This section describes how to remove schedules.

<das>

<remove>

<schedule name="S5sec"/>

<schedule name="S10sec"/>

<schedule name="S1min"/>

<schedule name="S3min"/>

<schedule name="S5min"/>

<schedule name="S15min"/>

<schedule name="S60min"/>

</remove>

</das>

Removing Notifiers

This section describes how to remove notifiers.

<das>

<remove>

<notifier name="trapnotifier"/>

<notifier name="cnsnotifier"/>

</remove>

</das>

Removing Data Handlers

This section describes how to remove data handlers.

<das>

<remove>

<dataHandler name="dhcns"/>

<dataHandler name="dhftp"/>

<dataHandler name="hotspot"/>

</remove>

</das>

Removing Purgers

This section describes how to remove purgers.

<das>

<remove>

<purger name="SixHoursPurger"/>

</remove>

</das>

Removing MIBs

This section describes how to remove MIBs.

<das>

<unload>

<mib name="CISCO-GATEKEEPER-MIB.my"/>

<mib name="CISCO-PROCESS-MIB.my"/>

<mib name="CISCO-MEMORY-POOL-MIB.my"/>

<mib name="CISCO-POP-MGMT-MIB.my"/>

<mib name="CISCO-AAA-SERVER-MIB.my"/>

<mib name="CISCO-DIAL-CONTROL-MIB.my"/>

<mib name="CISCO-VOICE-DIAL-CONTROL-MIB.my"/>

</unload>

</das>

Broadcast Accounting in Cisco Voice Gateways

AAA broadcast accounting in Cisco voice gateways enables AAA accounting records to be transmitted to more than one RADIUS server, for redundancy among other purposes. Documentation about this feature is available at:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121newft/121t/121t1/dt_aaaba.htm

For broadcast accounting in a VoIP network, consider Figure 3-14:


Figure 3-14: Sample Voice Over IP Network


Call Scenario

In Figure 3-14, both gateways send authentication and accounting information (using RADIUS) to the primary RADIUS server at 172.19.49.9; a backup RADIUS server is configured at 172.19.49.44. These two RADIUS servers belong to the RADIUS server group named redundant. The backup RADIUS server is used only if the primary goes down. This backup scenario is configured with the IOS CLI illustrated below.

Along with the primary and backup servers configured in the first RADIUS server group (redundant) are two additional RADIUS server groups (cnspe and cramer). These two server groups perform accounting only. Broadcast accounting is enabled in the final line of the following configuration:

aaa group server radius redundant

server 172.19.49.9 auth-port 1645 acct-port 1646

server 172.19.49.44 auth-port 1645 acct-port 1646

!

aaa group server radius cnspe

server 172.19.49.3 auth-port 1812 acct-port 1813

!

aaa group server radius cramer

server 172.19.49.19 auth-port 1645 acct-port 1646

!

aaa authentication login h323 group radius

aaa authentication login bypass none

aaa authorization exec h323 group radius

aaa accounting connection h323 start-stop broadcast group redundant group cnspe group cramer

All three RADIUS servers must also be configured in the RADIUS configuration lines of the gateway:

radius-server host 172.19.49.9 auth-port 1645 acct-port 1646

radius-server host 172.19.49.3 auth-port 1812 acct-port 1813

radius-server host 172.19.49.19 auth-port 1645 acct-port 1646

radius-server retransmit 3

radius-server attribute 25 nas-port format b

radius-server attribute nas-port format b

radius-server key testing123

radius-server vsa send accounting

radius-server vsa send authentication

The gateway configuration is otherwise the same as for any gateway doing RADIUS with a billing server. When performing broadcast accounting, the administrator must consider the performance impact of sending multiple copies of VoIP CDRs to the configured server groups.


Posted: Thu Oct 17 03:12:51 PDT 2002
All contents are Copyright © 1992--2002 Cisco Systems, Inc. All rights reserved.
Important Notices and Privacy Statement.