
Update to the Cisco IGX 8400 Series Installation and Configuration Guide and Cisco IGX 8400 Series Reference Guide

Release 9.3.0

This document describes the priority bumping feature and related updates for switch software Release 9.3.0.


Note Use this update in conjunction with the Cisco IGX 8400 Series Installation and Configuration guide, the Cisco IGX 8400 Series Reference, and the Regulatory Compliance and Safety Information document. If you have questions or need help, refer to the "Obtaining Documentation" section.

This document includes the following sections:

- Priority Bumping
- Default Statistical Reserves for Physical Trunks
- Cisco IGX 8400 Series Card and Node Limits
- Documentation Changes for this Release
- Obtaining Documentation
- Obtaining Technical Assistance

Priority Bumping

Most networks are configured to have sufficient network resources available to maintain bandwidth requirements for all connections. However, if a trunk fails, the amount of available bandwidth is diminished. If network resources are insufficient to sustain all connections, priority bumping redirects the important connections from a failed trunk to other working trunks and bumps the least important connections. You determine the connection priority level by tagging connections with a class of service (COS) value between 0 (high priority) and 15 (low priority).


Note The priority bumping feature is only available in switch software version 9.3.0 or later.

How Priority Bumping Works

The priority bumping feature selects connections to reroute based on COS-band priority. The most important connections are those whose COS value is below the Band 1 threshold; these implicitly form Band 0.

The following list summarizes the priority bumping algorithm (an illustrative sketch follows the list):

- Connections fall into three categories: connections configured with a directed path, connections configured with a preferred path, and connections that have not been configured with a preferred path.

- Within the same band of importance, the failed or derouted connections with a preferred path (which implicitly includes those with a directed path) are chosen first, followed by other failed or derouted connections, and finally by the working connections that are not already on their preferred path. If the preferred path is available, the preferred (and directed) path candidates can be rerouted. If the path is unavailable, a directed-path candidate is skipped over, while an alternate path is chosen for a preferred-path candidate. When no more connections within the current bumping band can be selected, the next band becomes eligible.
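The following Python sketch illustrates this per-band ordering. It is illustrative only; the dictionary keys (band, derouted, preferred_path, on_preferred_path) are hypothetical names for this sketch, not switch software structures.

    # Illustrative sketch of per-band candidate selection; data shapes are assumed.

    def order_candidates(connections, band):
        """Order reroute candidates within one bumping band: failed or derouted
        connections with a preferred (or directed) path first, then other failed
        or derouted connections, then working connections that are not already
        on their preferred path."""
        in_band = [c for c in connections if c["band"] == band]
        derouted_pref = [c for c in in_band if c["derouted"] and c["preferred_path"]]
        derouted_rest = [c for c in in_band if c["derouted"] and not c["preferred_path"]]
        working_off_pref = [c for c in in_band
                            if not c["derouted"] and c["preferred_path"]
                            and not c["on_preferred_path"]]
        return derouted_pref + derouted_rest + working_off_pref

    def next_eligible_band(connections, num_bands=8):
        """When a band yields no more candidates, the next band becomes eligible."""
        for band in range(num_bands):
            ordered = order_candidates(connections, band)
            if ordered:
                return band, ordered
        return None, []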

Route Selection

Before a route can be selected, the master node needs to have a view of the network topology. This is achieved by populating a scratch-pad open space table with trunk utilization information previously received from each node.

When a trunk along the routing path is chosen, the available trunk resources in both the master-to-slave and slave-to-master directions must satisfy the requirements of the reroute candidate.

If the chosen reroute candidate is a preferred path connection and the preferred path is available in the scratch-pad open space table, the path selection is complete. If the path is unavailable, an attempt is made to reclaim network resources along the specified path by releasing the less important working connections. If the resulting open space on the path is sufficient, the path selection is complete.

If the chosen candidate is a directed path connection and its path cannot be reclaimed, the connection is declared temporarily unable to route. If it is a preferred path connection, it is treated no differently than other nonpreferred path connections, and an alternate path is calculated.

Finding an Alternate Path

If no route can be found with the current open space table settings, an attempt is made to reclaim bandwidth and LCNs (logical connection numbers) on the trunks by presuming that all of the least important working connections are released. The least important connections are those with a COS at or above Band 7. The next least important connections are those at or above Band 6, followed by Band 5, and so on down to Band 1. The per-band trunk load units and LCNs have previously been broadcast by each node in a topology update message. The selected bumping band keeps decrementing until it reaches the band of the current reroute candidates. At that point, the candidate connections are declared temporarily unable to route.

If a route is found, the selected band to be bumped is signaled to the next node in the reroute message. This enables subsequent nodes to choose and release an independent list of less important working connections based on this band. The sketch that follows models this band-by-band search.
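The following toy model (not switch code) illustrates the band-by-band reclaim; the data shapes are assumptions, and LCN accounting is omitted for brevity.

    # Toy model of the alternate-path bumping search; illustrative only.

    def find_bump_band(required_bw, trunk_capacity, working_conns, candidate_band):
        """Return the band to bump (signaled downstream in the reroute message),
        or None if the candidate is temporarily unable to route.

        working_conns: list of (band, bandwidth) pairs on the trunk. Presume
        that all working connections at or above the bump band are released,
        starting from Band 7 and working downward."""
        used = sum(bw for _, bw in working_conns)
        for bump_band in range(7, candidate_band, -1):
            reclaimed = sum(bw for band, bw in working_conns if band >= bump_band)
            if trunk_capacity - (used - reclaimed) >= required_bw:
                return bump_band
        return None

    # Numbers from the three-switch example in Figure 2 later in this document:
    # trunk BC has capacity 600 and carries Conn. 2 (Band 7, bandwidth 300);
    # Conn. 1 (Band 5) needs 400 after trunk AB fails.
    print(find_bump_band(400, 600, [(7, 300)], 5))   # -> 7: bump Band 7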

Reroute Connection Bundling

To optimize rerouting performance, the switch looks for other reroute candidates that have the same destination node and the same band as the originally selected reroute candidate. Additional candidates that fit within the configured bumping-bundle size are packed together with the originally selected candidate in the same reroute request, as in the sketch below.
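A minimal sketch of this bundling step, under assumed data shapes; MaxBumpingBndl (see Table 1) defaults to 10.

    # Illustrative bundling sketch; connection records are assumed dictionaries.

    def bundle_reroute_request(selected, candidates, max_bundle=10):
        """Pack additional candidates with the same destination node and the
        same band as the selected candidate into one reroute request."""
        request = [selected]
        for c in candidates:
            if len(request) == max_bundle:
                break
            if (c is not selected and c["dest"] == selected["dest"]
                    and c["band"] == selected["band"]):
                request.append(c)
        return request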

Preemptive Deroute Candidate Selection

The purpose of preemptive deroute candidate selection is to reclaim sufficient bandwidth and LCNs to allow the reroute candidates to be established.

The selection includes connections that are mastered locally, transiting locally, and slaved locally. If the preemptive deroute candidates are mastered locally, they can be immediately derouted. Otherwise, a message is sent to the corresponding master node of each connection, advising it to perform the actual deroute. All connections in the least important band are derouted before connections from another (more important) band are selected.

Preemptive Deroute

Preemptive deroute (PD) refers to the process of immediately reclaiming network resources associated with a preemptive deroute candidate. This occurs before the candidate connection is actually derouted end-to-end, and before the bumping connections can proceed with rerouting. When the deroute message subsequently arrives from the master end of the PD connection, the node does not release the resources again.

The node closer to the master end (the upstream node) selects a list of PD connections to be bumped. The upstream node releases as many network resources as possible associated with these PD connections, and either deroutes using existing protocol if the upstream node is also the master node of the PD connection, or advises the master node of the PD connection to actually deroute.

The upstream node informs the downstream nodes by appending the list of PD connections in the reroute message, advising the downstream node to synchronize its choice of PD connections. As long as the preemptive deroute candidate shares the same path as the chosen reroute path, the connection is placed in preemptive deroute at each node along the reroute path.

Priority Bumping Example

Figure 1 shows an example of how priority bumping works. If a trunk with a bandwidth of 1000 is established between switches A and B, there is no problem accommodating connection 1 (Conn. 1), which has a bandwidth of 800. However, if we add a second connection (Conn. 2) with a bandwidth of 500, the trunk can no longer accommodate both connections.

Conn. 1 (800) + Conn. 2 (500) = Total bandwidth of 1300

The connection in the lower band has the higher priority: Conn. 1 is bumped, so Conn. 2 traffic flows without interruption.


Figure 1: Priority Bumping between Two Switches


Another example, with three switches, is illustrated in Figure 2. Three trunks are established:

Trunk    Bandwidth
AB       1000
AC       500
BC       600

Two connections are established:

Connection    Switches    COS    Band    Bandwidth
1             AB          10     5       400
2             BC          14     7       300

All traffic on the connections is uninterrupted, but if Trunk AB fails, Trunk BC, with a bandwidth of 600, cannot handle the total bandwidth of both connections (700). Conn. 1 is in Band 5; Conn. 2 is in Band 7. The lower the band, the higher the priority, so Conn. 2 is bumped to accommodate the higher-priority Conn. 1. A toy recomputation of this outcome follows Figure 2.


Figure 2: Priority Bumping between Three Switches
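As a cross-check, this small script recomputes the outcome using only the numbers given above; it is illustrative, not switch code.

    # Toy recomputation of the three-switch example; illustrative only.

    def fits_after_bumping(capacity, arriving, working):
        """Bump working connections from the least important band downward
        until the arriving connection fits; return the surviving working
        connections, or None if bumping cannot free enough bandwidth."""
        kept = sorted(working, key=lambda c: c["band"])   # most important first
        while sum(c["bw"] for c in kept) + arriving["bw"] > capacity:
            if not kept or kept[-1]["band"] <= arriving["band"]:
                return None                               # cannot bump further
            kept.pop()                                    # bump least important
        return kept

    conn1 = {"band": 5, "bw": 400}   # rerouted onto trunk BC after AB fails
    conn2 = {"band": 7, "bw": 300}   # already on trunk BC
    print(fits_after_bumping(600, conn1, [conn2]))  # -> []: Conn. 2 is bumped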


Delayed Topology Updates

The master node of a reroute message is responsible for selecting the connection path after a reroute candidate has been selected. During the open space calculation, the node might find that no available path can satisfy the reroute candidate. If any of the downstream nodes determine that all or some of the reroute connections cannot be established, the master node selects another path for the rejected connections. After a few attempts, these connections are also marked as temporarily unable to route, so that other bumping connections of lesser or equal importance are selected.

Before the master node selects a path for the reroute candidate, the node must have received topology update messages from other nodes. However, topology update messages might be delayed, either because the reporting node does not send an update when utilization changes are not significant enough, or because the distance and line speed between the reporting node and the master node are not optimal.

When a downstream node receives a reroute message (after the path has been selected by the master node based on an outdated view of the bandwidth/LCN utilization), the node might find itself incapable of satisfying the request. When this occurs, the node attempts to proceed with as many connections as it can and relays the reroute request to the next node; the slave node eventually acknowledges to the master node with a reduced number of successfully rerouted connections. If no connections can proceed, the node issues an error response back to the master node and rejects all connections. The sketch below models this partial acceptance.
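A hypothetical sketch of the downstream behavior just described; the data shapes and resource accounting are assumptions, not the switch protocol.

    # Illustrative sketch of partial acceptance at a downstream node.

    def handle_reroute(request_conns, free_bw, free_lcns):
        """Accept as many bundled connections as local resources allow;
        relay the accepted list onward, or reject everything with an error
        response to the master node."""
        accepted = []
        for conn in request_conns:
            if conn["bw"] <= free_bw and free_lcns >= 1:
                accepted.append(conn)
                free_bw -= conn["bw"]
                free_lcns -= 1
        if not accepted:
            return "error"       # reject all; master node tries another path
        return accepted          # relay these connections to the next node

    conns = [{"bw": 400}, {"bw": 300}]
    print(handle_reroute(conns, free_bw=500, free_lcns=2))  # accepts only the first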

Environment

The priority bumping feature is available in switch software version 9.3.0 for both NPM-32 and NPM-64 cards. No new platform hardware or firmware is needed.


Note Enable priority bumping by changing the configuration parameter BumpingEnabled. Do not activate priority bumping until the whole network has been upgraded to switch software version 9.3.0.

Configuring Priority Bumping

The scope of the configuration parameters is network wide. A parameter change made at one node is propagated to the rest of the network.

Table 1 shows the configuration parameters for priority bumping.


Table 1: Priority Bumping Configuration Parameters

Parameter       Values     Default  Description

BumpingEnabled  ON or OFF  OFF      This flag specifies whether the priority
                                    bumping feature is activated on the node.

MaxBumpingBndl  1 to 50    10       The number of connections that can be
                                    selected in a priority bumping routing
                                    request.

Band 1          1 to 15    2        The lowest COS value in the second most
                                    important band. Connections with a COS
                                    value below Band 1 are implicitly grouped
                                    as the most important band, Band 0.
                                    Connections in Band 0 can bump those in
                                    other bands, but cannot be bumped.
                                    Connections in Band 1 can bump those in
                                    bands 2 to 7, and can be bumped only by
                                    those in Band 0.

Band 2          1 to 15    4        The lowest COS value in the third most
                                    important band. Connections in this band
                                    can bump those in bands 3 to 7, and can
                                    be bumped by those in bands 0 to 1.
                                    Band 2 cannot be less than Band 1. If
                                    they are equal, the band definition ends.

Band 3          1 to 15    6        The lowest COS value in the fourth most
                                    important band. Connections in this band
                                    can bump those in bands 4 to 7, and can
                                    be bumped by those in bands 0 to 2.
                                    Band 3 cannot be less than Band 2. If
                                    they are equal, the band definition ends.

Band 4          1 to 15    8        The lowest COS value in the fifth most
                                    important band. Connections in this band
                                    can bump those in bands 5 to 7, and can
                                    be bumped by those in bands 0 to 3.
                                    Band 4 cannot be less than Band 3. If
                                    they are equal, the band definition ends.

Band 5          1 to 15    10       The lowest COS value in the sixth most
                                    important band. Connections in this band
                                    can bump those in bands 6 to 7, and can
                                    be bumped by those in bands 0 to 4.
                                    Band 5 cannot be less than Band 4. If
                                    they are equal, the band definition ends.

Band 6          1 to 15    12       The lowest COS value in the seventh most
                                    important band. Connections in this band
                                    can bump only those in Band 7, and can be
                                    bumped by those in bands 0 to 5.
                                    Band 6 cannot be less than Band 5. If
                                    they are equal, the band definition ends.

Band 7          1 to 15    14       The lowest COS value in the least
                                    important band. Connections in this band
                                    cannot bump, but can be bumped by those
                                    in bands 0 to 6.
                                    Band 7 cannot be less than Band 6.

Sample Configuration

The default configuration setting for priority bumping is disabled. When priority bumping is enabled without changing any banding parameters, the network operates with eight COS bands, as shown in Table 2. Other configuration scenarios possible through the seven band parameters are shown in Table 3 through Table 6; a sketch that maps a COS value to its band follows Table 6.


Table 2: Default Priority Configuration

Band    0      1      2      3      4      5        6        7
COS     0/1    2/3    4/5    6/7    8/9    10/11    12/13    14/15

Refined granularity: A sample band configuration of 1, 2, 3, 4, 5, 11, and 13 provides better granularity of COS banding at the more important end of the spectrum, as shown in Table 3. Another sample band configuration of 3, 5, 8, 12, 13, 14, and 15 provides better granularity of COS banding at the less important end of the spectrum, as shown in Table 4.


Table 3: Priority Band Granularity

Band    0    1    2    3    4    5         6        7
COS     0    1    2    3    4    5 - 10    11/12    13 - 15


Table 4: Increased Priority Band Granularity

Band    0        1      2        3         4     5     6     7
COS     0 - 2    3/4    5 - 7    8 - 11    12    13    14    15

Fewer than eight bands: A sample band configuration of 1, 2, 8, 9, 9, 9, and 9 provides reduced COS banding, allowing better operational performance (fewer iterations through the bands), as shown in Table 5. Another sample band configuration of 1, 1, 1, 1, 1, 1, and 1 provides minimum COS banding (only two bands), as shown in Table 6.


Table 5: Reduced COS Banding

Band    0    1    2        3    4
COS     0    1    2 - 7    8    9 - 15


Table 6: Minimum COS Banding

Band    0    1
COS     0    1 - 15
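The following hypothetical helper shows how the seven Band parameters of Table 1 map a COS value (0 to 15) to its bumping band. It is a sketch based on the rules stated in Table 1, not switch code.

    # Map a COS value to its band, given the Band 1 through Band 7 thresholds
    # (defaults from Table 1 shown). Band 0 covers every COS below the Band 1
    # threshold; a threshold equal to the previous one ends the band
    # definition, so later bands collapse into the earlier one.

    def cos_to_band(cos, thresholds=(2, 4, 6, 8, 10, 12, 14)):
        """thresholds[i] is the lowest COS value of band i + 1."""
        band = 0
        for i, low in enumerate(thresholds):
            if i > 0 and low == thresholds[i - 1]:
                break              # equal thresholds end the band definition
            if cos >= low:
                band = i + 1
        return band

    # Defaults reproduce Table 2: COS 0/1 -> Band 0, ..., COS 14/15 -> Band 7.
    assert cos_to_band(0) == 0 and cos_to_band(3) == 1 and cos_to_band(15) == 7
    # The all-ones configuration reproduces Table 6: only bands 0 and 1.
    assert cos_to_band(9, (1, 1, 1, 1, 1, 1, 1)) == 1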

Priority Bumping Commands

Use the cnfbmpparm command from any Cisco IGX switch to enable priority bumping and to configure priority bumping parameters.


Note Parameter changes made at one switch are propagated to the rest of the network, then saved both to BRAM and to the standby processor card (SPC). You need to enable priority bumping only once, on any switch in the network; priority bumping is then automatically enabled on all switches throughout the network.

Use the dspbmpstats command to show operational statistics of priority bumping.

For a full description of commands used to configure priority bumping, refer to the Cisco WAN Switching Command Reference and Cisco WAN Switching SuperUser Command Reference.

Default Statistical Reserves for Physical Trunks

With switch software version 9.3.0, the default values for statistical reserves are increased to accommodate sufficient bandwidth for control traffic. The statistical reserve can be changed. Use the dsptrkcnf command to display parameter values. To set the parameter values, use the cnftrk command.

If you modify the reserve below the recommended values, a warning message is displayed. For example, if the statistical reserve is modified below 1000 cps for "upped" T1/E1/T3/OC3/OC12 physical trunks, or below 300 cps for T1/E1 virtual trunks, the following warning message is displayed:

"WARNING: Changing stats reserve < {1000 | 300} may cause a drop in CC traffic"

The default statistical reserves for physical and virtual trunks are shown in Table 7 and Table 8.


Table 7: Default Statistical Reserves for Physical Trunks

Physical Trunks  BNI       BXM       UXM                   NTM

IMA T1/E1        N/A       N/A       5000 cps (> T2, E2)   N/A
                                     1000 cps (< T2, E2)
T1/E1            N/A       N/A       1000 cps              1000 cps
T3/E3            5000 cps  5000 cps  5000 cps              N/A
OC-3             5000 cps  5000 cps  5000 cps              N/A
OC-12            N/A       5000 cps  N/A                   N/A

T2 = 14490 cps (96 DS0s)
E2 = 19900 cps
N/A = not available


Table 8: Default Statistical Reserves for Virtual Trunks

Virtual Trunks   BNI       BXM       UXM

T1/E1            N/A       N/A       300 cps
T3/E3            1000 cps  1000 cps  1000 cps
OC-3             1000 cps  1000 cps  1000 cps
OC-12            N/A       1000 cps  N/A

N/A = not available


Note As of switch software version 9.1.0, dsptrkcnf displays the cost of a trunk if cost-based routing is configured. You configure the administrative cost of a trunk with cnftrk.

Cisco IGX 8400 Series Card and Node Limits

Table 9 lists the card and node limits of the Cisco IGX switch.


Table 9: Cisco IGX 8400 Series Card and Node Limits

Each entry lists the limit, its description, and its value under switch software versions 8.5, 9.1, 9.2, and 9.3.

No. of VCs per switch
The maximum number of terminating connections supported by the switch.
8.5: 2,750    9.1: 2,750    9.2: 2,750    9.3: 2,750

No. of VC_BW table entries (NPM-16, NPM-32, and NPM-64)
Each VC requires a VC_BW table entry, but VCs might share the same table entry if their parameter set is identical.
8.5: 254    9.1: 254    9.2: 254    9.3: 254

No. of VC_BW table entries (NPM-64B)
Each VC requires a VC_BW table entry, but VCs might share the same table entry if their parameter set is identical.
8.5: 699    9.1: 699    9.2: 699    9.3: 1999

No. of via connections per switch
The total number of user via connections that can transit a single switch.
8.5: 19,000    9.1: 19,000    9.2: 19,000    9.3: 19,000

Connection descriptor size
The maximum number of bytes in the VC's connection descriptor field.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

No. of VCs per switch with connection event logging enabled
The maximum number of connections supported per switch if connection event logging is enabled.
8.5: N/A    9.1: 1,000    9.2: 1,000    9.3: 1,000

No. of LCONs per switch (NPM-16)
The maximum number of connections supported without grouping or bundling.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

No. of LCONs per switch (NPM-32, NPM-32B)
The maximum number of connections supported without grouping or bundling.
8.5: 2,750    9.1: 2,750    9.2: 2,750    9.3: 2,750

No. of LCONs per switch (NPM-64)
The maximum number of connections supported without grouping or bundling.
8.5: 2,750    9.1: 2,750    9.2: 2,750    9.3: 2,750

No. of LCONs per switch (NPM-64B)
The maximum number of connections supported without grouping or bundling.
8.5: 3,500    9.1: 3,500    9.2: 3,500    9.3: 3,500

No. of connection groups
The maximum number of connection groups defined in the node.
8.5: 1,000 (existing groups from upgrade only)    9.1: N/A    9.2: N/A    9.3: N/A

No. of VCs per connection group
The maximum number of VCs defined in a connection group.
8.5: 16    9.1: N/A    9.2: N/A    9.3: N/A

No. of connection classes
The number of classes of service that may be defined as a shorthand way of adding ATM connections.
8.5: 10    9.1: 10    9.2: 10    9.3: 10

Connection class descriptor size
The maximum size in bytes of the connection-class descriptor.
8.5: 25    9.1: 25    9.2: 25    9.3: 25

No. of jobs
The maximum number of jobs that may be defined in a switch.
8.5: 20    9.1: 20    9.2: 20    9.3: 20

Job memory space
The maximum amount of BRAM, in bytes, reserved for job storage.
8.5: 30,000    9.1: 30,000    9.2: 30,000    9.3: 30,000

Maximum job size
The maximum size of a single job in bytes: 3566 >= 15 + job-desc + 5 * (no. of commands) + t_chars, where job-desc is the number of characters in the job descriptor and t_chars is the total number of characters in all commands in the job.
8.5 through 9.3: 3566 >= 15 + job-desc + 5 * (no. of commands) + t_chars

No. of job triggers
The maximum number of job triggers that may be defined in the switch.
8.5: 20    9.1: 20    9.2: 20    9.3: 20

No. of triggers per job
The maximum number of triggers supported by a single job.
8.5: 4    9.1: 4    9.2: 4    9.3: 4

Job descriptor size
The maximum number of characters in a job descriptor.
8.5: 16    9.1: 16    9.2: 16    9.3: 16

No. of total trunks
The number of trunks supported by a single switch.
8.5: 32    9.1: 32    9.2: 32    9.3: 32

No. of routing trunks
The number of routing trunks supported by a single switch.
8.5: 32    9.1: 32    9.2: 32    9.3: 32

No. of feeder trunks
The number of feeder switch trunks supported by a single switch.
8.5: 4    9.1: 4    9.2: 4    9.3: 4

No. of device codes per switch
The number of device codes supported on each switch.
8.5: 208    9.1: 208    9.2: 208    9.3: 208

No. of VCs per device code (FR)
The number of Frame Relay VCs that can share the same device code.
8.5: 127    9.1: 127    9.2: 127    9.3: 127

No. of VCs per device code (voice)
The number of voice VCs that can share the same device code.
8.5: 32    9.1: 32    9.2: 32    9.3: 32

Maximum non-UXM bandwidth per switch
The maximum amount of bus bandwidth available in a switch to all cards except UXMs.
8.5: 256 Mbps    9.1 through 9.3: 256 Mbps less 25% of UXM bandwidth

Maximum UXM bandwidth per switch
The maximum amount of bus bandwidth available in a switch for UXMs.
8.5: N/A    9.1: 1,024 Mbps    9.2: 1,024 Mbps    9.3: 1,024 Mbps

No. of user card slots
The number of slots available for trunk and channel modules.
8.5: 30    9.1: 30    9.2: 30    9.3: 30

Load model granularity
The minimum increment of trunk bandwidth assignable, in cps.
8.5: 1    9.1: 1    9.2: 1    9.3: 1

Trunk load model granularity
The minimum increment of trunk bandwidth assignable to a trunk in the load model.
8.5: 64 Kbps    9.1: 64 Kbps    9.2: 64 Kbps    9.3: 64 Kbps

No. of SV+ (link 0)
The maximum number of directly attached SV+ workstations that can subscribe to a switch.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

No. of SV+ (link 0 + link 1)
The total number of directly attached SV+s (link 0 + link 1) that a switch can support.
8.5: 24    9.1: 24    9.2: 24    9.3: 24

Statistics memory space (NPM-16, NPM-32B)
The amount of memory available for user statistics buckets and statistics files.
8.5: 2 MB    9.1: 610 KB    9.2: 610 KB    9.3: 610 KB

Statistics memory space (NPM-64)
The amount of memory available for user statistics buckets and statistics files.
8.5: 12.7 MB    9.1: 12.7 MB    9.2: 12.7 MB    9.3: 12.7 MB

SV+ message queue size
The total number of messages allowed to be queued to SV+ in the switch.
8.5: 200    9.1: 200    9.2: 200    9.3: 200

No. of telnet sessions
The maximum number of telnet sessions per switch.
8.5: 5    9.1: 5    9.2: 5    9.3: 5

No. of VT sessions
The maximum number of VT sessions per switch.
8.5: 6    9.1: 6    9.2: 6    9.3: 6

No. of user interface tasks
The total number of simultaneous user interface tasks that may be spawned by a switch.
8.5: 8    9.1: 8    9.2: 8    9.3: 8

No. of SNMP managers
The maximum number of SNMP managers that can register for traps.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

No. of SNMP error table entries
The maximum number of SNMP error table entries that are maintained in the switch.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

SNMP PDU size
The range of valid PDU sizes for an SNMP message, in bytes.
8.5: 484 to 1400    9.1: 484 to 1400    9.2: 484 to 1400    9.3: 484 to 1400

ARP table size
The maximum number of entries that are cached.
8.5: 4    9.1: 64    9.2: 64    9.3: 64

No. of event log entries
The total number of events stored in the maintenance log.
8.5: 400    9.1: 400    9.2: 400    9.3: 400

No. of software log entries
The total number of software errors stored in the software error log.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

No. of print jobs
The maximum number of print commands that can be queued.
8.5: 16    9.1: 16    9.2: 16    9.3: 16

No. of User IDs per switch
The maximum number of User ID and password pairs stored in the switch.
8.5: 63    9.1: 63    9.2: 63    9.3: 63

User ID size
The maximum size in bytes of a User ID.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

User password size
The maximum size in bytes of a user password.
8.5: 15    9.1: 15    9.2: 15    9.3: 15

No. of ports per switch
The maximum number of ports that may be configured on the switch.
8.5: 600    9.1: 600    9.2: 600    9.3: 600

No. of CLNs per switch
The maximum number of channelized T1/E1 circuit lines per switch.
8.5: 64    9.1: 64    9.2: 64    9.3: 64

No. of letters
The total number of pSOS letters available for task-to-task communication.
8.5: 8,000    9.1: 8,000    9.2: 8,000    9.3: 8,000

CC message processing capacity
The CC network message throughput in cells per second.
8.5: 750    9.1: 750    9.2: 750    9.3: 750

CC traffic ingress buffer size
The CC traffic buffer space for incoming messages, in cells.
8.5: 1,024    9.1: 1,024    9.2: 1,024    9.3: 1,024

NTC and NTM Cards

No. of VCs per intra-domain trunk
The total number of connections supported on an intra-domain trunk.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

No. of VCs per inter-domain trunk
The total number of connections supported on an inter-domain, or dual-domain, trunk.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

No. of VCs per dual-domain trunk
The total number of intra-domain connections supported on a dual-domain trunk. A dual-domain trunk is a trunk between two junction nodes in the same domain.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

No. of VCs per flat network trunk
The maximum number of connections supported on a flat network trunk.
8.5: 213    9.1: 213    9.2: 213    9.3: 213

PIF queue depth
The number of FastPackets in the queue receiving data from the muxbus.
8.5: 64    9.1: 64    9.2: 64    9.3: 64

PIF drain rate
The rate at which arriving cells in the PIF queue are drained by the DSP.
8.5: 14,000 cps    9.1: 14,000 cps    9.2: 14,000 cps    9.3: 14,000 cps

AIT and BTM Cards

No. of VCs per trunk
The total number of connections (via + terminating) supported on a trunk.
8.5: 1,771    9.1: 1,771    9.2: 1,771    9.3: 1,771

No. of VCs per dual-domain trunk
The total number of intra-domain connections supported on a dual-domain trunk. A dual-domain trunk is a trunk between two junction switches in the same domain.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: N/A

PIF queue depth
The number of cells in the queue receiving data from the muxbus.
8.5: 10    9.1: 10    9.2: 10    9.3: 10

PIF drain rate
The rate at which arriving cells in the PIF queue are drained by the DSP.
8.5: 100,000 cps    9.1: 100,000 cps    9.2: 100,000 cps    9.3: 100,000 cps

Maximum throughput
The maximum card throughput.
8.5: 16 Mbps    9.1: 16 Mbps    9.2: 16 Mbps    9.3: 16 Mbps

FRP and FRM Cards

No. of ports per card (FRP-4, FRM-4)
The maximum number of ports that may be configured on the card.
8.5: 4    9.1: 4    9.2: 4    9.3: 4

No. of ports per card (FRP-6, FRM-6)
The maximum number of ports that may be configured on the card.
8.5: 6    9.1: 6    9.2: 6    9.3: 6

No. of ports per card (FRP-31, FRM-31)
The maximum number of ports that may be configured on the card.
8.5 through 9.3: 24 per T1, 31 per E1

VC_Q total buffer space (FRP-4, FRM-4)
The total number of 120-byte memory blocks available for VC_Q space.
8.5: 6,205    9.1: 6,205    9.2: 6,205    9.3: 6,205

VC_Q total buffer space per port (FRP-4, FRM-4)
The total VC_Q buffers allocated to a single port.
8.5: 1,551    9.1: 1,551    9.2: 1,551    9.3: 1,551

VC_Q buffer size (FRP-4, FRM-4)
The number of user traffic bytes of FR data in each VC_Q buffer.
8.5: 120    9.1: 120    9.2: 120    9.3: 120

VC_Q total buffer space (FRP-6, FRM-6) (FRP-31, FRM-31)
The total number of memory blocks available for VC_Q space.
8.5: 6,368    9.1: 6,368    9.2: 6,368    9.3: 6,368

VC_Q buffer space per port (FRP-6, FRM-6) (FRP-31, FRM-31)
The total number of 100-byte memory blocks available for VC_Q space.
8.5 through 9.3: E1 = 155, T1 = 265 per DS0 of port speed

VC_Q buffer size (FRP-6, FRM-6) (FRP-31, FRM-31)
The number of user traffic bytes of FR data in each VC_Q buffer.
8.5: 100    9.1: 100    9.2: 100    9.3: 100

No. of VCs per card
The total number of connections terminating on a card.
8.5: 252    9.1: 252    9.2: 252    9.3: 252

Throughput per card
The total number of frames per second supported by a module, independent of frame size.
8.5: 2,500    9.1: 2,500    9.2: 2,500    9.3: 2,500

Packet RAM size
The total memory space allocated for packets arriving from the muxbus.
8.5: 64 KB    9.1: 64 KB    9.2: 64 KB    9.3: 64 KB

Packet RAM drain rate
The rate at which arriving cells in the packet RAM are drained by the DSP.
8.5: 14,000 pps    9.1: 14,000 pps    9.2: 14,000 pps    9.3: 14,000 pps

Data chunk size (FRP-4, FRM-4)
The block size in bytes of data transferred from the backcard to VC_Q buffer space on model D cards.
8.5: 10    9.1: 10    9.2: 10    9.3: 10

Data chunk size (FRP-6, FRM-6) (FRP-31, FRM-31)
The block size in bytes of data transferred from the backcard to VC_Q buffer space on model E cards.
8.5: 20    9.1: 20    9.2: 20    9.3: 20

UFM-C Cards

No. of ports per card
The maximum number of logical ports that may be configured on an E1 card.
8.5: 248    9.1: 248    9.2: 248    9.3: 248

No. of VCs per card
The total number of connections terminating on a card.
8.5: 1,000    9.1: 1,000    9.2: 1,000    9.3: 1,000

Throughput per physical port
The total number of frames per second supported by a T1 or E1 physical port, at a 120-byte frame size.
8.5: 3,600    9.1: 3,600    9.2: 3,600    9.3: 3,600

Ingress buffer space
The amount of memory allocated on the card for ingress buffer space.
8.5: 4 MB    9.1: 4 MB    9.2: 4 MB    9.3: 4 MB

Ingress buffer size
The size of each ingress buffer in bytes.
8.5: 120    9.1: 120    9.2: 120    9.3: 120

User traffic ingress buffer space
The maximum number of ingress buffers assignable for user traffic.
8.5: 28,000    9.1: 28,000    9.2: 28,000    9.3: 28,000

Maximum ingress buffer space per port
The maximum number of ingress buffers assignable per port.
8.5: 8,000    9.1: 8,000    9.2: 8,000    9.3: 8,000

Egress buffer size
The size of each egress buffer in bytes.
8.5: 60    9.1: 60    9.2: 60    9.3: 60

User traffic egress buffer space
The maximum number of egress buffers assignable for user traffic.
8.5: 65,000    9.1: 65,000    9.2: 65,000    9.3: 65,000

Maximum ingress buffers per VC
The maximum ingress buffer space assignable to an FR VC.
8.5 through 9.3: 2 * (Bc + Be)

Minimum frame length at maximum throughput
The minimum average frame length in bytes supported at full card throughput.
8.5: 100    9.1: 100    9.2: 100    9.3: 100

UFM-U Cards

No. of ports per card
The maximum number of logical ports that may be configured on a card.
8.5: 12    9.1: 12    9.2: 12    9.3: 12

No. of VCs per card
The total number of connections terminating on a card.
8.5: 1,000    9.1: 1,000    9.2: 1,000    9.3: 1,000

Minimum port rate
The minimum speed supported by a port.
8.5 through 9.3: 1 Mbps (HSSI); 56 kbps (V.35 or X.21)

Maximum port rate
The maximum speed supported by any port. Also the maximum aggregate speed supported by ports 1 and 2, or ports 3 and 4.
8.5 through 9.3: 16 Mbps (HSSI); 10 Mbps (V.35 or X.21)

Maximum card throughput
The maximum throughput supported by a card.
8.5: 16 Mbps    9.1: 16 Mbps    9.2: 16 Mbps    9.3: 16 Mbps

Maximum card oversubscription
The sum of the port speeds on the card must not exceed this value.
8.5: 24 Mbps    9.1: 24 Mbps    9.2: 24 Mbps    9.3: 24 Mbps

Minimum frame length at maximum throughput
The minimum average frame length in bytes supported at full card throughput.
8.5: 100    9.1: 100    9.2: 100    9.3: 100

Ingress buffer space
The amount of memory allocated on the card for ingress buffer space.
8.5: 4 MB    9.1: 4 MB    9.2: 4 MB    9.3: 4 MB

Ingress buffer size
The size of each ingress buffer in bytes.
8.5: 120    9.1: 120    9.2: 120    9.3: 120

User traffic ingress buffer space
The maximum number of ingress buffers assignable for user traffic.
8.5: 28,000    9.1: 28,000    9.2: 28,000    9.3: 28,000

Maximum ingress buffer space per port
The maximum number of ingress buffers assignable per port.
8.5: 8,000    9.1: 8,000    9.2: 8,000    9.3: 8,000

Egress buffer size
The size of each egress buffer in bytes.
8.5: 60    9.1: 60    9.2: 60    9.3: 60

User traffic egress buffer space
The maximum number of egress buffers assignable for user traffic.
8.5: 65,000    9.1: 65,000    9.2: 65,000    9.3: 65,000

Maximum ingress buffers per VC
The maximum ingress buffer space assignable to an FR VC.
8.5 through 9.3: 2 * (Bc + Be)

ALM-A Cards

No. of VCs per card
The maximum number of connections terminating on a card.
8.5: 1,000    9.1: 1,000    9.2: 1,000    9.3: N/A

FTC and FTM Cards

Aggregate port speed
The maximum aggregate port speed of the card.
8.5: 2 Mbps    9.1: 2 Mbps    9.2: 2 Mbps    9.3: N/A

UXM Cards

No. of VCs per port card
The maximum number of connections supported by a card operating in port mode.
8.5: N/A    9.1: 4,000    9.2: 4,000    9.3: 4,000

No. of VCs per trunk card
The maximum number of connections supported by a card operating in trunk mode.
8.5: N/A    9.1: 8,000    9.2: 8,000    9.3: 8,000

No. of gateway VCs per trunk card
The maximum number of gateway connections (simple, complex, and so on) supported by a card operating in trunk mode.
8.5: N/A    9.1: 4,000    9.2: 4,000    9.3: 4,000

No. of reserved VCs per trunk port
The number of VCs reserved for CC traffic and other control use per configured trunk.
8.5: N/A    9.1: 270    9.2: 270    9.3: 270

UXM T1 maximum UBUs
The maximum number of UBUs that may be assigned to a UXM T1 card.
8.5: N/A    9.1 through 9.3: 4-port = 16; 8-port = 32

Differential delay
The maximum differential delay in milliseconds between links.
8.5: N/A    9.1 through 9.3: E1 = 209 ms; T1 = 275 ms

UXM-E Cards

No. of VCs per card
The maximum number of connections supported by a card operating in port mode.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: 8,000

No. of gateway VCs per trunk card
The maximum number of gateway connections (simple, complex, and so on) supported by a card operating in trunk mode.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: 4,000

No. of reserved VCs per trunk port
The number of VCs reserved for CC traffic and other control use per configured trunk.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: 270

UXM-E T1 maximum UBUs
The maximum number of UBUs that may be assigned to a UXM-E T1 card.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: 4-port = 16; 8-port = 32

Differential delay
The maximum differential delay in milliseconds between links.
8.5: N/A    9.1: N/A    9.2: N/A    9.3: E1 = 209 ms; T1 = 275 ms

Documentation Changes for this Release

This section consists of Cisco IGX 8400 series documentation changes for switch software version 9.3.0.

Adaptive Voice

The Cisco IGX 8400 series switch software feature known variously as Adaptive Voice or Adaptive VAD is no longer supported and must not be enabled with the cnfswfunc command. This feature originally allowed the system software to dynamically enable and disable voice activity detection (VAD) on voice connections to take advantage of unused bandwidth on network trunks.

Obtaining Documentation

World Wide Web

You can access the most current Cisco documentation on the World Wide Web at http://www.cisco.com, http://www-china.cisco.com, or http://www-europe.cisco.com.

Documentation CD-ROM

Cisco documentation and additional literature are available in a CD-ROM package, which ships with your product. The Documentation CD-ROM is updated monthly. Therefore, it is probably more current than printed documentation. The CD-ROM package is available as a single unit or as an annual subscription.

Ordering Documentation

Registered CCO users can order the Documentation CD-ROM and other Cisco Product documentation through our online Subscription Services at http://www.cisco.com/cgi-bin/subcat/kaojump.cgi.

Nonregistered CCO users can order documentation through a local account representative by calling Cisco's corporate headquarters (California, USA) at 408 526-4000 or, in North America, call 800 553-NETS (6387).

Obtaining Technical Assistance

Cisco provides Cisco Connection Online (CCO) as a starting point for all technical assistance. Warranty or maintenance contract customers can use the Technical Assistance Center. All customers can submit technical feedback on Cisco documentation using the web, e-mail, a self-addressed stamped response card included in many printed docs, or by sending mail to Cisco.

Cisco Connection Online

Cisco continues to revolutionize how business is done on the Internet. Cisco Connection Online is the foundation of a suite of interactive, networked services that provides immediate, open access to Cisco information and resources at anytime, from anywhere in the world. This highly integrated Internet application is a powerful, easy-to-use tool for doing business with Cisco.

CCO's broad range of features and services helps customers and partners to streamline business processes and improve productivity. Through CCO, you will find information about Cisco and our networking solutions, services, and programs. In addition, you can resolve technical issues with online support services, download and test software packages, and order Cisco learning materials and merchandise. Valuable online skill assessment, training, and certification programs are also available.

Customers and partners can self-register on CCO to obtain additional personalized information and services. Registered users may order products, check on the status of an order and view benefits specific to their relationships with Cisco.

You can access CCO on the World Wide Web at http://www.cisco.com, http://www-europe.cisco.com, or http://www-china.cisco.com.

You can e-mail questions about using CCO to cco-team@cisco.com.

Technical Assistance Center

The Cisco Technical Assistance Center (TAC) is available to warranty or maintenance contract customers who need technical assistance with a Cisco product that is under warranty or covered by a maintenance contract.

The TAC web site, which includes links to technical support information and software upgrades and lets you request TAC support, is available at www.cisco.com/techsupport.

To contact the TAC by e-mail, use one of the following addresses:

Language           E-mail Address
English            tac@cisco.com
Hanzi (Chinese)    chinese-tac@cisco.com
Kanji (Japanese)   japan-tac@cisco.com
Hangul (Korean)    korea-tac@cisco.com
Spanish            tac@cisco.com
Thai               thai-tac@cisco.com

In North America, TAC can be reached at 800 553-2447 or 408 526-7209. For other telephone numbers and TAC e-mail addresses worldwide, consult the following web site: http://www.cisco.com/warp/public/687/Directory/DirTAC.shtml.

Documentation Feedback

If you are reading Cisco product documentation on the World Wide Web, you can submit technical comments electronically. Click Feedback in the toolbar and select Documentation. After you complete the form, click Submit to send it to Cisco.

You can e-mail your comments to bug-doc@cisco.com.

To submit your comments by mail, use the response card behind the front cover of your document, or write to the following address:

Cisco Systems, Inc.
Document Resource Connection
170 West Tasman Drive
San Jose, CA 95134-9883

We appreciate and value your comments.




This document is to be used in conjunction with the Cisco IGX 8400 Series Installation and Configuration and Cisco IGX 8400 Series Reference.

Access Registrar, AccessPath, Any to Any, AtmDirector, Browse with Me, CCDA, CCDE, CCDP, CCIE, CCNA, CCNP, CCSI, CD-PAC, the Cisco logo, Cisco Certified Internetwork Expert logo, CiscoLink, the Cisco Management Connection logo, the Cisco NetWorks logo, the Cisco Powered Network logo, Cisco Systems Capital, the Cisco Systems Capital logo, Cisco Systems Networking Academy, the Cisco Systems Networking Academy logo, the Cisco Technologies logo, ConnectWay, Fast Step, FireRunner, Follow Me Browsing, FormShare, GigaStack, IGX, Intelligence in the Optical Core, Internet Quotient, IP/VC, Kernel Proxy, MGX, Natural Network Viewer, NetSonar, Network Registrar, the Networkers logo, Packet, PIX, Point and Click Internetworking, Policy Builder, Precept, RateMUX, ScriptShare, Secure Script, ServiceWay, Shop with Me, SlideCast, SMARTnet, SVX, The Cell, TrafficDirector, TransPath, ViewRunner, Virtual Loop Carrier System, Virtual Voice Line, VisionWay, VlanDirector, Voice LAN, Wavelength Router, Workgroup Director, and Workgroup Stack are trademarks; Changing the Way We Work, Live, Play, and Learn, Empowering the Internet Generation, The Internet Economy, and The New Internet Economy are service marks; and ASIST, BPX, Catalyst, Cisco, Cisco IOS, the Cisco IOS logo, Cisco Systems, the Cisco Systems logo, the Cisco Systems Cisco Press logo, Enterprise/Solver, EtherChannel, EtherSwitch, FastHub, FastLink, FastPAD, FastSwitch, GeoTel, IOS, IP/TV, IPX, LightStream, LightSwitch, MICA, NetRanger, Post-Routing, Pre-Routing, Registrar, StrataView Plus, Stratm, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. or its affiliates in the U.S. and certain other countries. All other trademarks mentioned in this document are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any of its resellers. (0003R)

Copyright © 2000, Cisco Systems, Inc. All rights reserved.

