Sep 07

Fast Convergence for Designated Ports

RSTP's fast convergence depends on the use of point-to-point links between switches. To quickly transition a designated port out of the discarding state, the upstream switch needs to make sure that the downstream neighbor agrees with that idea. This constitutes the process known as the handshake (or proposal/agreement):

  1. The upstream bridge sends a proposal out of its designated port. In practice, it simply sets the proposal bit in its outgoing configuration BPDUs.
  2. The downstream bridge receives the proposal, and if it agrees with the upstream port's role, it starts the process known as synchronization.
  3. Synchronization means the downstream bridge blocks all of its non-edge designated ports before sending an agreement back to the upstream bridge.
  4. Synchronization is needed to make sure there are no loops in the topology after the upstream bridge unblocks its designated port.
  5. If the downstream bridge does not agree with the proposal, it continues sending its own configuration BPDUs with the proposal bit set. Eventually, one of the bridges accepts the superior information and sends an agreement.
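To make the handshake concrete, here is a minimal Python sketch of the synchronization logic, assuming a heavily simplified bridge model; the class and attribute names are invented for illustration and do not come from the 802.1w state machines.

```python
# Minimal sketch of the RSTP proposal/agreement handshake.
# Illustrative only; real RSTP runs per-port state machines (IEEE 802.1w).

class Port:
    def __init__(self, name, role, edge=False):
        self.name = name
        self.role = role          # 'root', 'designated', or 'alternate'
        self.edge = edge          # edge ports are exempt from sync
        self.state = 'forwarding' if edge else 'discarding'

class Bridge:
    def __init__(self, ports):
        self.ports = ports

    def receive_proposal(self, upstream_port):
        """Handle a proposal received on what becomes our root port."""
        # Synchronization: block every non-edge designated port so that
        # unblocking the upstream link cannot form a loop.
        for p in self.ports:
            if p is not upstream_port and p.role == 'designated' and not p.edge:
                p.state = 'discarding'
        # Now it is safe to agree and forward on the root port.
        upstream_port.role = 'root'
        upstream_port.state = 'forwarding'
        return 'agreement'   # sent back; the upstream bridge unblocks at once

uplink = Port('Gi0/1', 'root')
downstream = Port('Gi0/2', 'designated')
host_port = Port('Gi0/3', 'designated', edge=True)
sw = Bridge([uplink, downstream, host_port])
print(sw.receive_proposal(uplink), downstream.state, host_port.state)
```

The blocked designated ports then run the same handshake toward their own downstream neighbors, so synchronization propagates as a wave from the root toward the edge of the tree.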

Fast Convergence for Other Port Types

See a more detailed overview at: http://blog.ine.com/wp-content/uploads/2010/04/understanding-stp-rstp-convergence.pdf

The above procedure outlines the fast transition process for designated ports. As for root ports, they can always transition to the forwarding state upon receiving a superior BPDU and synchronizing the local designated ports. Alternate ports may quickly transition to the forwarding state if the current root port is lost, much like the UplinkFast feature in classic STP implementations. Inferior BPDU handling is similar to the BackboneFast feature: a designated port receiving an inferior BPDU will quickly send a new proposal to synchronize the downstream peer.

Why Are P2P Links So Important?

And now, an interesting question: why does RSTP need point-to-point links for fast convergence? The answer lies in the handshake protocol. If there were multiple devices on the segment, performing synchronization would become really cumbersome. The upstream bridge would have to detect all downstream bridges and wait for every one of them to synchronize with its proposal. Implementing such a complicated protocol is not worth the benefit, as most of the time switches are connected using full-duplex point-to-point links.

Treating Shared Links as P2P

Yes, this is possible. Even though you may think this is a purely theoretical concern, it's possible to encounter it in real-life scenarios. Recall that RSTP detects P2P links by looking at the link duplex. What if we have switches A, B, and C (customer switches) plugged into switch D (provider switch), with switch D performing Layer 2 Protocol Tunneling? In this case, all customer switches would consider their connections to be P2P. However, switch D tunnels all BPDUs, effectively connecting the switches via a shared cloud. In this situation, RSTP would behave according to the P2P link rules, sending a proposal and unblocking the designated port upon receiving the first agreement. This may introduce temporary Layer 2 loops, as one of the switches may not yet be synchronized. You may easily lab up this scenario and observe "fast convergence" over a shared link.

Another situation is possible with Cisco's Rapid PVST+ (RPVST+), which is a hybrid of RSTP and Cisco's proprietary PVST+. The encapsulation rules for RPVST+ follow those used for PVST+ in order to allow tunneling of Cisco PVST instances over an IEEE CST cloud (you may read about PVST+ encapsulation rules in the blog post named PVST+ Explained). The problem with RPVST+ is the same: there could be multiple Cisco switches connected to the IEEE CST cloud, with every switch treating this cloud as a P2P link, since the links are full-duplex. The net effect is the same as described above.

So How Long Does It Take for RSTP to Converge?

Contrary to the common belief in RSTP's fast convergence (on the order of milliseconds), it may exhibit convergence times on the order of seconds even for small topologies. The main problem is the distance-vector, or gradient, nature of the STP protocol, which converges based on the best information received from peers. In some cases (e.g., a root bridge crash), this may result in old information circulating inside the topology until the hop count exceeds the limit. This is known as count to infinity and is very similar to the problem found in RIP or any other distance-vector protocol. With critical nodes failing, RSTP may take seconds to recover from the information loss.

Further Reading on Ethernet and RSTP

If you are curious about the details, you may want to read the following relatively short article, Scaling Ethernet to a Million Nodes, which discusses the RSTP convergence problem and proposes a solution for "broadcast-less" Ethernet that does not require STP. Pay special attention to the figures, especially the one showing the convergence times in ring topologies. To anyone further interested in "scalable Ethernet" I would highly recommend reading another, more recent article: Floodless in SEATTLE. Special thanks to Daniel Ginsburg for referring me to this little gem!

Have fun with RSTP! :)

Mar 14

We are going to discuss Cisco's proprietary extensions to the STP algorithm, namely UplinkFast and BackboneFast. These two features aim to reduce the time it takes STP to re-activate the topology after a link failure. While UplinkFast seems pretty intuitive, BackboneFast is more complicated.

See a more detailed overview at: http://blog.ine.com/wp-content/uploads/2010/04/understanding-stp-rstp-convergence.pdf

UplinkFast

[Figure 1: Layer 2 network with backbone (core) and access layers; the STP active topology is highlighted in green]

Look at Figure 1 above. It demonstrates a sample network topology (pretty poorly designed, by the way) with backbone (core) and access layers. Everything is Layer 2 in this topology, and you can see the STP subset of the topology highlighted in green. The UplinkFast extension was designed for use on access-layer switches, such as AC-SW1 and AC-SW2. Commonly, these switches have redundant uplinks to the core/distribution layer. If the primary uplink fails, it would take 2xForward_Time for the backup uplink to come up, and that is only if the failure is detected immediately, without having to age out the BPDU stored for the primary uplink. However, it makes sense to bring the secondary uplink up immediately, as soon as the primary's failure has been detected, since there are no other uplinks left. Even when the access switch has more than two uplinks, we can still transition the next available uplink to the operational state, as long as the access switch is not supposed to become a transit path for any traffic. If the access switch could become a transit point, we cannot use the same trick, as this might introduce loops into the active topology. Therefore, we can quickly transition the backup link into the forwarding state if:

1) The switch has only two uplinks.
2) The switch has more than two uplinks, but the STP parameters are set in such a way that the switch would never become a transit node.

The second condition can be fulfilled by setting the STP priority of the switch to a value that makes it almost impossible for it to become the root bridge (a numerically large value) and by increasing the STP cost of all ports, making all transit paths via this switch less preferred. This is what Cisco calls the "UplinkFast" feature. When you enable it on a switch, the failure of the root port will transition the secondary uplink (alternate port) into the forwarding state almost immediately.
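To illustrate the second condition, here is a small Python sketch of the parameter shift UplinkFast applies. Per Cisco's documentation, enabling UplinkFast raises the bridge priority to 49152 and adds 3000 to every port's cost; the function below merely models that arithmetic and is not Cisco code.

```python
# Sketch of the UplinkFast parameter shift (values per Cisco docs):
# bridge priority is raised to 49152 and every port cost grows by 3000,
# making the switch an unattractive root bridge and transit path.
UPLINKFAST_PRIORITY = 49152
UPLINKFAST_COST_INCREMENT = 3000

def enable_uplinkfast(bridge_priority, port_costs):
    """Return the adjusted (priority, costs) an access switch would use."""
    new_priority = max(bridge_priority, UPLINKFAST_PRIORITY)
    new_costs = {port: cost + UPLINKFAST_COST_INCREMENT
                 for port, cost in port_costs.items()}
    return new_priority, new_costs

# Example: a default-priority switch with two 100 Mbps uplinks (cost 19).
priority, costs = enable_uplinkfast(32768, {'Fa0/1': 19, 'Fa0/2': 19})
print(priority, costs)   # 49152 {'Fa0/1': 3019, 'Fa0/2': 3019}
```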

The last component of the UplinkFast feature is quick MAC address table re-learning. Since the primary uplink has failed, the core switches might lose the MAC addresses associated with the access switch. This will result in a connectivity disruption until the addresses are re-learned via the secondary path. Thus, in addition to bringing the secondary uplink up quickly, the switch also floods it with dummy multicast frames, sourced from all the MAC addresses known to the switch. Because the destination is multicast, these frames reach all the upstream switches, allowing them to quickly re-learn the MAC addresses via the new path. Of course, the penalty is excessive network flooding with multicast frames.
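As a sketch of that re-learning trick: for every locally known MAC address, the switch emits one frame sourced from that address toward a well-known multicast destination (0100.0ccd.cdcd on Cisco switches, per their documentation). A minimal Python illustration, with made-up helper names:

```python
# Toy model of UplinkFast dummy-frame flooding after a failover.
DUMMY_DST = '01:00:0c:cd:cd:cd'   # Cisco's UplinkFast multicast address

def dummy_frames(known_macs):
    """Yield (src, dst) pairs for the frames flooded on the new uplink."""
    for mac in known_macs:
        # Upstream switches learn 'mac' on the port the frame arrives on,
        # repairing their filtering tables without waiting for real traffic.
        yield (mac, DUMMY_DST)

for src, dst in dummy_frames(['aa:aa:aa:00:00:01', 'aa:aa:aa:00:00:02']):
    print(src, '->', dst)
```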

BackboneFast

In the previous post, we described how STP handles inferior BPDUs. In short, the classic algorithm simply ignores the potentially useful information conveyed by inferior BPDUs. Look at Figure 2 below.

[Figure 2: backbone topology with BB-SW2 (the root), BB-SW3, and BB-SW4]

What if the link between BB-SW2 and BB-SW3 fails? First, since BB-SW2 is the root, the failure of the designated port will only cause a topology change event. However, things are more complicated for BB-SW3, since it loses its root port. BB-SW3 will invalidate the currently known root bridge information and try looking for an alternative. Since the port on BB-SW4 connected to BB-SW3 is blocking, there is no new information. Thus BB-SW3 declares itself the root of the spanning tree and starts sending inferior BPDUs to BB-SW4. If the latter were a classic STP switch, it would ignore the inferior information until the BPDU stored with the blocked port expired (around Max_Age seconds). However, with BackboneFast enabled, the switch that receives the inferior information will attempt to verify whether this failure affected its own connection to the root (i.e., whether the current root bridge is actually dead, or whether only the neighbor lost its connection to it).

1) The switch selects the root port and all alternate ports (all upstream ports) as the candidate paths to the current root. In the case of BB-SW4, there is just one root link, to BB-SW2, plus an alternate path via BB-SW3. The switch then sends special RLQ (Root Link Query) BPDUs out of the selected ports, in our case to BB-SW2 and BB-SW3. These BPDUs contain the following (among other fields):

a) The Bridge ID of the querying switch (local switch BID)
b) The Bridge ID of our current Root Bridge (or what the querying switch thinks is the current root bridge).

2) Every switch that receives the RLQ checks the Root Bridge ID in the query.
2.1) If this is the same BID that the receiving switch considers to be the root, it relays the RLQ upstream, across its root port.
2.2) If the switch receiving the RLQ is the root bridge itself, it floods a positive RLQ response out of ALL its designated (downstream) ports. In our situation, this is the case, and BB-SW2 immediately responds to BB-SW4.
2.3) If the switch receiving the RLQ considers a different bridge to be the root of the topology, it immediately responds with a negative RLQ response, flooded out of all designated (downstream) ports.

3) RLQ responses are relayed downstream by every switch out of all its designated ports. Only the switch that sees itself as the originator of the RLQ will not flood the responses further. This is how the RLQ responses are eventually delivered to the querying bridge.

4) When the originating switch receives a negative response on any upstream port, it immediately invalidates the information stored with that port and moves it to the Listening state, starting the BPDU exchange. This happens with the port connected to BB-SW3 in our case. If the answer is positive, the information stored with the port is considered valid. The switch waits for responses on all upstream ports, retaining or invalidating the respective stored BPDUs.

5) If ALL responses were negative, the querying switch deduces loss of connectivity to the old root. The querying switch declares itself the new root and starts propagating this information out of all ports, while listening for better BPDUs at the same time.

6) If at least ONE response was positive, the querying switch assumes that it still has a healthy connection to the current root and unblocks the ports that received negative responses. This allows the switch to start sending information about the current root to the switch that thinks it has lost its connection to the root bridge. In our case, BB-SW4 receives a positive response from BB-SW2 and immediately unblocks the port to BB-SW3, starting to relay BB-SW2's configuration BPDUs.
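A rough Python sketch of the querying switch's decision logic (steps 4 through 6) follows; the data structures are invented for illustration and do not mirror Cisco's actual implementation.

```python
# Sketch of how a BackboneFast querying switch could evaluate RLQ
# responses received on its upstream (root + alternate) ports.

def process_rlq_responses(responses):
    """responses: dict mapping upstream port name -> 'positive'/'negative'."""
    actions = []
    for port, answer in responses.items():
        if answer == 'negative':
            # Step 4: stored BPDU invalidated, port restarts in Listening.
            actions.append((port, 'invalidate_bpdu_and_listen'))
        else:
            # Positive: the stored root information is still valid.
            actions.append((port, 'keep_bpdu'))
    if all(a == 'negative' for a in responses.values()):
        # Step 5: no path to the old root remains -- claim root ourselves.
        actions.append(('self', 'declare_root'))
    return actions

# BB-SW4's case: positive from BB-SW2 (root port), negative via BB-SW3.
print(process_rlq_responses({'to-BB-SW2': 'positive', 'to-BB-SW3': 'negative'}))
```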

Therefore, the overall effect of BackboneFast is proactive testing of the current topology and quick invalidation of outdated information. BackboneFast saves up to Max_Age seconds of waiting to expire the old root bridge information and thus reduces the convergence time to 2xForward_Time. The BackboneFast feature is only useful on switches capable of becoming a transit node, i.e. the switches that form the core (backbone) of the bridged network. Of course, since BackboneFast uses special RLQ BPDUs, it must be explicitly enabled on all participating Cisco switches.

Finally, a quick question to think about after reading this post: is it OK to enable the UplinkFast feature on all switches forming a "ring" topology?

Mar 07

In this post we are going to look into the STP convergence process. Many people have a solid understanding of STP, yet face difficulties when they see questions like "How many seconds will it take for STP to recover connectivity if a given link fails?". The post will follow the outline below:

1) General overview of STP convergence process
2) How STP converges if a directly connected link fails
3) How STP converges when it detects indirect link failure
4) Topology changes and their effect

See a more detailed overview at: http://blog.ine.com/wp-content/uploads/2010/04/understanding-stp-rstp-convergence.pdf

STP Convergence in General

As we know, the STP protocol follows a fairly simple procedure to calculate a loop-free subset of the network topology. STP can be compared to RIP in some sense. Both execute a version of the Bellman-Ford iterative algorithm, which could be described as "gradient" (meaning it iteratively looks for the optimal solution, selecting the "closest" candidate every time). Every switch accepts and retains only the best current root bridge information. The switch then blocks the alternate paths to the root bridge, leaving only the single optimal (in terms of path cost) uplink, and continues relaying the optimal information. If a switch learns about a better ("superior") root bridge than the one it currently knows (e.g. a better bridge ID, or a shorter path to the root), the old information is erased and the new information is immediately accepted and relayed. Note that the switch stores the most recent STP BPDU with every port that receives one. Therefore, for a given switch, there is a BPDU stored with every root or alternate (blocked) port.

There are certain features in STP designed to improve the algorithm's stability and ensure the aging out of old information. Every BPDU contains two fields: Max_Age and Message_Age. The Message_Age field is incremented every time a BPDU traverses a switch (so it might be compared to a hop count). When a switch stores a BPDU with the respective port, it counts the time in seconds, starting from Message_Age and up to Max_Age. If no further BPDUs are received during this interval, the current BPDU is wiped out and the port is declared designated. This procedure ensures that old information is eventually aged out of the topology.

There is one more thing, similar to the "hold-down" feature found in RIP: the way in which STP deals with "inferior" BPDUs. A BPDU is considered inferior if it carries information about a root bridge that is worse than the one currently stored for the port, or if it advertises a longer distance to reach the current root bridge (compare this to RIP's increase in metric). Inferior BPDUs may appear when a neighboring switch suddenly loses its uplink and claims itself the new root of the topology. By default, every switch should ignore inferior BPDUs until the currently stored BPDU expires (time = Max_Age - Message_Age). This feature is intended to stabilize the STP topology in situations where an uplink on some switch flaps, causing the switch to start sending inferior information.
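The "best information wins" rule is easy to express in code. Below is a minimal Python sketch of the standard BPDU comparison (root bridge ID, then root path cost, then sender bridge ID, then sender port ID, lower values winning); the plain tuples are a simplification for illustration, not a byte-accurate BPDU parser.

```python
# Minimal sketch of STP BPDU comparison: the lower tuple wins.
# Order: root bridge ID, root path cost, sender bridge ID, sender port ID.

def is_superior(new_bpdu, stored_bpdu):
    """Return True if new_bpdu carries better information."""
    return new_bpdu < stored_bpdu   # lexicographic tuple comparison

stored   = (4096, 19, 8192, 1)    # current best: root BID 4096, cost 19
inferior = (8192, 0, 8192, 1)     # neighbor claiming itself root: worse BID
superior = (4096, 10, 12288, 2)   # same root, but a shorter path

print(is_superior(inferior, stored))  # False -- ignored until Max_Age expiry
print(is_superior(superior, stored))  # True  -- replaces the stored BPDU at once
```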

STP Convergence in Case of a Directly Connected Link Failure

Consider the switch in Fig. 1, with two uplinks: one forwarding (root port, Port A) and another blocking (alternate port, Port B). Imagine now that the root port fails.

[Figure 1: a switch with root port A, alternate port B, and designated port C]

There are two different situations:

1) The switch detects loss of carrier and immediately declares the port dead. Since this was the port with the best information, the switch immediately invalidates it and selects the next best candidate, the alternate port (Port B), as the new root port. The switch will transition Port B through the Listening and Learning states, which takes 2xForward_Time. Therefore, connectivity is restored in 2xForward_Time.

2) The switch does not detect the loss of carrier (for example, the uplink is fiber connected through a media converter, or connects through a hub), and thus the port remains up. The root port, however, loses the continuous stream of BPDUs, so the stored BPDU information is no longer refreshed. Based on the default procedure, it takes time = Max_Age - Message_Age to expire the stored information. After this, the switch considers the BPDU stored with the alternate port and unblocks Port B. It will take another 2xForward_Time to bring the port to the forwarding state. Therefore, connectivity is restored in 2xForward_Time + (Max_Age - Message_Age).
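With the default timers (Forward_Time of 15 seconds, Max_Age of 20 seconds) and an assumed Message_Age of 1 (a switch one hop from the root), the two cases work out as in this small Python calculation:

```python
# Direct-failure convergence times with default STP timers.
FORWARD_TIME = 15    # seconds (Forward_Delay)
MAX_AGE = 20         # seconds
MESSAGE_AGE = 1      # assumed: the switch is one hop from the root

# Case 1: loss of carrier detected -> only Listening + Learning.
with_carrier_loss = 2 * FORWARD_TIME                                # 30 s

# Case 2: port stays up -> the stored BPDU must age out first.
without_carrier_loss = (MAX_AGE - MESSAGE_AGE) + 2 * FORWARD_TIME   # 49 s

print(with_carrier_loss, without_carrier_loss)
```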

If the switch detects loss of carrier on the designated port (Port C), nothing much will happen. Since no BPDUs are received on this port, the switch will only generate a topology change event (more on that later), but will not block or unblock any other local ports. This event might, however, affect the downstream switches.

STP Convergence in Case of an Indirect Link Failure

Consider the topology in Fig. 2.

[Figure 2: SW1 (the root), SW2, and SW3 connected in a triangle]

In this case, SW2 has a better Bridge ID than SW3, and thus Port D is designated on the segment between SW2 and SW3. SW3 blocks the redundant uplink via SW2 (Port B) and elects Port A as its root port. Now imagine that SW2 detects loss of carrier on the link connected to SW1 (Port C). The switch will immediately invalidate the best BPDU stored for Port C and will assume itself the root of the spanning tree, as there are no other ports receiving BPDUs. SW2 will start advertising BPDUs to SW3, setting the designated bridge and the root bridge to itself in the configuration BPDUs. Those are, by definition, inferior BPDUs, and SW3 will ignore them, as it still hears better information from SW1. SW3 will also keep the previous BPDU associated with Port B for the duration of Max_Age - Message_Age. When this timer expires, SW3 will start considering the inferior BPDUs. Port B will move to the Listening state, and SW3 will start relaying SW1's BPDUs to SW2, as those are superior to SW2's BPDUs. Now SW2 will detect the better information on its formerly designated port (Port D) and will cycle the port through the Listening and Learning states. Both switches (SW2 and SW3) will eventually move their ports into the forwarding state, recovering connectivity. Therefore, it will take Max_Age - Message_Age + 2xForward_Time (roughly 50 seconds with the default timers) to recover from an indirect link failure.

The Effect of Topology Changes

Switches forward Ethernet frames based on their MAC address tables (filtering tables), which bind MAC addresses to egress ports. When a change in the topology occurs (e.g. a link failure), the MAC address tables may become invalid, as the paths between switches have changed. The switches will eventually re-learn the new information, but it may take considerable time, especially if traffic is scarce and the MAC address aging time is large (5 minutes by default). Because of this, when a switch detects a change in the topology (e.g. a link going up or down), it should notify all other switches that something has changed. In response to this notification, all switches reduce their MAC address aging time to Forward_Time (15 seconds by default), effectively speeding up the aging process.

As we know, topology changes are signaled via a special TCN BPDU, which is sent upstream from the originating switch (the one that detected the change) toward the root switch via the root ports; each upstream switch acknowledges the TCN by setting the TCA (Topology Change Acknowledgment) flag in the configuration BPDU sent back on that segment, at which point the originating switch stops signaling the change. When the root switch hears the TCN BPDU, it sets the TC (Topology Change) flag in all its outgoing configuration BPDUs for the duration of Max_Age + Forward_Time. All switches that see this flag set their MAC address table aging time to Forward_Time.
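As a toy illustration of the aging-time effect (not of the BPDU exchange itself), consider this small Python model; the function name and structure are mine, chosen only to show the arithmetic:

```python
# Toy model of the MAC aging change driven by a topology change.
DEFAULT_AGING = 300        # seconds (5 minutes)
FORWARD_TIME = 15          # seconds
MAX_AGE = 20               # seconds

def aging_time(tc_flag_seen):
    """MAC table aging a switch uses while the root advertises TC."""
    return FORWARD_TIME if tc_flag_seen else DEFAULT_AGING

# The root keeps the TC flag set for Max_Age + Forward_Time seconds:
tc_duration = MAX_AGE + FORWARD_TIME   # 35 s with default timers
print(aging_time(True), aging_time(False), tc_duration)
```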

Now what is the effect of a topology change event? Two major things are impacted:

1) Connectivity. In some cases, it may take an additional Forward_Time seconds to expire the old MAC address information and recover connectivity. This may only happen if the old information persists in some switches and frames are black-holed.

2) Network performance. Shortening the MAC address table aging time results in less stable forwarding tables. When a switch loses a MAC address, it starts flooding frames for that destination, effectively acting like a hub. If the flow of packets in your network is not intense enough, the switches may start losing MAC address table entries, resulting in excessive traffic flooding.

The second issue might become pretty dangerous with a high number of topology changes. Excessive flooding might severely impact your network performance. Note that this issue also pertains to Layer 2 topologies that run RSTP, as topology changes are handled in a similar way there. In order to reduce the number of topology changes, configure all edge ports in the topology (those connected to hosts, IP phones, and servers) as spanning-tree portfast. PortFast ports do not generate TC events when they go up or down.

For a more detailed description of topology change notification, read the following great article on Cisco's site:

Understanding Spanning-Tree Topology Changes

Part II of this post will consider the UplinkFast and BackboneFast features and their effect on STP convergence.

PS
To be precise, we often use the formula Max_Age - Message_Age in this text. However, most STP topologies are small enough to ignore Message_Age and assume the value of Max_Age for most calculations, unless Max_Age is artificially set to a very low value.
