I recently ran into a problem when applying a base build to a Cisco 7600 router with dual supervisors. The issue didn't seem to be documented anywhere, so I thought I'd record it and the eventual fix here.
The gist of the problem was that the secondary supervisor would not progress from cold standby to hot; in other words, if the active supervisor crashed, the chassis would have to reboot in order to bring the standby into service. The system reported the reason as a software mismatch, even though both cards had the same image installed:
BUILD#show redundancy states
my state = 13 -ACTIVE
peer state = 4 -STANDBY COLD
Mode = Duplex
Unit = Primary
Unit ID = 5
Redundancy Mode (Operational) = rpr Reason: Software mismatch
Redundancy Mode (Configured) = sso
Redundancy State = rpr
Maintenance Mode = Disabled
Communications = Up
client count = 159
client_notification_TMR = 30000 milliseconds
keep_alive TMR = 9000 milliseconds
keep_alive count = 1
keep_alive threshold = 18
RF debug mask = 0x0
I won't say exactly which image this was, but it was an SSO-capable release of IOS 15 and the two supervisors were *definitely* running the same code (one image was copied from the other). The tale of software incompatibility seemed unlikely.
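Before taking the logs at face value it's worth double-checking that claim: "show module" lists the software version that each slot is actually running, so comparing the two supervisor rows only takes a second (in my case they did indeed match):
BUILD#show module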
BUILD#show log
[snip]
*Jan 6 17:21:33.339: %SYS-SP-STDBY-5-RESTART: System restarted --
Cisco IOS Software, c7600s72033_sp Software (c7600s72033_sp-ADVIPSERVICESK9-M), Version 15.x(x)x, RELEASE SOFTWARE (xx)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2012 by Cisco Systems, Inc.
Compiled Mon 00-Jan-00 00:00 by prod_rel_team
*Jan 6 17:22:50.255 GMT: Config Sync: Bulk-sync failure due to Servicing Incompatibility. Please check full list of mismatched commands via:
show redundancy config-sync failures mcl
*Jan 6 17:22:50.255 GMT: Config Sync: Starting lines from MCL file:
-ipv6 mfib hardware-switching replication-mode ingress
*Jan 6 17:22:50.255 GMT: %ISSU-SP-3-INCOMPATIBLE_PEER_UID: Setting image (c7600s72033_sp-ADVIPSERVICESK9-M), version (15.x(x)xx) on peer uid (6) as incompatible
*Jan 6 17:22:50.995 GMT: %RF-SP-5-RF_RELOAD: Peer reload. Reason: ISSU Incompatibility
*Jan 6 17:22:50.995 GMT: %OIR-SP-3-PWRCYCLE: Card in module 6, is being power-cycled (RF request)
*Jan 6 17:22:51.999 GMT: %PFREDUN-SP-6-ACTIVE: Standby processor removed or reloaded, changing to Simplex mode
*Jan 6 17:22:53.195 GMT: %SNMP-5-MODULETRAP: Module 6 [Down] Trap
*Jan 6 17:24:19.791 GMT: %ISSU-SP-3-PEER_IMAGE_INCOMPATIBLE: Peer image (c7600s72033_sp-ADVIPSERVICESK9-M), version (15.x(x)xx) on peer uid (6) is incompatible
*Jan 6 17:24:19.791 GMT: %ISSU-SP-3-PEER_IMAGE_INCOMPATIBLE: Peer image (c7600s72033_sp-ADVIPSERVICESK9-M), version (15.x(x)xx) on peer uid (6) is incompatible
*Jan 6 17:25:53.149 GMT: %PFREDUN-SP-4-INCOMPATIBLE: Defaulting to RPR mode (Runtime incompatible)
*Jan 6 17:25:54.154 GMT: %PFREDUN-SP-6-ACTIVE: Standby initializing for RPR mode
*Jan 6 17:25:58.471 GMT: %SYS-SP-3-LOGGER_FLUSHED: System was paused for 00:00:00 to ensure console debugging output.
*Jan 6 17:25:58.763 GMT: %FABRIC-SP-5-CLEAR_BLOCK: Clear block option is off for the fabric in slot 6.
*Jan 6 17:25:58.859 GMT: %FABRIC-SP-5-FABRIC_MODULE_BACKUP: The Switch Fabric Module in slot 6 became standby
*Jan 6 17:26:00.299 GMT: %SNMP-5-MODULETRAP: Module 6 [Up] Trap
*Jan 6 17:26:00.279 GMT: %DIAG-SP-6-BYPASS: Module 6: Diagnostics is bypassed
*Jan 6 17:26:00.375 GMT: %OIR-SP-6-INSCARD: Card inserted in slot 6, interfaces are now online
*Jan 6 17:26:06.435 GMT: %RF-SP-5-RF_TERMINAL_STATE: Terminal state reached for (RPR)
OK, so clearly it doesn't like the "ipv6 mfib hardware-switching replication-mode ingress" command for some reason. Why it would work on one supervisor and not the other is a mystery, but hey... I don't have big plans for IPv6 multicast so I don't care what replication mode it's in - let's just delete the offending command:
BUILD#conf t
Enter configuration commands, one per line. End with CNTL/Z.
BUILD(config)#no ipv6 mfib hardware-switching replication-mode ingress
no ipv6 mfib hardware-switching replication-mode ingress
^
% Invalid input detected at '^' marker.
So I can't negate the command; in fact, there's no "mfib" stanza under "no ipv6" at all:
BUILD(config)#no ipv6 ?
access-list Configure access lists
[snip]
local Specify local options
mld Global mld commands
[snip]
spd Selective Packet Discard (SPD)
In fact, even the original command seems to be invalid:
BUILD(config)#ipv6 mfib hardware-switching replication-mode ?
% Unrecognized command
And yet here it is in the config from which we booted:
BUILD#show start | inc ipv6
ipv6 unicast-routing
ipv6 mfib hardware-switching replication-mode ingress
no mls flow ipv6
?!?!
I guess it's one of those legacy commands the CLI is bodged to accept but which never shows up in the help. Not that it will take the command anyway :| Eventually I found an equivalent command that it *would* take:
BUILD(config)#no ipv6 multicast hardware-switching replication-mode ingress
Warning: This command will change the replication mode for all address families.
BUILD(config)#do show run | inc ipv6
ipv6 unicast-routing
no mls flow ipv6
BUILD(config)#
At last, the problem config is gone! We're almost there, but not quite: the record of previous failures persists on the active supervisor even if the standby is reloaded, so we have to kick it to re-evaluate:
BUILD#show redundancy config-sync failures mcl
Mismatched Command List
-----------------------
-ipv6 mfib hardware-switching replication-mode ingress
BUILD#redundancy config-sync validate mismatched-commands
*Jan 7 08:26:28.600 GMT: CONFIG SYNC: MCL validation succeeded
*Jan 7 08:26:28.600 GMT: %ISSU-SP-3-PEER_IMAGE_REM_FROM_INCOMP_LIST: Peer image (c7600s72033_sp-ADVIPSERVICESK9-M), version (15.x(x)xx) on peer uid (6) being removed from the incompatibility list
BUILD#show redundancy config-sync failures mcl
Mismatched Command List
-----------------------
The list is Empty
BUILD#redundancy reload peer
Reload peer [confirm]
Preparing to reload peer
BUILD#
*Jan 7 08:27:16.096 GMT: RP sending reload request to Standby. User: admin on console, Reason: Admin reload CLI
BUILD#
Eventually...
*Jan 7 08:33:37.532 GMT: %HA_CONFIG_SYNC-6-BULK_CFGSYNC_SUCCEED: Bulk Sync succeeded
*Jan 7 08:33:37.552 GMT: %RF-SP-5-RF_TERMINAL_STATE: Terminal state reached for (SSO)
*Jan 7 08:33:36.572 GMT: %PFREDUN-SP-STDBY-6-STANDBY: Ready for SSO mode
BUILD#show redundancy
Redundant System Information :
------------------------------
Available system uptime = 15 hours, 20 minutes
Switchovers system experienced = 0
Standby failures = 3
Last switchover reason = none
Hardware Mode = Duplex
Configured Redundancy Mode = sso
Operating Redundancy Mode = sso
Maintenance Mode = Disabled
Communications = Up
Current Processor Information :
-------------------------------
Active Location = slot 5
Current Software state = ACTIVE
Uptime in current state = 15 hours, 19 minutes
Image Version = Cisco IOS Software, c7600s72033_rp Software (c7600s72033_rp-ADVIPSERVICESK9-M), Version 15.x(x)xx, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2012 by Cisco Systems, Inc.
Compiled Wed 01-Aug-12 20:15 by prod_rel_team
BOOT = sup-bootdisk:/c7600s72033-advipservicesk9-mz.15x-x.xx.bin,1;
CONFIG_FILE =
BOOTLDR =
Configuration register = 0x2102
Peer Processor Information :
----------------------------
Standby Location = slot 6
Current Software state = STANDBY HOT
Uptime in current state = 3 minutes
Image Version = Cisco IOS Software, c7600s72033_rp Software (c7600s72033_rp-ADVIPSERVICESK9-M), Version 15.x(x)xx, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2012 by Cisco Systems, Inc.
Compiled Wed 01-Aug-12 20:15 by prod_rel_team
BOOT = sup-bootdisk:/c7600s72033-advipservicesk9-mz.15x-x.xx.bin,1;
CONFIG_FILE =
BOOTLDR =
Configuration register = 0x2102
BUILD#
Win!
Cisco Nexus Output Errors
A little while ago I was asked to investigate an IP-based storage problem which had been traced back to a large number of output errors on the port facing a particular compute node. The port was on a Cisco Nexus 5000 series switch and, while the output errors were climbing at a massive rate, the switch gave me nothing to go on as to what kind of errors they were. Every one of the usual suspects (collisions, etc.) showed nothing, and yet the output errors kept clocking up.
The ultimate answer turned out to be related to the fact that the Nexus 5k aims for low latency and as such performs cut-through switching. If you're not familiar with this term, please refer to Cisco's reasonably decent explanation of the topic; at a high level there are two possible forwarding modes in switched networks:
1 - Store and Forward, where the entire frame is buffered into memory, the FCS is validated and then the frame is passed on. This mode can handle ports of differing speeds but obviously for large frames the serialisation delay becomes significant.
2 - Cut through, where just the header is checked for source / destination, plus any fields required for QoS / ACLs, then the rest of the frame is "cut through" onto the appropriate output port without buffering. This requires ports of an identical speed but offers lower latency.
One of the not-immediately-obvious side effects of cut-through switching is that the FCS is only validated once the frame has already been forwarded, by which point it is too late to take any corrective action. Essentially, the switch has already passed a broken frame on and, although it knows this, it can do nothing about it in retrospect, so it just says "oh, well" and increments its error counters on both the ingress and egress ports.
If you are seeing output errors on a port with no other real explanation of how they got there, check other ports of the same speed for input errors. In my case it was due to a fibre fault - corrupted frames were entering one port, being cut through to another and causing errors to clock up on both.
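Two show commands make that hunt quicker. The first lists every port's error counters in one place; the second reports the forwarding mode, although it only exists on platforms where the mode is configurable (the Nexus 3000 series, for example), so treat its availability as model- and release-dependent:
NEXUS# show interface counters errors
NEXUS# show switching-mode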
A Down in the Weeds look at Route Distinguishers
I was recently involved in a discussion on reddit about VRF route targets and route distinguishers, and I noticed a lot of misinformation flying around. That doesn't really surprise me, as many of the folks there are still learning, and I've heard some jarring misconceptions on the topic from senior engineers who have worked with MPLS for years. Most of the route target confusion was straightened out quite quickly and I won't get into it here; the route distinguisher debate, however, went on longer and covered some areas that seemed new or controversial to a lot of people.
The crux of the issue is that a lot of people believe the route distinguisher to be only locally significant - and apparently there are many resources on the Internet which say so. I'll grant you that many are ambiguous. For example, the first hit on Google for "route distinguisher and route target" says that "The route distinguisher has only one purpose, to make IPv4 prefixes globally unique. It is used by the PE routers to identify which VPN a packet belongs to". The well-respected Packet Life blog says "As its name implies, a route distinguisher (RD) distinguishes one set of routes (one VRF) from another. It is a unique number prepended to each route within a VRF to identify it as belonging to that particular VRF or customer." To be fair, it goes on to clarify that "An RD is carried along with a route via MP-BGP when exchanging VPN routes with other PE routers", which hints at the global significance.
In this post I hope to prove to anyone who is interested that route distinguishers are, in fact, both locally and globally significant and to demonstrate why this is important to understand.
Local Significance
If you've got this far then I assume you're already familiar with what route targets and route distinguishers do; if not, I suggest you read up and play in the lab a while before venturing on.
The reason for needing a route distinguisher locally within a device is to extend the ordinary IPv4 prefixes known within each VRF in order to make them unique. Any locally learned IPv4 prefix (connected, static or learned via an IPv4 routing protocol) is extended with the route distinguisher assigned to the VRF: for example, 192.168.1.0/24 in a VRF with RD 100:2439 becomes the VPNv4 prefix 100:2439:192.168.1.0/24.
It is also true that different PEs may use different route distinguishers for the same VRF without breaking anything:
PE1#show run vrf A
Building configuration...
Current configuration : 316 bytes
ip vrf A
rd 100:2439
route-target export 100:100
route-target import 100:100
!
!
interface FastEthernet1/0
ip vrf forwarding A
ip address 192.168.1.1 255.255.255.0
speed auto
duplex auto
!
router bgp 100
!
address-family ipv4 vrf A
redistribute connected
redistribute static
exit-address-family
!
end
PE1#show ip route vrf A
Routing Table: A
Gateway of last resort is not set
192.168.1.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.1.0/24 is directly connected, FastEthernet1/0
L 192.168.1.1/32 is directly connected, FastEthernet1/0
B 192.168.24.0/24 [200/0] via 10.255.255.2, 00:04:03
PE1#
PE2#show run vrf A
Building configuration...
Current configuration : 317 bytes
ip vrf A
rd 100:2458
route-target export 100:100
route-target import 100:100
!
!
interface FastEthernet1/0
ip vrf forwarding A
ip address 192.168.24.1 255.255.255.0
speed auto
duplex auto
!
router bgp 100
!
address-family ipv4 vrf A
redistribute connected
redistribute static
exit-address-family
!
end
PE2#show ip route vrf A
Routing Table: A
Gateway of last resort is not set
B 192.168.1.0/24 [200/0] via 10.255.255.1, 00:03:44
192.168.24.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.24.0/24 is directly connected, FastEthernet1/0
L 192.168.24.1/32 is directly connected, FastEthernet1/0
PE2#
So it's easy to see how the idea got started that RDs are only locally significant:
Route distinguishers don't need to match between devices in the same VRF in order for routes to be shared between them.
Global Significance
The first clue at the global significance of the route distinguisher is that it is carried in the MP-BGP updates:
PE1#show bgp vpnv4 unicast all 192.168.24.0/24
BGP routing table entry for 100:2439:192.168.24.0/24, version 8
Paths: (1 available, best #1, table A)
Not advertised to any peer
Refresh Epoch 2
Local, imported path from 100:2458:192.168.24.0/24 (global)
10.255.255.2 (metric 3) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
Originator: 10.255.255.2, Cluster list: 10.255.255.200
mpls labels in/out nolabel/28
rx pathid: 0, tx pathid: 0x0
BGP routing table entry for 100:2458:192.168.24.0/24, version 7
Paths: (1 available, best #1, no table)
Not advertised to any peer
Refresh Epoch 2
Local
10.255.255.2 (metric 3) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
Originator: 10.255.255.2, Cluster list: 10.255.255.200
mpls labels in/out nolabel/28
rx pathid: 0, tx pathid: 0x0
PE1#
Interestingly, we have two different prefixes here. One is the original (100:2458:192.168.24.0/24), which we learned over the network, while the other is the same IPv4 prefix prepended with the RD of the VRF which imports it (100:2439:192.168.24.0/24). If we imported it into multiple VRFs then we would have an additional copy for each RD used by the respective VRFs.
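As an aside, if you only want to see the copy imported into a particular VRF, the same command can be scoped to the VRF instead of using "all" (exact syntax varies slightly between releases):
PE1#show bgp vpnv4 unicast vrf A 192.168.24.0/24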
If the RD were only locally significant then why would the protocol designers send it? You may be thinking "otherwise you couldn't overlap prefixes!", but surely route targets would be enough to achieve this? If you heard a prefix of 10.0.0.0/8 announced with a route target imported by VRF A then you would import it into VRF A and not VRF B; if you heard a different announcement for 10.0.0.0/8 with a route target imported by VRF B then you would import it into VRF B and not VRF A.
That could kind of work, in theory, but it would essentially break the whole BGP paradigm, as you would have multiple copies of the same prefix in use concurrently for different purposes. BGP likes to determine a single best path per prefix and offer only that to the routing table. With a unique RD against each of the two 10.0.0.0/8 routes, they become different prefixes as far as BGP is concerned (say 100:1:10.0.0.0/8 and 100:2:10.0.0.0/8), so best path selection runs separately for each and both can be passed into their respective VRFs.
So the route distinguishers overcome that problem, but is that the only reason why they are carried in MP-BGP? That would be a fairly weak argument for global significance, but the best path point here touches on a much stronger case.
The Route Reflector Problem
One key thing which is often forgotten is that the PE is not the only place where BGP best path calculation happens. Any MPLS network of even moderate scale will use BGP route reflectors to keep the number of BGP sessions under control, and the route reflectors themselves perform a best path determination on the routes they receive before sending them on to their route reflector clients.
This extends the previous case to all the route reflector's clients, so essentially the entire AS. Let's take an example where the admin has been sloppy and has failed to keep RDs globally unique:
Notice that VRF A uses an import / export RT of 100:100 and VRF B uses an import / export RT of 100:200. The network administrator has tried to assign unique route distinguishers per VRF per device, but has made an error and overlapped the route distinguishers used on PE1's VRF A and PE3's VRF B (both 100:2439).
The two VRFs are completely distinct from one another and they are not even present on the same PEs. We can see that VRF A is only learning VRF A's routes and VRF B is only learning VRF B's routes:
PE1#show ip vrf
Name Default RD Interfaces
A 100:2439 Fa1/0
PE1#show ip route vrf A
Routing Table: A
Gateway of last resort is not set
192.168.1.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.1.0/24 is directly connected, FastEthernet1/0
L 192.168.1.1/32 is directly connected, FastEthernet1/0
B 192.168.24.0/24 [200/0] via 10.255.255.2, 00:05:01
PE1#
PE2#show ip vrf
Name Default RD Interfaces
A 100:2458 Fa1/0
PE2#show ip route vrf A
Routing Table: A
Gateway of last resort is not set
B 192.168.1.0/24 [200/0] via 10.255.255.1, 00:05:15
192.168.24.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.24.0/24 is directly connected, FastEthernet1/0
L 192.168.24.1/32 is directly connected, FastEthernet1/0
PE2#
PE3#show ip vrf
Name Default RD Interfaces
B 100:2439 Fa1/0
PE3#show ip route vrf B
Routing Table: B
Gateway of last resort is not set
192.168.3.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.3.0/24 is directly connected, FastEthernet1/0
L 192.168.3.1/32 is directly connected, FastEthernet1/0
B 192.168.19.0/24 [200/0] via 10.255.255.4, 00:01:54
PE3#
PE4#show ip vrf
Name Default RD Interfaces
B 100:2895 Fa1/0
PE4#show ip route vrf B
Routing Table: B
Gateway of last resort is not set
B 192.168.3.0/24 [200/0] via 10.255.255.3, 00:02:21
192.168.19.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.19.0/24 is directly connected, FastEthernet1/0
L 192.168.19.1/32 is directly connected, FastEthernet1/0
PE4#
Now, let's introduce an additional subnet on VRF A. It uses the same address space as VRF B but they are completely separate so that should be fine (right?!).
PE1(config)#ip route vrf A 192.168.3.0 255.255.255.0 192.168.1.10
Customer A is now happy as their new network is reachable over the VRF, but all of a sudden we have customer B on the phone, complaining that their site (which used to work) is off the air. Looking at PE4 we can see why:
PE4#show ip route vrf B
Routing Table: B
Gateway of last resort is not set
192.168.19.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.19.0/24 is directly connected, FastEthernet1/0
L 192.168.19.1/32 is directly connected, FastEthernet1/0
PE4#
The route to 192.168.3.0/24 has disappeared! Why is that? Looking on the route reflector gives us the answer:
RR#show bgp vpnv4 uni all 192.168.3.0/24
BGP routing table entry for 100:2439:192.168.3.0/24, version 14
Paths: (2 available, best #1, no table)
Advertised to update-groups:
3
Refresh Epoch 1
Local, (Received from a RR-client)
10.255.255.1 (metric 3) from 10.255.255.1 (10.255.255.1)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
mpls labels in/out nolabel/25
rx pathid: 0, tx pathid: 0x0
Refresh Epoch 1
Local, (Received from a RR-client)
10.255.255.3 (metric 3) from 10.255.255.3 (10.255.255.3)
Origin incomplete, metric 0, localpref 100, valid, internal
Extended Community: RT:100:200
mpls labels in/out nolabel/28
rx pathid: 0, tx pathid: 0
RR#
Both 192.168.3.0/24 prefixes are being advertised with the same RD but different route targets. The route reflector has, therefore, seen the two "identical" prefixes and has chosen a best path - for want of a better metric it has chosen based on lowest next hop IP (PE1, VRF A):
Since the route reflector only advertises best paths to its clients, nobody gets to hear about the route from PE3's VRF B. The route advertised from PE1 carries a route target of 100:100, which doesn't match any VRF on PE4, so PE4 simply discards it, leaving it with no way to reach the 192.168.3.0/24 network.
This proves that:
If you fail to apply globally unique route distinguishers on at least a per-VRF basis, changes in one VRF can impact another. This is true irrespective of whether any devices are common to the two VRFs, and it occurs even when their route targets are completely different.
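The fix here is simply to renumber one of the clashing RDs so they are unique again. A sketch of the change on PE3 follows, with a hypothetical replacement value; note that on classic IOS an RD generally cannot be changed in place, so the VRF may need to be removed and re-created, which is service-affecting:
PE3(config)#ip vrf B
PE3(config-vrf)#rd 100:2460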
Policy at the PE
A similar but more subtle example of where globally unique route distinguishers are a benefit is the case where you have a multi-homed network connected or routed via two PEs for resilience. We want to use the purple link to reach this prefix when it's available, for administrative reasons (say the purple link is cheaper, or faster). Routes learned over the purple link are tagged with community 100:123 to allow upstream PEs to recognise this. Let's compare the case where both PEs use the same RD vs. the case where each PE uses a unique RD for the same VRF. Firstly, the same RD:
PE1 and PE2 are set to use the same RD. PE3 wants to use purple routes so it is set up with a policy to favour anything with a 100:123 community attached, as follows:
ip vrf A
rd 100:2512
import map A-import-map
route-target export 100:100
route-target import 100:100
!
route-map A-import-map permit 10
match community purple
set local-preference 200
!
route-map A-import-map permit 20
set local-preference 100
!
ip community-list standard purple permit 100:123
For some reason, though, all our traffic goes out via the orange link. What is happening here is that the route reflector is again receiving two identical prefixes. This does not cause a reachability problem, as both prefixes reside within the same VRF, but it does mean that the route reflector makes a best path determination and discards one of the routes. PE3 only receives one route, so its policy has to take what it can get:
PE3#show bgp vpnv4 uni all 192.168.200.0/24
BGP routing table entry for 100:2439:192.168.200.0/24, version 61
Paths: (1 available, best #1, no table)
Not advertised to any peer
Refresh Epoch 4
200
10.255.255.1 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
Originator: 10.255.255.1, Cluster list: 10.255.255.200
mpls labels in/out nolabel/30
rx pathid: 0, tx pathid: 0x0
BGP routing table entry for 100:2512:192.168.200.0/24, version 68
Paths: (1 available, best #1, table A)
Not advertised to any peer
Refresh Epoch 4
200, imported path from 100:2439:192.168.200.0/24 (global)
10.255.255.1 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
Originator: 10.255.255.1, Cluster list: 10.255.255.200
mpls labels in/out nolabel/30
rx pathid: 0, tx pathid: 0x0
PE3#
Clearly this is not doing what we want. The local VRF table only has one option, and that's the orange route. Let's try the same thing but with unique RDs per VRF per PE:
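Only one thing changes from the previous run: PE2's copy of VRF A is given its own RD, matching the 100:2458 which appears in the output below (again, a sketch; as noted above, changing an RD in place may not be possible on older IOS):
PE2(config)#ip vrf A
PE2(config-vrf)#rd 100:2458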
Now we see this at PE3:
PE3#show bgp vpnv4 uni all 192.168.200.0/24
BGP routing table entry for 100:2439:192.168.200.0/24, version 61
Paths: (1 available, best #1, no table)
Not advertised to any peer
Refresh Epoch 4
200
10.255.255.1 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:100
Originator: 10.255.255.1, Cluster list: 10.255.255.200
mpls labels in/out nolabel/30
rx pathid: 0, tx pathid: 0x0
BGP routing table entry for 100:2458:192.168.200.0/24, version 62
Paths: (1 available, best #1, no table)
Not advertised to any peer
Refresh Epoch 4
200
10.255.255.2 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Community: 6553723
Extended Community: RT:100:100
Originator: 10.255.255.2, Cluster list: 10.255.255.200
mpls labels in/out nolabel/29
rx pathid: 0, tx pathid: 0x0
BGP routing table entry for 100:2512:192.168.200.0/24, version 64
Paths: (2 available, best #1, table A)
Not advertised to any peer
Refresh Epoch 4
200, imported path from 100:2458:192.168.200.0/24 (global)
10.255.255.2 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 200, valid, internal, best
Community: 6553723
Extended Community: RT:100:100
Originator: 10.255.255.2, Cluster list: 10.255.255.200
mpls labels in/out nolabel/29
rx pathid: 0, tx pathid: 0x0
Refresh Epoch 4
200, imported path from 100:2439:192.168.200.0/24 (global)
10.255.255.1 (metric 4) from 10.255.255.200 (10.255.255.200)
Origin incomplete, metric 0, localpref 100, valid, internal
Extended Community: RT:100:100
Originator: 10.255.255.1, Cluster list: 10.255.255.200
mpls labels in/out nolabel/30
rx pathid: 0, tx pathid: 0
PE3#
Now we can see that two routes are received (purple and orange) and our route map has taken effect, raising the purple route to a better local preference and causing it to be selected into the VRF A table.
In summary, if you use the same route distinguisher at more than one point where the same IP prefix is learned, the best path determination will occur at the route reflector, not the receiving PE. This best path determination is likely to be quite coarse and applying per-VRF policies on route reflectors is inappropriate. Using unique RDs ensures that multiple copies of the same IP prefix can be learned by other PEs, allowing the best path determination to be done by the receiving PE using arbitrary local policies on a per-VRF basis.
Fast Failover
The final example is one of the most widely seen use cases for unique RD per VRF per PE. Let's take a look at the failover times for a route to move between PE1 and PE2 in the following scenario:
In the case of matching RDs, only one route for the destination is learned throughout the network so when a failure occurs a series of BGP updates need to occur before traffic can switch paths. In a real environment this chain of updates may take time. In a scaled environment (and for illustrative purposes in this lab), there may be hierarchical route reflectors and these may be configured with an update delay. Here is an example failover with two-tiered route reflectors and an update delay of 10s:
CE2#ping 1.1.1.1 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.......UUUUUUUUUUUUUUUUU
UUUUU.UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU.UUUUUUUUUUUUUUUU
UUUUUUUUUUUUUUUUUUUUUUUUUUU.UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
UUUUU.UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU.!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.
Success rate is 37 percent (130/344), round-trip min/avg/max = 4/8/152 ms
CE2#
This failover takes around 30 seconds due to cascading updates being batched and delayed multiple times. The "U" marks above show that the edge PE has no route to the destination, having received the withdrawal of the primary path but not yet the advertisement of the standby. The diagram below shows the BGP updates which need to take place before routing converges onto the standby path:
Compare this to the output when unique RDs are set, meaning that the alternate path is already learned throughout the network but is simply not selected by the ingress PE:
CE2#ping 1.1.1.1 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.......!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.
Success rate is 94 percent (131/139), round-trip min/avg/max = 5/9/16 ms
CE2#
As you can see, the failover is much faster, around 10-15s, and there are no unreachables as the PE always has a route in its table (even if it is a stale one for a while).
Super Fast Failover
This can be improved further by using label per VRF mode in addition to unique RDs. Without going into too much detail, the standard mode for Cisco IOS is to generate a label per prefix. The LFIB of the generating PE will have an entry saying "if I receive label X, I will stick encap Y on it and throw it out of interface Z". This can be changed as follows:
PE2(config)#mpls label mode vrf A protocol bgp-vpnv4 per-vrf
In label per VRF mode, the same label is advertised for every prefix exported from a particular VRF - the corresponding LFIB entry essentially says "rip off the label and route the packet that follows". In this mode we don't wait for any BGP updates at all, because the egress PE where the primary link just failed can instantly use the standby route, which it already knows thanks to the unique RDs. Traffic gets U-turned back into the MPLS network while the BGP convergence occurs, but it at least arrives:
Traffic temporarily hops via the primary PE into the secondary, restoring connectivity while BGP takes its sweet time to converge. Once the routing updates have propagated, traffic will go directly to the secondary PE. Failover times here are much more impressive:
CE2#ping 1.1.1.1 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 99 percent (200/201), round-trip min/avg/max = 5/7/12 ms
CE2#
One ping lost / two seconds to fail over. Much better, and only possible with unique RDs!
Using unique RDs allows for much faster failover times, due to decreased numbers of BGP updates being required to converge following failures. This is particularly true when using label per VRF mode, since egress PEs can U-turn traffic without waiting for any BGP convergence at all.
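If you go down this route, it's easy to confirm that a single aggregate label is now advertised for the whole VRF; the BGP label table and the LFIB are the places to look (output formats vary by release):
PE2#show bgp vpnv4 unicast vrf A labels
PE2#show mpls forwarding-table vrf A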