
MPLS LDP IGP SYNC

MPLS LDP IGP SYNC - Concept


MPLS LDP IGP Sync is a feature used to avoid traffic-blackhole scenarios in MPLS VPN networks. A traffic blackhole can occur when the IGP is up and running fine on a backbone link but LDP is down, for example due to a misconfiguration or a hardware/software bug in the network.

To avoid such network outages, we enable the MPLS LDP IGP SYNC feature on all routers in the MPLS VPN backbone network.

MPLS LDP IGP SYNC - Syntax

Router(config-router)#mpls ldp sync
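The command is entered under the IGP routing process, not globally. A minimal sketch, assuming OSPF process 1 is the backbone IGP (the process number is illustrative; IS-IS supports the same command under its router process):

Router(config)#router ospf 1
Router(config-router)#mpls ldp sync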

Considering the diagram below, the following scenarios will help in understanding the concept:

In this topology, R2-R3-R4 is primary IGP path and R2-R6-R4 is the backup IGP path.

[Topology diagram: MPLS LDP IGP SYNC]

MPLS LDP IGP SYNC - Scenario1

When the “MPLS LDP IGP SYNC” feature is not enabled/configured in the backbone network.


The output below shows that MPLS LDP IGP Sync is not enabled.

R3#sh mpls ldp igp sync
FastEthernet0/0:
LDP configured; LDP-IGP Synchronization not enabled.
FastEthernet0/1:
LDP configured; LDP-IGP Synchronization not enabled.

If the “mpls ldp igp sync” feature is not configured in the backbone network and we disable LDP on one of the links on the primary path (R2-R3 or R3-R4), a ping from CE R1 to R5 fails. The IGP still prefers the primary path, so traffic is forwarded to R3 (which lies on the best path) and dropped there, because no label binding exists across the broken link.

R1#ping 9.9.0.5 source 9.9.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 9.9.0.5, timeout is 2 seconds:
Packet sent with a source address of 9.9.0.1
.....
Success rate is 0 percent (0/5)


 

MPLS LDP IGP SYNC - Scenario2

When the “MPLS LDP IGP SYNC” feature is enabled/configured (without a holddown time) in the backbone network.


R3(config-router)#mpls ldp sync

With the “mpls ldp sync” feature enabled, you might expect R1-to-R5 connectivity to be restored, since the R3-R4 path should no longer be used for backbone traffic. But that is not the case: R1-to-R5 connectivity remains down. Let’s see why.

R3#show mpls ldp igp sync       (output is shown only for Fa0/1, interface connected to R4)
FastEthernet0/1:
LDP configured; LDP-IGP Synchronization enabled.
Sync status: sync achieved; peer reachable.
Sync delay time: 0 seconds (0 seconds left)
IGP holddown time: infinite.
Peer LDP Ident: 9.9.0.4:0
IGP enabled: 1

R3#show ip ospf mpls ldp interface FastEthernet0/1
FastEthernet0/1
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is not configured
Interface is up

Let’s disable LDP on R4’s f0/0 (the R3-R4 link) and then check the output for Fa0/1 on R3.
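One way to disable LDP on the link, assuming it was enabled per-interface with “mpls ip” (a sketch; if LDP was enabled via autoconfig, remove it the same way it was applied):

R4(config)#interface FastEthernet0/0
R4(config-if)#no mpls ip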

R3#show mpls ldp igp sync
FastEthernet0/1:
LDP configured; LDP-IGP Synchronization enabled.
Sync status: sync not achieved; peer reachable.
Sync delay time: 0 seconds (0 seconds left)
IGP holddown time: infinite.
IGP enabled: 1

R3#show ip ospf mpls ldp interface f0/1
FastEthernet0/1
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is not configured
Interface is up and sending maximum metric >>>>> R3 starts advertising the maximum metric into the network.


Output from R4’s interface

R4#show ip ospf mpls ldp interface f0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Not required                >>>>>>> because LDP is disabled.
Holddown timer is disabled
Interface is up

R4#show run int f0/0
interface FastEthernet0/0   >>>>>>> no MPLS/LDP config present.
ip address 9.9.34.4 255.255.255.0
ip ospf 1 area 0
speed 100
duplex full
end

The ping from R1 to R5 still fails even though R3 is advertising the maximum metric. This is because “mpls ip” is disabled on R4, so from OSPF’s point of view “mpls ldp igp sync” is no longer required on that interface, and R4 keeps advertising the normal metric toward R3.
IMPORTANT NOTE: This shows that simply removing LDP is not the right way to test the MPLS LDP IGP SYNC feature.

Solutions:
  • Block the LDP neighborship between R3 and R4 by using an ACL.
  • Block the LDP neighborship by configuring an LDP password on only one side.

 

MPLS LDP IGP SYNC - Scenario3

Block the LDP neighborship between R3 and R4 by using an ACL (Solution 1). LDP uses UDP port 646 for neighbor discovery (hellos) and TCP port 646 for the session, so the ACL blocks both.


ip access-list extended block_ldp
deny udp any eq 646 any log
deny tcp any any eq 646 log
deny tcp any eq 646 any log
deny udp any any eq 646 log
permit ip any any

Apply it inbound on R4’s F0/0 to prevent R4 from forming an LDP neighborship with R3.

R4(config)#int fa0/0
R4(config-if)#ip access-group block_ldp in
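To confirm that the LDP session toward R3 has gone down, the standard LDP verification commands can be used (exact output varies by IOS version):

R4#show mpls ldp discovery
R4#show mpls ldp neighbor 9.9.0.3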

Now R4 shows the correct output, and R1-to-R5 connectivity is restored because the R3-R4 link is no longer used for traffic.

R4#show ip ospf mpls ldp interface f0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is not configured
Interface is up and sending maximum metric


Now the ping works as well. The traceroute shows that traffic skips the R3-R4 link (since sync is not achieved on that path) and instead takes the backup path R2-R6-R4.

R1#ping 9.9.0.5 source 9.9.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 9.9.0.5, timeout is 2 seconds:
Packet sent with a source address of 9.9.0.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 196/306/416 ms

R1#traceroute 9.9.0.5 source 9.9.0.1
Tracing the route to 9.9.0.5
1 9.9.12.2 64 msec 128 msec 128 msec
2 9.9.26.6 [MPLS: Labels 18/21 Exp 0] 256 msec 320 msec 192 msec
3 9.9.45.4 [MPLS: Label 21 Exp 0] 384 msec 192 msec 192 msec
4 9.9.45.5 328 msec 324 msec 256 msec


 

MPLS LDP IGP SYNC - Scenario4

Block the LDP neighborship by configuring an LDP password on only one side (Solution 2). With the password configured on R4 but not on R3, TCP MD5 authentication fails and the session cannot establish.


R4(config)#mpls ldp neighbor 9.9.0.3 password cisco
*Aug 4 21:07:52.311: %LDP-5-NBRCHG: LDP Neighbor 9.9.0.3:0 (1) is DOWN
*Aug 4 21:08:04.355: %TCP-6-BADAUTH: No MD5 digest from 9.9.0.3(646) to 9.9.0.4(41243) tableid
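To restore the session later, the matching password would be configured on the other side as well (a sketch; “cisco” is the example password used above):

R3(config)#mpls ldp neighbor 9.9.0.4 password cisco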

R4#show ip ospf mpls ldp interface f0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is not configured
Interface is up and sending maximum metric

R1#traceroute 9.9.0.5 source 9.9.0.1
1 9.9.12.2 140 msec 128 msec 132 msec
2 9.9.26.6 [MPLS: Labels 18/21 Exp 0] 320 msec 320 msec 324 msec
3 9.9.45.4 [MPLS: Label 21 Exp 0] 192 msec 192 msec 192 msec
4 9.9.45.5 328 msec 192 msec 260 msec

 

MPLS LDP IGP SYNC - Scenario5

When a holddown timer is configured along with the “mpls ldp igp sync” feature.

 

Significance of configuring holddown timer with MPLS LDP IGP Sync:
By default, if MPLS LDP IGP sync is not achieved, the IGP waits indefinitely to bring up the adjacency. You can change this with the global command “mpls ldp igp sync holddown <msecs>”, which instructs the IGP to wait only for the configured time. After the synchronization holddown timer expires, the IGP forms an adjacency across the link. As long as the IGP adjacency is up while the LDP session is not synchronized, the IGP advertises the link with the maximum metric.

If sync is not achieved and no holddown time is configured, the IGP waits indefinitely to bring up the IGP adjacency.

Outputs from router R3 (Fa0/0, connected to R2) when the holddown time is not configured:

R3#show mpls ldp igp sync
FastEthernet0/0:
LDP configured; LDP-IGP Synchronization enabled.
Sync status: sync achieved; peer reachable.
Sync delay time: 0 seconds (0 seconds left)
IGP holddown time: infinite.
Peer LDP Ident: 9.9.0.2:0
IGP enabled: 1

R3#sh ip ospf mpls ldp interface f0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is not configured
Interface is up and sending maximum metric >>>>> R3 advertises the high metric into the network.

Command to configure the holddown timer:
Router(config)#mpls ldp igp sync holddown 30000   >>>>>>>>> 30000 msecs = 30 seconds

After the above configuration, the IGP does not come up immediately and start advertising a high metric; it waits for the holddown timer to expire. Once the holddown timer expires, the IGP adjacency comes up and the router advertises the maximum metric for the link on which sync was not achieved.

To simulate this, we configured a holddown timer of 30 seconds for the “mpls ldp sync” feature on all backbone routers. We also created an LDP password mismatch on the R2-R3 session (so the LDP session stays down) and shut the R2-R3 link on the R3 side.

R2(config)#mpls ldp igp sync holddown 30000
R2(config)#no mpls ldp neighbor 9.9.0.3 password cisco
R3(config)#int f0/0
R3(config-if)#shut

R3(config-if)#do sh ip ospf mpls ldp int fa0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is configured : 30000 msecs
Holddown timer is not running
Interface is down

Now un-shut the R3 side interface and check the sync status:

R3(config-if)#no shut

*Aug 29 09:28:55.807: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
*Aug 29 09:28:56.807: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up

R3(config-if)#do sh ip ospf mpls ldp int fa0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP-IGP Synchronization : Required
Holddown timer is configured : 30000 msecs
Holddown timer is running and is expiring in 29136 msecs
Interface is down and pending LDP

After the holddown timer expires (30 seconds), the OSPF neighborship between R2 and R3 comes up, and R3 starts advertising the maximum metric for that link.

*Aug 29 09:29:31.991: %OSPF-5-ADJCHG: Process 1, Nbr 9.9.0.2 on FastEthernet0/0 from LOADING to FULL, Loading Done

R3(config-if)#do sh ip ospf mpls ldp int fa0/0
FastEthernet0/0
Process ID 1, Area 0
LDP is not configured through LDP autoconfig
LDP IGP Synchronization : Required
Holddown timer is configured : 30000 msecs
Holddown timer is not running
Interface is up and sending maximum metric