Tuesday, February 12, 2013

BGP - Configuring mBGP (Multicast BGP)

Do you influence people, or do you manipulate them to get what you want? What exactly is the difference? In the end you get what you want either way, right? Wrong. The difference lies in loyalty: if you influence someone, you get them to want to do what you want them to do. If you coerce or manipulate, you may achieve short-term success, but in the long term you will lose their trust, and they will perceive you as someone who violated that trust rather than as a leader. A great leader influences others to achieve long-term success. Although sometimes you have no choice but to manipulate; for example, my wife wants to influence me to do the dishes. Well, that's just not going to happen: she's never going to convince me that I want to do dishes, so she settles for manipulation :). Now that we've gotten my daily tidbit out of the way, let's get into the topic at hand: mBGP (Multicast BGP), not to be confused with Multiprotocol BGP as used for MPLS VPNs, IPv6, and so on. (Strictly speaking, Multicast BGP is carried as the IPv4 multicast address family of the Multiprotocol BGP extensions, but the term usually refers to this multicast use specifically.)

Imagine you have Unicast servers and Multicast sources on the same network. Furthermore, imagine you have two paths to reach that network, but you want to keep Unicast and Multicast traffic on separate paths. How can we achieve this? There are a couple of options.

One, we can add static mroutes downstream, or we can run mBGP. Since the title of this blogtorial is configuring mBGP, we'll choose the latter option.
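
For reference, the static approach would be a single command on R2 (a minimal sketch using the addressing from the topology below; 2.2.2.1 is R1's address on the multicast-only link):

 ! Hypothetical static alternative (not used in this lab): force the RPF
 ! lookup for sources in 10.10.10.0/24 toward the multicast-only link.
 ip mroute 10.10.10.0 255.255.255.0 2.2.2.1

This overrides the RPF lookup for that prefix, but it is a manual, per-prefix fix and doesn't scale the way a routing protocol does.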

Consider this simple topology and let's get started. 
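
In brief, the setup looks like this (addressing taken from the configs below):

 R1 Fa1/0 (1.1.1.1/24) ----- unicast only ----- (1.1.1.2/24) Fa1/0 R2
 R1 Fa1/1 (2.2.2.1/24) ---- multicast only ---- (2.2.2.2/24) Fa1/1 R2
 R1 Loopback1: 10.10.10.1/24 (the source/server network and the RP)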

Complete configs can be found here.

Objective:

All Unicast traffic from R2 to 10.10.10.0/24 should go over Fast1/0.
All Multicast traffic from R2 to 10.10.10.0/24 should go over Fast1/1.

As usual, let's get started by configuring the interfaces.

R1 relevant config:

ip multicast-routing
!
ip pim rp-address 10.10.10.1 override
!
interface Loopback1
 ip address 10.10.10.1 255.255.255.0
!
interface FastEthernet1/0
 description unicast traffic only
 ip address 1.1.1.1 255.255.255.0
 duplex auto
 speed auto
!
interface FastEthernet1/1
 description multicast traffic only
 ip address 2.2.2.1 255.255.255.0
 ip pim sparse-mode
 duplex auto
 speed auto
!
router bgp 1
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 1.1.1.2 remote-as 2
 neighbor 2.2.2.2 remote-as 2
 !
 address-family ipv4
  neighbor 1.1.1.2 activate
  no auto-summary
  no synchronization
  network 10.10.10.0 mask 255.255.255.0
 exit-address-family
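
A quick note on "no bgp default ipv4-unicast": with it configured, a neighbor only exchanges routes for the address families under which it is explicitly activated, which is exactly the knob that lets us hand unicast NLRI to one peering and multicast NLRI to the other. The standard IOS commands to check each table's sessions (outputs omitted here) are:

 show ip bgp summary
 show ip bgp ipv4 multicast summary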

R2 relevant config:

ip multicast-routing
!
interface FastEthernet1/0
 description unicast traffic only
 ip address 1.1.1.2 255.255.255.0
 duplex auto
 speed auto
!
interface FastEthernet1/1
 description multicast traffic only
 ip address 2.2.2.2 255.255.255.0
 ip pim sparse-mode
 duplex auto
 speed auto
!
router bgp 2
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 1.1.1.1 remote-as 1
 !
 address-family ipv4
  neighbor 1.1.1.1 activate
  no auto-summary
  no synchronization
 exit-address-family
!
ip pim rp-address 10.10.10.1
!

Both routers statically point at 10.10.10.1 (R1's Loopback1) as the RP; the "override" keyword on R1 simply prefers this static RP over anything learned via Auto-RP. Now let's see what happens when we run an "mtrace" to the RP.

R2#mtrace 10.10.10.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 1.1.1.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
 0  1.1.1.2
-1  1.1.1.2 None No route
R2#show ip rpf 10.10.10.1
RPF information for ? (10.10.10.1) failed, no route exists
R2#ping 10.10.10.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/40/72 ms
R2#

Something isn't quite right: "No route," yet we can ping it. The ping succeeds because it follows the unicast routing table, which points out Fast1/0 toward 10.10.10.0/24, an interface with no PIM enabled. Mtrace and the RPF check, however, consult multicast routing information, of which R2 currently has none, so the RPF check fails and no multicast traffic can flow.
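
You can confirm the unicast half of this directly (output omitted; given the configuration above it should show 10.10.10.0/24 learned via BGP with next hop 1.1.1.1 out Fast1/0):

 R2#show ip route 10.10.10.0 255.255.255.0

We need to bring up mBGP between R1 and R2 and populate the multicast routing table on R2.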

R1#show run | sec bgp
router bgp 1
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 1.1.1.2 remote-as 2
 neighbor 2.2.2.2 remote-as 2
 !
 address-family ipv4
  neighbor 1.1.1.2 activate
  no auto-summary
  no synchronization
  network 10.10.10.0 mask 255.255.255.0
 exit-address-family
 !
 address-family ipv4 multicast
  neighbor 2.2.2.2 activate
  no auto-summary
  no synchronization
  network 10.10.10.0 mask 255.255.255.0
 exit-address-family

R2#show run | sec bgp
router bgp 2
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 1.1.1.1 remote-as 1
 neighbor 2.2.2.1 remote-as 1
 !
 address-family ipv4
  neighbor 1.1.1.1 activate
  no auto-summary
  no synchronization
 exit-address-family
 !
 address-family ipv4 multicast
  neighbor 2.2.2.1 activate
  no auto-summary
  no synchronization
 exit-address-family

Let's check the tables again and see how things have changed now that mBGP is configured.

R2#show ip bgp ipv4 multicast summary | beg Neigh
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
2.2.2.1         4     1      29      26        6    0    0 00:01:22        1

R2#show ip bgp ipv4 multicast
BGP table version is 6, local router ID is 2.2.2.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.10.10.0/24    2.2.2.1                  0             0 1 i

R2#show ip rpf 10.10.10.1
RPF information for ? (10.10.10.1)
  RPF interface: FastEthernet1/1
  RPF neighbor: ? (2.2.2.1)
  RPF route/mask: 10.10.10.0/24
  RPF type: mbgp
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

As you can see, the RPF check now passes, and all multicast traffic will flow through Fast1/1; note that the RPF type is reported as mbgp. To illustrate the (*,G) entry with its incoming/outgoing interfaces and RPF neighbor, I added "ip igmp static-group 224.2.1.1".
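
Per my reply in the comments below, the static join went on R2's Loopback1 (which doesn't appear in the R2 config above) and on R2's Fast1/1; roughly:

 interface Loopback1
  ip igmp static-group 224.2.1.1
 !
 interface FastEthernet1/1
  ip igmp static-group 224.2.1.1

Loopback1 presumably also runs PIM sparse-mode, since it shows up as Forward/Sparse in the outgoing interface list below.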

R2#show ip mroute 224.2.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.2.1.1), 00:10:22/00:02:49, RP 10.10.10.1, flags: SJCL
  Incoming interface: FastEthernet1/1, RPF nbr 2.2.2.1, Mbgp
  Outgoing interface list:
    Loopback1, Forward/Sparse, 00:00:11/00:02:49
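
And re-running the earlier mtrace (output not captured here) should now resolve the reverse path via 2.2.2.1 over Fast1/1, rather than failing with "No route":

 R2#mtrace 10.10.10.1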

Conclusion: 

Multicast BGP is very useful in the financial industry, where we keep Unicast and Multicast traffic separated and also keep the Multicast A and Multicast B feeds on separate paths. The next blogtorial I'm planning to write will discuss how to translate Unicast routes (NLRI) into Multicast routes (NLRI).

Many more articles to come so stay tuned.

Please subscribe/comment/+1 if you like my posts as it keeps me motivated to write more and spread the knowledge.

8 comments:

  1. Good job and a clear explanation of mcastBGP.

  2. Thank you sir. Appreciate the comment.

  3. It's really good. Thanks.

  4. Thank you .. appreciate the comments.

  5. I didn't understand the last part. Where did you configure the IGMP join? Have you created a Loopback1 interface on R2? Is a PIM neighborship a must on the FastEthernet1/1 interface on both sides? Can anyone explain with the below config?
    (source)R1------R2(RP)-----R3------R4-------R5----R6(IGMP-client)
    R3 and R4 have MBGP configured .. still I am not very clear about this concept .. please help me.

    Replies
    1. Sorry, it's been a while, but I added it to Loopback1 on R2 .. and to R2 Fast1/1.
  6. On R1:

     interface Loopback1
      ip address 10.10.10.1 255.255.255.0
      ip pim sparse-mode

     otherwise it won't be known to R2's BGP multicast table.