Monday, April 9, 2012

Cisco Linksys routers bring the cloud home



Cisco unveils Wi-Fi routers for cloud-based home networking apps

Network World, April 05, 2012 04:16 PM ET
Cisco has rolled out a new line of home Wi-Fi routers that embed application intelligence for managing devices connected to the home network.
The "app-enabled" Linksys routers are intended to allow users to manage home networks from smartphones, tablets or Web browsers. Users can give guests Internet access, set parental controls and add devices to their home network using this password-protected software, Cisco says.
The new app-enabled Linksys routers, three in all, support Cisco Connect Cloud software for home network management. Connect Cloud is based on the same Cisco Connect software used to configure the Linksys routers at initial setup.
The routers are called the EA4500, EA3500 and EA2700. The EA4500 is designed for large, multi-room homes and home theaters. It offers performance up to 900Mbps, HD video streaming to tablets, TVs and mobile devices, four Gigabit Ethernet ports, a USB port, IPv6 support, and the ability to connect multiple devices such as storage and printers.
The EA3500 has most of the same features, but with performance of 750Mbps and streaming of HD video, music and multiplayer gaming. The EA2700 drops the USB port, offers performance of 600Mbps, and allows non-HD streaming of multimedia and gaming.
The EA4500 costs $199, the EA3500 $139 and the EA2700 $99. All are available now. The Cisco Connect Cloud software will be available in June at no additional charge.
Cisco is also lining up partners to develop applications for home networks on Cisco Connect Cloud. It is currently working with consumer appliance giant Whirlpool on applications that let people control their appliances while away from home.

(source: Networkworld.com)

Wednesday, April 4, 2012

Virtual Private LAN Service (VPLS) (Part 4)

Routed VPLS

This article demonstrates Alcatel-Lucent's proprietary Routed VPLS (R-VPLS) solution. R-VPLS allows a VPLS service to be bound to a Layer 3 IP interface, making the interface routable. Figure 1 is used throughout to demonstrate the R-VPLS configuration.



Figure 1 shows two servers (an Application Server and a Database Server), each with two direct connections to two ALU 7450 switches; the 7450s run MC-LAG between them, and the servers run LAG. The two connections are separate, one for the BLUE VPLS and one for the RED VPLS. RT1 and RT2 are connected to each other by LAG-1 (ports 1/2/19 and 1/2/20), as are RT3 and RT4 (also ports 1/2/19 and 1/2/20).

In this case, the servers ping geographically remote standby servers to monitor availability. If the standby server cannot reach the active server, the standby server tries to take on active status by connecting to the primary database.

R-VPLS is configured on the PE routers, which allows MC-LAG to be configured on the SAPs facing the servers. MC-LAG and VRRP are configured to protect against link and PE router failure.

The configuration steps are as follows-

1. Configure MC-LAG on PE routers.

MC-LAG configuration on RT1

A:RT1# configure lag 101
A:RT1>config>lag# info
            mode access
            port 1/2/10
            lacp active administrative-key 32768
            no shutdown
            exit all
A:RT1# configure lag 102
A:RT1>config>lag# info
            mode access
            port 1/2/11
            lacp active administrative-key 32768
            no shutdown
            exit all

A:RT1# configure redundancy multi-chassis
A:RT1>config>redundancy>multi-chassis# info
            peer 10.0.0.2
            mc-lag
                lag 101 lacp-key 10 system-id 00:00:00:00:00:01 system-priority 32768
                lag 102 lacp-key 10 system-id 00:00:00:00:00:02 system-priority 32768
                no shutdown
                exit
            sync
                igmp
                igmp-snooping
                no shutdown
                exit all        

A similar MC-LAG configuration applies to the RT2 router. Note that the {lacp-key, system-id, system-priority} triplet must match on both PE routers for MC-LAG to function correctly.

MC-LAG configuration on RT2

A:RT2# configure lag 101
A:RT2>config>lag# info
            mode access
            port 1/2/10
            lacp active administrative-key 32768
            no shutdown
            exit all
A:RT2# configure lag 102
A:RT2>config>lag# info
            mode access
            port 1/2/11
            lacp active administrative-key 32768
            no shutdown
            exit all

A:RT2# configure redundancy multi-chassis
A:RT2>config>redundancy>multi-chassis# info
            peer 10.0.0.1
            mc-lag
                lag 101 lacp-key 10 system-id 00:00:00:00:00:01 system-priority 32768
                lag 102 lacp-key 10 system-id 00:00:00:00:00:02 system-priority 32768
                no shutdown
                exit
            sync
                igmp
                igmp-snooping
                no shutdown
                exit all        

The RT3 and RT4 configurations are similar to those of RT1 and RT2, and are omitted for brevity.
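
At this point the MC-LAG state can be verified. The commands below are a minimal sketch using standard TiMOS show commands (output, which varies by release, is omitted):

A:RT1# show redundancy multi-chassis mc-lag peer 10.0.0.2
A:RT1# show lag 101 detail

The first command shows the MC-LAG peer and the admin/oper state of lags 101 and 102; the second shows the LACP key, system-id and per-port states, where one PE should typically report its links active and the peer standby.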

2. Configure SDPs on the PE routers. These SDPs are configured only between directly connected PE routers, i.e. between RT1 and RT2, and between RT3 and RT4. The LSP created carries the R-VPLS service between RT1 and RT2, which allows MC-LAG and VRRP traffic to flow between them.

SDP configuration on RT1

A:RT1# configure router mpls
A:RT1>config>router>mpls# info
            path PATH_to_RT2            # Loose path to RT2
                no shutdown
                exit
            lsp LSP_to_RT2
                to 10.0.0.2
                cspf
                primary PATH_to_RT2
                fast-reroute facility
                exit
            no shutdown
            exit all

A:RT1# configure service sdp 1 mpls create
A:RT1>config>service>sdp# info
            far-end 10.0.0.2
            description "SDP to RT2"
            lsp LSP_to_RT2
            signaling tldp
            no shutdown
            exit all      

A similar configuration follows on the RT2 router for the SDP toward RT1.

SDP configuration on RT2

A:RT2# configure router mpls
A:RT2>config>router>mpls# info
            path PATH_to_RT1
                no shutdown
                exit
            lsp LSP_to_RT1
                to 10.0.0.1
                cspf
                primary PATH_to_RT1
                fast-reroute facility
                exit
            no shutdown
            exit all

A:RT2# configure service sdp 1 mpls create
A:RT2>config>service>sdp# info
            far-end 10.0.0.1
            description "SDP to RT1"
            lsp LSP_to_RT1
            signaling tldp
            no shutdown
            exit all       

The RT3 and RT4 configurations are similar to those of RT1 and RT2, and are omitted for brevity.
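
Before layering the services on top, the LSP and SDP state can be checked with standard show commands, for example:

A:RT1# show router mpls lsp
A:RT1# show service sdp 1 detail

LSP_to_RT2 should be administratively and operationally up, and SDP 1 should show Adm/Opr Up with TLDP signaling toward far-end 10.0.0.2.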

3. Configure R-VPLS on PE routers.

R-VPLS configuration on RT1

A:RT1# configure service vpls 50
A:RT1>config>service>vpls# info
            description "BLUE VPLS"
            allow-ip-int-binding         # Allow routed VPLS functionality      
            service-name "BLUE_VPLS"
            stp
                shutdown
            sap lag-101 create
            exit
            mesh-sdp 1 create
            exit
            no shutdown
            exit all
A:RT1# configure service vpls 51
A:RT1>config>service>vpls# info
            description "RED VPLS"
            allow-ip-int-binding
            service-name "RED_VPLS"
            stp
                shutdown
            sap lag-102 create
            exit
            mesh-sdp 1 create
            exit
            no shutdown
            exit all

A similar R-VPLS configuration is applied on the RT2 router.

R-VPLS configuration on RT2

A:RT2# configure service vpls 50
A:RT2>config>service>vpls# info
            description "BLUE VPLS"
            allow-ip-int-binding           
            service-name "BLUE_VPLS"
            stp
                shutdown
            sap lag-101 create
            exit
            mesh-sdp 1 create
            exit
            no shutdown
            exit all
A:RT2# configure service vpls 51
A:RT2>config>service>vpls# info
            description "RED VPLS"
            allow-ip-int-binding
            service-name "RED_VPLS"
            stp
                shutdown
            sap lag-102 create
            exit
            mesh-sdp 1 create
            exit
            no shutdown
            exit all

This R-VPLS instance (identified by its service name) can now be bound to an interface in a VPRN or IES service using the command vpls "BLUE_VPLS" or vpls "RED_VPLS", which makes the interface routable.
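As an illustration, the following is a minimal sketch of binding the BLUE R-VPLS to an IES interface with VRRP on RT1; the service ID, interface name, IP addresses and VRRP priority are hypothetical, and RT2 would mirror this with its own address and a lower priority:

A:RT1# configure service ies 100 customer 1 create
A:RT1>config>service>ies# info
            interface "BLUE_INT" create
                address 192.168.50.2/24        # RT1's own address in the BLUE subnet
                vpls "BLUE_VPLS"               # bind the routed VPLS to this interface
                vrrp 1
                    backup 192.168.50.1        # virtual IP shared by RT1 and RT2
                    priority 250               # higher priority makes RT1 the VRRP master
                exit
            exit
            no shutdown
            exit all

The servers then use the VRRP virtual IP (192.168.50.1 here) as their default gateway, so a PE failure is handled by VRRP while a link failure is handled by MC-LAG.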
(source: https://sites.google.com/site/amitsciscozone/home/vpls/routed-vpls)

Virtual Private LAN Service (VPLS) (Part 3)

IEEE 802.1ak - Multiple Registration Protocol
In PBB-VPLS N:1 mode, when an I-VPLS instance receives broadcast/unknown/multicast (BUM) traffic, the traffic is forwarded to all other IB-PE routers because multiple I-VPLS instances share a common B-VPLS instance. The traffic reaches remote IB-PE routers even if they do not have that I-VPLS instance locally bound to the B-VPLS, which wastes bandwidth, since those routers simply drop it.
In order to solve this problem, the Multiple Registration Protocol (MRP) can be used to build a distribution tree for each I-VPLS service (per I-SID), so that BUM traffic from a given I-VPLS is flooded only to the remote IB-PE routers that are members of that tree. MRP is a distributed protocol that allows attribute registration and declaration by multiple participants in a bridged LAN environment. MRP has two applications - MMRP or Multiple MAC Registration Protocol, and MVRP or Multiple VLAN Registration Protocol. In PBB-VPLS, MMRP is used to optimize traffic flooding in the B-domain. In MMRP, the attribute handled by MRP is a MAC address; in the case of PBB-VPLS, it is the group B-MAC address.
MMRP maintains a filtering database that contains the group B-MAC addresses of interested members, and traffic is filtered against this database. In order to maintain its registration state and keep receiving traffic, a bridge port must periodically declare its interest. In PBB-VPLS, every IB-PE router participating in a B-VPLS maintains a filtering database and floods traffic with a group B-MAC destination address only to the listed interested members.
MMRP Terminology
  • Participant: The node that is participating in MMRP. In PBB-VPLS solution, each IB-PE router in B-domain is a participant.
  • Attribute: The information MRP tracks. In MMRP, the attribute is MAC address; more specifically, in PBB-VPLS network, the attribute is group B-MAC address.
  • Applicant: The element that announces its interest, ensuring that the declaration of the participant is registered by the registrars of other participants. In PBB-VPLS, every IB-PE router has one applicant for each I-VPLS instance. The applicant announces the group B-MAC address and triggers Multiple Registration PDU (MRPDU) propagation.
  • Registrar: The element that records the declaration from the applicant made by other participants on the LAN. The registrar receives only MRPDUs. In PBB-VPLS, each IB-PE router in B-VPLS has one registrar for each I-VPLS instance. The registrar listens to MRPDUs and registers the applicants.
  • Declaration: An announcement made by an applicant of its interest in a MAC address by triggering an MRPDU. The declaration of a local participant triggers MRPDU propagation and causes corresponding registrars of other participants to register the attribute. In PBB-VPLS, an IB-PE router declares a group B-MAC address in B-VPLS if it has a local I-VPLS instance associated with that B-VPLS, or if it has received a registration from another IB-PE router.
  • Registration: For an IB-PE router, receiving a registration of any group B-MAC address means that there are remote IB-PE routers participating in an I-VPLS.
  • Propagation: When an applicant in one participant declares a membership, it triggers the router to flood MRPDUs to all other participants. The propagation always follows the active topology of STP.
MMRP for PBB-VPLS
In MMRP for PBB-VPLS, the attribute registered is the group B-MAC address. The group B-MAC address has the form 01:1E:83:pq:rs:tu, where the OUI 01:1E:83 is reserved for group B-MAC addresses and pq:rs:tu is the hex value of the I-VPLS's I-SID. Since the I-SID is globally unique, the resulting group B-MAC address is also unique per I-VPLS. For example, I-SID 100 is 0x000064 in hex, giving the group B-MAC address 01:1E:83:00:00:64. When a new I-VPLS instance (say, with I-SID 100) is associated with the common B-VPLS instance in an IB-PE router, the MMRP applicant in that I-VPLS declares the group B-MAC address 01:1E:83:00:00:64 and triggers MRPDU propagation throughout the B-VPLS to register this group B-MAC address with the remote IB-PE routers' registrars.

The following are the rules for MMRP information exchange-
  1. In an IB-PE router, the association of an I-VPLS instance to a B-VPLS instance will trigger a declaration of group B-MAC address, putting the port (PW) in Declared (D) state, and flooding the information to the entire B-domain.
  2. In an IB-PE router, the declaration of group B-MAC address triggers MRPDU generation from that port to the remote IB-PE routers.
  3. In an IB-PE router, receiving an MRPDU puts the port in the Registered (R) state.
  4. In an IB-PE router, a registration of a group B-MAC address on one bridge port causes the router to put all other bridge ports (in the same bridge group) into the D state for the same group B-MAC address and triggers MRPDU propagation (obeying the split horizon rule); i.e. registration on one bridge port triggers generation and flooding of the MRPDU on all other bridge ports.
  5. If STP is used, the MRPDU propagation follows the active topology of STP.
  6. Only a port that is in both the D and R states is added to the flooding tree in the B-VPLS.
The following combinations of the D and R states on a port are possible-
  • D: No, R: No - The local I-VPLS is not up, and no remote IB-PE router has registered the same I-VPLS, so the group B-MAC address is not added to the filtering database.
  • D: No, R: Yes - One or more remote IB-PE routers are members of the corresponding I-VPLS instance, but there is no local I-VPLS instance, so there is no need to join the group B-MAC flooding tree for this I-VPLS.
  • D: Yes, R: No - The IB-PE router has a local instance of the I-VPLS, but there are no remote IB-PE members for it, so this entry is not added to the group B-MAC flooding tree.
  • D: Yes, R: Yes - There is a local instance of the I-VPLS as well as remote peering; the BUM traffic needs to be flooded to the remote IB-PE router(s), so this entry is added to the group B-MAC flooding tree.
Building The Flooding Tree In PBB-VPLS With MMRP
In Figure 2, when I-VPLS 100 is associated with the B-VPLS in the IB-PE1 router, an MRPDU containing the group B-MAC address attribute (say, 01:1E:83:00:00:64, assuming I-SID 100) is flooded to the entire B-domain, and the announcing port (pseudowire) is put in the D state. The remote IB-PE3 router receives this MRPDU, registers the group B-MAC address in its filtering database, and puts the receiving port (pseudowire) in the R state. Note that the IB-PE2 router also receives a copy of the MRPDU, but since it does not have an I-VPLS 100 instance locally bound to the common B-VPLS, it drops/ignores the MRPDU.
Since the IB-PE3 router also has an I-VPLS 100 instance locally bound to the B-VPLS, it too propagates an MRPDU containing the same group B-MAC address (01:1E:83:00:00:64; the address is derived from the I-SID, so it is identical on every IB-PE) to the entire B-domain, and puts its announcing port in the D state. Similarly, the IB-PE1 router receives this MRPDU, registers the group B-MAC address in its filtering database, and puts the receiving port (pseudowire) in the R state. Again, the IB-PE2 router drops the MRPDU.
The propagation of MRPDUs obeys the rules mentioned above. After both declaration processes are completed, the ports on the IB-PE routers in the B-domain are in the D state, the R state, or both. Only ports that are in both the D and R states are added to the group B-MAC address flooding tree.
The MRP database stores the port, the group B-MAC address, and the registration/declaration status. When an IB-PE router receives an MRPDU, it creates a registration entry in the database; this by itself does not mean traffic is forwarded. If the number of entries exceeds the size of the database, no further registrations are accepted and, consequently, the corresponding traffic is not forwarded.
Using the flooding tree, the IB-PE router floods the BUM traffic from the local I-VPLS instance to the pseudowires listed in the tree, accepts the BUM traffic from these pseudowires, and then forwards the traffic to the local I-VPLS instance.
Summary
To avoid unnecessary bandwidth consumption by BUM traffic in N:1 mode, MMRP can be implemented on the IB-PE routers. It works much like IGMP: the IB-PE routers register the group B-MAC addresses of their I-VPLS instances with remote IB-PE routers, so that BUM traffic is flooded only to the interested members that have the I-VPLS instance locally bound to the common B-VPLS.
(source: https://sites.google.com/site/amitsciscozone/home/vpls/ieee-802-1ak---multiple-registration-protocol)

Virtual Private LAN Service (VPLS) (Part 2)

PBB-VPLS
Introduction
VPLS connects multiple geographically separated customer sites by emulating a bridge domain. All customer sites connected to the same VPLS instance appear to be in the same LAN segment. However, VPLS requires that all PE routers be fully meshed with pseudowires. Hierarchical VPLS is often implemented in service provider networks to improve scalability, as it removes the full-mesh pseudowire requirement, thereby reducing the number of Label Switched Paths (LSPs) and pseudowires (PWs) each PE router must maintain.
However, PE routers in VPLS and H-VPLS make traffic forwarding decisions based on Customer MAC (C-MAC) addresses. This raises a serious concern of MAC explosion in PE routers, given the large number of customers connected to service provider networks. To solve this problem, PBB is integrated into the VPLS solution.
In PBB, the end-to-end solution is built around two components: the backbone component (B-component) operating in the backbone domain (B-domain), and the customer component (I-component) operating in the customer addressing space. For PBB-VPLS, the I-component is represented by the customer-facing VPLS instance (I-VPLS). An I-VPLS service instance still performs learning and forwarding based on C-MAC addresses, just like regular VPLS instances. An I-VPLS can also send traffic to its associated B-VPLS (Backbone VPLS instance) across the B-domain, if necessary. When the I-VPLS service instance sends traffic to the B-VPLS service instance, the customer MAC addressing is encapsulated by another pair of MAC addresses - the backbone source and destination MAC (B-MAC) addresses. Traffic in the B-domain uses B-MAC addresses for forwarding decisions; the C-MAC addresses are hidden from the B-VPLS service instances.
In a hub-spoke H-VPLS solution, the PE router facing the customer at the edge is the u-PE, and the aggregating PE router facing the network at the core is the n-PE. Several u-PEs are connected to a single n-PE by spoke-PWs, and the n-PEs are connected to other n-PEs in a full mesh using hub-PWs. The n-PE aggregates and facilitates the forwarding of traffic between u-PEs, and between u-PEs and other n-PEs. H-VPLS reduces the number of PWs, LSPs and targeted-LDP (T-LDP) sessions required in the network, but it does not reduce the number of C-MACs a PE router needs to learn: traffic forwarding in H-VPLS is still based on C-MAC address learning. Since the n-PE sits deep in the service provider network, it needs to learn more MAC addresses than the u-PE. This problem is called MAC explosion.
In PBB, the Backbone Edge Bridge (BEB) adds another Ethernet encapsulation to customer Ethernet frames, called the backbone header. The backbone header contains B-MAC addresses, which are meaningful to BEBs and Backbone Core Bridges (BCBs) in the PBB network (PBBN). The bridges in the PBBN use B-MAC addresses to make forwarding decisions. The backbone VLAN ID (B-VLAN ID) is used to define the backbone broadcast domain. The BEB also adds a 4-byte I-TAG in front of the customer MAC addressing. It contains a 24-bit Service Instance Identifier (I-SID), which allows each BEB to support multiple backbone service instances. Figure 1 shows the PBB frame format.
Thus, the C-MAC addresses are learned only by the customer-facing ports of the BEB. The BEB maps each C-MAC to the appropriate destination B-MAC and then forwards the frame into the PBBN core. The nodes in the PBBN are aware of the destination B-MAC address and perform learning and forwarding based on this address only. The PBBN runs Spanning Tree Protocol (STP) in the core to prevent loops.
The BEB has 2 components-
  • B-Component: It faces the PBB Backbone. The B-Component learns and forwards the PBB-encapsulated frames to other BEBs.
  • I-Component: It faces the 802.1ad (Q-in-Q) network (or customer network). The I-Component adds the PBB header to the incoming customer traffic and maintains the mapping of C-MAC addresses and C-VLAN IDs to the B-MAC addresses and B-VLAN IDs.
PBB-VPLS Architecture
PBB provides exactly what is required to solve the MAC explosion issue in VPLS - by adding a pair of source/destination B-MAC addresses to every Ethernet frame. The backbone switches make forwarding decisions based on these B-MAC addresses and hence only need to be aware of these B-MAC addresses, and not C-MAC addresses. When PBB is integrated into VPLS, the end-to-end solution is divided into 2 parts: the backbone domain (B-domain) and one or multiple edge domains (I-domains).
NOTE: Please note that this article is based on MPLS in the access, i.e. the I-domain. However, the draft http://tools.ietf.org/html/draft-sajassi-l2vpn-vpls-pbb-interop-04 also discusses PBBN access. With MPLS access, the PBB functionality is embedded either in the u-PE or the n-PE. The u-PEs connect to n-PEs using spoke-PWs, while n-PEs connect to other n-PEs using hub-PWs.
In figure 2, the B-domain contains PE routers responsible for forwarding PBB-encapsulated customer traffic across the backbone using B-MAC addresses. These routers are configured with a new type of VPLS, the Backbone VPLS (B-VPLS), instead of regular VPLS. The I-domain contains PE routers that participate in the customer-facing VPLS (I-VPLS, or Interface VPLS) and perform learning and forwarding based on C-MAC addresses. The PE routers that contain both a B-VPLS and an I-VPLS (IB-PE routers) draw the boundary between the B-domain and the I-domain. The B-VPLS is connected by pseudowires to the other PE routers in the B-domain; the I-VPLS is connected to the PE routers in the I-domain that run regular VPLS.
The I-VPLS and B-VPLS communicate with each other in an IB-PE router over an internal link. When the I-VPLS receives regular VPLS traffic from the I-domain, it performs PBB encapsulation and forwards the traffic to the B-VPLS towards the B-domain. When the B-VPLS receives PBB-encapsulated traffic from the B-domain, it forwards the traffic to the correct I-VPLS, which performs PBB decapsulation and forwards the traffic to the correct destination based on the customer information. Each I-VPLS can have its own B-VPLS connected to the B-domain (1:1 mode), or multiple I-VPLS instances can share a common B-VPLS instance (N:1 mode).
I-VPLS
The I-VPLS instance in an IB-PE router is responsible for connecting the regular VPLS from the I-domain to B-VPLS in B-domain. The I-VPLS is also a Virtual Bridge that has bridge ports and forwarding database. The I-VPLS has 3 types of bridge ports:
  • An internal link to the associated B-VPLS. The internal link is created when the association between the I-VPLS and B-VPLS is manually configured in the I-VPLS on the IB-PE router. The I-VPLS performs PBB encapsulation on traffic sent over the internal link to the B-VPLS, and PBB decapsulation on traffic received from the B-VPLS over the internal link.
  • The spoke PW(s) connected to the regular VPLS in the I-domain. This spoke PW sends and receives VPLS encapsulated traffic with regular VPLS PE routers in the I-domain.
  • The interface facing the local customer site. The I-VPLS can have interfaces with attachment circuits to local customer sites.
One B-VPLS instance can be associated with multiple I-VPLS instances. By incorporating PBB function, the IB-PE router maps each service (for a given customer) onto a single I-SID based on the configuration at IB-PE router. Many I-SIDs can be multiplexed within a single bridge domain (e.g. B-VLAN). Then, the IB-PE can either map a single I-SID into a B-VPLS instance, or it can map a bridge domain (B-VLAN) onto a B-VPLS instance, as per configuration. Next, the encapsulated frames are sent over the hub-PWs associated with that B-VPLS instance.
When the B-VPLS receives PBB-encapsulated traffic from remote B-VPLS PE routers, it uses the I-SID to identify to which I-VPLS instance the traffic belongs: the I-SID is the I-VPLS service de-multiplexer. Each I-VPLS instance has its own configurable I-SID value.
B-VPLS
The B-VPLS instance in an IB-PE router is responsible for connecting other member PE routers in the backbone network. The B-VPLS service performs learning and forwarding based on B-MAC addresses, and it is not aware of C-MAC addresses. The B-VPLS is also a Virtual Bridge that has bridge ports and forwarding database. The B-VPLS has 3 types of bridge ports:
  • An internal link to the associated I-VPLS. The B-VPLS sends and receives traffic from the I-VPLS over the internal link. One B-VPLS may be associated with more than one I-VPLS instance; the B-VPLS uses the I-SID in the PBB header to identify traffic belonging to different I-VPLSs.
  • The hub PW(s) connected to other PE routers in the backbone network. These PWs are used to send and receive PBB-encapsulated VPLS traffic.
  • The B-VPLS can have interface(s) connecting to other PBB (IEEE 802.1ah) networks.
Each B-VPLS service instance in a PE router is explicitly configured with a B-MAC address. This B-MAC address uniquely identifies the B-VPLS service instance in the PE router. All I-VPLS service instances locally associated with a particular B-VPLS use this MAC address as the source B-MAC address when performing PBB encapsulation. When the B-VPLS service instance receives traffic from other B-VPLS PE routers, it learns the source B-MAC addresses and adds them to its forwarding database.
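
To make the split concrete, the following is a minimal configuration sketch of an N:1 PBB-VPLS on an IB-PE router, assuming Alcatel-Lucent SR OS-style syntax; the service IDs, I-SID and SAP/SDP values are hypothetical, and exact commands vary by release:

A:IB-PE1# configure service vpls 100 customer 1 b-vpls create
A:IB-PE1>config>service>vpls# info
            description "Common B-VPLS"
            mesh-sdp 2:100 create            # hub PWs to the other IB-PEs
            exit
            mesh-sdp 3:100 create
            exit
            no shutdown
            exit all

A:IB-PE1# configure service vpls 1000 customer 1 i-vpls create
A:IB-PE1>config>service>vpls# info
            description "Customer I-VPLS"
            pbb
                backbone-vpls 100 isid 1000  # internal link to the B-VPLS, I-SID 1000
            exit
            sap 1/1/1:1000 create            # customer-facing attachment circuit
            exit
            no shutdown
            exit all

Additional I-VPLS instances would be bound to the same B-VPLS 100 with their own I-SID values, giving the N:1 mapping described above.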
PBB-VPLS Encapsulation
As mentioned above, the I-VPLS performs PBB encapsulation on traffic arriving on spoke PWs. The PBB header has 3 pieces of information:
  • I-SID: The I-SID is the I-VPLS service identifier that lets the remote B-VPLS identify which of the I-VPLS service instances associated with it should receive the traffic. All I-VPLS instances belonging to the same service must use the same I-SID value and must be associated with the same B-VPLS instance.
  • Source B-MAC address: Each B-VPLS instance has its own globally unique B-MAC address defined manually. The I-VPLS uses the B-MAC defined in the B-VPLS instance, with which it is associated, as the source B-MAC address in the PBB header.
  • Destination B-MAC address: The destination B-MAC address is learned by receiving PBB-encapsulated traffic from other B-VPLS PE routers.
Every time an I-VPLS receives traffic from an I-domain destined for a remote I-domain, it checks its PBB forwarding database to find the correct destination B-MAC address for PBB encapsulation. If there is no match, the I-VPLS uses the group B-MAC address 01:1E:83:pq:rs:tu, where pq:rs:tu is derived from the 24-bit I-SID. Once the B-MAC addresses are acquired, the I-VPLS performs PBB encapsulation before sending the traffic to the B-VPLS over the internal link. The PBB encapsulation is as follows (the resulting frame layout is sketched after the list):
  1. Insert B-MAC addresses. The destination B-MAC address is found in the PBB forwarding database of the I-VPLS instance.
  2. Insert the I-TAG which contains the I-SID. The I-SID is used at remote PE router to determine to which I-VPLS the traffic belongs.
  3. Insert the B-TAG which contains the B-VLAN ID for the backbone network.
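For reference (the original Figure 1 image may not be displayed), a PBB-encapsulated frame carried over a hub PW has roughly the following layout per IEEE 802.1ah, with the VPLS PW label stack preceding it on the wire:

  [B-DA][B-SA][B-TAG: TPID 0x88A8 + B-VLAN ID][I-TAG: TPID 0x88E7 + I-SID (24 bits)][C-DA][C-SA][C-VLAN tag(s)][EtherType][Payload]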
When a remote PE router receives PBB-VPLS encapsulated traffic from the B-VPLS of another PE router in the B-domain, it removes the VPLS (PW) header and checks the destination B-MAC address of the frame. If the destination B-MAC address matches, the traffic is meant to be forwarded to a local I-VPLS instance; otherwise the traffic is dropped. The router then performs an I-SID lookup to identify to which I-VPLS the traffic belongs. The I-VPLS decapsulates the PBB frame and forwards/floods it to the appropriate destination address.
Flooding in PBB-VPLS
The two modes of I-VPLS to B-VPLS mapping are 1:1 and N:1, as discussed above. In 1:1 mode, each I-VPLS representing a service connects to its own B-VPLS, and that B-VPLS only connects to other IB-PE routers that have that I-VPLS instance configured on them. In N:1 mode, multiple I-VPLS instances share a common B-VPLS instance created in the B-domain, and all IB-PE routers participate in that B-VPLS instance. The I-VPLS instances on different IB-PE routers are different but share the same B-VPLS instance. The IB-PE router receiving traffic from that B-VPLS instance identifies the correct I-VPLS from the I-SID value, and forwards the traffic to that I-VPLS.
In figure 4, multiple I-VPLS instances share a common B-VPLS instance. One issue with N:1 mode is that different I-VPLSs sharing a common B-VPLS may require different network overlays. For example, in figure 4, I-VPLS 100 connects only IB-PE1 and IB-PE3, while I-VPLS 200 connects IB-PE1 and IB-PE2. Now, if I-VPLS 100 on the IB-PE1 router receives broadcast/unknown/multicast (BUM) traffic, it is propagated to all IB-PE routers connected to that B-VPLS, regardless of the I-VPLSs on them. The other IB-PE routers discard this traffic because there is no corresponding I-VPLS locally bound to the B-VPLS. This causes unnecessary bandwidth waste.
In order to solve this issue, a mechanism is required to track the membership of IB-PE routers in each I-VPLS instance, so that when an I-VPLS sends BUM traffic, only the IB-PE routers that have that I-VPLS instance locally bound to the B-VPLS receive it. The Multiple Registration Protocol (IEEE 802.1ak) can be used to optimize this behavior.
Summary
The PBB-VPLS solution solves the problem of MAC explosion that can be caused by learning a large number of C-MAC addresses. It incorporates PBB functionality into PE routers of H-VPLS solution. The IB-PE router divides the network into I-domain and B-domain. The VPLS in the I-domain learns C-MAC addresses and maps them to B-MAC addresses. The VPLS in the B-domain uses B-MAC addresses to perform learning and forwarding. 
(source: https://sites.google.com/site/amitsciscozone/home/vpls/pbb-vpls)

Virtual Private LAN Service (VPLS) (Part 1)

Hierarchical VPLS
Virtual Private LAN Service (VPLS) has become a very attractive technology over the past few years with the advent of MPLS. The reason is that some enterprises are very reluctant to relinquish routing control of their network to the service provider and desire L2VPN services with multipoint connectivity. VPLS allows service providers to deploy carrier-class service over an Ethernet/MPLS-based network in a reliable and flexible way. This article starts with some VPLS basics and continues with H-VPLS.
VPLS
As its name implies, the purpose of VPLS is to provide a private multipoint LAN-type Ethernet connectivity service i.e. VPLS emulates a LAN segment over MPLS backbone across pseudowires or virtual circuits. VPLS creates one or more LANs for each customer who is using the service from the service provider. Each LAN is completely separate from the other emulated LAN segments. When a customer with different Ethernet sites connects to an MPLS backbone where VPLS is deployed, it appears as if all the sites are interconnected through a virtual Ethernet switch.
VPLS Terminology
  • u-PE: User facing Provider Edge bridge that is used to connect Customer Edge (CE) devices to the service
  • n-PE: Network Provider Edge that acts as a gateway between MPLS core and edge domain, which may be MPLS or Ethernet
  • PE-Agg: Provider Edge - Aggregation Switch is an Ethernet switch that aggregates several u-PE connections for onward connection onto n-PE
  • VSI: Virtual Switch Instance - A VSI describes an Ethernet bridge function within an n-PE that equates to a multipoint L2VPN. A unique attribute of a VSI is that it terminates PW virtual interfaces, which differs from an Ethernet bridge, which terminates physical Ethernet interfaces.
  • PW: Pseudowire - A PW is a virtual connection that connects two VSIs. A PW is bi-directional and consists of a pair of uni-directional MPLS Virtual Circuits (VCs). A PW may also be used to connect a point-to-point circuit.
  • AC: Attachment Circuit - An AC is the customer connection to a service provider network. An AC may be a physical port, or a virtual port.
VPLS Architecture
An Ethernet switch has the following characteristics:
  • Forwarding of Ethernet frames
  • Forwarding of Unicast frames with an unknown destination MAC address
  • Replication of broadcast and multicast frames to more than one port
  • Loop prevention
  • Dynamic learning of MAC addresses
  • MAC address aging
VPLS should also have these characteristics. If the PE router receives an Ethernet frame with an unknown destination MAC address, the frame is replicated and forwarded to all ports that belong to that LAN segment.
VPLS Overview
For each VPLS, the PE routers are fully meshed with pseudowires. A PE receiving a frame from another PE can identify which VPLS the frame belongs to on the basis of the pseudowire label, or VC label. As far as each customer is concerned, an Ethernet frame sent into the service provider network is delivered to the correct site(s) on the basis of its destination MAC address. It is the task of each PE router to inspect the destination MAC address of each frame arriving from a locally attached site and to forward it to the appropriate destination site. This destination site may be attached to the same PE on a different port, or to a remote PE. If the destination site is attached to the same PE, the PE locally switches the frame to the correct port. If the destination site is attached to a remote PE, the ingress PE must forward the frame over the appropriate pseudowire to the remote PE. This means that the ingress PE needs to know which egress PE to send the frame to.
There are two ways in which this can be achieved: control plane signalling that carries information about MAC addresses between PEs, or a scheme based on MAC address learning. VPLS takes the latter approach, with each PE taking responsibility for learning which remote PE is associated with a given MAC address. Thus an ingress PE simply needs to identify which frames should be sent to which egress PE, and each egress PE takes care of identifying which local ports to forward the packet to. By inspecting the source MAC address (say, A) of a frame arriving on a port, whether an actual local port or a pseudowire from a remote PE, and creating a corresponding entry in the forwarding table, the PE learns where to send future frames whose destination MAC address is A.
In the case where Ethernet switches are used as CE devices and connected to PE routers, the PEs need to learn the MAC addresses of the individual hosts attached to those switches. If a new host is plugged into an office network served by a CE switch, the effect is felt by all PEs. Thus, for large deployments, it is better to use routers as CEs than switches.
Forwarding of Unicast Frames
In figure 1, suppose that the MAC address of host C is C and the MAC address of host B is B, for the customer network X. Suppose host C sends a frame with source MAC address C and destination MAC address B. Suppose that PE3 does not know the location of MAC address B. As a learning bridge would do, PE3 floods the packet on all ports except the port on which it arrived. This means the packet is flooded to the pseudowire to PE2 and the pseudowire to PE1.
PE1 and PE2 know that the packet belongs to customer X's VPLS, by virtue of the pseudowire on which the frame arrived. PE1 and PE2 both perform destination MAC address lookup in their VPLS forwarding tables corresponding to customer X. If PE1 does not know the location of MAC address B, it floods the frame on its local ports to CE. However, it does not flood the frame to any other PEs. This split horizon scheme ensures no forwarding loops occur. Similarly, PE2 forwards the frame on to the port facing the switch CE.
Receiving frames with source MAC address C enables each PE to learn the location of host C. Thus, PE1 and PE2 create entries in their forwarding tables associating MAC address C with their respective pseudowires to PE3. In this way, all PEs learn the MAC addresses and create associations between MAC addresses and pseudowires (for remote destinations) in their forwarding tables for that particular VPLS instance.
Forwarding of Broadcast and Multicast Frames
In figure 1, suppose PE3 receives a broadcast frame sent by host C. The frame must be sent to all sites of customer X's VPLS. PE3 floods the frame onto the pseudowires to PE1 and PE2. In turn, PE1 and PE2 flood the frame to the attached CEs but, due to split horizon, do not send the frame to any other PEs. Multicast traffic is treated in exactly the same way.
VPLS Discovery - How does a PE know which other PEs have members of a particular VPLS attached?
The capability to manually configure the addresses of remote PEs is required; however, manual configuration becomes unnecessary when auto-discovery is used. Auto-discovery allows PE devices to automatically discover other PE devices that have an association with a particular VPLS instance. Once the PEs have discovered the other PEs for a VPLS instance, they can then signal connections to interconnect the PEs.
There are a number of mechanisms that can be used to distribute VPLS associations between PE devices, including multiprotocol BGP extensions, LDP-based, DNS-based, RADIUS-based and static mechanisms.
  • Static configuration requires that each PE associated with a VPLS instance is configured as a peer. The scalability of this solution is low, as manual configuration is required every time a VPLS is added, changed or deleted. However, because the peers are manually configured, the solution is fairly robust in terms of security and the flexibility to signal additional attributes.
  • NMS/OSS configuration uses a central management point that distributes VPLS membership to each PE associated with a particular VPLS. This provides syntax checking based upon the type of device being configured and allows other service specific attributes to be provisioned at the same time.
  • DNS configuration uses DNS to distribute VPLS membership information. This mechanism provides centralized management and uses a common syntax, but it requires that the requesting PE belong to the DNS entries for the VPLS instance. It cannot signal additional attributes, so another mechanism is required to provide additional information.
  • RADIUS configuration uses RADIUS attributes to distribute VPLS membership information. In this scheme, the PE sends a request to the RADIUS server containing an identifier specific to a VPLS instance, and the RADIUS server returns a list of addresses of the PEs belonging to that instance. This mechanism is centralized and requires that the requesting PE have a RADIUS attribute associating it with the requested VPLS. RADIUS attribute-value pairs can be used to signal additional attributes.
  • LDP signalling has no inherent auto-discovery: each PE must be identified and a targeted LDP session established, so the pseudowires must be manually configured or some external auto-discovery mechanism must be used. The overall scalability is poor, as a PE must be associated with all other PEs for LDP discovery to work, which can lead to a large number of targeted LDP sessions. LDP can signal additional attributes, but additional configuration from NMS/OSS or static configuration is required.
  • BGP signalling requires that a PE associated with a particular VPLS is configured under a BGP process. BGP then advertises VPLS membership information using NLRIs. Hence, BGP has an inherent mechanism for auto-discovery and frees the user from having to configure the pseudowires manually. However, BGP cannot easily distribute attributes such as bandwidth profiles without introducing additional overhead.
VPLS Signaling - How is a full mesh of pseudowires setup between PEs?
Once the PEs have ascertained that other PEs have an association with the same VPLS instance, each PE needs to set up a PW to each of the others and bind the PWs to the particular VSI. There are 2 solutions that have been described for signalling of PWs between PEs-
  • BGP-based VPLS (RFC 4761)
  • LDP-based VPLS (RFC 4762)
LDP-based Signaling
LDP is used for the signaling of the pseudowires that interconnect the VPLS instances of a given customer on the PEs. In order to signal the required full mesh of pseudowires, a full mesh of targeted LDP (T-LDP) sessions is required between the PEs. In the absence of auto-discovery, these sessions must be manually configured on each PE router. The LDP session is used to communicate the "inner label" or "VC label" to be used for each pseudowire. The network operator assigns a VC ID, which identifies a particular pseudowire and is configured to be the same for a particular VPLS instance on all PE routers.
The following information is communicated over the LDP session, in addition to the label value itself. The FEC element includes the following fields-
  • VC ID
  • Control Word bit - This indicates whether a control word will be used.
  • VC Type - This indicates the encapsulation type. This would be Ethernet or VLAN-tagged Ethernet.
  • Interface parameters field - This contains information such as media MTU, etc.
Cisco IOS uses LDP to signal the setup, maintenance and teardown of a PW between PE devices. Once a PE discovers that other PEs have an association with a particular VPLS instance, it uses T-LDP to signal to those PEs that a PW needs to be set up between them. When a PE device is associated with a particular VSI, LDP transmits a label-mapping message with VC type 0x0005 and a 4-byte VC ID value. If the remote PE has an association with that particular VC ID, it accepts the LDP label-mapping message and responds with its own label-mapping message. Once the two uni-directional VCs are operational, they are combined to form a bi-directional PW.
BGP-based Auto-discovery and Signaling
The BGP NLRI takes care of auto-discovery and signaling at the same time: the NLRI generated by a given PE contains the necessary information required by any other PE. Together, these components enable the automatic setup of a full mesh of pseudowires for each VPLS.
On each PE, a Route Distinguisher (RD) and a Route Target (RT) are configured for each VPLS, as in L3VPN and L2VPN. The RT is the same for a particular VPLS across all PEs and is used to identify which VPLS a particular BGP message pertains to; the RD is used to disambiguate routes. On each PE, for each VPLS, an identifier called the VPLS Edge Identifier (VE ID) is configured. Each PE involved in a particular VPLS must be configured with a different VE ID. BGP is used to advertise the VE ID to other PEs. This, along with other information in the NLRI, is used to calculate the pseudowire label required to reach the advertising PE.
For a given VPLS, a PE requires that each remote PE use a different pseudowire label to send traffic to it. This facilitates the MAC learning process: the receiving PE can learn which PE is associated with the source MAC address of a frame. A PE could send an explicit list of the pseudowire labels required to reach it, one per remote VPLS edge in that VPLS, but this would mean sending a long list of labels if there are a large number of PEs. Instead, the necessary information is carried in the BGP NLRI, which allows all remote PEs to calculate the pseudowire label expected by the advertising PE.
A BGP Update message contains the following items-
  • Extended Community Route Target - This allows the receiving PE to ascertain which particular VPN the advertisement pertains to.
  • Layer 2 Info - This is automatically generated by the sending PE. It contains the following information-
    • Control Flags to indicate whether a Control Word is included
    • Encapsulation type - Ethernet or VLAN tagged
    • MTU
  • Other BGP attributes like AS Path, Origin, etc
  • The NLRI contains the following information-
    • Route Distinguisher
    • VE ID
    • Label Base
    • VE Block Offset
    • VE Block size
The label base, VE block offset and VE block size are the information the receiving PE requires to calculate the pseudowire label for sending VPLS traffic to the advertising PE. A PE allocates blocks of labels; each block is a contiguous set of label values, and the PE simply advertises the value of the first label (the label base) and the number of labels in the block (the block size). The label value that a remote PE with a given VE ID must use to reach the advertising PE is computed as
label value = label base + VE ID - 1
For example, suppose in figure 1 that PE1 advertises a label base of 100 to PE2 and PE3, and that the VE IDs of customer X's VPLS instance are 1 on PE1, 2 on PE2 and 3 on PE3. When PE2 wants to send traffic to PE1, it calculates the pseudowire label as follows-
label value = 100 + 2 (VE ID of PE2) - 1
                = 101
So, PE2 uses 101 as the VC label to send traffic to PE1. The VE block offset is used when multiple label blocks are advertised in separate NLRIs; a remote PE's VE ID must fall within one of the advertised blocks.
Both of these implementation options are identical from a forwarding plane point of view, but they differ in the control plane, particularly in the protocol used to signal and establish the pseudowires. The BGP implementation advertises a PE's association with a particular VPLS as well as the label block from which labels may be assigned to communicate with that PE. This approach has several disadvantages-
  1. All label information is broadcast to all PEs associated with a particular VPLS due to the full mesh. This is acceptable for initial VPLS auto-discovery, but subsequent PW signalling is inefficient.
  2. PW signalling of peer-to-peer parameters is broadcast to all PE routers, which wastes bandwidth.
  3. This mechanism uses BGP for MAC address flushing instead of IEEE Spanning Tree TCNs, which makes it incompatible with IEEE bridges.
  4. BGP scaling mechanisms such as Route Reflectors (RRs) have to take the increased signalling overhead into account.
Although a full mesh of PWs is formed between PEs, each individual PW has a set of unique attributes that are specific to a PW and have significance to that PW only. As attributes are point-to-point in nature, targeted LDP (T-LDP) is best suited. Further, since most MPLS implementations are LDP based, the use of LDP does not introduce a new protocol into the network.
To prevent packets from looping, the split horizon forwarding technique is used: a packet is never transmitted back out of the interface on which it was received. If a packet is received on a PW, it cannot be forwarded on any other PW associated with that VSI. The VPLS implementation allows customer BPDUs to be transported across the network; these customer BPDUs are tunneled through the VPLS network.
VPLS Summary
VPLS enables a multipoint Ethernet service to be delivered over an MPLS infrastructure. However, it has several scaling limitations, such as the requirement for a full mesh of targeted LDP sessions for VPLS discovery and signalling: each PE device must form an adjacency with all other PE devices, which requires it to learn the IP addresses of all remote PE devices and exchange label information with them. Also, the replication of broadcast and multicast frames at the ingress PE router makes inefficient use of network bandwidth. With VPLS, you can transparently tunnel Layer 2 control protocols like CDP, STP, or VTP.
H-VPLS
To address the scaling limitations of the flat VPLS architecture, H-VPLS was defined. VPLS requires a full mesh of tunnel LSPs between all PE routers that participate in the VPLS service, and for each VPLS service, n*(n-1)/2 PWs must be set up between the PEs: with 100 participating PEs, that is 4,950 PWs per VPLS. This creates signalling overhead, and the packet replication requirement hinders large-scale deployment. Hierarchical connectivity reduces both the signalling and the replication overhead.
H-VPLS introduces u-PE and n-PE routers. u-PE routers are user-facing PE routers, while n-PE routers are network-facing PE routers. The hierarchy provides the benefits of less signalling in MPLS core network and less packet replication on n-PE routers. The u-PE routers have an aggregation role and do some packet replication and MAC address learning.
RFC 4762 defines 2 mechanisms for the access domain in H-VPLS. They are-
  1. H-VPLS with Q-in-Q in Access layer
  2. H-VPLS with MPLS in Access layer
The VPLS core PWs (hub) are augmented with access PWs (spoke) to form a two-tier H-VPLS. In figure 2, 3 customer sites are connected to u-PE devices. Each u-PE device has a single connection (PW) to the n-PE routers, and the n-PE routers are connected in a basic full-mesh VPLS. For each VPLS service, a single spoke PW is set up between the u-PE and n-PE devices. Unlike traditional PWs that terminate on a physical or logical port, a spoke PW terminates on a VSI on the u-PE and n-PE devices. The u-PEs and n-PEs treat each spoke connection like an attachment circuit of the VPLS service, and the PW label is used to associate the traffic from the spoke PW with a VPLS instance.
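To make the hierarchy concrete, the following is a minimal VSI configuration sketch, assuming Alcatel-Lucent SR OS-style syntax with hypothetical service and SDP IDs (spoke-sdp/mesh-sdp take sdp-id:vc-id):

# u-PE: one SAP per customer port plus a single spoke PW up to the n-PE
A:u-PE# configure service vpls 100 customer 1 create
A:u-PE>config>service>vpls# info
            sap 1/1/1:100 create
            exit
            spoke-sdp 12:100 create
            exit
            no shutdown
            exit all

# n-PE: spoke PW down to the u-PE plus mesh PWs to the other n-PEs
A:n-PE# configure service vpls 100 customer 1 create
A:n-PE>config>service>vpls# info
            spoke-sdp 21:100 create
            exit
            mesh-sdp 2:100 create
            exit
            mesh-sdp 3:100 create
            exit
            no shutdown
            exit all

Note that the split horizon rule applies only among the mesh-sdp members, which is why the u-PE-facing PW must be a spoke: traffic arriving on it may legitimately be forwarded onto the mesh, and vice versa.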
u-PE Operation
The u-PE device supports Layer 2 switching and does all the normal bridging functions of learning and replication on all its ports, including the spoke PW. Packets to unknown destinations are replicated to all ports in the service, including the spoke PW. Once the MAC addresses of CE devices connected to the same u-PE device are learned, traffic between them is switched locally, saving the capacity of the spoke PW to the n-PEs. Traffic between CE devices connected to different u-PE devices is switched directly onto the spoke PW and sent to the n-PEs over the point-to-point PW.
Since the u-PE is bridging capable, only a single PW is required per VPLS instance for any number of access connections in the same VPLS service. This reduces the signalling overhead between u-PEs and n-PEs. If the u-PE is directly connected to n-PEs, Q-in-Q encapsulation can be used for spoke PW.
n-PE Operation
An n-PE device supports all bridging functions for VPLS service and supports the routing and MPLS encapsulation like in basic VPLS. The operation of n-PE is independent of the type of device at the other end of the spoke PW. Thus the n-PE will switch traffic between spoke PW, hub PWs, and ACs once it has learned the MAC addresses.
Dual-Homed u-PE
If an n-PE or a PW connection fails, the u-PE can suffer total loss of connectivity. To prevent this, redundant connections can be provided: the u-PE is dual-homed into two n-PE routers. In figure 2, the u-PE sets up two PWs for each VPLS instance, one designated as primary and the other as standby. The u-PE negotiates pseudowire labels for both PWs but does not use the standby PW unless the primary PW fails. A spanning tree instance or manual configuration can be used to designate the primary and standby roles.
Upon failure of the primary PW, the u-PE immediately switches to the standby PW. At this point, the n-PE terminating the standby PW starts learning MAC addresses on the spoke PW. All other n-PEs initially continue to send traffic to the original n-PE until they learn that the devices are now reachable via the new n-PE. To speed up the unlearning process, the new n-PE may send a flush message using the MAC List TLV (type 0x404) to all n-PEs. Upon receiving the message, the n-PEs flush the MAC addresses associated with that VPLS instance.
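A sketch of the dual-homed u-PE side, assuming SR OS-style endpoint syntax with hypothetical SDP IDs: both spoke PWs are placed in one endpoint so that only one is active at a time, and precedence marks the primary.

A:u-PE# configure service vpls 100
A:u-PE>config>service>vpls# info
            endpoint "CORE" create
            exit
            spoke-sdp 12:100 endpoint "CORE" create
                precedence primary             # PW to the primary n-PE
            exit
            spoke-sdp 13:100 endpoint "CORE" create
            exit                               # standby PW to the backup n-PE
            no shutdown
            exit all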
H-VPLS Model using Ethernet Access Network
The H-VPLS model can be expanded to include an Ethernet access network. It still requires the n-PEs to be fully meshed in the MPLS core, but there is no restriction on the topology of the Ethernet access network, so the u-PEs and n-PEs need not form hubs and spokes. One approach to tunneling a customer's Ethernet traffic across an Ethernet access network is to add an additional VLAN tag (an S-VLAN tag) to the customer's data; the customer's data may itself be tagged or untagged. Inside the provider's network, each S-VLAN designates a VPLS instance for a customer, so there is a one-to-one correspondence between S-VLANs and VPLS instances. The u-PEs must be capable of adding the S-VLAN tag to customer data.
The n-PEs need to perform bridging functionality over the standard Ethernet ports toward the access network, as well as over the PWs toward the core network. The n-PEs may also need to run STP in the access network, as well as split horizon in the core network. Finally, the n-PEs need to map an S-VLAN to a VPLS instance and its associated PWs, and vice versa.
H-VPLS Summary
H-VPLS improves pseudowire scalability: it reduces the number of PE devices connected in a full mesh topology, improving control plane scalability, and it reduces the frame replication burden on core devices. However, there are better ways to address the scalability problems than those defined by LDP-based H-VPLS, and H-VPLS does not offer an efficient solution for multicast traffic. BGP-based VPLS with point-to-multipoint LSPs is a good option for service providers.
(source: https://sites.google.com/site/amitsciscozone/home)

Multi-chassis LAG (MC-LAG) in VPLS Service

MC-LAG

Multi-chassis LAG (MC-LAG) is an extension to the LAG feature that provides not only link redundancy but also node-level redundancy. The feature is not defined in any IEEE standard; Alcatel-Lucent has developed a proprietary solution in which messaging between the redundant-pair nodes coordinates the LAG switchover.
Multi-chassis LAG supports LAG switchover coordination: one node is connected via the LAG to two redundant-pair peer nodes. During LACP negotiation, the redundant-pair peer nodes act like a single node, using active/standby signaling to ensure that links of only one peer node are used at a time.



In this Configuration Note a setup is built with 1 CE and 4 PEs. The CE node can be any routing/switching device that can be configured for LACP. The PE routers can be SR, SRc or ESS. Figure 1 shows the physical topology of the setup:





                        Figure 1: Physical Topology
Note that the test tool in Figure 1 is only used to send/receive traffic to create MAC entries in the VPLS service. It can be replaced by a router/switch.

Configuration

Base Topology


This Configuration Note assumes that the following base configuration has been implemented on the PEs:
  • Cards, MDAs and ports configured
  • Interfaces configured
  • IGP configured and converged
  • MPLS configured
  • SDPs configured between all PE routers

Note that you can choose between OSPF and IS-IS as the IGP. Either LDP or RSVP can be used for signaling the transport MPLS labels; alternatively, GRE can be used for the transport tunnels.
It does not matter if the SDPs are using LDP, RSVP or GRE. RSVP has the added value of offering FRR to get faster convergence in the core.
In this setup OSPF and LDP are used.


The following commands can be used to check if OSPF has converged and to make sure the SDPs are up:

*A:PE1# show router route-table
===============================================================================
Route Table (Router: Base)
===============================================================================
Dest Prefix                                   Type    Proto    Age         Pref
       Next Hop[Interface Name]                                     Metric    
-------------------------------------------------------------------------------
10.0.0.1/32                                   Local   Local    00h33m00s   0  
       system                                                       0
10.0.0.2/32                                   Remote  OSPF     00h27m17s   10 
       10.1.2.2                                                     1000
10.0.0.3/32                                   Remote  OSPF     00h28m37s   10 
       10.1.3.3                                                     1000
10.0.0.4/32                                   Remote  OSPF     00h25m25s   10 
       10.1.2.2                                                     2000
10.1.2.0/24                                   Local   Local    00h30m06s   0  
       toPE2                                                        0
10.1.3.0/24                                   Local   Local    00h30m00s   0  
       toPE3                                                        0
10.2.4.0/24                                   Remote  OSPF     00h25m25s   10 
       10.1.2.2                                                     2000
10.3.4.0/24                                   Remote  OSPF     00h28m37s   10 
       10.1.3.3                                                     2000
-------------------------------------------------------------------------------
No. of Routes: 8
===============================================================================


*A:PE1# show service sdp

===============================================================================
Services: Service Destination Points
===============================================================================
SdpId    Adm MTU   Opr MTU   IP address       Adm  Opr         Deliver Signal 
-------------------------------------------------------------------------------
12       0         9190      10.0.0.2         Up   Up          LDP     TLDP   
13       0         9190      10.0.0.3         Up   Up          LDP     TLDP   
14       0         9190      10.0.0.4         Up   Up          LDP     TLDP   
-------------------------------------------------------------------------------
Number of SDPs : 3






MC-LAG Configuration

LAG configuration on CEs


Auto-negotiation needs to be switched off (or configured to limited) on all ports that will be included in the LAG.
Configure LACP on the LAG. At least one side of the LAG needs to be configured in ‘active’ mode.

*A:CE5# configure port 1/1/[1..4] ethernet no autonegotiate
*A:CE5# configure port 1/1/[1..4] no shut
*A:CE5# configure lag 1 port 1/1/1 1/1/2 1/1/3 1/1/4
*A:CE5# configure lag 1 lacp active                 
*A:CE5# configure lag 1 no shutdown

LAG configuration on PEs


The PE ports facing the CEs have to be configured as access ports since they will be used in the VPLS service. The LAG also needs to be configured in access mode.
Remark: the LAG encapsulation type (null|dot1q|qinq) must match the port encapsulation type of the LAG members (see the sketch after the commands below).
Auto-negotiation needs to be switched off (or configured to limited).
Configure LACP on the LAG. At least one side of the LAG needs to be configured in ‘active’ mode.

*A:PE1# configure port 1/1/[1..2] ethernet no autonegotiate
*A:PE1# configure port 1/1/[1..2] ethernet mode access    
*A:PE1# configure port 1/1/[1..2] no shut                 
*A:PE1# configure lag 1 mode access
*A:PE1# configure lag 1 port 1/1/1 1/1/2                  
*A:PE1# configure lag 1 lacp active                       
*A:PE1# configure lag 1 no shutdown
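To illustrate the encapsulation remark above: if tagged traffic were used, the port and LAG encapsulation types would have to match, e.g. as follows. This is a sketch only; in this setup the default null encapsulation is used, as the show lag 1 detail output later confirms:

*A:PE1# configure port 1/1/[1..2] ethernet encap-type dot1q
*A:PE1# configure lag 1 encap-type dot1q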




MC-LAG configuration on PE1 and PE2


The redundant PEs must act as one virtual node toward the CE, so they have to communicate the same LACP parameters to the CE side.
Three parameters uniquely identify a LAG instance:
    • lacp-key

    • system-id

    • system-priority

These three parameters must be configured with the same value on both redundant PEs.
Configure multi-chassis redundancy with a peering session toward the redundant PE system address and enable mc-lag redundancy.



*A:PE1# configure redundancy multi-chassis
*A:PE1>config>redundancy>multi-chassis# info
----------------------------------------------
            peer 10.0.0.2 create
                mc-lag
                    lag 1 lacp-key 1 system-id 00:00:00:00:00:01 system-priority 100
                    no shutdown
                exit
                no shutdown
            exit
----------------------------------------------


*A:PE2# configure redundancy multi-chassis
*A:PE2>config>redundancy>multi-chassis# info
----------------------------------------------
            peer 10.0.0.1 create
                mc-lag
                    lag 1 lacp-key 1 system-id 00:00:00:00:00:01 system-priority 100
                    no shutdown
                exit
                no shutdown
            exit
----------------------------------------------


MC-LAG Verification

Verify MC peers



*A:PE1# show redundancy multi-chassis sync                           

===============================================================================
Multi-chassis Peer Table
===============================================================================
Peer
-------------------------------------------------------------------------------
Peer IP Address         : 10.0.0.2                                            
Authentication          : Disabled                                            
Source IP Address       : 0.0.0.0                                             
Admin State             : Enabled                                             
===============================================================================
===============================================================================

Note that if the source IP address is not configured explicitly, the output shows ‘0.0.0.0’. In that case the system IP address will be used as the source. (In future releases ‘0.0.0.0’ will be replaced by the source IP address actually in use.)
The source IP address can be configured with the command:

*A:PE1# configure redundancy multi-chassis        
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 source-address 10.0.0.1

Authentication can also be configured:

*A:PE2# configure redundancy multi-chassis
*A:PE2>config>redundancy>multi-chassis# peer 10.0.0.1 authentication-key Alcatel

Remark: when configuring authentication or a source address, the MC peer needs to be shut down first.
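Putting the remark into practice, the full sequence on PE1 could look like this (a sketch; the same authentication key must be configured on both peers):

*A:PE1# configure redundancy multi-chassis
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 shutdown
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 source-address 10.0.0.1
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 authentication-key Alcatel
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 no shutdown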


If a source IP address and authentication are configured, the result looks like:

*A:PE1# show redundancy multi-chassis sync                     

===============================================================================
Multi-chassis Peer Table
===============================================================================
Peer
-------------------------------------------------------------------------------
Peer IP Address         : 10.0.0.2                                            
Authentication          : Enabled                                             
Source IP Address       : 10.0.0.1                                            
Admin State             : Enabled                                             
===============================================================================
===============================================================================


Verify MC-LAG peer status and LAG parameters


*A:PE1# show redundancy multi-chassis mc-lag peer 10.0.0.2

===============================================================================
Multi-Chassis MC-Lag Peer 10.0.0.2
===============================================================================
Last Changed    : 03/07/2007 17:38:53                                         
Admin State     : Up                  Oper State           : Up               
KeepAlive       : 10 deci-seconds     Hold On Ngbr Failure : 3                
-------------------------------------------------------------------------------
Lag Id Lacp Key Remote Lag Id System Id          Sys Prio Last Changed      
-------------------------------------------------------------------------------
1      1        1             00:00:00:00:00:01  100      03/07/2007 17:40:17
-------------------------------------------------------------------------------
Number of LAGs : 1
===============================================================================

In this example the Lag-Id is 1 on both redundant PEs. This is not mandatory. If the Lag-Id on PE2 is e.g. 2, the following should be configured on PE1:

*A:PE1# configure redundancy multi-chassis
*A:PE1>config>redundancy>multi-chassis# peer 10.0.0.2 mc-lag lag 1 remote-lag 2 lacp-key 1 system-id 00:00:00:00:00:01 system-priority 100

Verify MC-LAG status


*A:PE1# show lag 1

===============================================================================
Lag Data
===============================================================================
Lag-id         Adm     Opr     Port-Threshold   Up-Link-Count   MC Act/Stdby  
-------------------------------------------------------------------------------
1              up      down    0                0               standby       
===============================================================================

*A:PE2# show lag 1

===============================================================================
Lag Data
===============================================================================
Lag-id         Adm     Opr     Port-Threshold   Up-Link-Count   MC Act/Stdby  
-------------------------------------------------------------------------------
1              up      up      0                2               active        
===============================================================================

In this case the LAG on PE2 is active/operationally up, whereas the LAG on PE1 is standby/operationally down.

By default, the selection criterion is the highest number of links and priority. In this example the number of links and the priority of the links are the same on both redundant PEs, so whichever PE's LAG becomes operationally up first will be the active one.

The LAG ports of one PE can be preferred over those of the other PE by configuring the port priority (e.g. the following command lowers the priority value of the LAG ports on PE1, thus giving this LAG higher preference).

*A:PE1# configure lag 1 port 1/1/1 1/1/2 priority 10

Note: a lower priority value is preferred.

Verify detailed MC-LAG status on PE1


*A:PE1# show lag 1 detail

===============================================================================
LAG Details
===============================================================================
Description:

-------------------------------------------------------------------------------
Details
-------------------------------------------------------------------------------
Lag-id              : 1                     Mode                 : access     
Adm                 : up                    Opr                  : up         
Thres. Exceeded Cnt : 19                    Port Threshold       : 0          
Thres. Last Cleared : 03/07/2007 19:57:18   Threshold Action     : down       
Dynamic Cost        : false                 Encap Type           : null       
Configured Address  : 1e:2f:ff:00:01:41     Lag-IfIndex          : 1342177281 
Hardware Address    : 1e:2f:ff:00:01:41     Adapt Qos            : distribute 
Hold-time Down      : 0.0 sec                                                 
LACP                : enabled               Mode                 : active     
LACP Transmit Intvl : fast                  LACP xmit stdby      : enabled    
Selection Criteria  : highest-count         Slave-to-partner     : disabled   
Number of sub-groups: 1                     Forced               : -          
System Id           : 1e:2f:ff:00:00:00     System Priority      : 32768      
Admin Key           : 32768                 Oper Key             : 1          
Prtr System Id      : 1e:2d:ff:00:00:00     Prtr System Priority : 32768      
Prtr Oper Key       : 32768                                                   

MC Peer Address     : 10.0.0.2              MC Peer Lag-id       : 1          
MC System Id        : 00:00:00:00:00:01     MC System Priority   : 100        
MC Admin Key        : 1                     MC Active/Standby    : active     
MC Lacp ID in use   : true                  MC extended timeout  : false      
MC Selection Logic  : peer decided
MC Config Mismatch  : no mismatch

-------------------------------------------------------------------------------
Port-id        Adm    Act/Stdby Opr    Primary   Sub-group     Forced   Prio
-------------------------------------------------------------------------------
1/1/1          up     active    up     yes       1             -        10     
1/1/2          up     active    up               1             -        10     

-------------------------------------------------------------------------------
Port-id        Role      Exp   Def   Dist  Col   Syn   Aggr  Timeout  Activity
-------------------------------------------------------------------------------
1/1/1          actor     No    No    Yes   Yes   Yes   Yes   Yes      Yes     
1/1/1          partner   No    No    Yes   Yes   Yes   Yes   Yes      Yes     
1/1/2          actor     No    No    Yes   Yes   Yes   Yes   Yes      Yes     
1/1/2          partner   No    No    Yes   Yes   Yes   Yes   Yes      Yes     
===============================================================================

After changing the LAG port priorities, the LAG on PE1 is in up/up state and the ports are in up/active/up status.
This show command also displays MC peer info and actor info. More information about this can be found in the MC-LAG workshop presentation.


VPLS Configuration


Configure a VPLS service on every PE and add SAPs and SDPs. In this example mesh-SDPs are used. In case of a BTV service, spoke-SDPs in combination with R/MSTP can be used to avoid sending duplicate traffic; see the sketch below.
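As a sketch of that alternative (reusing the service and SDP IDs of this setup; the R/MSTP configuration itself is not shown), a spoke-SDP would simply replace the corresponding mesh-SDP:

*A:PE1>config>service>vpls# spoke-sdp 12:60 create
*A:PE1>config>service>vpls>spoke-sdp# exit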

Figure 2 shows the logical topology of Services, SAPs and SDPs

                         Figure 2: Logical Topology


*A:PE1>config>service>vpls# info
----------------------------------------------
            stp
                shutdown
            exit
            sap lag-1 create
            exit
            mesh-sdp 12:60 create
            exit
            mesh-sdp 13:60 create
            exit
            mesh-sdp 14:60 create
            exit
            no shutdown
----------------------------------------------

Likewise, a VPLS service, SAPs and mesh-SDPs need to be configured on the other PE routers; a sketch for PE3 follows.
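For example, PE3's service could look like this (a sketch inferred from the FDB outputs later in this note, where PE3 uses sap 1/1/1 towards the tester and SDPs 31, 32 and 34 towards PE1, PE2 and PE4):

*A:PE3>config>service>vpls# info
----------------------------------------------
            stp
                shutdown
            exit
            sap 1/1/1 create
            exit
            mesh-sdp 31:60 create
            exit
            mesh-sdp 32:60 create
            exit
            mesh-sdp 34:60 create
            exit
            no shutdown
----------------------------------------------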

VPLS Service Verification

Verify service status


*A:PE1# show service service-using

===============================================================================
Services
===============================================================================
ServiceId    Type      Adm    Opr        CustomerId        Last Mgmt Change   
-------------------------------------------------------------------------------
60           VPLS     Up     Up         1                 03/09/2007 20:32:34
-------------------------------------------------------------------------------
Matching Services : 1
-------------------------------------------------------------------------------
===============================================================================

The VPLS service should be Up on all PE routers.

Verify FDB of the VPLS service.


On CE5 and on the two ports of the tester, create IP interfaces in the same subnet.
Note that you could also create an Epipe service and connect the test tool to it; a sketch follows.
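A minimal Epipe sketch of that alternative (the service ID 61, tester port 1/1/2 and SDP 34 towards PE4 are assumptions, not values from the original note):

*A:PE3# configure service epipe 61 customer 1 create
*A:PE3>config>service>epipe# sap 1/1/2 create
*A:PE3>config>service>epipe>sap# exit
*A:PE3>config>service>epipe# spoke-sdp 34:61 create
*A:PE3>config>service>epipe>spoke-sdp# exit
*A:PE3>config>service>epipe# no shutdown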

*A:CE5>config>router>if# info
----------------------------------------------
            address 192.168.1.5/24
            port lag-1
----------------------------------------------


Send traffic from CE5 towards the ports of the test tool, e.g. by pinging the tester port connected to PE3.
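Assuming the tester interface behind PE3 was given the (hypothetical) address 192.168.1.3/24 in the same subnet as CE5:

*A:CE5# ping 192.168.1.3

You can then check the FDB on PE3: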

*A:PE3# show service id 60 fdb detail

===============================================================================
Forwarding Database, Service 60
===============================================================================
ServId    MAC               Source-Identifier     Type/Age  Last Change       
-------------------------------------------------------------------------------
60        1e:2e:ff:00:01:41 sdp:31:60             L/0       03/12/2007 17:48:20
60        1e:4e:01:01:00:01 sap:1/1/1             L/0       03/12/2007 17:48:20
-------------------------------------------------------------------------------
No. of MAC Entries: 2
===============================================================================


Notice that the first entry shows the MAC address of the LAG configured on CE5. The MAC address was learned via sdp:31:60 (the mesh-SDP from PE1). Stop sending traffic.


*A:CE5# show lag detail

===============================================================================
LAG Details
===============================================================================
-------------------------------------------------------------------------------
LAG 1
-------------------------------------------------------------------------------
Description:

-------------------------------------------------------------------------------
Details
-------------------------------------------------------------------------------
Lag-id              : 1                     Mode                 : network    
Adm                 : up                    Opr                  : up         
Thres. Exceeded Cnt : 36                    Port Threshold       : 0          
Thres. Last Cleared : 03/04/2002 16:31:13   Threshold Action     : down       
Dynamic Cost        : false                 Encap Type           : null       
Configured Address  : 1e:2e:ff:00:01:41     Lag-IfIndex          : 1342177281 
Hardware Address    : 1e:2e:ff:00:01:41
...


In Figure 3 the active access link between CE5 and PE1 is broken (e.g. by shutting down port 1/1/1 on PE1). The MC-LAG will switch over to PE2, and PE1 will send out an LDP MAC-Flush message over all mesh-SDPs (see also the note below Figure 3).


            Figure 3: Active MC-LAG is broken and PE1 sends out LDP MAC-Flush
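On SR OS, sending an LDP MAC flush when a SAP fails can be enabled per VPLS service with the send-flush-on-failure option. This is a sketch, not part of the original note; verify the behaviour and defaults for your release (the option requires STP to be shut down in the service, as it is in this setup):

*A:PE1# configure service vpls 60 send-flush-on-failure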




*A:PE3# show service id 60 fdb detail

===============================================================================
Forwarding Database, Service 60
===============================================================================
ServId    MAC               Source-Identifier     Type/Age  Last Change       
-------------------------------------------------------------------------------
60        1e:4e:01:01:00:01 sap:1/1/1             L/15      03/12/2007 17:48:20
-------------------------------------------------------------------------------
No. of MAC Entries: 1
===============================================================================

Note that the MAC address was flushed from the FDB on PE3.

If the same test is done, but port 1/1/1 is shut down on CE5 instead of on PE1, the result is different:

*A:PE3# show service id 60 fdb detail

===============================================================================
Forwarding Database, Service 60
===============================================================================
ServId    MAC               Source-Identifier     Type/Age  Last Change       
-------------------------------------------------------------------------------
60        1e:2e:ff:00:01:41 sdp:32:60             L/0       03/12/2007 17:48:39
60        1e:4e:01:01:00:01 sap:1/1/1             L/15      03/12/2007 17:48:20
-------------------------------------------------------------------------------
No. of MAC Entries: 2


(source: http://networkaids.blogspot.com/2010/01/multi-chassis-lag-mc-lag-in-vpls.html)