MPLS L3 VPN White Paper

Enterprise Network Design White Paper
For Metropolitan and Campus Networks –
Robert Shields
Sr. Network Engineer, CCIE # 12096 
January 2012 
Enterprise Network Challenges of Today and Beyond 
Enterprise network and security managers continue to see their responsibilities increase within their respective IT organizations as applications and services continue to migrate to IP as a fundamental means of communication. These applications and services include telephony, video, wireless and mobility clients, storage area networks, etc., with the list going on and on. The reality today is that most enterprise networks are now converged IP services networks with many different and diverse customer requirements. With the network requirements changing to resemble more of a service provider model, so too should the network architecture model. 
Enterprise networks traditionally supported server-based data applications, many of which were deemed critical and could be secured with a tiered firewall architecture in the data center. Today, all types of new devices and appliances are being plugged into the Enterprise IP network at numerous locations, and they all have their own security requirements. Compound this with the new technical challenges of the day, including server virtualization, cloud computing, business continuity, and IPv6, and there is no lack of pressure on the network and security IT groups to provide a secure, scalable, and robust network environment that can continue to meet the challenges of today and the future. 

MPLS L3VPNs – Providing a Flexible Network Architecture Solution 
This white paper focuses on how MPLS L3VPN technology can provide many benefits to Enterprise networks by enabling a flexible network architecture that can accommodate a diverse set of customer requirements across a Metropolitan Area or Campus network. These benefits stem from the ability to logically segment or virtualize the network into multiple isolated routing domains called VPNs. The following is a list of the benefits that MPLS L3VPN can provide.

  • Reduced Cost – L3VPNs allow multiple networks to be converged into a single network while still maintaining their privacy from one another. This results in fewer infrastructure devices, lower power consumption, fewer vendor support contracts, and fewer personnel required to maintain multiple networks.
  • Network Virtualization – Major router manufacturers support configuring literally thousands of virtual networks across the shared Enterprise network, thus supporting traffic isolation for specific applications, services, user groups, agencies or divisions, or any other type of isolated IP transport requirement.
  • Routable DMZs/Centralized Firewalls – By connecting the L3VPN network to a centralized firewall architecture at a physically secure data center, a VPN essentially becomes a boundless internal DMZ that can be extended anywhere across the Enterprise network. These DMZs still have the ability to communicate back to the main corporate network through an approved security policy, thereby allowing access to common network services such as DNS, DHCP, SNMP, etc., while still severely restricting the vulnerability that a DMZ can pose to the internal network. Other data access or communication requirements between different DMZs/VPNs can also be granted and added to the security policy on a case-by-case basis. The L3VPN architecture also eliminates the need to deploy firewalls at remote sites, where physical security of these devices cannot be as easily assured; instead, a new VPN can be created and mapped back to the centralized firewall systems.
  • VLAN to VPN/VRF Mappings – The ability to map VLANs into VPNs will be explained further throughout this document, but for now the beneficial point to make is that not all network devices need to be MPLS or VPN/VRF aware. Only the core or backbone routers need to be evaluated to run this technology, not all of the edge sites. Some edge routers may need to support VRF-Lite, a scaled-down version of the technology that does not require MPLS. This keeps the initial investment down and also allows a seamless, phased migration approach when transitioning to the new network architecture.
  • Additional Services Supported by an MPLS Backbone – In addition to L3VPNs, there are other services that the backbone network will be able to support as a result of having MPLS enabled. L2VPN technologies such as pseudo-wire and VPLS can also be deployed rather easily once an MPLS core is achieved.
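
The VLAN-to-VPN/VRF mapping mentioned above can be sketched briefly. As a minimal, hypothetical example (all VRF names, VLAN IDs, and addresses are invented for illustration), a VRF-Lite edge router using classic Cisco IOS syntax might map two VLANs into two isolated VRFs like this:

```
! VRF-Lite on an edge router (no MPLS required)
ip vrf CORP
!
ip vrf PUBLIC
!
! One 802.1Q sub-interface per VLAN, each placed in its VRF
interface GigabitEthernet0/1.10
 encapsulation dot1Q 10
 ip vrf forwarding CORP
 ip address 10.10.10.1 255.255.255.0
!
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 ip vrf forwarding PUBLIC
 ip address 192.168.20.1 255.255.255.0
```

Traffic entering VLAN 10 can only be routed within the CORP VRF table, and VLAN 20 only within PUBLIC, even though both ride the same trunk.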

These benefits will be illustrated and substantiated further in the MPLS L3VPN architecture example that is described later in this document. A brief and high-level overview of the MPLS L3VPN technology is described next to ensure conceptual understanding of how traffic is isolated across the shared backbone infrastructure. 

Overview of MPLS L3VPN Technology 
There are several main concepts that network and security managers should understand with L3VPNs.

  1. Routers maintain a separate routing table, called a Virtual Routing and Forwarding (VRF) table, for each VPN to which they are connected; thus, VPN and VRF are used synonymously when describing these isolated routing domains.
  2. Any IP packets that enter the MPLS L3VPN backbone network must enter on either a physical or logical interface that is defined to be within a specific VRF. Once a packet enters on an ingress VRF interface, it can only exit out an egress interface that is in the same VRF. This ensures that traffic is isolated between different VRFs. Examples of logical interfaces are sub-interfaces or VLANs on an 802.1Q tagged port.
  3. The ingress router for an IP packet entering the MPLS L3VPN backbone is called the Provider Edge (PE) router. As part of the intricacies of the technology, MPLS label switched paths (LSPs) are built in a full-mesh topology between all of the PE routers in the backbone network. Therefore, when an IP packet enters through a VRF-defined interface on the backbone network, one IP routing table lookup occurs at that PE router (within that specific VRF table), and the packet is forwarded onto the appropriate LSP and then label-switched through the MPLS network until it exits the associated egress port for its destination. These efficient techniques, a single IP lookup and label-switching packets through the network, make the technology very secure and scalable.
  4. Routers that connect to an L3VPN network are referred to as Customer Edge (CE) routers. Although these routers are referred to as “customer” routers, they can certainly be managed by the same Enterprise network group that manages the MPLS or PE routers. CEs that need to connect to two different VPN subnets on the MPLS core will need to run VRF-Lite. VRF-Lite enables a router to function as a stand-alone virtualized router, following the same principles as L3VPNs: multiple VRFs can be configured, with physical and/or logical interfaces defined within the VRFs (no MPLS required). A CE can also be an L2 switch with just a trunk port to the PE.

All of these concepts are illustrated in Figure 1. CE1 has two LANs that need to communicate across the L3VPN core using two different VPNs. VPN A is colored blue, and VPN B is red. Therefore, CE1 must connect to the MPLS network using either sub-interfaces or a trunked (IEEE 802.1Q tagged) link so that the PE can map these logical connections into the appropriate VPN/VRF. Notice that PE1 is configured with and has knowledge of both VRF tables, while PE2 and PE3 are only configured with the single VRF for which they have a CE connected. When CE1 sends packets destined to CE2 and CE3, PE1 does an IP lookup for these packets using the appropriate VRF table and then forwards the packets onto the correct LSP, which carries them straight out the egress port toward their destination.
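
As a minimal sketch of what PE1's configuration for this scenario might look like (the RDs, route-targets, VLAN IDs, and addresses are hypothetical), using classic Cisco IOS syntax:

```
! Two VRFs defined on PE1
ip vrf VPN-A
 rd 65000:100
 route-target both 65000:100
!
ip vrf VPN-B
 rd 65000:200
 route-target both 65000:200
!
! 802.1Q sub-interfaces toward CE1, one per VPN
interface GigabitEthernet0/0.100
 encapsulation dot1Q 100
 ip vrf forwarding VPN-A
 ip address 172.16.1.1 255.255.255.252
!
interface GigabitEthernet0/0.200
 encapsulation dot1Q 200
 ip vrf forwarding VPN-B
 ip address 172.16.2.1 255.255.255.252
```

The route-target values control which VRF routes are imported and exported via MP-BGP between the PEs, which is what keeps the two VPNs isolated across the shared backbone.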


Figure 1. – MPLS L3VPN Concepts 
Other technologies that can also be deployed along with L3VPNs include the following:

  • Quality of Service (QoS) – For traffic classification, RFC 4594, “Configuration Guidelines for DiffServ Service Classes,” is used as the basis for the traffic classification recommendations in an MPLS L3VPN environment. The 4-, 8-, and 12-class models can be used for traffic classification.
  • Traffic Engineering (TE) and LSP Protection – TE with LSP protection schemes can be configured to support sub-50 ms failover times should a link in the network core fail. TE can also be used to move traffic from over-utilized links to under-utilized links, thus taking advantage of all available resources.
  • Encryption – MPLS does not imply encryption; however, several different encryption methods exist if encryption is required over an L3VPN, including GRE over IPsec, Cisco’s Dynamic Multipoint VPN (DMVPN), and Cisco’s Group Encrypted Transport VPN (GET VPN).
  • Multicast in a VRF – Multicast traffic can be transported across the MPLS backbone on a per-VRF basis. Next-generation multicast, recently deployed by vendors, also contains support for point-to-multipoint LSPs, which employ true label-switched transport similar to the unicast implementation (previous implementations actually used GRE and the global routing table).
  • IPv6 Transport – While enterprises slowly migrate their infrastructure to support IPv6, they can use their existing IPv4 MPLS infrastructure to transport IPv6. MPLS L3VPNs use the IPv6 VPN Provider Edge (6VPE) approach to transport IPv6 over the MPLS network. 6VPE, specified in RFC 4659, “BGP-MPLS IP VPN Extension for IPv6,” enables IPv6 sites to communicate with each other using MPLS LSPs over an MPLS IPv4 core network.

The 6VPE implementation provides scalability with no IPv6 addressing restrictions. This model enables enterprises to deploy IPv6 MPLS VPN service over their existing IPv4 backbone by simply upgrading the PE routers to dual-stack-capable software. 
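A hedged sketch of what enabling 6VPE on a dual-stack PE might look like (the AS number, neighbor address, route-target, and VRF name are hypothetical):

```
! VRF carries an IPv6 address family alongside IPv4
vrf definition VPN-A
 rd 65000:100
 address-family ipv6
  route-target both 65000:100
!
! VPNv6 address family toward the other PE's loopback
router bgp 65000
 neighbor 192.0.2.2 remote-as 65000
 neighbor 192.0.2.2 update-source Loopback0
 !
 address-family vpnv6
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 send-community extended
```

The IPv6 VPN routes are exchanged over the existing IPv4 MP-BGP session and label-switched over the IPv4 core, so the P routers need no IPv6 awareness at all.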
This overview was only intended to cover the major concepts of MPLS L3VPNs. For more detailed technical and protocol information, please refer to RFC 4364, “BGP/MPLS IP Virtual Private Networks.”
Legacy Enterprise Network Architectures 
Many Enterprise networks today look very similar to the topology depicted in Figure 2 below. Two or more physically distinct networks were built to meet different network and security criteria, such as providing services on a Corporate network (a private internal network, only for internal staff use) as well as potentially a Public network (a network open for guest access, primarily for Internet). Also on the corporate side, there are likely some access control lists (ACLs) or Policy-Based Routing (PBR) that have been implemented to facilitate requests from certain groups or application owners that required additional security or access restrictions. These workarounds can lead to increased downtime due to configuration errors and extended troubleshooting timeframes. Common network services such as DNS, DHCP, and SNMP need to be deployed and managed for each separate physical network as well. These techniques are cumbersome to manage and are simply not sophisticated technologies intended to support an Enterprise network architecture design. 
Finally, there could also be several firewalls deployed at remote parts of the network to accommodate partner type connections or to securely protect a remote application user group or small server farm. MPLS L3VPN technology can allow the Enterprise network to alleviate all of these unpleasant practices as will be demonstrated in the following section. 
Figure 2. – Legacy Network Architecture 
MPLS L3VPN Enabled Enterprise Network Architectures 
Figure 3 below shows a high-level drawing of a typical Metropolitan Area Network (MAN) or Campus network design using L3VPN technology. Depending on the overall topology of the network, the MPLS L3VPN enabled backbone can include core, distribution, and even some edge site routers. This depends on which sites need the flexible capabilities of MPLS. As Figure 3 illustrates, a typical Enterprise network will have a contiguous MPLS enabled backbone encompassing core routers at the data center and distribution routers, connecting the remote sites. This enables network virtualization within the heart of the network, resulting in the ability to provide flexible solutions for any number of unique network or security related requirements that are needed in the network today or in the future. 

Figure 3. – Example MPLS L3VPN Architecture for MAN or Campus Network 

In this example, we can see several segmented VPNs that are serving different purposes within the Enterprise design. First, there is a “Corporate VRF” (blue) that is isolating the internal staff and data centers from other entities on the network. The corporate network existed prior to the L3VPN implementation, so there are likely several sites with multiple routers that were already running an internal routing protocol such as OSPF. Therefore, the “Corporate VRF” can be set up to redistribute routes with OSPF at these locations. Notice that the remote sites each have routers running VRF-Lite with 802.1Q trunked uplinks connecting to the distribution or PE routers, thereby allowing a logical path for the remote Corporate LAN segments to be mapped into the “Corporate VRF” at the distribution routers. The corporate network is also connected to centralized firewalls (dubbed the “VRF Meet-Me Point”) that allow the other VRFs to access common network services (DNS, DHCP, SNMP, etc.) that reside there. 
The second VRF to discuss is the “Public VRF” (red). This VRF demonstrates, in contrast to our previous legacy network example, that a separate physical network is not needed when an additional isolated network is required. Here, a “Public VRF” is created for users who need to be allowed guest-type access to the Internet. One topic not yet discussed is that within a VRF itself, several different topologies are supported, including full-mesh, partial-mesh, and hub-and-spoke. In this case, the “Public VRF” can be configured as hub-and-spoke so that the only destination a user on the “Public VRF” can reach is the Internet. The two public sites in this example, Sites 1 and 2, are not able to communicate with one another, as designed. A practical example for this might be a State or Local Government network providing Internet access to its constituents at library sites. A hub-and-spoke design will allow just the Internet access, without enabling library-to-library public access communication. 
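
One common way to realize this hub-and-spoke behavior is with asymmetric route-target import/export policies: spokes export a spoke route-target but import only the hub's, so they learn the Internet gateway's routes without ever learning each other's. A hypothetical sketch (all values invented):

```
! On spoke PEs: export spoke routes, import only hub routes
ip vrf PUBLIC
 rd 65000:201
 route-target export 65000:2
 route-target import 65000:1
!
! On the hub PE (Internet gateway): import all spoke routes
ip vrf PUBLIC-HUB
 rd 65000:200
 route-target export 65000:1
 route-target import 65000:2
```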
The last VRFs to examine in the example are the “Secure-App VRF” (purple) and the “DMZ/Partner VRF” (green). The “Secure-App VRF” is representative of any type of application that has increased security requirements and therefore should be contained in an isolated network. As can be seen from Figure 3, Sites 1 and 3 both have a Secure-App VLAN and will be able to communicate with one another directly. Any other network services or access required from the Corporate network will need to be added to the security policy enabled on the VRF Meet-Me Point firewall(s). The “DMZ/Partner VRF” is representative of a remote site where either an external partner connection or remote internal servers are located requiring additional protection. Enterprise networks seem infamous for having “one-off” requests such as these. Either of these requests in a legacy Enterprise network might lead to an additional firewall being purchased and placed at the remote site, where physical security of the firewall is not ensured. With VRF technology available, these servers or partner connections can be connected to the “DMZ/Partner VRF” and transported back to the centralized VRF Meet-Me Point firewalls, again only allowing access to what is necessary on either side of the connection. 
Just a few more technical notes to reiterate the flexibility of this new Enterprise model:

  • VRFs support overlapping IP address space – Two different VRFs can use the same IPv4 address space. Thus, the Enterprise network will have the capability to transport IP traffic for other agencies or sub-divisions that have their own IT departments and equipment, for a strategic partner where backhauling their traffic makes sense, or potentially even for offering revenue-generating services.
  • Dynamic Routing Protocols between PE and CE – At the edge of the network, many different routing protocols are supported between the PE and CE routers. Depending on the router manufacturer, OSPF, EIGRP, or eBGP can be used if dynamic routing is required. For stub sites, static route injection into a VRF is available as well.
  • Enriched Enterprise Network Features – Since MPLS L3VPN technology has been used in production networks for over a decade now, adopted early by Service Providers and more recently by more and more Enterprise networks, many additional features have been added to support unique Enterprise network requirements. These features include support for Super-backbones and backdoor links. A Super-backbone is essentially a simulated OSPF backbone across a VRF; similar features are supported for EIGRP as well with Cisco. A backdoor link is a redundant WAN link at a remote site that does not connect to the MPLS network and instead goes directly to another site via an IGP such as OSPF. Links such as these can be configured as a primary, secondary, or load-sharing link for the site. These are just a few of several features specific to Enterprise network integration.
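
For example, PE-CE routing for a VRF might be configured with eBGP for a dynamic site and a static route for a stub site, along these lines (AS numbers, VRF names, and addresses are hypothetical):

```
! eBGP toward a CE in the Corporate VRF
router bgp 65000
 address-family ipv4 vrf Corporate
  neighbor 172.16.1.2 remote-as 65010
  neighbor 172.16.1.2 activate
  redistribute connected
!
! Static route injection for a stub site in the Public VRF
ip route vrf Public 0.0.0.0 0.0.0.0 172.16.2.2
```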

MPLS L3VPN is a very mature and scalable technology, as it has had its roots in the service provider realm since the turn of the millennium. Enterprise networks today share an expanding customer base similar to service providers as IP transport becomes more and more prevalent as the fundamental communication protocol within all IT fields. Ashburn Consulting believes strongly in MPLS L3VPN technology in the enterprise network space, as it allows for a flexible, secure, robust, and scalable network architecture that is well positioned for the foreseeable future due to its unique ability to virtualize the network. Ashburn Consulting has successfully designed and deployed this technology for both large and small customers and would be pleased to discuss our experience and possible solutions for your Enterprise network as well.

Cisco XR IOS Code Upgrade

Summary of XR IOS Code Upgrade:

1. Download the correct image tarball from CCO. The image cannot be used as-is: the CCO image tarball for Release 4.3.4 (Cisco recommended) must be downloaded first, and only the relevant Package Installation Envelope (PIE) files or Software Maintenance Upgrades (SMUs) must be taken for use on the router.

2. Before upgrading to the new code, all of the mandatory SMUs required to be active in the current release must be installed. For example, when upgrading from code 4.2.3 to 4.3.4, the required SMUs for the upgrade are:

  • CSCud98419
  • CSCud37351
  • CSCud54093

Here is the table with the mandatory SMUs (see Mandatory SMUs.png). These mandatory SMUs are available in the downloaded tar file.

3. First, copy the SMUs to a USB drive. The USB drive on the ASR 9000 uses the disk1: slot. You should see all of the SMUs on disk1:

RP/0/RSP0/CPU0:XR-2#dir disk1:
Fri May  2 18:08:27.040 est
Directory of disk1:
131616    -rw-  52518619    Tue Dec 18 19:48:48 2012  asr9k-p-4.2.3.CSCud37351.pie
131936    -rw-  40910815    Thu Jan 10 06:52:20 2013  asr9k-p-4.2.3.CSCud54093.pie
132448    -rw-  724868      Fri Feb  1 23:35:02 2013  asr9k-p-4.2.3.CSCud98419.pie

4. Add and activate the required SMUs. By default, they are added and installed to disk0:. All add and install operations should be performed in admin mode.

(admin)#install add disk1:asr9k-p-4.2.3.CSCud37351.pie disk1:asr9k-p-4.2.3.CSCud54093.pie disk1:asr9k-p-4.2.3.CSCud98419.pie active

You can list all of the SMUs on one line, separated by spaces. The installation is performed in the background. If you want to see the install status, issue the command:

show install request

If you actually want to see the full installation process, add “synchronous” at the end of the install request:

install add disk1:asr9k-p-4.2.3.CSCud37351.pie active synchronous

5. After the install finishes, make sure that the SMUs are now active:

(admin)#show install active
Fri May  2 18:28:30.921 est
Secure Domain Router: Owner

  Node 0/RSP0/CPU0 [RP] [SDR: Owner]
    Boot Device: disk0:
    Boot Image: /disk0/asr9k-os-mbi-4.2.3.CSCud54093-1.0.0/0x100000/mbiasr9k-rp.vm
    Active Packages:

6. Commit the install. After the commit, the router will reboot automatically:

   (admin)#install commit

7. After the reboot, log in and add/activate all required PIEs for the new code, 4.3.4. The mini.pie is mandatory and consists of several packages required to run the router. All other packages are for additional features (MPLS, multicast, security, management, etc.).

All of the packages specified in the install command must match the older-version packages already installed on the router. Although it is not possible to upgrade with fewer packages than those already installed, it is OK to install additional packages.

(admin)#install add disk1:asr9k-mini-px.pie-4.3 disk1:asr9k-mpls-px.pie-4.3 active synchronous

You can add and activate multiple PIEs in one command. Just list them separated by spaces.

After the installation completes (this may take 10-30 minutes), the router will reboot automatically.

8. After the reboot, commit the install:

(admin)#install commit

9. Verify that the proper set of PIEs is now active. Old-code PIEs are automatically deactivated:

(admin)#show install

10. Verify the new IOS XR version:

(admin)#show version

11. Verify that all of the nodes are in the “IOS XR RUN” state, SPAs are in the “OK” state, and the Fan Trays and Power Modules are in the “READY” state:

(admin)#show platform

12. Upon a successful upgrade, check the package and image integrity. Any issues found must be repaired using the command “install verify packages repair”.

(admin)#show install verify packages

13. Once the software upgrade or downgrade has been completed, disk space can be recovered by removing any inactive packages that are no longer needed. (If the packages are required at a later time, they can be re-added.)
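
A hedged sketch of this cleanup step, using the IOS XR admin-mode install commands (verify against the release notes for your version):

```
! Review which packages are inactive, then remove them to reclaim disk space
(admin)#show install inactive
(admin)#install remove inactive
```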

Windows Media conversion to stream live to USTREAM

Our client owns old VBrick encoders that can only encode the Windows Media video format. This is a big problem if the video stream is being offered for public viewing, which must support a wide variety of devices (tablets, iPads, iPhones, Android devices, Macs, PCs, etc.). To avoid purchasing at least two new $10k H.264 encoders and having to build a new media server, we are going to use three types of software running on a PC or laptop. Make sure the PC has a fast CPU and at least 4 GB of RAM to run this type of setup.

Here’s an overview of the rally setup shown in the diagram below.


Here’s a diagram that shows traffic flow:


The three applications that will need to run on the PC are:
  1. Windows Media Player or VLC Player (plays the Windows Media stream from the VBrick encoder)
  2. Telestream Desktop Presenter (provides a screen capture of the PC’s display)
  3. USTREAM Producer software (grabs the screen capture from Telestream Desktop Presenter and other display sources like a webcam, IP camera, video device, etc.)
In order to stream outbound to USTREAM and watch USTREAM streams, you have to create the following stateful firewall rules to the Internet:

  • Outgoing TCP destination ports 80 and 443 to any IP (web)

  • Outgoing TCP destination port 1935 to any IP (RTMP – this is used to deliver the stream)
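
If the firewall is a Cisco router or similar, the rules above might be sketched as an outbound ACL along these lines (the interface and inside network are hypothetical; a true stateful firewall would also inspect return traffic rather than rely on the ACL alone):

```
! Permit only web and RTMP traffic outbound from the streaming PC's subnet
ip access-list extended USTREAM-OUT
 permit tcp 192.168.1.0 0.0.0.255 any eq 80
 permit tcp 192.168.1.0 0.0.0.255 any eq 443
 permit tcp 192.168.1.0 0.0.0.255 any eq 1935
 deny   ip any any
!
interface GigabitEthernet0/0
 ip access-group USTREAM-OUT out
```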


1. Make sure that the VBrick encoders are online and have live streaming enabled.

2. From the PC, open either Windows Media Player or VLC Player and play the video URL configured on the VBrick.

3. Once the video is playing, open the Telestream Desktop Presenter software. Choose the source display and the selection portion. In this case it’s just a certain region of the screen, so pick “Select Screen Region” under “Selection”. A small box will show up that can be cropped and resized to the portion of the screen you want to show. Hit “ENTER” to save the cropped region. The software is now doing a live screen capture. You also have the option to capture audio from your PC.

4. Open USTREAM Producer, log in to your USTREAM account, and pick the broadcast channel you have set up in your account. You will need to set up a new “Remote Desktop Presenter” display source. Why Remote Desktop? Because USTREAM Producer needs to connect to the Telestream Desktop Presenter software from Step 3 to pull the screen capture. Give your Remote Desktop source a new name.

5. In USTREAM Producer, add the “Remote Desktop” profile you set up in Step 4 to the Master Layer. Just hit the “monitor icon” and add the remote desktop name or profile you set up in Step 4.

6. Once you see the right thumbnail render from what you are playing in Windows Media Player or VLC Player, highlight the thumbnail in the Master Layer section and then hit the “Stream” button in the upper left-hand corner of the application.

If video streaming to USTREAM pauses from time to time, you can adjust the resolution and traffic rate. Go to Output, then “Output Settings”.

7. Test by pulling up the video webpage from UStream.

Download the USTREAM Producer software here:

Download the Telestream Desktop Presenter from here:

How to encode video and screencasts optimally for the web using Handbrake

What format should you use when you make video for the web? MP4

It is important to optimize your video for web use, especially for our internal use on ACTube. Non-optimized video formats and settings can cause video pauses and stuttering.

Notice that when you upload a video to YouTube or Vimeo, they transcode and prepare the video so that it is web-optimized. There are some very good reasons for self-hosting our videos. One reason is security, for videos that we don’t want to make public.

Handbrake is free software you can use to make web-optimized video for self-hosting. Handbrake does a good job of making sure self-hosted web videos are compatible with virtually all platforms.

Quick steps for optimizing web video using the free Handbrake software

  1. Preferences menu.  De-select the option for “Use iPod/iTunes friendly (.m4v) file extension for MP4”
  2. Main window.
    1. Choose the Source button.  Select the video you want to optimize for the web.
    2. Under Format, choose “MP4 file.”
    3. Choose (enable) the Web Optimized checkbox.
  3. Main window / Video tab.
      1. Under Video Codec, choose “H.264”
  4. Main window / Audio tab.
    1. Under Bitrate, choose 128.
  5. Main window / Advanced tab.
    1. Next to Reference Frames, choose “4.”
  6. Back to Main window / Video tab.
    1. In the Quality section, choose Average bitrate.  Use the following rules of thumb to set bitrate value:
      1. For screencasting / screencast recordings:  use 600
      2. For live action videos / “talking heads”:  use about 800 to 900
    2. Enable the checkbox labeled “2-pass encoding”
  7. Main window:
    1. In the Destination field, choose the folder where you want your converted (transcoded) file to be stored.
  8. In the tool bar, choose “Start.”
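
For batch work, the same settings can be approximated with HandBrakeCLI, the command-line version of Handbrake (flags per HandBrakeCLI’s documented options; the file names are hypothetical, and exact flag spellings may vary by Handbrake version):

```
HandBrakeCLI -i screencast.mov -o screencast.mp4 \
  --format mp4 --optimize \
  --encoder x264 --vb 600 --two-pass \
  --ab 128 --encopts ref=4
```

Here --optimize corresponds to the Web Optimized checkbox, --vb 600 is the screencast average bitrate rule of thumb above (use 800-900 for live action), --two-pass enables 2-pass encoding, --ab 128 sets the audio bitrate, and ref=4 sets the reference frames.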

INC 5000 list of fast growing Companies


Ashburn Consulting has made the Inc. 5000 list of the fast growing companies in the nation!  This is a tremendous accomplishment and I couldn’t be any more proud of each and every one of you that has made this a reality!  We thank all of our dedicated and talented employees for their commitment to excellence to make AC what it has become today!   Grateful, Jim Burris

HTTP Video Streaming

Most video streamed through a corporate network uses the Real Time Messaging Protocol (RTMP).  The key thing to know about RTMP is that it is different from the Hypertext Transfer Protocol (HTTP), the common protocol that is used to deliver Web content to your browser. In other words, a lot of video in your enterprise streams using a protocol that is unlike the one that the vast majority of content in your network uses to get moved around.

Why does this matter? Chances are, if you work in a large enterprise, your network has all kinds of devices and infrastructure to speed up and optimize the flow of information traveling with HTTP. You will have all manner of caching devices, WAN accelerators, and so forth, to enable rapid and efficient transport of HTTP data all over the globe. Can RTMP video take advantage of that infrastructure? It will be tough.

For this reason, if you can enable HTTP video streaming in your enterprise, you can power better delivery of video across your network. This mostly applies to video on demand (VOD). When users in your enterprise need to access VOD material, if it can stream with HTTP, the video content can be cached and optimized throughout the network. This can be more beneficial than other video streaming methods.