Over the past year I have been fortunate enough to work on several Cisco SD-WAN (formerly Viptela) deployments. These projects have ranged from small three- or four-site implementations here in the Bay Area, right through to large-scale international rollouts incorporating hundreds of sites spread out across the globe, with regional POPs providing branch services and backbone connectivity.
My role on these projects has spanned both the initial design and the implementation phases, which I believe has given me good insight into, and understanding of, this latest technology to join the Cisco family of products.
My hope for this blog series is to capture and share the knowledge I have acquired over the past year while working with Cisco’s SD-WAN technology, and hopefully give you, our loyal LookingPoint blog readers, a peek under the hood to reveal just how this mix of traditional and new protocols comes together to build our SD-WAN fabric. So… no marketing slides in this series, you will be pleased to hear.
Below is my current plan for topics that I would like to cover during this series. Please feel free to reach out in the comments section below if there are any other topics you would like me to cover.
But before we dive into the deep end, I would like to use this initial post to introduce you to the components and the new terminology that we will need to be familiar with as we navigate this new technology together.
The Management/Orchestration and Control plane components can be either cloud hosted or deployed on-premise. Most customers (95% seems to be a number I hear quoted a lot) choose the Cisco cloud hosted route. This option makes a lot of sense as it speeds up deployments, does not cost any additional money and saves those precious on-premise resources. vManage can be particularly resource hungry. All of the customers I have worked with have gone with the cloud hosted option, but it’s always good to know that you have options should you work in an environment with stringent security requirements or that is resistant to all things cloudy.
vManage is the network management system (NMS) and thus your window into the system. It is the dashboard that you will interact with daily. If you are familiar with the Meraki dashboard you can very much think of vManage in the same light. It is responsible for collecting network telemetry from our vEdge devices and alerting on events and outages in the SD-WAN environment. It is also the location where you will build your device configurations (Device Templates) and overlay traffic engineering policies.
vManage is also the programmatic interface into the system, exposing a REST API.
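To give a feel for that REST API, here is a minimal sketch of logging in to vManage and pulling the device list. The hostname and credentials are placeholders for your own environment, and certificate verification is disabled on the assumption of a lab-style self-signed certificate; harden both before any production use.

```python
# Minimal sketch: authenticate to the vManage REST API and fetch the
# device inventory. Hostname/credentials are placeholder assumptions.
import json
import ssl
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar


def base_url(host: str) -> str:
    return f"https://{host}"


def make_opener() -> urllib.request.OpenerDirector:
    # Keep the session cookie vManage hands back after login, and skip
    # certificate verification (lab-only assumption, not for production).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return urllib.request.build_opener(
        urllib.request.HTTPSHandler(context=ctx),
        urllib.request.HTTPCookieProcessor(CookieJar()),
    )


def get_devices(host: str, username: str, password: str) -> list:
    opener = make_opener()
    # vManage uses form-based login at /j_security_check
    creds = urllib.parse.urlencode(
        {"j_username": username, "j_password": password}
    ).encode()
    opener.open(f"{base_url(host)}/j_security_check", data=creds)
    # /dataservice/device returns one JSON record per vEdge/controller
    with opener.open(f"{base_url(host)}/dataservice/device") as resp:
        return json.loads(resp.read())["data"]


# Example usage against a real or lab vManage instance:
# for dev in get_devices("vmanage.example.com", "admin", "admin"):
#     print(dev["host-name"], dev["reachability"])
```

The same session-cookie pattern works for the rest of the API, so once you are authenticated you can point the opener at other `/dataservice/...` endpoints in exactly the same way.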
On-Premise deployments can be hosted on either ESXi or KVM hypervisors, with even the smallest footprint requiring a minimum of 16 vCPUs, 32GB of dedicated RAM and 500GB of storage. Now you can see why the cloud hosted option is so appealing. A single vManage instance can support up to 2,000 devices and can be deployed as part of a cluster containing 6 instances.
vBond is considered the orchestrator of the system, and for good reason. Its job is to orchestrate connectivity between all the other components in the system. In other words, it tells our vEdges where and how to connect to our organization's vManage and vSmart controllers, while also advising our vSmart controllers as new vEdges join the SD-WAN fabric. It also serves the role of informing our vEdges if they are behind a NAT device, which facilitates IPsec NAT traversal and allows Authentication Header security to be applied to our data plane tunnels (more on that in upcoming posts).
vBond is the first point of contact and thus our first point of authentication for all SD-WAN components as they boot up and join the SD-WAN fabric.
On-Premise deployments can be hosted on either ESXi or KVM hypervisors. The service can also be run as an agent service on one of your vEdge hardware appliances (although this is strongly discouraged). Each vBond requires a dedicated public IP address.
vSmart, as the name implies, is the brains of the system. This is the device that constitutes the control plane component of the architecture. vSmart controllers advertise routing, data plane policies, and security information. They are positioned as hub devices in the control plane topology, with all vEdges peering with a vSmart (vEdges never form control plane peerings between each other). If you are familiar with BGP route reflectors or DMVPN NHRP servers, then you can liken vSmarts to them, although, as noted above, they never insert themselves into the data plane and they advertise a lot more than just standard reachability information.
vEdge is the software or hardware component that sits at your sites. In fact, if you choose a cloud hosted control/management plane deployment, this is the only component of the architecture that you will need to deploy. vEdges are responsible for the data plane of the SD-WAN fabric, as they bring up IPsec or GRE tunnels between your sites. As mentioned above, vEdges form control plane connections with vSmart controllers, and not between each other.
vEdge hardware comes in many form factors. You have the 100, 1000, 2000 and 5000 models. The main differences are greater interface choice, higher supported throughput, and more data plane tunnels as the model number increases. With the Cisco integration you can now utilize Cisco’s ASR1K, ISR4K and ISR1K router platforms, along with the ENCS, to perform this SD-WAN role.
Okay, that about wraps it up for our SD-WAN components. Please stay tuned for the next installment of this SD-WAN series, where I will be introducing you to our lab topology and breaking down the SD-WAN initialization sequence of events.
Until then, may your overlay and underlay routing never leak, and your WAN bandwidth be plentiful and lossless.
Written By: Chris Marshall, LookingPoint Senior Solutions Architect - CCIE #29940
Check out our awesome tech talk on SD-WAN:
If you are interested in having LookingPoint install SD-WAN into your network, feel free to contact us here!