What is VCS?
Virtualized Cloud Services (VCS) is a datacenter-based SDN platform provided by Nuage Networks. VCS provides an overlay solution that interconnects bare metal assets and virtual machines within your network. The VCS platform is built upon the following components:
- VSD (Management Plane)
- VSC (Control Plane)
- VRS (Data Plane)
Figure 1 - VCS Components.
Management Plane (VSD)
VSD (Virtualized Services Directory) is the management plane component. It holds the policies and service templates, and provides RESTful and WebUI (VSD Architect) northbound interfaces. Southbound, it communicates with and pushes its policies down to the controller (VSC) using XMPP (Extensible Messaging and Presence Protocol).
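To illustrate the northbound REST interface, the sketch below builds (but does not send) the initial authentication request. The host, port, API version, and header names are assumptions based on a typical VSD deployment and should be checked against your installation's API documentation.

```python
import base64

# Hypothetical VSD endpoint -- host, port, and API version are illustrative.
VSD_URL = "https://vsd.example.com:8443/nuage/api/v5_0"

def build_login_request(user: str, password: str, enterprise: str):
    """Build the initial authentication request (returned, not sent).

    The VSD REST API is assumed here to use HTTP Basic auth plus an
    X-Nuage-Organization header; the response to this call would carry
    an API key used to authenticate subsequent requests.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "X-Nuage-Organization": enterprise,
        "Content-Type": "application/json",
    }
    return f"{VSD_URL}/me", headers

url, headers = build_login_request("csproot", "password", "csp")
```

In a real deployment you would send this request over HTTPS and reuse the returned API key for all subsequent calls.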
VSD can be installed as a VM on either KVM or VMware hypervisors. There are 2 types of install:
- Standalone deployment - All VSD components are deployed onto a single server (physical or VM). Recommended only for development or lab purposes.
- Cluster deployment - Three VSD instances are deployed to provide management plane redundancy, resulting in an XMPP and MySQL cluster being formed. Furthermore, an external load balancer is required (such as HAProxy or F5) so that traffic can be distributed equally across the VSD nodes.
VSD also provides optional statistics collection, based on Elasticsearch. Once enabled, it can be deployed as a single statistics node or as a 3-node cluster.
Within VSD, domain templates are created (and instantiated). These are essentially network policies that define how you want the network to look. These policies (templates) are built from a number of components, such as Domains, Zones, Subnets, and vPorts, as shown below:
- Domain - A logical network that provides Layer 2 or Layer 3 communication between VMs. An L3 domain can be thought of as a VPRN or VRF instance.
- Zone - Defined from within the domain, a zone is a logical grouping of subnets that a common policy can be applied to.
- Subnet - Defined within a zone, and synonymous with an L2 broadcast domain.
- vPort - Associated with a given subnet, the vPort provides the link between the subnet and an endpoint (e.g. a VM).
Figure 2 - Domain Components.
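The hierarchy above can be sketched as a minimal data model. The names and attributes here are purely illustrative; the real VSD objects carry many more fields (IDs, ACLs, DHCP options, and so on).

```python
from dataclasses import dataclass, field

# Illustrative sketch of the VSD policy hierarchy: Domain > Zone > Subnet > vPort.

@dataclass
class VPort:
    name: str
    endpoint: str          # the attached endpoint, e.g. a VM interface

@dataclass
class Subnet:
    name: str
    cidr: str              # one L2 broadcast domain
    vports: list = field(default_factory=list)

@dataclass
class Zone:
    name: str              # grouping of subnets sharing a common policy
    subnets: list = field(default_factory=list)

@dataclass
class Domain:
    name: str              # comparable to a VPRN/VRF instance
    zones: list = field(default_factory=list)

# A minimal L3 domain: one zone, one subnet, one vPort towards a VM.
web = Subnet("web", "10.0.1.0/24", [VPort("vport-1", "vm-web-01")])
domain = Domain("prod", [Zone("dmz", [web])])
```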
There are 2 models that describe the interaction between the CMS (Cloud Management System) and VSD - push and pull.
Each model is viewed from the standpoint of the CMS:
- Push - Network configuration is performed on the CMS and then transferred to the VSD via API. The VSD still manages the network via the VSC and VRS agents.
- Pull - Network configuration is performed on the VSD. The VSD pushes the configuration southbound to the network.
Control Plane (VSC)
The control plane layer of VCS is provided by the VSC (Virtualized Services Controller).
The VSC is a virtual machine (VMware or KVM), powered by the Service Router OS (SROS).
The VSC receives the network policy/service from the VSD via XMPP. The FIBs are calculated and pushed down to the VRS via OpenFlow. In addition, BGP (EVPN/VPNv4) is used to advertise IP/MAC information to other VSCs and/or DC routers.
Northbound - VSD to VSC
Communication between the VSC and VSD is performed via the eXtensible Messaging and Presence Protocol (XMPP). XMPP, an XML-based protocol, allows configuration and policy distribution between the XMPP server (VSD) and the XMPP clients (VSCs).
- VSD to VSC - XMPP is used to push down the policies associated with a new VM.
- VSC to VSD - XMPP is used to send notifications (such as VM events) from the VSCs up to the VSD.
By default, XMPP is unencrypted. TLS encryption can be enabled to secure communication between the VSD and VSC. The XMPP server daemon (ejabberd) provides three encryption modes:
- Clear - Only unencrypted sessions are accepted.
- Allow - Unencrypted and encrypted sessions are accepted.
- Require - Only TLS encrypted sessions are accepted.
Note that once you are in Allow or Require mode, you cannot change the mode back to Clear.
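That downgrade restriction can be expressed as a small transition check. This is a sketch of the rule as described, not ejabberd's actual configuration interface, and it encodes only the documented "no way back to Clear" behaviour.

```python
# Sketch of the XMPP TLS mode rule: once the server runs in Allow or
# Require mode, it cannot be moved back to Clear. Mode names mirror the
# three ejabberd options; the function itself is purely illustrative.

MODES = ("clear", "allow", "require")

def can_transition(current: str, new: str) -> bool:
    if current not in MODES or new not in MODES:
        raise ValueError("unknown mode")
    # The documented restriction: no downgrade back to clear.
    return not (new == "clear" and current != "clear")
```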
Southbound - VSC to VRS
Once the VSC has calculated the L2/L3 FIBs, OpenFlow is used to push the FIBs (flow tables) down to the data plane (VRS). OpenFlow is also used by the data plane to notify the VSC of any VM updates on the hypervisor.
Furthermore, OVSDB is used by the VSC to push VLAN, VXLAN, and port configuration down to each of the VRSs.
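A greatly simplified model of the controller-to-data-plane interaction is shown below. Real OpenFlow carries binary match/action structures over a TCP session; here a flow is reduced to a mapping from destination MAC to a forwarding action, and all the names are illustrative.

```python
# Simplified model of the VSC programming a VRS flow table via OpenFlow.

class VrsFlowTable:
    def __init__(self):
        self.flows = {}        # dst MAC -> forwarding action

    def push_flow(self, dst_mac, action):
        """Controller pushes/updates a single FIB entry."""
        self.flows[dst_mac] = action

    def lookup(self, dst_mac):
        """Data-plane lookup; unknown destinations are punted upstream."""
        return self.flows.get(dst_mac, "punt-to-controller")

table = VrsFlowTable()
# Hypothetical entry: reach this MAC via a VXLAN tunnel to a remote VTEP.
table.push_flow("00:11:22:33:44:55", "vxlan:vtep=192.0.2.10,vni=1234")
```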
East-West - VSC to VSC/DC Router
East-West control plane communication is based on BGP. Both BGP EVPN and VPN-IPv4 address families are supported. BGP peering is performed between:
- VSC to VSC - Also known as a VSC Federation. This allows additional VSCs to be added, so that the control plane can be scaled horizontally.
- VSC to DC Router - This allows the VSC to peer and communicate with other DC network devices, in order to integrate the Nuage domains out to existing WANs/VPNs.
Data Plane (VRS)
The data plane layer is provided by the VRS (Virtual Routing and Switching).
The VRS receives the FIB from the control plane via OpenFlow. It then forms overlay tunnels (VXLAN or GRE), which it uses to send/receive VM traffic to/from other VRSs (VMs).
VRS is built upon 2 main components:
- VRS Agent - Communicates with the VSC via OpenFlow, provides local ARP and DHCP reply agents, and programs the OVS with the L2/L3 FIBs learned from the VSC. Furthermore, the VRS agent listens for VM updates (creation, deletion, etc.) via the virtualization APIs (such as libvirtd); these updates are then sent to the VSC (controller).
- OVS (Open vSwitch) - OVS is an Open Source multilayer switch. Some of its key features include support for 802.1Q VLAN tagging, LACP, QoS, and various tunneling protocols (such as GRE and VXLAN). In addition, OVS supports OpenFlow. This allows the flow of packets to be programmatically configured via the use of a controller.
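The VRS agent's local ARP responder can be sketched as follows. The controller pre-populates an IP-to-MAC table for the VMs in the domain, so ARP requests from local VMs are answered on the hypervisor instead of being flooded. Class and method names are illustrative.

```python
# Sketch of the VRS agent's local ARP responder.

class ArpResponder:
    def __init__(self):
        self.table = {}                      # IP -> MAC, fed by the VSC

    def learn(self, ip, mac):
        """Controller-driven population of the responder table."""
        self.table[ip] = mac

    def handle_request(self, target_ip):
        """Answer an ARP request locally if the binding is known."""
        mac = self.table.get(target_ip)
        if mac is not None:
            return ("reply", mac)            # answered locally, no flooding
        return ("miss", None)                # in practice, resolved via the controller

responder = ArpResponder()
responder.learn("10.0.1.20", "00:aa:bb:cc:dd:ee")
```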
For VMware-based CMSs (Cloud Management Systems) the VRS-V is provided. This is a VM containing the VRS components above, which is deployed to each ESXi host within your domain.
The accelerated VRS (AVRS) was introduced in VSP 5.0 and provides DPDK support. DPDK is a set of libraries and techniques for optimizing packet throughput. This includes kernel bypass and Poll Mode Drivers (PMDs) to eliminate the traditional interrupt-driven ring buffer drivers, along with NUMA and hugepage libraries to improve memory allocation/access times.
Note: At time of writing the AVRS is only supported on KVM based hypervisors.
SR-IOV is another technology that optimizes packet throughput. With SR-IOV (Single Root I/O Virtualization), Virtual Functions (VFs) are created that act like a separate physical NIC for each VM. Each VF then performs DMA transfer of packets from the NIC to userspace within the VM, bypassing the virtual switch and providing interrupt-free operation, resulting in high-speed packet processing.
Nuage introduced SR-IOV support in VSP 5.0. As the Nuage website states:
We have added automated topology discovery, configuration, and management of SRIOV-enabled server ports. The functionality enables automated stitching of SRIOV ports to switch ports via VLANs.
Due to the nature of SR-IOV, the virtual switch (in our case, the VRS) is bypassed. However, VSP 5.0 brings early support for Smart NICs.
Flood and Learn
Each VRS provides DHCP and ARP responders, which are populated with the various VM information via the controller. This means that DHCP/ARP broadcasts never have to be sent across the network.
Because of this, flood and learn in VCS is unicast-based. This simply means that if a broadcast from a VM were to make it to the VTEP, the VTEP would perform head-end replication (sending multiple unicast copies of the packet out to each of the other VTEPs).
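Head-end replication itself is simple to sketch: a broadcast arriving at the local VTEP becomes one unicast copy per remote VTEP, rather than relying on underlay multicast. The VTEP addresses below are illustrative.

```python
# Sketch of head-end replication at a VTEP: one unicast copy per remote peer.

def head_end_replicate(frame: bytes, local_vtep: str, all_vteps: list) -> list:
    """Return (remote_vtep, frame) pairs -- one unicast copy per peer."""
    return [(vtep, frame) for vtep in all_vteps if vtep != local_vtep]

# A broadcast frame (all-ones destination MAC) arriving at VTEP 192.0.2.1
# is replicated towards the two remaining VTEPs in the domain.
copies = head_end_replicate(b"\xff" * 6 + b"payload",
                            "192.0.2.1",
                            ["192.0.2.1", "192.0.2.2", "192.0.2.3"])
```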
Virtualized Services Gateway (VSG)
The purpose of the VSG is to provide bare metal devices with the ability to participate in the SDN overlays and the Nuage virtual domains. The VSG supports a range of Layer 2 and Layer 3 encapsulation methods, such as VXLAN, GRE, and VLAN. In addition, it can act as a VXLAN gateway.
The VSG types are:
Software (VRS-G)
Used for small DC deployments, and should only be used to control a small number of bare metal servers. The software-based gateway is provided by the VRS-G, which offers 2 port types:
- Network - Port towards the Nuage - VXLAN - overlay.
- Access - Port towards the legacy - VLAN - network.
Hardware (VSG)
Used within large DCs, providing high performance based on hardware acceleration. The hardware VSG runs an inbuilt VSC, therefore it communicates directly with the VSD via XMPP. There are 2 models of the VSG:
- VSG (7850) - Hardware based appliance, providing 10Gb and 40Gb interfaces.
- WBX210 (7950) - Hardware based appliance, providing greater performance over the VSG. Offers 10GbE, 25GbE, 40GbE, 50GbE and 100GbE interfaces.
Furthermore, as these are full-blown switches (i.e. they run SROS), in addition to acting as gateways they can also be used as the spine and leaf switches.
3rd Party
Provides the ability to use 3rd party vendor gateways. Support includes (L2/L3 depends upon model/vendor):
- Arista 7050TX-96.
- Dell S4k/S6k.
- Cumulus VX3.0.1.
- HP A5930.
Both the VRS-G and the VSG provide their own form of redundancy, as shown below:
- VRS-G - Redundancy is provided in the form of redundancy groups. This forms a VRS-G cluster in which the interfaces are monitored. In the event of a failure, the secondary unit is promoted and processes the traffic.
- VSG - With the VSG, multi-chassis LAG (Link Aggregation) is used. This allows both units to synchronize MACs and ARPs, in turn allowing connecting devices to run Active/Active LAG ports across both units in order to provide redundancy.
Data Center Interconnect Gateway (DCI-GW)
The Data Center Interconnect Gateway extends connectivity from the Nuage domains out to the customer's WAN/VPN networks. In other words, the DCI gateway provides DC-WAN integration at both the control and data plane.
There are 4 service models available. They are:
- Layer 2 Service Model - The Layer 2 domain is extended up to the DC-WAN (VPLS). This service uses BGP/EVPN between VSC and the DC GW, along with VXLAN overlays between DC-GW and Hypervisor VRS's.
- Layer 2 to Layer 3 Service Binding - The Layer 2 domain is extended up to the DC GW, however, the Layer 2 domain is terminated, and an IRB interface is used. This service uses BGP/EVPN between VSC and the DC GW, along with VXLAN overlays between DC-GW and Hypervisor VRS's.
- Layer 3 Service Model (GRE) - Layer 3 domains are extended up to the DC GW. Used in environments where the DC gateway does not support BGP EVPN. Instead, BGP VPN-IPv4 is used between the VSC and the DC GW, and GRE is used as the encapsulation protocol.
- Layer 3 Service Model (VXLAN) - Layer 3 domains are extended up to the DC GW. This is built via BGP/EVPN between VSC and the DC GW, along with the use of VXLAN overlays between DC-GW and Hypervisor VRS's.
Based on these 4 models, there are 2 types of instantiation:
With manual instantiation, services are manually configured on the DC gateway. The DC GW then communicates with the VSC (control plane), with no operation or communication to/from the VSD (management plane).
Note: The reference DC gateway is the Alcatel-Lucent 7750 Service Router.
The SR7750 runs the same operating system as the VSC - SROS. However, with manual instantiation, 3rd party devices (such as Cisco and Juniper) can also be used.
With auto-instantiation, the DC GW and the VSD communicate via XMPP. Basic configuration parameters are still configured manually; however, the VPLS/VPRN details are pushed down from the VSD to the DC GW via XMPP.
Note: Auto-instantiation can only be performed on the SR7750.
7750 vs VSG?
You may be asking: why would you use a 7750 over a VSG or WBX device? Good question. In short, it comes down to the datapath. The VSG is DC-optimized and has fixed configuration components (i.e. MPLS RSVP/LDP disabled), resulting in lower latency and higher port density than the SR7750.
"Nuage Networks VSP Continued Evolution – Release 5.0 – Pt. 3 ...." 9 Mar. 2018, http://www.nuagenetworks.net/blog/nuage-networks-vsp-continued-evolution-release-5-0-pt-3/. Accessed 24 Apr. 2018. ↩︎