
Difference between neutron LBaaS v1 and LBaaS v2?

LBaaS v2 is not a new topic anymore; most customers are switching from LBaaS v1 to LBaaS v2. I have written blog posts in the past about configuring both; in case you missed them, they are located at LBaaSv1 and LBaaSv2.

Still, in Red Hat OpenStack there is no HA functionality for the load balancer itself: if your load balancer service is running on a controller node in an HA setup and that node goes down, we have to fix things manually. There are other articles on the internet that make LBaaS HA work using some workarounds, but I have never tried them.

In this post I am going to show the improvements of LBaaS v2 over LBaaS v1. I will also shed some light on the Octavia project, which can give the load balancing service HA capabilities; basically, it is used for Elastic Load Balancing.

Let’s start with a comparison of LBaaS v1 and LBaaS v2.

LBaaS v1 provides capabilities like:

  • L4 Load balancing
  • Session persistence, including cookie-based
  • Cookie insertion
  • Driver interface for 3rd parties.

Basic flow of a request in LBaaS v1:

Request —> VIP —> Pool [Optional Health Monitor] —> Members [Backend instances]

[Diagram: LBaaS v1 request flow]
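
For reference, this is roughly how such a v1 setup was created with the neutron CLI. This is a minimal sketch; the subnet name, pool name, and member addresses are illustrative, so adjust them to your environment.

    # Create a pool of backend servers
    neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
        --protocol HTTP --subnet-id private-subnet

    # Add backend instances as members of the pool
    neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
    neutron lb-member-create --address 10.0.0.12 --protocol-port 80 web-pool

    # Optional health monitor, associated with the pool
    neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
    neutron lb-healthmonitor-associate <healthmonitor-id> web-pool

    # Create the VIP on top of the pool (one protocol/port per VIP in v1)
    neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
        --subnet-id private-subnet web-pool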

Missing features:

  • L7 content switching [important feature]
  • Multiple TCP ports per load balancer
  • TLS Termination at load balancer to avoid the load on instances.
  • Load balancer running inside instances.

LBaaS v2 was introduced in the Kilo release; at that time it did not have features like L7 policies, pool sharing, and single-call LB creation [creating a load balancer in a single API call]. L7 and single-call creation were added in Liberty, and the pool sharing feature was introduced in Mitaka.

Basic flow of a request in LBaaS v2:

Request —> VIP —> Listeners —> Pool [Optional Health Monitor] —> Members [Backend instances]

[Diagram: LBaaS v2 request flow]
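
Here is a minimal sketch of the same setup with the LBaaS v2 CLI, showing the extra "listener" object in the chain. Again, the names, subnet, and addresses are illustrative.

    # Load balancer owns the VIP
    neutron lbaas-loadbalancer-create --name lb1 private-subnet

    # Listener accepts traffic on a given protocol/port
    neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
        --protocol HTTP --protocol-port 80

    # Pool and members sit behind the listener
    neutron lbaas-pool-create --name pool1 --listener listener1 \
        --protocol HTTP --lb-algorithm ROUND_ROBIN
    neutron lbaas-member-create --subnet private-subnet --address 10.0.0.11 \
        --protocol-port 80 pool1

    # Optional health monitor attached to the pool
    neutron lbaas-healthmonitor-create --type HTTP --delay 5 --timeout 3 \
        --max-retries 3 --pool pool1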

Let’s see what components/changes have been made in LBaaS v2 that make the missing features available in the newer version:

  1. L7 Content switching

Why do we require this feature:

A layer 7 load balancer consists of a listener that accepts requests on behalf of a number of back-end pools and distributes those requests based on policies that use application data to determine which pools should service any given request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one group of back-end servers (pool) can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

This feature is introduced by adding an additional component, the “listener”, to the LBaaS v2 architecture. We can create policies and then attach rules to a policy to get L7 load balancing. A very informative article about L7 content switching is available at link; it covers a lot of practical scenarios.
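
To give an idea of how the policies and rules look in practice, here is a rough sketch with the v2 CLI, continuing the lb1/listener1 example from above. The pool name and the /static path are made up for illustration, and creating a non-default pool with --loadbalancer assumes a release with pool sharing (Mitaka or later).

    # A second pool tuned for static content
    neutron lbaas-pool-create --name static-pool --loadbalancer lb1 \
        --protocol HTTP --lb-algorithm ROUND_ROBIN

    # L7 policy on the listener: redirect matching requests to static-pool
    neutron lbaas-l7policy-create --name static-policy --listener listener1 \
        --action REDIRECT_TO_POOL --redirect-pool static-pool

    # L7 rule: requests whose path starts with /static match the policy
    neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH \
        --value /static static-policy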

2. Multiple TCP ports per load balancer

In LBaaS v1 we could have only one TCP port, such as 80 or 443, associated with the VIP (Virtual IP) at the load balancer; we couldn’t have two ports/protocols associated with a VIP, which means you could load balance either HTTP traffic or HTTPS, not both. This limit has been lifted in LBaaS v2, as we can now have multiple ports associated with a single VIP.

It can be done with pool sharing or without pool sharing.

With pool sharing:

[Diagram: two listeners sharing a single back-end pool]

Without pool sharing:

[Diagram: each listener with its own back-end pool]
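
As a rough sketch, "multiple ports per VIP" simply means creating more than one listener on the same load balancer; whether the listeners point to separate pools or to one shared pool (Mitaka onwards) is up to you. Assuming a fresh load balancer called lb2, with illustrative names:

    # Two listeners on the same load balancer, hence the same VIP
    neutron lbaas-listener-create --name http-listener --loadbalancer lb2 \
        --protocol HTTP --protocol-port 80
    neutron lbaas-listener-create --name https-listener --loadbalancer lb2 \
        --protocol HTTPS --protocol-port 443

    # Without pool sharing: each listener gets its own pool
    neutron lbaas-pool-create --name http-pool --listener http-listener \
        --protocol HTTP --lb-algorithm ROUND_ROBIN
    neutron lbaas-pool-create --name https-pool --listener https-listener \
        --protocol HTTPS --lb-algorithm ROUND_ROBIN
    # With pool sharing (Mitaka+), both listeners could instead point at one shared pool.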

3. TLS Termination at load balancer to avoid the load on instances.

We can have TLS termination at the load balancer level instead of terminating at the backend servers. It reduces the load on the backend servers, and it also makes L7 content switching possible for HTTPS traffic, since termination at the load balancer lets it see the decrypted requests. Barbican containers are used to hold the certificates and keys for termination at the load balancer level.
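
Here is a hedged sketch of what that looks like with the barbican and neutron CLIs. The certificate/key file names are illustrative, the exact barbican commands depend on your client version, and the hrefs in angle brackets stand for the references returned by the previous commands.

    # Store the certificate and key in barbican
    openstack secret store --name web-cert --payload-content-type='text/plain' \
        --payload="$(cat server.crt)"
    openstack secret store --name web-key --payload-content-type='text/plain' \
        --payload="$(cat server.key)"

    # Group them in a certificate container and note its reference
    openstack secret container create --name web-tls --type certificate \
        --secret="certificate=<cert-href>" --secret="private_key=<key-href>"

    # Listener that terminates TLS at the load balancer using that container
    neutron lbaas-listener-create --name tls-listener --loadbalancer lb1 \
        --protocol TERMINATED_HTTPS --protocol-port 443 \
        --default-tls-container-ref <container-href>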

4. Load balancer running inside instances.

I have not seen this implemented without Octavia, which uses “amphora” instances to run the load balancer.

IMP: both load balancer versions can’t be run simultaneously.

As promised at the beginning of the article, let’s see what capabilities “Octavia” adds to LBaaS v2.

Here is the architecture of Octavia:

[Diagram: Octavia architecture]

The Octavia API lacks an authentication facility, hence it accepts API calls from neutron instead of exposing its API directly.

As I mentioned earlier, with Octavia the load balancer runs inside nova instances, hence it needs to communicate with components like nova and neutron to spawn the instances in which the load balancer [haproxy] can run. Okay, what else is required to spawn these instances (a rough sketch of the commands follows the list):

  • Create amphora disk image using OpenStack diskimage-builder.
  • Create a Nova flavor for the amphorae.
  • Add amphora disk image to glance.
  • Tag the above glance disk image with ‘amphora’.
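
A rough sketch of these preparation steps (the image name, flavor sizing, and the ‘amphora’ tag value are illustrative; the tag Octavia actually looks for is whatever amp_image_tag is set to in octavia.conf):

    # Build the amphora image; the script ships with the Octavia source tree
    # and by default produces amphora-x64-haproxy.qcow2
    ./diskimage-create.sh

    # Upload it to glance and tag it so Octavia can find it
    openstack image create --disk-format qcow2 --container-format bare \
        --file amphora-x64-haproxy.qcow2 amphora-x64-haproxy
    openstack image set --tag amphora amphora-x64-haproxy

    # Nova flavor used to boot the amphorae
    openstack flavor create --vcpus 1 --ram 1024 --disk 2 m1.amphora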

But now the amphora instance becomes a single point of failure, and its capacity to handle load is limited. From the Mitaka release onwards we can run a single load balancer replicated across two instances, which run in A/P mode and exchange heartbeats using VRRP. If one instance goes down, the other can take over serving the load balancer service.
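
In the Octavia releases I have looked at, this active/standby behaviour is switched on through the topology option in octavia.conf; a minimal sketch (section and value as I remember them, so verify against your version’s sample config):

    # /etc/octavia/octavia.conf (excerpt)
    [controller_worker]
    # SINGLE = one amphora per load balancer (default)
    # ACTIVE_STANDBY = VRRP pair of amphorae with failover
    loadbalancer_topology = ACTIVE_STANDBY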

So what’s the major advantage of Octavia? Okay, here comes the term Elastic Load Balancing (ELB). Currently a VIP is associated with a single load balancer, a 1:1 relation, but in the case of ELB the relation between VIP and load balancer is 1:N: the VIP distributes the incoming traffic over a pool of “amphora” instances.

In ELB, traffic is distributed at two levels:

  1. VIP to pool of amphora instances.
  2. amphora instances to back-end instances.

We can also use Heat orchestration with Ceilometer alarm functionality to manage the number of instances in the ‘amphora’ pool.

Combining the power of a “pool of amphora instances” and “failover”, we can have a robust N+1 topology in which, if any VM from the pool of amphora instances fails, it is replaced by a standby VM.

 

I hope this article sheds some light on the jargon of the neutron LBaaS world 🙂