Load testing with master-slave Locust lab in AWS

  • On April 2, 2018


 

Introduction

Locust is an open source load testing tool that helps developers find out how many concurrent users a website or system can handle without crashing. Locust runs on Python, so if you have not already done so, install Python and the required packages first. Let's delve into running Locust on all the major platforms, Linux (Ubuntu), Windows and OS X, assuming Locust is already installed on your machine.

 

Getting Started

This article assumes that one has already created an AWS account.

The diagram below shows a simple architecture of two EC2 instances in AWS.

The instances sit behind a load balancer, which intercepts requests from clients and routes them to the different instances. The number of clients is arbitrary, and a single load balancer can serve up to 1,000 instances.

Below are the steps a request goes through:

  • The client sends a request to the AWS load balancer's public address.
  • The load balancer receives the request and internally determines which instance to route it to.
  • The load balancer forwards the request to the chosen instance over the private IP addresses of the load balancer and the instance.
  • The instance processes the request and returns a response to the load balancer.
  • The load balancer sends the response back to the respective client.
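The routing step above can be sketched as a toy round-robin selector. This is only an illustration: the IP addresses are placeholders rather than real AWS values, and the actual balancing algorithm is internal to AWS.

```python
from itertools import cycle

# Hypothetical private IPs of the two EC2 instances behind the balancer.
instances = cycle(["10.0.1.10", "10.0.1.11"])

def route(request):
    # Pick the next instance in rotation, like the balancer alternating targets.
    return next(instances)

targets = [route("req-%d" % i) for i in range(4)]
print(targets)  # alternates between the two instances
```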

 

Illustration 1: High Level Architecture

 

Creating load balancer in AWS

  • Select Load balancer on the EC2 Dashboard

Illustration 2:  EC2 Dashboard

 

Below are links to the details of setting up a load balancer, health checks, etc. on AWS:

– Getting Started: http://docs.aws.amazon.com/
– Migrating your existing load balancer: http://docs.aws.amazon.com/

A few How-Tos that will help you set up your Application load balancer initially.
– Applying Path-Based Routing: http://docs.aws.amazon.com/
– Using ECS Containers as Targets: http://docs.aws.amazon.com/
– Creating an HTTPS Listener: http://docs.aws.amazon.com/
– Creating a Target Group: http://docs.aws.amazon.com/
– Configuring Health Checks: http://docs.aws.amazon.com/

 

 

Setting up the server for testing

 

For this test, we will create a simple application server that will receive the requests from clients. The application server is developed using the Java Vert.x toolkit, but any language can be used.

 

For demonstration purposes the application server will process requests from the following context paths:

  • /ping – This is used by AWS as the health check path. It returns HTTP code 200
  • /sum – This path receives two numbers and returns their sum
  • /product – This path receives two numbers and returns their product
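The /sum and /product handlers read a URL-encoded body such as x=2&y=4. The parsing and arithmetic can be sketched in Python as a simplified stand-in for the server-side logic (not the actual handler code):

```python
def parse_params(body):
    # The handlers split the raw body on '&', then each pair on '=' (e.g. "x=2&y=4").
    pairs = (p.split("=") for p in body.split("&"))
    return {k: float(v) for k, v in pairs}

params = parse_params("x=2&y=4")
print(params["x"] + params["y"])   # what /sum returns
print(params["x"] * params["y"])   # what /product returns
```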

 

Below is the code snippet for the application server:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpMethod;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.ext.web.Route;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

public class Test extends AbstractVerticle {

    Router router;
    Double sum;
    Double product;
    HttpServer server;

    @Override
    public void start() {
        router = Router.router(vertx);
        server = vertx.createHttpServer();
        Route ping = router.route("/ping");
        ping.handler(this::ping);
        Route req = router.route(HttpMethod.POST, "/sum");
        req.handler(this::sum);

        Route pro = router.route(HttpMethod.POST, "/product");
        pro.handler(this::product);

        server.requestHandler(router::accept).listen(8089);
    }

    private void ping(RoutingContext rc) {
        HttpServerResponse res = rc.response();
        res.setStatusCode(200);
        res.end();
        System.out.println("responded with pong");

    }

    private void sum(RoutingContext rc) {
        HttpServerRequest req = rc.request();
        if (req.method() == HttpMethod.POST) {
            req.bodyHandler(body -> {
                // The body arrives URL-encoded, e.g. "x=2&y=4"
                String[] params = body.toString().split("&");
                double x = Double.parseDouble(params[0].split("=")[1]);
                double y = Double.parseDouble(params[1].split("=")[1]);
                sum = x + y;

                HttpServerResponse res = rc.response();
                res.setStatusCode(200);
                res.end(Double.toString(sum));
                System.out.println("Sum operation yielded " + sum);
            });
        }
    }

    private void product(RoutingContext rc) {
        HttpServerRequest req = rc.request();
        req.bodyHandler(body -> {
            String[] params = body.toString().split("&");
            double x = Double.parseDouble(params[0].split("=")[1]);
            double y = Double.parseDouble(params[1].split("=")[1]);
            product = x * y;

            HttpServerResponse res = rc.response();
            res.setStatusCode(200);
            res.end(Double.toString(product));
            System.out.println("Product operation yielded " + product);
        });
    }
}


The code above is compiled, deployed and started on the two AWS instances.

The load balancer sends a ping request to the application on port 8089, context path /ping, to determine the health status of the application.

The application responds with a 200 code and displays the message “responded with pong” on the screen.

The instances are now ready to receive and process requests.

 

Setting up Locust

Locust is an easy-to-use distributed testing tool for load testing web applications.

During testing, the website will be attacked by a swarm of locusts. The behavior of each user (locust) is defined in code, and the swarming process is monitored in real time. This helps establish the bottlenecks in the code before going live.

Locust will run locally on the machine and fire requests at the AWS load balancer. The load balancer should distribute the requests to the two instances which host the application server above.

Prerequisites

Python is installed

 

Windows and Linux

Locust is installed using the following command:

pip install locustio # works on both Windows and Linux

 

 OS X

Install Locust on OS X using Homebrew:

  1. Install Homebrew
  2. Install libev (a dependency of gevent): brew install libev
  3. Install Locust: pip install locustio

 

 

Testing  Locust

Upon successful installation, we can test it by creating the following code, which will be executed during the simulation.

The code below is saved in a file named locustTestFile.py

 

import random

from locust import HttpLocust, TaskSet, task

class UserTester(TaskSet):

    def on_start(self):
        pass

    @task(2)
    def ping(self):
        response = self.client.post('/ping')
        print(response)

    @task(1)
    def sum(self):
        x = random.random()
        y = random.random()
        response = self.client.post('/sum', {'x': str(x), 'y': str(y)})

    @task(3)
    def product(self):
        response = self.client.post('/product', {'x': '2', 'y': '4'})

class User(HttpLocust):
    task_set = UserTester
    min_wait = 5000
    max_wait = 9000

During execution, the functions/methods annotated with the @task decorator are the ones executed against the AWS load balancer.
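The numbers passed to @task are relative weights: with weights 2, 1 and 3, the ping, sum and product tasks are picked roughly 2/6, 1/6 and 3/6 of the time. A quick sketch of how the weights translate into shares:

```python
# Task weights as declared in locustTestFile.py above.
weights = {"ping": 2, "sum": 1, "product": 3}

total = sum(weights.values())
# Fraction of executions each task receives, in the long run.
share = {name: w / float(total) for name, w in weights.items()}
print(share)  # product is picked about half the time
```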

 

Load Testing at Scale with Locust

 

To run the test we use the locust command and specify the DNS name of the AWS load balancer rather than the DNS names of the AWS instances.

The command to run the test is shown below:

 

locust -f locustTestFile.py --host=http://<DNS-of-AWS>:<Load-balancer-listener-port>

 

This will start a process on the local machine.

The web monitoring interface will be started on the default port 8089.

 

 

Open the Web Interface from the browser

 

Illustration 5: Locust Web Page

 

Input the number of users to simulate

Input the hatch rate (users spawned per second)
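As a rule of thumb, the time to reach full load is the number of users divided by the hatch rate; for example, 1,000 users at 100 users per second takes 10 seconds:

```python
def ramp_up_seconds(total_users, hatch_rate):
    # Seconds until all simulated users have been spawned.
    return total_users / float(hatch_rate)

print(ramp_up_seconds(1000, 100))  # 10.0
```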

 

 

Click on the Start swarming button

 

Upon clicking the button, requests will be sent to the AWS load balancer, which in turn distributes the traffic to the different instances.

Below is a screenshot of the instances as they execute the requests from Locust.

 

Illustration 7: AWS Running Instances

 

Below is the screenshot of the Web interface displaying the results of the requests

 

Illustration 8: Locust Web Statistics Page

 

 

 

To run Locust distributed, start one process with the --master flag:

locust -f locustTestFile.py  --master  --host=http://<DNS-of-AWS>:<Load-balancer-listener-port>

and then start an arbitrary number of slave processes.

 

If the slaves are running on different machines, specify the master host when starting each slave.

Below is the command to be issued on the slave machine:

locust -f locustTestFile.py --slave --master-host=<master-host-ip> --host=http://<DNS-of-AWS>:<Load-balancer-listener-port>

The command will spawn processes on the slave, and the results will be forwarded to the master node, where the web interface is running, for aggregated statistics.
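Conceptually, the master sums the per-endpoint request counts reported by each slave. The numbers below are made up for illustration; the real figures come from the running slaves:

```python
from collections import Counter

# Hypothetical per-slave request counts reported back to the master.
slave_reports = [
    {"/ping": 120, "/sum": 60, "/product": 180},
    {"/ping": 115, "/sum": 58, "/product": 171},
]

# The master merges the reports into cluster-wide totals for the web UI.
totals = Counter()
for report in slave_reports:
    totals.update(report)

print(dict(totals))
```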

 

 

Conclusion

Locust stands out from other load testing tools on the market. It is open source, it runs on Python, which is an easy language to pick up, and it can be integrated with other languages.

On the flip side, it is not easy to see the errors that occur during the load test beyond the status codes.

The stats are lost as soon as the requested number of users is reached.

Locust won’t request all the URLs a browser would request when loading a page.

If you have a little knowledge of Python and want to load test your application, Locust is the tool you need.

It is important to remember that if you’re performing load tests against an application living in the cloud, you should first ask your cloud provider for permission using the template they will provide you. The information required usually includes source/destination, the bandwidth you plan to generate, the start and end date/time of the test, as well as two or three emergency contacts. It could take a few days (sometimes up to a few weeks if you first need to comply with your cloud provider’s rules) to get your “go!”, so include this time in your estimates.

The data you need to prepare when submitting a pentest request to AWS (which is pretty much the same as that required when requesting a load test) can be found here.

 

Happy testing!