LPS Distributed Preview Version

The Distributed Version (Preview) of the LPS Tool introduces support for distributed load testing using a master-slave architecture. This feature enables scaling across multiple nodes, where a master node coordinates the test and workers execute it.

How It Works

The architecture follows a master-slave model, where:

  • Master Node: Aggregates metrics, maintains the global state, and hosts the dashboard.
  • Worker Nodes: Receive the test command locally and wait until the master node is in a Ready or Running state before executing the test.

All communication between master and workers is done over gRPC.

Important: The master node does not send start commands to workers. Instead, the test command must be executed on each worker node. If the master is not yet Ready or Running when a worker starts, the worker registers as an observer and waits until the master transitions to a Ready state before running.
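The tool's actual gRPC protocol is not published here, so the following is only a conceptual sketch of the worker-side wait behavior described above. The names (MasterState, wait_for_master, poll_state) are illustrative assumptions, not the tool's real API; in the real tool the state poll would be a gRPC status call to the master.

```python
from enum import Enum
import time

class MasterState(Enum):
    # Hypothetical state names; the source only mentions Ready and Running.
    PENDING = "Pending"
    READY = "Ready"
    RUNNING = "Running"

def wait_for_master(poll_state, poll_interval=1.0, max_polls=30):
    """Poll the master's state until it is Ready or Running.

    `poll_state` is a callable returning the master's current state
    (in the real tool this would be a gRPC status request).
    Returns True once the master is Ready/Running, False if it never
    reaches that state within max_polls attempts.
    """
    for _ in range(max_polls):
        state = poll_state()
        if state in (MasterState.READY, MasterState.RUNNING):
            return True
        time.sleep(poll_interval)
    return False
```

This mirrors the observer behavior: the worker does not fail if the master is not ready yet; it keeps waiting until the master transitions.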

If the gRPC port is not open on the master (or local) node, the connecting party times out after roughly 21 or 42 seconds because the TCP handshake fails and the OS keeps retransmitting the connection attempt.
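To avoid waiting out that handshake timeout, you can preflight-check the port before starting a worker. This helper is not part of the LPS tool; it is a small sketch using only the Python standard library, with an explicit short timeout instead of the OS default.

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both connection refused and connection timeout.
        return False
```

Usage: check `port_open("192.168.27.196", 8999)` from a worker before launching the test; a False result means the master's gRPC port is unreachable.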

Cluster Configuration

To enable distributed mode, configure the following section in config/lpsSettings.json:

"Cluster": {
  "MasterNodeIP": "192.168.27.196",
  "MasterNodeIsWorker": true,
  "GRPCPort": 8999,
  "ExpectedNumberOfWorkers": 2
}

Settings Description

  • MasterNodeIP: IP address of the master node. Workers use this to send their metrics and status updates.
  • MasterNodeIsWorker: If true, the master node will also run test iterations in addition to orchestrating them.
  • GRPCPort: Port used for gRPC communication. Default is 5001.
  • ExpectedNumberOfWorkers: Currently not used in the preview version.
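Since all four settings live under the Cluster section of config/lpsSettings.json, a small validation step can catch a missing key or an invalid port before a node starts. This loader is not part of the tool; it is a sketch that only assumes the JSON structure shown above.

```python
import json

# The four keys documented in the Settings Description above.
REQUIRED_KEYS = {"MasterNodeIP", "MasterNodeIsWorker", "GRPCPort", "ExpectedNumberOfWorkers"}

def load_cluster_config(path):
    """Load lpsSettings.json and return its validated Cluster section."""
    with open(path) as f:
        settings = json.load(f)
    cluster = settings.get("Cluster")
    if cluster is None:
        raise KeyError("missing 'Cluster' section")
    missing = REQUIRED_KEYS - cluster.keys()
    if missing:
        raise KeyError(f"Cluster section missing keys: {sorted(missing)}")
    if not 0 < cluster["GRPCPort"] < 65536:
        raise ValueError("GRPCPort must be a valid TCP port")
    return cluster
```

Running this against the example configuration above returns the Cluster dictionary with GRPCPort 8999 and two expected workers.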

Notes

  • The dashboard is hosted on the master node, which displays test metrics aggregated from all workers.
  • Workers will not start the test unless the master node is in a Ready or Running state.
  • gRPC ports must be open and accessible between all participating nodes.
  • Test execution must be triggered individually on each node.


CLI Example

Configure a master node with 3 workers:

lps cluster -mip 192.168.1.100 -gp 8999 -ew 3

Configure master node that also acts as a worker:

lps cluster -mip 192.168.1.100 -gp 8999 -ew 2 -miw

Configure with custom gRPC port:

lps cluster -mip 10.0.0.50 -gp 5001 -ew 5

Notes

  • The cluster command configures the Cluster section in the config/lpsSettings.json file.
  • After configuration, run your test on the master node and each worker node using the same test plan.
  • Workers will automatically connect to the master and wait for the test to start.
  • The master node aggregates metrics from all workers and displays them on the dashboard.
  • Ensure the gRPC port is open and accessible between all nodes in the cluster.

Run your test using the CLI command, for example:

lps --url "https://www.example.com" --requestcount 3000

The command must be executed on every node.