Stages Examples

Stages let you define multiple concurrency levels within a single round. These examples demonstrate common patterns like ramping up load, natural ramp-down, traffic spikes, startup delays, and staged POST requests.

Example 1: Ramp-Up Load Test

Gradually increase the number of concurrent users across two stages.

name: RampUpTest
rounds:
  - name: RampUpRound
    stages:
      - numberOfClients: 10
        arrivalDelay: 2000
      - numberOfClients: 50
        arrivalDelay: 500
    iterations:
      - name: GetHomePage
        mode: D
        duration: 120
        httpRequest:
          url: https://api.example.com/home
          httpMethod: GET

Run Command:

lps run RampUpTest.yaml

What happens:

  1. Stage 1 — 10 clients, each arriving 2000 ms apart (spawn window: 18 s).
  2. Stage 2 — 50 clients arrive 500 ms apart immediately after stage 1 finishes spawning (spawn window: 24.5 s).

All 60 clients share the same iteration and run concurrently once spawned. Each client runs for 120 seconds and exits when done — no explicit ramp-down stage is needed.
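The spawn-window arithmetic above generalizes: the first client of a stage arrives at that stage's t=0, so n clients spaced d ms apart spawn over (n − 1) × d ms. A quick check in plain Python (nothing LPS-specific):

```python
def spawn_window_ms(number_of_clients, arrival_delay_ms):
    # The first client arrives immediately, so only (n - 1) gaps elapse.
    return (number_of_clients - 1) * arrival_delay_ms

print(spawn_window_ms(10, 2000))  # stage 1: 18000 ms (18 s)
print(spawn_window_ms(50, 500))   # stage 2: 24500 ms (24.5 s)
```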

Example 2: Natural Ramp-Down

Unlike tools such as k6, LPS does not require an explicit stage targeting 0 clients to ramp down. When a client finishes its iteration, it is simply gone. The staggered arrival naturally produces a ramp-down at the end.

name: NaturalRampDown
rounds:
  - name: RampDownRound
    stages:
      - numberOfClients: 100
        arrivalDelay: 200
    iterations:
      - name: FetchData
        mode: D
        duration: 100
        httpRequest:
          url: https://api.example.com/data
          httpMethod: GET

Run Command:

lps run NaturalRampDown.yaml

What happens:

  • 100 clients arrive over ~20 seconds (99 × 200 ms).
  • From t=20 s to t=100 s (~80 s), all 100 clients are running concurrently.
  • From t=100 s to t=120 s, clients finish in the order they started — concurrency naturally ramps from 100 down to 0.

One stage is all you need.
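The timeline above can be sanity-checked with a small simulation. This is a sketch of the arrival/exit model the doc describes — each client in a stage arrives arrivalDelay ms after the previous one and exits after the iteration's duration — not LPS source code; in particular, the handoff rule (a stage starts spawning right after the previous stage's last arrival) is an assumption:

```python
def spawn_schedule(stages, duration_s):
    """(arrival, exit) times in seconds for every client, assuming each
    stage begins spawning right after the previous stage's last arrival."""
    t = 0.0
    events = []
    for stage in stages:
        t += stage.get("startupDelay", 0) / 1000.0
        delay = stage["arrivalDelay"] / 1000.0
        for i in range(stage["numberOfClients"]):
            arrive = t + i * delay
            events.append((arrive, arrive + duration_s))
        t = events[-1][0]  # next stage starts at this stage's last arrival
    return events

def concurrency_at(events, t):
    """Number of clients alive at time t."""
    return sum(1 for arrive, exit_ in events if arrive <= t < exit_)

events = spawn_schedule([{"numberOfClients": 100, "arrivalDelay": 200}],
                        duration_s=100)
print(concurrency_at(events, 50))   # plateau: 100
print(concurrency_at(events, 110))  # ramp-down underway: 49
print(concurrency_at(events, 125))  # everyone gone: 0
```

The middle print shows the natural ramp-down: at t=110 s, the 51 earliest-arriving clients have already finished.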

Example 3: Spike Test

Simulate a sudden traffic spike to test system resilience and recovery.

name: SpikeTest
rounds:
  - name: SpikeRound
    stages:
      - numberOfClients: 5
        arrivalDelay: 1000
      - numberOfClients: 100
        arrivalDelay: 100
      - numberOfClients: 5
        arrivalDelay: 1000
    iterations:
      - name: HealthCheck
        mode: D
        duration: 60
        httpRequest:
          url: https://api.example.com/health
          httpMethod: GET

Run Command:

lps run SpikeTest.yaml

What happens: A baseline of 5 clients, then a burst of 100 clients arriving rapidly, then 5 more trailing clients. Because each client exits after its 60 s duration, the spike subsides naturally.
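Because every client lives for 60 s and all arrivals fit inside that window, the whole spike stacks up before anyone exits. A rough, self-contained check (assuming each stage starts spawning right after the previous stage's last arrival — an assumption about LPS semantics, not documented behavior):

```python
# Arrival times (seconds) for the three stages above:
# 5 clients @ 1000 ms, 100 @ 100 ms, 5 @ 1000 ms.
arrivals = []
t = 0.0
for n, delay_ms in [(5, 1000), (100, 100), (5, 1000)]:
    arrivals += [t + i * delay_ms / 1000 for i in range(n)]
    t = arrivals[-1]

duration = 60  # seconds each client stays alive
peak = max(sum(1 for a in arrivals if a <= x < a + duration)
           for x in arrivals)
print(len(arrivals), round(arrivals[-1], 1), peak)  # 110 17.9 110
```

All 110 clients are briefly alive at once: the last arrival lands at ~17.9 s, well before the first exit at 60 s.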

Example 4: Startup Delay Between Stages

Use startupDelay to insert a pause before a stage begins spawning. This is useful when you want a quiet period between stages.

name: DelayedRampUp
rounds:
  - name: DelayedRound
    stages:
      - numberOfClients: 10
        arrivalDelay: 1000
      - numberOfClients: 50
        arrivalDelay: 500
        startupDelay: 10000
    iterations:
      - name: GetHomePage
        mode: D
        duration: 120
        httpRequest:
          url: https://api.example.com/home
          httpMethod: GET

Run Command:

lps run DelayedRampUp.yaml

What happens:

  1. Stage 1 — 10 clients arrive 1000 ms apart.
  2. A 10000 ms pause after stage 1 finishes spawning.
  3. Stage 2 — 50 clients arrive 500 ms apart.
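The resulting timeline in milliseconds, assuming the startupDelay pause is counted from stage 1's last arrival (an assumption — verify against your LPS version):

```python
stage1_last_arrival_ms = (10 - 1) * 1000          # 9000 ms
stage2_start_ms = stage1_last_arrival_ms + 10000  # pause ends at 19000 ms
stage2_last_arrival_ms = stage2_start_ms + (50 - 1) * 500
print(stage2_start_ms, stage2_last_arrival_ms)    # 19000 43500
```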

Example 5: Staged POST Requests

Stages work with any HTTP method and iteration mode.

name: StagedPostTest
rounds:
  - name: StagedPostRound
    stages:
      - numberOfClients: 5
        arrivalDelay: 2000
      - numberOfClients: 20
        arrivalDelay: 500
    iterations:
      - name: CreateOrder
        mode: R
        requestCount: 50
        httpRequest:
          url: https://api.example.com/orders
          httpMethod: POST
          payload:
            raw: '{"item":"widget","quantity":1}'

Run Command:

lps run StagedPostTest.yaml

What happens: 5 clients start posting orders, then 20 more join with a faster arrival rate — all sharing the same POST iteration.
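In mode R a client stops after a fixed number of requests rather than after a fixed duration, so total request volume is easy to predict (assuming requestCount applies per client — an assumption worth confirming for your LPS version):

```python
# Clients from both stages, each sending requestCount POSTs.
total_requests = (5 + 20) * 50
print(total_requests)  # 1250
```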