Dashboard in LPS Tool

The LPS Tool features a modern and intuitive dashboard designed to help users efficiently monitor and analyze essential metrics for their testing endpoints. This built-in dashboard provides a user-friendly interface for real-time insights into testing performance.

Key Features

  1. Real-Time Monitoring: The dashboard updates dynamically to reflect the current state of testing, displaying essential metrics in an organized and accessible way.
  2. Comprehensive Metrics: The dashboard tracks and displays a variety of critical metrics to help users evaluate the performance of their endpoints.
  3. Built-In Configuration: The dashboard can be enabled or disabled through the global configuration file (config/lpsSettings.json). This flexibility is particularly useful when integrating with DevOps tools or custom workflows.
  4. REST API Support: Even when the built-in dashboard is disabled, the REST API remains functional, allowing users to fetch metrics programmatically.
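Because the REST API stays up even when the UI is off, a pipeline step can poll metrics over plain HTTP. A minimal sketch, assuming a hypothetical /api/metrics route and payload shape (consult your LPS REST API reference for the actual route and schema; the port matches the default in lpsSettings.json):

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint -- the real route and payload shape may differ.
METRICS_URL = "http://localhost:8009/api/metrics"  # default port from lpsSettings.json

def fetch_metrics(url: str = METRICS_URL) -> dict:
    """Fetch the latest metrics snapshot from the LPS REST API."""
    with urlopen(url, timeout=5) as resp:
        return json.load(resp)

def check_sla(metrics: dict, p95_limit_ms: float = 500.0) -> bool:
    """Example pipeline gate: pass only if TotalTime P95 is under the SLA."""
    return metrics["TotalTime"]["P95"] < p95_limit_ms

# Offline demo with a sample payload (shape is illustrative only):
sample = {"TotalTime": {"P95": 412.0}, "RequestsCount": 50000}
print(check_sla(sample))  # True: 412 ms is under the 500 ms SLA
```

In a CI job, a failed `check_sla` result would typically fail the build step.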

Real-Time Monitoring

The LPS Dashboard provides two views for monitoring your load test: Windowed and Cumulative.

Windowed View

The Windowed view shows you what's happening right now. Every few seconds, it receives a snapshot of your test performance and displays it on a live chart.

This is your go-to view while a test is running. If latency suddenly spikes or errors start appearing, you'll see it immediately. The charts let you pinpoint exactly when performance changed.

How It Works

LPS divides your test into time windows (for example, 5 seconds each). At the end of each window, it calculates metrics using only the requests from that window, then resets and starts fresh. These snapshots are pushed to the dashboard in real-time.

Each point on the chart represents one window. If a window captured 250 requests, the P95 shown is the 95th percentile of those 250 requests alone.

Why This Helps Detect Anomalies

Because each window contains a small sample, any performance change stands out immediately.

Consider a test running at 100 requests per second. In a 5-second window, you have 500 requests. If the server slows down and 50 of those requests take 3 seconds instead of 100ms, your P90 for that window will jump noticeably.

In contrast, if you calculated P90 across 50,000 total requests, those 50 slow requests would barely move the number. The problem would be hidden in the aggregate.

Windowed metrics are sensitive by design, which helps you pinpoint exactly when performance degraded.
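The contrast above can be sketched numerically. This is a minimal illustration using P95 and a nearest-rank percentile definition; the tool's exact percentile method may differ, but the effect is the same:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value at position ceil(p/100 * n) when sorted."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

# One 5-second window at 100 req/s: 450 fast requests (100 ms) and 50 slow (3000 ms).
window = [100.0] * 450 + [3000.0] * 50
# The cumulative view: 50,000 requests total, containing the same 50 slow ones.
cumulative = [100.0] * 49_950 + [3000.0] * 50

print(percentile(window, 95))      # 3000.0 -- the slowdown dominates the window
print(percentile(cumulative, 95))  # 100.0  -- the same 50 requests vanish in bulk
```

The 50 slow requests are 10% of the window but only 0.1% of the cumulative sample, which is why the windowed percentile reacts and the cumulative one barely moves.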

Cumulative View

The Cumulative view shows the overall performance since your test started. It calculates metrics across all requests, giving you the final numbers you need for reports and SLA validation.

Once your test completes, this view tells you whether you met your performance targets.

Metrics Displayed in the Dashboard

Time Metrics

TotalTime

Definition: Complete end-to-end request duration.

Includes: DNS + TCP + TLS + Sending + Waiting (upload + server + network) + Receiving

Safe Use Cases:

  • Measure overall user-perceived latency.
  • Compare releases or environments (baseline vs new version).
  • SLA monitoring (e.g., TotalTime.P95 < 500ms).

TimeToFirstByte (TTFB)

Definition: Time from the start of the request until the first response byte arrives.

Includes: DNS + TCP + TLS + Sending (over the network) + server processing + network latency

Safe Use Cases:

  • Good indicator of "initial responsiveness."
  • Compare server + network combined latency across regions.
  • Detect delays before any response is returned.

Not Safe To Assume: That TTFB equals backend processing time.

WaitingTime

Definition: TTFB - DNS time - TCP time - TLS time - upload time

What It Actually Includes:

  • Server processing time
  • Network latency for the first byte to return

Safe Use Cases (When Used Carefully):

  • Compare the same endpoint under stable conditions:
    • Same request payload size
    • Same client/network
    • Same region
  • If TCP/TLS/Sending times stay stable and WaitingTime increases, the backend is likely slower, though this is not guaranteed.

Not Safe To Claim: "WaitingTime is pure server time." It also includes network latency for the first response byte.
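The subtraction in the definition above can be written out directly. A small sketch with invented timing values, in milliseconds:

```python
def waiting_time(ttfb, dns, tcp, tls, upload):
    """WaitingTime = TTFB - DNS - TCP - TLS - upload (all in milliseconds)."""
    return ttfb - dns - tcp - tls - upload

# Invented example: 180 ms TTFB with 5 ms DNS, 20 ms TCP, 30 ms TLS, 5 ms upload.
# The remaining 120 ms is server processing plus first-byte network latency --
# the two cannot be separated from this number alone.
print(waiting_time(180, 5, 20, 30, 5))  # 120
```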

ServerTime (New)

Definition: Actual server-side processing time, extracted directly from the response header (e.g., Server-Timing).

How It Works: When enabled, LPS reads a designated response header that your server populates with the actual processing duration. This provides the true server processing time without network latency.

Supported Header Formats:

  • W3C Server-Timing: Server-Timing: db;dur=53.2, app;dur=47.2 (parses dur= values)
  • Plain milliseconds: X-Response-Time: 150 or X-Response-Time: 150ms
  • Seconds: X-Runtime: 0.150 (converted to milliseconds)

Safe Use Cases:

  • Accurately measure backend processing time independent of network conditions
  • Compare server performance across different deployments
  • Identify whether latency issues are server-side or network-related
  • SLA monitoring for pure backend performance (e.g., ServerTime.P95 < 200ms)

Prerequisites:

  • Your server must include a timing header in responses
  • Enable server time collection via command line (see HTTP Client Configuration)

Example: If your API returns Server-Timing: total;dur=45.3, LPS will record 45.3ms as the ServerTime metric, giving you visibility into actual backend performance separate from network overhead.
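The three header formats above can be sketched as a small parser. This is an illustration only, not the tool's actual implementation; in particular, summing multiple dur= values (rather than selecting one entry) is an assumption here:

```python
import re

def parse_server_time(header_name: str, value: str) -> float:
    """Parse a timing header into milliseconds, per the three formats listed above."""
    name = header_name.lower()
    if name == "server-timing":
        # W3C Server-Timing, e.g. "db;dur=53.2, app;dur=47.2":
        # summing all dur= values is an assumption for this sketch.
        return sum(float(m) for m in re.findall(r"dur=([\d.]+)", value))
    if name == "x-runtime":
        # Seconds -> milliseconds
        return float(value) * 1000.0
    # Plain milliseconds, with an optional "ms" suffix
    return float(value.rstrip("ms").strip())

print(parse_server_time("Server-Timing", "db;dur=53.2, app;dur=47.2"))  # ~100.4
print(parse_server_time("X-Response-Time", "150ms"))                    # 150.0
print(parse_server_time("X-Runtime", "0.150"))                          # 150.0
```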

TCPHandshakeTime

Definition: Time spent performing the TCP 3-way handshake.

Safe Use Cases:

  • A good indicator of network round-trip latency, since the handshake takes roughly one full round trip before any data can be sent.

TLSHandshakeTime

Definition: Time for SSL/TLS negotiation and certificate validation.

Safe Use Cases:

  • Compare HTTPS overhead vs HTTP.
  • Verify TLS session resumption:
    • P50 ≈ 0 ms → resumption working
  • Detect slow certificate validation (chain/OCSP issues).

SendingTime

Definition: Time to write the request body into the local send buffer. Does not represent network upload time.

ReceivingTime

Definition: Time to read and parse the response body from the local receive buffer. Does not represent network download time.

Final Note

Avoid misleading assumptions such as:

  • "WaitingTime is pure server time"
  • "TTFB only measures backend time"
  • "ReceivingTime equals network bandwidth"

Each counter is accurate only within its proper context, and combining them gives a reliable performance picture.

Response Breakdown Metrics

  • HttpStatusCode: The status code of HTTP responses (e.g., 200 for success).
  • HttpStatusReason: The reason phrase for the status code (e.g., OK).
  • Count: The number of responses for each status code.

Connection Metrics

  • RequestsRate: Rate of requests per second.
  • RequestsRatePerCoolDownPeriod: Rate of requests per cooldown period.
  • RequestsCount: Total number of requests sent.
  • ActiveRequestsCount: Number of currently active requests.
  • SuccessfulRequestCount: Number of successful requests.
  • FailedRequestsCount: Number of failed requests, based on the defined failure criteria.

Data Transmission Metrics

  • DataSent: Total amount of data sent (in bytes).
  • DataReceived: Total amount of data received (in bytes).
  • AverageDataSent: Average amount of data sent per request (in bytes).
  • AverageDataReceived: Average amount of data received per request (in bytes).
  • AverageDataSentPerSecond: Average data sent per second (in bytes per second).
  • AverageDataReceivedPerSecond: Average data received per second (in bytes per second).
  • AverageBytesPerSecond: Total average data throughput (in bytes per second).
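As a quick illustration, the per-request and per-second averages follow from the totals. The values below are invented, and the exact aggregation LPS applies (e.g., which duration it divides by) is an assumption of this sketch:

```python
# Invented totals for a 60-second test with 5,000 requests.
data_sent = 10_485_760      # bytes sent, total
data_received = 52_428_800  # bytes received, total
requests = 5_000
duration_s = 60.0

print(data_sent / requests)                      # AverageDataSent: 2097.152 bytes/request
print(data_received / duration_s)                # AverageDataReceivedPerSecond: ~873813.33 B/s
print((data_sent + data_received) / duration_s)  # AverageBytesPerSecond: 1048576.0 B/s
```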

Save Reports

The LPS dashboard allows you to download your dashboard results as a PDF report for future analysis.

Configuring the Dashboard

The dashboard can be configured through the config/lpsSettings.json file under the LPSDashboardConfiguration section. Below is an example configuration:

"LPSDashboardConfiguration": {
  "BuiltInDashboard": true,
  "Port": 8009,
  "RefreshRate": 5
}

Configuration Options

  • BuiltInDashboard: Enables (true) or disables (false) the built-in dashboard.
  • Port: Specifies the port on which the dashboard runs (default: 8009).
  • RefreshRate: Determines how often the dashboard refreshes (in seconds).

Disabling the Dashboard

The built-in dashboard can be disabled without affecting the REST API. This is useful for scenarios like integrating with DevOps pipelines or running tests in environments where a dashboard UI is not required.

Example:

"LPSDashboardConfiguration": {
  "BuiltInDashboard": false,
  "Port": 8009,
  "RefreshRate": 5
}

The LPS Dashboard is an integral tool for visualizing and analyzing test performance metrics, offering both flexibility and ease of use for testers and developers.
