Performance Journey

There is rarely an occasion when you have all of the required information to undertake a performance test. Be methodical, understand the requirements and design a scenario to test the application.

In this post I have created a fictitious scenario and will walk through the process I would undertake to test the new APIs.

Scenario

An existing web application is to be extended with 10 new APIs providing additional information from the customer database. Each API will be called twice during the web journey and the existing SLAs still need to be adhered to.

  • 10 x New APIs.

  • Small set of test data available from functional testing.

  • Swagger used for API creation/documentation.

  • No defined NFRs.

  • Existing volumes known. API calls to be two per web request.

  • Web application supports 500 concurrent users with a constant throughput of 100 tps.

  • 1 week to the go-live date.

Gap Analysis

When initiating the engagement it is important to understand the scenario and identify any areas where there are gaps. To do this you need to work with the business, stakeholders, business analysts and other testers.

Post gap analysis I discover:-

  • Missing NFRs.

  • Insufficient test data for performance test.

  • No documented volumetrics.

Requirements Gathering

Building on the work undertaken in the gap analysis, the next step is to build up the requirements. Where existing artefacts such as SLAs are available, these can be used to understand how to test the application under test. Where new components are introduced, determine what behaviour the business expects.

Post requirements gathering the following information is available:-

  • Using existing SLAs as a basis to understand the performance requirements for the APIs.

    • SLA = each page must respond within 5 seconds.

    • Average response time per page is 3.5 seconds.

    • 99th percentile response time per page is 4 seconds.

    • So each API call should respond in well under a second: with the 99th percentile page time already at 4 seconds, roughly 1 second of headroom remains to be shared across the two additional API calls per page, i.e. around 0.5 seconds per call if they are made sequentially.

  • Since there are two API calls per web request, we expect a throughput of 200 tps across the 10 new APIs in addition to the existing 100 tps. So the overall infrastructure must support 300 tps.
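
The figures above can be sanity-checked with some simple arithmetic. The short sketch below assumes the two API calls within a page are made sequentially:

  # Quick sanity check of the derived targets (figures taken from the bullets above).
  WEB_TPS = 100           # existing web application throughput
  API_CALLS_PER_PAGE = 2  # each web request makes two API calls
  PAGE_SLA_S = 5.0        # each page must respond within 5 seconds
  PAGE_P99_S = 4.0        # current 99th percentile page response time

  api_tps = WEB_TPS * API_CALLS_PER_PAGE    # 200 tps across the 10 new APIs
  total_tps = WEB_TPS + api_tps             # 300 tps for the overall infrastructure
  per_call_budget_s = (PAGE_SLA_S - PAGE_P99_S) / API_CALLS_PER_PAGE  # ~0.5 s per API call

  print(api_tps, total_tps, per_call_budget_s)  # 200 300 0.5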

Test Asset Creation

Building on the requirements we can begin to build the test assets. In this case JMeter scripts can be created along with the test data. If any data already exists, it may help in understanding the data syntax, which can then be built upon.

  • JMeter test plan covering the 10 new API calls.

    • Each API to be tested in isolation up to 200 tps.

    • All 10 APIs to be tested in a combined scenario up to 200 tps.

  • JMeter test plan covering the existing web application requests.

    • Each web application request to be tested up to 100 tps.

    • Sub-calls out to the APIs should subsequently generate an additional 200 tps.

  • Test data for each API to be created to satisfy 200 tps for a duration of 1 hour and 8 hours.

    • 720,000 rows of data required. (1 hour)

    • 5,760,000 rows of data required. (8 hours)

  • Test data for each web application request to be created to satisfy 100 tps for a duration of 1 hour and 8 hours.

    • 360,000 rows of data required. (1 hour)

    • 2,880,000 rows of data required. (8 hours)
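
The row counts above are simply throughput multiplied by duration (for example, 200 tps × 3,600 seconds = 720,000 rows). A minimal sketch of how this data could be generated for a JMeter CSV Data Set Config is shown below; the customer_id column and ID format are illustrative assumptions, not the real schema, which would come from the functional test data:

  # Sketch only: generate unique flat-file rows for a JMeter CSV Data Set Config.
  # The customer_id column and ID format are illustrative assumptions.
  import csv

  def generate_test_data(path, tps, duration_s):
      rows = tps * duration_s  # e.g. 200 tps * 3,600 s = 720,000 rows
      with open(path, "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["customer_id"])  # header row read by the CSV Data Set Config
          for i in range(rows):
              writer.writerow([f"CUST{i:08d}"])  # one unique customer per request

  generate_test_data("api_load_1h.csv", tps=200, duration_s=3600)      # 720,000 rows
  generate_test_data("api_soak_8h.csv", tps=200, duration_s=8 * 3600)  # 5,760,000 rows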

Test Execution

When approaching test execution, don’t feel that you have to execute every possible test type. Instead, understand the requirements and design your scenarios to provide sufficient coverage.

Since the APIs are new in this case, it makes sense to test them in isolation to understand the footprint of each, then look to run a combined series of tests. A soak test is a good method of identifying memory leaks, which is important to understand in new components.

Scenarios:-

  • Load

    • 1 hour isolated API call test

    • 1 hour combined API test

  • Soak

    • 8 hour combined API soak test.

    • 8 hour soak test of the combined web application requests.
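
One common way to pace these scenarios at the target rates is JMeter's Constant Throughput Timer, which is configured in samples per minute rather than per second; this is an assumption about the implementation rather than something mandated by the scenario. The sketch below also estimates a minimum thread count using Little's law and the sub-second budget derived earlier:

  # Sketch: sizing inputs for a JMeter Thread Group paced by a Constant Throughput Timer.
  # The timer value is expressed in samples per minute; the 0.5 s figure is the
  # assumed per-call budget derived in the requirements section.
  import math

  def timer_samples_per_minute(tps):
      return tps * 60

  def min_threads(tps, expected_response_s):
      # Little's law: concurrent requests ~= arrival rate x time in system
      return math.ceil(tps * expected_response_s)

  print(timer_samples_per_minute(200))  # 12000 samples/min for the combined API tests
  print(timer_samples_per_minute(100))  # 6000 samples/min for the web application requests
  print(min_threads(200, 0.5))          # at least ~100 threads to sustain 200 tps at ~0.5 s/call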

Reporting

Reporting is the critical part of performance testing. It’s important to understand the data provided by the test tool. It’s also important to analyse the hardware behaviour and system and application logs. By looking across all areas of information it is possible to fully understand the application under test.

Don’t just communicate the final picture, communicate throughout test execution with the developers and business.

Areas to include in reporting:-

  • Grafana

    • JMeter Backend Listener used to send test results to InfluxDB for display in Grafana.

    • Grafana graphs to be included in report.

    • Data analysed within Grafana.

  • JMeter

    • HTML report generated.

    • JTL file generated.

    • Data analysed within HTML Report.

  • API

    • Isolated API results reported.

    • Combined API results reported.

  • Web application

    • Combined web application requests reported.

    • Average response time per web application request analysed against the SLA.

    • Soak test run analysed to check for memory leaks.

    • Percentile graph analysed to understand response time stability.

    • Error rate reported.

    • Error messages analysed.

    • Hardware monitoring analysed and reported.

    • Defects reported.
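
Much of the analysis above starts from the JTL file. As a rough, assumed starting point, the sketch below summarises a CSV-format JTL against the 5 second SLA using the default label, elapsed and success columns; it would need adjusting to match the actual listener configuration:

  # Sketch: summarise a JMeter JTL (CSV) results file against the 5 second SLA.
  # Assumes the default CSV columns: label, elapsed (ms) and success.
  import csv
  from statistics import mean, quantiles

  def summarise(jtl_path, sla_ms=5000):
      timings, errors, total = {}, 0, 0
      with open(jtl_path, newline="") as f:
          for row in csv.DictReader(f):
              total += 1
              if row["success"].lower() != "true":
                  errors += 1
              timings.setdefault(row["label"], []).append(int(row["elapsed"]))
      for label, elapsed in sorted(timings.items()):
          p99 = quantiles(elapsed, n=100)[98]  # 99th percentile response time
          verdict = "breach" if p99 > sla_ms else "ok"
          print(f"{label}: avg={mean(elapsed):.0f} ms, p99={p99:.0f} ms [{verdict}]")
      print(f"error rate: {errors / total:.2%}")

  summarise("results.jtl")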

Final Words

Don’t be afraid to jump into the performance testing journey at any point. Ensure, though, that you understand the requirements. Then design a test scenario that will stimulate the application sufficiently to analyse its behaviour when placed under load. Then dig deep and understand what happened.
