Distributed Load Testing on Kubernetes: JMeter, k6, and Gatling — All Three, Done Right
Most distributed load testing solutions on Kubernetes support one tool. Here's why that's a problem, and how to run JMeter, k6, and Gatling distributed tests natively on K8s.
Mark
Performance Testing Expert
When teams move their infrastructure to Kubernetes, their load testing setup rarely keeps pace. The test scripts stay on a developer’s laptop, or run from a single CI runner that maxes out at a few hundred virtual users, at which point the runner, not the system under test, is the bottleneck.
The answer is distributed load testing — spreading the load generation across multiple pods so you can scale to thousands of concurrent users without any single machine being the limit. The problem is that most guidance, tools, and Helm charts you’ll find online cover exactly one tool. Usually k6.
If your team runs JMeter, you’re largely on your own. If you run Gatling, you’re in even rarer territory.
This post covers how each tool’s distributed model actually works, why a unified approach fails, and how to deploy all three correctly on Kubernetes.
Why distributed load testing matters
A single load generator has a ceiling. Once CPU, memory, or network throughput is saturated on the runner, you’re not measuring your application’s limits — you’re measuring the load generator’s. For serious capacity testing — proving a system can handle 10,000 concurrent users, or validating autoscaling behaviour under sustained load — you need to spread the work.
Distributed testing also enables geographic distribution (generate load from multiple regions simultaneously) and avoids the client-side TLS/TCP overhead that can distort results at high concurrency on a single machine.
The fundamental problem: three tools, three distributed models
JMeter, k6, and Gatling each have a completely different approach to distribution. This is not a quirk — it reflects each tool’s underlying architecture. Understanding the model is essential to deploying it correctly.
JMeter: RMI controller-worker
JMeter uses Java RMI (Remote Method Invocation) for distributed testing. A controller node coordinates the test and a set of worker nodes execute it. The controller connects to workers over two ports: 1099 (RMI registry) and 50000 (RMI data channel).
The critical constraint: the controller connects to the workers, not the other way around. Workers must be resolvable by DNS name or IP from the controller at startup. Workers must also be able to reach the controller for result streaming — this is a bidirectional communication channel.
This has direct implications for Kubernetes deployment:
- Workers need stable, predictable network identities — a `StatefulSet` with a headless service is the right primitive, not a `Deployment`
- Pod ordinal names (`jmeter-worker-0`, `jmeter-worker-1`) give you the DNS names you need
- The controller must wait for all workers to be ready before starting — an init container doing DNS resolution handles this
- The `-R` flag passed to the controller lists all worker hosts — this must be generated dynamically from the StatefulSet replicas
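To illustrate the last point, the `-R` host list can be derived purely from the replica count and the StatefulSet naming convention. This is a sketch, not the chart's actual entrypoint; the StatefulSet name (`jmeter-worker`), headless service (`jmeter-workers`), and namespace (`perf-testing`) are assumed for the example.

```shell
# Build the comma-separated -R host list from StatefulSet pod ordinals.
# Assumed names: StatefulSet "jmeter-worker", headless service
# "jmeter-workers", namespace "perf-testing".
WORKER_COUNT=5
HOSTS=""
for i in $(seq 0 $((WORKER_COUNT - 1))); do
  HOSTS="${HOSTS:+${HOSTS},}jmeter-worker-${i}.jmeter-workers.perf-testing.svc.cluster.local"
done
echo "$HOSTS"
# The controller would then be launched along the lines of:
#   jmeter -n -t test-plan.jmx -R "$HOSTS"
```

Because the list is computed from the replica count, scaling the StatefulSet only requires regenerating this string, not editing any manifests.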
k6: stateless segment runners
k6’s distributed model is fundamentally different. There is no coordinator. Each k6 runner is stateless and executes an independent slice of the total virtual user load. If you have 3,000 VUs and three runners, each runs 1,000 VUs independently.
Coordination happens through configuration, not communication. Each runner is told its index (RUNNER_INDEX) and the total runner count (RUNNERS_TOTAL). The script uses this to calculate its share:
const vuShare = Math.ceil(__ENV.VU_COUNT / __ENV.RUNNERS_TOTAL);
Because runners are stateless and interchangeable, a Kubernetes Deployment is the correct primitive. Runners can be scaled up or down without any cluster reconfiguration. Metrics stream independently to Prometheus or InfluxDB.
This simplicity is k6’s biggest advantage for Kubernetes. There’s no choreography, no init containers waiting for peers, no bidirectional communication. You deploy pods and they run.
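The share arithmetic is simple enough to sketch outside the script. The JavaScript above applies `Math.ceil` on every runner, which can slightly over-allocate when the VU count doesn't divide evenly; the variant below gives the last runner the remainder so the totals match exactly. This is an illustration of the arithmetic, not the chart's actual entrypoint.

```shell
# Per-runner VU share, mirroring the Math.ceil logic from the k6 script.
# With 3000 VUs and 3 runners, every runner gets 1000.
VU_COUNT=3000
RUNNERS_TOTAL=3
RUNNER_INDEX=2                       # injected per pod in the real chart
SHARE=$(( (VU_COUNT + RUNNERS_TOTAL - 1) / RUNNERS_TOTAL ))  # integer ceil
if [ "$RUNNER_INDEX" -eq $(( RUNNERS_TOTAL - 1 )) ]; then
  # Last runner takes the remainder so the shares sum to VU_COUNT exactly.
  SHARE=$(( VU_COUNT - SHARE * (RUNNERS_TOTAL - 1) ))
fi
echo "$SHARE"
```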
Gatling: Akka actor coordination
Gatling’s distributed model uses Akka remoting. A master node coordinates workers over a cluster formed using Akka’s actor system. The complication: Akka requires IP addresses, not DNS names, for cluster membership.
This makes Kubernetes deployment non-trivial. A StatefulSet gives you stable DNS (gatling-worker-0.gatling-workers), but Gatling cannot use these names directly — it needs the actual pod IPs. An init container must run kubectl get pods to discover worker IPs and write them to a shared volume that the master reads at startup.
Because the init container needs to query the Kubernetes API, it requires RBAC permissions — a ServiceAccount, Role, and RoleBinding scoped to get and list on pods.
Workers also need a dedicated port (2552) open for Akka remoting, which requires explicit NetworkPolicy rules.
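The IP-discovery step can be sketched as follows. The label selector, file path, and the exact `kubectl` invocation here are assumptions for illustration; the heredoc stands in for live cluster output so the transformation itself can be shown end to end.

```shell
# Simulated IP discovery. In a real init container the list would come from
# something like:
#   kubectl get pods -l app=gatling-worker \
#     -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'
cat > /tmp/worker-ips.txt <<'EOF'
10.1.2.3
10.1.2.4
EOF

# Turn the IP list into a comma-separated seed list on the Akka
# remoting port (2552), for the master to read at startup.
SEEDS=$(awk '{printf "%s%s:2552", sep, $0; sep=","}' /tmp/worker-ips.txt)
echo "$SEEDS"
```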
Why a single unified chart fails
Some teams try to build one Helm chart that supports all three tools through configuration flags. This always results in a chart that makes bad tradeoffs for every tool.
JMeter needs a StatefulSet for workers and a Job for the controller. k6 needs a Deployment. Gatling needs a StatefulSet for workers with Akka ports, a Job for the master, and RBAC resources. These aren’t configuration details — they’re fundamental to how each tool operates.
A chart that uses Deployment for everything will break JMeter’s RMI discovery. A chart that creates RBAC for everything will add unnecessary permissions to k6 runners. A chart that uses StatefulSet for k6 adds state where none is needed.
The right approach is three separate charts, each using the K8s primitives that match the tool’s distributed model.
| Tool | Worker Workload | Controller Workload | Coordination |
|---|---|---|---|
| JMeter | StatefulSet | Job | Java RMI (ports 1099, 50000) |
| k6 | Deployment | — (all runners equal) | None — config only |
| Gatling | StatefulSet | Job | Akka remoting (port 2552) |
Deploying with Helm
perf-distributed-testing provides production-ready Helm charts for all three tools, published as OCI artifacts to GHCR. The charts handle the tricky parts: init containers, DNS generation, RBAC, NetworkPolicy, non-root security contexts, and multi-arch Docker images.
Prerequisites
# Kubernetes 1.24+ and Helm 3.10+
kubectl create namespace perf-testing
# Get your free license key at:
# martkos-it.co.uk/store/perf-distributed-testing-download
# Then generate a registry token and create the pull secret:
TOKEN=$(curl -s https://updates.martkos-it.co.uk/api/v1/registry-token \
-H "X-License-Key: YOUR_LICENSE_KEY" \
-H "X-Product: perf-distributed-testing" | jq -r .token)
kubectl create secret docker-registry martkos-registry \
--docker-server=registry.martkos-it.co.uk \
--docker-username=license \
--docker-password="$TOKEN" \
--namespace perf-testing
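One failure mode worth guarding against: if the token request fails, `jq -r .token` prints `null` (or nothing), and the pull secret gets created with a useless password. A small check between the two commands catches this early; `check_token` is a hypothetical helper, not part of the product.

```shell
# Returns success only for a non-empty token that isn't jq's "null".
check_token() {
  [ -n "$1" ] && [ "$1" != "null" ]
}
# Usage, after the curl | jq step above and before creating the secret:
#   check_token "$TOKEN" || { echo "token request failed" >&2; exit 1; }
```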
JMeter distributed test
# Load your test plan
kubectl create configmap my-test-plan \
--from-file=test-plan.jmx=./test.jmx \
--namespace perf-testing
helm install my-jmeter-test \
oci://ghcr.io/markslilley/charts/perf-jmeter \
--version 1.0.0 \
--namespace perf-testing \
--set test.configMapName=my-test-plan \
--set worker.replicas=5 \
--set global.imagePullSecrets[0].name=martkos-registry
Five worker pods spin up as a StatefulSet. An init container on the controller waits for all five to resolve via DNS before submitting the test plan via RMI. Workers stream results back as the test runs.
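The readiness wait the init container performs amounts to polling DNS until each worker's headless-service record appears. A minimal sketch, assuming the pod and service names used earlier; `getent` stands in for whatever resolver the real image uses.

```shell
# Block until a hostname resolves; the init container would call this
# once per worker replica before the controller starts the test.
wait_for_host() {
  until getent hosts "$1" >/dev/null 2>&1; do
    sleep 2
  done
}
# In the init container, roughly:
#   for i in 0 1 2 3 4; do
#     wait_for_host "jmeter-worker-${i}.jmeter-workers.perf-testing.svc.cluster.local"
#   done
wait_for_host localhost && echo "resolvable"
```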
k6 distributed test
kubectl create configmap my-k6-script \
--from-file=script.js=./test.js \
--namespace perf-testing
helm install my-k6-test \
oci://ghcr.io/markslilley/charts/perf-k6 \
--version 1.0.0 \
--namespace perf-testing \
--set test.configMapName=my-k6-script \
--set runner.replicas=3 \
--set test.vuCount=3000 \
--set global.imagePullSecrets[0].name=martkos-registry
Each of the three runners gets RUNNER_INDEX and RUNNERS_TOTAL injected as environment variables and runs 1,000 VUs independently.
Gatling distributed test
kubectl create configmap my-gatling-sim \
--from-file=MySimulation.scala=./MySimulation.scala \
--namespace perf-testing
helm install my-gatling-test \
oci://ghcr.io/markslilley/charts/perf-gatling \
--version 1.0.0 \
--namespace perf-testing \
--set test.configMapName=my-gatling-sim \
--set test.simulationClass=MySimulation \
--set worker.replicas=4 \
--set global.imagePullSecrets[0].name=martkos-registry
The chart creates a ServiceAccount with pod get/list permissions. An init container discovers worker pod IPs via kubectl get pods and writes them to a shared volume. The master reads this file to form the Akka cluster.
Scaling
All three charts scale by changing a single Helm value:
helm upgrade my-jmeter-test oci://ghcr.io/markslilley/charts/perf-jmeter \
--version 1.0.0 \
--reuse-values \
--set worker.replicas=10
There are no license-enforced caps on pods, workers, or runners. Add nodes to your cluster and scale the replicas.
Security
All three charts ship with production-grade defaults:
- Non-root containers — all images run as UID 1000
- Read-only root filesystem — writable paths explicitly mounted as volumes
- NetworkPolicy — egress scoped to the system under test; inter-pod communication restricted to required ports only
- RBAC — Gatling’s `kubectl` access is scoped to pod `get`/`list` within the namespace; JMeter and k6 require no cluster permissions
When to use which tool
The distributed model isn’t the only factor in tool choice, but it does affect operational complexity on Kubernetes:
Choose k6 if you want the simplest Kubernetes deployment. Stateless runners, no init containers, no RBAC. Ideal for teams already using JavaScript and CI-first workflows.
Choose JMeter if you have an existing JMX library and need to reuse those scripts at scale. The StatefulSet + RMI model is more complex but well-understood, and JMeter’s GUI for script authoring remains unmatched for complex correlation scenarios.
Choose Gatling if your team is Scala/JVM-native and values the type safety and IDE support of the Gatling DSL. The Akka coordination model adds setup complexity but Gatling’s HTML reports and simulation assertions are best-in-class.
Getting started
The Helm charts are publicly accessible on GHCR — no credentials required to pull them. A free license key (email registration) gates access to the pre-built multi-arch Docker images from the private registry. You can also build images from source under Apache 2.0.
Get free access at martkos-it.co.uk/store/perf-distributed-testing
Enterprise licenses (£1,000/year) add priority email support with next business day SLA, architecture guidance, and Helm values review.