GitHub Actions for Performance Testing: Automated Load Tests in CI/CD
Set up automated performance testing in GitHub Actions using k6 and JMeter, with result storage and performance regression detection.
Mark
Performance Testing Expert
Integrating performance testing into CI/CD pipelines catches regressions before they reach production. GitHub Actions provides a straightforward way to run load tests automatically on every deployment or on a schedule. Here’s how to set it up with both k6 and JMeter.
Basic k6 Workflow
k6 is particularly well-suited for GitHub Actions due to its single binary distribution and low resource requirements.
Create .github/workflows/performance-test.yml:
name: Performance Tests

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 6 * * *' # Daily at 6 AM UTC
  workflow_dispatch: # Manual trigger

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install k6
        run: |
          sudo gpg -k
          sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
          echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
          sudo apt-get update
          sudo apt-get install k6

      - name: Run load test
        run: k6 run tests/load-test.js --out json=results.json

      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: k6-results
          path: results.json
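Artifacts uploaded this way are retained for 90 days by default, which is enough for comparing against recent runs; options for longer-term storage are covered below.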
Using the Official k6 Action
The Grafana team maintains an official action that simplifies setup:
jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run k6 load test
        uses: grafana/k6-action@v0.3.0
        with:
          filename: tests/load-test.js
          flags: --out json=results.json

      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: k6-results
          path: results.json
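The flags input is passed straight through to k6 run, so anything you would type on the command line works here; the action handles installation for you.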
k6 Test Script with Thresholds
Create tests/load-test.js with pass/fail thresholds:
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 20 }, // ramp up to 20 virtual users
    { duration: '3m', target: 20 }, // hold steady state
    { duration: '1m', target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% under 500ms
    http_req_failed: ['rate<0.01'],   // <1% failure rate
    checks: ['rate>0.99'],            // 99% checks pass
  },
};

export default function () {
  const res = http.get(`${__ENV.TARGET_URL}/api/health`);
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time OK': (r) => r.timings.duration < 500,
  });
}
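When any threshold fails, k6 exits with a non-zero status, so the workflow step fails automatically; no extra scripting is needed to turn a slow run into a red build.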
Pass the target URL as an environment variable:
- name: Run k6 load test
  uses: grafana/k6-action@v0.3.0
  with:
    filename: tests/load-test.js
  env:
    TARGET_URL: https://staging.example.com
JMeter in GitHub Actions
JMeter requires more setup but offers broader protocol support.
name: JMeter Performance Tests

on:
  workflow_dispatch:
  schedule:
    - cron: '0 2 * * 1' # Weekly on Monday at 2 AM

jobs:
  jmeter-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'

      - name: Download JMeter
        run: |
          wget -q https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.5.tgz
          tar -xzf apache-jmeter-5.5.tgz

      - name: Run JMeter test
        run: |
          ./apache-jmeter-5.5/bin/jmeter \
            -n \
            -t tests/load-test.jmx \
            -l results.jtl \
            -e \
            -o report

      - name: Upload JTL results
        uses: actions/upload-artifact@v3
        with:
          name: jmeter-jtl
          path: results.jtl

      - name: Upload HTML report
        uses: actions/upload-artifact@v3
        with:
          name: jmeter-report
          path: report
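Unlike k6 with thresholds, JMeter in non-GUI mode exits zero even when samples fail, so the job stays green unless you add an explicit quality gate. Here is a minimal sketch assuming the default CSV JTL format, where the eighth column is the per-sample success flag (a real CSV parser is safer if your labels or response messages can contain commas):

- name: Fail on sample errors
  run: |
    # Default CSV JTL: column 8 ("success") is true/false per sample
    errors=$(awk -F',' 'NR>1 && $8=="false"' results.jtl | wc -l)
    total=$(( $(wc -l < results.jtl) - 1 ))
    echo "Failed samples: $errors of $total"
    [ "$errors" -eq 0 ] || exit 1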
Storing Results for Trend Analysis
To track performance over time, store results in a persistent location:
- name: Store results in S3
  uses: jakejarvis/s3-sync-action@master
  with:
    args: --acl public-read
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    SOURCE_DIR: 'results'
    DEST_DIR: 'performance/${{ github.sha }}'
Or, for simpler setups, keep everything in GitHub's own artifact storage and pull the previous run's results with a third-party action (the built-in download-artifact only sees artifacts from the current run):
- name: Download previous results
  uses: dawidd6/action-download-artifact@v2
  with:
    workflow: performance-test.yml
    name: k6-results
    path: previous-results
  continue-on-error: true

- name: Compare results
  run: |
    if [ -f previous-results/results.json ]; then
      python scripts/compare-results.py previous-results/results.json results.json
    fi
Performance Regression Detection
Create a comparison script that fails the build on regressions:
#!/usr/bin/env python3
# scripts/compare-results.py
import json
import sys

def load_p95(filepath):
    # k6's --out json writes newline-delimited JSON: one object per line.
    # Each request sample is a "Point" entry, so collect every
    # http_req_duration sample and compute the 95th percentile ourselves.
    values = []
    with open(filepath) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            if entry.get('type') == 'Point' and entry.get('metric') == 'http_req_duration':
                values.append(entry['data']['value'])
    if not values:
        return None
    values.sort()
    return values[min(int(len(values) * 0.95), len(values) - 1)]

previous = load_p95(sys.argv[1])
current = load_p95(sys.argv[2])

if previous and current:
    regression = ((current - previous) / previous) * 100
    print(f"Previous p95: {previous:.2f}ms")
    print(f"Current p95: {current:.2f}ms")
    print(f"Change: {regression:+.1f}%")
    if regression > 10:  # 10% regression threshold
        print("ERROR: Performance regression detected!")
        sys.exit(1)
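You can sanity-check the comparison locally before wiring it into the workflow:

python scripts/compare-results.py previous-results/results.json results.json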
Testing Against Different Environments
Use matrix builds to test multiple environments:
jobs:
  load-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [staging, production]
        include:
          - environment: staging
            url: https://staging.example.com
            users: 50
          - environment: production
            url: https://api.example.com
            users: 10
    steps:
      - uses: actions/checkout@v3

      - name: Run k6 load test
        uses: grafana/k6-action@v0.3.0
        with:
          filename: tests/load-test.js
          flags: --vus ${{ matrix.users }} --duration 5m
        env:
          TARGET_URL: ${{ matrix.url }}
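One caveat: k6 gives CLI flags precedence over script options, so the --vus and --duration flags above replace the staged profile defined in load-test.js for these matrix runs.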
Posting Results to Pull Requests
Add test results as PR comments (make sure the workflow also triggers on pull_request, or this step will never run). The step reads a results-summary.txt file; a sketch for generating that file follows the block:
- name: Post results to PR
  if: github.event_name == 'pull_request'
  uses: actions/github-script@v6
  with:
    script: |
      const fs = require('fs');
      const results = fs.readFileSync('results-summary.txt', 'utf8');
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: `## Performance Test Results\n\`\`\`\n${results}\n\`\`\``
      });
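One way to produce results-summary.txt, sketched here with k6's handleSummary hook and the textSummary helper from the k6-summary jslib (version pin is an assumption; check jslib.k6.io for the current release), is to have the test script write the summary itself:

// In tests/load-test.js: write the end-of-test summary to a file
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

export function handleSummary(data) {
  return {
    'results-summary.txt': textSummary(data, { indent: ' ', enableColors: false }),
    stdout: textSummary(data, { indent: ' ', enableColors: true }), // keep the console summary too
  };
}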
Resource Limits and Timeouts
GitHub Actions runners have limited resources. For larger tests:
jobs:
  load-test:
    runs-on: ubuntu-latest
    timeout-minutes: 30 # Prevent runaway tests
    steps:
      - name: Run load test
        run: k6 run --vus 100 --duration 10m tests/load-test.js
For tests requiring more resources, consider self-hosted runners or cloud-based testing services.
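Switching to a self-hosted runner is a one-line change; the labels below are placeholders that must match runners you have registered:

jobs:
  load-test:
    runs-on: [self-hosted, linux] # labels of your registered runner
    timeout-minutes: 60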
Secrets and Sensitive Endpoints
Never hardcode credentials. Use GitHub Secrets:
env:
  API_KEY: ${{ secrets.LOAD_TEST_API_KEY }}
  TARGET_URL: ${{ secrets.STAGING_URL }}
And reference them in your test script:
// k6 exposes variables from the step's env block via __ENV
const apiKey = __ENV.API_KEY;
const headers = { Authorization: `Bearer ${apiKey}` };
const res = http.get(`${__ENV.TARGET_URL}/api/health`, { headers });
Automated performance testing in CI/CD provides early warning of regressions and builds confidence in deployments. Start with simple tests and thresholds, then expand coverage as you learn what metrics matter most for your application.