Running Distributed Locust on Azure Container Instances
Locust is an open source load testing tool that can be run inside a container.
Below is how I got Locust up and running on Azure Container Instances.
Note: I prefer head/workers, controller/nodes, etc., but in this doc I’ve used master/slave for clarity and consistency with the Locust docs.
Design
- Single master container
- 3x slave containers
- The slaves communicate with the master over its public IP & DNS name
- Azure File share to upload locustfiles, which are accessed by containers
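A quick way to confirm this layout actually exists once everything is created (this uses the $RESOURCE_GROUP variable defined in the next section):

# Everything created below lives in one resource group
az resource list -g $RESOURCE_GROUP -o table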
Basics
Create a resource group & a storage account, then upload a sample locustfile.
export RESOURCE_GROUP=locust
export AZURE_STORAGE_ACCOUNT=testlocust
# Resource group to hold everything
az group create -n $RESOURCE_GROUP -l eastus
# Azure Files for locust load test definitions
az storage account create -n $AZURE_STORAGE_ACCOUNT -g $RESOURCE_GROUP --sku Standard_LRS
# Grab the storage key first so the `az storage` data commands can authenticate
export AZURE_STORAGE_KEY=$(az storage account keys list -n $AZURE_STORAGE_ACCOUNT -g $RESOURCE_GROUP --query '[0].value' -o tsv)
az storage share create -n locust
# Upload a definition
az storage file upload -s locust --source ./scenarios/generic_user_event_grid.py
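To double-check that the upload landed in the share before pointing containers at it:

# Confirm the locustfile is in the share
az storage file list -s locust -o table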
Creating the master
The master is exposed with ports & a public IP so that the slaves can communicate with it. The Locust docs say it needs the same host/locustfile as the slaves, even though it’s not doing the load generation ¯\_(ツ)_/¯
# Create a master on ACI
az container create -g $RESOURCE_GROUP -n locust-master --image christianbladescb/locustio -e EVENTGRID_KEY=nsX5fV23FlOUBNNk309/8Kms/NnR33tDn2nFifIedKM= --ports 8089 5557 5558 --ip-address public --dns-name-label locust-master --azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT --azure-file-volume-account-key $AZURE_STORAGE_KEY --azure-file-volume-share-name locust --azure-file-volume-mount-path /locust --command-line '/usr/bin/locust --host https://noel-locust.westus2-1.eventgrid.azure.net --master -f generic_user_event_grid.py'
# Wait for this to become fully available
# URL is based on http://<dns-name-label>.<region>.azurecontainer.io
# http://locust-master.eastus.azurecontainer.io:8089
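Instead of eyeballing the URL, you can also pull the FQDN and current state straight from the CLI (these are standard ACI container group properties, nothing Locust-specific):

# Grab the master's FQDN & state once it's up
az container show -g $RESOURCE_GROUP -n locust-master --query "{fqdn:ipAddress.fqdn, state:instanceView.state}" -o table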
Create slaves
These containers will do the actual load generation and report successes/failures back to the master.
az container create -g $RESOURCE_GROUP -n locust-slave1 --image christianbladescb/locustio -e EVENTGRID_KEY=nsX5fV23FlOUBNNk309/8Kms/NnR33tDn2nFifIedKM= --azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT --azure-file-volume-account-key $AZURE_STORAGE_KEY --azure-file-volume-share-name locust --azure-file-volume-mount-path /locust --command-line '/usr/bin/locust --host https://noel-locust.westus2-1.eventgrid.azure.net --slave --master-host locust-master.eastus.azurecontainer.io -f generic_user_event_grid.py'
az container create -g $RESOURCE_GROUP -n locust-slave2 --image christianbladescb/locustio -e EVENTGRID_KEY=nsX5fV23FlOUBNNk309/8Kms/NnR33tDn2nFifIedKM= --azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT --azure-file-volume-account-key $AZURE_STORAGE_KEY --azure-file-volume-share-name locust --azure-file-volume-mount-path /locust --command-line '/usr/bin/locust --host https://noel-locust.westus2-1.eventgrid.azure.net --slave --master-host locust-master.eastus.azurecontainer.io -f generic_user_event_grid.py'
az container create -g $RESOURCE_GROUP -n locust-slave3 --image christianbladescb/locustio -e EVENTGRID_KEY=nsX5fV23FlOUBNNk309/8Kms/NnR33tDn2nFifIedKM= --azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT --azure-file-volume-account-key $AZURE_STORAGE_KEY --azure-file-volume-share-name locust --azure-file-volume-mount-path /locust --command-line '/usr/bin/locust --host https://noel-locust.westus2-1.eventgrid.azure.net --slave --master-host locust-master.eastus.azurecontainer.io -f generic_user_event_grid.py'
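The three slave commands only differ by name, so the same thing can be written as a loop (identical flags to the commands above, just parameterised on the container name):

# Same as the three commands above, just in a loop
for i in 1 2 3; do
  az container create -g $RESOURCE_GROUP -n locust-slave$i --image christianbladescb/locustio -e EVENTGRID_KEY=nsX5fV23FlOUBNNk309/8Kms/NnR33tDn2nFifIedKM= --azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT --azure-file-volume-account-key $AZURE_STORAGE_KEY --azure-file-volume-share-name locust --azure-file-volume-mount-path /locust --command-line '/usr/bin/locust --host https://noel-locust.westus2-1.eventgrid.azure.net --slave --master-host locust-master.eastus.azurecontainer.io -f generic_user_event_grid.py'
done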
# Make sure that slaves are available
az container list -o table
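If a slave never shows up in the master UI, its container logs will usually say why (DNS, mount, or connection issues):

# Check a slave's logs if it isn't connecting
az container logs -g $RESOURCE_GROUP -n locust-slave1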
Run a test
Check in the UI on the master to see that all 3 slaves are connected, and then start a job!
Unsolved Problems
I’m not sure whether it’s a Locust problem or something else, but stopping a test sometimes works and sometimes doesn’t. It also seems to get stuck in Hatching, and won’t let me start new tests that work properly until I recreate the master & all the slaves.
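Recreating everything means deleting the container groups first; a minimal sketch of that teardown half:

# Delete the master & slaves so they can be recreated
for c in locust-master locust-slave1 locust-slave2 locust-slave3; do
  az container delete -g $RESOURCE_GROUP -n $c --yes
done

It might also be worth trying az container restart on the stuck groups before a full recreate, though I haven’t verified that clears the stuck Hatching state.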
Next steps
Most of the steps here, minus the local file upload, could be converted to an ARM template, so that you could do the following to start a new load test:
az group deployment create -g myloadtest --template-file locust.json --parameters locustfile=test1.py host=http://example.com
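And since everything for a given test lives in one resource group, tearing it all down afterwards is a single command:

# Clean up the whole load test environment when finished
az group delete -n myloadtest --yes --no-wait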