A place to chat about load testing and answer questions.


How do I run a really large number of concurrent browsers in a single flood?

October 22, 2020 at 6:22pm
I have a simple Flood Element-based load test. My company's website uses a lot of JavaScript, and from small-scale Flood-hosted tests it looks like hardware limitations cause performance to degrade somewhere between 5 and 10 browsers per node.
How would I set up a test with 5,000 concurrent browsers? The hosted hardware appears to limit me to a single node, so the Flood-hosted option is completely out of the question, yes? If I set up the AWS integration to deploy a grid in our own VPC, it looks like I am limited to 10 nodes, so that's 50 to 100 browsers per grid before performance starts to suffer from overloading the nodes. Is adding additional grids the way to scale arbitrarily? Can I set up 100 grids and use them all in a single flood to get to 5,000 concurrent browsers? Thanks!
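For what it's worth, the back-of-envelope math above can be sketched out explicitly. The figures below (10 browsers per node before degradation, 10 nodes per grid) are the assumptions from this post, not Flood-published limits:

```typescript
// Capacity planning sketch using the assumed figures above.
const targetBrowsers = 5000;
const browsersPerNode = 10;  // observed ceiling before performance degrades
const nodesPerGrid = 10;     // assumed per-grid node limit

const browsersPerGrid = browsersPerNode * nodesPerGrid; // 100 browsers/grid
const gridsNeeded = Math.ceil(targetBrowsers / browsersPerGrid);

console.log(`${gridsNeeded} grids of ${nodesPerGrid} nodes each`); // 50 grids
```

So at 10 browsers per node, reaching 5,000 concurrent browsers would take on the order of 50 grids; squeezing more browsers onto each node reduces that count proportionally.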

November 16, 2020 at 2:27am
Hi (gakeller83) - there are a few ways to get the most concurrency per node with Element. First, check the stepDelay and actionDelay values in your script. These should be as realistic as possible (8-10 seconds): the lower they are, the more resources each simulated user consumes on the node.
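As a rough illustration, pacing is set in the script's exported settings object. This is a minimal sketch assuming the @flood/element API, with a placeholder URL; check the exact setting names against the Element docs for your version:

```typescript
import { step, TestSettings } from '@flood/element'

// Realistic pacing keeps per-node resource usage down,
// which lets you fit more concurrent browsers on each node.
export const settings: TestSettings = {
  stepDelay: 10,  // seconds of think time between steps
  actionDelay: 8, // seconds between individual page actions
}

export default () => {
  step('Visit homepage', async browser => {
    await browser.visit('https://example.com') // placeholder URL
  })
}
```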
With Flood and Element you can potentially ramp up to tens of thousands of concurrent browsers. As you mentioned, it's best to first establish how many users each node can handle, then scale up from there.
We have account limits for brand-new accounts (starting at 2 nodes) - you can request an increase to these limits so that you can run your larger tests.
With hosted, you can integrate your existing AWS account to run larger-spec nodes; the only difference is that you would pay AWS directly for the cost of those nodes. The upside is that you could fit more users per node than on the standard m5.xlarge nodes we use by default.