So you measured and tuned your system as described in Part I and Part II of this blog post series. Now you want to get some figures on how many end users your system will be able to serve. To do that, you define “scenarios” that are typical for what your users do.
One such user scenario could, for example, be:

  • log in
  • do something
  • log out
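
In code, such a scenario boils down to a small function that walks through those steps and measures how long it takes. Here is a minimal Python sketch; the base URL, the endpoints and the credentials are made-up placeholders, your real scenario will of course talk to your own system:

```python
import time

import requests  # assumed HTTP client; use whatever protocol your system speaks

BASE_URL = "http://localhost:8080"  # hypothetical test target


def run_scenario(user_id):
    """One simulated user: log in, do something, log out.
    Returns how long the whole scenario took, in seconds."""
    session = requests.Session()
    start = time.monotonic()
    # hypothetical endpoints -- replace with your application's real ones
    session.post(f"{BASE_URL}/login",
                 data={"user": f"user{user_id}", "password": "secret"})
    session.get(f"{BASE_URL}/do-something")
    session.post(f"{BASE_URL}/logout")
    return time.monotonic() - start
```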

Since your users won’t nicely queue up and wait for other users to finish their business, the pace at which you need to test your system is “starting n scenarios every second”. Many scenarios, each simulating a different user, may be running in parallel. If a scenario takes 10 seconds to finish and you start one per second, your system needs to be able to process 10 users in parallel. If it can’t handle that, you will see more than 10 sessions running in parallel, and the time required to complete each scenario will grow. Server resource usage will climb and climb, until the system finally bursts into flames.
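
Here is a minimal sketch of such a pacing driver in Python. It starts one scenario per second in its own thread, stubs the scenario out with a 10-second sleep, and reports how many scenarios are in flight at any moment; against a real system you would plug in the run_scenario function from the sketch above instead of the stub and watch whether the parallelism and the durations stay flat or start to climb:

```python
import threading
import time


def run_scenario(user_id):
    """Stand-in for the real scenario above: pretend it always takes 10 seconds."""
    time.sleep(10)
    return 10.0


active = 0
durations = []
lock = threading.Lock()


def worker(user_id):
    global active
    with lock:
        active += 1
        print(f"user {user_id} starts, {active} scenarios now running in parallel")
    duration = run_scenario(user_id)
    with lock:
        active -= 1
        durations.append(duration)


threads = []
for user_id in range(60):              # keep the load up for one minute
    t = threading.Thread(target=worker, args=(user_id,))
    t.start()
    threads.append(t)
    time.sleep(1)                      # start one new scenario every second

for t in threads:
    t.join()

print(f"average scenario duration: {sum(durations) / len(durations):.1f} s")
```

With a 10-second scenario and one start per second, the number of parallel scenarios settles at 10, which is exactly the steady state described above; if the system can’t keep up, that number and the average duration keep growing instead.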