How User Ramping Works – Part Two - Web Performance

In part 1 of How User Ramping Works, we discussed how to set up a user ramp configuration for a test.  Once you’ve done that, recorded and replayed your test cases to perfection, loaded your datasets, configured your load engines, and set up your server agents, what actually happens when you push the big green button?

The first thing Load Tester does is go through a setup sequence that configures the load engines for the coming test.  This can take a while, especially if you’ve configured large datasets or large numbers of files to be uploaded during the test – at least some of that data has to be transferred to every load engine to provide data for the virtual users.  How, precisely, that happens is important because it can affect your test.

First, if you have only one load engine, all the data will be uploaded to that load engine.  If you have multiple load engines, what gets uploaded where depends on how the Sharable option is set for each dataset.  If the Sharable option is enabled (checked), complete copies of the dataset will be uploaded to all the load engines and used independently by each load engine.  As noted in the help, enabling the Sharable option means that multiple virtual users can and will use the same dataset row at the same time.  For example, if your dataset consists of usernames and passwords, you can expect the same account to be logged in by several different virtual users at the same time.  Additionally, when multiple load engines are in use, this means that the same user account can and likely will be logged in from multiple IP addresses as well.

If the Sharable option is not enabled (unchecked), then Load Tester splits the dataset up as evenly as possible across the load engines.  For example, if you have three load engines and a 3,000 row dataset, each load engine will get 1,000 rows of data.  This split is sequential, not random – load engine #1 will get rows 1-1000, load engine #2 will get rows 1001-2000, and so on.  This tries to ensure that dataset rows are not in use by different virtual users on different load engines at the same time.
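The non-sharable split is easy to sketch.  The helper below is a hypothetical illustration, not Load Tester’s actual code; it divides a row list sequentially, as evenly as possible, across the engines:

```python
def split_dataset(rows, num_engines):
    """Divide rows sequentially (not randomly) across engines, as evenly
    as possible -- a sketch of how a non-sharable dataset is split."""
    base, extra = divmod(len(rows), num_engines)
    chunks, start = [], 0
    for i in range(num_engines):
        size = base + (1 if i < extra else 0)  # early engines absorb any remainder
        chunks.append(rows[start:start + size])
        start += size
    return chunks

# 3,000 rows across 3 engines: each engine gets 1,000 sequential rows
rows = list(range(1, 3001))
chunks = split_dataset(rows, 3)
print([len(c) for c in chunks])        # [1000, 1000, 1000]
print(chunks[0][0], chunks[0][-1])     # 1 1000
print(chunks[1][0], chunks[1][-1])     # 1001 2000
```

Because the split is sequential rather than random, engine #1’s rows can never collide with engine #2’s.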

Once the upload and configuration process is complete, Load Tester instructs the load engines to begin the test.  The first group of virtual users is added randomly throughout the first ramp period.  If you’ve only got one load engine, all the users are added to that engine, provided it has enough CPU and memory to support them.  If you have multiple load engines, how the users are distributed depends on an algorithm that tries to optimize the balance of users across the load engines.  Simply put, the controller tries to add the new user to the load engine with the highest estimated capacity.  The capacity calculation is based on the number of current users on that load engine versus the CPU/memory usage, and is recalculated every few seconds.  For example, if a load engine has 100 users and is at 50% CPU and memory usage, the estimated capacity will be approximately 200 users.
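Using the numbers from that example, a simplified version of the capacity estimate might look like this.  The real controller logic is internal to Load Tester; this sketch just scales the current user count by the busier of the two resource fractions:

```python
def estimated_capacity(current_users, cpu_usage, mem_usage):
    """Rough capacity estimate: scale the current user count by the more
    constrained resource.  A hypothetical simplification of the
    controller's periodic calculation."""
    bottleneck = max(cpu_usage, mem_usage)
    if bottleneck <= 0:
        return float("inf")  # no measurable load yet
    return current_users / bottleneck

# 100 users at 50% CPU and 50% memory -> roughly 200 users of capacity
print(estimated_capacity(100, 0.50, 0.50))   # 200.0

# The controller then favors the engine with the highest estimate.
engines = {"engine1": (100, 0.50, 0.50), "engine2": (100, 0.90, 0.40)}
best = max(engines, key=lambda name: estimated_capacity(*engines[name]))
print(best)                                  # engine1
```

Note how an engine running hot on either CPU or memory drops to the bottom of the list even if its other resource is idle.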

This can result in a situation where the load engines become unbalanced, with one or two load engines supporting most of the virtual users.  Sometimes this is what you want, such as when you have one large and powerful load engine and a few smaller supporting load engines.  However, this can (and often does) cause a number of problems.  The most common and most confusing problem caused by unbalanced load engines is virtual users running out of dataset rows.

For example, assume that you have a 200-row dataset of usernames and passwords which is used for logins.  That dataset is split evenly across two load engines, #1 and #2, so each load engine gets 100 rows.  The test is configured to run up to 200 virtual users.  However, during the test, the load engines become unbalanced, and load engine #1 reaches 100 virtual users first … and then tries to add the 101st virtual user.  That virtual user will throw an error and terminate, because all 100 rows of the dataset on load engine #1 are in use and it cannot obtain a row for use in its test case.
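The failure mode is easy to reproduce in miniature.  This sketch (hypothetical class and method names, not Load Tester internals) gives one engine a 100-row share and checks rows out exclusively:

```python
class EngineDatasetShare:
    """One engine's exclusive share of a non-sharable dataset."""
    def __init__(self, rows):
        self.available = list(rows)

    def checkout(self):
        """Hand an unused row to a virtual user, or fail if none remain."""
        if not self.available:
            raise RuntimeError("no dataset rows left on this engine")
        return self.available.pop()

engine1 = EngineDatasetShare(range(1, 101))   # engine #1's 100-row share
for _ in range(100):
    engine1.checkout()                        # 100 virtual users each take a row

# The 101st user on engine #1 fails, even though engine #2's
# 100 rows may be sitting completely idle.
try:
    engine1.checkout()
except RuntimeError as err:
    print(err)                                # no dataset rows left on this engine
```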

How can we stop that problem from occurring?  The best way is to have more dataset rows than you think you’ll need.  If you have two load engines as above, a dataset with 400 rows will make sure that, even if one load engine takes all the load, you’ll still have enough dataset rows to service the virtual users.  Failing that, you can use identical or similar load engines, which avoids hardware-based imbalances; you can use our Instant Load Engine bootable CD to equalize the environment on the load engines and avoid interference from other OS operations; and, perhaps most importantly, you can set the number of starting users on the test to be a multiple of the number of available load engines – this will help make sure that each load engine starts on the same footing.  It also helps to have a larger number of starting users – 10 per load engine is a good rule of thumb.
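The sizing advice boils down to simple arithmetic: an even split gives each engine rows ÷ engines rows, so sizing the dataset at max users × engines means any single engine could absorb the entire load without exhausting its share.  This is a rule-of-thumb sketch, not an official formula:

```python
def recommended_dataset_rows(max_users, num_engines):
    """Rows needed so one engine's share alone can cover every
    virtual user (rule-of-thumb, not an official Load Tester formula)."""
    return max_users * num_engines

# 200 virtual users on 2 engines: 400 rows, so each engine's
# 200-row share can handle the full user count by itself.
rows_needed = recommended_dataset_rows(200, 2)
print(rows_needed)        # 400
print(rows_needed // 2)   # 200 rows per engine
```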

During the test, Load Tester is continually instructing the load engines to add users during ramp periods.  However, you may notice that sometimes users do not get added, and that ramp periods go by without users being added.  This can happen for a variety of reasons, but the most common cause is that you are using the local load engine and the engine is memory-constrained.  You can address this either by giving the controller (and the local engine) more memory by editing the webperformance.ini file in the Load Tester installation directory, or by using external load engines that have more resources available.  You will not receive errors in this case, so if you’re having trouble reaching the target user count, check your memory settings first.

Finally, when the test ends, you’ll notice that it doesn’t actually stop.  This is because we do not want to leave the target site in an unusual state, for example with large numbers of active sessions that are suddenly cut off, yet remain in the session tracking system until they time out.  When the test ends, each virtual user will attempt to finish the current test case.  Once that’s finished, the virtual user will terminate.  This causes a ramp-down period with a minimum length equal to the duration of the longest test case, plus any delays incurred by the load test.  If you don’t want to wait, you can hit the red button in the Status view to immediately terminate all virtual users, or you can configure that behavior to be the default in the Preferences.
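As a quick sanity check when planning total test duration, the minimum ramp-down works out to simple addition; the numbers below are hypothetical:

```python
# Each virtual user finishes its current test case before terminating,
# so the worst case is a user that just started the longest test case.
longest_test_case_secs = 120   # hypothetical duration of the longest test case
incurred_delays_secs = 15      # hypothetical think-time/pacing delays

min_ramp_down_secs = longest_test_case_secs + incurred_delays_secs
print(min_ramp_down_secs)      # 135
```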

Happy testing!

Matt Drew


Copyright © 2024 Web Performance, Inc.
