Velocity Conference – Day 1
My first day at Velocity was long, but fun. I breathed a sigh of relief when my luggage finally arrived…10 hours after I did.
I attended part of a Load Testing workshop early in the afternoon that raised some interesting topics:
- Why are steady ramps bad? They showed some examples of how this approach can result in the wrong conclusions about system capacity. I agreed heartily – I’ve blogged on the merits of a stepped ramp in load tests previously.
- Abandonment rates – This is a feature I’d like to get into Load Tester sooner rather than later. A basic implementation is not too difficult, though it is not yet clear to me what abandonment information should be collected in the metrics and included in the reports. During a later discussion, it was suggested that abandonment rates should be configurable not only on a per-page basis, but also conditionally on the performance of the previous page, reflecting the fact that more users abandon a site when page performance is poor.
- Arrival Rate vs Concurrent Users – Keynote is pushing the idea that load testing tools should model arrival rate instead of concurrent users. The justification was that at a given user load, if server performance declines, the page rate served also declines; they asserted that the page rate should not decline, because users keep arriving anyway (driven by marketing efforts, for example). I’m not sure I buy this proposition. First, when performance declines, the abandonment rates mentioned above DO reduce the number of users on the site in most cases. Second, you cannot force the server to maintain a particular page rate, you can only generate requests at a particular rate…the server is going to service the requests as fast as it can and no faster. I’m still thinking about this one, but other than a good sales pitch, I don’t see the value in this approach.
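To make the stepped-ramp point from the first bullet concrete, here is a minimal sketch (with made-up numbers) of the kind of schedule I mean: hold the user count flat at each step so response times can stabilize before the next increase.

```python
def stepped_ramp(start_users, step_users, step_duration_s, num_steps):
    """Return (time_offset_s, target_users) pairs for a stepped ramp.

    Unlike a steady ramp, the load is held constant at each plateau,
    so each step gives a clean measurement at a known user count.
    """
    return [(i * step_duration_s, start_users + i * step_users)
            for i in range(num_steps)]

# Example: start at 25 users and add 25 more every 5 minutes.
for t, users in stepped_ramp(25, 25, 300, 4):
    print(f"t={t:>4}s  hold at {users} users")
```

A steady ramp, by contrast, never holds still, so you can rarely say which user count produced a given response time.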
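A first cut at the conditional abandonment idea from the second bullet might look like the following; the threshold and rates here are placeholders I made up, not measured values.

```python
import random

def abandon_probability(prev_page_time_s, threshold_s=4.0, base_rate=0.02,
                        rate_per_extra_s=0.10, max_rate=0.80):
    """Probability that a virtual user abandons before the next page.

    Below a tolerance threshold, only a small base rate applies; past
    it, the probability grows with how slow the previous page was.
    All defaults are illustrative, not measurements.
    """
    if prev_page_time_s <= threshold_s:
        return base_rate
    extra_s = prev_page_time_s - threshold_s
    return min(max_rate, base_rate + rate_per_extra_s * extra_s)

def user_abandons(prev_page_time_s):
    """Roll the dice for one virtual user after one page load."""
    return random.random() < abandon_probability(prev_page_time_s)
```

Reporting could then count abandonments per page alongside the usual response-time metrics.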
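The arrival-rate vs concurrent-users debate in the third bullet is really the open vs closed workload-model distinction, and a back-of-the-envelope calculation shows why the two behave differently when the server slows down (all numbers hypothetical):

```python
def open_model_requests(arrival_rate_per_s, duration_s):
    """Open model: arrivals are clock-driven, so the offered request
    rate does not change when the server slows down."""
    return arrival_rate_per_s * duration_s

def closed_model_requests(num_users, service_time_s, duration_s):
    """Closed model: each virtual user waits for a response before
    sending again, so slower responses directly throttle the load."""
    return num_users * (duration_s / service_time_s)

# 60-second window: a fast server (0.5 s/page) vs a degraded one (2 s/page).
fast_open   = open_model_requests(10, 60)         # 600 requests offered
slow_open   = open_model_requests(10, 60)         # still 600 requests offered
fast_closed = closed_model_requests(10, 0.5, 60)  # 1200 requests completed
slow_closed = closed_model_requests(10, 2.0, 60)  # 300 requests completed
```

The open model keeps offering the same rate to a struggling server, which is exactly Keynote's point; my objection is that offered rate and served rate are not the same thing, since the server still serves pages only as fast as it can.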
One presenter gave a list of where they most frequently see performance problems in load testing engagements:
- 40% front-end / client side inefficiencies
- 35% application / back-end performance limitations
- 10% 3rd-party tags (click-tracking, etc)
- 5% Content Delivery Network (CDN)
- 5% Infrastructure / hosting
- 5% misc / everything else
In one of the more interesting sessions, Metrics That Matter, nine key performance metrics were presented that should be used to track the ongoing performance of a system. Many of these are directly applicable to load testing:
- availability
- outages
- avg download time – geometric mean
- client vs server time
- variability – 85% and 95%
- geographic variability
- hourly variability (20% non-peak -> peak)
- 3rd party quality (50 ms)
- size / element count / domains
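The geometric mean in the download-time metric above is worth spelling out; here is a small sketch (sample times invented) of why it is preferred over the plain average:

```python
import math

def geometric_mean(samples):
    """Geometric mean of page download times (seconds).

    Compared with the arithmetic mean, it is much less skewed
    by a handful of very slow outlier samples.
    """
    if not samples:
        raise ValueError("need at least one sample")
    return math.exp(sum(math.log(s) for s in samples) / len(samples))

times_s = [1.2, 1.4, 1.3, 9.0]       # one slow outlier (made-up data)
arith = sum(times_s) / len(times_s)  # ~3.23 s, dragged up by the outlier
geo = geometric_mean(times_s)        # ~2.11 s, closer to a typical page
```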
The speaker also talked about how with the increases in broadband penetration, round-trip latency is becoming a much larger performance problem than page size. These steps were suggested as the most effective for improving client-side performance problems:
- reduce round-trips (remove, pack/combine files)
- reduce the modularity of JS/CSS files
- lessen the impact of single-threaded JS loading (put JS at the end of the page)
- rein in 3rd-party tracking tags
- cache management
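For the first two steps (fewer round-trips, less modularity), a build-time step that concatenates files is often all it takes. A minimal sketch, with hypothetical file names:

```python
from pathlib import Path

def combine_files(source_paths, out_path):
    """Concatenate several JS (or CSS) files into one, trading source
    modularity for fewer HTTP round-trips at page-load time."""
    combined = "\n".join(Path(p).read_text() for p in source_paths)
    Path(out_path).write_text(combined)
    return out_path

# e.g. combine_files(["a.js", "b.js", "c.js"], "site-combined.js")
```

The combined script can then be referenced once at the end of the page, which also covers the single-threaded JS step above.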
I look forward to reporting on Day 2!
Chris