It’s not only about speed
As we can see, measuring performance is not only about speed: it can dig up many other types of issues, including security problems. Asked about the causes of performance issues, our lead developer Nikola Spasojevic says they mostly come down to poorly written code which, as it accumulates more and more data over time, can slow down the whole system. He adds that fixing these issues is sometimes not feasible, because restructuring the foundation of the app demands a big block of time at a point when the whole team is busy, or because the client does not consider them as important as, for example, functional issues.
• Lacking a code review process is surely one of the biggest mistakes that can be made during development. The time frames predetermined for product implementation can also cause an app's performance to be put aside. In my opinion, once someone gets stung by bad architecture and app design, he dedicates more time to details and pays more attention to the app's foundation - said Nikola.
His team member, Nemanja Svorcan, mentioned that the biggest performance issue he has encountered was a slow database.
• The database is not optimised, and that is caused by bad DB architecture. It can be fixed by changing (optimising) the DB, or with a quick fix such as writing stored procedures in the DB. Since C# is a relatively fast programming language and everything is developed asynchronously, this DB issue can easily go unnoticed, but sometimes it surfaces - explains Nemanja.
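A common shape of the slow-database problem Nemanja describes is issuing one query per record instead of letting the database do the work. The sketch below is purely illustrative (the article contains no code); the table names and data are hypothetical, and SQLite stands in for a real DB server, where each extra round trip would cost far more.

```python
import sqlite3

# Hypothetical schema and data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Marko');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0);
""")

# Anti-pattern: one query per customer (the "N+1" problem) --
# every round trip to a real DB server adds latency.
def totals_n_plus_one(conn):
    totals = {}
    for cid, name in conn.execute("SELECT id, name FROM customers"):
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,),
        ).fetchone()
        totals[name] = row[0]
    return totals

# Fix: a single aggregated JOIN pushes the work into the database engine.
def totals_joined(conn):
    rows = conn.execute("""
        SELECT c.name, COALESCE(SUM(o.total), 0)
        FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id
    """)
    return dict(rows)

# Both return the same result; only the number of round trips differs.
assert totals_n_plus_one(conn) == totals_joined(conn)
```

Rewriting the query, like writing a stored procedure, is exactly the kind of quick fix Nemanja mentions: it buys time without restructuring the whole DB.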
Best Practices of performance testing
As for performance testing best practices, one of the most important is testing early. The team should not wait until the project winds down and then rush the tests: this kind of testing is not reserved for completed projects only.
Testing individual modules or units has its value too. Multiple performance tests should be conducted to ensure consistent findings and to determine metrics. Since an application often involves multiple systems (database, servers, services…), the individual units should be tested both separately and together. A single test will not tell developers everything they need to know.
Successful performance testing is a collection of repeated, smaller tests. Performance tests are best conducted in environments that are as close as possible to the production systems. Calculating averages will deliver actionable metrics, but there is also value in tracking outliers: those extreme measurements could reveal possible failures.
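The "repeated, smaller tests" idea can be sketched in a few lines: time an operation many times, report the average, and keep the outliers in view rather than discarding them. This is a minimal sketch, not a real load-testing tool; the workload and the outlier threshold are assumptions.

```python
import statistics
import time

def measure(operation, runs=50, outlier_factor=3.0):
    """Run `operation` repeatedly; return the mean latency and any outliers."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    # Keep extreme samples visible -- they may point at real failures
    # (GC pauses, lock contention, a slow query) rather than noise.
    outliers = [s for s in samples if abs(s - mean) > outlier_factor * stdev]
    return mean, outliers

# Stand-in workload; replace with a real request or query under test.
mean, outliers = measure(lambda: sum(range(10_000)))
print(f"mean={mean:.6f}s, outliers flagged: {len(outliers)}")
```

Reporting the outliers alongside the average is the point: an average alone would hide exactly the extreme measurements the text warns about.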
No single performance testing tool will do everything that is needed, and limited resources may restrict the choice even further. Research performance testing tools to find the right fit.
Performance testing fallacies
The biggest fallacy is definitely about when this kind of testing should start. “Performance testing is the last step in development.” Wrong! The earlier you implement testing in the SDLC, the easier it will be to address problems as they arise. Implementing solutions early will cost less than major fixes later.
The next fallacy is that adding more hardware fixes performance. More processors, servers or memory simply add to the cost while the problems remain.
Create realistic test scenarios; for example, don’t start your performance tests at zero load since that is an unrealistic situation.
One performance test scenario is NOT enough, because not every issue can be detected in a single scenario. Also take care with the set of users used in a test: if a given set experiences issues, that does not mean the issue occurs for all users (or in all tests).
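A realistic scenario usually ramps users up gradually instead of starting from zero load. Below is a toy sketch of that idea using threads; `send_request` is a stand-in for the real call under test, and the user counts and timings are made up.

```python
import threading
import time

results = []
lock = threading.Lock()

def send_request(user_id):
    """Stand-in for a real HTTP request or DB query under test."""
    time.sleep(0.01)  # simulated server response time
    with lock:
        results.append(user_id)

def ramp_up(total_users=10, ramp_seconds=1.0):
    """Start virtual users gradually rather than all at once from zero load."""
    threads = []
    for user in range(total_users):
        t = threading.Thread(target=send_request, args=(user,))
        t.start()
        threads.append(t)
        time.sleep(ramp_seconds / total_users)  # stagger user arrivals
    for t in threads:
        t.join()

ramp_up()
print(f"{len(results)} requests completed")
```

Real tools (JMeter, k6, Gatling and similar) express the same ramp-up idea declaratively; the sketch only shows why the scenario shape matters.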
While it is important to isolate functions in tests, the individual component test results do not add up to a system-wide assessment. As it may not be feasible to test all functionalities, a complete-as-possible performance test must be designed with awareness of what was not tested.
Performance tests are not run to find software bugs or defects. These tests have their own benchmarks and standards, and they provide the diagnostic information needed to eliminate bottlenecks.
Typical order of fixes is:
1. Improve current application design: algorithms, caching, DB calls, memory use
2. Upgrade hardware: RAM, CPU, network bandwidth
3. Upgrade software infrastructure: OS, web server, database
4. Upgrade system architecture: client-server to basic n-tier, basic n-tier to enterprise n-tier, software and hardware changes
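Step 1 above is usually the cheapest, and caching is a typical example of it: no new hardware, no new infrastructure, just less repeated work. The sketch below is illustrative only; the "slow DB call" is simulated with a sleep, and the function name is hypothetical.

```python
import functools
import time

# Caching a repeated, expensive lookup -- an "improve current application
# design" fix from step 1, needing no hardware or architecture change.
@functools.lru_cache(maxsize=256)
def product_details(product_id):
    time.sleep(0.05)  # stands in for a slow DB call or remote service
    return {"id": product_id, "name": f"product-{product_id}"}

start = time.perf_counter()
product_details(42)            # cold: pays the full DB cost
cold = time.perf_counter() - start

start = time.perf_counter()
product_details(42)            # warm: served from the in-process cache
warm = time.perf_counter() - start

assert warm < cold
```

Only once such design-level fixes are exhausted does it make sense to move down the list toward hardware, infrastructure, and finally architecture changes.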
At the core of performance testing is the art of imitating real user behaviour and activity and then evaluating how the application responds. Performance testing is one of the single biggest catalysts for significant change in architecture, code, hardware and environments. This kind of testing is a job that never ends; it should be an ongoing process. As the application or website grows, testers will need to adjust the tests to accommodate the larger user base.
The test engineer is forced to make a calculated guess about user behaviour based on use cases and requirements. Perfectly modelling how users interact with the application is a challenging task, but when you get it close to reality, you may uncover major issues that would otherwise have a serious impact on the application.
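That "calculated guess" is often expressed as a weighted usage model: each user action gets a probability, and simulated sessions draw from it. The actions and weights below are invented for illustration; in practice they would come from use cases, requirements, or production analytics.

```python
import random

# Hypothetical usage model -- these actions and weights are made up;
# real values would come from use cases or production traffic data.
USAGE_MODEL = {
    "browse_catalog": 0.60,
    "search": 0.25,
    "add_to_cart": 0.10,
    "checkout": 0.05,
}

def simulate_session(steps=20, rng=random):
    """Draw a sequence of user actions according to the usage model."""
    actions = list(USAGE_MODEL)
    weights = list(USAGE_MODEL.values())
    return rng.choices(actions, weights=weights, k=steps)

session = simulate_session()
print(session[:5])
```

Feeding such generated sessions into a load test makes the simulated traffic resemble real usage far more closely than hammering a single endpoint would.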